Story Points and Velocity in Ansible Projects with Scrum

Guideline for accurate story point estimation and velocity measurement in Scrum-based Ansible projects, focusing on business value from automation content.

Problem

In Scrum teams managing Ansible engineering projects, technical tasks and bugs are often treated as user stories and assigned story points. This distorts velocity metrics by including non-value-adding activities in the measurement of business value. For example, administrative or maintenance tasks get estimated like user stories, creating a false sense of productivity. As a result, velocity fails to accurately reflect the team’s ability to deliver automation content that benefits the organization, such as Ansible collections, roles, modules, inventory updates via GitOps, or improved documentation. Without a focus on business value, story points become ineffective, as they no longer represent meaningful progress toward organizational goals.

Another symptom of this misguided approach is that refinement sessions take a long time, yet Ansible-related topics are seldom discussed, because significant energy goes into refining and estimating technical tasks that add no business value. These tasks should not be a focus, except to recognize them as potentially “wasted” effort.

A further indicator is estimating in hours instead of Fibonacci numbers (e.g., treating four hours as one SP). This suggests that items labeled as user stories are not true user stories, making proper planning poker sessions with Fibonacci estimates impossible. It becomes hard to distinguish small, medium, and large user stories because the items are fundamentally different from actual value-adding stories.

Context

Ansible projects blend development, operations, and maintenance work. In a Scrum setup, teams use ticket types like user stories, bugs, and tasks. Refinement meetings are crucial for discussing and estimating all work types to ensure strong team understanding and planning. The core product in these projects is Ansible automation content, inventory, and documentation, which speed up lifecycle management, decouple operations, and automate admin tasks. Velocity should track progress in building this product, not unrelated technical chores or bug fixes, which can inflate metrics and create perverse incentives (e.g., more bugs seemingly increasing value).

This guideline targets teams new to automation. Automation often involves far more than automating lifecycle management (LCM) or maintenance, a point that is frequently overlooked. Key points include:

  • Shifting to DevOps, SRE, and an “automate-first” mindset, which is a significant change.
  • Adopting software engineering discipline with iterative methods like Scrum.
  • Struggles with new concepts, such as distinguishing user stories, epics, tasks, and bugs. Without clear separation, planning gets complicated, and velocity fails to show productivity gains as teams build experience.

In contexts like the Dutch government, where experienced Ansible engineers are scarce, teams are often transitioning and learning new skills. To maximize productivity:

  • Have Ansible engineering teams focus on engineering tasks, with separate operations teams handling operational tasks.
  • For example, LCM and maintenance using Ansible should be done by Ansible operators, not engineers.
  • Engineers deliver content, inventory, and documentation, while operations teams use them.
  • Engineers maintain only their Ansible development environment; all other environments are operations’ responsibility.

This focus facilitates learning, increases productivity and engineering capacity, and boosts velocity. As a result, the sprint board and backlog for Ansible engineers should not include work related to other environments, except in rare cases.

Solution

To measure velocity accurately and emphasize business value, adopt a strict approach to categorizing and estimating work for Ansible projects. Follow these steps:

  1. Define the Product Clearly: Identify the core product as Ansible automation content (collections, modules, roles), inventory managed via GitOps as the single source of truth, and supporting documentation. Any work that doesn’t directly modify or enhance these should not be a user story.

  2. Categorize Tickets Appropriately:

    • Use user stories only for items that deliver business value through changes to Ansible content, inventory, or documentation. Assign story points (SP) to these during refinement.
    • Use tasks for technical activities, administrative work, or support tasks (e.g., setting up pipelines, training, or bureaucracy) that don’t add to the core product.
    • Use bugs for fixes that maintain existing content but don’t expand it. Discuss bugs in refinement for planning impact, but do not assign SP.
  3. Refinement Process:

    • Include all ticket types (user stories, bugs, tasks) in refinement meetings to assess overall sprint capacity.
    • Assign SP only to qualifying user stories. For bugs and tasks, estimate effort in hours or discuss qualitatively to inform planning without inflating velocity.
  4. Planning and Velocity Tracking:

    • Base sprint planning on historical velocity from user stories only. Account for bugs and tasks by adjusting committed SP downward if they consume significant capacity.
    • Monitor velocity trends to reflect growing expertise (e.g., faster production of Ansible content over time).
    • Avoid steering solely by metrics; use common sense to evaluate if the work fits capacity, considering all elements.
  5. Retrospective Integration: Conduct retrospectives to review these practices, making implicit choices explicit (e.g., acknowledging that velocity measures a mix if not strictly defined).
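The categorization and velocity rules above can be sketched as a small script. This is a minimal illustration only, not tied to Jira or any particular tool; the ticket structure, field names, and values are assumptions:

```python
# Sketch: velocity counts story points (SP) from completed user stories only.
# Bugs and tasks are discussed in refinement but never contribute SP.
# Ticket fields ("type", "sp", "done") are hypothetical, not a tool's schema.

def sprint_velocity(tickets):
    """Sum the SP of completed user stories; bugs and tasks never count."""
    return sum(
        t.get("sp", 0)
        for t in tickets
        if t["type"] == "story" and t["done"]
    )

sprint = [
    {"type": "story", "sp": 5, "done": True},   # new Ansible role delivered
    {"type": "story", "sp": 3, "done": False},  # carried over, not counted
    {"type": "bug",   "done": True},            # fix: maintains value, no SP
    {"type": "task",  "done": False},           # pipeline work: no SP
]

print(sprint_velocity(sprint))  # prints 5
```

Because unfinished stories and all bugs/tasks contribute nothing, the resulting number reflects only delivered automation content, which is the point of the guideline.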

Benefits

  • Accurate Business Value Measurement: Velocity reflects true productivity in delivering Ansible automation, helping track improvements like increased content production as teams gain experience.
  • Better Planning and Capacity Management: By separating value-adding work from tasks/bugs, teams avoid overcommitting and gain clarity on impacts to sprints.
  • Reduced Perverse Incentives: Prevents scenarios where introducing more bugs or bureaucracy artificially boosts metrics, encouraging focus on quality and efficiency.
  • Improved Focus on Core Product: Ensures efforts prioritize Ansible content, inventory, and documentation, aligning with project goals.
  • Flexibility with Unfinished Tasks: Because technical tasks carry no SP, leaving them unfinished in a sprint is not a problem: they do not affect velocity. This is logical for tasks such as giving a workshop for knowledge transfer to external parties, which may depend on calendars and availability, or requesting a service account from an external party, which can cross sprint boundaries without affecting core metrics.

Alternatives

One alternative is assigning SP to all work items, including bugs and tasks, to simplify tracking. This is not recommended, as it mixes different types of value and distorts metrics. By default, tools like Jira do not allow SP assignment to tasks and bugs, which aligns with the practice recommended here. Customizing the tool (e.g., modifying screens) to enable it requires extra effort and is a contentious decision: it effectively encodes an organizational policy that de-emphasizes business value and encourages teams to adopt practices that undermine accurate velocity tracking. Another approach is to use Kanban for operations work alongside Scrum for development, but in blended projects the strict categorization above provides a balanced hybrid without splitting boards.

Examples and Implementation

In a setup for an Ansible project (e.g., using Jira or similar tools):

  • User Story Example: “As an operator, I want an Ansible role for deploying SSSD with Active Directory integration so that authentication is automated.” Assign SP based on complexity (e.g., 5 SP). This directly enhances Ansible content.

  • Task Example: “Investigate and document GitLab pipeline improvements for testing Ansible playbooks.” No SP; treat as a task that supports but doesn’t add to core content.

  • Bug Example: “Fix error in existing Ansible module for package installation.” Discuss in refinement for time impact, but no SP; it maintains value without creating new.

File Structure Suggestion:

  • Organize epics around Ansible content themes (e.g., “Inventory GitOps”, “Role Development”).
  • Use labels for flexible filtering, such as “rfr” (ready for refinement) or “rfs” (ready for sprint). Components could be used for categories like “ansible-collection”, “inventory”, or “execution-environment”, but labels suffice for most filtering needs. Consider using both if your Jira setup benefits from structured components.

Implementation Tip: During sprint planning, review a burndown chart focused on user story SP, while noting task/bug hours separately. This ensures visibility into all work without skewing value metrics. For new teams starting Ansible projects, introduce this guideline in the first retrospective to set expectations.
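The downward adjustment of committed SP described in step 4 of the Solution can be sketched as simple arithmetic. All numbers and the scaling approach below are illustrative assumptions, not a prescribed formula:

```python
# Sketch: reduce the SP commitment when tasks/bugs consume a significant
# share of sprint capacity. Numbers are hypothetical examples.

def committed_sp(historical_velocity, capacity_hours, task_bug_hours):
    """Scale the SP commitment by the capacity left after tasks and bugs."""
    remaining = max(capacity_hours - task_bug_hours, 0)
    return round(historical_velocity * remaining / capacity_hours)

# A team with a historical velocity of 20 SP and 80 hours of capacity,
# expecting 20 hours of task/bug work, would commit to roughly 15 SP.
print(committed_sp(20, 80, 20))  # prints 15
```

The exact mechanism matters less than the principle: tasks and bugs reduce what the team commits to, rather than inflating what the team claims to have delivered.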



Last modified September 22, 2025: move c2platform/c2/website C2-889 (519ee43)