Simple team metrics to assess improvements
Using actual performance data empowers people to assess the impact and value of the changes they make as they seek to improve how software is delivered.
We recently teamed up with Doug Dockery and colleagues from Rally (www.rallydev.com) for a presentation on essential agile metrics to assess the impact of changes made to software development activities at the team level. The presentation affirmed that simple metrics encompassing all team activities can provide valuable insights that allow teams to self-assess the impact of the changes they make. It was recommended that teams track a metric or two in each of the following categories:
- Productivity - How much work has your team completed within a period of time? For teams practicing scrum, this data is tracked as “Velocity” - the number of stories completed each sprint. Many teams at WWT have shifted to Kanban (rather than scrum), so this metric could be a simple running tally of how many stories are completed each week - for teams using Kanban, you can extract this data from the bottom line of your Cumulative Flow Diagram.
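That running weekly tally is easy to automate once you have a completion date per story. The sketch below (with made-up dates) groups finished stories by ISO week number; the dates and counts are illustrative, not from the article:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates for finished stories (illustrative data only)
completed = [
    date(2015, 3, 2), date(2015, 3, 3), date(2015, 3, 5),
    date(2015, 3, 9), date(2015, 3, 10), date(2015, 3, 12), date(2015, 3, 13),
]

# Group completions by ISO week number to get a weekly throughput tally
weekly_throughput = Counter(d.isocalendar()[1] for d in completed)

for week, count in sorted(weekly_throughput.items()):
    print(f"Week {week}: {count} stories completed")
```

The same numbers can be read straight off a Cumulative Flow Diagram's "done" band; this is just the spreadsheet-free version.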
- Predictability - Is your team able to deliver in accordance with set goals? This is a simple metric to track once you have some productivity data to look at - have a team discussion based on your knowledge of the stories in your backlog or ready queue, then set a team goal in terms of stories for an upcoming short period of time. For example, if your team set out to complete ten stories last week and finished only eight, the team’s “Percent Complete & Accurate” is 80% - use that data as input for your team retrospective. The retro would then discuss how to get the team’s output to 100% Complete & Accurate - whether by setting a smaller goal or by identifying and removing the impediments that are slowing things down.
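The arithmetic behind Percent Complete & Accurate is just completed-over-goal; a minimal helper (the function name is my own, not a Rally term) makes the article's ten-versus-eight example concrete:

```python
def percent_complete_accurate(goal: int, completed: int) -> float:
    """Percent Complete & Accurate: stories finished vs. the team's stated goal."""
    if goal <= 0:
        raise ValueError("goal must be a positive number of stories")
    return 100.0 * completed / goal

# The article's example: ten stories planned, eight finished
print(percent_complete_accurate(goal=10, completed=8))  # 80.0
```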
- Responsiveness - How long does it take your team to get something done? The best metric to track here is “cycle time”, which is how much time it takes your team to complete a story once you start working on it. Many teams find cycle time provides greater value when it is plotted visually on a control chart - a control chart has one graph point for each story depicting how long it took to complete. Tracking cycle time on a control chart is intended to promote team discussions about improvement, since it makes it easy to see stories that took longer than normal (they will jump right off the chart at you). Suppose most stories on a team take about one day to complete (a steady line on the control chart), then all of a sudden story #232 takes seven days - at your team retro, you look at the control chart, see #232, your team talks through the complex refactor to the data model that was needed that nobody knew about when the story was reviewed, and as a team, everyone “learns” how to break out stories better for future work.
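The control-chart conversation boils down to computing a cycle time per story and flagging the ones far above the team's typical pace. A sketch, using invented dates and the story number from the example above; the "3x the median" threshold is an arbitrary illustrative choice, not a prescribed rule:

```python
from datetime import date

# Hypothetical (started, finished) dates keyed by story id
stories = {
    230: (date(2015, 3, 2), date(2015, 3, 3)),
    231: (date(2015, 3, 3), date(2015, 3, 4)),
    232: (date(2015, 3, 4), date(2015, 3, 11)),  # the seven-day outlier
}

# Cycle time in days: finished minus started, per story
cycle_times = {sid: (done - start).days for sid, (start, done) in stories.items()}

# Flag stories well above the typical pace (threshold: 3x the median, chosen
# arbitrarily here - a real control chart would draw a control limit instead)
ordered = sorted(cycle_times.values())
median = ordered[len(ordered) // 2]
outliers = [sid for sid, t in cycle_times.items() if t > 3 * median]

print(outliers)  # story #232 stands out
```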
- Quality - Are we building a quality product that aligns with what our customers need? A metric to track here is “defect density”, the ratio of defects per feature in a product. The goal of this metric is to determine whether specific features in your application have more defects than others - if you find this, it could be due to code quality, insufficient test coverage, vague acceptance criteria, etc. “Defect density” data gives the team the insight to inspect the code and tests and then decide upon an action to reduce the defect density for that feature.
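Computing defect density is a matter of counting defect reports by the feature they were filed against. A minimal sketch with invented feature names and defect data, just to show the shape of the calculation:

```python
from collections import Counter

# Hypothetical defect reports, each tagged with the feature it was found in
defects = ["checkout", "checkout", "search", "checkout", "login"]
features = ["checkout", "search", "login", "reports"]

# Defects attributed to each feature (features with no reports count as zero)
counts = Counter(defects)
density = {f: counts.get(f, 0) for f in features}

# The feature with the highest density is the candidate for a closer look
worst = max(density, key=density.get)
print(density)
print(f"Inspect code and tests for: {worst}")
```

In practice these counts would come from your defect tracker's feature or component tags rather than a hand-built list, but the comparison across features is the same.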
Think about metrics for agile teams against the backdrop of the principles of the agile manifesto. Metrics should be simple things that enable self-management, and they should be implemented incrementally - perhaps you start with a few simple team metrics, as mentioned above, and then progress into metrics that support strategic business decisions, such as “failure demand” and the “cost of delay”. The important thing is to measure actual performance to empower people to assess the impact and value of changes made as they seek to improve how software is delivered. Thanks to our colleagues at Rally (www.rallydev.com) for sharing their time with us on their recent trip to St. Louis, MO.