I'd Rather Be Coding: Gathering Metrics

If you think about the questions you want to answer, you will likely focus on the right metrics to capture.

For a long time, I've been wanting to write a series of posts about those little activities that are a critical part of Agile and Lean, but somehow seem to slip by the wayside even on the most competent teams. This series is entitled "I'd Rather Be Coding," and recognizes that the team is trying to do their best but sometimes needs a reminder about what is important.

The fact that so few development teams want to gather metrics on what they are doing has always baffled me. Aren't these people the same ones who calculate the most profitable ways to get gold on World of Warcraft, destroy others at fantasy sports with their predictive algorithms, and optimize their route to work through careful tracking and Monte Carlo simulations? Why wouldn't they want to analyze every facet of how they perform as a team, to make sure they always make the best choices and don't waste time and effort?

The simple answer is that they'd "rather be coding." Gathering metrics takes thought and effort, and ultimately they've got deadlines and customer priorities to think about. Does the customer really want them to spend time on metrics, anyway? Wouldn't the customer be angry to see the team crunching numbers instead of adding that next new feature? The essential problem is this:

  1. No team is perfect. There is always room to improve.
  2. Improvement brings greater productivity, fewer errors, and/or more predictability (all aspects highly valued by the customer).
  3. Without baselines and measurements, there is no way to know whether or not improvement has actually taken place.

Without metrics, even the best team that genuinely wants to improve is left with "gut feelings" and "well, nothing bad has happened yet" to determine whether a change in their process, tools, or methods has actually added value. I would maintain that this is the real waste of time. If your team is feeling pain, makes a change to deal with that pain, but doesn't measure anything before and after the change to evaluate it, then they are effectively just trying a different rain dance every time the current one doesn't bring rain.

Deep down, teams know this. However, there are several "fear factors" involved with metrics:

  • How do we know what we are measuring is the right thing? What if we spend a lot of time gathering a measurement and it doesn't tell us the right information?
  • What if we start getting evaluated by the measurements we create?
  • What if the measurement tells us something we don't want to hear?

It's important for teams to follow a process when a measurement is constructed, to make sure that it is not just a waste of time. There are several important questions that need to be answered every time the team decides to gather a metric:

What question are we trying to answer with this measurement? Decide what it is you need to know. Is it whether your quality is slipping? Is it a concern for how long features take to develop? Is it slow response times the customer has observed? It's helpful to relate the problem back to a risk to your project, which usually comes down to either time (a deadline being missed) or money (a budget being exceeded). Getting down to the root cause of the issue helps clarify what needs to be observed.

How will this measurement be gathered? How will the team actually gather this data? Will it need to be done manually, or is there a tool that can do it for you? This is also the time to determine whether the measurement is actually worth doing; if you need to spend two weeks building a tracking system to be able to effectively obtain the data, perhaps the benefit of having it is not great enough to overcome that investment. In that case, try to find a simpler method that gathers lower-fidelity data that can still be useful.
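As a sketch of what "a tool can do it for you" might look like, here's a minimal Python script that derives a feature cycle-time metric from log lines. The log format, feature IDs, and timestamps are all invented for illustration; the point is that a few lines of parsing can often replace manual tracking.

```python
from datetime import datetime

# Hypothetical records from your tracker or CI system, one per feature:
# "<feature-id> <work-started> <work-finished>". Adapt to whatever your
# tooling actually emits.
LOG_LINES = [
    "FEAT-101 2024-03-01T09:00 2024-03-04T17:00",
    "FEAT-102 2024-03-02T10:30 2024-03-09T12:00",
]

def cycle_times_days(lines):
    """Return a map of feature-id -> elapsed days between start and finish."""
    result = {}
    for line in lines:
        feature, started, finished = line.split()
        t0 = datetime.fromisoformat(started)
        t1 = datetime.fromisoformat(finished)
        result[feature] = (t1 - t0).total_seconds() / 86400  # seconds per day
    return result
```

Even a crude script like this answers "how long do features take?" with real numbers instead of recollection, at a cost of minutes rather than weeks.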

How will the gathered data be stored? It's not enough to gather the data; in almost every case, what you care about is the trend of data, the way it changes, not the information on a particular day. How will you reliably store this information so that you can access it later when you want to analyze it? Hopefully, you have a tool that takes care of this for you, but if you don't, even putting it up on a chart on the wall that everyone can see can add value.
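If you don't have a tool that stores the trend for you, even the simplest durable format works. Here's a minimal sketch assuming one CSV file per metric, with one row per day; `StringIO` stands in for the file, and the defect counts are made up.

```python
import csv
import io
from datetime import date

def append_metric(stream, day, value):
    """Append one observation: the date and the value measured that day."""
    csv.writer(stream).writerow([day.isoformat(), value])

def read_trend(stream):
    """Read the stored observations back as (date-string, value) pairs."""
    stream.seek(0)
    return [(d, int(v)) for d, v in csv.reader(stream)]

# In practice this would be open("defects.csv", "a"); StringIO keeps the
# sketch self-contained.
store = io.StringIO()
append_metric(store, date(2024, 3, 1), 4)  # day 1: 4 open defects (invented)
append_metric(store, date(2024, 3, 2), 6)  # day 2: the count has grown
trend = read_trend(store)
```

The key design choice is append-only storage keyed by date: you never care about overwriting a value, only about accumulating the history you'll analyze later.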

How will the data answer your question? You should have an understanding before you start of how and when you will analyze the data to answer the original question. For a trend, determine what trend direction is "good" or "bad." Decide when the data will be reviewed, and by whom (again, maybe it's an automated process that does this review). Should action be taken when the data hits a certain "bad" level (call a meeting, send an email, break a build)? Should you have times when you evaluate whether or not the measurement still makes sense to gather?
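The "trend direction" and "bad level" decisions above can themselves be automated. This sketch fits a least-squares slope to the stored values and returns an action; the threshold, the action names, and the sample defect counts are assumptions to be replaced by whatever your team agrees on.

```python
def trend_slope(values):
    """Least-squares slope over equally spaced observations (e.g. daily)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def review(values, bad_level):
    """Map the data to the action the team agreed on in advance."""
    if values[-1] >= bad_level:
        return "escalate"  # e.g. call a meeting, send an email, break the build
    if trend_slope(values) > 0:
        return "watch"     # trending the wrong way, but not yet critical
    return "ok"
```

Running `review` on each day's data turns the analysis into a routine check rather than a judgment call made under pressure; a positive slope on a "lower is better" metric is the early warning, and crossing the agreed level triggers the agreed action.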

How do we know these decisions are being acted on? It's not enough to make the decisions; you need to follow up to make sure the actions you decided on are being executed. Ask someone outside the project to hold you accountable. Announce what you're doing to a group of peers who will expect to see results. Make it a defined part of your agenda at meetings and demos. Otherwise, you really are wasting your time.

(Coincidentally, these are the same questions that CMMI would have you ask yourselves in the Measurement and Analysis process area. So, it's not just coming from me.)

Getting good measurements isn't easy. You need to work to plan the measurement, work to take the measurement, and work to analyze and report the results. However, without measurements, your team is truly "flying blind," and without thought put into your measurements, you're just relying on "blind faith." Take the time to craft your metrics the way you take time to design your code, and the sight they bring you will pay dividends.