Mike Cohn inspires a lot of my work. In fact, when I began this blog years back, Mike stopped by and offered a piece of advice on my first blog post that has stuck with me. He said he’d often track the number of questions he’d ask and compare it to the number of statements he’d make. If there were more questions than statements, it was a good conversation. And that’s where today’s blog post about metrics for the sprint review begins.
These aren’t metrics in the traditional sense. They’re also not vanity or actionable metrics as Eric Ries describes in The Lean Startup. Instead, I’d call them trivia metrics. They’re useful talking points grounded in reality rather than the abstract, and they’re meant to drive home observations from the sprint review.
- Is the Product Owner asking more questions than making statements? Make a tick mark for every statement made and every question asked by the Product Owner. This matters because we own every answer we give, and don’t we wish for the team to own as much as possible? The Product Owner is capable of answering most questions, but they should make space for the team to answer first. And if a team member doesn’t offer some useful information, the Product Owner can always ask, “Isn’t it true that…?”
- How many new issues were added to the backlog? Our goal at the sprint review is to gain feedback about our new increment. Are we headed in the right direction? Did we miss the mark? What new ideas or thoughts did our increment inspire from our guests? Our hope is that we end up with ideas that will be actioned in future sprints, and if we find none, why not? Could it be that we’re working toward the wrong goal in our sprint reviews?
- How many guests attended? If many, how engaged are they? Which features draw their attention? Which don’t? And if we have no guests, why? When we hear that others are “too busy,” this is usually a sign of something deeper. Does it mean the work isn’t important to the business? If so, why are we doing it? If it is important, how come we can’t carve out an hour every other week to showcase the team’s work to leadership? Dig here. Often, we’ll uncover a ton of useful insights if we take the time to talk openly.
- How long into the review was the first question asked? This should be a conversation, not a lecture. Are team members talking to their audience while they show off their work? I hope so. And let’s hope they’re not drowning their guests in words. I prefer to speak succinctly and then clear up the details through questions and conversation. Why?
It’s often better to be accurate than precise.
- How many features were demoed in prod instead of other environments? Teams sometimes encounter pragmatic reasons not to demo something in prod, but is everything they show off running on their local machines or in staging? Why? Did they run out of time to deploy to prod before the review? Was there still some testing they wanted to complete before deploying? (Feel free to replace prod with your appropriate target environment.)
- How much work was showcased that didn’t meet the definition of done? I often look at this and the one above together, and again I’ve seen some good reasons to make exceptions. But what happens if you put a tick mark for everything demoed in prod versus elsewhere? Or for everything that met the definition of done versus everything that didn’t? If we tracked this over time, would it decrease, increase, or stay the same?
- How was the time spent during the sprint review? Track the number of minutes the conversation was about the current sprint, about previous sprints, and about the future. While it’s true that the focus of the conversation is on our ending sprint, we should make room for discussion of the past and the future. Each context is different, so what’s the right split for you? How does that ideal compare to what we just observed? Why?
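If you prefer something more structured than tick marks on paper, the tallies above can be captured in a few lines of code. This is only an illustrative sketch; the field names and the talking-point thresholds are my own assumptions, not anything prescribed by Scrum or by this list.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewTally:
    """Tick-mark tallies gathered while observing one sprint review.

    Field names are illustrative; adapt them to your own review.
    """
    po_questions: int = 0            # questions asked by the Product Owner
    po_statements: int = 0           # statements made by the Product Owner
    new_backlog_items: int = 0       # issues added to the backlog during review
    guests: int = 0                  # non-team attendees
    minutes_to_first_question: Optional[int] = None
    demoed_in_prod: int = 0
    demoed_elsewhere: int = 0        # local machines, staging, etc.
    met_dod: int = 0                 # items meeting the definition of done
    missed_dod: int = 0
    minutes_by_topic: dict = field(
        default_factory=lambda: {"current": 0, "past": 0, "future": 0}
    )

    def talking_points(self) -> list:
        """Turn the raw tallies into observations to raise with the team."""
        points = []
        if self.po_statements >= self.po_questions:
            points.append("The Product Owner made more statements than questions.")
        if self.new_backlog_items == 0:
            points.append("No new backlog items came out of the review.")
        if self.demoed_elsewhere > 0:
            points.append(
                f"{self.demoed_elsewhere} feature(s) were demoed outside prod."
            )
        return points
```

A tally like `ReviewTally(po_questions=2, po_statements=5, demoed_elsewhere=3)` would surface all three observations, each phrased as a prompt for the retrospective conversation rather than a score to optimize.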
So why did I choose to call them trivia metrics? Because the numbers themselves matter less than the conversations they spark. Often, when I bring them up to teams, I ask, “Did you realize that of the five new features we talked about at the review, none of them were shown in prod? Why was that?”
That’s all for now. So what data points do you look for at the sprint review? And have you brought them up to your team to gain their perspectives? I’d love to hear about it in the comments below.