
Seven Simple Metrics for the Sprint Review

Mike Cohn inspires a lot of my work.  In fact, when I began this blog years back, Mike stopped by and offered a piece of advice on my first blog post that has stuck with me.  He said he’d often track the number of questions he’d ask and compare it to the number of statements he’d make.  If there were more questions than statements, it was a good conversation.  And that’s where today’s blog post about metrics for the sprint review begins.

These aren’t metrics in the traditional sense.  They’re also not vanity or actionable metrics, as Eric Ries describes in The Lean Startup.  Instead, I’d call them trivia metrics.  They’re useful talking points grounded in reality rather than the abstract, and they’re meant to drive home observations from the sprint review.

  • Is the Product Owner asking more questions than making statements?  Make a tick mark for every statement made and every question asked by the Product Owner.  This is important since we own every answer we give, and don’t we wish for the team to own as much as possible?  The Product Owner is capable of answering most questions, but s/he should make space for the team to answer first.  And if a team member doesn’t offer some useful information, the Product Owner can always ask, “Isn’t it true that…?”
  • How many new issues were added to the backlog?  Our goal at the sprint review is to gain feedback about our new increment.  Are we headed in the right direction?  Did we miss the mark?  What new ideas or thoughts did our increment inspire in our guests?  Our hope is that we end up with ideas that will be actioned in future sprints, and if we find none, why not?  Could it be that we’re working toward the wrong goal in our sprint reviews?
  • How many guests attended?  If many, how engaged are they?  What features draw their attention?  Which don’t?  And if we have no guests, why?  When we hear that others are “too busy,” this is usually a sign of something deeper.   Does this mean the work isn’t important to the business?  If so, why are we doing it?  If it is important, how come we can’t carve out an hour every other week to showcase the team’s work with leadership?  Dig here.  Often, we’ll uncover a ton of useful insights if we take the time to talk openly.
  • How long into the review was the first question asked?  This should be a conversation, not a lecture.  Are team members talking to their audience while they show off their work?  I hope so.  And let’s hope they’re not drowning their guests with words.  I prefer to speak succinctly and then clear up the details through questions and conversation.  Why?

It’s often better to be accurate than precise.

  • How many features were demo’ed in prod instead of other environments?  Teams sometimes encounter pragmatic reasons not to demo something in prod, but is everything they show off running locally or in a staging environment?  Why?  Did they run out of time to deploy to prod before the review?  Was there still some testing they wanted to complete before deploying?  (Feel free to replace prod with your appropriate target environment.)
  • How much work was showcased that didn’t meet the definition of done?  I often look at this and the one above together, and again I’ve seen some good reasons to make exceptions.  But what happens if you put a tick mark next to everything shown in prod versus elsewhere?  Or next to everything that met the definition of done versus everything that didn’t?  If we tracked this over time, would it decrease, increase, or stay the same?
  • How was the time spent during the sprint review?  Track the number of minutes the conversation spent on the current sprint, on previous sprints, and on the future.  While it’s true that the focus of the conversation is the sprint we just finished, we should make room for discussion of the past and the future.  Each context is different, so what’s the right split for you?  How does that ideal compare to what we just observed?  Why?

So why did I choose to call them trivia metrics?  Because they work like trivia questions I can pose back to the team.  Often, when I bring them up, I ask, “Did you realize that of the five new features we talked about at the review, none of them was shown in prod?  Why was that?”

That’s all for now.  So what data points do you look for at the sprint review?  And have you brought them up to your team to gain their perspectives?  I’d love to hear about it in the comments below.



6 thoughts on “Seven Simple Metrics for the Sprint Review”

  1. My 2 cents on data points I have been looking for during the review:

    – Is this the first time the PO is getting to see the new features?
    I encourage my teams to conduct interim demos during the course of the sprint.  This helps us uncover assumptions and questions we would not want to defer until the review.

    – Who is conducting the demo?
    Ideally, the business SME or PO should have explored the feature enough to demo it to the other business stakeholders during the review.  Not always, though.

  2. Hi Tanner.  Regarding this point:

    > How much work was showcased that didn’t meet the definition of done?

    If some work is NOT meeting DoD criteria, then why would we want to even showcase it in the Sprint Review… because in Agile, 99.9% done is also NOT ‘Done’.
    100% Done is Done (only working software).

    Please let me know if I misunderstood anything here.

    1. Hi, Ranjit. Prepare for a flood of questions and random thoughts. 😉

      What if the team insists that they should show something that doesn’t meet the DoD?  Is it my or your place as a coach to tell them no?  Maybe their DoD is flawed.  Maybe they don’t see the value in it.  If they show something not done, is this a learning opportunity for the team?  Or is there something I as the coach misunderstand?

      Let’s remember that one of the purposes of the review is to inspire a conversation with those outside our team (i.e. users, customers, stakeholders). This conversation should help us understand what to do next. What if how we split the work doesn’t meet the DoD but can yield an informative conversation? If the team can argue that a piece of non-done work can do so, should we still tell them no (assuming again it’s our place to)?

      > because in Agile, 99.9% done is also NOT ‘Done’.

      Where do you read that in the manifesto? I’d venture to guess you’re re-interpreting this principle:

      “Working software is the primary measure of progress.”

      Still, it says nothing about any percentages and nothing about DoD. Nor does the manifesto talk about stories. Be careful not to confuse principles with practices or constructs, which I suspect is what you’ve done in the quote above.

      Further info:
      https://www.mountaingoatsoftware.com/blog/only-show-finished-work-during-a-sprint-review-maybe
      https://www.spikesandstories.com/scrum-master-say-no/

  3. This post inspired me to regularly measure “time spent demo’ing / telling” vs “time spent answering questions / listening”.  When I’ve shared the data with team members and stakeholders alike, it has been universally valued and has helped improve engagement at the Sprint Review.
