I think many of us have been going about this predictability thing all wrong. Don’t misunderstand. It’s not for lack of trying, and we’ve had all the right intentions. Still, in many cases, we’ve done more harm than good, and I’m just as much to blame as anyone else. A colleague recently had much the same thing top of mind.
I think it starts with where we spend our time and what we attempt to control. But why does that matter? Opportunity cost. When we invest our time attempting to control the uncontrollable, we lose the opportunity to influence something within our grasp. Let me explain.
What Can We Control?
When it comes to predictability, we can influence work in several dimensions. Ironically, we often choose to tackle areas where we have little to no control. Here’s what I typically see.
- We invest heavily in predicting an end date, but can’t we more easily influence the start date?
- We spend a wealth of time outlining the next three months with colorful Gantt charts and slide shows. However, shouldn’t we be investing more time deciding on small, actionable steps that can demonstrate end-to-end progress to our customers?
- We have little control over which of our assumptions will be true or false. But are we validating those assumptions in the right order? And quickly?
- We promise features in a predictable order when we have the least amount of information. But did we establish a cadence to check in with our stakeholders?
This last one surprises me. While we can’t reliably predict what work we’ll deliver two months from now, we can set up a recurring meeting to sit down with stakeholders every two weeks and show them what we’ve accomplished and why. If we have nothing to show at that checkpoint, we could use the time to explain what we intend to do to ensure we have something for the next one. Still, I can appreciate why we struggle with this.
Predictability is a means to explain and quantify uncertainty.
It’s this uncertainty that gets in our way. We tend to deal with it irrationally rather than logically. And while we struggle with this professionally, we accept it in other contexts.
Consider your commute. Rush hour can be killer (especially in Silicon Valley). Many of us use Google Maps to predict how best to get from point A to point B. If anyone can help us come to terms with predictability, isn’t it a product that does nothing but provide predictions? After all, if Google Maps consistently got these predictions wrong, we’d toss it aside for something more reliable.
When Will We Arrive?
We typically answer this question by discussing solutions and the work involved with the team, and we estimate how long the work will take. On the other side, we’ll provide a start and end date. We do all this before we begin the work. Sometimes we do this months before the first line of code is written.
I can think of a few ways this can go sideways. First, even if we were able to accurately predict the amount of work, do we truly know when the work is going to begin? How many of us have faced situations where our team wasn’t staffed by the start date? Or how many of us have been in situations where work began on paper but team members continued to focus their time on wrapping up previous projects? If we can’t even get the start date right, what makes us think we have any control over the end date?
Why Not Apply the Last Responsible Moment?
To avoid this, Google doesn’t provide an ETA until we’ve departed, and that’s a good thing. Don’t we typically figure out our route just as we’re leaving? That’s when it matters most, right? We wouldn’t attempt to decide on a route months in advance. That’s just silly. Nor would we enter our destination while barreling down the highway. That’s unsafe.
This is the essence of the last responsible moment.
After we begin our drive, Google establishes an ETA, which it continually refines as we get closer to our destination and more is known.
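We can apply the same idea to delivery. The sketch below is a minimal, hypothetical re-forecast: each week we feed in the work remaining and our most recent weekly throughput, and project an optimistic and a pessimistic finish date. The function name and the simple rate model are my own illustration, not a real tool’s API.

```python
from datetime import date, timedelta

def forecast_finish(remaining_items, recent_weekly_throughput, as_of):
    """Re-forecast a finish-date range from what we know so far.

    remaining_items: count of work items still to deliver
    recent_weekly_throughput: items completed in each recent week
    as_of: the date this forecast is made

    Illustrative only: assumes future throughput falls somewhere
    between the best and worst recent weeks.
    """
    best = max(recent_weekly_throughput)
    worst = min(recent_weekly_throughput)
    # Weeks needed at the fastest and slowest observed pace, rounded up.
    fastest_weeks = -(-remaining_items // best)
    slowest_weeks = -(-remaining_items // worst)
    return (as_of + timedelta(weeks=fastest_weeks),
            as_of + timedelta(weeks=slowest_weeks))

# Run this again each week with fresher data; as less work remains,
# the gap between the two dates tends to shrink, just like an ETA.
early, late = forecast_finish(30, [4, 6, 5], date(2019, 3, 4))
```

The point isn’t the arithmetic; it’s that the forecast is recomputed from live data as we drive, rather than fixed before we leave.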
But What About Future Proofing?
I concede that there are situations where we need to think longer term, so let’s see how Google Maps handles this. Traffic volume can’t be known for a future time and day, but it can be predicted. To this end, it uses historical data to provide a range. Such a range can help us get ahead of risk.
For example, imagine we have an interview tomorrow morning, and we see travel time is 55m to 1h 40m. Since we want to ensure we arrive on time, we’d give ourselves at least 1h 40m to get to our destination. Conversely, if we were meeting a friend for lunch, we may be less concerned and give ourselves something less, say 40 minutes. This same mental model can also be applied to differentiate work that truly has some kind of date constraint from work that doesn’t.
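A range like Google’s can be sketched from nothing more than historical samples and two percentiles. Everything here is an illustrative assumption: the function, the nearest-rank percentile method, and the sample data. The one idea that carries over from the text is choosing the percentile by consequence, reading near the high end when being late is costly (an interview) and nearer the middle when it isn’t (lunch).

```python
def travel_time_range(samples_minutes, lo_pct=10, hi_pct=90):
    """Range of historical travel times between two percentiles.

    samples_minutes: past travel times, in minutes, for this route
    and time of day. Hypothetical data, not a Google Maps API.
    """
    s = sorted(samples_minutes)

    def pct(p):
        # Nearest-rank percentile: crude, but fine for a planning range.
        k = round(p / 100 * (len(s) - 1))
        return s[max(0, min(len(s) - 1, k))]

    return pct(lo_pct), pct(hi_pct)

history = [55, 60, 62, 70, 75, 80, 90, 100]
low, high = travel_time_range(history)
```

Swap travel times for cycle times and the same two-line range becomes a delivery forecast: quote the span, then narrow it as new samples arrive.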
I’ve experimented with ranges in a few organizations, and I’ve found that many leaders struggle with the notion. If I explain that work will be completed in March, some assume I mean March 1st; others assume I mean March 31st. What I really mean is anywhere between March 1st and March 31st, and we intend to update, and hopefully narrow, this expectation as we discover more. I’ve learned that I must be explicit when providing ranges in this way, which brings us to an interesting thought experiment. Do these same people struggle with the ranges provided by Google Maps? I think not. And if I’m right, why? What can we learn from this?
I suppose Niels Bohr had it right all along:
Prediction is very difficult, especially about the future.
That’s all for today. And special thanks goes to Troy Magennis and his Agile 2018 keynote for inspiring this blog post. You’ve been a tremendous influence on me in the past, and I can’t wait to see what’s next. For a look at some of Troy’s resources, head to GitHub.