It’s not often enough that I see teams experimenting. Well, that’s not entirely true. Most teams experiment by trying new techniques, by adjusting their process, or simply by trying something different. I’d call these anecdotal experiments, and they’re very valuable. However, it’s not these kinds of experiments that I want to talk about today. I’m focused on quantifiable experiments. Consider it a data-driven approach to analyzing team dynamics or individual behaviors. For me, this was born of my affinity for the burn down chart. If used as intended, the burn down is simple, it’s useful, and it inspires a team to ask intelligent, focused questions. If my experiments do the same, I’ve succeeded.
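To make the burn down concrete, here’s a minimal sketch of the calculation behind one, using invented sprint data (the totals, day counts, and numbers below are hypothetical, not from any real team):

```python
def ideal_burndown(total_points, sprint_days):
    """Ideal remaining work per day: a straight line from the total down to zero."""
    return [total_points - total_points * day / sprint_days
            for day in range(sprint_days + 1)]

# Invented data: story points actually remaining at the end of each day.
actual = [40, 38, 36, 36, 30, 26, 20, 18, 12, 6, 0]
ideal = ideal_burndown(40, 10)

# Days where the team sits above the ideal line are where the
# intelligent, focused questions tend to come from.
behind = [day for day, (a, i) in enumerate(zip(actual, ideal)) if a > i]
print(behind)
```

The chart itself is just these two series plotted together; the value is in the conversation the gap provokes, not the numbers themselves.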
Before I begin, I have a confession. I’ve rewritten this blog post several times now. Each time, it gets away from me, so I scrap what I wrote and begin again. Why? Simplicity. I kept losing sight of it. Because of this, I’ve adjusted my approach. I intend to write this blog post in some rather broad strokes, while my next will contain more context and examples of experiments I’ve run in my teams.
With my first broad stroke, let’s talk about some of the advantages of data-driven experiments:
- The act of observing can often create the behavior you intend. It’s called the Hawthorne effect. When teams know what question they’re trying to answer or what problem they’re trying to solve, they become more aware of it. This awareness alone can sometimes be enough to solve a problem.
- It can defuse unhealthy conflict. Conversations about numbers foster logic. By setting up the experiment and ultimately analyzing the results as a team, the conversations become about the numbers and not about the emotions.
- Use curiosity as a motivator. Engage the team as you begin crafting the experiment. Share your hypothesis, and include the team in its creation. If we put value in their hypotheses, we’re bound to generate team interest. This will foster some rather riveting team discussions even before the experiment begins, and it will generate curiosity as to what the data will say.
However, with great power comes great responsibility. Be careful, and here’s why:
- People don’t do what you expect; they do what you inspect. Be wary of unintended consequences that could come from analyzing the wrong data or from analyzing data in the wrong ways. Let’s say the team is analyzing story points, and we create an illusion that completing more story points over the next few sprints defines a successful experiment. Intentionally or not, the team may begin inflating their estimates and give a “successful”—yet artificial—result.
- Data must be as unbiased as possible. Numbers can be made to tell any story. For an experiment to be successful, be sure to measure the right things in the right way. Otherwise, the team may not trust the results they see.
- Data is only as valuable as the questions it inspires you to ask. Data is a tool just as a hammer is a tool. It’s not going to swing itself. Moreover, data rarely contains your answers. Instead, it’s a tool to help us ask intelligent and more informed questions.
- Isolate your variable. Before entering an experiment, know exactly what questions we’re trying to answer. One question is ideal. Limit yourself to three, and only collect data that directly relates to answering that question or questions. Further, maintain a dogged vision of what we’re attempting to measure. Otherwise, we risk overwhelming ourselves or the team. Worse yet, we risk the data being interpreted in numerous and conflicting ways.
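The estimate-inflation pitfall above is one of the easier things to check for. Here’s a hedged sketch, using invented per-sprint story estimates (the sprint names and point values are hypothetical): if the average estimate per story creeps upward while the work stays similar, the “more points completed” result may be gamed rather than earned.

```python
from statistics import mean

# Invented data: story-point estimates for each story completed in a sprint.
estimates = {
    "sprint-1": [2, 3, 5, 3, 2],
    "sprint-2": [3, 5, 5, 8, 3],
    "sprint-3": [5, 8, 8, 5, 8],
}

# Average estimate per story, per sprint. A steady upward drift here,
# with no change in the nature of the work, is a sign of inflation.
averages = {sprint: mean(points) for sprint, points in estimates.items()}
for sprint, avg in averages.items():
    print(f"{sprint}: average estimate {avg:.1f}")
```

As with the rest of this post, the number itself isn’t the answer; it’s a prompt for the team conversation about why estimates are drifting.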
That’s all for now. Stay tuned for my next blog where I’ll share some experiments I’ve run in my teams over the years. I hope to see you all again soon.
Update: Here’s a link to my follow-up blog on this topic.