
Tuesday, June 24, 2014

Humans suck at statistics - how agile velocity leads managers astray

Humans are highly optimized for quick decision making: the so-called System 1 that Kahneman describes in his book "Thinking, Fast and Slow". One specific area of weakness for the average human is understanding statistics. A very simple exercise that demonstrates this is the coin-toss simulation.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each "heads" the person adds one to the total; for each "tails" the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin toss without a coin: she writes down "heads" or "tails" from imagination and updates the total in the same way. Leave the room while the two players run their exercise, and come back after they have completed 100 throws.
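
If you are short on players, a few lines of code reproduce the "real coin" half of the exercise. This is a minimal sketch in Python; the function name and structure are mine, not taken from any simulator mentioned in this post:

    import random

    def coin_toss_walk(n_throws=100, seed=None):
        """Running total of a fair coin: +1 for each heads, -1 for each tails."""
        rng = random.Random(seed)
        total, totals = 0, []
        for _ in range(n_throws):
            total += 1 if rng.random() < 0.5 else -1
            totals.append(total)
        return totals

    print(coin_toss_walk(100))  # graph this series to reproduce the exercise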

Look at the graph that each person produced. Can you detect which one was created by the real coin and which one was imagined? Test your knowledge by looking at the graph below (don't peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (a computer simulation)?

One common characteristic in this exercise is that the real random walk, produced by actually throwing a coin in the air, is usually "streakier" than the one simulated by the player. A real coin will happily generate several consecutive heads or tails; no human (except you, after reading this) would write down such a run, because it would not "feel" random. We humans are bad at creating randomness and at understanding the consequences of randomness, because we are trained to see a meaning and a theory behind everything.
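
You can make this streakiness concrete by measuring the longest run of identical outcomes in a toss sequence. A minimal sketch (the helper name is my own):

    import random
    from itertools import groupby

    def longest_streak(tosses):
        """Length of the longest run of identical outcomes."""
        return max(sum(1 for _ in group) for _, group in groupby(tosses))

    tosses = [random.choice("HT") for _ in range(100)]
    print(longest_streak(tosses))  # a run of 6 or more is common in 100 fair tosses

Run it on a human-generated sequence and on a real one: the human sequence almost always has the shorter longest run.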

Take the velocity of a team. Did it go up in the latest sprint? Surely the team is getting better! Or it's the new person who joined: they are already having an effect! In the worst case, if the velocity goes down in one sprint, we run around like crazy trying to solve a "problem" that prevented the team from delivering more.

The fact is that a team's velocity is affected by many variables, and its sprint-to-sprint variation is not predictable. What is predictable, and this is the most important point, is that the velocity will vary: it will reliably go up and down over time.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a "dip" or a "peak" in the project's delivery rate.

When you look at a graph of a team's velocity, don't ask "what made the velocity dip or peak?". Ask instead: "based on this data, what is the capability of the team?". This second question will help you understand what your team is capable of delivering over a long period of time, and it will help you manage the scope and release date for your project.
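
As an illustration of how you might answer that second question, here is a minimal sketch that summarizes historical velocities as an empirical range. The percentile band and the sample numbers are my own assumptions, not a method prescribed in this post:

    from statistics import mean

    def capability_band(velocities, low_pct=0.2, high_pct=0.8):
        """Empirical range that most past sprints fall inside."""
        ordered = sorted(velocities)
        low = ordered[int(low_pct * (len(ordered) - 1))]
        high = ordered[int(high_pct * (len(ordered) - 1))]
        return low, high

    velocities = [21, 34, 18, 27, 30, 22, 25, 19, 28, 24]  # made-up sample data
    low, high = capability_band(velocities)
    print(f"mean {mean(velocities):.1f}, typical range {low}-{high}")

Reading velocity as a range rather than a single number keeps the focus on the team's capability instead of on sprint-to-sprint noise.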

The important question for your project is not, "how can we improve velocity?" The important question is: "is the velocity of the team reliable?"

Picture credit: John Hammink, follow him on Twitter.

Solution to the question above: the black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more "regular" because humans expect random processes to "average out". That is indeed the theory in the long run, but it is not the reality of a short sequence: the running total of a fair coin drifts away from zero far more than intuition suggests. Humans are notoriously bad at distinguishing real randomness from what we believe randomness should look like.

As you know, I've been writing about #NoEstimates regularly on this blog. I also send more information about #NoEstimates, and how I use it in practice, to my list. If you want to know more, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and the reasons for using #NoEstimates.




3 Comments:

  • Hi Vasco,
    nice post. I've written a javascript simulation of the tossing-the-coin experiment. It's available here
    https://rawgit.com/JonJagger/Fellers-1000-coin-tosses/master/fellers.html
    It also allows you to introduce feedback: every N throws, instead of tossing the coin, you cheat and nudge the total towards zero. With this you can run experiments such as asking people to estimate the value of N needed to halve the variance of the total.

    By Blogger Jon Jagger, at June 24, 2014 8:48 AM  

  • @Jon

    Thanks for the link to the simulator! :)

    How have you used the simulator in a classroom/coaching setting? I'd be interested in that and how people reacted! :)

    By Blogger Unknown, at June 24, 2014 3:12 PM  

  • I usually explain how Feller's walk works and then run the simulation with no feedback, making sure everyone understands the results. People are invariably surprised that the randomness does not cancel out: they expect many more walks to end near zero than actually do.
    Then I explain the feedback and ask them to estimate the least amount of feedback needed to halve the variance. They invariably estimate too much. So I ask for a new estimate. They still estimate too much. They are amazed at how little feedback you need.

    By Blogger Jon Jagger, at June 29, 2014 9:56 PM  
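
For readers who want to try the feedback variant Jon describes without the browser simulator, here is a minimal sketch in Python. The function names and the variance comparison are my own reading of his description, not code from his simulator:

    import random
    from statistics import pvariance

    def walk_with_feedback(n_throws=1000, feedback_every=None, rng=random):
        """Coin-toss walk; every feedback_every-th step nudges the total towards zero."""
        total, totals = 0, []
        for i in range(1, n_throws + 1):
            if feedback_every and i % feedback_every == 0:
                total += -1 if total > 0 else (1 if total < 0 else 0)  # the "cheat" step
            else:
                total += 1 if rng.random() < 0.5 else -1
            totals.append(total)
        return totals

    def final_variance(feedback_every, runs=500):
        return pvariance([walk_with_feedback(feedback_every=feedback_every)[-1]
                          for _ in range(runs)])

    print(final_variance(None), final_variance(20))

Comparing the two printed variances shows how little feedback is needed to shrink the spread of the walk, which is the surprise Jon's students run into.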
