There’s a hidden formula in software development that tells you how fast a team can get features DONE and Ready-to-Ship.
The formula is: D = V * T
It reads as: DONE Features = Velocity multiplied by Time
The importance of a software development team’s velocity
The term “velocity” as it applies to software development is simple to explain and to illustrate. Here’s my definition:
Velocity: A team’s velocity is the number of features it can get completely DONE and Ready-to-Ship during a short, fixed time period (2 to 4 weeks)
Velocity is extremely important for business owners and other project stakeholders. Without knowing the velocity of their team, they have no way to reliably plan release dates and coordinate marketing and sales teams. (2) It’s no exaggeration to say that the most important thing a professional software team can do to increase its value to an organization is to become skilled in the arts of estimation and planning. This post introduces the concepts behind velocity measurement, and provides links for more detailed reading.
Are we there yet? Speed racing down the software delivery highway
Building successful software and delivering it on time is an art of numbers. It all boils down to math, like physics or accounting.
Who can forget the familiar high-school formula D = V * T? (also written as D = R * T)
This, of course, is the famous equation for calculating how far you can travel from a given starting point to another when you already know your velocity (or rate) and how long you will be travelling.
Distance = Velocity multiplied by Time
For example: if we know we are traveling at 50 miles per hour and plan to travel for 3 hours, then we know we will travel 150 miles.
What happens, though, if we do not know our velocity, but instead know how far we have traveled and how much time it took to get there? Can we derive our velocity from these two measurements? Of course we can, with simple math. In this case, we have D and T, and can derive V by rearranging the formula to V = D / T.
Velocity = Distance divided by Time
For example: if we have traveled 120 miles in 3 hours from point A to point B, then we know our velocity is 40 miles per hour, or 40 mph.
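The two forms of the formula can be sketched in a few lines of code. This is just the arithmetic from the examples above, wrapped in hypothetical helper functions:

```python
# Distance = Velocity multiplied by Time (D = V * T)
def distance(velocity_mph, hours):
    return velocity_mph * hours

# Velocity = Distance divided by Time (V = D / T)
def velocity(distance_miles, hours):
    return distance_miles / hours

print(distance(50, 3))   # traveling at 50 mph for 3 hours -> 150 miles
print(velocity(120, 3))  # 120 miles in 3 hours -> 40.0 mph
```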
Figure 1: Two US high school math students calculating how far they can travel before returning to math class in 30 minutes or before being caught by authorities for driving on the wrong side of the road.
Measuring velocity in software development to decrease time-to-market and realize faster ROI
I hear what you’re screaming: enough with the PSAT math prep already, how does this apply to releasing software on time? It’s so simple you’ll kick yourself, or your team, for not doing this already.
Agile teams use a formula that works the same way. It’s calculated differently, because most software teams aren’t very mobile while coding, though it would be relaxing to code on a boat.
Because a team cannot reliably know in advance how quickly it can complete a set of features, it must use the second form of the equation to derive its velocity based upon actual observation of progress first.
Thus, the formula for calculating an agile team’s initial velocity still reads as V = D / T, except the D stands for “DONE Features” instead of distance. T, or time, usually stands for 2 to 4 weeks instead of 1 hour. For this article, we’ll assume it means 3 weeks.
Initial Velocity = DONE Features divided by Time
For example: if we get 6 features DONE in 3 weeks, then we know our velocity is 6 features per 3 weeks. Simplified, we’ll say 2 features per week.
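The same derivation, using the numbers from the example above (6 DONE features over a 3-week period):

```python
# Initial Velocity = DONE Features divided by Time (V = D / T)
done_features = 6
iteration_weeks = 3

velocity_per_week = done_features / iteration_weeks
print(velocity_per_week)  # -> 2.0 features per week
```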
Here is a simple chart depicting this velocity:
Figure 2: Velocity measurement illustration of six features becoming done during a three-week period
It’s tempting to look at this chart and say the velocity is 2 features per week, and that we can now start using the formula DONE Features = Velocity multiplied by Time to plan ahead. We will use this simplification for the purposes of this article, but keep in mind that this may or may not be true, so be careful! Here are two reasons why:
- New Requirements Discovered: During the course of any three-week period, teams will discover new requirements frequently. The new requirements could be bugs, change requests from the business team, or important changes required to match the competition. This is a subject for an entire volume on change management!
- Definition of DONE: It’s extremely important that a team agrees upon what qualifies as a DONE feature. Each team must define what it means by the word DONE. I leave that as an exercise for a future article, but you can find some recommended reading and listening below for reference. (3, 4)
For the rest of this post, we’ll pretend that no new requirements are discovered and we’ll define a feature as DONE if it has successfully passed through each of the following development phases:
- Requirements Definition
- Analysis, Design, and sufficient Documentation
- Unit Testing
- Code Review (for development standards adherence and security design assessment)
- Refactoring (to conform to standards and address security deficiencies)
- Functional Testing
- User Acceptance Testing (preferably automated)
- Performance Testing
- Pilot (beta testing with real or proxy users)
This may sound like a lot of work! And, it certainly is a lot of work. All mission-critical projects consist of a number of features that must go through these steps before they can be considered DONE and Ready-to-Ship.
Pitfalls of using early travel velocity to forecast total road trip duration
Returning to our travel example, suppose we are traveling from our city to the mountains for a conference about software estimation and planning. We know the destination is 500 miles away. We also know that the interstate through our city and into the next state has a speed limit of 70 mph. A simple calculation tells us that it would take 7.14 hours to travel 500 miles at 70 mph.
What if you absolutely had to be at the meeting on time? Would you think it’s wise to turn that back-of-the-napkin estimate into a target to which you could commit?
Most people would say it’s insane to expect that you would travel into the mountains at 70 mph, the same velocity as on the interstate. What’s more, you’d have to take bathroom breaks and food breaks too. You agree with most people.
You decide to email the mailing list for the conference and ask if anyone has ever traveled from your city to the mountain location, and you get a response complete with a chart! Your colleague says she kept track of how many miles she traveled during each hour and came up with the chart in figure 3, showing that it took just over 9 hours to cover the 500 miles.
Figure 3: Chart showing total number of miles driven after each hour in red and number of miles driven during each hour in blue
If we round the number of hours traveled up to an even 10, we’ll just call this 50 mph. The reason we cannot travel at 70 mph for the entire trip is simple: mountain roads are curvier and more dangerous, and we have to stop for food and bathroom breaks. Only after completing the trip once can we look back and use the experience to gauge future trips through the same or similar terrain.
Let’s take a beginner's look now at how agile teams can use historical data, combined with estimation, to produce better delivery date forecasts. This will be covered in more depth in my next post.
Producing better software delivery date forecasts using simple, empirical estimation techniques
Similarly, if we know our total number of features is 50, and that our velocity is 2 features-per-week, then it’s tempting to calculate that it should take 25 weeks to complete our project.
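That back-of-the-napkin forecast is just the first form of the formula rearranged for time:

```python
# Time = DONE Features divided by Velocity (T = D / V)
total_features = 50
velocity_per_week = 2

weeks_to_complete = total_features / velocity_per_week
print(weeks_to_complete)  # -> 25.0 weeks (the tempting, naive forecast)
```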
Alas, software development is rarely as simple as driving down a straight interstate. Just as the journey into the mountains takes us through varied terrain and requires breaks, software development confronts us with all kinds of unexpected requirements. Stakeholders request new features, markets change, people get hired, people get fired!
And, most importantly, not all features are the same size or complexity. Because of this, agile teams need to take additional steps to bring predictability to delivery schedules. This is usually done with estimation techniques like Wideband Delphi or Planning Poker. These two techniques have been written about by Steve McConnell and Mike Cohn, respectively. (5, 6, 7)
I will cover Planning Poker in more detail in a future post, but the main idea behind it is that the entire team takes a few hours every three weeks to look ahead at the work to be done and place a relative size or complexity estimate on each item. The team then measures how quickly it completes those items. So instead of our simple count of “50 features”, the team might actually have a number such as 150 “points”, meaning that, on average, each feature is roughly 3 points of estimated size or complexity. For now, however, let’s continue to focus on tracking how fast the team moves through 50 features.
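Here is a quick sketch of how the same forecast looks when tracked in points rather than raw feature counts. The 6-points-per-week velocity is an assumed number for illustration only (it is simply the 2-features-per-week velocity times the 3-point average):

```python
# Points-based tracking: 50 features estimated at 150 points total.
total_features = 50
total_points = 150

avg_points_per_feature = total_points / total_features
print(avg_points_per_feature)  # -> 3.0 points per feature, on average

# Assumed points velocity: 2 features/week * 3 points/feature = 6 points/week.
points_velocity_per_week = 6
weeks_to_complete = total_points / points_velocity_per_week
print(weeks_to_complete)  # -> 25.0 weeks, same forecast in different units
```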
Agile teams typically use a chart that is drawn from the top down toward zero, which indicates zero features left outstanding! This is called a burndown chart, and a realistic chart might look like figure 4:
Figure 4: Hypothetical burndown chart illustrating how the amount of actual work, in blue, fluctuates up and down as the total number of UNDONE features approaches zero. The initial estimate of 50 features and the target velocity of burning down 2 features per week is shown in red
This chart shows that the team had 50 features remaining to implement at the start of week 0. The initial target velocity of 2 features per week, shown in red, holds up for a few weeks, but then the pace falls off a bit before speeding up to faster than 2 per week. Perhaps the business team feels the team can take on more work, and new features get added. This causes the line between weeks 11 and 24 to remain relatively flat before the velocity picks up again.
By the time the initial 50 features are completed, we can calculate that they burned down at a rate of about 1.5 per week. Now, this simple chart does not actually show how many features were added over the course of the project, though it’s obvious when you see the blue spikes. There are more sophisticated charts that can help illustrate this, but I’ll leave that for next time.
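Just like the road trip, the actual velocity is recomputed after the fact with V = D / T. The 33-week duration below is an assumed reading off the hypothetical burndown chart, consistent with the roughly 1.5-features-per-week rate mentioned above:

```python
# Actual Velocity = DONE Features divided by elapsed Time (V = D / T)
features_completed = 50
weeks_elapsed = 33  # assumed: approximate duration read off the burndown chart

actual_velocity = features_completed / weeks_elapsed
print(round(actual_velocity, 1))  # -> 1.5 features per week, not the planned 2
```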
In the meantime, please visit the suggested resources, starting with Mike Cohn’s excellent presentation about Agile Estimation and Planning, to learn more. (1)
Until next time, stay agile not fragile.
References and Resources
1. “Introduction to Agile Estimating and Planning” – by Mike Cohn, PDF presentation about release planning with agile estimation and planning techniques: http://www.mountaingoatsoftware.com/presentations/106-introduction-to-agile-estimating-and-planning
2. “Nokia Test: Where did it come from?” – by Jeff Sutherland, about how Nokia uses velocity tracking to assess its teams’ productivity and likelihood to generate future ROI: http://jeffsutherland.com/scrum/2008/08/nokia-test-where-did-it-come-from.html
3. “How Do We Know When We Are Done?” – by Mitch Lacey, about how his team defined DONE with the whole team’s participation: http://www.scrumalliance.org/articles/107-how-do-we-know-when-we-are-done
4. “Scrum, et al” – by Ken Schwaber, about the history of Scrum, presented at Google: http://www.youtube.com/watch?v=IyNPeTn8fpo
5. Software Estimation: Demystifying the Black Art – by Steve McConnell, book about lessons learned and best practices for software estimation: http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
6. Agile Estimating and Planning – by Mike Cohn, book about how to perform agile estimation and planning using simple estimation techniques and short, fixed time-boxed development iterations: http://www.amazon.com/Agile-Estimating-Planning-Mike-Cohn/dp/0131479415
7. ATL ALT.NET Meetup recorded conversation about Agile Estimation and Planning: http://www.meetup.com/AtlAltDotNet/calendar/9525107/?eventId=9525107&action=detail (direct MP3 link: http://apps.ultravioletconsulting.com/audio/ATLAltDotNet/ATLAltDotNet-2009-01-27-AgileEstimationAndPlanningDiscussion.mp3)