A Tale of Two Processes

As an executive of a Software-Intensive Business (SIB), your time has come. From these columns, you’ve learned the dangers of technical debt, the importance of good communication with your technical staff, and the need to select and implement the right features.
How to create software
Let’s create some software value. It’s very simple.
- You tell the programmers what you want the software to do
- They create it
- You verify that it does what it’s supposed to
- You let people start using it, and out pours the value.
Now, prepare to be shocked. About a decade ago, a major upheaval began in the software development community about how to apply these four steps. There are two basic approaches, Waterfall and Agile. Today, both are in widespread use. This article explains their essence, my recommendation, and the reasons for it.
Waterfall and Agile
Waterfall is a straightforward application of the four steps. Step 1—Specification—precisely define all features. Step 2—Coding—create software, using the Specification as a blueprint. Step 3—Testing—compare the software’s behavior against the Specification and resolve differences. Step 4—Release—make the software available, and resolve any remaining problems.
The hallmark of Waterfall is that you go through all the steps once, for all features at the same time.
In contrast, Agile repeats the four steps, over and over, with each pass adding a feature or two to a base of running, tested software.
There are many variations and implications swirling around them, but you now understand the essential difference between Waterfall and Agile.
I’ve done both, I’ve seen success and failure with both, yet I recommend Agile almost every time. Why? Because Agile is inherently more forgiving of estimation errors and requirements changes. But for you to choose Agile and proceed with confidence (there will be critics), you must understand why it’s intrinsically more forgiving. To help you, this article describes an idealized project, run both ways.
An idealized Waterfall project
Whether Waterfall or Agile, all projects begin with a concept for a set of features, evaluation, and a decision to proceed.
Our idealized project has twelve features, each estimated to take one month to implement. Based on typical experience (and to make the math easier), our project manager allocates effort in the ratio 1:2:1—three months for Specification, six months for Coding, and three months for Testing.
What we don’t know is that we’ve underestimated the project by 1/3, or four months. This is hardly a “bad” estimate, given the inherent uncertainties. But if our way of working can’t absorb an error like that and still yield positive value most of the time, we’re gambling with our SIB’s profitability.
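If you like to check arithmetic, here is the whole setup in a few lines of Python (the variable names are my own; the numbers come straight from the scenario):

```python
# Idealized plan: 12 features, estimated at one month each,
# with effort split 1:2:1 across Specification, Coding, and Testing.
estimated_months = 12
spec, coding, testing = (estimated_months * r / 4 for r in (1, 2, 1))
print(spec, coding, testing)              # 3.0 6.0 3.0

# The hidden reality: the estimate is low by one third.
actual_months = estimated_months * 4 / 3  # 16.0
print(actual_months - estimated_months)   # 4.0 months of hidden shortfall
```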
Specification should be done in three months, but it isn’t. Our team does the right thing and takes an extra month to finish and review it. We rationalize that they have eight months to get that month back—they’ve got a solid design, after all.
After two months of Coding, the team starts to realize that they’re not going to finish in the six months allocated (now five, with the month lost to Specification). But it’s just a feeling, and the conversations when Specification was late were unpleasant, so they press on. By the end of the fourth month of Coding, the team is undeniably behind schedule. The Project Manager fires off the red flare.
Tough choices
We have four months left to compensate for what we now know is a four-month shortfall. We can:
- Option A: Increase the budget and schedule by four months
- Option B: Cut features
- Option C: Cut testing
- Option D: Demand overtime
Option A means immediate unpleasantness, so we save it until all other options are exhausted. But by then the damage will be done.
Our good team urges Option B. Drop all features that can’t be completed in another 1 1/3 months of Coding (5 1/3 months in all). That leaves 2 2/3 months for Testing, preserving our 1:2 Testing:Coding ratio and an on-time release.
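Here is the Option B schedule math as another short Python sketch (the eight-feature figure at the end assumes the true pace implied by our underestimate: eight real coding months for twelve features):

```python
from fractions import Fraction as F

spec_spent = F(4)                 # Specification ran one month over
coding_total = F(4) + F(4, 3)     # 4 months done plus another 1 1/3 = 5 1/3
testing_total = coding_total / 2  # keep the 1:2 Testing:Coding ratio = 2 2/3

print(spec_spent + coding_total + testing_total)   # 12, an on-time release

# At the true pace, 12 features need 8 coding months, i.e. 2/3 month each,
# so 5 1/3 coding months finish 8 features and 4 get dropped.
print(coding_total / F(2, 3))                      # 8
```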
Best case, the team’s been coding feature-by-feature, starting with the most important ones. They don’t have to throw any code away, and the four dropped features weren’t that important. Worst case, all 12 original features are now half done, and we only have time to finish four of them. If those aren’t the vital four needed to even make the system work, we may have to cancel the project.
I’ll leave the results of cutting testing or imposing 80-hour weeks to your imagination, with one reminder. Our former best programmer, a woman with two small children, quit after the last death march.
So, we do a little of each, and ship a feature-starved system a month late, with a lot of post-release bugs, and a burned-out team. All from nothing more problematic than an initial estimate that was 1/3 too low.
An idealized Agile project
Done the Agile way, we still set a 12-month budget and schedule, only now we run the features through Specification, Coding, and Testing (and user review and possible release), one feature at a time, one feature per month.
After three months, the team has delivered only two tested features. Simple extrapolation shows that we probably won’t finish all twelve on time and on budget. Remember, three months into the Waterfall project, all we knew was that Specification wasn’t done, and we didn’t yet have the courage to extend the schedule or cut scope.
The choices are the same except we can’t cut Testing—we’re doing that as we go. Let’s explore a scope reduction.
If we drop any four not-yet-implemented features, we’re back on budget and schedule. We lose 1/3 of our original estimated value, but we don’t have to throw away any of the work done to date.
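The extrapolation and the scope cut are the same kind of back-of-the-envelope arithmetic, sketched below with numbers taken straight from the story:

```python
features_done, months_elapsed = 2, 3    # the pace so far: two tested features in three months
budgeted_months, planned_features = 12, 12

# At that pace, how many tested features arrive by month 12?
projected = features_done * budgeted_months // months_elapsed
print(projected)                        # 8
print(planned_features - projected)     # 4 features to drop
```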
But the news gets better. First, we don’t have to pick which features to drop yet; we get to re-prioritize eight more times. Second, the features clearly aren’t of equal value. For our not-atypical idealized project, they break down like this:
- Red, four features necessary for a viable system, with 4 units of value for the set
- Green, four high-value but non-essential features that add 2 units of value each
- (Sky) Blue, four low-value features that add ½ unit of value each
Because we get to review our feature selection eight more times, all based on user feedback from running software, we can probably get close to the optimal selection and ship all four Reds and all four Greens, for a total value of 12 out of a possible 14. The quality is high, there’s no death march, and we delivered about 85% of the potential value, on time, for the originally estimated cost, even with an estimation error of 1/3.
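For the record, the value arithmetic behind that “about 85%” is nothing more than this (again a sketch, using only the unit values listed above):

```python
red_value   = 4          # four essential features, 4 units for the whole set
green_value = 4 * 2      # four high-value features at 2 units each
blue_value  = 4 * 0.5    # four low-value features at half a unit each

possible = red_value + green_value + blue_value    # 14 units in total
shipped  = red_value + green_value                 # drop the Blues: 12 units
print(possible, shipped, round(shipped / possible, 2))   # 14.0 12 0.86
```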
Where’s the magic?
With Agile, we get early feedback on estimation accuracy and feature value, we get to defer feature decisions as long as possible, and we get to select the nearly optimal set of features that can actually be delivered (not just predicted) within the original budget and schedule. These intrinsic properties make Agile less brittle than Waterfall in the face of estimation error, changing requirements, technical surprises, and all other forms of uncertainty. The fewer software projects your SIB attempts, the bigger these problems will be, and the less you can afford a failure.
So why hasn’t everyone gone Agile? That would be another article.
Other columns by Robert Merrill
- Software sanity: Accurate estimates and other myths
- Let’s all join hands: How software intensive businesses make money
- Face it, software really IS different
- That sinking software feeling
The opinions expressed herein or statements made in the above column are solely those of the author, and do not necessarily reflect the views of Wisconsin Technology Network, LLC. WTN accepts no legal liability or responsibility for any claims made or opinions expressed herein.