Software sanity: Accurate estimates and other myths

In the mid-1970s, Dr. Neil Frank, director of the National Hurricane Center, came to a sobering realization. Each decade, forecasts were getting 10 percent more accurate. And each decade, the population in hurricane-prone areas was doubling as people moved to the coasts of the Sun Belt.
Dr. Frank made a controversial choice. He began personally traveling the hurricane coast every spring, preaching the danger of storm surge, not wind, and calling for better planning, decisions, and responses in the face of the hurricane threat. Forecast improvements won’t save us; rather, the actions taken based on admittedly imperfect forecasts can and must improve dramatically. Today, NHC produces sophisticated, probabilistic decision support that drives a massive and complex set of individual, commercial, and governmental responses to an approaching hurricane.
As leaders of Software-Intensive Businesses (SIBs), we must choose projects, set budgets, and commit to release dates based on estimates of costs and benefits. It’s only money and unpleasantness at stake. But for as long as I can remember, the estimates have never been good enough. And they never will be.
In “That Sinking Software Feeling,” I explained how SIB management’s mistaken beliefs about software development can lead to business trouble. Here are several mistaken beliefs about software estimation:
- Estimating is an art, and we can’t get better at it.
- We just need to get better at estimating.
- The more effort we put into an estimate, the more accurate it will be.
- An inaccurate estimate is useless.
Estimating is an art
No, it’s not.
Some organizations are better at estimating than others. They’re intentional. They separate estimates from commitments. They keep records on past estimates and outcomes, and use them. They involve the experts, and use more than one approach. They don’t confuse estimation and negotiation. And they make better decisions based on the estimates they do produce.
Many trees have died to make us better software project estimators. “Software Estimation: Demystifying the Black Art” by Steve McConnell is a great place to start.
We just need to get better
As with the hurricane forecasts, we (meaning “they,” the technical people) are never going to be able to make estimates that are good enough, early enough, to satisfy us. That may once have been possible, but it isn’t anymore.
First, let’s define “good.” Accurate estimates have small errors. Unbiased estimates are as likely to be high as low. If we average our estimation errors over a series of projects and the result is near zero, we’re producing unbiased estimates. There will still be inaccuracy (scatter). A complete estimate consists of both the estimated value and a realistic statement of the expected scatter, best expressed as a confidence interval. To most people, “our estimates in project charters are within 2x, 90 percent of the time” means more than a standard deviation does.
Most people equate “good” with “accurate.” I equate “good” with “unbiased and complete, with reasonable accuracy given the time spent on estimating.” This is within every organization’s reach, relatively quickly. Getting incrementally more accurate may not even be worth the effort. Instead, we need to learn to account for estimation errors when selecting and running projects. We will never be accurate enough to let us off the governance and methodology hook.
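To make “keep records on past estimates and outcomes, and use them” concrete, here is a minimal sketch in Python. The project history is invented purely for illustration; the calculation is just the bookkeeping described above: average the signed errors to check for bias, and count how often estimates landed within the 2x band.

```python
# A minimal sketch of estimate calibration, using an invented project history.
# Each record pairs an estimated effort with the actual outcome (person-months).
import statistics

past_projects = [
    # (estimated_effort, actual_effort): hypothetical numbers for illustration
    (10, 14), (20, 18), (8, 17), (30, 33), (12, 10),
    (25, 41), (6, 7), (15, 22), (40, 35), (9, 16),
]

# Bias: average the signed errors. Near zero means unbiased;
# a consistently negative mean means we habitually underestimate.
errors = [est - act for est, act in past_projects]
bias = statistics.mean(errors)

# Scatter, stated the way a sponsor can use it: what fraction of past
# estimates landed within a factor of 2 of the actual outcome?
ratios = [act / est for est, act in past_projects]
within_2x = sum(1 for r in ratios if 0.5 <= r <= 2.0) / len(ratios)

print(f"Mean error (bias): {bias:+.1f} person-months")
print(f"Estimates within 2x of actual: {within_2x:.0%}")
```

On this made-up history, 90 percent of estimates land within 2x of the actual, yet the mean error is well below zero, the signature of habitual underestimation. That is exactly the kind of finding a simple log makes visible.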

Estimates are inaccurate for three basic reasons: uncertainty in the requirements, in the team, and in the technology (see figure). The pie wedges correspond to these sources of uncertainty. The bigger the pie, the less accurate the overall estimate.
In the good old days, the same team was doing a new project for the same sponsor and users on the same mainframe as the last four projects. If we defined the requirements precisely and created a good work breakdown structure and project plan, we could make a pretty accurate estimate. Most of the uncertainty was hiding in the requirements, and our waterfall process went there first.
Then the IT industry gave us more tools, and we discovered outsourcing. Nowadays, many teams have never worked together before, include people who’ve never worked (or estimated tasks) for us at all, and are using at least one tool for the first time. And we like it that way – the unit cost of custom software has gone down since the good old days. But the estimation problem has gotten harder. Just knowing the requirements isn’t enough. We also need to account for the considerable variation in programmer productivity across the team, and for the learning curve and surprises that come with new technology.
To really fly, we turn to even smarter tools and re-use. We pull in third-party (commercial and open source) frameworks, components, or even complete applications. Productivity soars, but we’re constantly on several learning curves, and we often get well into implementation before finding that the technologies don’t play nice, or that a certain requirement is literally un-implementable.
With every new platform, technology, component, framework, service, partner, process, and staffing source, productivity goes up (we hope), but so does uncertainty (count on it).
We’re making the estimation problem harder, faster than we can become better estimators.
We just need to spend more time estimating
“I need a better estimate than that. What else do you need to know?” This mistaken belief is a holdover from the good old days, when most of the uncertainty really was in the requirements. But even complete, detailed, and perfect requirements (which take roughly a quarter of total project effort to get) tell us nothing about the uncertainty due to team and technology, and that’s where much of it lives nowadays.
Once we have the project scoped – a complete set of user goal-level requirements – we need the actual team to build something in order to get at these other sources of uncertainty. And even then, there are likely to be brick walls out there in the fog.
An inaccurate estimate is useless
Not if it’s unbiased and complete, and our project selection and execution processes don’t depend on unrealistically accurate estimates in order to deliver value. That’s what the next two articles – on project execution and governance, respectively – are about.