22 Sep Proactive nightmares
We’re used to weird stories coming out of California, but the failure of flight communications at the Los Angeles flight center has to be one of the weirdest.
The radio system used by controllers to communicate with planes in the region got in a snit when its regular monthly maintenance wasn’t performed and shut itself off. The controllers could see planes on radar but couldn’t talk to them, even as the planes converged well inside FAA minimums for safe distance between flights.
I suppose shutting down a critical system is one kind of error message, but it seems like an atom-bomb-on-an-ant response in this case. Though none of the news stories have carried such an explanation, I’m sure the people who designed the system thought they were building some kind of quality check into it, which is where this becomes an interesting IT story. What was the requirement that led to the bit of intelligence that equated a missed maintenance schedule with the need to shut down the system? And what programmer was so successful in capturing the bureaucratic imperative to follow procedure regardless of the real-world impact?
Did anybody test that particular feature? Did anybody consider the operational implication of passing that test? Each one of these steps was a missed opportunity to reveal this for the bad idea it turned out to be.
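None of the coverage describes the actual software, so any reconstruction is guesswork, but it's easy to imagine the shape of the check that produces this behavior. Here is a purely hypothetical sketch in Python, with invented names and thresholds, of a maintenance-deadline rule that treats an overdue service window as a reason to halt rather than to warn:

```python
from datetime import datetime, timedelta
import sys

# Hypothetical monthly service window; the real interval and policy are unknown.
MAINTENANCE_INTERVAL = timedelta(days=30)

def check_maintenance(last_service: datetime, now: datetime) -> None:
    """Hypothetical policy check: halt if routine maintenance is overdue."""
    if now - last_service > MAINTENANCE_INTERVAL:
        # The atom-bomb-on-an-ant response: a missed procedure becomes a
        # hard stop, taking controller-to-aircraft radio down with it.
        print("Scheduled maintenance overdue; shutting down voice system.")
        sys.exit(1)
    # A gentler alternative would log the lapse, alert a supervisor, and
    # keep the radios up while the maintenance gets scheduled.
```

The point isn't the few lines of logic, which are trivial; it's that enforcing the schedule by halting is a policy decision, and the gentler alternative noted in the comment was available at essentially the same cost.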
As our systems get more complex, it gets easier to slip into specialist mode. I’m just analyzing the business needs. I’m just designing from the requirements. I’m just implementing a piece of the design. I’m just testing what was built. This kind of division of labor and focus is required. However, as we specialize or encourage our staff in that direction, we need to retain the ability to put our work in a larger context.
Automakers figured this out working with Deming a while back. If you take workers who are focused on a single bolt or material and help them place their work in the context of an assembly or even the whole car, the quality of the work improves. I guess this is a variation of the warning to be careful what you ask for. Ask for too fine a focus and you just might get it.
I’m not suggesting that we should spend all our time imagining worst-case scenarios or give everybody responsibility for everything. However, it might prove useful to occasionally ask, “What’s the worst that could happen?” One of the tools I use is a “What’s your nightmare?” brainstorm. The idea is to extrapolate desired system features into their undesirable consequences. Such analysis can be incorporated into requirements interviews, design reviews, testing, and any number of other steps in the development process.
As with any other behavior, you’re more likely to get this one if you ask for it. The request can take several forms, but my two favorites are good processes and good metrics. Good processes lay out enough of a framework to get consistent results but leave enough room for judgment to make sure the results have value. Good metrics allow you to assess the progress of a particular effort without obscuring the final objective. It’s not easy figuring out what you don’t know, but with complex systems it is necessary to make the attempt.
As for me, I just hope my car doesn’t start acting like the LAX radio system when I miss regular oil changes.
-----
Byron Glick is a principal at Prairie Star Consulting, LLC, a planning and program-development consulting firm in Madison, Wis. He can be contacted via e-mail at byron.glick@prairiestarconsulting.com or via telephone at 608/345-3958.
The opinions expressed herein or statements made in the above column are solely those of the author, and do not necessarily reflect the views of Wisconsin Technology Network, LLC (WTN). WTN accepts no legal liability or responsibility for any claims made or opinions expressed herein.