You Gave Me What I Asked for But Not What I Wanted! (Part 1)

Picture the following scenario: You're the project manager ready to deliver the new system your team has been developing for the past two years. Millions of dollars have been spent during those two years to gather requirements, define the system, build an architecture and models, create a work breakdown structure and a project schedule with resource allocation, build the system components, integrate, test, validate, and fix problems. Finally, the day is at hand: delivery. You deliver the system with great fanfare, confident it will deliver as promised: greater efficiency, cost savings, improved customer service, and increased revenue.

Your systems engineering team did an amazing job collecting and vetting requirements from the users, customers, and even the "C-Suite" crowd. Everyone agreed the requirements were well defined and prescribed what the system should do. Your enterprise and systems architects defined the architecture against the requirements, built models of the system, and re-vetted them with stakeholders. A final design was developed and handed off to the development team to implement. The development team tested that their components worked and validated them against the requirements. Once any discrepancies were fixed and regression tested, the system components were handed to the integration team to "stitch" everything together. Again, more testing and validation were completed, problems were fixed, regression testing was completed, and the system was packaged up for delivery to the stakeholders. Your work is done, and a promotion and a raise will certainly be in the offing for pulling off such a complex project.

Except … it quickly turns into a disaster. The users start using the system and find it's not doing what they thought it would do. Processes are radically different, data management requires a new set of skills, some people find themselves overburdened with activities while others are now idle, the newly formed help desk is swamped with calls and trouble tickets, customer service takes a beating, and the C-Suite quickly finds out about the mess you delivered. They order the system to be taken down immediately and fixed. Instead of that promotion and raise, you're looking at the real possibility of termination and no income at all. What you quickly learn doing a "post mortem" on the project is a very common problem in requirements gathering and management: the stakeholders will say, "You gave me what I asked for, but it's not what I wanted!" Somehow, the project management and systems engineering roles should have been described as "being able to read minds" rather than collecting, structuring, and vetting requirements for a system. But was it the inability to read minds, or the process, that resulted in this fiasco?

Before I go on, I want to quickly define what I mean by "system." Most people reading this article will assume I'm talking about software. In many cases, the system that fails to meet customer expectations is a new software capability. In business, these systems can be bespoke applications developed for a very specific purpose; often they are Enterprise Resource Planning (ERP) solutions based on applications from SAP, Oracle/PeopleSoft, or Microsoft Dynamics, to name some well-known ERP vendors. However, a system can be anything: an airplane, a ship, a spacecraft, a car, a power generation plant, a medical device … really, anything that meets the definition of "a set of interacting or interdependent components forming an integrated whole." Think of non-software-based systems that failed to deliver as promised or suffered delays and cost overruns: the Ford Edsel and Pontiac Aztek, which were market failures (requirements developed in a market vacuum); the Airbus A380 and Boeing 787 (requirements and implementation mismatch); and the Littoral Combat Ship (LCS), to name a few.

Now that I've defined what a system is, I want to return to the fictitious, but very familiar, story I crafted for the introduction to this article. I posited the question of whether project managers and systems engineers need to read minds, or whether they should have changed the process to avoid the disaster. Well, it turns out that mind reading is not a prerequisite for either of these positions. What could have avoided the problems is employing chaos theory and emergence as the approach to systems development. These theoretical constructs are what drive Agile development, and I will provide an overview of how Agile methods work and how they are implemented.

Agile development, which includes such techniques as Scrum, eXtreme Programming (XP), and the Dynamic Systems Development Method (DSDM), is rooted in the idea that the end users must be involved in the entire development process, and that nobody really knows what the end state of the system will look like: the end state emerges from the entire process. This article will not discuss how to implement an Agile process; rather, it will describe some key aspects of the approach and why it works.

Agile methods are fundamentally designed to facilitate communication among all the stakeholders in a project. The processes encourage teams to develop as self-organizing systems, with people assuming roles based on their strengths and interests. Communication is facilitated with "information radiators": team members post their progress and issues on a "war room" planning board with Post-it notes. Agile also recognizes that the end users of the system are instrumental in the development and yet do not really know what they want until they see it. That recognition encourages the development of small, incremental solutions to a problem, rather than a monolithic solution that often does not satisfy anyone.

It is the antithesis of how organizations typically operate, because there is a great deal of unpredictability in what the final system will look like and an unnerving lack of control during the development process. Further, Agile processes tend to have sparse documentation: "ornamental" document deliverables, which provide no value to the team's ability to develop and deliver a solution, are almost entirely eliminated, and much of the text-based documentation is replaced by diagrams, models, and storyboards. Metrics are focused more on keeping the team aware of what has been completed and what is in the pipeline than on the classic Gantt chart progress reports and earned-value data that management is always focused on. Instead, daily "stand-ups" are held: short meetings in which all the team members discuss three things: yesterday's accomplishments, today's objectives, and roadblocks or problems that need to be addressed. The team members include the developers, an end-user representative, user experience designers, and testers.

The focus is on delivering small, incremental pieces of functionality rather than a large, monolithic solution. Remember: the users are getting things done without any new systems, so even small improvements in their operations will be appreciated. And because they helped develop it, they will be vested in its success. The interesting piece is that there's no prediction of what the final product will do: as needs change, people change, and missions change, the incremental approach allows the system to evolve as it's developed. With this approach, the problem of "You gave me what I asked for, but it's not what I wanted!" is a thing of the past.

In Part 2 of this article, I will discuss why an Agile, incremental, and adaptive approach works, why the old ways of doing business fail so often, and how to sell the approach to a C-suite staff that wants a predictable end state, lots of metrics along the way, and reams of documentation.