Project status is often reported using traffic light colors: green for good, yellow for at risk, and red for in trouble. Status reports typically go to executives and key stakeholders to provide assurances that the project is on track and that there will be no surprises like failure to deliver, major quality problems, major schedule delays, or major cost overruns.
Why, then, do I periodically see surprises happen to projects? One obvious sign of this is when status reporting flips from green to red without ever passing through yellow. Worse still, the project 'completes', yet the delivered software fails to meet basic requirements, requiring significant enhancements, or has significant quality issues.
I think that fundamentally it is all about risk. Software development projects are hard largely because of all the different aspects that can cause problems. I am not talking about classical risk management here, but the day-to-day activities in developing software:

- Will the business representatives be able to efficiently make decisions and identify and prioritize the important requirements?
- Will the solution address the requirements? What requirements were missed that the users will really end up needing?
- Do your developers really understand the technologies they are using? Will performance be bad?
- How often will your developers misunderstand the requirements?
- When requirements change and the code is updated, will regressions be introduced? And was everything changed that needed to be?
- Are testers validating that the software will meet the needs of users in real-life scenarios?
- Can the system be reliably built and deployed (including database changes, configuration settings, operational scripts, O/S and middleware settings, etc.)?
- Will there be any inconsistencies in existing data that cause problems for new functionality? Is the existing data being interpreted correctly?
I have never seen a project manager (PM) record all of these risks explicitly, but they exist nevertheless, out of sight. For a new project with a new team and/or a new business domain, the likelihood and impact of these risks are unknown. In essence, the team's ability to deliver working software that meets the needs of the business is unproven. Even if the individuals on the team have a good track record, will they jell as a new team and become productive quickly, or stay in a chaotic, poor-performing state? A project manager simply does not know at the start of a new project. So what basis is there for reporting a green status? Is not a yellow or red status just as likely?
On the two agile projects I led recently, one with a very tightly constrained budget and the other with a very tightly constrained schedule, one of my biggest uncertainties at the start of each project was the team's velocity: how much high-quality functionality (user stories) they could deliver per iteration. This velocity was the key variable in determining the project schedule or budget, and even minor changes in my estimated velocity moved the project from green to red or vice versa. Scope changes were far less of a concern because scope is unlikely to vary by more than 100%, so a scope contingency of 30%, for example, provides a good level of assurance, especially when combined with the rigorous scope management techniques that are inherent to agile. Team productivity, on the other hand, can vary by far more than 100%. A 50% or greater deviation from an estimated velocity is entirely possible, especially for smaller teams. In fact, on my budget-constrained project with a single developer, the developer ended up being half as productive as I originally expected.
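The arithmetic behind this sensitivity is simple enough to sketch. Below is a minimal illustration (the backlog size and velocity figures are hypothetical, not numbers from either project) of how a halved velocity doubles the projected schedule, while a 30% scope contingency moves it far less:

```python
import math

def projected_iterations(backlog_points: float, velocity: float) -> int:
    """Iterations needed to deliver the backlog at a given velocity per iteration."""
    return math.ceil(backlog_points / velocity)

backlog = 120  # hypothetical backlog in story points

# Plan: estimated velocity of 10 points per iteration.
plan = projected_iterations(backlog, velocity=10)            # 12 iterations

# Scope grows by the 30% contingency: schedule grows proportionally.
with_contingency = projected_iterations(backlog * 1.3, 10)   # 16 iterations

# Team turns out half as productive: schedule doubles.
half_velocity = projected_iterations(backlog, velocity=5)    # 24 iterations
```

The point the example makes concrete: a plausible velocity error dominates a generous scope contingency, which is why an unproven velocity alone can make the schedule status unknowable at the start.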
So just on the basis of not knowing the velocity, the project's budget and schedule status is typically unknown (e.g. green and red are equally likely). Adding in all the other areas of risk raises the question of how a green status can honestly be claimed. Should not every new software development project start off with a status of red?
If you find this article helpful, please make a donation.