Introduction
The success of any project is likely to be judged by how well it achieves a defined outcome while meeting performance expectations. Most projects would not be undertaken if management had little confidence in a successful outcome. Many projects are subject to being cancelled, even late in the project cycle, if the forecasted outcome does not meet expectations. These factors drive the need throughout the project for updated forecasts of the project outcome.
The purpose of this paper is to examine the forecasting process that is applied to projects, from their approval to their completion. We will consider how changes in the project are recognized and adjustments are made in the performance plan. We will also examine the difficulties in making accurate forecasts and some of the things that can go wrong.
Forecasts and Project Definition
We begin at the project authorization stage, where a project definition establishes estimates for budget, schedule, and scope. Different organizations authorize projects based on their own criteria. Exhibit 1 shows a summary of criteria suggested by the Association for the Advancement of Cost Engineering International for cost estimates of different reliability (Humphreys, 1993, p. 278).
Exhibit 1 – Estimate Reliability
Depending on the project environment and the specific project, the quality of the estimates on which a project is authorized can be of any grade. I have applied the specifics of Exhibit 1 to industrial process project estimates. Conceptually, this applies also to estimates of schedule or scope. The exact reliability parameters may differ, but they generally prove to be broader than we are comfortable with. Unfortunately, the quality of the estimates is often misunderstood, and thus management's expectations of estimate and forecast accuracy are often impossible to meet.
All estimates can be thought of as probabilistic in nature. Over the course of the project, estimates should evolve so that the margin of error narrows. The evolving quality of the estimates has a direct impact on the quality of any forecast made to project completion.
Exhibit 2 depicts the concept of evolving quality in the estimates of project performance metrics. What we show here is conceptual, not detailed. This exhibit is based on Exhibit 1. As a caution, the increase in reliability (through decrease in variation) is generally not linear over time. At early stages of the project, step changes in estimates are more likely than are changes indicated by a continuous function. This is especially true as long as scope is being added and deleted.
Exhibit 2 – Estimated Project Performance
Improving estimate reliability over the course of the project is at the heart of our project forecasts. If the reliability of the initial estimates were acceptable throughout the project, we would not need most of our forecasting systems.
Core Concepts in Forecasting
This improvement in estimating reliability depends on four core concepts being used:
- Estimates are updated periodically to reflect decisions made, resources available, and the impact on performance of the changing conditions under which the project is being performed.
- Estimates of work to be completed are appropriately reduced by recognizing the work accomplished. Accomplishment is measured against the earlier estimates for the work's completion.
- Interplay between changes in scope or conditions and the ongoing work can be measured and forecast.
- Project management exhibits the will to act on accurate forecasts instead of finding reasons to “spin” the outcome.
None of these concepts is a given. The necessity and difficulty of achieving all of these are often underestimated.
These concepts can be lumped under the project management principle of change management. In reality, however, they deal with four different, though interrelated, challenges in dealing with change.
The first concept deals with the reality that the project's boundaries change as the project progresses. Specific selections of equipment, systems, or process units impact the content of the cloud we call the project. Refining judgments of capabilities leads to increases in scope more frequently than decreases in scope. That is why the suggested +/- range of estimates in Exhibit 1 is not balanced.
Similarly, the conditions under which the project is being performed change as the project is performed. Competition for resources, changes in performance intensity, and external factors all put pressure on the project plan. The most common change in project intensity is the increase that occurs when a greater scope of work is attempted in the same amount of time.
Hopefully, the evolution of scope is stopped early in the project as the major decisions are made and the required scope is clarified. Unfortunately, this is not always the case. In one paper mill project in my experience, the plan called for the expanded plant's required steam capacity to be provided by existing boilers. The rated capacity of the boilers showed that this was feasible, and scope estimates were locked in based on this assumption. Only when the new paper machine was in its trials was it found that steam capacity was a significant roadblock to full operating capacity. None of the existing boilers could achieve output approaching rated capacity; years of operating below capacity had effectively reduced the capacity available. The result was a late change in scope and a marked change in the forecasted cost of the project.
The second concept deals with the reality that, as the project progresses, it is very easy to record the spending of dollars and very difficult to assess what those dollars have bought. For instance, in a software systems project, it is very difficult to relate the needs definition stage to accomplishment of the final objectives. Did we overrun or underrun in this stage? If we had a cost underrun in this stage, will an incomplete understanding of requirements later translate into an overrun in the coding or testing stages?
An example that shows how far off such interim estimates can be is the case of the Boston Big Dig. In 1999, the estimated cost at completion was $10.8 billion (“SEC Settles Case,” 2003). By early 2000 the estimate was raised to $13.5 billion (Caffrey, 2001). The $2.7 billion increase effectively doubled the estimate of the work remaining. In other words, the portion of the estimate for work not yet undertaken had an error rate approaching 100%.
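To make the arithmetic behind that claim explicit, a minimal sketch follows. Only the two total estimates come from the cited reports; the split of the 1999 estimate into completed and remaining work is an assumption for illustration.

```python
# Hedged illustration of the remaining-work error implied by the Big Dig revision.
# Only the two total estimates come from the cited reports; the assumed split of
# the 1999 estimate into completed and remaining work is hypothetical.
estimate_1999 = 10.8   # estimated cost at completion, $ billions (1999)
estimate_2000 = 13.5   # revised estimate at completion, $ billions (early 2000)

increase = estimate_2000 - estimate_1999        # $2.7 billion
assumed_remaining_1999 = 2.7                    # hypothetical remaining-work portion of the 1999 estimate
error_rate = increase / assumed_remaining_1999  # 1.0, i.e. an error approaching 100%

print(f"Increase: ${increase:.1f}B; implied remaining-work error: {error_rate:.0%}")
```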
The concept is illustrated in Exhibit 3. Here we apply the reliability limits of a definitive estimate from Exhibit 1 to a decreasing scope of estimated work. Given our increasing knowledge about the project boundaries and of the work accomplished, our estimate and forecast reliability should increase.
Exhibit 3 – Key Metric Evolution Through Project
The third concept, the problem of forecasting the interplay between changes in scope or conditions and accurate measurement of accomplishment, can be illustrated with this example: An early stage of a building project is the construction of the foundations for the building. This is generally a pretty straightforward set of activities. If properly done, these activities are usually isolated in their cost impact on the remainder of the project. Once the foundations are complete, no usable floor space is complete. However, a comparison of cost against the original estimate for the foundations should allow the removal of any uncertainty associated with this work from the forecast. A transfer is made to work complete with a reduction in work to do. Theoretically, this reduces the scope being forecast from estimates and reduces the sources of variability in the forecast.
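A minimal sketch of this transfer from “work to do” to “work complete” follows. The work packages, costs, and reliability ranges are hypothetical; the point is only that settling one completed package narrows the range on the total forecast.

```python
# Minimal sketch: replacing a completed work package's estimate range with its
# actual cost narrows the range on the total project forecast.
# All work packages, costs, and ranges here are hypothetical.

work_packages = {
    # name: (estimate in dollars, +/- reliability as a fraction of the estimate)
    "foundations": (1_000_000, 0.10),
    "structure":   (3_000_000, 0.10),
    "fit-out":     (2_000_000, 0.10),
}

def forecast(packages, actuals):
    """Return (expected cost, low, high), treating completed packages as certain."""
    expected = low = high = 0.0
    for name, (estimate, rel) in packages.items():
        if name in actuals:                     # completed: use actual cost, no range
            cost = actuals[name]
            expected += cost
            low += cost
            high += cost
        else:                                   # still to do: carry the estimate range
            expected += estimate
            low += estimate * (1 - rel)
            high += estimate * (1 + rel)
    return expected, low, high

print(forecast(work_packages, actuals={}))                          # nothing complete yet
print(forecast(work_packages, actuals={"foundations": 1_050_000}))  # foundations settled
```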
In a construction project currently underway for a new library in Indianapolis, this model has turned into a disaster. Whose fault the poor initial construction of the foundations is will only be determined in court. What is known is that faulty construction, and the associated project delays, have added $52 million and two years to the currently projected cost and completion of the library (Corcoran, 2006a). Looking backward at the situation, I conclude that a delay of a few months, and a cost of no more than $10 million, should have been the maximum impact of this problem. This estimate is based on the extreme scenario of promptly deciding to remove and rebuild the entire foundation. Instead, the interplay between changed conditions of performance and the failure to deal with the consequential impacts has resulted in a 50% forecast cost increase for the entire project.
Summary of the Systems Used in Project Forecasting
The first three core concepts for forecasting can be summarized by the use of three related project systems:
- Ongoing control of the estimate reliability. This is accomplished by recognizing the realities of estimating accuracy, given the information on which it is based, and adjusting estimates for changes in scope or in the conditions of performance.
- Codifying the estimates into a means of measuring project performance for work as it is accomplished. This is generally called an earned value system.
- A watch system that identifies and examines interactions between changes in scope, conditions, or performance and extrapolates this information to project outcomes.
How to obtain quality estimates is beyond the scope of this paper. We can say that investing considerable effort in such systems is required. For estimating to be useful, project management must be educated in the basis and limits of estimate reliability. Project managers are well advised to run at least a basic sensitivity analysis of the accuracy of components of an estimate. The overall project estimates are likely to be much more influenced by the addition or omission of a scope item than by concern about the lowest level details in the estimate. As a friend once commented, “In school they taught me to be concerned about making sure I had an accurate count of the number of windows on each floor. In reality, I need to be more concerned about making sure I count every floor.”
Particularly at early stages of a project, the use of techniques such as Monte Carlo cost simulations or PERT (probabilistic) schedule analysis can assist in realistically forecasting project performance metrics. Carefully applied, such systems should result in much more credible estimates as well as in identifying areas of risk on which to concentrate management attention.
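As an illustration only, a Monte Carlo cost simulation can be as simple as sampling each estimate component and summing the draws; the scope items, ranges, and choice of triangular distributions below are assumptions, not a prescription.

```python
# Hedged sketch of a Monte Carlo cost simulation for an early-stage estimate.
# Scope items, ranges, and the triangular distributions are illustrative assumptions.
import random

# (low, most likely, high) cost for each scope element, in $ thousands
scope = {
    "site work":   (400, 500, 750),
    "foundations": (800, 1_000, 1_400),
    "structure":   (2_500, 3_000, 4_200),
    "fit-out":     (1_500, 2_000, 3_000),
}

def simulate_total():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in scope.values())

trials = sorted(simulate_total() for _ in range(10_000))
p10, p50, p90 = (trials[int(len(trials) * p)] for p in (0.10, 0.50, 0.90))
print(f"P10 ${p10:,.0f}k  P50 ${p50:,.0f}k  P90 ${p90:,.0f}k")
```

Reporting the spread (for example, P10 to P90) rather than a single number keeps management's attention on the range of likely outcomes and on the scope items driving most of the variability.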
Exhibit 4 – Example of Project Forecast
Earned value systems are a critical part of effective project management. They are also difficult to implement effectively. An excellent discussion of earned value systems can be found in Earned Value Project Management (Fleming & Koppelman, 2005). The core of the recommended system can be seen in Exhibit 4. This exhibit summarizes three core concepts of an earned value system. For a complete system there must be:
- Capture of the expected performance against a time scale (A)
- Accounting for the actual expenditure against the time scale (B)
- Measure of accomplishment against the time scale (C)
From these critical metrics, forecasts can be examined in summary terms, and management attention can be focused on leading indicators.
The example in Exhibit 4 demonstrates a project that, at approximately the halfway point, is behind schedule and over budget. This information gives a basis for forecasts.
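A minimal sketch of the arithmetic behind such a forecast follows. The figures are hypothetical, chosen to resemble a project at the halfway point that is behind schedule and over budget; the formulas are the standard earned value indices.

```python
# Hedged sketch of basic earned value arithmetic: planned value (A), actual cost (B),
# and earned value (C) at a status date, with the usual derived indices and one
# simple estimate at completion. All figures are hypothetical.
BAC = 10_000_000   # budget at completion
PV  = 5_000_000    # planned value of work scheduled to date (A)
AC  = 5_600_000    # actual cost of work performed to date (B)
EV  = 4_400_000    # earned value of work actually accomplished (C)

CPI = EV / AC      # cost performance index  (< 1.0 means over budget)
SPI = EV / PV      # schedule performance index (< 1.0 means behind schedule)
EAC = BAC / CPI    # a common estimate at completion, assuming current cost
                   # performance continues for the remaining work

print(f"CPI {CPI:.2f}  SPI {SPI:.2f}  EAC ${EAC:,.0f}")
```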
The theory of such an earned value system is solid. The devil is in the details. A common base for measuring estimates, costs, and accomplishment must be devised. Dollar cost is the metric most often used for this purpose, but it is not a metric of assured success. The timing of dollar outflow, the impact of major procurement activities not easily tied to measurable progress, and dealing with commitments versus expenditures all complicate earned value effectiveness when based on dollar measures. It is often just too easy to “game” the system for it to maintain credibility.
In my work in industrial construction, the most effective core metric used in earned value measures is labor hours. This does not eliminate all the problems, and it is still a challenge to implement and administer. It has, however, been found to be our most reliable overall measure for our projects.
A “watch” system can be built around frequent reviews of project status. Such reviews must be open and balanced. Driving the interaction between the detection, prediction, and correction systems is a major challenge. It is not solved in software. It must be built on credible systems for estimates and measurement of performance. It must be based on understandable measures of performance against the estimates.
The “Will” to Deal With Project Realities
Each of these systems is critically important in maintaining quality forecasts. Their existence and use are necessary for meaningful project forecasts. Properly executed, these three systems can provide the basis for solid project forecasts with an identified element of risk. Their successful use for the benefit of the project presupposes the existence of the fourth core concept: the “will” to address the bigger project truth.
Recognizing that a project is off-path is a difficult matter. Recognition that leads to prompt corrective action is rare. One might ask why this does not happen.
There are a number of reasons why problem recognition doesn't happen in a timely fashion. Unfortunately, many of these reasons call into question the reliability or operation of the very systems we rely on for answers.
- Is the estimate reliability really equal to the definition of “definitive”?
- Is our ability to measure accomplishment against prior estimates sufficiently robust?
- Are we realistically assessing the impact of changes in scope, conditions, and performance?
As the questions above illustrate, the “why” can be multifaceted. Some obvious causes come from system failure in the form of missed cues or erroneous data. This can happen with any of the systems previously mentioned. Indeed, signals lacking clarity from any of these systems can diffuse the early warning and are often a contributing factor in delayed problem recognition.
More often it involves interaction between the systems and the human element.
To revisit some examples from above:
- In the “Big Dig,” estimates putting the final cost at $14 billion had been distributed to the governing authorities in 1994, five years before the faulty financial disclosure documents were issued in 1999 (Angelo, 2001).
- In the library case, by October 2003, independent quality inspectors had brought serious problems in the foundation construction to the attention of the library board (Corcoran, 2006b), but in February 2004 it was still being trumpeted that “it's been a great project so far” (Fritze, 2004). Within days of that announcement, all construction was suspended, for what turned out to be 16 months.
While the successful use of the core systems is necessary for forecasting accuracy, it is not sufficient.
Andy Grove, a former CEO at Intel, talks about the decision of Intel to exit the merchant memory market in 1985 (Tedlow, 2006). He asked Gordon Moore, then CEO of Intel, “If …. the board brought in a new CEO, what do you think he would do?” Moore answered, “Get us out of memory.” Then, Grove asked, “Why shouldn't you and I walk out the door, come back, and do it ourselves?”
Similar dynamics are found in our projects. It is often difficult to address problems that are staring us in the face. We can find a lot of excuses. For example:
- “It's a data blip and will straighten itself out in the next reporting period (or the next after that).”
- “It represents a problem that has been solved and will not continue to haunt the project.”
- “I'm sure that with a little attention I can turn this around, and we don't need to bother higher management with it yet.”
- “The evaluation of progress has broken down. While it is clear we have spent this money, our progress is further along than the data shows.”
- “This is a temporary problem, and we will be past it in a couple more weeks.”
- “It's a learning curve issue. The first 20% has been a problem, but now everyone has found their pace and the rest will beat the estimate.”
- “We can look at this deviation as an investment in things going better for the rest of the project.”
Project managers must be able to sort through such excuses and spur corrective action to overcome the underlying causes.
A recent failure to defuse this kind of spin happened at a company that was suffering a blip in safety statistics. At the midpoint of the year, the OSHA recordable rate for the company was at 2.0 (their target was “below 1”). At three successive management meetings, the discussion was about what would be the year-end rate “if no more recordables occurred?” To that point in the year, the number of recordable incidents was in the mid-teens. One wonders about the meaning of a forecast based on “if no more occur?” No discussion took place about how to make that miracle happen. No course of action was identified about finding and correcting problems.
In another project example, I found a construction company whose labor multiplier (hours required to accomplish the work estimated to take 1 hour) was running at 1.4 in the early stages of a project. For three successive months, the project manager had completed a forecast that projected the size of the cost deviation at the end of the project if the labor multiplier were 1.0 for the remainder of all work. Corporate management accepted this forecast until circumstances made that impossible. The simplest projection of the cumulative labor multiplier against the remaining work to be accomplished showed the company would be out of business before the project was complete, but this reality was ignored for three critical months.
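The gap between the two projections can be sketched with hypothetical numbers; only the 1.4 multiplier comes from the example above, and the hours are invented for illustration.

```python
# Hedged sketch contrasting the two projections described above: assuming the
# remaining work will be done at a multiplier of 1.0 versus projecting the
# cumulative 1.4 multiplier forward. The hours are hypothetical.
estimated_total_hours = 200_000   # estimated hours for all work on the project
earned_hours          = 40_000    # estimated hours' worth of work accomplished so far
actual_hours          = 56_000    # hours actually spent to date

multiplier = actual_hours / earned_hours                   # 1.4 to date
remaining_estimated = estimated_total_hours - earned_hours

optimistic = actual_hours + remaining_estimated * 1.0          # "1.0 from here on"
projected  = actual_hours + remaining_estimated * multiplier   # cumulative multiplier continues

print(f"Multiplier to date: {multiplier:.1f}")
print(f"Hours at completion if remaining work goes to estimate: {optimistic:,.0f}")
print(f"Hours at completion if the cumulative multiplier holds:  {projected:,.0f}")
```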
In this case, the measurement of work accomplished unfortunately turned out to be quite accurate. The fault lay primarily in failure to recognize the change in the conditions under which the work was being done. Under a different set of conditions, the initial estimates might well have been accomplishable. A three-month delay in recognizing the impossibility of continuing under these circumstances threatened not just the project, but the company.
Andy Grove also indicated that when he finally faced tough decisions, he usually concluded that he should have taken the action much earlier (Tedlow, 2006, p. 373). In some cases the business world might consider a year as warp speed. Given the short time frames in projects, such hand-wringing delay can sink the project before the forecast has caught everyone's attention.
Conclusion
In successfully completing projects, it is necessary for forecasts to be made using core systems for managing estimates, measuring accomplishment, and deciphering the interaction between change and ongoing activity. Developing and using such systems are core responsibilities of the project team. The failure to deploy solid systems is one reason for the difficulty in accurately forecasting projects. Still, the existence of such systems is not enough for project forecasting success. Project management at all levels must develop the will to deal with the signals the control systems are providing.
This is a matter of project leadership as much as project management. It is a challenge that all project managers can expect to face, probably repeatedly, in their careers. I encourage you to push for the recognition of the systemic and human factors that are obstacles to accurate forecasts in your projects. Nothing less can lead to your repeatable success.