Toward a unifying theory for compounding and cumulative impacts of project risks and changes

Proceedings of the PMI Research Conference 11-14 July 2004 – London, UK


Death by a thousand cuts.
Cumulative impact.
Productivity loss.
Compounding risk impact.
Ripple. Knock-on. Snowball.

In the world of project and program management, names abound for the phenomena that generate cost and schedule growth triggered by multiple impact sources. Whatever the name applied, we know it when it happens. Project managers are replaced, company executives see their stock price hammered, and customers watch the cost of their projects just keep growing. All are unwilling players in the horror show of big projects gone bad, resulting in damage to business reputations and relations, even formal legal disputes.

But if we have plenty of names for these phenomena and impacts, understanding them—let alone foreseeing and quantifying them—is rather less widespread. In this paper I present a theory and a set of analysis results through which those compounding, disruptive, productivity-losing cumulative impacts on projects may be better understood, foreseen, quantified, and managed. And in so doing, I aim to answer some fundamental questions about those impacts, such as:

Why is there cumulative disruptive impact?

Why are there acceleration impacts?

Why is there compounding of impacts from multiple risks?

Why is there a tendency for projects to go out of control?

The Conditions—When Two Worlds Collide

Some of our confusion over all of those multi-labeled impacts surely arises from viewing projects as if they exist in two different worlds. First there is the world of proactive risk analysis, a world in which there is increasing attention paid not just to the potential impacts of specific individual risks, but also to the “compounding” effects of multiple combined risks. Then there is the world of forensic project analysis, often executed in the context of a project gone bad and the need to identify, retrospectively, what party is at fault for how much cost growth—a world in which for over two decades much attention has gone to identifying not just the direct impacts of past events, but also the “disruption” caused by the events.

I have analyzed dozens of projects from each of these two viewpoints, diagnosing and quantifying the sets of phenomena that drive project performance—the conditions that cause projects to grow in cost far beyond expectations and the analytical factors that create compound risk impacts and disruption. They are viewed from different perspectives, and each world has its own lexicon of terms, but those phenomena, those conditions, and those factors are identical.

Before proceeding to a description of how we can view these phenomena in a more consistent manner, let us first cover the basics of just what is meant by each of these worlds of impacts.

Compounding Impacts of Multiple Risks

The scholarly body of work that comprises project risk management includes a variety of methodologies for cataloguing, foreseeing, analyzing, and mitigating prospective impacts on project performance. Increasingly, managers and analysts are concerned with the compounding impact on project performance of multiple risks. Compounding impacts can occur when multiple changed conditions on a project combine to produce a total cost impact that is greater than the sum of the individual changes’ impacts.

Disruption & Cumulative Impacts

In the world of forensic analysis in support of contract dispute resolution, there simply is no area more contentious than the quantification of the impacts collectively known as disruption (Cooper & Reichelt, 2002). Disruption is fundamentally about increased rework and lost productivity on a project. These impacts may be triggered by many different sources, and may occur at widely separated intervals in time and space from the precipitating events. Further, the impacts can be cumulative—the effect of a changed condition can be considerably greater if it is one of many than if it occurs in isolation. For all these reasons, disruption is the most difficult aspect of the quantification of change impacts—even when the analysis is, as is typical, executed retrospectively. Shortcuts abound in this world of analysis, most often taking the form of some modified-total-cost approach, wherein all incurred project costs are claimed, less some allowance for impacts from unclaimed sources. It is a common and understandable approach that is accepted, albeit intellectually questionable.

Now consider a logical progression over the lifecycle of a troubled project. One day we could be actively engaged in proactive risk analysis; the next day we could suddenly be challenged to conduct a retrospective analysis of the disruptive impacts that, despite our excellent risk analysis, have mounted on the project (see Exhibit 1). These two worlds of analysis view project phenomena with different techniques and terminology. Can we employ a more universal explanation for both?


Exhibit 1: Two stages of a project…Two analysis worlds

How & Why--A Unifying Theory of Project Dynamics

If we are to work toward a unifying theory of compounding impacts of risks and cumulative impacts of disruption, we can begin with the descriptions of project phenomena offered by project managers themselves. What follows is a distillation of stories independently told by literally hundreds of project managers. These managers ran projects in aerospace, construction, software development, shipbuilding, and electronics systems in a wide variety of settings.

While each project had its own peculiarities, what is striking is the similarity of the descriptions, which offers the prospect of a common theory.

Managers explain that any project involving development effort does not consist of a straight line of discrete tasks with clear beginnings and endings. Instead the tasks (measured as drawings or code, or other units) require an iterative process. At every stage, there are cycles of revision—revised drawings or code changes. These revisions blur or obscure exactly when tasks associated with that stage are complete, what fraction of the total work is truly done, and how much effort will be required to complete the rest of the stage and the project. This is aggravated by the fact that much of the needed rework only becomes visible weeks or months later, during a dependent work stage or testing (consider the bugs found in software). Before that discovery occurs, project tasks are perceived—and logged—as complete in conventional systems, when, in fact, they are not: they contain as-yet-undetected errors that will require reworking. Indeed, there may be many cycles of reworking until the tasks of the stage—and the project—are completed.


Exhibit 2: The rework cycle

We start the project description, therefore, by recognizing this simple structure of the rework cycle, as shown in Exhibit 2 (Cooper, 1993). Work starts in a pool of “Work To Be Done,” which gets depleted by applying staff who work at some (time-varying) productivity. But as that work is usually less than perfect, only some fraction (“Quality”) of the work moves directly into the pool of “Work Done.” The rest is diverted into a pool of “Undiscovered Rework.” Once discovered, that work moves to the pool of “Known Rework,” which requires staff to execute the needed rework. This work too may need subsequent rework. And so goes the cycle.
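The four-pool structure just described lends itself to a simple simulation. Below is a minimal sketch in Python; the parameter values (scope, staffing, productivity, quality, rework-discovery rate) are illustrative assumptions for demonstration, not values from the model underlying this paper.

```python
def simulate_rework_cycle(scope=1000.0, staff=10.0, productivity=1.0,
                          quality=0.8, discovery_rate=0.25, dt=1.0):
    """Simulate the four pools of the rework cycle until the work is
    (essentially) all done. Returns (effort_expended, elapsed_time)."""
    work_to_do = scope      # "Work To Be Done"
    undiscovered = 0.0      # "Undiscovered Rework"
    known = 0.0             # "Known Rework"
    done = 0.0              # "Work Done"
    effort = time = 0.0

    while done < scope - 0.5:
        # Staff work off known rework first, then original tasks
        # (a modeling choice, not a claim about real priorities).
        capacity = staff * productivity * dt
        work = min(capacity, known + work_to_do)
        from_rework = min(work, known)
        known -= from_rework
        work_to_do -= work - from_rework
        # Only the "quality" fraction is truly done; the rest becomes
        # undiscovered rework awaiting later detection.
        done += quality * work
        undiscovered += (1.0 - quality) * work
        # Each period, a fraction of undiscovered rework surfaces.
        found = discovery_rate * undiscovered * dt
        undiscovered -= found
        known += found
        effort += work / productivity
        time += dt
    return effort, time
```

Even this toy version reproduces the qualitative behavior managers describe: lowering quality raises total effort roughly as scope divided by quality and stretches the schedule, because each pass of rework spawns further rework, which is discovered only later.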

Building on that basic structure, I added other elements common to the stories told by experienced project managers, as illustrated in Exhibit 3.


Exhibit 3: Combined views of many project managers

The “+ & Δ” shown in Exhibit 3 represent real or prospective additions and changes to the project’s work-scope. The immediate effects (shown in green) are to grow the estimate-at-completion (EAC) and, absent a schedule change, the staffing.

Increased staffing needs can mean more overtime (shown in orange), especially until new hires are brought on board. Sustained high overtime reduces staff productivity.

Further, added hiring in a constrained labor market dilutes skill levels and strains supervision (shown in blue), further eroding productivity and quality.

Later, the impacts of reduced work quality propagate (see red paths) as subsequent work products build off earlier faulty ones, and more work is done out of sequence.

Later still, mounting pressures of overruns and rework lead to morale problems, furthering productivity losses—even aggravating staff turnover (shown in purple).

Finally, problems early in the project propagate to downstream work (Exhibit 4). Impacts originating in the engineering phase end up affecting construction as well, where another whole set of such dynamics kicks into action.


Exhibit 4: Impacts propagate across work stages

Pause to take note of the obvious circularity in this diagrammed description of project phenomena. The circular paths formed by the connection of causes and effects form loops—feedback loops that are interconnected with one another. Furthermore, many of the loops represent self-reinforcing sets of phenomena. For example, a higher staffing need can generate more overtime or hiring, with productivity loss from either, lowering progress, raising the estimate-at-completion, and thus generating a need for more staff.

The frequency with which managers of so many different projects cite these factors and phenomena leads us to believe that these factors constitute a nearly universal description of the conditions of executing complex projects. While each factor may operate in widely varying degrees for different projects, the commonality of the factors can help us understand and reconcile compounding risk impacts and cumulative disruption.

How Much—Heuristics for Project Impacts

Despite the near universality of these phenomena, not every project is doomed to severe widespread disruption and compounded cumulative impacts. Some projects are resilient, while others are sensitive. We can use the theory described above to bring something to the discussion other than analytical arm-waving or expensive custom analyses.

Analyses reported in this section seek to demonstrate the magnitude of these impacts on projects with different characteristics and in different conditions. To do so, I employed a simulation model of exactly the cause-effect and feedback structure described above. The model explicitly portrayed the time-varying conditions that cause changes in productivity, rework and its detection, staffing levels, and work execution, such as staff experience levels, work sequence, supervisory adequacy, worker morale, vendor timeliness, overtime, and hiring and attrition, among others. Each project can be described with a particular set of these factors.

But these analyses are not focused on a single project; rather, they examine thousands of projects. The results described here are extracted from literally tens of thousands of simulations of all combinations and permutations related to: amount, timing and duration of changes and risks; productivity-affecting factors and conditions; and target schedule conditions.
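As a sketch of such an experimental design, a full-factorial sweep can be enumerated in a few lines of Python. The factors and levels below are purely illustrative; the actual factors and levels of the study are not enumerated in this paper.

```python
import itertools

# Hypothetical factor levels (illustrative only, not the study's).
change_pct = [0, 5, 10, 15, 20, 25]         # magnitude of scope change
schedule_months = [18, 24, 30, 36, 42, 48]  # planned build duration
labor_market = ["loose", "tight"]           # hiring conditions
overlap = ["low", "high"]                   # design-build overlap

# Every combination of the factor levels defines one simulated project.
runs = list(itertools.product(change_pct, schedule_months,
                              labor_market, overlap))
print(len(runs))  # 6 x 6 x 2 x 2 = 144 simulation runs
```

With a dozen or more such factors at several levels each, the combinations quickly reach the tens of thousands of runs reported here.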

I, in effect, analyzed thousands of projects, each with its own unique combination of these factors and conditions. Some are very tightly scheduled efforts, with much design-build overlap. Others have highly skill-intensive tasks and operate in a tight labor market. Most projects exhibit all of these characteristics to some degree; some exhibit few of them and are driven instead by other productivity-affecting conditions.

Into these many mixtures of project conditions I injected varying degrees of changes and risk factors. In doing so, we seek to better understand the conditions that cause project impact variation, be that impact a prospective one—from a combination of risks—or an impact we would analyze retrospectively, the cumulative disruption of changes.

Among the many possible combinations, I focus here on two key conditions that demonstrate compounding, cumulative, disruption, and acceleration impacts—the magnitude of changes and the project’s scheduled duration. A discussion of the questions raised at the outset follows the description of the analysis results.

Project impacts from varying magnitudes of changes and risk items

In this first sequence of tests, varying amounts of additions to the project’s scope of design work are injected into the project. The analysis might be conducted near the beginning of a project, when analysts have identified not one but many possible circumstances that put the project at risk of scope growth or design change. Or it might come near the end of the project, when analysts look back in an effort to understand the impacts of several changes that are known to have occurred. Whether we are looking forward or retrospectively, we can inject various amounts of changes into the project model to test their impact on project cost performance.

Exhibit 5 displays the impact resulting from design scope changes that range in magnitude from 0 to 25% of the originally planned scope. As in all of the analyses described here, the results reported are in terms of the impact on the expenditure of hours in the project’s build effort. Note that the percentage impact on hours expended not only grows as expected with greater change, but it grows non-linearly. The first 5% of design scope change causes a 2.5% increase in build hours, while the last 5% (i.e., moving from 20% to 25% design scope increase) causes a 7% increase in build hours.


Exhibit 5: The growing impact of more changes
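A back-of-the-envelope model shows why the curve bends upward. Suppose, purely as an illustrative assumption, that total build hours scale as scope divided by work quality, and that quality erodes linearly with the amount of change (through the overtime, skill dilution, and out-of-sequence work mechanisms of Exhibit 3). Then each added increment of change costs more than the last:

```python
def build_hours(change_frac, base_scope=1000.0,
                base_quality=0.85, erosion=0.3):
    """Toy model: hours ~ scope / quality, with quality eroded by change.
    All parameter values are illustrative, not the paper's."""
    scope = base_scope * (1.0 + change_frac)        # direct effect of change
    quality = base_quality - erosion * change_frac  # compounding effect
    return scope / quality

base = build_hours(0.0)
for c in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"{c:.0%} change -> {build_hours(c) / base - 1:+.1%} build hours")
```

The specific percentages will not match Exhibit 5, but the shape does: the marginal cost of the last 5% of change exceeds that of the first 5%, because the added scope and the eroded quality multiply rather than add.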

Project impacts under varying project schedules

In this second sequence of tests, a given amount of design scope change (15%) is injected into a project with varying build schedules. Again, the scope change could be a foreseen risk or a known past condition. In all these tests the build effort is of the same scope and starts at the same point in time, but the planned duration of the build in these test conditions varies from 18 to 48 months. Exhibit 6 shows the cost impact from that specified set of changes, under the conditions of these varying scheduled completion times. The most highly compressed schedules are most vulnerable to impact from the changes. For example, a project with a schedule of 33 months sees an impact of 9%, while one with a 21-month schedule (i.e., one year shorter) sees a 40% impact from the same set of changes.


Exhibit 6: Change impacts are higher in compressed schedules
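The same kind of toy model illustrates the mechanism, if we assume (again illustratively, not from the paper's model) that schedule compression amplifies the quality erosion a given change causes, because a compressed project must absorb the change through overtime and rapid hiring:

```python
def change_impact(months, change_frac=0.15, nominal_months=36.0,
                  base_quality=0.85, erosion=0.2, sensitivity=4.0):
    """Toy model: fractional cost impact of a fixed change under a given
    schedule. Compression pressure amplifies the quality erosion caused
    by the change. All parameter values are illustrative assumptions."""
    pressure = max(0.0, nominal_months / months - 1.0)  # 0 at nominal pace
    quality = base_quality - erosion * change_frac * (1.0 + sensitivity * pressure)
    baseline = 1.0 / base_quality          # hours per unit scope, no change
    hours = (1.0 + change_frac) / quality  # hours with the change absorbed
    return hours / baseline - 1.0

for m in (48, 42, 36, 30, 24, 18):
    print(f"{m} months -> {change_impact(m):+.1%} impact from a 15% change")
```

The numbers are invented, but the shape matches Exhibit 6: beyond some pace, each month cut from the schedule raises the impact of the same set of changes disproportionately.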

Project impacts of different magnitudes of change AND under varying project schedules

In this third and final sequence of tests, different amounts of changes or risk conditions are combined with varying build schedules. Exhibit 7 displays the results of these tests, which examine the same variation in changes and schedules as reported above, but in combination with one another.


Exhibit 7: The two factors combine to generate different impact

The vertical axis displays the magnitude of impact, as before. Now, however, we are looking at how the impact is affected by both the schedule and the quantity of changed design work. As you look along any of the lines emanating either from the schedule dimension (the x-axis), or the change magnitude dimension (the z-axis), you see the corresponding percentage cost impact (on the y-axis). For reference, note that the lines labeled as A and B are the results presented above in Exhibits 5 and 6, respectively. The impacts get more and more dramatic with combinations of higher amounts of change and shorter schedule durations—reaching a peak here at the point labeled C, where the combination of the highest amount of change (25%) with the shortest schedule (18 months) creates a cost growth of well over 100%.

What It Means--Answering Questions from Different Worlds

Using the qualitative view of projects described above, and armed with this illustrative sample of quantitative analysis results, we can better understand:

Why is there cumulative disruptive impact?

Disruption itself is hard enough to explain and quantify, and an aspect that makes it especially challenging is the cumulative effect of many changes. Some call it that, while others label it “snowballing” or “death by a thousand cuts.” Whatever the term used, the phenomenon of cumulative impact is clearly visible in the test results seen in Exhibits 5 and 7.

Should there be a small scope addition or other change on a project, the impact on staffing needs and work progress may be limited to little more than the directly visible effects of the added work, with minimal disruptive impact. But our analysis results clearly show that cumulative disruption impact does occur if that same small change occurs as one of many changes. Why? For two reasons. First, the more changes there are, the greater the likelihood of triggering more of the self-reinforcing phenomena shown in Exhibit 3. A few changes may, for example, generate the need for some extended overtime, along with the associated productivity loss. With many more changes comes even more overtime, and with it even more productivity loss, along with additional new hires needed to accommodate the added work, which lowers the experience base, increases rework, and further lessens productivity. Second, if together the design changes are substantial enough to slow design progress notably, the stage is set for transmitting more productivity- and quality-affecting impacts to downstream dependent build stages—thus generating greater cumulative impact.

Clearly, the amounts of cumulative impact charted in Exhibit 5 will differ for different projects under different conditions—but the pattern of results is the same. With greater degrees of change come ever greater, and growing, amounts of cumulative impact.

Why are there acceleration impacts?

One of the great ironies that one sees in analyzing the impacts of changes on projects is this:

  1. projects that are delayed cost more, while
  2. projects that are accelerated cost more.

When a project is delayed, a contractor will logically claim increased costs. It makes sense, and project customers pay, literally, for the delay. This is an accepted analytical and legal principle of contract change impact pricing. And almost as often, a contractor will seek additional funds if there is an acceleration—claiming the added cost of working to a shortened schedule. Or it could be a constructive acceleration—the added cost of working to a schedule that has not been extended enough for the added changed work. What gives? Is it really possible that a shorter schedule, or lack of schedule relief, can cost more? Acceleration impacts are, sadly, quite real. In the world of disruption analysis, these impacts are just as difficult to explain and quantify, and so the analyses are often poorly executed.

But recall the analysis results reported above in the context of the diagram of Exhibit 3, repeated below as Exhibit 8. In this rendition of the diagram, the phenomena that improve performance when a schedule is accelerated are shown in green. Those phenomena that worsen performance when the schedule is accelerated are shown in red. Note the preponderance of red paths. And worse, they are self-reinforcing paths: more hiring leads to skill dilution and lower productivity and more rework, which leads to lesser-than-planned progress, which leads to more hiring, and so on. Further, as in the cumulative impact description above, the tighter schedule means that the downstream build effort must use less mature design product, so even more cost impact occurs. The result is cumulative impact from acceleration.


Exhibit 8: Acceleration impacts work through these paths

Just how large acceleration impacts can be is illustrated in the analysis results shown in Exhibits 6 and 7. For any significant amount of change, the chance of triggering productivity-reducing phenomena is increased if schedules are compressed. While there are clearly tradeoffs to be made against the fixed costs of the project’s marching army, they are just that—tradeoffs. Too often the over-simplistic notion that extending the project schedule will cost more prevails without due consideration of the very real costs of acceleration.

Why is there compounding of impacts from multiple risks?

In the world of proactive risk analysis, the same phenomena retrospectively described as disruption can be expected to occur when multiple risk conditions combine to affect a project. An example is a risk condition (such as late information) that could cause more cost on one project activity, and another changed (risk) condition that separately impacts the cost of a different activity. Together, the two changed conditions may aggravate the total impact of the two individually. How? The two together might trigger a need for more hiring of new staff, with an accompanying loss of staff productivity, as new hires not only work less productively, but also create increasing demands on supervisory time. This added consequence can then lead to further impact, such as allowing additional rework that less-strained supervisors could have detected and avoided, creating a multi-path compounding impact of the combination of changed conditions. On a project with critical talent needs, this is an all-too-familiar phenomenon, and is but one example of how multiple risk conditions can exacerbate the impact significantly beyond the simple sum of individual impacts.

Indeed, one way of viewing risks is as changes that have yet to occur. If they should occur, and as they cause project impacts, we might be looking ahead to understand and mitigate the compounding impacts, or we might be looking back to diagnose and quantify them as disruption. Different labels, same dynamic phenomena.

And so, why is there a tendency for projects to go out of control?

Pick your favorite survey to support the point, but one thing is clear: projects all too often go out of control. Why? The theory and analyses offered here help explain why it is that projects have such a strong tendency to fail in meeting cost and schedule targets. Those self-reinforcing phenomena—the ones we prospectively label as compounding risk impacts and retrospectively label as disruption—are always there, ready to take effect and work with ever-greater intensity to thwart our managerial efforts. If managers don’t add resources when the project is behind schedule, they are castigated for inaction. If they do add resources, the project suffers from overtime fatigue, or skill dilution, or supervision constraints, or second-shift inefficiencies, and any of the many productivity- and rework-worsening phenomena triggered by those first impacts, in self-reinforcing spirals of cost growth.

Conclusion--So What Can We Do?

As project managers and customers, we need to take a more sophisticated view of the dynamics of our projects and programs. As it stands now, we too often make our projects more susceptible to performance-hurting phenomena. We do so with some of the most basic things we are taught in Project Management 101, or that some of our texts tell us, or, more typically, that we learned at the feet of some other experience-hardened manager. But those conventional control measures and responses may aggravate rather than help (Cooper, 1998).

Directly counter to the lessons of the analyses here, for example, we vow not to let the schedule slip! Why? Because we learned longer schedules cost more. How? One, we saw projects take longer and cost more. Two, it’s easy to think about and compute the cost of the marching army of management and support effort. Three, this is the way analysts compute extra cost from a critical path computation of added time. Four, even courts accept the logic that added time is a basis for computing additional cost. And so, when faced with lagging progress and the prospect of a schedule extension, what do we do? We boost project resources by increasing the staff via overtime or added shifts or transfers of new staff or outright new hires. With this come the sometimes acknowledged, but almost always present, impacts of fatigue, inexperience, work inefficiencies, and rework, all destined to add cost to the project.

As the unifying theory and the analyses here show, tighter schedules and increased staffing levels can easily cause dynamics that raise the cost of a project, and the impact of multiple changes. The cost-raising effects may be prospectively labeled as compound risk impacts, or retrospectively claimed as disruption or cumulative impact or acceleration costs. A clear lesson from the analyses here is that, if we believe even some of those impacts exist, then relaxing a project schedule might improve conditions enough to reduce the cost. Of course, sometimes the schedule is sufficiently important that it merits the added costs of the attempt—such as when other, clearly much more expensive impacts will be avoided, or when national security demands timely deployment. But even in these cases we need to be aware of the tradeoff between cost and schedule that is so often underestimated or ignored, and yet is so often present.

Finally, as project analysts, we should be clear among ourselves and for our clients, the above-chastised project managers and customers, that the worlds of proactive risk analysis and retrospective forensic analysis are not so different after all. The compound effects of multiple prospective risks, when realized, become the cumulative impacts of disruption, aggravated by acceleration. When analysis of the project moves from forward- to backward-looking, we must do more than shift our terminology and methods of analysis. We must rely less upon conventional thinking and intellectually vague rules of thumb that do little to improve our profession. We must anticipate, analyze, and communicate better the underlying causation of project cost growth—whatever its label. And we must help our clients translate that understanding into more enlightened and effective project management.


Cooper, K. G. (1993). The rework cycle. PM Network. Newtown Square, PA: Project Management Institute.

Cooper, K. G. (1998). Four failures in project management. In J. Pinto (Ed.), Project Management Handbook. Newtown Square, PA: Project Management Institute.

Cooper, K. G., & Reichelt, K. S. (2002). Quantifying project disruption with simulation. PMI Annual Conference, San Antonio, TX.


