Rigorous Analysis of a Challenging Problem
In large, complex projects, the only constant is change. Increasing scope and changing design requirements are common, often pervasive. While such changes may be manageable individually, taken together they can cause projects to spiral out of control. Too often, the result is a contract dispute, of which “disruption” is frequently the biggest, but most amorphous piece.
What is “disruption”? Phrases commonly used to define disruption include ripple effects, knock-on impacts, secondary effects, impact on unchanged work, and lost productivity. Typically, the direct impacts of changes can be documented or estimated reliably, even if entitlement issues are argued. However, the real secondary effects of changes, i.e., disruption damages, are problematic.
While courts and boards have accepted the idea of disruption damages, cases attempting to claim disruption generally have not been very successful. Historically, it is the disruption component of claims that has been the most difficult to quantify, the most contentious, and has resulted in the lowest recovery for claimants. Why are disruption claims so difficult?
• Disruption can be widely separated in space and time from the precipitating event(s), but to be claimed successfully must be causally tied to their source: “Although a change order may directly add, subtract, or change the type of work being performed in one particular area of a construction project, it also may affect other areas of the work that are not addressed by the change order” (Jones 2001).
• Disruption impacts can be cumulative across large numbers of individual impacts.
• Disruption is fundamentally about productivity and rework, which are hard to measure, and thus are rarely measured well.
• The ideal form of damage quantification is to define the amount of impact, including disruption, that would put the injured contractor in the condition it would have been but for the damaging events—a challenging analytical task.
• Disruption claims must screen out the effects of other concurrently occurring contributors (such as strikes, difficult labor markets or mismanagement).
• Contractor-Customer discussions of project cost growth tend to be adversarial, even while the project continues, making efforts to quantify, explain, and mitigate disruption especially challenging.
• Finally, with all of these difficulties, there is also, quite frankly, a poor track record of rigor in disruption quantification. It is far easier, and usually tempting, for both sides to put all the blame on the other without rigorous analysis to back their claims; sloppy logic and analysis underlie assertion (“You caused all our problems!”) and counter-assertion (“You mismanaged everything!”).
Claimants have traditionally used a variety of methods to try to quantify their disruption claims, but none adequately deal with these issues. Is there a better way? We believe there is in the form of dynamic simulation modeling.
Making the Case
Although we are not attorneys, our experience on dozens of disruption claims has exposed us to many of the issues and relevant case law.
Acknowledging the difficulty of quantifying claims, in Wunderlich (Wunderlich, 1965) the U.S. Court of Claims ruled that while “a claimant need not prove his damages with absolute certainty or mathematical exactitude” this consideration did “not relieve the contractor of its essential burden of establishing the fundamental facts of liability, causation and resultant injury.”
Three elements must be proven for a claim to be successful:
(1) Liability is a legal issue that is separate from quantification of impact, and not the subject of this paper. In our quest to quantify damages associated with an impact, we do not address the question of who is liable, but rather focus on quantifying the extent of impact from disputed events (regardless of who is ultimately found responsible for each of these events).
(2) Resultant injury is the claimed loss of productivity. In Modern Foods Inc. (Modern Foods), the ASBCA ruled that an equitable adjustment is “the difference between what it would have reasonably cost to perform the work as originally required and what it reasonably cost to perform the work as changed.” In Sun Electric (Sun Electric), the ASBCA ruled, “an equitable adjustment involves the difference between costs for the changed work and what upon the other hand it would have cost to perform the work as originally specified.” Similarly, in Air-A-Plane (Air-A-Plane), the ASBCA found, “Equitable adjustment is the difference between what it would have reasonably cost to perform the work as originally required and what it reasonably cost to perform the work as changed.” This may be relatively straightforward for small numbers of isolated changes to a project. However, as the ASBCA found in International Aircraft Services (Int'l Aircraft), “the dollar amount of an appellant's claim cannot be determined on an item by item, part by part basis any more than a tapestry can be evaluated by separating it into its individual fibers and valuing each fiber.” In cumulative impact claims, the challenge is how to consider all changes together with everything else that may have been happening on a project.
(3) Causation requires the establishment of a causal link between the direct impacts claimed and the total injury suffered by the claimant. This is the most difficult element to prove of the three (Jones 2001) because of the complexity of separating inefficiencies that are caused by the customer's changes from those that would have occurred anyway, for example, as the result of contractor-caused inefficiency.
Disruption is fundamentally about reduced productivity and increased rework on the project. To quantify disruption rigorously and accurately, a methodology needs to be able (1) to explain the causes of variations in productivity and rework, and (2) to assess what would have happened under alternate conditions. But to quantify the disruption accurately is not enough. To be successful in resolving disputes, a methodology must also address the other challenges to resolving a disruption claim—it must be able to:
• Provide a causal map that cogently ties a resulting effect to the precipitating event, even though portions of the effect may be months or years downstream and in a different part of the project
• Account for and explain the synergy among the individual claimed events that can result in a much larger than expected overall impact
• Explain why productivity trajectories and rework creation would have been affected, how much and for how long
• Account for other concurrent events that might also influence project performance, such as strikes, difficult labor markets or mismanagement.
Ideally, the methodology would be auditable and different assumptions could easily be tried, enabling it to serve as an objective test bed. Additionally, the methodology should permit validation (though this is a long and complex topic discussed in detail in other papers, see in particular Stephens 2002).
Ultimately, as important to claimants as the capabilities of the methodology is its admissibility in court. Recent court findings in Daubert (Daubert, 1993) and Kumho Tire (Kumho Tire, 1999) have established the standards for courts to use in admitting expert testimony. Daubert established the following criteria for admissibility of scientific evidence of a methodology (later extended by Kumho Tire to apply not only to “scientific” testimony, but also to expert witnesses with “technical” or “other specialized knowledge”):
• It must have a testable hypothesis
• The hypothesis must have been tested to determine the known or potential error rate
• It must have been peer reviewed and published
• It must have general acceptance in the scientific community.
Review of Traditional Methods
Approaches commonly used as the basis for quantifying contract damages include Modified Total Cost, CPM and Measured Mile. Each has its place, but none is without shortfalls for use in complex disputes over disruption damages.
• Modified Total Cost takes the position that customer-responsible cost growth is the cost overrun less contractor inefficiencies. A difficulty with this approach is that it does not explain the causality between customer-driven impacts and the resulting cost increases, instead attributing all cost growth to the customer, unless explicitly acknowledged by the contractor (Jones 2001).
• Critical Path Method (CPM) is a more formal “model.” It is useful for discrete work item planning and scheduling. Its use on disruption claims involving iterative design activity and its impact on construction is limited at best. Further, CPM analysis aims to explain project delays and, used in disruption claims, assigns a cost to those delays. This approach is problematic, as time and money are not so neatly correlated (e.g., while one might argue that delays increase cost, another might as easily argue that acceleration increases cost). Additionally, CPM typically starts from the “as-planned” project, resulting in the problem of validating the predicate. And, finally, CPM doesn't deal well with changes in productivity or with rework, the essence of disruption.
• Measured Mile analysis attempts to compare an impacted piece of work with a comparable piece that was not impacted. In concept, this is a reasonable approach, but the greatest difficulty with this method is in identifying appropriate parts of the work for comparison. It is difficult to establish a fair reference. Further, in a project that suffers large amounts of delay and disruption, even the most isolated parts of the work may themselves be affected by snowballing problems. Also, it provides no causation or traceability of cost growth, no explicit attribution of cost responsibility.
Given the flaws with the traditional methods, what is a contractor to do? The court's objective is to put the claimant back in the position they would have been in, but for those events for which the customer is liable. The Defense Contract Audit Agency (DCAA) states: “[A]n adjustment should not increase or decrease a contractor's loss or profit which would otherwise have been incurred on the contract.” That is the objective of the process. Now, how can we get there?
Start all over, and redo the project this time without the disputed actions or inactions? Assuming the parties could agree on entitlement issues, this would offer a clear definition of what would have happened. But while this may afford the most complete and accurate assessment of the actual project costs, it has some rather obvious difficulties: it would be time-consuming, it would be impossible to recreate the exact project conditions (even if it were the same project, it would no longer be in an identical environment given the passage of time), and of course, none of the parties would want to go through it again. It might even cost more than hiring attorneys and consultants. And what if a ruling changed the entitlement interpretation? Yet another round of redoing would be required. Therefore, in the absence of a real project for comparison, some kind of model needs to be used. The only question is what kind of model that should be.
Dynamic Simulation Modeling for Disruption Damage Quantification
The authors and our colleagues have developed a simulation model of projects that has at its core an explanation of productivity and rework. The model was first developed to analyze project performance and explain cost growth on a large shipbuilding effort at Litton in the 1970s (Cooper, 1980). After this effort resulted in an unprecedented award for Litton, the model has since been applied to over 130 large projects primarily in aerospace, defense, and construction, and in all stages of projects' life cycles:
• Pre-bid, to help managers develop an accurate assessment of likely project costs and risks, and to assess likely competitor bids
• During project execution, to help managers proactively manage the project (this being easily the model's most common use)
• For change management, to help managers price the full cost impact of changes and mitigate those impacts
• Retroactively, in post-mortems to help managers understand what happened on projects and promote the learning of best practices.
While most of its use has been proactive, to aid in the management of projects, the model also has facilitated resolution of dozens of large, contentious disputes. Among the model's uses in disputes are cases in shipbuilding, aerospace/electronics, civil construction and IT systems. The model has been used on both government and commercial contracts, on cases headed for federal court, state courts, arbitration and mediation. The model has yet to be court-tested, as thus far, all cases have settled prior to use in court. However, we believe the model will be found admissible in courts (a full explanation of the model's alignment with technical expert witness requirements is detailed in Stephens 2002).
The model provides a simulation of project performance as it actually occurred, including not only customer changes, but also project conditions (such as the labor market, other ongoing work at the contractor, etc.) and can be used to analyze the impact of altering these conditions. The numerically validated model can then be used to simulate the project, as it would have occurred in the absence of customer changes. The difference between “what happened” and “what would have happened” is, by definition, the full impact of those changes and the disruption they, and they alone, caused.
This approach can provide an objective, auditable explanation and quantification of the causes of project performance.
The following section details how the model works.
Modeling Project Dynamics
The traditional view of how projects work (Exhibit 1) embodied in conventional planning tools is that as you apply people to work on tasks, work gets done (Cooper, 1993).
In more sophisticated planning tools, some accounting of “productivity” is considered (Exhibit 2), perhaps even recognizing that this productivity can be dynamic, i.e., changing over time.
But this picture of project dynamics leaves a gaping hole. How do we explain the typical project performance shown in Exhibit 3, in which work is completed in the early stage of a project while people continue to work for years beyond? What are these people doing? Are these same people, who got so much done in the early stages of the work, now remarkably unproductive? Or is something else going on?
Apparent progress can slow, or even seem to stop, mid-way through a project (Exhibit 4) because the staff finds itself seemingly taking a step backward for every step forward. Those with experience on large, complex projects will immediately recognize this “lost year” phenomenon (Cooper, 1993), and know the answer: rework.
As shown in Exhibit 5, that tail of effort that continues is effort expended reworking the initial work products.
And so we add to our view of projects the notion of “quality” (Exhibit 6), which we define as “the fraction of work being done which will not require subsequent rework.”
Even this view omits an important part of the project dynamics, the notion that rework when created is not immediately recognized as such. It may linger, dangerously inviting future rework, for weeks, months, even years, before it is discovered and fixed. So, the complete view (Exhibit 7) must include the concept of undiscovered rework.
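The rework cycle described above (Exhibits 5 through 7) can be sketched as a few lines of simulation. This is a toy illustration, not the authors' actual model; every name and parameter value here (staff level, quality fraction, discovery delay) is an assumption chosen for clarity:

```python
# Toy sketch of the rework cycle: flawed work accumulates as
# "undiscovered rework" and only re-enters the backlog after a delay.
# All parameter values are illustrative assumptions.

def simulate(initial_work=1000.0, staff=10.0, productivity=1.0,
             quality=0.8, discovery_time=12.0, months=60):
    work_to_do = initial_work   # tasks not yet attempted (or returned for rework)
    undiscovered = 0.0          # flawed tasks not yet recognized as flawed
    done = 0.0                  # tasks actually complete
    history = []
    for month in range(months):
        attempted = min(work_to_do, staff * productivity)
        work_to_do -= attempted
        done += quality * attempted                 # fraction done right
        undiscovered += (1 - quality) * attempted   # latent rework created
        discovered = undiscovered / discovery_time  # first-order discovery delay
        undiscovered -= discovered
        work_to_do += discovered                    # rework re-enters the backlog
        history.append((month, done, undiscovered, work_to_do))
    return history

hist = simulate()
```

Plotting `done` against cumulative effort from this sketch reproduces the “lost year” pattern: real progress stalls while effort continues, because staff are unknowingly redoing earlier flawed work.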
A Variety of Direct Impacts Can Set Projects Down the Death Spiral
There are a variety of events and conditions that can trigger the “secondary effects” (disruption) that can hurt performance on a project. Among those we have seen on projects gone awry are:
• Design changes
• Workscope changes
• Late receipt of important technical information
• Excessive delays in design review and approval
• Diversion of key management and technical resources
• Inadequately defined specifications or design “baseline”
• Changes in applicable technical standards and regulations
• Late or inadequate subcontractor performance
• Schedule changes or acceleration
• Superior knowledge.
Often the direct consequences of these events are well understood (even if contractual entitlement arguments surround them). The difficulty lies in the indirect consequences (or disruption). Depending on the number, timing and magnitude of events, the indirect consequence can be anywhere from negligible to several times the size of the direct foreseeable consequence. How does this work? How can we describe it, let alone quantify it?
The Model Describes Project Dynamics
The “+ & Δ” shown in Exhibit 8 represent changes and additions to the project's workscope. The direct consequence of changes and additions is to increase the workscope and reduce management's Perceived Progress on the project. With a lower progress estimate, management's Expected Hours at Completion will grow, and they will increase their Staffing Requested.
Increasing the staffing request may have the short-term consequence of requiring the use of more overtime until the new hires are brought on board (Exhibit 9). Sustained high levels of overtime reduce the per-hour productivity of staff, and hiring in a constrained labor market dilutes Skills & Experience and strains Supervision (Exhibit 10), which further erode productivity and quality.
Later, the impact of the reduced quality will be felt (Exhibit 11). The errors created by the fatigued and less experienced staff will have propagated, as subsequent work products build off earlier faulty ones, and thus have been done at lower productivity and quality as well.
Later still, all the pressures of overrunning the budget and schedule, and finding more and more rework, lead to morale problems, furthering the decline in performance (Exhibit 12).
To make matters worse, all these problems feed back upon themselves. For example, the early staffing shortage that required overtime and reduced productivity, eventually necessitates more staffing further lowering productivity and quality.
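These compounding productivity effects can be illustrated with a small numeric sketch. The penalty curves below are hypothetical stand-ins for the model's actual relationships; the coefficients are assumptions chosen only to show how individually modest penalties multiply:

```python
# Hypothetical sketch of compounding productivity penalties (Exhibits 9-10).
# The 0.5 and 0.3 coefficients are illustrative assumptions, not calibrated
# relationships from the authors' model.

def productivity_multiplier(overtime_frac, new_hire_frac):
    fatigue = 1.0 - 0.5 * overtime_frac     # sustained overtime tires staff
    dilution = 1.0 - 0.3 * new_hire_frac    # green hires dilute skills/supervision
    return fatigue * dilution

base = productivity_multiplier(0.0, 0.0)        # 1.0: no penalties
impacted = productivity_multiplier(0.20, 0.25)  # 0.9 * 0.925 = 0.8325
```

Even in this toy form, 20% overtime plus 25% new hires costs roughly 17% of productivity, so the hours needed grow well beyond the direct scope increase, which in turn demands still more staffing: the self-reinforcing loop described above.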
Problems early in the project propagate to downstream work (Exhibit 13). Change impacts originally isolated in the engineering phase end up affecting construction as well. This is a particular strength of the methodology—the ability to link construction impacts to the initiating difficulties back in the design phase.
With this causal description of project dynamics, it becomes obvious how problems early in a project can propagate through many stages—ripple…knock on…disruption.
Quantifying the Disruption
So how do we use this to answer the question of how much disruption damages should be attributed to the customer? The three steps are to: (1) recreate the project history; (2) remove customer impacts and re-simulate the would-have condition; and (3) quantify disruption by analyzing the difference between the two simulations.
Recreate Project History
The model is tailored to recreate known historical performance on the project. For any specific project, we start with known initial conditions (such as the starting budget and schedule for the project) and typical parameters for similar types of projects (and with well over 100 projects in our database, we have enough history to be able to draw on something similar). These parameters describe the strength of relationships (e.g., how much various levels of overtime affect productivity) and time delays (e.g., how long it takes to hire a new employee once the need for one has been recognized). We also add in environmental conditions (e.g., those that describe the job market) and the direct impact of exogenous events (such as scope changes or a labor strike).
The simulation of this preliminary model is then compared with actual historical data, and discrepancies are noted. Interviews with managers and people familiar with the project are used to adjust parameters and resolve differences between the project as simulated and as it actually occurred. The final model simulation will closely match known history (Exhibit 14) along a wide variety of dimensions, both “hard” (for example, staffing and progress) and “soft” (how much overtime impacted productivity, how morale evolved over the course of the project).
In our experience on projects, enough information is known about the factors in the model that it is possible to simulate (and validate) even those for which information is not directly available. For example, while no data may exist for productivity, data do exist for staffing and progress—this enables one to make accurate inferences about the productivity trajectory (Exhibit 15).
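The calibration step just described can be sketched in miniature: adjust an uncertain parameter until the simulated trajectory matches recorded history. Real calibration spans many parameters and is informed by interviews; the single-parameter grid search below, with synthetic "actual" data standing in for project records, is only an assumed illustration of the principle:

```python
# Toy calibration sketch: infer an unobserved parameter (quality) from
# observed progress data. All values are synthetic illustrations.

def simulated_progress(quality, months=24, staff=10.0):
    work, done = 500.0, 0.0
    series = []
    for _ in range(months):
        attempted = min(work, staff)
        work -= quality * attempted   # only correct work leaves the backlog
        done += quality * attempted
        series.append(done)
    return series

# Stand-in for recorded project history (real cases use actual data).
actual = simulated_progress(0.72)

def fit_error(quality):
    sim = simulated_progress(quality)
    return sum((s - a) ** 2 for s, a in zip(sim, actual))

# Grid-search the quality level that best reproduces history.
best = min((fit_error(q / 100), q / 100) for q in range(50, 100))[1]
```

Here the search recovers the quality level (0.72) that generated the "historical" curve, which is the sense in which factors with no direct data, such as productivity, can still be inferred from staffing and progress records.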
Remove Customer Impacts and Simulate Would-Have
To quantify the impact of specific events, we compare the simulation that includes these events (and thus represents the project as it actually occurred) to a simulation that does not (the Would Have). The Would Have simulation shows what would have happened but for, and only but for, the removed events (Exhibit 16).
Quantify “Disruption” by Analyzing the Difference Between the Simulations
The difference between the Historical and the Would Have simulations (Exhibit 17) represents the full consequence of the events (both the direct impact and the indirect, or disruption, impact). Since the only difference between the two simulations is the removed events, these events are solely responsible for all the difference between the two simulations. This is just what the courts seem to want when determining damage assessment.
Should different “would have” scenarios be required (e.g., if a ruling renders unclaimable an event that had been part of the dispute), alternative scenarios can be quickly and easily tested.
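Steps (2) and (3) amount to running the same calibrated model twice and differencing the results. The sketch below uses a deliberately simplified stand-in for the full simulation; the scope and quality figures, and the idea that the disputed changes degrade quality, are assumptions for illustration only:

```python
# Sketch of the would-have comparison: simulate with and without the
# disputed events, then separate disruption from the direct impact.
# All figures are hypothetical.

def total_hours(scope, quality):
    # Toy stand-in for the full model's hours-at-completion output.
    hours, work = 0.0, scope
    while work > 0.5:
        attempted = min(work, 10.0)
        hours += attempted            # one hour per task attempted
        work -= quality * attempted   # flawed work stays in the backlog
    return hours

as_occurred = total_hours(scope=1100.0, quality=0.75)  # with disputed changes
would_have = total_hours(scope=1000.0, quality=0.85)   # but-for simulation

full_impact = as_occurred - would_have  # direct impact plus disruption
direct = 100.0                          # the added scope, priced directly
disruption = full_impact - direct
```

Because the removed events are the only difference between the two runs, the hours difference is, by construction, the full consequence of those events; the portion beyond the directly priced scope is the disruption.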
Using the Model to Explain the Sources of Disruption
Claimants are seeking to recover for “lost productivity.” Analyses with the model not only can quantify the total impact of claimed events, but also can be used to explain the sources of lost productivity. Because we have full project simulations of what actually happened and what would have happened, these can be thoroughly compared to explain all differences between those two scenarios. The would-have simulation describes not only how things would have been different, but also why.
For example, on a perfectly orderly project, work will be performed as planned in an orderly sequence. However, work may be driven out of sequence by a number of factors, including schedule pressure and design changes. The more work is done out of the ideal sequence, the more productivity on that work will suffer.
In Exhibit 18, we show the productivity multiplier (that is, a factor that dynamically affects the prevailing productivity over the course of a project) from out-of-sequence work, as the project actually occurred and as it would have occurred in the absence of a set of customer changes. When this multiplier equals 1.0, it has no effect on productivity. When it equals 0.75, it is reducing productivity to 75% of what it would otherwise be (i.e., it reduces productivity by 25%).
Note that in the absence of the customer changes, this condition would not have been ideal, but it would have been better, and better only to the extent causally traceable to the impacts tested.
This, and other, factors that impact productivity (and rework) in engineering also affect the downstream work in construction. The later (and lower quality) engineering product makes work less productive in construction than it would have been. Reduced productivity drives up labor requirements. Exhibit 19 shows the resulting labor profiles in construction.
An example of a secondary (or knock-on) effect in construction is the effect from physical crowding, shown in Exhibit 20. The higher headcount requirements increase crowding and further reduce productivity—a condition that improves (although, again, not to perfection) in the “would-have” condition.
All differences between these two simulations can be traced back directly to the differences in inputs. With two full program simulations (for what actually happened and what would have happened), we can make detailed comparisons and explain the sources of performance difference. We can learn, for example, that 25% was due to the direct changes, 75% was due to disruption. Of the disruption, 20% was due to increased out-of-sequence work, 15% to increased overtime, and so on.
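A decomposition like the 25%/75% example above can be sketched as a simple comparison of per-factor hours between the two simulation runs. The hour totals below are fabricated solely to reproduce the illustrative percentages in the text; in practice they come from the model's logged productivity penalties:

```python
# Illustrative decomposition of the full impact into sources.
# Hours by factor from the two runs (hypothetical numbers chosen to
# mirror the 25% direct / 20% out-of-sequence / 15% overtime example).
as_occurred = {"direct_changes": 5000.0, "out_of_sequence": 5000.0,
               "overtime": 3800.0, "skill_dilution": 9200.0}
would_have = {"direct_changes": 0.0, "out_of_sequence": 1000.0,
              "overtime": 800.0, "skill_dilution": 1200.0}

# Impact per factor is the difference between the runs; everything
# beyond direct_changes is disruption, attributed by source.
impact = {k: as_occurred[k] - would_have[k] for k in as_occurred}
total = sum(impact.values())
shares = {k: v / total for k, v in impact.items()}
# shares: direct_changes 25%, out_of_sequence 20%, overtime 15%, rest 40%
```

The point of the exercise is traceability: each share is computed from quantities that differ between the two runs only because of the removed events, so every percentage can be audited back to its cause.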
Once the model has been customized to a particular project, running alternative scenarios is straightforward. Should a fact-finder be interested in including or excluding a different set of unplanned events and conditions (e.g., stemming from a different entitlement finding), we can assist the fact-finder by simulating the alternative would-have scenario.
Best of all, we can assist contractor and customer to manage change impacts proactively by enabling analysis of proposed changes before they are agreed. Customers can choose to, or not to, proceed with proposed changes with better understanding of their full costs. And contractors can test mitigating actions to find ways to reduce the overall cost of changes before they have a chance to ripple out of control and find their way to court.
Proving disruption in court has long been a problem. The particular challenges are in proving causality and the quantity of resultant injury. The methods traditionally used do not adequately address the need to explain the causes of reduced productivity and increased rework on a disrupted project.
Dynamic Simulation Modeling Addresses the Needs and Requirements of Fact-Finders
• Explains the causes of variations in productivity and rework
• Assesses what would have happened under alternate conditions
• Provides a causal map that cogently ties a resulting effect to the precipitating event, even though portions of the effect may be far downstream
• Accounts for and explains the synergy among the individual claimed events that can result in a much larger than expected overall impact
• Explains why productivity trajectories and rework creation would have been affected, how much and for how long
• Accounts for other concurrent events that might also influence project performance, such as strikes, difficult labor markets or mismanagement.
Providing the fact-finder with a tool that fairly and equitably quantifies impacts, dynamic simulation modeling offers a superior means of quantifying project disruption. Indeed, when used in proactive applications, this same capability offers project management the opportunity not only to foresee the full impacts of changes, but also to explore, via simulation, ways to mitigate those impacts. This, of course, represents the most effective means of disruption impact cost recovery—cost avoidance.
References
Air-A-Plane Corporation, ASBCA No. 3842, 60-1 BCA § 2,547 (1960).
Cooper. 1980, Dec. “Naval Ship Production: A Claim Settled and a Framework Built.” Interfaces, Vol. 10, No. 6.
Cooper. 1993, Feb. “The Rework Cycle: Why Projects Are Mismanaged.” PMNETwork; and “The Rework Cycle: How It Really Works…And Reworks….” PMNETwork.
Cooper. 1998. “The Four Failures: Systemic Sources of Project Problems.” For publication by the Project Management Institute.
Cooper. 2002. “Learning to Learn, from Past to Future.” International Journal of Project Management 20, pp. 213–219.
Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
DCAA. Defense Contract Audit Agency, Audit Guidance.
International Aircraft Services, Inc., ASBCA No. 8389.
Jones, Reginald M. 2001, Fall. “Lost Productivity: Claims for the Cumulative Impact of Multiple Change Orders.” Public Contract Law Journal, Vol. 31, No. 1.
Kumho Tire Company v. Patrick Carmichael, 526 U.S. 137 (1999).
Modern Foods Inc., ASBCA No. 2090, 57-1 BCA § 1229.
Stephens, Craig A., James M. Lyneis, and Alan K. Graham. 2002. “System Dynamics Modeling in the Legal Arena: Special Challenges of the Expert Witness Role.” To be published.
Stephens. Admissibility of Expert Testimony Based on Analysis of Complex-Project Dynamics—Implications of the Daubert and Kumho Tire Rulings. To be published.
Sun Electric Corporation, ASBCA No. 13031, 70-2 BCA § 8371.
Wunderlich Contracting Co. v. United States, 351 F.2d 956, 173 Ct. Cl. 180 (1965).
Proceedings of the Project Management Institute Annual Seminars & Symposium
October 3–10, 2002 • San Antonio, Texas, USA