Extreme over-runs and what we can learn
This paper draws conclusions about how complex projects behave, and how we can manage them better, from detailed analysis of a number of projects that badly over-ran. The explanations use models that show systemic effects within the projects: effects that derive not from individual parts of the project but from their interactions, causing the projects to behave in ways that are difficult to predict and sometimes counterintuitive. The paper discusses the important aspects that need to be taken into account, uses the models to explain why some projects overspend catastrophically, and then looks at the implications for how we manage projects.
The paper is divided into two. The first half covers principles and concepts: explaining what it is about projects that makes them complex, describing systemic effects within projects, showing the “soft” nature of some of the important effects, discussing how these effects inter-relate, and particularly when they create positive feedback. A process for deriving models from messy real-life projects will also be described. The second half of the paper will ask – so what? It will look at the implications of the work for our project-management practice, in particular looking at how we analyse project risk, how we undertake lessons-learned activities, different ways we need to think when managing projects, and also some implications for how we see the Guide to the Project Management Body of Knowledge (PMBOK® Guide).
The author of this paper is a member of a team at Strathclyde University, consisting also of Colin Eden, Fran Ackermann and Susan Howick, which has been working in post-project litigation for 12 years. The team has been involved in identifying, modelling and quantifying “Delay & Disruption” (D&D) in projects that have gone significantly wrong, even though apparently well-managed by traditional project-management methods. It is a mixed team using mixed techniques, useful for understanding how projects behave. The team has prepared (with good client satisfaction) significant parts of nine major claims, with a total value of around $1.5 billion, mainly for engineering projects, spread around Europe, Canada, the UK and the US. The claims entail a detailed analysis of project behaviour, in which the team needs to trace causal linkages from actions taken, through the dynamic behaviours set up in the project, and to understand and quantify the resulting effects. All of the projects were characterised by “messiness”: the participants didn't really understand how the projects had turned out the way they had, so the team needed to map out carefully the structure of causality that led to the final project out-turn. Having done this in detail for a number of projects, and having looked more briefly at many more, the team has learned many lessons about the behaviour of complex projects, has undertaken research with organizations on strategies for avoiding systemic delay-and-disruption-type problems in projects, and has also developed new approaches to risk assessment and management.
In order to understand the behaviour of projects that behave unexpectedly, effects must be looked at not as individual, isolated influences which can be studied independently, but as they combine together to produce “complex” behaviour. Simon (1962) defined a complex system as: “One made up of a large number of parts that interact in a non-simple way. In such systems the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts and the laws of interaction, it is not a trivial matter to infer the properties of the whole… A complex system is one in which the behaviour of the whole is difficult to deduce from understanding the individual parts” (p. 468). This highlights a key paradox: while traditional project-management techniques (which decompose a project in a structured way into manageable sub-sections that together encompass all of the content of the project) have proved successful, they form a significant hindrance to effective understanding of complex projects, whose behaviour is more than the sum of their parts and whose reaction to changes in the environment is difficult for the human mind to predict.
There is not scope in this paper to give a full discussion of what constitutes “complexity” in a project, but Williams (1999) proposes that project complexity can be characterised by two dimensions, each of which has two sub-dimensions. Different authors argue about the definitions and measurement of these dimensions (and the definition of the word “complexity”), but similar divisions are well represented in the literature.
- The first dimension is structural complexity (Baccarini, 1996) made up of the size, or number of elements in the project, and the interdependence between these elements; “elements” are particularly organisational (parts of the organisation, plants, number of partner companies, the supply chain), but can also reflect the complexity of the work breakdown structure. We shall see that reciprocal dependencies particularly contribute to complexity as they allow feedback relationships to develop.
- The second dimension is uncertainty: both uncertainty in project goals, and uncertainty in the means to achieve those goals (Turner & Cochrane, 1993). This is uncertainty not in the “aleatoric” or probabilistic sense, but uncertainty in the “epistemic” sense, a lack of complete knowledge of how to carry the project out (or exactly what the project is aiming towards). Uncertainty and then changes to project goals will not only usually produce more complex products (therefore more complex projects), but the fact that changes are made mid-project will increase the structural complexity of the project.
It appears to be felt generally that project complexity is increasing. Products being developed are becoming more complex, and technology is changing faster. Projects are tending to become more time-constrained, and the ability to deliver a project quickly is becoming an increasingly important element in winning a bid, with an increasing emphasis on tight contracts, often with heavy liquidated damages for lateness; this enforces parallelism and concurrency, which increases project complexity further.
Systemic effects in projects
There are two ways in which the combination of effects can become more than the sum of its parts. The first is called the portfolio effect, or the “2+2=5” effect. For example, a number of small delays in a North Sea oil-platform project can cause a project to miss a weather-window and cause significant delay. In some examples the portfolio effect is not so obvious: for example, a succession of Change Orders on a project which collide and, as management tries to deal with them, their effects compound each other.
A typical construction project serves as an example: work to design and construct a large piece of plant, such as a power plant or process plant, where the end-date is fixed, with heavy Liquidated Damages for lateness. This example is based on a real project (Williams, 2002). In this project, the customer delayed the design process by dithering over design approvals, and also interfered with the designers during their work in a variety of ways; together these delayed the design. The relationships between effects can be illustrated by means of “cause maps”, drawn simply as concepts in boxes, linked with arrows showing the direction of causality. The delay in design naturally caused the construction work to be late, which meant both that workers were carrying out unplanned work in winter and that work had to be compressed, so that the site became over-crowded with workers. Exhibit 1 shows these effects in a causal map.
But the delay to design was not uniform across the whole design process: some elements were heavily delayed with others hardly delayed at all. So the drawings and plans went to construction in the wrong order, which exacerbated the decrease in construction workers' productivity. And the effect of construction productivity declining when the end-date is fixed was to increase the delay to construction, resulting in the feedback loop shown in Exhibit 1:
Exhibit 1: construction project
This, then, is the second, and much more important, problematic way in which effects can combine: into feedback loops. A causal chain where Effect A causes or exacerbates or promotes Effect B, which causes or exacerbates or promotes Effect C, and so on, can generally be captured easily by standard methods. However, where Effect C also leads back to Effect A, this gives a feedback loop. These loops can be positive, so that each effect tends to increase itself and the project spirals (such loops are called “vicious circles” if the effect is unwanted, or “virtuous circles” if it is desirable). Alternatively they may be negative loops, in which an increase in one effect produces a balancing or controlling influence that brings the effect back towards its original value. The “2+2=5” effect has been mentioned; when positive feedback loops are introduced, this becomes a “2+2=6, 7 or even 8” effect, as effects cause themselves to increase in a vicious (or virtuous) circle.
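The escalating effect of a positive feedback loop can be illustrated with a minimal numerical sketch. The numbers and the single "feedback gain" parameter are invented for illustration; this is not a model of any of the projects discussed here. Each period, the compression caused by the current delay feeds some fraction of that delay back as further delay through lost productivity.

```python
# Illustrative sketch only: a vicious circle in which schedule compression
# lowers productivity, and lost productivity adds further delay.
# feedback_gain (fraction of new delay fed back each period) is invented.
def total_delay(initial_delay, feedback_gain, periods=20):
    delay = extra = initial_delay
    for _ in range(periods):
        extra *= feedback_gain   # knock-on delay from lost productivity
        delay += extra           # the knock-on delay accumulates
    return delay

# No feedback: a 10-day slip stays a 10-day slip.
print(round(total_delay(10, 0.0), 1))
# Moderate feedback roughly doubles the slip (the "2+2=5" effect).
print(round(total_delay(10, 0.5), 1))
# Near-unity feedback compounds into a catastrophic over-run.
print(round(total_delay(10, 0.95), 1))
```

With a gain at or above 1 the loop diverges entirely, which is the numerical analogue of a project spiralling out of control.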
A good example of feedback occurs in the frequent situation where a change is introduced to the design process mid-project. This will not only incur the cost of the additional work, it will also incur the cost of deleting and redoing design work, and retrofitting any items already manufactured. These costs can usually be estimated in standard ways. However, there will also be a number of secondary effects: with structural complexity, effects on inter-dependent items in a “ripple-out” effect, an effect on the manufacturing learning-curve, often subjective effects on the designers, and other “delay and disruption” effects. And because of all of these there is also an effect on the project duration – either a time extension or project acceleration. Further, where there are many scope changes, there will be further delays in client approvals; multiple changes to cross-impacting elements (which can be contradictory) in a compounding effect; an increase in product complexity, which can increase cross-relations in parallel activities, which can delay the system freeze; disruption to the design schedule, which means elements are being designed without full specifications; the workforce has to be increased; there can be additional effects in concurrent manufacturing such as increased retrofitting, degradation of learning and so on. Structuring this analysis into a causal map can help understand these complex inter-relationships.
Before we look at the structural inter-relationships, however, we need to note that many of the causal chains include variables that we might think of as “soft”. If these variables are ignored, the feedback loops disappear, so they are often critical for explaining how projects behave. Although such variables might be difficult to model, this does not mean we should ignore them. There are three key sets of human beings in the project system.
The first is the client, who can behave in a variety of ways that can have dramatic effects on projects. The client can propose or insist on scope changes (either funded, or perhaps by interpreting the scope of work in a different way from the contractor); can cause delays, particularly if there is some sort of approval process built into the project; can ask for extra non-productive work (benchmarking, extra studies, etc); or can interfere in the design and build processes; and simply a lack of client-contractor trust can affect the project. All such behaviours can set up significant project dynamics and produce feedback, causing a project to escalate (Eden, Williams, Ackermann & Howick, 2000).
Similarly, the workforce of a project consists of humans who will be affected by events within the project. Motivation, reaction to schedule pressure, exhaustion and morale can all be affected. And while some of these effects might be difficult (if not infeasible) to measure directly, they can have tangible, measurable causes and give rise to tangible, measurable outcomes: for example, “loss of morale” is not really measurable directly, but there may be an indirect relationship between client changes and productivity (as the client keeps changing and changing, the design workforce becomes de-motivated and productivity declines) which can be estimated.
Thirdly, projects have human managers. Many project models ignore the fact that the project is run by an intelligent human being, albeit one with bounded rationality and a view of the world necessarily limited by the need to gather data and make sense of it. We need, firstly, to remember that managers rely on measurements and perceptions; data always represents some particular perception of reality, and these perceptions influence the behaviour of the project; this is exacerbated by the difficulty in many domains of estimating (say) how much of a task is left to do. More generally, models which don't take account of management decision-making are often useless, and we will look at a particular example later. Crucially, in the context of catastrophic over-running projects, management will take action in response to project slippage: either they will adjust the schedule, or they will attempt to accelerate. They could accelerate by making individual activities shorter (“crashing”), for example by using more manpower (with problems of churning, overcrowding etc), or by carrying out activities earlier than planned (such as working on items for which the surrounding system is not yet frozen, increasing the use of parallel activities, starting to commission work not yet completed, and so on). All these actions, we might note, increase the project's structural complexity. But more importantly they can set up dynamics and produce positive feedback.
Many of the aspects discussed above can be allowed for by experienced project managers (although they are not always easy to measure). But problems go beyond straightforward analysis where the aspects combine, particularly when they produce feedback. This is where catastrophic failure comes from: project perturbations create interactions that feed on themselves, so unplanned events in a project cause “vicious cycles” and a portfolio effect or “systemicity”. Project managers usually have to respond to disruption by taking decisions which seek to retain planned delivery and planned quality, i.e. they need to accelerate. The consequence of such actions is to increase the power of the vicious cycles, because these actions are themselves disruptions that, in turn, must be contained within a shorter timescale. To some extent these consequences can be traded: extending delivery may reduce some of the consequences of disruptions and delays, whereas reducing delivery delay by further acceleration brings more disruption and the cost of extra labour hours.
A good example of such feedback is given in the “Shuttle Wagon” case described in Williams, Eden, Ackermann, and Tait (1995). The project became delayed for various reasons, but was subject to very tight timescales, so activities had to be performed more in parallel than they should have been; the increased cross-relationships between activities caused activities to take longer, thus exacerbating the delay. At the same time, the increased cross-relations delayed the system freeze, so that management had to begin work on items for which the specification was not yet frozen; this inevitably led to increased re-work as some design guesses turned out to be wrong, again exacerbating the delay. The increased re-work and the increased interrelationships between activities increased the amount of work to do, eventually exhausting the supply of experienced design personnel, yet again exacerbating the delay. These were only a few of the effects, but even putting these together shows a causal map with many nested positive feedback loops (see Exhibit 2). It should be noted, however, that two of the concepts on this map (coloured blue in Exhibit 2) are not necessary results of the causes, but are management decisions, and it is these (necessary and unsurprising) decisions that exacerbated the positive feedback loops.
Exhibit 2: “Shuttle” project
In conclusion then, the key results of the analysis are that:
- The explanations for project behaviour derive from systemic inter-related sets of causal effects
- The key results derive from dynamics set up by these effects turning into positive feedback loops, which produce catastrophic over-runs
- Many loops are set up or exacerbated by management responses to project perturbations.
The Strathclyde Process
The results described above are not unique to Strathclyde; similar results derive, for example, from the work of Cooper and PA (Graham, 2000), and readers might well be familiar with work such as the $2,000 hour (Cooper, 1994). The Strathclyde team, however, presents a structured process to move the project analyst from the real, “messy”, difficult-to-understand world to a quantitative model. This process was used, roughly in this form, on the Shuttle project (see Ackermann, Eden & Williams, 1997), but has been refined since. The process essentially has four distinct steps:
(i) “Cognitive mapping” is used in interviews with managers to capture the explanations each gives for the behaviour of the project; in general this uses specialist computer software previously developed by Eden and Ackermann, called “Decision Explorer” (see www.banxia.com).
(ii) The cognitive maps are then combined into a single “cause map”, a holistic representation of the project's life combining the views of the team, which is validated with groups of senior members of the project team (using visual interactive software, including “Group Explorer”, again by Eden and Ackermann; see www.phrontis.com).
(iii) Formal analysis of the cause map identifies the key elements, the feedback loops and the “triggers” that started the behaviour off, and reduces the map to its bare bones; this leads to a reduced map, an “Influence Diagram”.
(iv) The final step is the transformation of the Influence Diagram into a quantified simulation model, using “System Dynamics” (SD). In fact, two SD models are built: one represents the project as it would have occurred had it not been perturbed (the “as bid” model); the transformed Influence Diagram is then overlaid onto this model to represent what actually happened.
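The loop-identification in step (iii) can be sketched computationally: a cause map is a directed graph of concepts, and its feedback loops are the graph's simple cycles. The concept names below are invented for illustration (loosely echoing the Shuttle example), and the plain depth-first search is only a sketch, not the analysis actually performed by Decision Explorer.

```python
# Hypothetical cause map: each concept maps to the concepts it influences.
cause_map = {
    "delay":              ["parallel working"],
    "parallel working":   ["cross-relations"],
    "cross-relations":    ["delay", "late system freeze"],
    "late system freeze": ["rework"],
    "rework":             ["delay"],
}

def find_loops(graph):
    """Enumerate simple cycles by depth-first search from each node,
    reporting each loop once by only extending paths through nodes that
    sort after the starting node."""
    loops = []
    def dfs(start, node, path):
        for nxt in graph.get(node, []):
            if nxt == start:
                loops.append(path[:])            # closed a feedback loop
            elif nxt not in path and nxt > start:
                dfs(start, nxt, path + [nxt])
    for start in sorted(graph):
        dfs(start, start, [start])
    return loops

for loop in find_loops(cause_map):
    print(" -> ".join(loop))
```

Each cycle found corresponds to a feedback loop that would be carried forward into the Influence Diagram.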
This clear process for producing the final quantitative model has two important advantages (Williams, Ackermann & Eden, 2003):
- It enables subsets of perturbations and “what-if”s to be examined to see how the project would have turned out had only certain events occurred
- It gives a transparent and auditable route, so that participants accept the model, and the elements of the model can be traced to their origins. Various validity checks and triangulation tests are carried out to ensure the model is robust and represents what actually happened.
So what? – Project Risk Management
So how does this work help us in our practice of project management? This paper outlines four areas where it can make a difference to our practice, the first being project risk analysis and management.
As described in Williams, Ackermann, and Eden (1997), Project Risk Management is usually based around a Project Risk Register (PRR), which acts as a repository of knowledge and as a foundation for the analysis and management that flow from that knowledge. But this firstly assumes that the risks in the PRR are independent, whereas there are clearly many inter-connections, some obvious, some subtle, leading to systemicity and feedback loops. It also assumes that the PRR is all-inclusive, yet the mechanism is not at all an aid to creativity in establishing the risks. So the PRR is inadequate both for the capture and representation of the risks, and as a basis for analysis and decision-making. Cause mapping, on the other hand:
(i) Helps individuals in risk analysis: it enhances clarity of thought, allows investigation of interactions, helps to “spark off” new thoughts and to identify synergistic actions, and to categorise;
(ii) helps groups in risk analysis: it brings out interactions between managers, helps to surface cultural differences, helps people contribute different knowledge, and gives rise to a richer set of knowledge;
(iii) and analysis of the cause maps provides useful information, particularly in the identification of loops, but also various other analyses that can be performed.
The PRR is a powerful tool, but it can be too restrictive. Mapping can help gather the risks and can also represent systemicity: mitigation actions are important and we have to take account of them; feedback loops and systemicity must be recognised, analysed and controlled; and we need to guard particularly against risks that have the potential to set up positive feedback loops. These ideas have already led to a significant change in the way one manufacturer identifies risks before a project bid is made (Williams, Ackermann, Eden & Howick, 2005).
Another important point made in Williams (2004) is that conventional network Monte-Carlo simulations are inadequate, as they do not model the actions management would take in response to a late-running project. Such simulations usually result in unreasonably wide probability distributions, reducing credibility, as they ignore the actions of management to bring projects back on-time. And in particular, they ignore any chain of causality that can subsequently give a positive feedback loop which could potentially cause major overspends.
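The point can be made with a toy Monte-Carlo sketch. This is a hypothetical two-activity project with invented numbers and an invented response rule, not any real simulation from the claims work: when the simulation includes a simple management response of crashing the second activity after the first slips, the distribution of finish times narrows markedly compared with a simulation that lets slips pass straight through.

```python
import random

def simulate(n=10_000, respond=True, seed=1):
    """Toy project of two sequential activities, each planned at 50 days
    with uncertain duration. With respond=True, management crashes the
    second activity to recover slippage on the first, at the price of
    some disruption (both coefficients are invented)."""
    random.seed(seed)
    finishes = []
    for _ in range(n):
        a = random.gauss(50, 10)   # first activity
        b = random.gauss(50, 10)   # second activity
        if respond and a > 50:
            slip = a - 50
            b -= 0.5 * slip        # acceleration recovers half the slip...
            b += 0.1 * slip        # ...but the disruption costs some time
        finishes.append(a + b)
    return finishes

def spread(xs):
    """Width of the 10th-90th percentile band of project finish times."""
    xs = sorted(xs)
    return xs[int(0.9 * len(xs))] - xs[int(0.1 * len(xs))]

print(round(spread(simulate(respond=False)), 1))  # wide, low-credibility band
print(round(spread(simulate(respond=True)), 1))   # narrower band with response
```

Even this crude response rule narrows the distribution; conversely, omitting the response (as conventional network simulations do) inflates the spread, and omitting the disruption cost of the response hides exactly the kind of feedback that causes major overspends.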
So what? – Learning from projects
Given these explanations of how projects behave, simple data does not always provide the understanding we need when trying to draw lessons from projects: counter-intuitive effects such as feedback and the compounding of individual effects are difficult to comprehend or predict intuitively. We must take a systems perspective on what happened in the project, and learn not only the simple and obvious lessons, but also the lessons that derive from the more complex, possibly non-intuitive behaviours of our projects. Often projects appear to participants to be simply a “mess”; a cause map provides convincing (qualitative) explanations of the overall results, the resulting maps provide a mine of information about different events in the project, and abstraction of the maps provides a natural foundation for quantitative analysis. It is only by taking such a view that project post-mortems can play the role they should in contributing to “learning organisations”, and in particular in aiding the pre-project risk analysis of succeeding projects.
So what? – Management actions
There are general lessons to be learned from these models about how management should act when projects start to be perturbed or go wrong. All actions have benefits and disbenefits, and project managers need to consider both. They need to be sensitive to the effects of actions: not just the immediate effects, but the causal chains that can be set up (remembering to include “soft” factors). In particular, they need to be sensitive to the possibility of setting up positive feedback, so they can seek to break or prevent the feedback effects. Humans have difficulties in identifying and thinking through feedback loops (Sterman, 1989), and as we have seen, sometimes apparently right actions might not be best (they might even be counter-productive). This brings us onto the last point.
So what? – The Guide to the Project Management Body of Knowledge (PMBOK® Guide)
We have given above some explanations for project behaviour: what does that mean for our understanding of The Guide to the Project Management Body of Knowledge, the PMBOK® Guide? The PMBOK® Guide claims to be self-evidently correct (it is normative), but our track record of project management is mixed, and some claim that project-management methods are counter-productive.
There is a common view that there is no underlying theory of the PMBOK® Guide. However, there are three underlying assumptions: that project management is rationalist and normative; that it assumes a positivist ontology (reality is “out there”); and that project management is all about managing scope. This has led to three emphases in the PMBOK® Guide: a heavy emphasis on planning (10 of the 13 PMBOK® Guide processes concern planning); an implication of a conventional control model (where progress is continuously assessed against the project plan, and management decides and commands); and an emphasis on management of the project generally decoupled from its environment (i.e. the project manager manages according to the plan, and changes to the plan should be rare).
The explanations given above for project behaviour clash with the assumptions underlying the PMBOK® Guide:
• Concerning the third assumption, systemic models show behaviour arising from the complex interaction of parts of the project, so traditional models decomposing the scope of the project can be inadequate;
• Concerning the first assumption, project behaviour and its reaction to actions taken can be complex and non-intuitive; project management techniques are not necessarily self-evidently correct.
• Concerning the second assumption, “soft” factors not recognized by a hard ontology can be important links in the chains of causality that set up feedback loops so can be critical determinants of project behaviour.
The PMBOK® Guide is suited to projects with many elements, but where the types of interdependencies are limited, or to projects without high uncertainty. However, it is when uncertainty affects a structurally complex and time-compressed project that systemic effects start to produce problems. In these circumstances, an apparently self-evidently correct set of project management procedures is unhelpful.
New project-management methodologies have developed recently in which the project “emerges” rather than being entirely pre-planned (usually identified by the words “agile” or “lean”); IT has been a particular domain in which these have been developed (e.g. RAD, ASD, Extreme Programming, etc). These methodologies are quite contrary to the emphases in the PMBOK® Guide: the project is not entirely pre-planned, the management style is more cooperative (there is a recognition that what the project manager commands might not happen as expected), and there is more acceptance of the external environment.
Thus we might expect that different management styles would be suitable for different types of project. For projects that are structurally complex and uncertain, particularly where time-compressed, the normative PMBOK® Guide can be disadvantageous, and systemic modelling work explains how and why this is so. But we should not simply ignore the PMBOK® Guide and move to contrary techniques; rather, we need to define project-nature metrics that indicate which projects are amenable to PMBOK® Guide-style management and which have a propensity for systemic effects, so that an appropriate management style can be specified. We then, of course, have to define how the project-management philosophy should be adjusted in the light of those metrics. This is ongoing work at present.
References
Ackermann, F.R., Eden, C.E. & Williams, T.M. (1997) Modelling for litigation: mixing qualitative and quantitative approaches. Interfaces 27, 48-65
Baccarini, D. (1996) The concept of project complexity - a review. International Journal of Project Management 14, 201-204.
Cooper, K.G. (1994), The $2,000 hour: how managers influence project performance through the rework cycle, Project Management Journal 25, 11-24
Eden, C.E., Williams, T.M., Ackermann, F.R. & Howick, S. (2000) On the nature of Disruption and Delay (D&D) in major projects. Journal of the Operational Research Society 51(3), 291-300
Graham, A.K. (2000) Beyond PM 101: lessons for managing large development programs. Project Management Journal 31, 7-18
Simon, H.A. (1962) The architecture of complexity. Proceedings of the American Philosophical Society 106, 467-482
Sterman, J.D. (1989), Modelling of managerial behavior: misperceptions of feedback in a dynamic decision making experiment, Management Science, 35, 321-339
Turner, J.R. & Cochrane, R.A. (1993) Goals-and-methods matrix: coping with projects with ill defined goals and/or methods of achieving them. International Journal of Project Management 11, 93-102
Williams, T.M. (1999) The need for new paradigms for complex projects. International Journal of Project Management 17(5), 269-273
Williams, T.M. (2002) Modelling Complex Projects. Chichester, UK: Wiley
Williams, T.M. (2004) Why Monte-Carlo simulations of project networks can mislead. Project Management Journal 35 (3), 53-61
Williams, T.M., Ackermann, F.R. & Eden, C.L. (1997) Project risk: systemicity, cause mapping and a scenario approach. In, K.Kahkonen & K.A.Artto (Eds) Managing Risks in Projects. London: E&FN Spon, pp 343-352.
Williams, T.M., Ackermann, F.R. & Eden, C.L. (2003) Structuring a disruption and delay claim. European Journal of Operational Research 148, 192-204
Williams, T.M., Ackermann, F.R., Eden, C.L. & Howick, S. (2005) Learning from Project Failure. In, P.Love, Z.Irani, and P.Fong (Eds) Knowledge Management in Project Environments. Oxford, UK: Butterworth-Heinemann.
Williams, T.M., Eden, C.L., Ackermann, F.R. & Tait, A. (1995). Vicious circles of parallelism. International Journal of Project Management 13(3), 151-155.
©2005, Terry Williams
Originally published as a part of 2005 PMI Global Congress Proceedings – Edinburgh, Scotland