The use of project post-mortems

Introduction

The need for organizations to be “learning organizations” (Senge et al. 1994) is rightly emphasized these days, and there is a particular need for project-based organizations, if they are to flourish, to learn from one project to the next. Yet in practice, in individual companies, project review processes are rarely in place, project failure and success are rarely analyzed, and learning simply doesn't happen. Frequently this is because the next bid and the next project are too pressing and urgent to leave time to reflect. Often it is because no standard methods are in place for analyzing projects. And where post-project reviews have been performed in the past but companies have not found them helpful or useful, there is no motivation to carry them out for projects now completing.

Even when post-project reviews are performed, there are no standard, structured, routine ways of analyzing projects to ensure that the organization can draw lessons and learn for future projects. For simple, clear-cut projects, the few words of advice in the PMBOK® Guide may suffice: “The causes of variances, the reasoning behind the corrective action chosen, and other types of lessons learned should be documented so that they become part of the historical database …” [PMBOK® Guide (Project Management Institute 2000), section 4.3.3.3]. But for more complex projects, how do we learn these lessons? Collecting the data on what happened isn't the main problem (although it can be sometimes); it is gaining understanding about what went wrong (or right) and why. Gordon MacMaster (2000) recently said, “It's not that there is a lack of data. The entire data warehousing industry is booming because of data's abundance. Are repositories of data also meaningful historical information? The answer is obviously no.” He goes on to say, “Data repositories reflect the objective parameters of a project and the facts, figures, and approved documents … It takes a significant effort to sort through the official history … The effort involved is a significant deterrent to using historical data in project planning, regardless of the quality of the data repositories.” And this assumes that the official history is correct and fully documented; how much more difficult it is in the (real) situation when data is contradictory or ambiguous.

This paper describes the work of a team that has been involved in post-mortem analyses of a range of projects over the past nine years. We have carried out these analyses for one reason that always grabs a project team's attention: preparation of post-project claims, in our case particularly Delay and Disruption (D&D) claims (Eden et al. 2000). In doing this, the process used has proved successful in drawing lessons both specific to the particular project or company and generic to projects in general, and this paper briefly describes the types of techniques used.

The primary aim of this paper, however, is not to claim that this is the best way of carrying out post-mortems, but to start a debate on whether we think post-mortems are important, why we don't carry them out in practice, and how to gain understanding (rather than simply data) about the way projects turn out—both the easy lessons and the more complex non-intuitive behaviors of our projects.

The Work at Strathclyde

The team at Strathclyde, led by Prof. Colin Eden, has been involved in post-mortem analysis of a range of projects over the past nine years, mainly for the purposes of preparing D&D claims. The first claim undertaken by the team involved the Channel Tunnel “Shuttle” train-wagons (Williams et al. 1995) and was carried out by the first three authors (led by the second author, Prof. Eden, as team director); the first author has a background in project management, but the second and third had already developed specific sophisticated techniques (backed up by purpose-built software) to elicit and structure complex structures of causality, which were immediately appropriate for establishing the story of causality in a project. The Shuttle claim, and all subsequent claims, have been characterized by the use of these techniques, combined with specific simulation and other techniques to ensure that the lessons learned were quantitative (Ackermann et al. 1997 describe this synergistic combination of “soft” qualitative and “hard” quantitative analysis). Over the past nine years, the team has worked on projects in a range of industries, including railway rolling-stock, aerospace, civil engineering, and shipbuilding, situated in continental Europe, Canada, the U.S., and the UK. The total value of the claims in which the team has been involved is now around Can$1.5bn, and the claims have so far met with a significant degree of success. From these analyses of specific projects, the team has also built up a considerable understanding of what D&D is, and of why projects behave in the (sometimes counterintuitive) ways that they do (Eden et al. 2000).

The claims all entail a detailed analysis of the behavior of large complex projects. The team thus needs to be able to trace complex sets of causal links from actions taken by the parties, through the dynamic behaviors set up within the project, to understanding and quantifying the resulting effects. Only in this way can the effects of the actions taken by the client (and contractor) be tracked through all the complex interacting parts of the project and the outcome of the project thus explained. The projects are all characterized by “messiness”: root causes of problems are often unclear, causality is unclear, and many of the effects observed are counterintuitive. In fact, as we shall see, many of the effects are driven by features that are difficult to evaluate intuitively: complex systemic interrelationships and feedback loops among the effects, controlled or exacerbated by management actions taken to counteract problems (such as accelerating a delayed project).

In such situations, we have found that mapping out the structure of causality is essential to understanding the behavior of the project. The team uses specialist decision-support and group-decision-support software tools, developed through the work of Eden and Ackermann, with teams of managers. This has been found to be highly time-efficient, using people's time most effectively, and the software enables the complexity and size of these maps to be effectively managed. The qualitative maps developed in this way also form a natural basis for developing the quantitative models. The process is described in more detail below.
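
To make this concrete before describing the process, the fragment below sketches one plausible way of holding such maps in software: signed directed graphs whose nodes are the concepts elicited from managers and whose edges are causal links, with individual maps merged into an overall map. The concept names, the edge signs, and the use of Python with the networkx library are purely our illustrative assumptions; this is not Decision Explorer's actual data model.

```python
# A minimal sketch of cognitive maps held as signed directed graphs,
# and of merging two managers' maps into one cause-map. Concept
# names and edge signs are invented for illustration.
import networkx as nx

def make_map(links):
    g = nx.DiGraph()
    # Each edge reads "tail influences head"; sign=+1 means the link
    # increases the effect, sign=-1 means it decreases it.
    for tail, head, sign in links:
        g.add_edge(tail, head, sign=sign)
    return g

manager_a = make_map([
    ("client design changes", "engineering rework", +1),
    ("engineering rework", "engineer workload", +1),
])
manager_b = make_map([
    ("engineer workload", "design errors", +1),
    ("design errors", "engineering rework", +1),
])

# Composing the two maps merges concepts with identical wording; in
# practice the merging of near-identical ideas is a careful, skilled
# facilitation step, not a mechanical union like this one.
cause_map = nx.compose(manager_a, manager_b)
print(cause_map.number_of_nodes(), "concepts;",
      cause_map.number_of_edges(), "causal links")
```

Real claim maps are of course far larger than this toy example, which is precisely why software support and formal analysis matter.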

This experience has led to many lessons being learned about the behavior of complex projects in general, and in particular about projects in the domain of the contractor for whom most of this work has been carried out (see, e.g., Williams et al. 1995). Some of these lessons have been incorporated into a sophisticated business game played by senior executives to help instill the learning (Williams et al. 1996), as described below. Furthermore, the experience of these claims led the team's major sponsor to award the team a research contract (of around Can$100k per annum) to investigate how to reverse-engineer this knowledge into methods and tools that could help assess and manage risk in large complex projects. Clearly, the learning from these projects was perceived as valuable: even though the post-mortem analyses were carried out for the purposes of claims rather than for learning, both sets of lessons have been made available for future projects.

Mapping Complex Projects

Pitagorsky (2000) draws lessons from recording at the event-level and the project-level, then says: “once recorded, the project-level record … can be viewed along with other projects. If there are patterns—such as chronic lateness, chronic disputes regarding quality—the causes of the patterns can be analyzed using the event-level record (change requests and resolutions, status reports) within the projects. … Learning from experience begins with recording each relevant event.” But this simply isn't realistic with complex projects, nor will it give the understanding that we require. As we have already said, post-mortem analysis is often easy for simple projects where the problems encountered are clear-cut; complex projects, however, by their very nature (Williams 1999) exhibit behaviors whose causality is not clear-cut—indeed, often counterintuitive. So for complex projects, the simple guidance of listing what happened is not sufficient: we need methods that can capture the complexity of the events and their causality, and models that can explain why the outcomes occurred as they did.

The process carried out in the Strathclyde work is based upon the work of Eden and Ackermann, and consists of four stages. There isn't space here to explore the process fully (it is also explained in Williams et al. 2000), but the stages can be summarized as follows:

• Firstly, “cognitive mapping” is used to interview managers and capture the explanations given for the various circumstances of the project, and also to capture existing relevant project documentation. This technique structures the way in which humans construe and make sense of their experiences, and specialist computer software called “Decision Explorer” and “Group Explorer” (developed by Eden and Ackermann) is used to record and analyze the extensive maps developed. Eden (1988) gives a general description of this technique and its theoretical background, and Ackermann et al. (1997) discuss its use within the “Shuttle” study.

• Each person's cognitive map is input, and in the second stage these are then combined (through cross-relationships and the merging of identical ideas) into a single “cause-map,” which gives an overall representative view—a holistic representation of the project's life. This model is developed and validated, working in a visual interactive mode with groups of senior members of the project team. This includes researching and further exploring conflicting or ambiguous parts of the map. This is a key stage of the process, and requires sophisticated facilitation skills; it is described further in the literature (Eden & Ackermann 1992; Ackermann & Eden 2001).

• An idea of such maps is given in Exhibit 1, which (to preserve confidentiality) shows very generalized concepts and their causal relationships; the picture is abstracted from Williams et al. 2000. This is a very simple example, yet even here there are 16 different feedback loops.

• Formal analysis of the cause-map, using the analysis methods within the Decision Explorer software (Eden et al. 1992), is then used to identify the feedback loops and their “triggers” (i.e., the initial causes of the feedback loops, which form the basis of understanding delay and disruption as it is generated by the dynamic impact of events); a toy illustration of this loop analysis is sketched after this list. Reduction of these concepts to the bare bones in this third stage leads to a reduced map (an “Influence Diagram”). The overall structure of loops may still be complicated, but of course this is also true of the dynamics of the real situation—the overall behavior of interconnected and nested feedback loops can be characteristically difficult to discern subjectively, and can produce what many in a project team see as “chaos.”

Exhibit 1. Example of a Cause Map (With Generalized Concepts)


• In order to quantify the lessons learned, the qualitative model must be transformable into a quantifiable model. A quantitative analysis technique that could be argued to follow naturally from the use of cause mapping and feedback is the simulation technique known as “System Dynamics” (SD), one of the techniques that can indeed display the (sometimes counterintuitive) behavior of complex projects (see, e.g., Cooper 1994); a minimal sketch of such a model follows the aside below. First, a model of the project “as it should have been” is built from knowledge of the project as planned. Then the model is developed by overlaying variables derived from the Influence Diagram to show and explain how the project behaved in practice. The Influence Diagram shows the data that must be collected to populate this model. Some of these data will be “hard” data that should be in the project history (such as dates of Design Reviews, numbers of design iterations, learning curves, and so on); others will be softer, more subjective data that might need to be estimated by the managers who operated during the project (such as productivity, or the number of engineering mistakes). All data should be triangulated, and spreadsheets and similar techniques are used to check the reasonableness of the data collected.
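
As promised above, here is a toy illustration of the stage-three loop analysis: enumerating the feedback loops in a signed cause map, classifying each as reinforcing or balancing, and finding the “trigger” concepts that feed them. This is our own re-creation in Python with networkx, not the analysis built into Decision Explorer, and the map itself is invented for illustration.

```python
# Toy loop analysis of a signed cause map: find feedback loops,
# classify their polarity, and list the triggers that feed them.
# The map and all concept names are invented for illustration.
import networkx as nx

g = nx.DiGraph()
for tail, head, sign in [
    ("client design changes", "engineering rework", +1),
    ("engineering rework", "engineer workload", +1),
    ("engineer workload", "design errors", +1),
    ("design errors", "engineering rework", +1),
]:
    g.add_edge(tail, head, sign=sign)

for loop in nx.simple_cycles(g):
    # Polarity is the product of edge signs round the loop; a positive
    # product marks a reinforcing ("vicious circle") loop.
    polarity = 1
    for u, v in zip(loop, loop[1:] + loop[:1]):
        polarity *= g[u][v]["sign"]
    kind = "reinforcing" if polarity > 0 else "balancing"
    print(kind, "loop:", " -> ".join(loop))

    # Triggers: concepts outside the loop from which the loop is reachable.
    loop_set = set(loop)
    triggers = sorted(n for n in g if n not in loop_set
                      and nx.descendants(g, n) & loop_set)
    print("  triggered by:", ", ".join(triggers))
```

On a map like that of Exhibit 1, the same analysis would enumerate all 16 feedback loops mentioned above.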

(As an aside, it has been argued that the use of a simulation methodology such as SD can help overcome problems in perceiving feedback and offer a framework for conceptualizing complex business issues [Graham et al. 1992]. This is why we have used our SD models as the basic framework for the “project management game” mentioned earlier, in which senior managers experience the underlying positive feedback loops in time-constrained complex projects, developing their understanding of self-sustaining escalation and the significance of learning, design changes, client delays, etc. [Williams et al. 1996]. This has proved very successful in educating top management in the generic lessons learned from some of our studies.)
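
To show what the quantitative stage can look like, the sketch below is a deliberately tiny rework-cycle model in the spirit of Cooper (1994): three stocks (work to do, work done, undiscovered rework) integrated week by week. Every parameter value, and the 99%-complete stopping rule, is an assumption we have invented for illustration; it is emphatically not one of the team's claim models.

```python
# A tiny "rework cycle" System Dynamics sketch (after Cooper 1994).
# All parameter values are invented for illustration.

def simulate(workforce=10.0, productivity=1.0, quality=0.8,
             discovery_weeks=8.0, scope=1000.0, horizon=400):
    """Euler-integrate the stocks weekly; return (weeks, labor hours)."""
    work_to_do, work_done, undiscovered = scope, 0.0, 0.0
    hours = 0.0
    for week in range(1, horizon + 1):
        rate = min(workforce * productivity, work_to_do)  # tasks/week
        hours += workforce * 40.0                         # crude effort tally
        # Only a fraction of "completed" work is right first time;
        # the rest becomes undiscovered rework.
        work_done += rate * quality
        undiscovered += rate * (1.0 - quality)
        work_to_do -= rate
        # Rework is discovered gradually and flows back into work to do.
        discovered = undiscovered / discovery_weeks
        undiscovered -= discovered
        work_to_do += discovered
        if work_done >= 0.99 * scope:   # treat 99% done as completion
            return week, hours
    return horizon, hours

base = simulate()
crashed = simulate(workforce=20.0, quality=0.7)  # bigger, more error-prone team
print("baseline: %3d weeks, %7.0f hours" % base)
print("crashed:  %3d weeks, %7.0f hours" % crashed)
```

Running the two scenarios shows the characteristic pattern: doubling the workforce while quality slips does not halve the duration, and it costs considerably more total hours. This is just the kind of counterintuitive result the real models are used to explain and, as in the aside above, to teach.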

As well as capturing the facts and the causality underlying the facts, the interviews and workshops enable the stories of the project to be captured. MacMaster (2000) says, “the most valuable learning about past projects often comes from listening to those few individuals that assume the role of storyteller. One absorbs the context, nuances, and rationale (or lack thereof) behind the project documentation from them … Combining objective project documentation with subjective perceptions about a project is the leap between historical data and historical information.” This process helps to capture just these perceptions. But much more than that, it captures the perceptions in a structured format, to ensure coherence and triangulation of data, and to allow a holistic perspective—a systemic view—to be taken. Furthermore, the natural transition into quantitative simulation allows those lessons to be established quantitatively, and alternative scenarios (what if we'd done this … what if the client had done that …) to be explored.

The Debate

As stated in the introduction, the aim of this paper is not to claim that this is the best way of carrying out post-mortems, but to start a debate on post-mortems themselves:

• Do we agree that post-mortems are important?

• Why don't we carry out post-mortems in practice?

• What is the best way to carry them out—in other words, the way that is both most informative and most efficient? How can we learn from each project for the next project?

As a starting-point for this debate we make four points:

• Data doesn't always give understanding. In particular, counterintuitive effects such as feedback and the compounding of individual effects are difficult to comprehend, let alone predict, intuitively. It is necessary to take a systems perspective of the project and what happened, and systems modeling can help to demonstrate such effects.

• We need to learn not only the easy lessons (“we neglected configuration control and that got us into a mess, costing $x to resolve”), but also the lessons that derive from the more complex, possibly non-intuitive behaviors of our projects: “we doubled our workforce on this project but it only yielded 5% extra output” (an example of the “$2000 hour” [Cooper 1994]), or “we tried to compress the project by moving ahead before client approval was gained, but it doubled the man-hours spent while only saving us a couple of weeks” (not an atypical result—see Eden et al. 2000). A toy calculation of the first effect is sketched after this list.

• We have found the Eden/Ackermann process very useful in establishing the reasons for overspend and overrun in projects that appear to the participants to be simply a “mess”—individual cognitive maps reveal some of the causal chains, and the overall cause-map provides a convincing (qualitative) explanation of the overall results. Furthermore, the maps that result (before reduction to the Influence Diagram) provide a gold mine of information about different events in the model, instances of the types of effect that have been observed. (The result “Design Reviews were late because of client intervention …” might have been true, but the point is driven home by the addition of “… for example, the Water-Pump Design Review was delayed four months by the client's sudden insistence on the use of Strathclyde Water Pumps as the sole supplier.”) This is particularly useful in explaining causal chains of effects: if you wish to discuss a chain of generic effects, it is much more powerful to follow the story of one subsystem or one chain of events (“… for example, the Water-Pump Design Review was delayed four months by the client's sudden insistence on the use of Strathclyde Water Pumps as the sole supplier … and this caused work to go ahead on the water-pump house before the design was fully approved for the Water Pumps … and this meant that there were many late changes to the design when the Water Pump design was changed … and this overloaded the engineers … and so on”). Finally, these maps and the Influence Diagram produced from them provide a natural foundation for a quantitative analysis, which can explain the complex effects. (Here again the variables used are usually generic variables and are best explained or instanced by stories of typical effects.)

• There is a natural role for project post-mortems to play in the pre-project risk analysis of succeeding projects—and this is one way in which project-oriented organizations can become learning organizations. We need to structure our post-mortems so that we can learn lessons to take into future projects.
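
As flagged in the second bullet, here is a toy static calculation of the workforce-doubling effect. The coordination-overhead and error-rate formulas are Brooks's-law style assumptions invented for illustration; even so, a static sum like this typically understates the damage, because the dynamic feedback shown earlier compounds it over time, which is exactly why the systems perspective of the first bullet is needed.

```python
# Toy static estimate of net output as workforce grows. The
# coordination-overhead and error-rate formulas are invented,
# Brooks's-law style assumptions, purely for illustration.

def weekly_good_output(workforce, base_rate=1.0):
    # Coordination cost grows with the number of pairs of people.
    coordination = 0.04 * workforce * (workforce - 1) / 2
    gross = max(0.0, workforce * base_rate - coordination)
    # Crowding and haste push up the fraction of work done wrong.
    error_fraction = min(0.9, 0.15 + 0.01 * workforce)
    return gross * (1.0 - error_fraction)

for n in (10, 20):
    print(f"{n:2d} people -> {weekly_good_output(n):.1f} good tasks/week")
```

Even this crude model shows doubling the staff yielding roughly a third more good output rather than twice as much; in a live project, the rework and overload feedback loops erode the gain much further, toward results like the 5% figure quoted above.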

While not providing a prescriptive “way ahead,” we trust that this paper will promote the debate on project post-mortems, and we hope that our experience (albeit mostly resulting from claims work) will provide useful food for thought in developing effective post-mortems.

References

Ackermann, Fran, and Eden, Colin. 2001. Contrasting Single User and Networked Group Decision Support Systems. Group Decision and Negotiation 10 (1), 47–66.

Ackermann, Fran, Eden, Colin, and Williams, Terry. 1997. Modelling for Litigation: Mixing Qualitative and Quantitative Approaches. Interfaces 27 (2), 48–65.

Cooper, Ken. 1994. The $2000 Hour: How Managers Influence Project Performance Through the Rework Cycle. Project Management Journal 25 (1), 11–24.

Cooper, Ken. 1993. The Rework Cycle: Benchmarks for the Project Manager. Project Management Journal 24 (1).

Eden, Colin. 1988. Cognitive Mapping: A Review. European Journal of Operational Research 36 (1), 1–13.

Eden, Colin, and Ackermann, Fran. 1992. Strategy Development and Implementation: The Role of a Group Decision Support System. In R. Bostrom, R. Watson, and S. Kinney (Eds.), Computer Augmented Teamwork: A Guided Tour. New York: Van Nostrand Reinhold.

Eden, Colin, Ackermann, Fran, and Cropper, Steve. 1992. The Analysis of Cause Maps. Journal of Management Studies 29, 309–324.

Eden, Colin, Williams, Terry, Ackermann, Fran, and Howick, S. 2000. On the Nature of Disruption and Delay (D&D) in Major Projects. Journal of the Operational Research Society 51 (3), 291–300.

Graham, A.K., Morecroft, J.D.W., Senge, P.M., and Sterman, J.D. 1992. Model-Supported Case Studies for Management Education. European Journal of Operational Research 59, 151–166.

MacMaster, Gordon. 2000. Can We Learn From Project Histories? PM Network 14 (July), 66–67.

Pitagorsky, George. 2000. Lessons Learned Through Process Thinking and Review. PM Network 14 (March), 35–40.

Project Management Institute. 2000. A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 2000 Edition. Newtown Square, PA: Project Management Institute.

Senge, P., Kleiner, A., Roberts, C., Ross, R., and Smith, B. 1994. The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization. New York: Doubleday.

Williams, Terry. 1999. The Need for New Paradigms for Complex Projects. International Journal of Project Management 17 (5), 269–273.

Williams, Terry, Ackermann, Fran, and Eden, Colin. 2000. Structuring a Disruption and Delay Claim. Working Paper 2000/1, Department of Management Science, University of Strathclyde, Glasgow, UK.

Williams, Terry, Eden, Colin, Ackermann, Fran, and Tait, A. 1995. Vicious Circles of Parallelism. International Journal of Project Management 13 (3), 151–155.

Williams, Terry, Goodwillie, S.M., Eden, Colin, and Ackermann, Fran. 1996. Modelling the Management of Complex Projects: Industry/University Collaboration. UNESCO International Conference on Technology Management, UnIG '96, Istanbul, June 1996.
