Early warning signs in complex projects
Key words: Early warning signs; complexity; governance; project assessments
Projects often do not turn out the way we expect. Emerging problems often start early in the project but display only weak signals. Current practice is weak in picking up, and acting upon, these early warning signs (EWS) of project problems, project failure, underperformance, or cost overrun. Attention to the detection of such signs needs to be focused upon project governance structures, and particularly upon the different sorts of project assessment that form part of these structures.
This paper gives initial results from a partially PMI-funded study looking at why methods fail to pick up early warning signals, how current project assessment methods try to uncover early warning signs of problems, and how successful these methods are in doing so. The literature tells us much about these questions, in particular the extent to which, and how, project assessments in the chosen countries/sectors are executed under established governance frameworks, and it gives some indications of the difficulties involved, particularly in the case of complex projects. But this study also supplements the literature with a study of actual organizations, looking both at what actually happens in practice and at what is espoused. It thereby also tries to observe and understand how project assessments can handle complexity in projects as well as in their context, and which are the most important early warning signals to look for in the different contexts.
A number of interviews and a study of the literature showed where problems are found and what types of governance structures exist in different organizations. Case studies have also been started but are not yet analyzed, although themes can already be identified: the need for a willingness to notice early warning signs, signs concerning interorganizational issues, and complexity.
Clearly, project management practice has developed in trying to look for early warning signs. Much of this is embedded in the tacit knowledge of experienced managers. Some governance mechanisms and frameworks draw on this tacit knowledge, but it is not yet made explicit or formalized. Some early warning signs are more difficult to identify, such as those involving intra- or interorganizational effects, complexity, or between-team behavioral issues. Governance also needs to ensure that identified early warning signs are acted upon. However, this paper reports from part-way through a project, which it is hoped will bring these initial thoughts to a structured conclusion.
Project assessments are carried out worldwide. However, we are often surprised at how projects turn out, as complex projects often do not behave in the way that we expect, and, in particular, effects within complex projects are often time-delayed and take time to emerge. Project assessments need to look through the complexity of the project in its context and identify the relevant early warning signs of project problems, project failure, underperformance, or cost overrun. This is important in securing the success of projects. To be effective, project assessment should be systematic and well-adapted to the purpose and context of the assessment, as well as anchored in a governance framework established by the project owner / sponsoring organization. Experience and current literature seem to indicate that we are not very good at identifying early warning signals, and that when these are evident they are often not responded to, causing problems to compound in severity. The study is designed to strengthen the basis for implementing effective project assessments.
Increasing complexity in projects and rapid changes make the governance of projects increasingly challenging. Complexity in this context includes external complexities such as stakeholder relationships and decision-making processes, as well as internal complexities such as technology and interfaces to existing systems. This increasing complexity offers a wide range of reasons why future projects may fail. In addition, the causal relations between early indications or incidents and later results are seldom obvious and often very complex. This makes it very difficult to know what to look for when assessing complex projects.
A project was therefore undertaken jointly by the Concept programme at NTNU (Norway), the University of Southampton (UK), and RMIT (Australia), funded by the Project Management Institute (PMI), the Concept programme (Norway), and the Norwegian Center for Project Management, to investigate the practice of project assessments and to identify how and to what degree the right early warning signs are identified. Although the project is currently ongoing, this paper describes progress to date.
“A firm that wishes to prepare for strategic surprises has two options. The first is to develop a capability for effective crisis management… The second approach is to treat the problem before the fact and thereby minimize the probability of strategic surprises” (Ansoff, 1975). This study concerns our ability (or not) to detect signals of failure before and during the project, in particular Ansoff's idea of “weak signals.” It has become increasingly clear that the seeds of project performance are sown in the very earliest set-up phase of the project (Williams, Samset, & Sunnevåg, 2009; Concept, 2009), and this study looked at detecting early warning signs of project problems from the very set-up of the project onwards.
Project governance frameworks are intimately linked with project assessments. “Over the last ten years there has been more interest in the governance of projects in general and the governance of large complex public projects in particular” (Miller & Hobbs, 2005, p. 47). Existing governance frameworks include guidelines and instructions for performing project assessments and define when and how they are used (see, for example, Klakegg, Williams, & Magnussen, 2009, showing how project assessments are embedded within governance structures, particularly in Norway and in the UK OGC system). If the project assessments are not able to identify early warning signals of problems, then governance cannot operate effectively. A governance model shown in Walker, Segon, and Rowlinson (2008, p. 128) places particular emphasis on accountability and transparency; this emphasis may point to a way for early warning detection to be put effectively into place.
If we are to look at early warning signs of potential threats to eventual project success, the first question to ask is clearly “What is project success?” It has become increasingly recognized that, in practice, success in projects is much more complex and multifaceted than the “iron triangle” of completing projects on time, cost, and quality. The literature in this area, from the well-known 1986 PMI conference (PMI, 1986) onwards, is very large, so only some key points will be made here. Samset (2009) describes how requirements first formulated for U.S.-funded international development projects by USAID in the 1960s, and subsequently endorsed by many major bodies, cover five criteria: efficiency, effectiveness, relevance, impact, and sustainability. He points out that there are many examples of projects that score highly on efficiency but subsequently prove disastrous in terms of their effect and utility, and also many that fail the efficiency test yet prove tremendously successful in both the short and the long term.
Others have also looked for broader definitions of success. Stewart (2001) offers a “balanced scorecard” for projects. Shenhar, Dvir, Levy, and Maltz (2001, p. 717) offer four dimensions of success with an increasingly long-term time frame for expected results: project efficiency, customer success, business success, and preparing for the future. Walker and Nogeste (2008) describe different dimensions of success and their relative priorities, again based on the four success dimensions of efficiency, customer/stakeholder impact, business success, and preparing for the future, and then take an organization's strategic objectives to develop a balanced scorecard approach for assessing project success. The measures within these success dimensions immediately give some clues as to early warning signs to watch for. For example, if dimension 1 is “project efficiency,” the goals need to be realistic. If “stretch goals” are used, they need to be defined clearly, along with the consequences of meeting the goals only part-way; thought can then be given to designing weak signals of deviations from goals and targets so that these signals do not undermine the rationale of being stretched, which otherwise results in conservative or timid behavior.
Why then do projects fail? In the search for indicators that can serve as early warning signs in projects, we need to look at sources describing factors of project success and failure. In the project management literature one can find descriptions of so-called project success factors, or sometimes their inverse, project pitfalls. This topic has been extensively researched, with important work including Pinto and Prescott (1988), Kerzner (1987), and, particularly for large projects, the famous IMEC study by Miller and Lessard (2000).
Zwikael (2008a; 2008b) has reported on research undertaken on a study of project success with a concentration on process rather than on success factors, and has found that different processes are deemed more critical than others, depending upon the industry and cultural setting—similar to the idea of “fit,” extensively discussed in the PMI “value project” (Mullaly & Thomas, 2008). A synthesis of the literature concerning success factors was made by Torp, Magnussen, Olsson, and Klakegg (2006). Jergeas (2005) has outlined an approach to project monitoring based on identifying specific success factors in a project and measuring indicators predicting the fulfillment of these success factors.
The crisis literature (e.g., Loosemore, 2000) highlights that crises occur for a reason, and that the reasons are often ignored, covered up, or not recognized. They are events that, before being acknowledged, are seen to have a low probability of occurrence but a high potential impact, and they are rarely accompanied by contingency plans. These types of conditions are perhaps best tackled using an emerging strategy (Mintzberg, Ahlstrand, & Lampel, 1998). Miller and Olleros (2000) argue that successful projects are not selected; they are shaped. They conclude that complexity leads to messy decisions in the front-end and that these decisions are never final (p. 96). They suggest that this is something we need to handle, and they call the changing steps of a project “shaping episodes.” Söderholm (2008, p. 85) similarly recommends coping strategies such as reopenings, revisions, and fine-tuning of plans.
Some generic examples of failure modes are clear. Failure of the Taurus IT project in the 1990s (Drummond, 1998) was believed to be substantially triggered by uncontrollable scope creep through poor project definition and excessive stakeholder influence that led to paralysis. Meier (2008) looked at projects within the U.S. Federal Intelligence and Defense Agencies. He found a number of particular early warning signs that occurred frequently in these projects, which serve as a valuable check-list:
- Overzealous advocacy
- Immature technology
- Lack of corporate technology roadmaps
- Requirements instability
- Ineffective acquisition strategy and contractual practices
- Unrealistic program baselines
- Inadequate systems engineering
- Inexperienced workforce and high turnover
Similarly, Kappelman, McKeeman, and Zhang (2006) set up a list of 53 early warning signs for IT project failure, and canvassed experts as to their importance. A clear top 12 issues emerged, half of which were people-related (lack of top management support; weak project manager; no stakeholder involvement and/or participation; weak commitment of project team; team members lack requisite knowledge and/or skills; subject matter experts are overscheduled), and the other half of which were process-related (e.g., lack of documented requirements and/or success criteria). Technical issues were very low on the list.
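A ranked checklist of this kind lends itself to a simple weighted-scoring mechanism for routine project health-checks. The sketch below is our own illustration, not part of the cited study: the sign names and weights are hypothetical placeholders standing in for a validated checklist such as Kappelman et al.'s top 12.

```python
# Illustrative sketch only: the signs and weights below are hypothetical
# placeholders, not taken from Kappelman, McKeeman, and Zhang (2006).
EWS_WEIGHTS = {
    "lack_of_top_management_support": 5,
    "weak_project_manager": 5,
    "no_stakeholder_involvement": 4,
    "weak_team_commitment": 4,
    "missing_documented_requirements": 3,
    "overscheduled_subject_experts": 3,
}

def ews_score(observed_signs):
    """Sum the weights of the early warning signs observed on a project."""
    return sum(EWS_WEIGHTS.get(sign, 0) for sign in observed_signs)

def needs_escalation(observed_signs, threshold=8):
    """Flag the project for management attention once the score crosses a threshold."""
    return ews_score(observed_signs) >= threshold
```

Under these invented weights, a project showing a weak project manager (5) and no stakeholder involvement (4) scores 9 and would be flagged, whereas a single process-related sign would not. The point of such a scheme is not the arithmetic but that it forces signs to be recorded and compared consistently across assessments.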
Projects are subject to risk and uncertainty. Some risks are known and can be planned for—the “known unknowns.” A great deal of literature exists on project risk management, looking at the aleatoric risks within the project and the known epistemic risks. The increase in knowledge that occurs as a project progresses allows these risks to be revisited, providing early warning signals if subjective probabilities or impacts are increasing, and particularly if no more knowledge is forthcoming or the risks are not being monitored and managed. However, much project uncertainty, perhaps including the key uncertainties, does not come under this heading, and much of it consists of “unknown unknowns.” This is, of course, a main problem in identifying EWS. Much uncertainty comes from the lack of a clear, unambiguous goal (Linehan & Kavanagh, 2004; Engwall, 2002), which makes any analysis of achieving these goals equally unclear. Even when the goal is known, moving towards it can be a messy, uncertain process as participants “make sense” (Weick, 1995) of the project and work towards project delivery.
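The point about revisiting “known unknowns” can be made concrete. In the following sketch (our own illustration; the risk names and subjective estimates are invented), a risk register that stores successive (probability, impact) estimates is scanned for two kinds of early warning signal: estimates that have risen since the last reassessment, and risks that have never been revisited at all.

```python
# Illustrative sketch only: risk names and estimates are invented.
def rising(history):
    """True if the latest estimate is higher than the previous one."""
    return len(history) >= 2 and history[-1] > history[-2]

def flag_risks(risk_log):
    """Scan a register mapping risk name -> [(probability, impact), ...] over time.

    Returns (name, reason) pairs for risks showing early warning signals:
    rising estimates, or no reassessment since the initial entry.
    """
    flags = []
    for name, estimates in risk_log.items():
        probabilities = [p for p, _ in estimates]
        impacts = [i for _, i in estimates]
        if rising(probabilities) or rising(impacts):
            flags.append((name, "estimate increasing"))
        elif len(estimates) < 2:
            flags.append((name, "not being reassessed"))
    return flags
```

Run against a register such as `{"supplier_delay": [(0.2, 3), (0.4, 3)], "scope_creep": [(0.3, 4)]}`, this flags the first risk for its rising probability and the second for never having been revisited, which is itself a warning that the risk is not being monitored and managed.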
Project assessments go by many names, including:
- Project reviews—often undertaken beforehand through a staged gateway (Cooper, Edgett, & Kleinschmidt, 1997) approval approach, but also during and after the project as benefits-realization reviews. The characteristic of this category is that it is anchored in some sort of governance framework or institutional framework/decision-making process (Archer & Ghasemzadeh, 1999; Office of Government Commerce, 2007).
- Project health-checks—often implying a more formal assessment, sometimes looking for fraud, often while the project is being undertaken, sometimes at set stages or ad hoc if there are particular reasons for such assessment. Checklists and KPI performance reports can be used (Shafagi & Betts, 1997; Wateridge, 2002).
- Benchmarking—a systematic comparison of two or more projects, analyzing quantitative aspects of project performance (cost, time, PDRI evaluations, etc.) and/or qualitative descriptions (objective formulations, stakeholder assessments, environmental impact descriptions, etc.). In the early stages of a project, benchmarking is typically used to compare project proposals competing for scarce resources, in order to determine which is most likely to succeed and give a high return on investment. Little literature exists; one of the first applications of benchmarking to projects was done in the IMEC project (Miller & Lessard, 2000) (see also Emhjellen, 1997).
- Postproject evaluations—done after the project as a project history (Kleiner & Roth, 1997; Roth & Kleiner, 1998; Schindler & Eppler, 2003; Maqsood, Finegan, & Walker, 2006). Williams (2007) provides a literature survey on postmortem project analyses. Project reviews collect both tacit and explicit knowledge; the latter can be collected and retained in systems such as databases. But collecting and disseminating the perhaps more important tacit knowledge requires more active socialization methods (Williams, 2007). Documented project histories need to be context-rich or to contain “narratives.”
- Project audits—a formal assessment looking for accordance between what is done and some regulations, decisions, or systems, sometimes in an effort to uncover fraud. Often project audits are done while the project is being undertaken, sometimes at set stages and other times on an ad hoc basis (if it is decided that particular issues warrant investigation). They may also be done postproject. In many circles, “audit” has a specific meaning (for example, an assessment that is enforced by law).
A leading book on project reviews is that by Oakes (2008), who categorizes his case studies by frequency/formality, the type of review team (independent specialists or peer reviewers), and the focus of the review (business or technical).
As Flyvbjerg and colleagues caution (Flyvbjerg, Rothengatter, & Bruzelius, 2003), overoptimistic assessments of benefits and underestimates of problems and risks can subvert the front-end process as a way of flagging risk, and may result in an unsustainable project. This period of the project life cycle is an effective time to look for early warning signals, but often they are purposely overlooked, as Flyvbjerg and his colleagues suggest, or they simply are not envisaged. There is clearly a need to focus on building in systems of review, monitoring, and pattern scanning that can serve to detect early warning signals as part of the governance structure.
So, what should be measured? Project performance measurement has dealt with simple measures of time, cost, and quality. We have already asserted that measurement attempts must look at broader definitions of success. Furthermore, these factors are consequences of activities, incidents, and other conscious or unconscious actions or lack of actions, so for the purpose of early warning in projects we need additional measures.
In IT projects, Kappelman et al. (2006) showed that people-related and process-related risks scored higher than product-related risks as dominant warning signs of IT project failure, and also that there were evident indications of symptoms well in advance of the final failure, indicating that perhaps early warning signs could have been picked up. Syamil, Doll and Apigian (2004) argued that behavior-related performance measures evaluating the project process serve as a mediating variable affecting to what extent the chosen process contributes to the overall project result. Hoegl, Weinkauf, and Gemuenden (2004) similarly found that collaborative processes during the project have predictive properties with regard to later team performance and can serve as early warning indicators. Balachandra and Raelin (1980) presented a model indicating that project success factors could be used for developing a model for early warning—an approach supported by Sanchez and Perez (2004).
Jaafari (2007) proposed an approach to project diagnostics based on so-called project health checks, building partly on excellence models known from the quality management field and partly on maturity model principles, looking at customers and markets, stakeholders, technology, facility design and operational requirements, supply chain system, learning and innovation, finance, project delivery system, risks, and due diligence.
Cultural or disguised human early warning signals are also important. Nikander and Eloranta (2001) and Nikander (2002) present compilations of typical project problems including indications such as “gut feelings” and “nonverbal information,” as well as “differences and deficiencies in project culture” and “miscommunication.” Recent work by Whitty (in press), in which he discusses body language and the cues that people naturally use, adds to the validity of using such cues and signals in an array of warning recognition and detection devices, and gives a significant discourse on useful indication areas. This ability to read others allows subconscious and hidden fears or anticipations to be identified. Often a workplace culture suppresses the willingness of people to express their fears (for example, people may not wish to be branded a “nonteam player” or be seen to be timid). Many of these subconscious or barely conscious fears can reveal important early warning signals. Such unease can be measured through surveys or stakeholder engagement tools (Bourne & Walker, 2006); whistle-blowing procedures (Beauchamp & Bowie, 1997) are also important. Organizational culture can be one of the explanations for why we do not learn from past mistakes; Williams (2007) provides a survey of lessons-learned activities, and it is clear that organizational culture is key to lessons being recognized and taken up.
Finally, there is a particular problem with assessing complex projects (Williams, 2005; Cicmil, Cooke-Davies, Crawford, & Richardson, 2009), where the relationship between events and out-turns is hard to understand (Simon, 1982; New England Complex Systems Institute, 2009). This means that complex projects often do not behave in the way we expect; in particular, effects within complex projects are often time-delayed and take time to emerge, and causal relations between early indications or incidents and later results are seldom obvious and often very complex. Increasing complexity in projects makes the assessment of projects increasingly challenging. Complexity comes from interdependencies and uncertainty (Williams, 1999), but also from human-oriented social aspects (Stacey, 2007), or behavioral complexity. As well as internal complexities such as technology and interfaces to existing systems, external complexities such as stakeholder relationships (Pryke & Smyth, 2006) bring particular difficulties in understanding, not to mention in assessing, project behavior. Remington and Pollack (2007) discuss several types of complexity and tools to address the various types. Other tools can be product-based, such as mapping risks or visualizing stakeholder impact so that stakeholder engagement strategies can be developed (Bourne & Walker, 2006). Still other examples include the cause-and-effect tools that others have developed and used for diagnosing system faults (Williams, Eden, Ackermann, & Tait, 1995).
We should also note that there are many technological advances and even speculative or futuristic ideas that could be utilized in the future to help point the way to methods for detecting early warning signals. High-speed simulation methodologies allow most conceivable scenarios to be rapidly tested. The development of virtual-reality technologies might be useful, although we will need to include more than visual cues to detect when a situation does not “smell right” or “feel right” (Walker, 2000).
These ideas seem to cover three main but distinct areas:
- “Weak signals” that appear to precede problems
- Initial indications of factors known to be critical failure factors
- Mid-term measurements of success criteria, where success is taken with its widest meaning
We need to distinguish among these three areas.
Research Questions and Methodology
Given this literature, the research questions of the study are sixfold:
(i) Why do methods fail to pick up early warning signals?
(ii) How do current project assessment methods try to uncover early warning signs of problems?
(iii) How successful are current project assessment methods in trying to uncover early warning signs of problems?
The literature tells us much about these questions, in particular the extent to which, and how, project assessments in the chosen countries/sectors are executed under established governance frameworks, as well as about some indications of the difficulties. But this study is also supplemented with a study of actual situations, looking at what actually happens in practice rather than what is espoused.
This first part of the study takes a positivist view to uncover what sort of problems there are to which EWS can alert us. To look into solutions, we need to take a more phenomenological stance, and will consider:
(iv) How can project assessments handle the complexity in projects and their context?
(v) Which practices seem more appropriate in which contexts?
(vi) Which are the most important early warning signals to look for in the different contexts?
To answer these questions, we are carrying out two sets of investigations. First, we conducted interviews with a range of experts, looking at what companies and public entities do to implement project assessments. The interviews focused on methodological choices and the effects of established practices (i.e., what are they looking for?). To follow this up, we are investigating a limited number of case projects to find out what the project assessments have identified as early warning signals and whether these have actually been confirmed in the time after the assessment (i.e., did they look for the right things?). This study is limited to a few categories of typical contexts (characterized by degrees of complexity) and only principal early warning signs, not detailed considerations of causality and intricate multiple influences. These case studies had not yet been completed at the time of writing this paper.
Process and Procedures
Project assessments are closely connected to governance frameworks installed by governments, corporations, or other organizations, with the purpose of supporting decision making and the governance of their projects. Following up on the study of Klakegg et al. (2009), which defines governance frameworks and a way to describe and analyze them, this study goes specifically into the subject of project assessments as a vital part of all governance frameworks. The purpose is to find out what kinds of assessments are indicated in these frameworks and what the guidelines to the frameworks say about early warning signals. This preliminary analysis includes looking at nine governance frameworks (two in the United Kingdom, two in Australia, and five in Norway). Six of these frameworks are from the public sector, while three are from the private sector.
A general impression is that all levels of government in these three countries have some sort of governance framework for major investment projects. We have identified frameworks at the state, regional, and municipal levels. Each of the three countries has one general framework covering all or most types of projects, plus some frameworks for special areas, such as defense in the United Kingdom and hospitals in Norway. All of these frameworks are clearly gateway models. In the private sector, the material indicates that many corporations have installed governance frameworks with a purpose similar to those of the public sector, but in some cases quite different in structure and content. One of the three private sector frameworks in this material is explicitly said not to be a gateway model. Still, the assessments in this framework have a similar connection to decision making, and we find most of the characteristics of a gateway process.
Most of the governance frameworks are mandatory within the organization in which they are implemented. There are examples of frameworks that allow different approaches according to the classification of projects (based on size or criticality). Some frameworks indicate that the assessments are more or less integrated in the decision making (the decision makers take part in key assessments on a common “arena” or in a meeting where the alternatives are presented and discussed), but most of them are external to the project team and are presented in some form of recommendations to the decision makers. The recommendations normally pertain to three main issues:
- Whether the investment is viable/feasible or fits the strategy of the owner. This is a question of whether the owner should go through with the investment or not. It is often presented as some form of profitability or benefit/cost ratio.
- Which alternative is the most preferable of the available (defined) options. This will normally be based on some criteria and the assessor's own professional experience and advice.
- How to proceed in the next phase of development. This is based on the assessor's professional judgment and the transfer of experience concerning the best possible management strategies and control basis for the next phase.
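The first of these recommendations is often summarized as a profitability or benefit/cost ratio. As a minimal illustration of what such a figure involves (the discount rate and cash flows below are invented, and real governance frameworks prescribe their own appraisal rules), a discounted benefit/cost ratio can be computed as follows:

```python
# Minimal illustration: discount rate and cash flows are invented, not drawn
# from any of the frameworks discussed in this study.
def discounted(cash_flows, rate):
    """Present value of a list of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def benefit_cost_ratio(benefits, costs, rate=0.05):
    """Ratio of discounted benefits to discounted costs; > 1.0 suggests viability."""
    return discounted(benefits, rate) / discounted(costs, rate)

# Year 0: 100 of investment cost; years 1-3: 45 of benefit and 5 of running cost.
ratio = benefit_cost_ratio(benefits=[0, 45, 45, 45], costs=[100, 5, 5, 5])
```

The point for early warning purposes is that such a ratio is only as good as its input estimates: a ratio that drifts towards 1.0 as estimates are updated is itself a warning signal, which is why the assumptions behind it deserve as much scrutiny as the figure presented to decision makers.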
In some governance frameworks, assessments are characterized as control measures carried out by a specially appointed party with a mandate to critically scrutinize documents and plans. These mandated assessments are believed to be highly influential in the subsequent decision-making process. In other frameworks, the result of an assessment is presented more as friendly advice from a senior expert, to which the party responsible for the project may choose to listen or not, or which he or she may choose to make openly available. The current trend seems to be towards the more critical and mandated assessments, especially for critical and large projects.
The decision makers are typically political or elected boards in public sector frameworks. In the private sector, the power to make decisions seems to be concentrated in fewer individuals, even in single individuals in some cases. This mirrors the fact that some of these organizations and their projects are smaller than the public ones represented in this material. However, the same tendency is found in other studies (Klakegg & Olsson, in press).
The assessments needed to meet the intentions of the governance frameworks cover a whole spectrum of aspects, from general needs analysis, stakeholder analysis, and profitability calculations, to specific risk and life cycle cost analysis. There are too many different assessments to mention here, but the main focus and working form is shown in Table 1.
Klakegg et al. (2009) found indications of a link between the background for introducing the framework and the main focus and working form of the assessments. This link is logical, because organizations will naturally focus on the most important problem areas and challenges they face. The review focus and working form follow naturally from the purpose of the assessments.
Table 1: Characteristics of assessments in nine governance frameworks.
| Framework No. | Sector / Scope | Overall assessment focus | Review focus | Dominating assessment working form | Identification of early warning signals |
|---|---|---|---|---|---|
| 1 | Public sector / All projects | Delivery confidence | Business case | Document reviews / interviews | Partly / indirect |
| 2 | Public sector / Defence | Life cycle capability and cost | Life cycle | Document reviews / arena | Partly / indirect |
| 3 | Private sector / Mining | Capital effectiveness over lifespan | Project performance | Document reviews / meetings / arena | No |
| 4 | Public sector / All projects | Informed decision making / confirm alignment | Business case | Document reviews / meetings | Partly / indirect |
| 5 | Public sector / All projects | Value / cost and risk | The concept / the project | Document control | No |
| 6 | Public sector / Hospitals | Learning and improvement | Feasibility | Document reviews / following evaluation | Partly / indirect |
| 7 | Private sector / Oil and gas | Return on investment | Strategic fit / profitability | Document reviews / arena | - |
| 8 | Private sector / Construction | Project risk | Risk / profitability | Workshops | - |
| 9 | Public sector / Property dev. | Transparency and reporting | Risk / cost | Document reviews / workshops | No |
Many of the frameworks include some form of description of the assessment process. The most typical practice is document reviews. The assessments are also based on gathering other information through interviews with individuals, or with groups in workshops or meetings. Where the decision makers or their representatives take an active part in such meetings, the assessments are referred to as “Arena” in Table 1. This seems to indicate a more internal focus and a more integrated decision-making process. It is not usual, but it is found in three of the frameworks in this study.
The framework guidelines often mention identification of early warning signals as one of the purposes of the assessments. This is sometimes used as a main argument, or rationale, for the whole framework; public and private sector organizations acknowledge the need to identify early warning signals in order to be able to make changes before it is too late. On the other hand, none of the guidelines that we have found explicitly states which early warning signals the project assessor should look for in any given situation. Some of the frameworks include checklist-like listings of key questions that the assessor should ask or for which the assessor should obtain documentation. Such checklists directly or indirectly point towards some specific early warning signals; this is the case in four of the nine frameworks. Three frameworks include no indication of which early warning signals to look for, while we lack information for the remaining two.
The conclusion to the study of governance frameworks and their assessments is that the organizations clearly acknowledge the need for assessments to inform decisions at key stages of project development. The frameworks normally include guidelines for working procedures, and some also include specific questions, checklists, and tools for use during assessments. In terms of early warning signals, the frameworks are vague. At best, they indicate possible early warning signals through specific questions to be asked, but many frameworks give no indication at all of what to look for in specific situations.
Interviews: Identifying Areas for Early Warning Signals
When one is identifying areas that could represent potential problems in the future, as a basis for developing more specific early warning signals to monitor, there are a few different “sources” that can be utilized in this process:
- Information and knowledge from different assessments made to date in the project
- Understanding of the complexity of the project and its environment
- Identification of knock-on factors that could come into play if certain events or developments occur
- Postproject reviews of former projects that are similar enough to provide clues about possible problems in the current project
Starting with the insights gleaned from undertaking various assessments in the projects, we asked to what extent these enabled identifying future problems in both the project and the business setting in which the project will operate. The answers to these questions are not conclusive. One private-sector project owner claimed the analyses and assessments made are sufficiently good that no major surprises turn up later in the project (minor issues, such as interfaces among deliveries from different contractors not matching, can occur, but never cause the project to run “beyond 110%”). Two project management consultants, on the other hand, posited that such assessments are currently not good enough, although they have improved over the years. In particular, a guiding framework or checklist (if indeed one is appropriate) to aid this work seems to be missing, and there seems to be a natural tendency to focus on the solutions and contents of the project in the early phases: the engineers focus on technical issues and the economists on the business issues, while no one really thinks about the project execution.
Only half of our interviewees mentioned distinguishing complex projects from less complex projects. But what is complexity? Two organizations answered this question in terms of scale; indeed, one company explicitly said that it considered there to be “normally a relation between project complexity and project size.” But possibly these, and certainly the others, were concerned with complexity rather than scale, tending to include discussions of three issues, which reflect the literature discussion about complexity above:
- Complexity coming from a multiplicity of parts interacting in ways such that the behavior of the whole is difficult to deduce from understanding the individual parts (see above), such as multiple departments.
- Behavioral complexity (discussed above) from the nature of human interactions: the range of stakeholders and their expectations, joint ventures, language, and political and cultural issues; one global organization operating in Asia mentioned cross-cultural complexity, particularly where norms, local rules, customs, and other culturally assumed behaviors can be misunderstood between organizations, clients, or joint venture partners from different national cultures.
- The complexity of the environment (rather than within the project) was seen by some as the most important: “the most important point is that the environment in which you are delivering the project is complex and dynamic,” said one. For another, this included interacting with ongoing operations.
Some of the interviewees talked about the need to adjust management methods and governance for more complex projects, such as the number of gateways, risk reporting, or the level of formality/documentation/bureaucracy, which perhaps reduces the dependence on “gut feel,” but maybe also reduces the ability of “gut feel” to detect early warning signals. In addition, when asked about early warning signals for complex projects, as well as “hard” issues such as technological development, interviewees mainly identified “softer” issues: culture, the lack of an outsider's view or perspective on the project, anchoring in the permanent organization, inconsistency between stakeholders' ambitions, certain organizations promoting a single solution, and trust, as well as more “gut feel” signals such as detection of unrealistic attitudes, lack of clarity of thought, or misalignment between quantitative risk analysis and qualitative risk assessments.
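As an illustration of how such adjustment might be made concrete, the mapping below sketches one way to scale governance controls with a coarse complexity rating. The tiers, thresholds, and settings are purely hypothetical assumptions for illustration; none of the interviewed organizations described these specific values.

```python
# Hypothetical sketch: scaling governance controls with project complexity.
# The tiers and settings below are illustrative assumptions, not taken from
# any framework in the study.

from dataclasses import dataclass

@dataclass
class GovernanceProfile:
    gateways: int           # number of stage-gate reviews
    risk_report_weeks: int  # risk-reporting interval in weeks
    external_review: bool   # whether to mandate external assessors

def governance_for(complexity: str) -> GovernanceProfile:
    """Map a coarse complexity rating to a set of governance settings."""
    profiles = {
        "low":    GovernanceProfile(gateways=3, risk_report_weeks=8, external_review=False),
        "medium": GovernanceProfile(gateways=5, risk_report_weeks=4, external_review=False),
        "high":   GovernanceProfile(gateways=7, risk_report_weeks=2, external_review=True),
    }
    return profiles[complexity]

print(governance_for("high"))
```

A rule table like this trades the flexibility of “gut feel” for consistency, which mirrors the tension the interviewees noted: formalization helps ensure signals are reviewed, but may dull intuition.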
As already mentioned, contextual factors were pointed to as an especially important source of complexity. When asked what types of contextual factors are looked for in projects as potential early warning signals, the following were mentioned:
- Location decisions and complications from such
- Leadership issues
- Quality of information and documentation produced
- Whether guidelines for early phase assessments and “behavior” were followed
- Relevance of the proposed solution compared with needs
- Culture—i.e., whether specific conditions exist that will make cultural aspects a factor
- The need for development of new technology
- Main risks identified
- Missing competence in the project team
- Sponsor with unclear expectations and role
One area where the literature suggested early warning signals might arise was the interaction of issues and problems, often referred to as “knock-on effects.” A handful of the respondents (private and public sector) confirmed that such interaction is often seen: “Issues tend to come together,” as one respondent explained. Another respondent said that such interaction may evolve if you overlook problems; if problems are ignored, other issues may come up that combine with the already unsolved problem to make things much worse. This can also be caused by “silo thinking”: different parts of the organization work toward their own goals, not understanding that their actions influence other parts of the organization, ultimately causing different problems to culminate in a big mess. When asked why such knock-on factors were hard to identify and use as early warning signals, we got only one direct response: it is difficult to provide project managers with enough time to think ahead about such potential problems, and we need to make people in projects question assumptions they take to be absolute truths.
The final potential source of early warning signals was postproject reviews and lessons learned (we combined these, although formally a postproject review is a more stringent and tailored analysis of a completed project, whereas “lessons learned” is a more collective term for the “organizational memory,” in the form of written reports, accumulated experience, and even postproject review reports). The main impression from all respondents was that although many attempts are made at learning from previous projects, this is rarely very effective. The reasons cited ranged widely: lack of time to prepare lessons learned; a reluctance to “air dirty laundry”; projects being viewed as unique and thus seen as unable to learn much from past projects; and reports short enough that people will read them lacking sufficient information about the context of the project to enable any real learning. One respondent summed up the problem quite well: “There are many lessons identified, but not very many learned.”
We did, however, learn of some practices that might help make lessons learned more useful. One private sector project manager said that he consistently wrote down any insights gained during the project, to ensure that lessons would not be forgotten, as they might be if such a report were written only at the end. The same company also had a practice whereby senior managers appointing project managers would indicate which previous projects might be relevant and ask the new project managers to review lessons learned reports from these. Another private company goes even further: new projects being proposed for execution must review at least three similar projects and assess whether problems encountered or lessons learned in these apply to the new project. Another private company, global in its scale, requires reviews to have considered five similar projects. Finally, a project management consultant pointed out that bringing in external assessors for various analyses in practice represents a rather efficient means of experience transfer, as these assessors see many projects in many different organizations and can point out similarities and lessons learned.
When trying to understand how lessons learned and postproject reviews can be used to identify early warning signals, we obtained several suggestions. The obvious, mentioned by some, is that reports from postproject reviews and about lessons learned from relevant projects can contain information about problems experienced that could be used for early warning in this new project. It was also said that the likelihood of picking up on such issues increased if lessons learned were presented orally rather than if written reports were simply made available. Even more effective is bringing external views into the project group, either by involving people from relevant previous projects in certain discussions or even including people from external stakeholders. Another suggestion, made by three private sector companies, is to convert lessons learned/postproject reviews into either specific checklists of possible problems or more open lists of possible areas of concern. These can be reviewed by new projects as direct sources of early warning signals.
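The checklist conversion that three private sector companies described could, in a simple form, look something like the following sketch. The trigger conditions and concerns shown are hypothetical examples, not items drawn from any of the studied organizations.

```python
# Illustrative sketch of converting lessons learned into a checklist that
# new projects review as a direct source of early warning signals.
# All triggers and concerns are hypothetical examples.

def review_checklist(checklist, project_traits):
    """Return the concerns whose trigger condition matches the new project."""
    return [item["concern"] for item in checklist
            if item["trigger"] in project_traits]

lessons_checklist = [
    {"trigger": "new_technology",       "concern": "Underestimated development effort"},
    {"trigger": "joint_venture",        "concern": "Cross-cultural misunderstandings"},
    {"trigger": "multiple_contractors", "concern": "Interface mismatches between deliveries"},
]

# A hypothetical new project involving new technology and several contractors:
flags = review_checklist(lessons_checklist, {"new_technology", "multiple_contractors"})
print(flags)  # candidate early warning areas for this project to monitor
```

Even so simple a structure forces each new project to confront past problems explicitly, rather than relying on reports being read voluntarily.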
Interviews: The Use and Usefulness of Early Warning Signals
We asked how early warning signals identified earlier were used later in the project, and received quite a variety of responses. One public sector organization said such signals change so much over time that they end up not being used very much. This is in sharp contrast to a private company that has implemented a balanced scorecard approach where early warning signals are one set of indicators populating the scorecard; these indicators are assessed in regular meetings held every few weeks. Another private company employs a similar approach, but based on “traffic lights” for risk elements identified as early warning signals; the status of identified risks is symbolized by colors and reviewed regularly. Three project management consultants all concurred that early warning signals that originate from external project assessments are “stronger” than those based on internal ideas, partly because the externals carry more weight and partly because the sponsor will be aware of them. One of these also said that such signals are most effective if they are included in the project reporting system—i.e., reports to the sponsor or other stakeholders must include an assessment of the status of problem areas expressed in early warning signals. Finally, one private company tried to institutionalize common early warning signals and lessons learned by modifying the stage gate requirements to ensure that such issues are properly addressed.
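A minimal sketch of the “traffic light” practice described above might look as follows. The indicator names and thresholds are assumptions made for illustration only, not values reported by the company.

```python
# Hypothetical sketch of a traffic-light review of early warning indicators:
# each indicator reading is classified against amber/red thresholds and the
# resulting statuses are reviewed at regular intervals.

def traffic_light(value, amber_threshold, red_threshold):
    """Classify an indicator reading against its thresholds."""
    if value >= red_threshold:
        return "red"
    if value >= amber_threshold:
        return "amber"
    return "green"

indicators = {
    # indicator name: (current reading, amber threshold, red threshold)
    "schedule_slip_weeks":   (3, 2, 6),
    "open_interface_issues": (12, 5, 10),
    "staff_turnover_pct":    (4, 8, 15),
}

status = {name: traffic_light(v, a, r) for name, (v, a, r) in indicators.items()}
print(status)  # reds would be reviewed first in the regular meeting
```

The value of such a scheme lies less in the classification itself than in forcing a recurring, visible review of each identified signal, addressing the complaint that signals are identified but not acted upon.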
One “big question” in this study is how useful early warning signals are in foretelling problems (and enabling acting on such predictions). At least five of the organizations studied (private and public sector) claimed that the use of early warning signals had been useful in preventing problems, with one of the private companies saying that the performance of its projects had gradually increased over the last two to three decades, in part due to its use of balanced scorecard early warning. Others were not as unequivocal: one private company said reviews of early warning signals can detect problems, but cited an example where the review came three months later than it should have. One public sector respondent said that early warning signs are often not well-articulated and, further, that the most difficult part is interpreting the early warning signals that occur—in some cases, with hindsight, they see that the signals were actually picked up, but not acted upon. This was echoed by another public sector interviewee, who stated that the organization was good at identifying early warning signals, but poor at letting the signals affect decisions. Especially in cases where the early warning signal is a “sense of uneasiness,” it is difficult to induce action. It can be very difficult to justify a feeling: people are reluctant to report such feelings, even if they later prove to be valid. After all, how can you justify action when you cannot explain why?
With some dissension regarding the usefulness of early warning signals, the next question was whether there were particular reasons for early warning signals not being useful. The respondents provided a number of explanations:
- One commonly occurring problem is overly ambitious plans, which are difficult to detect.
- The same is true of another recurring problem, the development of new technology and difficulties resulting from this.
- Even when early warning signals are picked up and indicate problems, projects are very difficult to stop. In such cases, when the concerns are documented, the response is usually to provide assurances that “things will be OK” and that they “will run even faster,” thus effectively countering voiced warnings.
- Especially with complex projects, it is difficult to identify all relevant early warning signs; thus the problems that do materialize are issues not covered by the identified early warning signs.
- People involved in governance discussions and high-level project management discussions have often become too senior to have recent and relevant experience from operational matters, and therefore they fail to address these by looking for early warning signals.
- A tendency toward groupthink, whereby ideas novel to, or counter to, the team's collective thinking and experience do not surface.
Given that there are problems in utilizing early warning signals effectively, we finally asked our respondents what could be done to remedy this situation. One private sector respondent said too “heavy” a process for identifying early warning signals could be a problem; it stifles creativity and thus fails to uncover all relevant warning signals. Another said that we need more discipline in actually using the early warning signals once they have been identified. A public sector suggestion was to repeat relevant project assessments and the exercise of finding early warning signals several times throughout the project. The project management consultants pointed to the need for a formalized process for finding early warning signals, asking the right questions, and bringing in people with the right competence in the process, including someone “thinking outside the box.”
The current study is also looking into a small number of case projects in order to examine how these support, contradict, or at least supplement our analysis and findings from the literature and interviews. The final report will cover eight cases in three countries. At the time of writing, not all cases are completed, so only a few indications are included here. Table 2 sums up the general theme of each of the currently finished case studies.
Table 2: Themes from case studies.
|AUS1||Private||ICT||The “project” becomes a program where lessons learned have been absorbed into current project management arrangements. This case study illustrates how many EWS can be identified and dealt with before the problems they manifest become insurmountable. A key EWS identified is behavioral misunderstanding and dysfunctionality in ICT projects, which need co-learning between developers and users.|
|UK1||Public||ICT||This complex project illustrates that although the early warning signals might be present, there has to be the political will to listen to them.|
|UK2||Private||Construction||This project shows an EWS on deteriorating subcontractor relationships, but also the difficulty in spotting problems due to a contractor engineering team too close to the client's. It also shows the need to look for EWS in contractor selection.|
|NO1||Private||Oil & Gas||Defining early warning indicators must be tailored to each project and based on a participatory process involving different people in the project. Even harder is implementing the actual use of the indicators on a continuous basis.|
|NO2||Private||Construction||The eagerness to obtain new business and new clients led people to overlook existing warning signs and press on with the project.|
|NO3||Public||City Development||This complex project performed the relevant assessments on the relevant issues at the relevant times, but surprises still kept coming; the assessments seemed unable to catch early warning signals. The case illustrates shortcomings of traditional project management approaches and assessments in complex projects.|
|NO4||Public||Construction||This project illustrates that a medium complex project might do quite well with the traditional project management and assessment approaches, even if they do not identify early warning signals very well.|
|NO5||Public||Technology||This complex project succeeded in finding alternative approaches, supplementing the traditional ones, that made the project successful in managing its environment and proactive in decision-making processes. The key was not assessments, but rather the way the project management team worked with stakeholders and decision makers.|
The analysis across cases has not started yet. For this paper, the indications in Table 2 will have to speak for themselves, although themes can already be seen in the need for willingness to identify early warning signs, in EWS around interorganizational issues, and in complexity.
Clearly, project management practice has developed in trying to look for early warning signs. Much of this is embedded in the tacit knowledge of experienced managers. Some governance mechanisms and frameworks draw on this tacit knowledge, but it is not yet made explicit or formalized. Some EWS are more difficult to pick up, such as those involving intra- or interorganizational effects, or complexity, and governance needs to ensure that identified EWS are acted upon. Issues about the measurement of trust, behavior, understanding, and asymmetries in maturity and domain-specific knowledge may be more important than we first realized. This paper reports from part-way through a project; we will ultimately bring these initial thoughts to a structured conclusion, providing useful results regarding whether we are able to identify and use EWS in complex projects.
© 2010 Project Management Institute. All rights reserved.