Project risk analysis

How to make better choices in uncertain times

Abstract

Managing projects with multiple risks and uncertainties is central to the art of project management. However, tough economic times make it much more difficult to manage project risk. How do you predict possible events one month in the future, let alone for the entire duration of a project? What would the project cost, given uncertainties in financing? Are your suppliers still going to be in business? When assessing risks and uncertainties, project managers often rely on intuition rather than logic and comprehensive analysis. Intuitive thinking is often subject to illusions, which cause predictable mental mistakes and, eventually, poor decisions. The way to counterbalance these psychological illusions is a systematic evaluation of risks and associated mitigation efforts. It is hard to manage something that cannot be measured: project managers must quantify risk probabilities, outcomes, and their cumulative effect on a project. Moreover, it is important to evaluate the various risk mitigation options: how much would each option cost, and how long would it take? This paper focuses on a few straightforward techniques that will help demystify project risk analysis.

Introduction

The root cause of almost all project failures is human error or misjudgement. These errors are hard to prevent, for they stem from human psychology. However, decision making is a skill that can be improved by training. By understanding how psychological heuristics and biases can affect our judgment, it is possible to mitigate their negative effects and make better decisions.

In his paper “Lessons Discovered but Seldom Learned or Why Am I Doing This if No One Listens,” Hall (2005) reviewed a number of projects that had failed or had major problems. Among them were the following:

  • Malfunctions in bank accounting software systems, which cost millions of dollars;
  • Space programs, including the Mars Polar Lander, Mars Climate Orbiter, and Ariane 5 European Space Launcher, which were lost; and
  • Defense systems, including the Patriot Missile Radar system and the Tomahawk/LASM/Naval Fires Control System, which had serious problems.

Hall (2005) listed the various reasons why projects are unsuccessful:

  • Sloppy requirements and scope creep,
  • Poor planning and estimation,
  • Poor documentation,
  • Issues with implementation of new technology,
  • Lack of disciplined project execution,
  • Poor communication,
  • Poor or inexperienced project management, and
  • Poor quality control.

In his list, Hall (2005) included only the results of human factors; he did not find any natural causes—earthquakes, say, or falling meteorites or locust attacks—for project failures in these cases. In his paper, he also described a recent study by the Swiss Federal Institute of Technology. The study analyzed 800 cases of structural failures where engineers were at fault. In those incidents, 504 people were killed, 592 injured, and millions of dollars of damage were incurred. The main reasons for failures were:

  • Insufficient knowledge (36%),
  • Underestimation of influence (16%),
  • Ignorance, carelessness, neglect (14%),
  • Forgetfulness (13%),
  • Relying upon others without sufficient control (9%),
  • Objectively unknown situation (7%), and
  • Other factors related to human error (5%).

Extensive research on why projects fail in different industries leads to the same conclusion: Human factors are usually the cause (Wilson, 1998; Johnson, 2006; Rombout & Wise, 2007). Furthermore, there is actually one fundamental reason for all these problems: poor judgment. Hall (2005) asked, “Why don't more people and organizations actually use history, experience, and knowledge to increase their program success?” The answer lies in human psychology.

All project stakeholders make mental mistakes or have biases of different types. Although the processes described in A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—Fourth Edition (Project Management Institute [PMI], 2008) and many project management books help us to avoid and correct these mental mistakes, we should try to understand why these mistakes occur in the first place.

Intuitive and Controlled Thinking in Project Management

In 2005, Malcolm Gladwell, a staff writer for The New Yorker, published the book Blink: The Power of Thinking Without Thinking, which instantly became a best seller. Gladwell focused on the idea that most successful decisions are made intuitively, or in the “blink of an eye,” without comprehensive analysis. Shortly afterwards, Michael LeGault (2006) wrote Think! Why Critical Decisions Can't Be Made in the Blink of an Eye as a response to Malcolm Gladwell. LeGault argued that in our increasingly complex world people simply do not have the mental capabilities to make major decisions without doing a comprehensive analysis. So, who is right—Gladwell or LeGault? Do we blink or do we think?

Both LeGault (2006) and Gladwell (2005) raised a fundamental question: What is the balance between intuitive (“gut feel”) and controlled (analytical) thinking? The answer is not straightforward. As the human brain evolved, it developed certain thinking mechanisms—mechanisms that are similar for all people regardless of their nationality, language, culture, or profession. Our mental machinery has enabled us to achieve many wondrous things: architecture, art, space travel, and cotton candy. Among these mechanisms is our capacity for intuitive thinking. When you think automatically, and even sometimes when you are analyzing a situation, you apply certain simplification techniques. In many cases, these simplification techniques can lead to wrong judgments.

The balance between intuitive and analytical thinking for a particular problem is not clear until the decision-making process is fully examined. Significant intellectual achievements usually combine both automatic and controlled thinking. For example, business executives often believe that their decisions were intuitive; but when they are questioned, it can be demonstrated that they did perform some analysis (Hastie & Dawes, 2001). When people think consciously, they are able to focus on only a few things at once (Dijksterhuis, Bos, Nordgren, & van Baaren, 2006). The more factors involved in the analysis, the more difficult it is to make a logical choice. In such cases, decision makers may switch to intuitive thinking in an attempt to overcome the complexity.

However, there is always the option to use different analytical tools, including risk analysis software, to come up with better decisions. So, coming back to our original question—do we blink or think?—it is important not to dismiss the value of intuitive thinking in project management. Ever since there have been projects to manage, managers have been making intuitive decisions, and they will continue to do so. Intuition can work well for most short-term decisions of limited scope. Because project managers rarely have enough time and resources to perform a proper analysis, and decision analysis expertise is not always available, there is always the temptation to make intuitive decisions. Even if you have experience with and knowledge of a particular area, some natural limitations to your thinking mechanisms can lead to potentially harmful choices. In complex situations, intuition may not be sufficient for the problems you face. This is especially true for strategic decisions that can significantly affect the project. In addition, intuitive decisions are difficult to evaluate: when you review a project, it is difficult to understand why a particular intuitive decision was made.

Cognitive and Motivational Biases

Let's imagine that you are a campaign manager for a U.S. senator. You organized a few very successful meetings with voters in local day care centers, distributed one million “My Opponent Is a Degenerate” flyers, and released $3 million worth of negative ads exposing your opponent's scandalous behavior when he was five years old. After all your hard work, you estimate that your senator has the support of at least 55% of the decided voters. Unfortunately, your estimate happens to be wrong: in reality, you have only 40% support. So what is the cause of this discrepancy? This is not only a mistake in your estimate of the poll numbers; there is also the question of whether you ran your campaign (project) correctly.

Why did you make this mistake? There might be a number of explanations:

  • You were overconfident, and your expectations were greater than what was actually possible.
  • You did not accurately analyze your own data.
  • You were motivated to produce such positive estimates because you didn't want to be fired if the poll numbers were not good enough.
  • Your boss, the senator, told you what your estimates should be.

We can explain the discrepancy in your poll numbers, and perhaps other problems in the campaign, by looking at some of the biases in your thinking. Don't worry—we're not picking on you. These are biases that can occur in anyone's thinking.

There are two types of biases: cognitive and motivational. Cognitive biases show up in the way we process information. In other words, they are distortions in the way we perceive reality. There are many forms of cognitive bias, but we can separate them into a few groups:

  • Behavioral biases influence how we form our beliefs. One example is the illusion of control: believing we can influence something that we cannot. Another example is our tendency to seek information even when it cannot affect the project.
  • Perceptual biases can skew the ways we see reality and analyze information.
  • Probability and belief biases are related to how we judge the likelihood that something will happen. This set of biases can especially affect cost and time estimates in project management.
  • Social biases are related to how our socialization affects our judgment.
  • Memory biases influence how we remember and recall certain information. An example is hindsight bias (“I knew it all along”), which can affect project reviews.

An example of one of the more common perceptual biases in project management is overconfidence. Many project failures originate in our tendency to be more certain than we should be that a certain outcome will be achieved. Before the disaster of the space shuttle Challenger, NASA scientists estimated that the chance of a catastrophic event was 1 per 100,000 launches (Feynman, 1988). Given that the disaster occurred on the Challenger's 10th launch (NASA, 2007), the 1 in 100,000 estimate now appears to be wildly optimistic. Overconfidence is often related to judgment about probabilities, and it can affect our ability to make accurate estimates. Sometimes we can be overconfident in our very ability to resolve a problem successfully (McCray, Purvis, & McCray, 2002).

Motivational biases are caused by the personal interests of the person expressing an opinion. They are often easy to identify but difficult to correct, as you must remove the motivational factors causing the bias. If an opinion comes from an independent expert, removing the bias will not be too difficult because, by definition, an independent expert does not have any vested interest in the project outcomes. If you suspect that a member of the project team is biased, however, corrective actions can be difficult to accomplish, as it is hard to eliminate the personal interests of team members or managers from the project without removing the individuals themselves.

Identifying Risk and Uncertainties

An effective risk management process is one of the most important foundations of successful project management. During tough economic times, you do not have the luxury of making costly mistakes. You should perform an analysis to determine what could happen during the course of a project.

Almost everything in a project is uncertain. Before you start a project, you need to determine the potential uncertainties, identify the risk events that may affect the project, and generate alternative project scenarios. Various uncertainties can affect a project schedule, including:

  • Duration and cost of activities,
  • Lags between tasks,
  • Resource allocation,
  • Calendars: for example, certain days can be lost due to weather conditions, and
  • Work breakdown structures: certain tasks will or will not be executed under certain conditions.
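Duration and cost uncertainty of this kind is often captured with three-point (low, most likely, high) estimates and a triangular distribution. The following sketch shows one way such estimates might be encoded; the task names and numbers are hypothetical:

```python
import random

# Hypothetical three-point duration estimates, in days: (low, most likely, high)
estimates = {
    "Design": (10, 15, 25),
    "Build":  (20, 30, 50),
    "Test":   (5,  8,  14),
}

def sample_duration(low, mode, high):
    # random.triangular draws one sample from a triangular distribution;
    # note its argument order is (low, high, mode)
    return random.triangular(low, high, mode)

# One possible realization of the schedule's task durations
scenario = {task: sample_duration(*e) for task, e in estimates.items()}
for task, days in scenario.items():
    print(f"{task}: {days:.1f} days")
```

Each sampled scenario is one input to the quantitative techniques discussed later in the paper.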

The PMBOK® Guide—Fourth Edition (PMI, 2008) defined risk identification as one of the core processes of project management: “Identify Risks is an iterative process because new risks may become known as the project progresses through its life cycle” (p. 282). It suggested a number of tools and techniques for risk identification, which can be used for generating alternatives as well:

  • Documentation review is a review of relevant project documents associated with current and previous projects. If a plan is inconsistent with its requirements, that can be a source of potential risks.
  • Information-gathering techniques include brainstorming, the Delphi technique, interviewing, decision conferencing, and strengths, weaknesses, opportunities, and threats (SWOT) analysis.
  • Assumption analysis identifies risks by reviewing inconsistencies or inaccuracies in original project assumptions.
  • Diagramming techniques include flow charts, cause-and-effect diagrams (Exhibit 1), event chain diagrams (Exhibit 2), and mind maps. In particular, an event chain diagram presents single events (risks) or multiple related events (event chains) affecting the project schedule on a Gantt chart.

Cause and effect diagrams

Exhibit 1: Cause and effect diagrams

Event Chain Diagram

Exhibit 2: Event Chain Diagram

Generating Alternatives

We often forget to come up with alternatives and then think about them only if we are concerned that our primary plan may not work. There are a few ways to develop alternatives. First, think of all the parameters that cannot be changed. For example, quality and safety should not be jeopardized; however, in reality they are often the first tossed overboard when tradeoffs are made. Come up with alternatives related to cost or duration. A good tool for analyzing alternatives for each objective is a strategy table (Exhibit 3), in which alternatives for each objective can be connected.

Strategy Table

Exhibit 3: Strategy Table

Risk Breakdown Structures and Risk Templates

Absolutely everything can be classified and categorized. Aliens we have not yet encountered have already been classified in science fiction movies: there are little green men with slim bodies and enormous heads. Our tendency to create hierarchies and classifications is not an obsession; it actually helps us understand the nature of a problem. For example, risks can be assigned to different categories, such as external and internal risks, or organizational and technical risks. This type of hierarchy is not only useful for risk identification; it can also be used in quantitative analysis. When we perform an analysis using event chain methodology, this type of risk breakdown structure is one of the main inputs.

The PMBOK® Guide—Fourth Edition (PMI, 2008) recommended using checklist analysis as a risk identification technique. With it, you create a standard set of risks that can be applied to many projects. When you review the list, you can ask yourself: “Could that occur in my project?” This can mitigate the negative effect of the availability heuristic, where you may remember only those events that occurred recently or are related to a major incident. The PMBOK® Guide—Fourth Edition (PMI, 2008) recommended that you create risk templates from historical data. Unfortunately, there are no universal risk templates that apply to all industries and all types of projects. Most templates, including the example from the PMBOK® Guide—Fourth Edition, are generic and may not be relevant to your specific project. However, they can be useful as a starting point for creating your own risk templates, which should be reviewed and updated regularly. The project management literature includes many examples of risk lists that can be used as templates (see, for example, Hillson [2002]). Kendrick (2003) proposed a more advanced type of template: risk questionnaires. These provide three choices for each risk, and the project manager selects when the risk is likely to manifest itself during the project: (1) at any time, (2) about half the time, or (3) less than half the time. This helps project managers qualitatively understand the chance that a risk will occur.

Qualitative Risk Analysis

The PMBOK® Guide—Fourth Edition (PMI, 2008) defined qualitative risk analysis as “a process of prioritizing risks for subsequent further analysis or action by assessing and combining their probabilities and impact” (p. 289). The idea is to prioritize risks based on their probability, their impact on project objectives, the time frame in which they may occur, and risk tolerance.

One of the most exciting recent space missions, New Horizons, is supposed to explore Pluto and a region of the outer solar system called the Kuiper Belt. New Horizons was launched on January 19, 2006, and its voyage to Pluto will take almost 10 years. The spacecraft contains 24 pounds of plutonium, which will be used for power generation. NASA estimated that there is a 1 in 1.4 million to 1 in 18 million chance that an extremely unlikely launch area accident could release up to 2 percent of the plutonium. Karl Grossman (1997) did not agree with NASA's interpretation of the risk: “If it's 2 percent or it's 6 percent or if it's 20 percent or if it's 100 percent, when you are talking about plutonium, you are talking about the most radioactive substance known” (Tobin, 2006, ¶15). Fortunately, New Horizons was launched successfully without accident; however, the controversy over using nuclear energy for space missions continues.

The issue here is that it is not enough to determine the probability of risks. We must also quantify the impact of the risks. The combination of probability and impact will give us an input necessary to make a decision. Quantitative and qualitative risk analysis can be used to analyze the combined probability and impact of the risk. Apparently, Karl Grossman (1997) has a much lower risk tolerance than NASA.

In many projects, especially smaller ones, a quantitative risk analysis is not required to determine which risks are most important. It is enough to know their probability of occurrence and impact on project objectives, such as time, cost, scope, and so on. Negative impacts are considered threats; positive impacts are opportunities. When you assess both probabilities and impacts, you may use a probability and impact matrix to prioritize the risks (Exhibit 4). Black areas represent high risks that have first priority for mitigation or avoidance. White areas represent low risks. Organizations define the classifications of high and low risks based on risk preferences: the more risk-averse an organization is, the more black area the matrix will have.

Probability/impact matrix

Exhibit 4: Probability/impact matrix
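A probability and impact matrix such as Exhibit 4 boils down to a simple scoring rule. The sketch below is one possible encoding; the 0–1 scales, the thresholds between the “white” and “black” areas, and the sample risks are all illustrative assumptions that each organization would calibrate to its own risk tolerance:

```python
def classify_risk(probability, impact):
    """Map a (probability, impact) pair, each on a 0..1 scale,
    to a priority band of a probability/impact matrix."""
    score = probability * impact
    if score >= 0.18:        # "black" zone: mitigate or avoid first
        return "high"
    if score >= 0.05:        # intermediate zone: monitor closely
        return "moderate"
    return "low"             # "white" zone

# Hypothetical risks: name -> (probability, impact)
risks = {
    "Key supplier goes out of business": (0.3, 0.9),
    "Minor scope change":                (0.5, 0.1),
    "Weather delay":                     (0.2, 0.2),
}
# Print risks in descending priority order
for name, (p, i) in sorted(risks.items(), key=lambda r: -r[1][0] * r[1][1]):
    print(f"{classify_risk(p, i):8s} {name}")
```

A more risk-averse organization would simply lower the thresholds, enlarging the “black” area of the matrix, as the paper notes.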

Modern quantitative analysis techniques, such as event chain methodology, can automatically prioritize risks using sensitivity analysis, which is a relatively easy process as long as you have a project schedule and a risk breakdown structure. However, the probability and impact matrix remains a useful tool, especially when you want to prioritize risks, such as quality, reliability, and safety risks, that are not directly related to the project schedule.

Quantitative Risk Analysis

You may need to analyze how the combination of the risks and uncertainties could affect the project. The goal of this analysis is to create a “risk profile” of the project. You may need to know the following information:

  • The chance that the project will be completed within a certain period and on budget;
  • The project's success rate, or the chance that it will be completed; and
  • The low, average, and high estimates for duration, cost, and other project parameters.

Quantitative risk analysis methods include:

  • Sensitivity analysis. This is a type of probabilistic analysis that determines how sensitive the results of the analysis are to uncertainties in the input variables. Sensitivity analysis determines which uncertainty has the greatest potential impact.
  • Monte Carlo simulation of the project schedule. This is a mathematical method used in risk analysis. Monte Carlo simulations approximate the distributions of potential results (project duration, cost, success rates, and other parameters) based on probabilistic inputs (task durations, costs, risks, and other input variables). Each trial is generated by randomly drawing a sample value for each input variable from its defined probability distribution. These sample values are then used to calculate the results. The procedure is repeated until the probability distributions are sufficiently well represented to achieve the desired level of accuracy.
  • Decision tree analysis. This is a type of analysis that determines which decision is best. For example, in project cost management, a decision tree assists in calculating the value of each decision and determining which decision costs the least.
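As a small illustration of the decision tree calculation, the following sketch compares the expected cost of two hypothetical alternatives; the branch probabilities and costs are invented for the example:

```python
# Hypothetical decision: buy a component vs. build it in-house.
# Each branch is a list of (probability, cost) outcomes; probabilities sum to 1.
alternatives = {
    "buy":   [(0.9, 100_000), (0.1, 160_000)],  # small chance of rework
    "build": [(0.6,  80_000), (0.4, 200_000)],  # larger chance of overrun
}

def expected_cost(branches):
    # Expected value: sum of probability-weighted outcomes
    return sum(p * cost for p, cost in branches)

best = min(alternatives, key=lambda a: expected_cost(alternatives[a]))
for name, branches in alternatives.items():
    print(f"{name}: expected cost = {expected_cost(branches):,.0f}")
print("least-cost decision:", best)
```

Here the “buy” branch wins on expected cost even though its best-case outcome is worse, which is exactly the kind of trade-off a decision tree makes explicit.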

Modern software makes quantitative risk analysis easy to perform. For example, to run Monte Carlo simulations of the project schedule, you assign risks and uncertainties to the tasks and resources of the schedule; you may, for instance, define a statistical distribution for a task's duration. Among other things, the analysis will determine the chance that the project will be completed on time.
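A minimal Monte Carlo sketch of such a schedule simulation, assuming a simple three-task serial schedule with triangular duration distributions and an illustrative 60-day deadline (all figures hypothetical):

```python
import random

random.seed(42)  # fixed seed for reproducible trials

# Hypothetical serial schedule: (low, most likely, high) durations in days
tasks = [(10, 15, 25), (20, 30, 45), (5, 8, 14)]
DEADLINE = 60
TRIALS = 10_000

def simulate_once():
    # For a serial schedule, total duration is the sum of sampled task durations
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

on_time = sum(simulate_once() <= DEADLINE for _ in range(TRIALS))
print(f"Chance of finishing within {DEADLINE} days: {on_time / TRIALS:.1%}")
```

A real scheduling tool recomputes the whole network (parallel paths, lags, calendars) on every trial rather than a simple serial sum; the sampling loop, however, works the same way.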

Event Chain Methodology

Event chain methodology is one of the modern quantitative analysis techniques (Virine & Trumper, 2007). It is an uncertainty modeling and schedule network analysis technique focused on identifying and managing the events and event chains that affect project schedules. Event chain methodology helps to:

  • Mitigate the effect of motivational and cognitive biases in estimating and scheduling. In many cases, project managers intentionally or unintentionally create project schedules that are impossible to implement.
  • Simplify the process of defining risks and uncertainties in project schedules, in particular by improving the ability to provide reality checks and visualize multiple events.
  • Perform more accurate quantitative analysis by taking into account such factors as the relationships between different events and the actual moment when events occur.

Event chain methodology is based on the following principles:

  1. An activity (task) in most real life processes is not a continuous uniform procedure. Tasks are affected by external events, which transform an activity from one state to another. One of the important properties of an event is the moment when it occurs during the course of an activity. In most cases this moment is probabilistic and can be defined using a statistical distribution.
  2. Events can cause other events, which creates event chains. These event chains can significantly affect the course of the project. For example, a requirement change can cause an activity to be delayed. To accelerate the activity, the project manager allocates a resource from another activity, which then leads to a missed deadline. Eventually, this can lead to the failure of the project.
  3. Once events and event chains are defined, quantitative analysis using Monte Carlo simulation can be performed to quantify their cumulative impact. Probabilities and impacts of risks are used as input data for Monte Carlo simulation of the project schedule. In most real life projects, it is necessary to supplement the uncertainties expressed as events with distributions for duration, start time, cost, and other parameters.
  4. The single events or event chains that have the most potential to affect the project are the “critical events” or “critical chains of events.” By identifying them, we can mitigate their negative effects. These critical chains of events can be identified by analyzing the correlations between the main project parameters, such as project duration or cost, and the event chains.
  5. Event chain diagrams are visualizations that show the relationships between events and tasks and how events affect each other (Exhibit 2). The simplest way to represent these chains is to depict them as arrows associated with certain tasks or time intervals on the Gantt chart. Different events and event chains can be displayed using different colors. Events can be global (affecting all tasks in the project) or local (affecting a particular task). By using event chain diagrams to visualize events and event chains, the modeling and analysis of risks and uncertainties can be significantly simplified.

The process of the analysis using event chain methodology includes the following steps:

  1. Define the project schedule;
  2. Assign risks to tasks and resources. Each risk has a number of properties: probability, outcome, and the moment when the risk may occur;
  3. Define relationships between risks, that is, define event chains;
  4. Perform Monte Carlo simulations; and
  5. Analyze the results in the same way as with “classic” Monte Carlo simulations.
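The steps above can be sketched in a few lines of Python. Here a single hypothetical risk (“requirement change,” 30% probability) delays the design task, and a chained second event (rework in the build task) fires only when the first one does; all figures are invented for illustration:

```python
import random

random.seed(7)  # fixed seed for reproducible trials

# Step 1: project schedule (deterministic baseline durations, in days)
BASE_DURATIONS = {"design": 15.0, "build": 30.0}
TRIALS = 10_000

def simulate_once():
    # Steps 2-3: assign a risk to the design task and chain a second event to it
    durations = dict(BASE_DURATIONS)
    if random.random() < 0.30:          # event: requirement change (30%)
        durations["design"] += 10.0     # outcome: 10-day delay
        if random.random() < 0.50:      # event chain: rework in build (50%)
            durations["build"] += 5.0
    return sum(durations.values())      # serial schedule: total = sum of tasks

# Steps 4-5: run the simulation and analyze the results
results = [simulate_once() for _ in range(TRIALS)]
print(f"mean project duration: {sum(results) / TRIALS:.1f} days")
print(f"chance of finishing within 50 days: "
      f"{sum(r <= 50 for r in results) / TRIALS:.1%}")
```

Real event chain tools also randomize the moment when each event occurs within a task and correlate events across the network; this sketch only shows the basic probability-and-outcome mechanics.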

Conclusions

Human mistakes are the root cause of almost all project failures. People make these mistakes because of various cognitive and motivational biases. In tough economic times, it is very important to perform logical and accurate analysis to avoid costly mistakes. Various qualitative and quantitative risk analysis techniques can be used to make better choices. Qualitative risk analysis is essentially a judgment elicitation exercise. It helps to mitigate the negative effects of cognitive and motivational biases by providing a framework for risk identification, analysis of risk responses, and identification of mitigation efforts. Event chain methodology is a modern technique for quantitative risk analysis that focuses on identifying the events that affect the project schedule.

References

Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311(5763), 1005–1007.

Feynman, R. P. (1988). An outsider's inside view of the Challenger inquiry. Physics Today, 41(2), 26–37.

Gladwell, M. (2005). Blink: The power of thinking without thinking. New York: Little, Brown and Co.

Grossman, K. (1997). Wrong stuff: The space program's nuclear threat to our planet. Monroe, ME: Common Courage Press.

Hall, D. (2005). Lessons discovered but seldom learned or why am I doing this if no one listens. Proceedings of Space Systems Engineering and Risk Management Symposiums, Los Angeles, CA, 170–178.

Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world. Thousand Oaks, CA: Sage Publications.

Hillson, D. (2002). Structuring and breakdown. Retrieved June 4, 2009, from http://www.risk-doctor.com/pdf-files/rbs1202.pdf

Johnson, J. (2006). My life is failure. West Yarmouth, MA: The Standish Group International.

Kendrick, T. (2003). Identifying and managing project risk: Essential tools for failure-proofing your project. New York: AMACOM.

LeGault, M. (2006). Think! Why critical decisions can't be made in the blink of an eye. New York: Threshold Editions.

McCray, G. E., Purvis, R. L., & McCray, C. G. (2002). Project management under uncertainty: The impact of heuristics and biases. Project Management Journal, 33(1), 49–57.

National Aeronautics and Space Administration (NASA). (2007). Space shuttle flights by orbiter. Retrieved May 16, 2007, from http://www.nasa.gov/mission_pages/shuttle/launch/orbiter_flights

Project Management Institute. (2008). A guide to the project management body of knowledge (PMBOK® guide) (4th ed.). Newtown Square, PA: Project Management Institute.

Rombout, S., & Wise, D. (2007). Failure to launch: Has poor estimating compromised your project? Proceedings of the 2007 PMI College of Scheduling Conference, Vancouver, BC, Canada.

Tobin, K. (2006). Probe to Pluto set for launch: NASA mission to planet expected to take nearly 10 years. Retrieved from http://www.cnn.com/2006/TECH/space/01/13/pluto.mission/index.html

Virine, L., & Trumper, M. (2007). Project decisions: The art and science. Vienna, VA: Management Concepts.

Wilson, S. (1998). Failed IT projects (the human factor). Retrieved July 20, 2009, from http://faculty.ed.umuc.edu/~meinkej/inss690/wilson.htm

© 2009, Lev Virine
Originally published as a part of 2009 PMI Global Congress Proceedings – Orlando, Florida
