
DECISION ANALYSIS

Decision Analysis in Projects: Judgments and Biases

John R. Schuyler

Previous articles in this series have described projection and evaluation models. This article focuses on the expert judgments that are inputs into the analysis model.

There are three roles in the decision process, as shown in Figure 1:

Expert or assessor is the role of most professionals. They provide the judgments that go into an evaluation.

Evaluation analysts are primarily responsible for developing project models that generate outcome forecasts.

Decision makers review the forecasts and outcome values of various alternatives and make choices.

Overlapping is common, as people often wear multiple hats.

Judgments

A judgment is a value assessment performed, at least in part, by a person. Various types of judgments, or assessments, are inputs to an evaluation:

• Non-probabilistic

  • Single point values (e.g., a particular cost)

  • Single line forecasts (e.g., inflation rate over time)

A conventional deterministic analysis1 uses only best estimates. Even in a decision analysis, it is adequate to represent minor variables, or those whose values are well known, with single values.

• Probabilistic (or stochastic)

  • Continuous probability distributions (e.g., hours to complete an activity)

  • Discrete probability distributions (e.g., number of subcontractors)

  • Event probabilities, i.e., whether the event happens or not (e.g., whether a task will require rework).

When the value is uncertain, the representation takes the form of a probability distribution. Considering a single variable alone, a probability distribution represents an expert's entire judgment about the possible outcomes of a chance event.
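To make the three probabilistic input types concrete, here is a minimal Python sketch. It is not from the article; the variable names, distributions, and numbers are assumptions chosen for illustration, showing one way an analyst might encode such judgments as sampling functions for later use in a model.

import random

# Illustrative sketch only; the distributions and numbers below are assumptions.

def activity_hours():
    # Continuous distribution: triangular judgment of low, high, and most likely hours.
    return random.triangular(80, 200, 120)   # low=80, high=200, mode=120

def subcontractor_count():
    # Discrete distribution: judged probabilities of needing 1, 2, or 3 subcontractors.
    return random.choices([1, 2, 3], weights=[0.5, 0.3, 0.2])[0]

def rework_required():
    # Event probability: the task either requires rework or it does not.
    return random.random() < 0.15            # judged 15 percent chance of rework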

The term risk analysis is often used when talking about the process of assessing the probability distribution for a single variable.

Objective and Subjective Probability. Because a judgment is performed by a human, at least in part, the resulting probability distribution is called a subjective probability.

The extreme opposite is objective probability, which is supported by a comprehensive understanding of the system or by conclusive empirical evidence. Either we know the value of every possible element in the population or completely understand the process giving rise to various outcome values.

The continuum between the two extremes for the source of a probability distribution is shown in Figure 2. Most judgments will lie somewhere in the middle, far from either extreme.

A desirable attribute for judgments is that they be as objective as possible. With sufficient data or system knowledge we can approach 100 percent objective probabilities. Fully objective assessments are usually impossible.

Judgments, as implied by their name, always involve some degree of subjectivity on the part of the expert. For this reason, inputs to a decision analysis are often called subjective probabilities.

Subjective probability assessments reflect degree of belief. There is no such thing as an “incorrect” probability so long as it best captures the expert's belief. The opinion is personal. Two people, given the same information, often arrive at very different estimates.

In decision analysis, subjective probabilities are often all that are available. However, rigorous use of such judgments leads to improved decisions. The decision model provides a logical way to combine everything important we know about a problem.

A good decision is one that is consistent with the data and with the organization's decision policy. A good decision does not guarantee a good outcome in a particular instance. However, consistently making good decisions increases the likelihood of favorable overall results over time.

Eliciting Judgments. Eliciting means to bring or draw out the judgment of an expert. Unless the expert has been well-trained in probability concepts, eliciting a judgment is usually best done through an interview process. The interviewer, often a decision analyst, acts as an interested but objective party. Various perspectives are discussed and quantified until the expert is confident that the resulting probability distribution accurately reflects his or her best judgment.

Some experts are uncomfortable with quantifying their judgments directly as probability distributions. In such cases, an analog is often used to defuse the probability anxiety. One means is a probability wheel, which uses colored pie-shaped sections to represent the possible outcomes. Figure 3 shows an example. The areas are adjusted until the expert feels that the sections are proportional to the likelihood of the respective outcomes. Numeric probabilities can then be read from a scale on the back of the wheel. A similar approach is to use a device with linear distances to represent probability weights.

A classical physical analogy to uncertainty is an urn containing colored balls or marbles. The colors are chosen to represent different possible outcomes of a chance event. A project manager might be considering which component to use in a critical application. Component X is reliable but expensive. Component Y has a high probability of working in the product application and costs one-third as much. If Component Y is chosen and fails, the losses will bankrupt the company. Let white marbles represent Component Y's “success,” and let black marbles represent “failure.” The mixture can be adjusted to where the manager is indifferent between (A) going with Component X and (B) drawing a marble from the urn. If the actual risk of Component Y failing in the application is less than the fraction of black marbles at this indifference point, then choose Component Y.
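The decision rule implied by the urn exercise can be stated compactly. The following Python sketch is illustrative only; the indifference fraction and judged failure risk are hypothetical numbers, not from the article.

def choose_component(indifference_failure_fraction, judged_failure_risk):
    # indifference_failure_fraction: fraction of black (failure) marbles at which the
    #     manager is indifferent between Component X and the urn gamble.
    # judged_failure_risk: the expert's judged probability that Component Y fails.
    # If Y's judged risk is below that tolerable level, the gamble on Y is preferred
    # to the safe but expensive Component X.
    return "Component Y" if judged_failure_risk < indifference_failure_fraction else "Component X"

# Hypothetical numbers for illustration:
print(choose_component(indifference_failure_fraction=0.05, judged_failure_risk=0.02))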

Figure 1. Roles in the Decision Process


Decision Analysis in Projects

This is the ninth in a series of 12 articles about the probabilistic methods of decision analysis. This installment discusses a role most professionals assume at various times: providing expert opinions that are used in project decisions. There are many types of unintended biases; these should be recognized and dealt with. Estimation performance feedback is a powerful way to improve future judgments and evaluation practices.

Readers are invited to submit written questions and comments on this series to the author via PMI Communications.

1. Expected Value-The Cornerstone. Representing a probability distribution as an unbiased, single value.

2. Optimal Decision Policy. Appraising value or cost: a consistent approach suited to all decision types.

3. Decision Trees. Graphical decision model and EV calculation technique.

4. Value of Information. Evaluating an alternative to acquire additional information.

5. Monte Carlo Simulation. An alternative, popular technique for calculating expected values and outcome probability distributions.

6. Other Probabilistic Techniques. Other established and new probability techniques suited to simple situations.

7. Modeling Techniques - Part I. Project and cash flow projections: approaches, tools and techniques.

8. Modeling Techniques - Part II. Sensitivity analysis; correlation; dynamic models.

9. Judgments and Biases (this article). Encoding expert judgments about risks and uncertainties.

10. Utility and Multi-Criteria Decisions. Decisions involving objectives other than maximizing monetary value.

11. Implementing and Using Decision Analysis. Overcoming barriers to accepting and using decision analysis in projects; management implications.

12. Summary and Recap.

Figure 2. Sources of Probabilities


Figure 3. Probability Wheel


In eliciting an opinion, the expert is asked to consider the possibilities. What is assumed in this assessment? What is the worst (or best) possible outcome? What has to go wrong (right)? Bracketing the possibilities and working inward avoids anchoring on a presupposed central value.

Overconfidence About Our Knowledge. Actual forecasting behavior and clinical experiments repeatedly show that people are overconfident about their knowledge. Consider persons asked to provide, say, an 80 percent confidence interval2 for a parameter. The less someone knows about a topic, the wider his or her confidence interval should be. Typically, only about half of the participants have the true outcomes contained in their judged 80 percent confidence intervals. Some other interesting results from such surveys include:

  • People without knowledge of a topic are often unable to differentiate between 30 and 98 percent confidence intervals.
  • Asking for a 95 percent interval typically gets about a 65 percent confidence range.
  • The more people know about the general subject (not the specific question), the larger the confidence interval they assign. The less they know, the smaller the chance that the interval includes the true value.
  • Even when told beforehand that most persons are overconfident with their intervals, participants continue to make the intervals too small. Asking for two range estimates helps (e.g., first 90 percent, then 50 percent confidence interval).

The solution to this overconfidence problem is practice and feedback. Expert assessors should practice and be given feedback on the quality (outcomes) of their assessments. If artificial situations are necessary for practice, it helps to make the exercise as meaningful as possible, such as by putting some real money at stake.
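As a rough illustration of such feedback, the following Python sketch scores how often an assessor's stated 80 percent confidence intervals actually contained the outcome. The intervals and outcomes are made up for this example.

def interval_hit_rate(intervals, actuals):
    # intervals: list of (low, high) pairs, each a stated 80 percent confidence interval
    # actuals: realized outcomes, in the same order
    hits = sum(1 for (low, high), actual in zip(intervals, actuals) if low <= actual <= high)
    return hits / len(actuals)

# Made-up feedback data: only 2 of the 4 stated intervals contained the outcome (0.5),
# well short of the 0.8 a calibrated assessor would approach over many judgments.
print(interval_hit_rate([(10, 20), (5, 8), (100, 140), (30, 45)], [18, 9, 150, 40]))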

If the judgment is elicited with an interviewer, he or she will often ask the expert first for a plausible range. This might represent about a 50 percent confidence interval. Then, they can discuss extreme values and reasons why those might occur.

Biases

In the first article3 of this series, it was said that one hallmark of a credible evaluation is objectivity. Objectivity here is not meant in the sense of the source of the probability assessment, as discussed above. In the discussion that follows, objectivity is synonymous with lack of bias.

A bias is a repeated or systematic distortion of a statistic, an imbalance about the mean. We want our estimates to prove neither too high nor too low over the long run. The average error4 (actual minus estimate) should tend toward zero over many forecasts.

Achieving objectivity, or low bias, requires attention in three parts of the evaluation:

• Bias-free assessments of the input variables

• A value function, embodying the decision policy, which faithfully measures progress or goodness toward the organization's objective

• Integrity in the decision model structure so that the calculations do not introduce any biases.

The following sections describe some common biases in evaluation.

Evaluation or Decision Policy. The second article in this series was about decision policy: how to measure the value of the project outcome. Decision policy has three elements, any of which can introduce bias:

  • Objective. If the measure does not directly correspond to the organization's purpose, there will be a bias. For example, “quality,” however measured, does not directly measure stockholder value.
  • Time. Present value discounting measures time value of money. If the discount rate does not correctly correspond to the marginal cost of capital, then there is a bias. Most companies use a discount rate that is too high for probability-adjusted forecasts; thus, there is a bias against long-term projects.
  • Risk policy. Many managers, even organizations, believe their responsibility is to act conservatively. They are willing to sacrifice a risk premium, in terms of reduced expected monetary value, in order to reduce risk. This practice may be intentional and appropriate, but it calls for a careful look at whose objective(s) is (are) to be maximized in the decision policy. An objective value measure will be a statistic that corresponds directly and logically to the objective(s) of the organization.

Belief and Perception Errors. Some obvious biases arise from emotions and motivational effects. Others are due to relying on heuristics or rules of thumb. Some biases arise because of poor understanding of statistics or probability theory. There are also perception biases, which are similar to optical illusions; that is, our mental model may not be consistent with reality.

Listed below are some of the more common biases which might be inadvertently introduced into an analysis:

Feelings about the effects of certain outcomes. For example, a contractor may have an excessive desire to win a bid in order to appear “successful.”

Personal feelings. The natural tendency is to shade judgments in ways that reinforce what we want to do.

Biases toward new information. New information is often only sought when a “favorable” report is expected. Bad news doesn't sell. Bad news is often discounted, faulted, or ignored completely.

Believing what one wants to believe. Projects with unreasonably optimistic projections are often selected over projects with more reasonable projections.

Insensitivity to sample size. Often there is insufficient data to draw the conclusion.

Availability bias. We remember examples that were more famous or where we were more closely involved. Some scenarios are easier to imagine than others, such as those in our industry or expertise. The most recent data or experience is weighted most heavily. The easier it is to imagine or remember, the higher the believed likelihood of occurrence.

Insensitivity to prior probabilities. Discrepant and unusual events are given undue weight even though rare. We remember these exceptions and weight them as if commonplace. We also remember common events (generalize). Judgment bias occurs when the current situation is neither common nor unique, but is simply rare.

Sensitivity to the cause of the problem. People are willing to do more to save a species endangered by hunting than endangered by ecological change.

Anchoring. People do not like to change their minds, especially if their position was stated publicly. They “anchor” to the previous position. Later, they recognize information that reinforces the original judgment or decision.

It is widely found that an assessor should not try to build a distribution shape around an initial “best guess.” The expert inadvertently anchors to the original number. This effect is minimized if the expert works inward from the edges of the probability distribution.

Framing. People are sensitive to the wording and context of outcomes. This is the framing of an event. Very different judgments can result from the way the questions are asked.

Consider a major project that will earn $20 million profit for the company if it succeeds. Two million dollars will be lost if the project fails. Asking “What minimum probability of success would you require to be willing to approve this project?” evokes a response different from the complement probability (1 - p) of the answer to “What maximum probability of failure would you accept and still be willing to approve this project?”
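As a reference point, and assuming a risk-neutral, expected-monetary-value decision policy (an assumption; the article does not specify one here), the two questions should produce complementary answers around the same break-even probability. A minimal Python check with the figures above:

profit_if_success = 20.0   # $ millions, from the example above
loss_if_failure = 2.0      # $ millions

# Approving is attractive when p * 20 - (1 - p) * 2 >= 0, i.e., p >= 2 / 22.
break_even_success_probability = loss_if_failure / (profit_if_success + loss_if_failure)
print(round(break_even_success_probability, 3))        # about 0.091
print(round(1 - break_even_success_probability, 3))    # about 0.909, the failure framing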

Surveys show that people are much more averse to outcomes posed in terms of losing jobs or lives than to the same outcomes posed in terms of saving jobs or lives.

Improving Evaluations

Consistent, objective evaluations require good analysis practice.

Biases, with the possible exception of risk attitude, are to be avoided. Everyone upstream from the decision maker should attempt to be as objective as possible. Otherwise, the analysis will be questionable. However, the final stage of the analysis process is to compare risk versus value (or cost) profiles of the different alternatives. The next article in this series will discuss how a consistent risk policy can be embedded in the analysis process.

Certain errors can be detected by validation. A model can sometimes be devised to validate data. Conversely, input data can be used to validate models.

In developing a valid model, it helps to decompose the project system into sub-problems. Quantify subjective assessments, and model any interrelationships between variables. A suitable stochastic method (e.g., Monte Carlo simulation) correctly combines the probability distributions throughout the calculations.
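A minimal Monte Carlo sketch in Python shows how several judged inputs can be combined into an outcome distribution. The distributions and numbers are invented for illustration, not taken from the article.

import random
import statistics

def one_trial():
    hours = random.triangular(80, 200, 120)                             # continuous judgment
    rate = random.choices([95, 110, 130], weights=[0.4, 0.4, 0.2])[0]   # discrete judgment
    cost = hours * rate
    if random.random() < 0.15:                                          # event: rework needed?
        cost *= 1.25                                                    # judged rework penalty
    return cost

trials = [one_trial() for _ in range(10_000)]
deciles = statistics.quantiles(trials, n=10)
print("Expected cost:", round(statistics.mean(trials)))
print("P10-P90 range:", round(deciles[0]), "-", round(deciles[-1]))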

A conscientious effort should be made to best capture the experts’ judgments in assessing input variables. Some of these assessments may be highly subjective. Whether additional investigation is warranted can be evaluated as a value of information problem.5

Recognize that biases do exist. Some of these can be avoided or minimized by carefully structuring the problem. Use an appropriate value measure, and separate judgments from preferences. A common error, for example, is to increase the present value discount rate to compensate for risk. It is far better to use probabilities to represent judgments about uncertainty, and use the discount rate only to represent preference for time value of money.
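The contrast can be illustrated with a small Python sketch in which risk is carried by the probability and the discount rate reflects only the time value of money. All numbers are assumed for illustration.

cash_flow_if_success = 100.0   # received in year 3 (illustrative)
p_success = 0.6                # judged probability of success (illustrative)
time_value_rate = 0.08         # assumed marginal cost of capital
risk_loaded_rate = 0.20        # arbitrary rate inflated to "cover" risk

# Preferred: probability-weighted (expected value) cash flow, discounted for time only.
ev_approach = p_success * cash_flow_if_success / (1 + time_value_rate) ** 3

# Common error: the full success cash flow discounted at an inflated rate.
rate_loading_approach = cash_flow_if_success / (1 + risk_loaded_rate) ** 3

print(round(ev_approach, 1), round(rate_loading_approach, 1))
# The two rarely agree, and the discrepancy grows with the time horizon.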

Perhaps the most powerful way to improve evaluations is to do post-audit analyses. These are follow-up analyses of decisions after the general results have become known. There is a psychological bias toward rationalization. People are often unable to reconstruct their thought process after learning the outcome. It is important to preserve the original reasoning by documenting the analysis. Provide performance feedback to the assessors. Use what is learned to improve the evaluation process.
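One simple form of such feedback is to track the average error (actual minus estimate, as defined earlier) across completed estimates; it should tend toward zero if judgments are unbiased. The following Python sketch uses made-up post-audit data.

def average_error(records):
    # records: list of (estimate, actual) pairs from completed, post-audited projects
    errors = [actual - estimate for estimate, actual in records]
    return sum(errors) / len(errors)

history = [(100, 120), (80, 85), (60, 75), (40, 38)]   # made-up post-audit data
print(average_error(history))   # a persistently positive value suggests underestimation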

Summary

As professionals, most of us spend a great portion of our workdays processing or preparing information for others to act upon. This work is either evaluation itself, or preparation for evaluation. We can serve our employers and clients best by ensuring that our evaluations are objective—one hallmark of professionalism.

In the next article, we'll be addressing evaluation issues where decision policy is inherently not objective by circumstance or by design. For example, some organizations must address multiple objectives. Another example is an organization wishing to be conservative and handle this in a systematic way.

NOTES

1. A deterministic analysis is one where all input variables are singly determined.

2. An 80 percent confidence interval is the range bounded by “low” and “high” numbers such that there is a 10 percent chance that the outcome will be below the “low” and a 10 percent chance that the outcome will be above the “high.”

3. In “Expected Value—The Cornerstone,” PM Network, January 1993, p. 27-31.

4. Ibid.

5. Comparing the cost of additional information or analysis versus the impact on schedule and other costs and benefits. Evaluating an alternative to obtain additional information was discussed in “Value of Information,” PM Network, October 1993, p. 19-23.

 

John R. Schuyler, PE, CMA, is principal of Decision Precision®, an Aurora, Colorado, firm specializing in risk and economic decision analysis. He teaches Petroleum Risks and Decision Analysis in association with Oil & Gas Consultants International.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI.

PM Network ● January 1995
