Critical decision-making skills for project managers

Introduction

In project management we make decisions on a daily basis. Most are relatively unimportant; some are critical and will determine whether the project succeeds or fails. To ensure we make rational, unbiased decisions, critical decision making requires you to:

- Obtain complete and accurate data

- Examine your logic and your biases

- Examine your premises

- Be aware of your motivations

- Think through both short-term and long-term impacts

- Know your own limits (be humble)

- Check with others

- Be comfortable with resulting uncertainty

A decision requires several things:

  • An event that drives a need for a decision
  • A timeframe within which to make the decision
  • Facts and information that need to be taken into account
  • Assumptions about
    • Things we don't know – what can we safely assume?
    • The facts themselves – are they accurate and unbiased?
    • The outcomes of the decision – can we predict the future accurately enough to select the best long-term solution?
    • Our own objectivity
    • The acceptability of the decision – will everyone involved accept our decision?

Most of us consider ourselves competent decision makers, based on our history of making reasonable decisions on past projects. Yet a great deal of recent neurological research indicates that our brains are not normally logical; in fact, most decisions are made emotionally and are only justified later by the rational portion of our brains if they are questioned. Once a normal person has made a decision, he or she searches for data to support that decision rather than the other way around.

How can you ensure that your critical project decisions are as good as they can be? Here we will look at the most common flaws in decision making:

  • Errors in logic
  • False assumptions
  • Unreliable memories
  • Mistaking the symptom for the problem
  • Biases

Classical Decision-Making Approach

The classical approach to making decisions in management is a very rational set of steps:

Identify the problem – recognize there is a problem, define the goals, and gather the information needed to make a rational decision.

Generate all possible solutions – brainstorm all solutions, preferably in a group. Don't filter out anything even remotely reasonable at this point.

Generate objective criteria – generate the measurement criteria to assess the possible solutions for feasibility and reasonableness. Begin taking into account criteria for measuring the success or failure of the decision.

Select the best option – using the filtering criteria, make a decision on the best possible solution.

Implement the solution – put into place the preferred solution.

Monitor the results – track and monitor the outcome of the implemented solution and the results that ensue. This may take some time for long-term outcomes to become apparent. Did the proposed solution work or should another solution be implemented?
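
To make the criteria-generation and selection steps concrete, the sketch below scores a handful of options against weighted criteria, a simple decision matrix that is one common way of applying the classical model. The criteria, weights, options, and scores are invented purely for illustration.

    # A minimal decision-matrix sketch in Python. Each option is scored 1-5
    # against every criterion (higher is better); weights reflect the relative
    # importance of each criterion and sum to 1.0. All numbers are invented.
    criteria = {"cost": 0.40, "schedule_impact": 0.35, "risk": 0.25}

    options = {
        "Add a second vendor": {"cost": 2, "schedule_impact": 4, "risk": 4},
        "Crash the schedule":  {"cost": 1, "schedule_impact": 5, "risk": 2},
        "Reduce scope":        {"cost": 4, "schedule_impact": 3, "risk": 3},
    }

    def weighted_score(scores):
        """Return the weight-adjusted total for one option."""
        return sum(criteria[name] * value for name, value in scores.items())

    # Rank the options from best to worst by weighted score.
    for name, scores in sorted(options.items(),
                               key=lambda kv: weighted_score(kv[1]),
                               reverse=True):
        print(f"{weighted_score(scores):.2f}  {name}")

The value is less in the arithmetic than in the discipline: committing to criteria and weights before scoring the options makes it harder to pick an answer first and justify it afterward.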

Problems with the Classical Model

However, there is virtually no support in real-life management studies for the utility of the rational model. It rests on a long list of assumptions that must hold in practice:

  • The problem is clear and unambiguous
  • We have identified the real problem and not just a symptom of the problem
  • Managers have perfect information
  • Objectives are known and agreed to by everyone
  • The alternatives and their consequences are known or predictable
  • Managers are rational, systematic, and logical
  • Managers work in the best interests of their organizations
  • Ethical decisions do not arise in the decision-making process
  • Time and resources are not a consideration
  • Decisions will be implemented willingly and supported by all stakeholders

But in real life:

  • Decision makers rarely have access to perfect information
  • Opinions often outnumber facts
  • Decision makers are influenced by other people who have a strong interest in a particular outcome
  • Decision makers are limited in their ability to comprehend vast amounts of information
  • The data may be so poorly presented that a decision is not obvious
  • Decision makers seldom can accurately forecast future consequences
  • Emotions, fatigue, attitudes, and motives can interfere to prevent rational decision making
  • Culture and ethical values influence the decision-making process

The rational approach works much better in the field of science, where there is time to gather sufficient data before making a decision and to take a logical approach to arriving at a decision. As project managers, we often must make decisions under time pressure and with inadequate data of questionable accuracy. In this environment we base decisions mostly on our own experience and the experience of people we trust well enough to ask their opinion.

Recent Studies

Advances in the field of neurology have made it possible to measure activity in different areas of the brain during the process of making a decision, even when the activations are only milliseconds apart. This has provided fascinating insight into how the human brain makes decisions. The old statement that “Man is not a rational animal, he is a rationalizing animal” turns out to be very true.

In the evolution of our species, the limbic system evolved to enable us to avoid danger. Before we can think rationally about a decision, the limbic system acts automatically, beginning the decision-making process before we're consciously aware that anything is happening (Walton et al., 2004). Without such an automatic response mechanism, our ancient predecessors would not have become aware of predatory animals until it was too late.

Only much later in evolution did our neocortex develop, specifically the anterior cingulate cortex and the orbitofrontal cortex. These brain areas are involved in rational decision making, but because they developed much later in the evolutionary process, they are slower and do not act until after the more primitive areas of our brain have already made a decision. To quote Cain (2012): “The new brain is responsible for thinking, planning, language, and decision-making - some of the very faculties that make us human. Although the new brain also plays a significant role in our emotional lives, it's the seat of rationality.”

Only a fraction of our brain is involved with conscious, rational behavior. The majority of it works madly regulating everything from breathing to how much to eat when we're at dinner. Eagleman (2011) states, “the unconscious workings of the brain are so crucial to everyday functioning that their influence often trumps conscious thought.”

Stanovich and West (2000) distinguished two systems of thinking involved in decision making, which they called System 1 and System 2. System 1 is the more primitive, automatic system and System 2 is the more rational, deliberate one. Kahneman (2012) says “… busy System 1 carries out fast thinking, while sluggish System 2 handles slow thinking. Fast thinking is intuitive—it engages the automatic mental activities of perception and memory. Slow thinking is deliberate and effortful. When System 1 presents a plausible story, System 2 will often pass it through uncritically. One may believe a rational choice has been made, when in fact it was not.”

Can we eliminate the processes in our “System 1”? No, we can't. It's an automatic system. But we can learn to identify its involvement in our decisions. To cite Kahneman again: “System 1 is automatic and ever active, sorting through feelings and memories to make suggestions to System 2, which produces the decision. Usually, this process serves one well. However, System 1 tends to have biases, and relies on the most readily available answers, which can cause judgment errors System 2 can't detect. System 2 is too slow and effortful to sort through every decision, and so the two systems end up compromising.”

Why is this important? Why should we care as project managers?

The reason this is important is that the more we understand about our own decision-making abilities and processes, the better our decisions will be. Now that we see that most of our decisions are made in a less-than-optimal fashion, we can better control that process and make a more intelligent effort to make rational decisions.

Flaws in Decision Making

When faced with the task of making a difficult decision, the brain uses simple, efficient rules, called heuristics. One of these rules, which Tversky and Kahneman (1973) called the availability heuristic, is the tendency to make a decision based on how easy it is to recall similar decisions.

The availability heuristic is an unconscious process that operates on the notion that, “if you can think of it, it must be important.” In other words, how easily a similar decision can be called to mind determines our view of the decision we need to make now.

There are well-documented common flaws that cause us to make less-than-optimal decisions. These are often cited as:

  1. Errors in logic
  2. False assumptions
  3. Unreliable memories
  4. Mistaking the symptom for the problem
  5. Biases

1.   Errors in Logic

A logical fallacy is an error of reasoning. When someone comes to a conclusion based on a bad piece of reasoning, he or she commits a fallacy, an error in logic. The conclusion may or may not be a good one, but the process of arriving at it is flawed and continued use of fallacious reasoning will lead to many more bad conclusions than good ones.

Decision making in a project context requires deductive logic far more often than inductive logic. Deductive arguments are supposed to be rational and logical. For a deductive argument to be “valid” it must be impossible for its premises to be true and its conclusion to be false. The classic example of a deductively valid argument is:

(1) All men are mortal. (2) Socrates is a man. Therefore: (3) Socrates is mortal.

It is simply not possible that both (1) and (2) are true and (3) is false, so this argument is deductively valid.

Errors in logic are very difficult to identify because they produce what are seemingly reasonable decisions. They are generally not the best decisions possible and often do not take into account the long-term impacts of the decision, but they seem reasonable and so are not questioned.

Errors in logic do not occur only because our thinking processes are faulty. They can also occur when we are under a lot of stress and time pressure. Hallowell (2005) says: “studies have shown that as the human brain is asked to process dizzying amounts of data, its ability to solve problems flexibly and creatively declines and the number of mistakes increases.”

The best approach to identifying that this is happening is to log every major decision, the assumptions you made when you made the decision, and the resulting long-term impacts of the decision. Comparing the actual results to what you thought they would be can help identify any logical errors in the decision process.
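
A decision log need not be elaborate. The sketch below shows one possible minimal record structure; the field names and the sample entry are hypothetical, invented for illustration.

    # A minimal decision-log sketch in Python. The fields and sample data
    # are hypothetical; an issue tracker or spreadsheet works equally well.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DecisionRecord:
        made_on: date
        decision: str
        assumptions: list[str]
        expected_outcome: str
        actual_outcome: str = ""  # filled in at a later review

    log = [
        DecisionRecord(
            made_on=date(2013, 3, 1),
            decision="Outsource module testing to an external vendor",
            assumptions=["Vendor capacity is available",
                         "Interface spec is stable"],
            expected_outcome="Testing completes two weeks early",
        )
    ]

    # At review time, compare what you expected with what actually happened.
    log[0].actual_outcome = "Testing finished on the original date"
    for entry in log:
        if entry.actual_outcome:
            print(f"{entry.made_on}: {entry.decision}")
            print(f"  assumed:  {'; '.join(entry.assumptions)}")
            print(f"  expected: {entry.expected_outcome}")
            print(f"  actual:   {entry.actual_outcome}")

Reviewing the log periodically turns vague recollections of past decisions into evidence you can actually learn from.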

2.   False Assumptions

Assumptions, as we have all been taught, are fraught with danger. If they turn out not to be true, we have a problem. Yet our brains (see the description of System 1 above) are perfectly happy to make assumptions, because assumptions make the decision process much easier. We don't have to think hard; with the right assumptions, our decision is straightforward.

The most dangerous assumptions we make are that we, and the people working with us and providing us information, are always unbiased and rational. As a rule this is not true, but only by understanding our own biases and those of the people around us can we recognize this as an assumption and compensate for it. It is almost always true that information from any one person is biased in a way that emphasizes one decision over another. The person providing the information may or may not realize that it is biased.

The most obvious examples are the “talking heads” on radio and television shows. These paid “experts” are espousing their own particular viewpoint over those of others by over-emphasizing some facts and ignoring other, inconvenient facts that don't support their pet theories. Blindly thinking that you are getting accurate and unbiased information from them is a very dangerous assumption.

3.   Unreliable Memories

What is experience? It is the accumulation of memories of past events that we witnessed or were a part of.

Memories are stored in the more primitive areas of our brain and so are less subject to rational analysis.

Medical research over the past twenty years increasingly shows how unreliable those memories are. Our memories are strongly biased toward enhancing our own self-image: we remember our actions, behaviors, and their effects very positively and, by comparison, denigrate the actions of others. The older those memories are, the more positive they become. Memory is also highly manipulable. Criminal trials for violent crimes rely less and less on eyewitness testimony because witnesses' memories of what happened can be very easily manipulated.

When our brains process data to make a decision, the information is held in short-term memory. We juggle the information to make sense of it, classifying some of it as relevant and the rest as irrelevant. Unfortunately, the data in short-term memory decays very rapidly. As Kahneman (2012) says, “Like a juggler with several balls in the air, you cannot afford to slow down; the rate at which material decays in memory forces the pace, driving you to refresh and rehearse information before it is lost. Any task that requires you to keep several ideas in mind at the same time has the same hurried character.”

Short-term memory is very limited and easily overloaded, with new material crowding out something we read (or heard) just a few minutes ago. As we think through a decision, we consider all the information held in short-term memory, and the more recently we learned a piece of information, the more likely it is to have a larger impact on our decision than information we learned earlier. Books on giving effective presentations often point out that when you present a list of options, you should place your preferred options either first or last on the list. These are the options most likely to be remembered; the options in the middle tend to run together in people's minds. This is the serial position effect, which combines the recency and primacy effects: items near the end of a list are the easiest to recall, followed by items at the beginning of the list; items in the middle are the least likely to be remembered.

An interesting side effect of this is called the Modality effect: that memory recall is higher for the last items on a list when the list items were received via speech than when they were received via writing. When we hear something, the information goes into short-term memory but is quickly replaced by what we hear next. When we're given something in writing, we can reinforce the memory by re-reading the material.

How can we compensate for the shortcomings of our short-term memory? The best approach is to keep lists. When you write things down it reinforces the material in memory and helps the transfer from short-term memory to long-term memory.

4.   Mistaking the symptom for the problem

If you solve the wrong problem you haven't accomplished anything — no matter how impressive you looked while you were making the decision. This may be the most difficult part of the decision-making process, because in real life most problems are not obvious. They're subtle and fuzzy. How many times have you and your spouse or current significant other had a discussion because one of you saw a problem in the relationship and the other didn't?

Recognizing that your sales have fallen 10% is a good start, but this is a symptom, not the problem itself. You need to determine WHY your sales have fallen. That is not easy — was it poor product design, poor marketing, high prices, or something else?

What are some symptoms that tell us there's a problem?

Deviation from planned performance is a primary indicator on a project. On a one-year project, being a week behind schedule at the six-month point is rarely a problem in itself. But if you notice a slow slippage from one reporting period to the next, that's a strong sign that something is happening that impacts the schedule. The slippage itself is not the cause of the problem; it's a symptom telling you that something is causing the schedule to slip, as the sketch below illustrates.
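
One way to make slow slippage visible is to record the schedule variance at every reporting period and flag a sustained trend rather than reacting to any single period. A minimal sketch follows; the variance numbers are invented for illustration.

    # Schedule variance in days at each reporting period (negative = behind).
    # The numbers are invented; in practice they come from status reports.
    variance_by_period = [0, -1, -2, -4, -6, -9]

    # Period-over-period change: how much worse (or better) each period was.
    deltas = [later - earlier for earlier, later
              in zip(variance_by_period, variance_by_period[1:])]

    # A single bad period is often noise; three consecutive worsening
    # periods are a symptom worth root-causing.
    if len(deltas) >= 3 and all(d < 0 for d in deltas[-3:]):
        print("Consistent slippage over the last three periods: "
              "investigate the cause.")

The three-period threshold is arbitrary; the point is to react to the trend, not to any single report.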

One common occurrence on projects is that critical stakeholders do not show up for meetings. Is this a problem? Yes, because we need their awareness and support. But if the same stakeholders consistently fail to show up, something is causing that to happen, and you should start identifying why they're not showing up. There are many similar occurrences in projects where we tend to treat the symptom and not the cause of the problem.

Deviation from past performance is another common indicator.

If the project team isn't performing as you expect based on past performance, instead of berating the team you need to identify the reasons they're not performing as you think they should. This is hardest if you're new to an organization — you don't know what past performance looks like.

Stakeholder criticism is also a problem, but more than a problem it is a symptom of a more serious one. What is causing the stakeholders to complain? You need to identify the reasons they're complaining and fix those. If senior management complains that they're not aware of the project's status, you should improve your communications with them. If your client complains about the quality of the product you're delivering, you need to identify the causes of the poor quality and fix those.

Defining problems in terms of the solutions

Personal bias shows up very strongly here. If sales are down, each person you ask will have a different perception as to why, based on his or her past experience and personal viewpoints. Trying to dig objective reality out of a variety of different perceptions is very difficult.

Sales is going to blame product design. Marketing is going to blame sales. R&D is going to blame marketing, and so on. If you state the symptom one way — say, that sales are down because of poor product design — you bias the solution.

This is especially problematic when there are technological solutions available to relieve the symptoms: someone in the IT department will tell you that your problem can be solved if only the company would buy a Linux-based server to replace the old Windows-based server. In fact, the true problem may have nothing at all to do with the technology — it may lie somewhere else entirely.

5. Biases

No person is truly objective. Each one of us has biases. Some are strong; some are weak. A rare few are conscious biases that we're aware of; the majority are unconscious, and those are far more powerful than the ones we are aware of. Searching Wikipedia for the term “cognitive biases in judgment and decision making” produces a pages-long list with multiple cross-references and links to more detailed individual pages. A partial list of the biases applicable to decision making in a project context includes:

Ambiguity effect — the tendency to avoid options for which missing information makes the probability seem “unknown.”

Anchoring or focalism — the tendency to rely too heavily, or “anchor,” on a past reference or on one trait or piece of information when making decisions.

Availability heuristic — the tendency to overestimate the likelihood of events with greater “availability” in memory, which can be influenced by how recent the memories are, or how unusual or emotionally charged they may be.

Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same; related to groupthink and herd behavior.

Belief bias — an effect whereby someone's evaluation of the logical strength of an argument is biased by the believability of its conclusion.

Bias blind spot — the tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself.

Choice-supportive bias — the tendency to remember one's choices as better than they actually were.

Cognitive inertia — the unwillingness to change existing thought patterns in the face of new circumstances.

Confirmation bias — the tendency to search for or interpret information or memories in a way that confirms one's preconceptions.

Endowment effect — the fact that people often demand much more to give up an object than they would be willing to pay to acquire it.

Focusing effect — the tendency to place too much importance on one aspect of an event; it causes errors in accurately predicting the utility of a future outcome.

Framing effect — drawing different conclusions from the same information, depending on how or by whom that information is presented.

Hindsight bias — sometimes called the “I-knew-it-all-along” effect, the tendency to see past events as having been predictable at the time they happened.

Illusion of control — the tendency to overestimate one's degree of influence over external events.

Loss aversion — “the disutility of giving up an object is greater than the utility associated with acquiring it.”

Negativity bias — the tendency to pay more attention and give more weight to negative than positive experiences or other kinds of information.

Optimism bias — the tendency to be over-optimistic, overestimating favorable and pleasing outcomes (see also wishful thinking, the valence effect, and positive outcome bias).

Overconfidence effect — excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as “99% certain” turn out to be wrong 40% of the time.

Planning fallacy — the tendency to underestimate task-completion times.

Premature termination of search for evidence — the tendency to accept the first alternative that looks like it might work.

Pro-innovation bias — the tendency to favor an invention or innovation while failing to identify its limitations and weaknesses or to address the possibility of failure.

Selective perception — the tendency to actively screen out information that we do not think is important. In one demonstration of this effect, discounting of arguments with which one disagrees (by judging them as untrue or irrelevant) was decreased by selective activation of the right prefrontal cortex.

Semmelweis reflex — the tendency to reject new evidence that contradicts what is already believed. (The term comes from Ignaz Semmelweis, who discovered that childbed fever mortality rates could be reduced tenfold if doctors washed their hands in a chlorine solution between contact with infected and non-infected patients. His hand-washing suggestions were completely rejected by his contemporaries.)

Source credibility bias — the tendency to reject a person's statement on the basis of a bias against the person, organization, or group to which the person belongs. People preferentially accept statements from others whom they like.

Status quo bias — the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification).

Subjective validation — the perception that something is true if a subject's belief demands it to be true; it also assigns perceived connections between coincidences.

False memory — a form of misattribution in which imagination is mistaken for a memory.

Illusion-of-truth effect — the tendency to identify as true statements those one has previously heard (even if one cannot consciously remember having heard them), regardless of the actual validity of the statement. In other words, a person is more likely to believe a familiar statement than an unfamiliar one.

Summary

Decision making is critical to being successful as a project manager. It is something we do on a daily basis as we juggle schedule, resources, risks, quality, and other factors. The academic world has taught for many years that decision making should be rational, looking at all possible facts and logically arriving at the best solution.

But modern research in neurology has shown us how the brain really arrives at decisions, and it is not as logical as we would like to think. The more we understand how our brains arrive at conclusions, the better our ability to make rational decisions.

Much of the foundational research on decision making was done by Daniel Kahneman, who won the Nobel Prize in economics for his research into how people make economic decisions. His work has been supplemented by later researchers and is fully applicable to other areas where decisions must be made.

There are five areas in particular where flaws in decision making occur:

  1. Errors in logic
  2. False assumptions
  3. Unreliable memories
  4. Mistaking the symptom for the problem
  5. Biases

We examined those areas and offered mitigation approaches for the first four of them. The fifth area, biases, can best be mitigated by simply understanding what our personal biases are so we can compensate for them.

References

Cain, S. (2012). Quiet: The power of introverts in a world that can't stop talking. New York: Crown Publishers.

Eagleman, D. (2011). Incognito: The secret lives of the brain. New York: Pantheon Books.

Hallowell, E. M. (2005). Overloaded circuits: Why smart people underperform. Harvard Business Review, January 2005 (reprint R0501E).

Kahneman, D. (2012). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23, 645–665.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Walton, M. E., Devlin, J. T., & Rushworth, M. F. S. (2004). Interactions between decision making and performance monitoring within prefrontal cortex. Nature Neuroscience, 7, 1259–1265.

© 2013 Frank R. Parth, PMP
Originally published as a part of 2013 PMI Global Congress Proceedings – Istanbul, Turkey
