Assessing Risk

Is It a Black Swan?

Abstract

This study discusses the decisions we make and how our own biases convince us that significant events are improbable and their impact acceptable, because we fail to recognize the severe consequences of the risk.

The study will take us from the description of a Black Swan, through a discussion of probability selection, expectation, risk tolerance, determining the odds, and how risk accumulates.

The introduction describes the recent events at the Fukushima Nuclear Plant (Ohmae, 2011) and asks the question: “Should they have recognized the risk?” Reference is made to Nassim Taleb's notable book Fooled by Randomness (Taleb, 2004), in which he introduced the concept of the Black Swan as it relates to the marketplace. The study examines probability versus expectation and how confidence affects choice; suggests a method of equating verbal descriptions with probability selection (Hamm, 1991); and discusses how the odds eventually catch up (Kerl, 2010).

The study concludes that we should not overlook a risk simply because it is considered improbable, should not assume it can't happen, and should not accept a risk whose failure would be too costly to bear.

Introduction

On 11 March 2011, an earthquake and subsequent tsunami caused extensive damage at Tokyo Electric Power Co.'s (TEPCO) Fukushima Nuclear Plant. The tsunami sent massive amounts of water into the reactor buildings of Units 1–4, soaking the emergency diesel engines and batteries, which were stored in the basement of these buildings (Ohmae, 2011, paragraph 6). Without emergency power, Reactors 1–4 were doomed.

Reactors 5 and 6 survived because their emergency power was provided by an air-cooled diesel generator sitting atop a nearby hill. Was the generator placed there as the result of a risk assessment? No; the unit was too big to fit into the basement of the reactor building, so it was placed on the hill behind the plant. The probability of a tsunami of the size that caused the damage was judged so low that planning for such an unlikely event was not thought necessary (Ohmae, 2011, paragraph 7).

We depend on probability when assessing risk, and in doing so we often convince ourselves that a risk is unlikely while failing to understand the real risk. Many believed the earthquake and tsunami at Fukushima were a natural disaster far beyond anything anyone could have imagined or planned for. But were they? Should the risk have been recognized and planned for? In hindsight, the answer is yes.

Events with severe consequences that seemingly take us by surprise are referred to as Black Swans; they occur more often than you think. To understand how that happens, we will examine the process by which we assess risk and how our own biases and understanding play a role in the decisions we make.

In the typical probabilistic risk management process, we identify the risk, determine a basis for the risk, and estimate the probability of occurrence to calculate the Expected Monetary Value (EMV). Once the EMV is determined, we develop risk plans to deal with risk that is considered substantial. In doing so, the determination of probability is essential to proper risk classification. Therefore, we should challenge the assumptions the probability selection was based upon. Does the assessor really understand probability? How was it determined? Should you consider other factors? And, finally, how confident are you?

What are Black Swans and how do the decisions we make exacerbate them?

Black Swan: A Low-Probability (Absolutely Impossible) High-Impact Event

In Europe, all anyone had ever seen were white swans; indeed, “all swans are white” had long been used as the standard example of a scientific truth. So, what was the chance of seeing a black one? Impossible to calculate, or at least it was, until 1697, when explorers found Cygnus atratus in Australia.


Nassim Taleb adopted the idea of the black swan as an event, a problem first popularized by John Stuart Mill and immortalized in this statement from David Hume: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” (Taleb, 2004, p. 100)

In order for an event to be considered “highly improbable,” it must have three primary characteristics: first, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable (Taleb, 2007, p. xviii).

Black Swan logic makes what you don't know far more relevant than what you do know. Consider that many Black Swans can be caused and exacerbated by their being unexpected (Taleb, 2007, p. xix). We just don't think it can happen.

If the risk of a 9.5 magnitude earthquake and 30-foot tsunami at the Fukushima plant had been considered probable, would the outcome have been the same? Probably not; the plant would have planned for the event, procured air-cooled emergency generators, and placed them on roofs, or at least on higher ground. In hindsight, should they have anticipated the event?

Tsunamis occur most frequently in the Pacific, particularly along the “Pacific Ring of Fire.” This zone follows the margins of the Pacific Plate and comprises the most geologically active regions of the earth. Several times a year, strong earthquakes of at least magnitude 7 on the Richter scale result in tsunamis. Japan is hit by a tsunami at least once a year (Tsunami Warning System, 2012); yet this tsunami seemed to be a surprise.

Let's look at a probabilistic risk assessment.

Probability versus Expectation

Let's assume you're conducting a risk assessment and you have two risks. The first has a cost basis of US$200,000 and you've determined it has a probability of occurrence of 75%. Calculating the Expected Monetary Value (EMV) you find:

EMV = US$200,000 × 0.75 = US$150,000

Using a probability of 75%, your expectation is that there is a good chance of losing US$150,000 on this risk.

The second risk has a cost basis of US$3 million, and you've selected a probability of occurrence of 5%. The EMV then is:

EMV = US$3 million × 0.05 = US$150,000

Having selected a probability of 5%, your expectation is that there is a small chance of losing US$150,000 on this risk.

Exhibit 1 – Risk Register

Both expectations are the same. Should one concern you more than the other? Determining the probability for a potential risk is like placing a bet. Accepting the bet has a lot to do with the expectation you have of the outcome. But what if the probability is wrong?
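To make the comparison concrete, here is a minimal sketch of the calculation in Python (the figures are the ones from the example above; the risk names are illustrative placeholders, not entries from a real register):

# EMV = cost basis x probability of occurrence, for the two example risks.
risks = [
    {"name": "Risk 1", "cost_basis": 200_000,   "probability": 0.75},
    {"name": "Risk 2", "cost_basis": 3_000_000, "probability": 0.05},
]

for risk in risks:
    emv = risk["cost_basis"] * risk["probability"]
    print(f"{risk['name']}: EMV = US${emv:,.0f}")   # both print US$150,000

Identical EMVs hide very different exposures: a modest error in the 5% estimate for the second risk changes its EMV by hundreds of thousands of dollars.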

Expectation versus Tolerance

Expectation matters, but consider your tolerance for risk. You're given the choice between two scenarios, one with a guaranteed payoff and one without. In the guaranteed scenario, you receive US$5. In the uncertain scenario, a coin is flipped to decide whether you receive US$10 or nothing. In both scenarios your expectation is US$5. What should you do? It depends on your risk tolerance.

People have different risk tolerances. A person is said to be:

  • Risk-averse — you'd rather take the US$5 than risk getting nothing.
  • Risk-neutral — you're indifferent; after all, it's only US$5 or US$10.
  • Risk-loving — flip the coin!

But what if the bet was bigger? Let's make it US$10,000. Take the US$10,000 or flip a coin for double or nothing. As a risk management professional, you quickly calculate that the expected value is the same for both outcomes. Chances are your risk tolerance will change depending on how big the bet is. Why? Confidence affects your risk tolerance.
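As a quick check, this sketch computes the expected value of the sure payoff and the coin flip at both stake sizes used above:

# Guaranteed payoff versus a fair coin flip for double or nothing:
# the expected values are identical at any stake size.
for stake in (5, 10_000):
    guaranteed = stake
    gamble_ev = 0.5 * (2 * stake) + 0.5 * 0
    print(f"US${stake:,}: sure thing = {guaranteed:,}, coin-flip EV = {gamble_ev:,.0f}")

What changes with the stake is not the arithmetic but how much a loss would hurt, which is why tolerance, not expectation, drives the decision.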

Interestingly, tolerance and confidence sometimes change depending on whose money you're gambling with. Some project managers wouldn't bet a nickel on a horse race, but easily accept a US$1 million risk for their company. Risk assessment starts to look a lot like Las Vegas—big bets, big payouts, and often, catastrophic losses!

Kwak and LaPlace (2004) recommend that a firm review its compensation policies for project managers and other employees. People weigh the possible rewards in making decisions that impact projects. By initiating a compensation structure whereby a portion of a person's salary is at risk and based on performance, a firm influences that person's likelihood of taking risks (Kwak & LaPlace, 2004, p. 695).

Getting back to one of the principal themes of this paper, let's see how well you understand probability, starting with a classic example. In the Monty Hall Problem, named after the game show host, you're a contestant on a game show.

The Monty Hall Problem

You're given the choice of three doors: Behind one door is a desirable gift (a 65” TV); behind the other two doors, something undesirable (pigs, unless, of course, you like pigs). You select a door, say No. 1, and Monty Hall, who knows what's behind the other two doors, opens one of them, say No. 2, showing you a pig. Monty then asks you, “Do you want to change your pick?” Should you?

Now you know you don't want No. 2, but you don't know what's behind door No. 1 or No. 3. Most people conclude that it doesn't matter which one they pick, assuming both have an equal probability of holding the 65” TV. They would be wrong.

If you always switch doors after Monty reveals a pig, your chance of winning is 2 in 3, or 66.7%. If you keep your original choice, your chance of winning is just 1 in 3, or 33.3%. So, how did you do?
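If the result still feels wrong, a quick simulation (a sketch added here for illustration, not part of the original game show) reproduces it:

import random

def play(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the TV is won."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that hides a pig and is not the contestant's pick.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
print("always switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.667
print("always stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.333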

Probability

Probability seems easy to define: the relative possibility that an event will occur, as expressed by the ratio of the number of actual occurrences to the total number of possible occurrences, or the relative frequency with which an event occurs or is likely to occur (Dictionary.com). Why, then, is probability so hard to determine?

I review risk assessments completed by project managers for their projects. When I do, I always make a point of asking them how they determined the probabilities of the risks identified. I often see probabilities of 10%, 25%, 50%, 60%, or 75%. Sometimes I even see 32% or 63% or 78%, which is very precise! I like to ask the project manager to give me an example of something in real life that describes 10%, 20%, 30%, and so forth. What I usually find is that there is no basis and they can't tell me how they selected the probability. It's more a gut feeling than anything else.

The general methods of assigning probability values are the classical method, the empirical method, and the subjective method. Let us briefly consider each.

In the classical method, we count the total number of possible outcomes and the number of ways the event can occur, and we compare the two to get the probability; for example, drawing an ace from a deck of cards (4/52 ≈ 8%). The method is theoretical; it works well for card games and dice but doesn't apply well to real-world risk, which limits its use in probabilistic risk determination.

In the empirical method, actual data are collected and analyzed, and probabilities are derived from the observed frequencies. It's a statistical method that is best used in quality control. For example, company X manufactured 1,000 widgets. Eighty-five percent were defect free, 10% had one defect, and 5% had two or more defects; therefore, there is a 5% probability that a widget manufactured by X has two or more defects.
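A short sketch contrasts the first two methods using the examples above (the widget counts simply restate the percentages in the text):

# Classical method: favorable outcomes divided by possible outcomes.
p_ace = 4 / 52                      # drawing an ace, roughly 8%

# Empirical method: probabilities from observed frequencies.
widgets = {"defect_free": 850, "one_defect": 100, "two_or_more": 50}
p_two_or_more = widgets["two_or_more"] / sum(widgets.values())

print(f"P(ace) = {p_ace:.1%}")                   # 7.7%
print(f"P(2+ defects) = {p_two_or_more:.0%}")    # 5%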

We mostly rely on the subjective method. That's right: sometimes we just guess. Since we sometimes rely on our gut feeling, would it be helpful to establish odds instead of a probability?

What are the Odds?

Odds are expressed as a ratio of one number to another. We read the odds as a ratio “A to B.” For example, a horse may have odds of 3:1 in favor of winning a race; this means that the horse is three times more likely to win than to lose.

We can start with odds and then derive the probability. The odds in favor of an event are A to B; this means that there were A successes for A + B trials. The probability of the event then is A/(A + B). In our example, 3/(3 + 1) = 75%.

Try this example. What are the odds you will be hit by lightning?

The odds of being hit by lightning in the United States in any one year are 1 in 1,000,000. The odds of being struck in your lifetime are 1 in 10,000 (NWS, 2012). That's an annual probability of about 1 × 10⁻⁶. If something has one-in-a-million odds of happening to any particular person in a given year, it will happen, on average, to over 6,000 people in the world each year. If the probability is so low, why do we stay indoors during a storm? We don't think it will happen, but the consequence of the risk scares us to death!
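The conversion from odds to probability, and the lightning arithmetic, look like this (the world population figure is an assumption used only for illustration):

def probability_from_odds(a: int, b: int) -> float:
    """Odds in favor of A to B imply A successes in A + B trials."""
    return a / (a + b)

print(probability_from_odds(3, 1))            # 0.75 -- the 3:1 horse above

# One-in-a-million annual odds, applied to an assumed world population
# of about 6.5 billion, still mean thousands of strikes every year.
p_strike = probability_from_odds(1, 999_999)  # 1e-06
print(round(p_strike * 6_500_000_000))        # ~6,500 people per year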

Equating Words with Probability

It might be easier to equate how we feel about risk to a probability. We often hear risk debated in day-to-day language. In a conversation with amateur risk assessors, I once asked the question: “Do you think the team has a chance to go to the championship?” The responses I received ranged from “they have a good chance” to “it's highly unlikely.” Interestingly, it's often easier to assess probability if you just express your thoughts in words. These words can then be expressed in numerical terms.

Exhibit 2 – Selection of verbal probabilities (Hamm, 1991, pp. 193–223)
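A lookup along these lines is easy to build. The pairings below are only illustrative; they echo the labels this paper attaches to numbers later on (90% “highly probable,” 36% “somewhat likely,” 26% “fairly unlikely,” 14% “seldom”) rather than Hamm's published scale, which Exhibit 2 reproduces:

# Illustrative verbal-to-numeric mapping; substitute the values from
# Exhibit 2 (Hamm, 1991) in practice -- these are not his published figures.
VERBAL_PROBABILITY = {
    "highly probable": 0.90,
    "good chance":     0.70,
    "somewhat likely": 0.36,
    "fairly unlikely": 0.26,
    "seldom":          0.14,
    "highly unlikely": 0.05,
}

def to_probability(phrase: str) -> float:
    return VERBAL_PROBABILITY[phrase.lower()]

print(to_probability("good chance"))   # 0.7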

Predictability: The Value of Data

How do we make the mistakes during risk assessment that cause us to underestimate the real consequences? Often, it is our own biases that affect the outcome of probabilistic risk assessment. We overlook inconvenient data. We take shortcuts. We just can't process the enormous amount of data. We have emotional, moral, and social influences.

Our biases affect our choices. Here's a select list of cognitive biases found on Wikipedia (List of Cognitive Biases):

Ambiguity effect — the tendency to avoid options for which missing information makes the probability seem “unknown.”

Framing effect — drawing different conclusions from the same information, depending on how or by whom that information is presented (spinning).

Hindsight bias — sometimes called the “I-knew-it-all-along” effect, the tendency to see past events as being predictable at the time those events happened (hindsight is 20/20).

Normalcy bias — the refusal to plan for, or react to, a disaster that has never happened before (head-in-the-sand).

Optimism bias — the tendency to be over-optimistic, overestimating favorable and pleasing outcomes (wishful thinking).

Wishful thinking is a bias that is prevalent in probabilistic risk assessment. How often have you heard someone say that this has never happened before, so it will not happen now? Is that a legitimate statement? It might be a fact that it hasn't happened, but can you use that fact to claim it can't happen? History teaches us that things that never happened before do happen (Taleb, 2004, p. 93).

Thinking back to Fukushima: a tsunami of 30 feet has never happened here before; therefore, it will never happen. What do you think now?

Will Your Luck Run Out?

Can you predict the chances of being injured as the result of a motor vehicle accident in any given year? According to the Fatality Analysis Reporting System (FARS, 2010), 2.24 million people were injured in the United States in 2010. There are approximately 300 million people in the United States, which gives you odds of about 1 in 134 of being injured, or a probability of 0.0075 (less than very unlikely). However, we recognize the risk so we take precautions—drive the speed limit, wear our seatbelts, and so forth.

Over time, does the risk increase? Other than environmental factors, age, equipment condition, and so forth, does the likelihood of an accident increase just because you're still driving?

In the long run, risk accumulates (Kerl, 2010, p. 1). Let's look at the math. The probability that an event occurs at least once in n independent trials is P = 1 − (1 − p)^n, where p is the probability of the event in a single trial and n is the number of times you repeat the risk.

If there is a 1 in 5 (20%) chance that the event will occur in any given year, how many years (n) will it take before the odds catch up with you? Applying the formula:

  • Year 1: 20%
  • Year 2: 1 − (1 − p)^n = 1 − (1 − 0.2)^2 = 36% (somewhat likely)
  • Year 10: 1 − (1 − 0.2)^10 = 90% (highly probable)

The moral of this story is that for every year you drive, the chances of being injured in an automobile accident increase.

  • Year 1: 0.75%
  • Year 20: 1 − (1 − p)^n = 1 − (1 − 0.0075)^20 = 14% (seldom)
  • Year 40: 1 − (1 − 0.0075)^40 = 26% (fairly unlikely)

As long as you recognize the risk and do not increase the risk factors (drinking, texting, equipment, and so forth) you should be okay, but wear your seatbelt just in case.
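A minimal sketch of the accumulation formula, reproducing the two sets of figures above:

def cumulative_risk(p: float, n: int) -> float:
    """Probability of at least one occurrence in n independent trials."""
    return 1 - (1 - p) ** n

for n in (1, 2, 10):
    print(f"p = 20%,   n = {n:2d}: {cumulative_risk(0.20, n):.0%}")    # 20%, 36%, 89%
for n in (1, 20, 40):
    print(f"p = 0.75%, n = {n:2d}: {cumulative_risk(0.0075, n):.0%}")  # 1%, 14%, 26%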

Conclusion

– If it happened once, it will happen again (rather likely)

– Consider your confidence before you accept the risk.

– If you're not confident, increase the probability of the event occurring (hedge your bet).

– Don't believe it can't happen.

And, finally, in the measurement of risk versus reward, it doesn't matter how frequently something succeeds if failure is too costly to bear (Taleb, 2004, p. 10).

References

Black Swan Theory (n.d.). In Wikipedia, the free encyclopedia. Retrieved from http://en.wikipedia.org/wiki/Black_swan_theory

FARS (2010). Fatality Analysis Reporting System: 2008. Retrieved from http://www-fars.nhtsa.dot.gov/Main/index.aspx

Hamm, R. M. (1991). Selection of verbal probabilities: A solution for some problems of verbal probability expression. Organizational Behavior and Human Decision Processes, 48(2), 193–223.

Kerl, J. (2010). Getting hit by lightning … in the long run. Retrieved from http://www.johnkerl.org/doc/iidsmallp.pdf

Kwak, Y. H., & LaPlace, K. S. (2004). Examining risk tolerance in project-driven organization. Technovation, 25(6), 691–695.

List of Cognitive Biases (n.d.). In Wikipedia, the free encyclopedia. Retrieved from http://en.wikipedia.org/wiki/List_of_cognitive_biases

NWS (2012). The National Weather Service: Lightning safety. Retrieved from http://www.lightningsafety.noaa.gov/medical.htm

Ohmae, K. (2011, April). Fukushima: Probability theory is unsafe. Special to The Japan Times. Retrieved from http://www.japantimes.co.jp/text/eo20120418a4.html

Risk Aversion (n.d.). In Wikipedia, the free encyclopedia. Retrieved from http://en.wikipedia.org/wiki/Risk_aversion

Taleb, N. N. (2004). Fooled by randomness. New York: Random House.

Taleb, N. N. (2007). The black swan: The impact of the highly improbable. New York: Random House.

Tsunami Warning System (2012). Occurrences of tsunamis in the Pacific Ocean. Retrieved from http://www.tsunami-alarm-system.com/en/phenomenon-tsunami/occurrences-pacific-ocean.html

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2012 Westinghouse Electric Company LLC, All Rights Reserved
Originally published as a part of the 2012 PMI Global Congress — Vancouver, BC
