Life is full of risk and uncertainty and, as a result, life can often be unfair. These are statements of fact—the real point of interest is how one responds to them. Some people and organizations accept the unexpected, whereas others try to ensure that the unexpected never happens. The former situation is commonly termed risk and, in its extreme form, uncertainty—a relatively passive lack of knowledge of what might occur. The latter course of action is commonly termed risk management, an attempt to control what may (or may not) occur.
This range of conditions can be expressed as a spectrum, where “P(X)” represents the probability of a state of knowledge: certainty (P(X) = 1), risk (P(X) known but less than 1), and uncertainty (P(X) unknown).
There is a substantial body of literature on this spectrum of states and on the associated topic of risk management. This paper briefly surveys the primary principles of that literature and then focuses on the concept of “risk homeostasis” and the unintended behavioral consequences that frequently result from it.
The Meaning of Risk Management and Mitigation
Prior to delving too deeply into the topic of risk management, it is appropriate to define the terms under discussion. As a point of departure, it is useful to note that risk commonly has an adverse connotation, reflecting potential hazards, but it can also be construed as an opportunity with a positive perspective. Regardless of its nuance, risk reflects a potential occurrence as opposed to an issue, which denotes an actual event. This difference between a risk and an issue is an important one, because the former involves management through planning, forecasting, and analysis, whereas the latter requires administration and execution.
The actual management of risk begins with an appreciation of its myriad sources, which can be generally summarized by the representative, if abbreviated, list as shown below:
- Business Structure (Organizational, Processes, etc.)
- Critical Activities
- Scope, Cost, and Duration of Effort
- User Experience
- Externalities (Economy, Competitive Markets, etc.)
- System Complexity
Given that risk exists in virtually every aspect of individual and organizational endeavors, both entities spend a significant amount of time in the identification, analysis, and administration of risks and their anticipated outcomes in an attempt to manage and control them. This overall effort, conducted under the aegis of “risk management,” can take on many forms, but essentially addresses the following concerns (Exhibit 1):
Exhibit 1: Components of risk management
Note that a risk’s priority is a function of the probability of occurrence times the impact of the outcome of that occurrence, or P(X) · Impact(X). Furthermore, active risk mitigation can take many forms, but all of them essentially revolve around reducing the:
- Probability of Occurrence and
- Impact of Occurrence
How is this accomplished? Boiled down to its essentials, risk management and mitigation have two primary components:
- Risk Assessment – Identification, analysis, and prioritization of risks (as shown above)
- Risk Control – Taking steps to reduce risk, providing a contingency factor, and monitoring improvements
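The prioritization step described above can be sketched in a few lines of code: score each risk as P(X) · Impact(X) and rank the register from highest to lowest. This is a minimal illustration; the risk names, probabilities, and impact scores below are hypothetical examples, not taken from the paper.

```python
# Sketch of risk assessment: score each risk as probability x impact
# and rank the register from highest to lowest priority.
# All risk names and numbers below are hypothetical examples.

def prioritize(risks):
    """Return risks sorted by priority = P(X) * Impact(X), descending."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)

register = [
    {"name": "Scope creep",        "probability": 0.6, "impact": 5},
    {"name": "Key staff turnover", "probability": 0.2, "impact": 8},
    {"name": "Vendor slippage",    "probability": 0.5, "impact": 4},
]

for risk in prioritize(register):
    priority = risk["probability"] * risk["impact"]
    print(f"{risk['name']}: {priority:.1f}")
```

Note that a high-impact risk (staff turnover, impact 8) can still rank last once its low probability is factored in, which is precisely what the P(X) · Impact(X) product is meant to capture.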
The tasks of identification, analysis, prioritization, and mitigation in the name of risk management reflect the fact that, as previously noted, individuals and organizations spend a considerable amount of time and effort in these endeavors. Why? Simply stated, individuals and organizations tend to be risk averse in general and openly apprehensive about the more severe circumstances of uncertainty in particular; however, because risk is everywhere, entities at all levels have come to develop a sense of acceptable risk, a sense of what constitutes degrees of comfort and relative safety.
Relationship to Quality
The ideas of comfort and relative safety can also be viewed from the perspective of quality. As discussed by Hyatt and Rosenberg (1996), there are at least five possible views of quality:
- Transcendental – something that can be recognized but is difficult to specifically define
- User – fitness for intended purpose
- Manufacturer – conformance to specification
- Product – adherence to product features
- Value – dependent on what the end-user is willing to pay for it
Each of these views of quality can be expanded in much greater detail, but the point to note is that each represents a type of risk if not met in its intended manner. A generalized individual/organizational point of view, though, is based on what has been identified as a sense of acceptable risk, a sense of what constitutes a degree of comfort and relative safety. Stated differently, there is a much more pragmatic view of quality (and risk)—does the product or service work well enough to serve its intended purpose and is it available when needed? Viewed from this perspective, quality measures can be seen to be a leavening mechanism for the development of a sense of relative comfort at an acceptable level of risk to the individual or organization.
The net result is that risk identification and analyses are conducted, priorities are established, and mitigation efforts (including quality testing and control) are put into place as parts of an overall risk management plan to increase reasonable and acceptable (i.e., pragmatic) individual and organizational levels of comfort and safety. However, the concept of risk homeostasis states that an individual/organization has an overall, inherent level of acceptable risk that is not easily altered; hence, when the general level of acceptable risk in one part of an organization’s operations changes, there will almost always be a corresponding and opposing change in the level of acceptable risk elsewhere.
Despite the best intentions of risk management, a change in risk behavior frequently supports the impact of unintended consequences in that actions that actually result in greater risks are often generated in other areas. In turn, a conundrum is created in terms of what to measure and control without a complete stifling of individual/organizational leadership and creativity. Recognition of this distinct possibility leads back to the concept of risk management, with the addition of the need to account for changes in risk behavior beyond the immediate arena of a risk mitigation course of action. In short, a fifth general concern of risk management needs to be recognized:
- Outcome – What unintended behavior might a mitigation strategy generate?
To achieve a true increase in overall comfort and safety, organizations need to recognize that the simple identification of risks and application of controls is insufficient. It is an understanding of the changes in behavior generated by the institution of controls that must also be clearly recognized and measured for meaningful levels of comfort and safety to be achieved while still permitting effective operations. An organization can always ensure that it is completely comfortable and safe through the application and enforcement of rigorous controls and insulation from a volatile environment, but that same organization is then quite likely to become moribund and very slow to accept the necessary change in order to remain competitive in its industry.
Homeostatic Control Mechanisms
Since actual risk behavior can run counter to planned risk management expectations, it is important to digress for a moment to consider homeostatic control mechanisms. All such mechanisms are based on three primary elements:
- Receptor – the sensing component that monitors and responds to changes in the environment or process being measured
- Control Center – the analytical engine that receives stimuli from a receptor and determines an appropriate response
- Effector – the instrument that takes the necessary action, either correcting a deviation from a pre-determined standard (negative feedback) or enhancing the deviation (positive feedback)
Note that the “necessary action” taken by an effector in response to a receptor’s stimulus to the control center can be either negative or positive in nature. A negative action tends to suppress variation and return the condition to the desired standard. The degree of response is often dependent on the magnitude of variation from the standard. Positive actions, while generally viewed in a favorable light, can, if unchecked, lead to significant deviations from a desired norm, so upper bounds are frequently established to keep them within a reasonable, or manageable, level. The underlying concern is that, if left unchecked, positive actions may cause unintended changes to the rest of a system.
- Think of the recent financial crisis in this regard where unchecked investments and returns led to the well-known outcome for the financial industry and economy.
- When viewed from an information systems point of view, the collection and processing of transactional data can actually lead to the detriment of overall performance.
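The three elements above can be sketched as a simple negative-feedback loop, in the style of a thermostat. This is a toy model under stated assumptions: the set point, gain, and function names are illustrative choices, not taken from the paper.

```python
# Toy negative-feedback loop built from the three homeostatic elements:
# a receptor senses the current value, a control center compares it to a
# set point, and an effector applies a correction proportional to the
# deviation. All names and constants are illustrative assumptions.

SET_POINT = 10.0   # the pre-determined standard
GAIN = 0.5         # fraction of the deviation corrected each cycle

def receptor(state):
    """Sense the current condition."""
    return state

def control_center(sensed):
    """Compute the deviation from the standard."""
    return SET_POINT - sensed

def effector(state, deviation):
    """Negative feedback: move the state back toward the standard."""
    return state + GAIN * deviation

state = 20.0  # start well above the standard
for _ in range(10):
    state = effector(state, control_center(receptor(state)))

print(round(state, 3))  # prints 10.01 (converging toward the set point)
```

The negative-feedback case halves the remaining deviation each cycle and settles near the standard; flipping the sign of the correction (positive feedback) would instead amplify the deviation without bound, which is why the text notes that positive actions need upper bounds.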
Balance of Risk – Risk Compensation
To repeat the core argument presented above, the theory of risk homeostasis states that an individual or organization has an inherent target level of acceptable risk that does not readily change; this level can vary widely between individuals and organizations. When the level of acceptable risk in one part of an entity’s environment or desired level of comfort and safety changes, there will often be a corresponding rise and/or drop in acceptable risk elsewhere. This is a process known as risk compensation. It is a tendency in individuals and organizations to increase risky behavior proportionately as safeguards are introduced and it is very common. It is so common, in fact, as to render predictions of how well any given aspect of risk management will work highly suspect, if not almost useless.
Consider several examples of the combined elements of risk management, risk homeostasis, and risk compensation, and the effectively unintended responses that they generate, as follows:
- A well-known study of Munich taxicab drivers was conducted while the taxicab fleet was being changed over to ABS braking systems. The drivers were tracked by observers unaware of which kind of brakes each cab had. Against the expectations of traffic experts who recommend ABS brakes as a safety advance, the drivers with ABS brakes actually had more accidents per vehicle mile than those without these brakes. The drivers braked more sharply, made tighter turns, drove at higher speeds, and made a number of other (negative) adjustments to their driving, all of which more than compensated for their supposedly safer vehicles.
- The National Football League (NFL) has recently been enforcing “safe” game activities by fining players who engage in risky behavior, such as helmet-to-helmet hits. However, the question has arisen as to whether these activities would be practiced if the players were not fully wrapped in equipment and padding meant to protect them from injury. If the players had the same limited protection that is found in such field sports as soccer, lacrosse, or rugby, would there be the same or reduced level of risky behavior?
- Motorists tend to give cyclists not wearing helmets a wider berth when passing, indicating that the act of wearing a helmet actually increases the chance that a cyclist will be hit by an automobile or incur a greater number of near misses.
- The use of birth control pills is widely known to prevent unwanted pregnancies, but the unintended consequence of their use seems to be a change in social mores in general and the increased practice of riskier sex in particular.
- In the area of project management, it is common to provide for a “management reserve” or contingency fund as a means of addressing risk. When discussed in this context, a “pad” can refer to a cushion (as suggested) or a de facto cushion provided by a contingency fund. The overall thought is that a “pad” is some source of extra allowance beyond a point estimate of scope, time, cost, quality, etc. If there is, in fact, a “pad” (or cushion or contingency fund or allowance) available to a team, there is reason to believe that it (the team) may be more willing to take risks that it would otherwise not take.
This list of examples could be extended, but these five examples bring forth the following two key aspects of risk management to consider:
- The first is that individuals and organizations may base their actions not on actual danger so much as on their perception of the risk to them. Recognition of this tendency suggests that it may be better to adjust individual/organizational sensitivity to risk instead of, or in addition to, improving the safety of the internal or external environment.
- Second, if risk homeostasis and compensation are based on perception, an argument can be made that transparent safety measures would have little, if any, effect on behavior. Thus, for example, the use of a hidden management reserve for projects would have little impact on the behavior of members of the project team.
Consideration of this overall discussion and its associated examples leads to the concept of moral risk, or moral hazard. If one party (e.g., a project team or manager) has more information about a situation than a second party (perhaps management) and takes an action that appears unwarranted to that second party, it is not uncommon for the second party to feel that the first is acting in an inappropriate manner. This kind of informational asymmetry can also be found in such diverse areas as producers versus consumers, software developers versus users, and so forth. Different levels of information, especially of a technical nature, are common and need not be a concern unless the party that possesses the greater information fails to recognize and assume responsibility for the full consequences of its actions. If this occurs, the knowledgeable party may tend to act less carefully than it otherwise would, leaving another party to hold some degree of responsibility for the consequences of those actions.
This consideration could rapidly turn into a discussion of either ethics or the effects of economic externalities, but the point to be noted here is that risk management and its resulting (often unintended) behavior can be, and often is, a highly interactive process. Even though it can be argued that moral risk/hazard is a function of the effect of events outside the control of an entity’s actions, or at least how others credit and blame that entity for those actions, the fact remains that risk management activities can have significant impacts on behavior and outcomes, whether intended or not.
This entire discussion boils down to the recognition that the impact of unintended consequences is extraordinarily applicable when talking about risk management and control through mitigation and safety innovations. Actions intended to make an individual or organization more secure may not make any improvement at all to overall safety or a perceived level of comfort—they may actually create a less safe set of processes or environment. Stated bluntly, the tendency to take compensatory risks may trump all the efforts of risk analysts and safety engineers; so, perhaps in the end, no one can save us from ourselves.
Setting aside this fatalistic orientation, it would be easy enough to place greater emphasis on establishing and enforcing a set of rigorous rules to achieve a level of safety and/or comfort. Rules are measurable, so accountability is fairly easy; however, rules only work when the system behind them is consistently clear, fair, and well enforced, a feat far easier said than done. This approach also runs the distinct risk of the unintended consequence of developing unthinking automatons. What is preferable is a model in which individuals and organizational entities use their best judgment and treat others as they would want to be treated. This attitude works well in most situations, but, of course, it hinges on people exercising good judgment and maintaining an acceptable level of risk, neither of which is guaranteed.
Risk Management Models
So, if the concept of risk homeostasis and compensatory actions is correct, is risk management an exercise in futility? As suggested above, that is a defeatist mindset. It is far more appropriate to recognize and account for risk homeostasis and compensatory actions than to either ignore or blindly accept them. Stated differently, the full consideration and open communication of risk is a crucial first step toward effective risk management and its resulting behavior (de Bakker, Boonstra, & Wortmann, 2011).
As a first step in this direction, it is necessary to develop a risk management plan that has the key components of identification, analysis, mitigation, and monitoring. This first step can be initiated through the development and use of a risk management plan template, similar to the one shown here (Exhibit 2):
Exhibit 2 – Risk Management Plan Template
Note that this template, although quite satisfactory in that it explicitly addresses each major component of risk management and clearly promotes the open and full discussion of risk, is missing one key item: it does not provide for monitoring individual or organizational behavior after actions are taken to control probability and impact, which is needed to avoid unintended consequences. The template also does not show who completes it or when; the schematic shown below addresses these omissions. Note that each of the four steps shown in this model promotes both analysis and communication through the use of different, but related and supporting, tools of analysis by varying entities.
Exhibit 3: Levels of risk management
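One way to capture the item missing from such templates, namely watching for behavioral side effects after a control is applied, is to make it an explicit field in each risk-register entry. The following is a hypothetical sketch; the field names and example values are assumptions, not taken from Exhibit 2.

```python
# Hypothetical risk-register entry that adds an explicit field for
# monitoring unintended behavioral consequences after mitigation.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    description: str
    probability: float          # P(X), between 0 and 1
    impact: float               # consequence score
    mitigation: str             # action to reduce P(X) and/or impact
    owner: str                  # who completes and monitors the entry
    # The missing item: what compensating behavior to watch for once
    # the control is in place, and how to detect it.
    unintended_consequences: list[str] = field(default_factory=list)

    @property
    def priority(self) -> float:
        return self.probability * self.impact

entry = RiskEntry(
    description="Schedule overrun on integration testing",
    probability=0.4,
    impact=7,
    mitigation="Add a contingency reserve to the test phase",
    owner="Project manager",
    unintended_consequences=[
        "Team treats the reserve as available slack (risk compensation)",
        "Monitor: compare burn rate before and after reserve is granted",
    ],
)
print(round(entry.priority, 2))  # prints 2.8
```

Making the compensation watch-list a first-class field forces the entry’s owner to state, at planning time, how the mitigation itself might change behavior, rather than discovering it after the fact.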
Relationship to Change Management
Up to this point, it has been implicitly suggested that organizational change is a natural outcome of risk management. It is therefore appropriate to explicitly consider how that change can be implemented and accepted by a target audience in its intended manner. The single most important factor in motivating a change of any type, but especially one whose necessity may not be immediately clear (as is often the case with risks before they become issues), is the provision of clear and convincing evidence of the need for the change and its associated behavior. Simply put, anyone who is expected to recognize the need for and adopt the change must be convinced that the benefits from the to-be system outweigh the individual and organizational costs of changing.
There are two major strategies used to provide motivation for the adoption of a change: informational and political. Both strategies may be used simultaneously. With an informational strategy, the goal is to convince potential adopters that the changes in process and behavior are for the better, and more specifically, for their better. This strategy works when the target adopters perceive the change as having more benefits than costs. In other words, there are clear reasons for the potential adopters to welcome the change. This is often the case when risks have become issues, providing the basis for clear (and repeated) communication to the affected parties. In general, informational campaigns are more likely to be successful if they stress the reduction or elimination of problems/issues rather than focusing on the provision of new opportunities or the institution of new processes and procedures.
The other approach used to initiate change is a political strategy. With this method, organizational power, not information, is used to motivate a change in process and associated behavior. It is often used when there are more perceived costs than benefits to the target adopters. In other words, although the change may benefit the organization, there are no immediately apparent reasons for the potential adopters to welcome the change because the risk is not readily recognized. The political strategy is usually beyond the direct control of the project team, requiring someone in the organization who holds legitimate power over the target group to influence it to adopt the change. In general, for any change that has true organizational benefits, approximately 20% to 30% of potential adopters will be ready adopters; they will recognize the benefits, quickly adopt the change, and become proponents of the new processes and system. Another 20% to 30% will be resistant adopters; they will simply refuse to accept the change and fight against it, either because the new system has more costs than benefits for them personally or because they place such a high cost on the transition process itself that no amount of benefits from the new system can outweigh the change costs. The remaining 40% to 60% can be termed reluctant adopters; they can be viewed as being the swing vote for the success of a change in process and intended behavior.
Typically, successful adoption of a change for a perceived need has signaled a confidence in its potential to alleviate a particular problem or to make a job easier or more efficient, which is the case with an “issue.” In contrast, it is much harder for the perception of a possible condition (a “risk”) to bring about new organizational and functional conditions. In essence, the adoption and diffusion of a change within an organization do not guarantee its successful integration into the organizational fabric for its continued use and reduction in risk through expected behavior. In addition to the strong, stable advocacy needed to ensure the conditions necessary for change adoption and diffusion, training in its technical aspects and application to real and readily perceived needs are crucial to its integration beyond the innovators and early adopters. Time for experimentation and the development of applications is essential for the institution of intended behavior through successful peer users to lead its integration into the organizational fabric. If the change is perceived as being difficult to learn and/or too time consuming to prepare and use, or is in some other way perceived as threatening, it probably will not be used. No amount of administrative force is likely to be effective in reversing a negative trend, giving rise, again, to unintended consequences of behavior and outcomes.
A Special Excursion into the Unknown – Black Swans
While the foregoing risk models are a step in the right direction, they also represent a potential hazard if their users assume that they have accounted for all known risks or that the process is sufficient to counter all anticipated risks. This reflects the dangerous sin of hubris and has been described by the phrase “black swan,” taken from the title of the 2007 book by Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable.
This new term has become popular among managers, including some risk specialists, when they talk about risk. As discussed by Hillson (2010), however, the way most people use the term differs from Taleb’s original definition. In popular conversation, a black swan event is something with an extremely low likelihood of occurrence and an extremely high potential effect: the thing one thinks will never happen but that, if it did, would affect the individual or organization in a really big way. In contrast, Taleb states in his book that black swans have three characteristics: they are unexpected and unpredictable outliers, they have extreme impacts, and they appear obvious after they have happened.
The term comes from the idea that, in the western world a few centuries ago, it was a known fact that all swans were white; by definition, any similar bird of a different color could not be a swan. Then explorers travelled to Australia in 1697 and discovered true swans that were black, and the known fact had to be modified in light of the new evidence. In today’s usage, a black swan is an event that changes the rules and creates a new paradigm. Examples include the fall of the Berlin Wall, the 11 September 2001 terrorist attacks in the United States, the rise of Google, the recent financial and housing crises, and the recent Japanese tsunami. However, events or circumstances with extremely low probability and extremely high impact are, in fact, still just risks, and they can and should be tackled through the normal risk process. Given this perspective, it can be argued that there is no useful reason to give them the special name of black swans.
Another popular use for the black swan term is to describe “unknown unknowns,” which are things that we do not know, but where we are unaware of our ignorance. This is almost correct, but not quite; in fact, Hillson shows that “unknown unknowns” can be divided into two types, one of which is a true black swan and the other is not:
- The first group consists of “unknown-but-knowable unknowns.” There are some uncertainties that we currently do not know, but which we could find out about. This is where the risk process can help through creative risk identification, exploration, and education. The aim is to expose those unknowns that could be known so that we can deal with them effectively using a standard risk management approach. They are not black swans because we could know about them if our predictive or discovery processes were better.
- Second, there are “unknown-but-unknowable unknowns.” These are much more difficult to address, because, by definition, they can never be discovered unless and until they happen. These are the true black swans, which could not be predicted with even the best risk process. Risk management cannot help here because it only targets uncertainties that can be seen in advance and for which we can prepare or address proactively.
If risk management cannot be used to address black swans in advance, is there anything else that can be done? At the strategic level, business continuity is a form of risk management that can help deal with “unknown-but-unknowable unknowns.” This approach identifies areas of vulnerability and ensures that resilience and flexibility are built into the organizational structure so it can cope with the impact of the unexpected, whatever its source. Business continuity also reflects risk analysis in that it looks for early warning indicators or trigger events to warn that something is different from the norm. Finally, it uses environmental scanning to help discover potential black swans before they strike. It is possible to apply this at other levels in the organization, including projects and programs or at an operational level, creating an “enterprise-wide continuity” approach.
In summary, the black swan is a valuable concept that can warn the project team or overall organization to expect the unexpected. The only certainty is uncertainty, and we know that we will continue to be surprised in all areas of life, both personal and professional. We should be careful to use the term properly and not dilute it through misuse or laziness. If we mistakenly think that all risks with very low probability and very high impact are black swans, then we are likely to remain blind to the existence of true black swans. That, in turn, will leave us unaware of how vulnerable we are to genuinely unknowable unknowns, and the real purpose of a risk management plan will go unfulfilled.
To re-state the beginning of this paper, life is full of risk and uncertainty, and, as a result, life is often unfair. These are statements of fact—the real point of interest is how one responds to them. The concepts of risk homeostasis and compensation, often leading to unintended consequences from efforts to make life and business operations safer and more comfortable, may appear to denigrate the efforts of risk management and the various ways used to mitigate risk. However, to paraphrase the old saying, it is better to light a single candle of risk assessment and control than to curse the darkness of uncertainty and unknown risk behavior. To do otherwise is to completely relinquish control to the vagaries of fate—a situation of great angst and potential peril to the individual or organization.
de Bakker, K., Boonstra, A., & Wortmann, H. (2011). Risk management affecting IS/IT project success through communicative action. Project Management Journal, 25(4), 75–89.
Gale, S.F. (2011). Controlling chaos. PM Network, 25(4), 27–31.
Hillson, D. (2010). When are black swans white? PM World Today. Retrieved from http://www.pmworldtoday.net/tips/2010/dec/When-are-Black-Swans-White.html
Homeostasis. (2011). In Wikipedia, the free encyclopedia. Retrieved from http://en.wikipedia.org/wiki/Homeostasis
Hyatt, L., & Rosenberg, L. (1996). A software quality model and metrics for identifying project risks and assessing software quality. Retrieved from http://satc.gsfc.nasa.gov/support/STC_APR96/quality/stc_qual.html
Keech, D. (2010). Risk homeostasis. eZine Articles. Retrieved from http://ezinearticles.com/?Junior-Risk-Homeostasis&id=3544528
Luftman, J.N., Bullen, C.V., Liao, D., Nash, E., & Neumann, C. (2004). Managing the information technology resource: Leadership in the information age. Upper Saddle River, NJ: Pearson Education.
Moore, G.A. (1991). Crossing the chasm: Marketing and selling technology products to mainstream customers. New York: Harper Business.
Risk homeostasis. (2010). In Wikipedia, the free encyclopedia. Retrieved from http://www.worldlingo.com/ma/enwiki/en/Risk_Homeostasis
Risk management. (2011). In Wikipedia, the free encyclopedia. Retrieved from http://en.wikipedia.org/wiki/Risk_Management
RMA Journal (2010). Is your risk system too good? Retrieved from http://findarticles.com/p/articles/mi_m0ITW/is_2_92/ai_n45539296
Taylor, A. (2011). Arts Journal. Retrieved from http://www.artsjournal.com/artfulmanager/main/risk-homeostasis.php
Wood, C. (2011). The balance of risk. Retrieved from http://www.damninteresting.com/the-balance-of-risk
©2011, Michael E. Thorn
Originally published as a part of 2011 PMI Global Congress Proceedings – Dallas/Ft. Worth, TX