RISK ASSESSMENT PROVIDES an estimate of the severity of a risk. Without this assessment, a project manager can waste time on risks that may be of little importance to the project, or, worse, fail to give sufficient attention to significant risks.
While detailed quantitative analysis of risks is always preferred, in many cases this is neither practical nor possible. Qualitative assessment of risks, however, can always be performed, and will usually take far less time and resources than quantitative analysis.
Basic Concepts
The severity of any risk can be defined in terms of two quantities: impact, the effect that a risk will have on the project if it occurs; and likelihood, the extent to which the risk effects are likely to occur. To these we must add a third quantity defining the extent to which we trust our information: precision, the degree to which the risk is currently known and understood. This article looks at how to define and measure these quantities, and how to use them for practical risk management purposes.
Impact
The impact of a risk (sometimes called its consequence) is defined in terms of a discrete scale, such as 1=very low, 2=low, 3=medium, 4=high, and 5=very high.
There is no particular significance to the use of a five-level scale, and other scales can be used. However, most people feel that five levels are a reasonable compromise between too little discrimination, such as a simple three-level scale (low/medium/high), and too much, such as a 10-level scale.
Exhibit 1. The four possible effects of any risk can be rated on a five-level scale.
To give this scale meaning, consider the four effects that any risk can have: cost (project costs can increase); schedule (project deliverables can be late); functionality (the level of performance or capability provided by the project deliverables can be reduced); and quality (the level of excellence of the deliverables can be reduced). Quality in this sense includes factors such as safety, reliability, and environmental impact.
We can rate all four effects on a five-level scale (assuming we are using a five-level scale for impact). Examples of generic impact scales are shown in Exhibit 1.
The scales in Exhibit 1 are merely for illustration and do not imply, for example, that a 5–10 percent cost increase should always be considered a medium risk. Project-specific scales, in particular for cost and schedule effects, should be set wherever possible. A project-specific cost scale, for example, would give cost effects in actual monetary units.
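As an illustration of how a project-specific scale might be encoded, the Python sketch below maps a cost increase to an impact level; apart from the 5–10 percent medium band mentioned above, the threshold values are invented for the example and would in practice be set project by project.

```python
# Illustrative sketch only: apart from the 5-10 percent "medium" band,
# the threshold values are hypothetical and would be set for each project,
# ideally in actual monetary units rather than percentages.
def cost_impact_level(cost_increase_pct: float) -> int:
    """Map a cost increase (percent of budget) to an impact level from 1 to 5."""
    if cost_increase_pct < 1:
        return 1   # very low
    if cost_increase_pct < 5:
        return 2   # low
    if cost_increase_pct < 10:
        return 3   # medium (the 5-10 percent band)
    if cost_increase_pct < 20:
        return 4   # high
    return 5       # very high

print(cost_impact_level(7.5))  # 3
```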
Overall Impact Rating. The overall impact rating of a risk is the greatest of these four risk effects. The reason we use the greatest rather than the average can be seen from a simple example. Consider a risk whose effect in a cost-sensitive project would be to increase costs by 100 percent but that would otherwise have no effects. The four ratings for cost, schedule, functionality, and quality would therefore be 5, 1, 1, 1, respectively. If we were to use the average of these, we would arrive at an overall impact level of 2, implying that a 100 percent cost increase is only a low risk. This is patently absurd.
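A minimal sketch of the rule, with the 100 percent cost-increase example worked through (the function and argument names are illustrative):

```python
def overall_impact(cost: int, schedule: int, functionality: int, quality: int) -> int:
    """The overall impact rating is the greatest of the four effect ratings (each 1-5)."""
    return max(cost, schedule, functionality, quality)

# The 100 percent cost-increase example: effect ratings of 5, 1, 1, 1
# give an overall impact of 5, not the misleading average of 2.
print(overall_impact(cost=5, schedule=1, functionality=1, quality=1))  # 5
```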
Exhibit 2. The likelihood of a risk occurring is determined by assessing the levels of both probability of occurrence and intervention difficulty and then taking the lower of the two ratings.
By using the greatest of the four risk effects to determine impact, we can ignore effects that we know in advance will be insignificant. Suppose that schedule is unimportant compared to cost and quality. In such a case, we can safely ignore schedule effects when assessing risk impacts.
Likelihood
Likelihood is the extent to which the risk effects are likely to occur. As with impact, we generally define likelihood on a five-level scale, such as 1=very unlikely, 2=low likelihood, 3=likely, 4=highly likely, and 5=near certain.
Before we can arrive at an acceptable method of measuring likelihood based on this scale, there are some difficulties we must overcome.
First, we must decide under what conditions to measure likelihood. The likelihood of an event often depends not only on blind statistical chance but also on human intervention. A common way of expressing this would be “I know this is highly likely to happen if I do nothing, but then I don't intend to do nothing.”
Exhibit 3. Once impact and likelihood have been determined, the precision of a risk is rated as low, medium, or high.
For this reason, we divide likelihood into two components: probability of occurrence (the probability that the risk events will occur if we take no action), and intervention difficulty (the level of difficulty that we would experience in preventing the risk event from occurring). Note that intervention difficulty does not define response actions, but merely indicates the availability of such actions.
To see how these two components work together, consider the risk involved in standing on a railway track. If you stand there long enough you will be hit by a train. The probability of that occurrence depends on how often trains use the track. However, the likelihood of being hit also depends on how easy it is to get off the track. If the track is on flat, open land, so that you can easily step off when a train approaches (low intervention difficulty), then the likelihood of being hit is effectively independent of the frequency of trains. But if you are in the middle of a long tunnel, then you may not be able to get off the track in time (high intervention difficulty), so the likelihood of being hit depends in this case on how often trains use the track.
This leads to a simple relationship among likelihood, probability of occurrence, and intervention difficulty: likelihood is the lower of the two ratings for probability of occurrence and intervention difficulty.
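Expressed as a sketch, using the railway-track example above as the test case:

```python
def likelihood(probability_of_occurrence: int, intervention_difficulty: int) -> int:
    """Likelihood is the lower of the two ratings, each on a 1-5 scale."""
    return min(probability_of_occurrence, intervention_difficulty)

# Railway-track example: trains are frequent (probability 5), but on flat,
# open land it is easy to step off (intervention difficulty 1), so the
# likelihood of being hit is low.
print(likelihood(probability_of_occurrence=5, intervention_difficulty=1))  # 1
```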
Likelihood Ratings. Both probability of occurrence and intervention difficulty are measured on five-level scales. A generic scale for intervention difficulty is shown in Exhibit 2. However, this brings us to the second difficulty: how to define probability of occurrence.
It is tempting to give numerical values to probability. (“What's the probability that this supplier will deliver late?” “Oh, about 30 percent, I'd say.”) Nevertheless, one must always ask where such numbers come from. An estimate of 30 percent in the case cited would be acceptable if the supplier had made 10 previous deliveries, three of which were late. However, if this is only the first or second time that you have used this supplier, you are in effect trying to calculate statistics based on a single sample.
For this reason, a non-numeric probability scale is generally recommended, such as the one in Exhibit 2. (Of course, if you do have reliable numeric data, use it.)
Unique Situations. One advantage of this method of expressing likelihood is that it can deal with unique situations. If a particular situation is indeed unique, it is impossible to make any estimate of its probability of occurrence because, by definition, it has never happened before.
In such cases, we first decide if the situation could reasonably occur. If the answer is yes, we assume that it will happen (Level 5 probability of occurrence), in which case the lower-of rule means that likelihood is determined by intervention difficulty.
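The same rule can be sketched in code for unique situations; the handling of events that could not reasonably occur is an assumption, since the article does not address that case:

```python
def unique_situation_likelihood(could_reasonably_occur: bool,
                                intervention_difficulty: int) -> int:
    """If a unique situation could reasonably occur, assume a Level 5
    probability of occurrence; the lower-of rule then makes likelihood
    equal to the intervention difficulty."""
    if not could_reasonably_occur:
        # Assumption: treat events that could not reasonably occur as very unlikely.
        return 1
    return min(5, intervention_difficulty)
```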
Precision
Precision defines the extent of one's current knowledge and understanding of a risk. More specifically, it defines the level of confidence placed in the estimates of impact and likelihood.
While precision does not tell us anything in itself about the severity of the risk, it does tell us how much we can trust our assessment of that severity. For example, consider the risk of an archeological discovery while excavating a construction site. If this were to happen to an American company working on a site in New York, you know that this would have a very high impact, since there would be a major delay while the discovery was examined. It is possible to estimate the probability of this occurrence by examining the frequency of similar events in the New York area. Intervention difficulty can be accurately assessed as very high, since there is little that the construction company can do to speed matters. We can therefore characterize our assessment of this risk as being of high precision.
Exhibit 4. A measurement of a risk's severity level can be estimated by using a risk matrix to combine impact and likelihood.
Exhibit 5. An alternate method to define precision is based on probable errors in impact and likelihood estimates.
Now consider the same company working on a site in a less developed part of the world with which the company is not familiar. In the event of an archeological discovery while excavating, do you know what the reaction of the responsible authorities would be? Who are the responsible authorities in such a case anyway? Has the site you are excavating been used for human habitation for the last 5,000 years, or has it only been in use for the last 50 years? In this case, your estimates of impact and likelihood, and hence of risk severity, will be much less precise.
Precision is given a rating of low, medium, or high, as shown in Exhibit 3.
A low precision rating serves as a warning that a risk may be more serious than currently estimated. Of course, the converse may also be true: the risk may be less serious than estimated. Nevertheless, one should always be aware of the limits of one's knowledge.
Exhibit 6. Rankings use an inverse numerical order; that is, a numerically low ranking indicates a high severity.
The Risk Matrix
Impact and likelihood are combined within the risk matrix to provide a measurement of risk severity.
A risk matrix consists of a 5 × 5 array of elements, as shown in Exhibit 4, each element representing a different set of impact and likelihood values. (Of course, if you decide to use six levels for impact and likelihood, you would use a 6 × 6 matrix, and so on.)
The risk matrix is normally divided into red, yellow, and green zones, representing major, moderate, and minor risks, respectively. The red zone is centered on the top right corner of the risk matrix (high impact and high likelihood), while the green zone is centered on the bottom left corner (low impact and low likelihood).
Many different algorithms, some more comprehensible than others, have been used to delineate the risk matrix zones. However, the zone boundaries become obvious once one understands that they represent the relative importance placed on impact and likelihood in determining risk severity.
Conventionally, impact is considered more important than likelihood (a 10 percent chance of losing $1,000,000 is usually considered a more severe risk than a 90 percent chance of losing $1,000). The matrix in Exhibit 4 is based on the relationship of Severity = Likelihood + 2 × Impact, which expresses the somewhat arbitrary belief that impact is twice as important as likelihood in determining severity. This relationship assigns values to each element of the matrix, ranging from 3 in the lower left corner to 15 in the upper right corner. Risk zones are then defined as red=12–15, yellow=8–11, and green=less than 8.
Other relationships between impact and likelihood, which can be expressed as Severity = Likelihood + N × Impact, where N is a numerical value, will result in different zone boundaries. Organizations are free to choose whatever zone boundaries they wish, as long as the same boundaries are used consistently throughout an organization.
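Sketched in code, using the N = 2 relationship and the zone boundaries given above (boundaries for other values of N would have to be chosen by the organization):

```python
def severity(likelihood: int, impact: int, n: int = 2) -> int:
    """Severity = Likelihood + N x Impact; N = 2 weights impact twice as heavily."""
    return likelihood + n * impact

def zone(likelihood: int, impact: int) -> str:
    """Zone boundaries for the N = 2 matrix: red = 12-15, yellow = 8-11,
    green = less than 8. Other values of N need their own boundaries."""
    s = severity(likelihood, impact, n=2)
    if s >= 12:
        return "red"
    if s >= 8:
        return "yellow"
    return "green"

# A risk with impact 5 and likelihood 2 scores 2 + 2 * 5 = 12: a red-zone risk.
print(zone(likelihood=2, impact=5))  # red
```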
Effect of Precision. When we say, for example, that a risk is located at impact 2, likelihood 4 on the risk matrix—that is, it has impact Level 2 and likelihood Level 4—we seem to imply that we know exactly the impact and likelihood values. However, this is only true for risks with precision levels rated as high. At lower precision levels, our knowledge of impact and likelihood is less exact.
This leads to an alternative method of defining precision, based on the probable errors in our estimates of impact and likelihood, as shown in Exhibit 5.
Risk Ranking. If a project has identified a large number of risks, it can often be difficult to decide which should be dealt with first. In such cases, a risk ranking system based on position in the matrix can be used to determine the order of priority. In effect, this introduces a finer risk classification system than the simple three-level red/yellow/green method.
The same algorithm as that used to divide the matrix into zones defines risk rankings: each square is given a numerical value of Likelihood + N × Impact, and the squares with the highest values become the highest ranks. Any squares with equal value are ranked in order of their impact levels. The risk rankings for N = 2 are shown in Exhibit 6.
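The ranking algorithm can be sketched as follows; the resulting order should reproduce the kind of ranking shown in Exhibit 6, with rank 1 in the top right corner of the matrix:

```python
def matrix_rankings(n: int = 2, levels: int = 5) -> dict:
    """Rank every (impact, likelihood) square of the risk matrix.

    Squares are ordered by Likelihood + N x Impact, with ties broken by
    impact level; rank 1 is the most severe square, since rankings use
    an inverse numerical order."""
    squares = [(impact, likelihood)
               for impact in range(1, levels + 1)
               for likelihood in range(1, levels + 1)]
    squares.sort(key=lambda sq: (sq[1] + n * sq[0], sq[0]), reverse=True)
    return {sq: rank for rank, sq in enumerate(squares, start=1)}

rankings = matrix_rankings()
print(rankings[(5, 5)])  # 1: the top right corner is the most severe square
```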
Management Priority Order. Deciding which risks to deal with on a priority basis is a continuing problem. The common method of selecting only the 10 highest-ranked risks for priority consideration has its dangers. If there are more than 10 risks in your red zone, then you may be neglecting some very severe risks. Alternatively, if there are only one or two risks in the red zone and all the rest are in the green zone, the top-10 method means that you will be wasting management effort.
A risk matrix with rankings, as shown in Exhibit 6, is an effective tool for deciding which risks to deal with first; one simply deals with the highest-ranked risks, down to an agreed level. For example, one might decide to deal on a priority basis with risks down to rank 13, and with everything else on a lower priority. Whether this means you are dealing simultaneously with five or 15 risks, you will know you are neither wasting effort nor neglecting important areas.
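Continuing the ranking sketch above, a hypothetical set of risks (names and ratings invented for illustration) can be screened against an agreed cutoff of rank 13:

```python
# Hypothetical, already-assessed risks: name -> (impact, likelihood).
project_risks = {
    "supplier delivers late": (4, 3),
    "key engineer leaves": (3, 2),
    "requirements change": (5, 4),
}

PRIORITY_CUTOFF = 13  # deal on a priority basis with ranks 1 through 13

# Reuses the rankings dictionary from the sketch above.
priority = [name for name, (impact, lik) in project_risks.items()
            if rankings[(impact, lik)] <= PRIORITY_CUTOFF]
print(priority)  # ['supplier delivers late', 'requirements change']
```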
RISK ASSESSMENT BASED on the concepts of impact, likelihood, and precision is consistent, comprehensive, and surprisingly easy. Impact and likelihood between them define risk severity, while precision defines the level of confidence in the severity estimate. Assessing a risk is simply a matter of setting scales and comparing what you know about the risk with those scales. Precision is estimated by considering the level of confidence that you feel in the resultant estimates. ■