Prioritizing project risks using AHP

Introduction

The Analytic Hierarchy Process (AHP) (Saaty, 1977; 1981; 1984; 1990; 1991; 2003) is a way to handle quantifiable and/or intangible criteria in the decision-making process. It is a multiobjective, multicriteria decision-making approach based on the idea of pairwise comparisons of alternatives with respect to a criterion (e.g., which alternative, A or B, is preferred and by how much) or with respect to a goal (e.g., which is more important, A or B, and by how much). By employing pairwise comparisons, the relative importance of one criterion over another can be easily assessed. This concept was pioneered by Thomas Saaty in the late 1970s.

By employing a hierarchical structure, AHP helps one make decisions when a complex set of goals and criteria is involved (Saaty, 1991). Once the hierarchy is established, the criteria within it are evaluated through paired comparisons. The elements are compared in relative terms as to their importance or contribution to a given criterion that occupies the level immediately above the elements being compared. The final weights of the elements at the bottom level of the hierarchy are obtained by adding all the contributions of the elements in a level with respect to all the elements in the level above. This is known as the principle of hierarchic composition (Saaty, 1990; 1991; Vargas, 1990).

There are four basic axioms that drive this theory (Saaty, 1993; Vargas, 1990):

  1. Reciprocal Comparison: The decision maker must be able to make comparisons and state the strength of his preferences. The intensity of these preferences must satisfy the reciprocal condition:

    If A is x times more preferred than B, then B is 1/x times as preferred as A.

  2. Homogeneity: The preferences are represented by means of a bounded scale (such as the 1 to 9 scale shown in Exhibit 1).
  3. Independence: When expressing preferences, criteria are assumed to be independent of the properties of the alternatives.
  4. Expectations: For the purpose of making a decision, the hierarchic structure is assumed to be complete.

The AHP process is composed of four discrete steps (Zahedi, 1986):

  1. Set up the decision hierarchy
  2. Collect the pairwise data
  3. Calculate the eigenvalue to estimate the relative weights
  4. Aggregate the relative weights of the decision elements

The AHP Matrix

The pairwise comparison is accomplished using a square matrix. The rows (and columns) represent the different criteria (or elements) being compared, and the entry in each cell represents the relative judgment of the two items being compared. Based on Axiom 1, the entries below the diagonal are reciprocal values. Hence, the resulting matrix is a positive, reciprocal matrix A of the following form:

A =
\begin{pmatrix}
1 & a_{12} & \cdots & a_{1n} \\
1/a_{12} & 1 & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
1/a_{1n} & 1/a_{2n} & \cdots & 1
\end{pmatrix}

In this matrix A, a_ij represents the importance (or weight) of criterion i over criterion j. Clearly, the diagonal elements are all equal to 1; they represent the comparison of a criterion to itself. A perfectly consistent matrix A is a rank 1 matrix with some special properties, one being that there is only one non-zero eigenvalue and its value is equal to the number of rows (or columns). This is important because the normalized eigenvector associated with the maximum eigenvalue is used to determine the relative weights of the different criteria. This use of the eigenvector in the establishment of composite ratings (step 4 of the process) is the most controversial aspect of the AHP approach.
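To illustrate how such a matrix is processed, the sketch below (not part of the original paper) computes the normalized principal eigenvector of a small reciprocal matrix with numpy; the judgment values are hypothetical.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix; A[i, j] is the judged
# importance of criterion i over criterion j, and A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

# Principal (largest) eigenvalue and its eigenvector.
eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
principal = eigenvectors[:, k].real

# Normalize so the weights sum to 1; these are the relative criterion weights.
weights = principal / principal.sum()
print("lambda_max =", eigenvalues[k].real)
print("weights    =", weights)
```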

Consistency

The controversy involves the issue of consistency. The results of the AHP depend on how close the calculated eigenvalue is to the "perfect" value (for a consistent n x n matrix, the maximum eigenvalue equals n), and this closeness is affected by the degree to which the matrix A is consistent (or transitive). Perfect consistency results in a matrix of rank 1; inconsistency increases the rank of A and, therefore, the extent to which the calculated eigenvalue differs from the "perfect" value. If the matrix whose entries are a_ij is consistent, then, since a_ij represents the importance of criterion i over criterion j and a_jk represents the importance of criterion j over criterion k, the product a_ij × a_jk must equal a_ik, the importance of criterion i over criterion k.

In the real world, these weights are estimates, not known values, and therefore the ratios themselves are only estimates. Because of this, Saaty argues that perfect consistency or transitivity need not hold, and he proposes a measure of the degree to which it is violated: a consistency ratio (C.R.) that provides a range of acceptable consistency values. Forman (Forman, 1993) comments that the goal of this process should be an accurate decision, not low consistency. He notes that "it is most possible for one to be entirely consistent and, also, entirely wrong".
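For reference, the sketch below shows how a consistency ratio of this kind is commonly computed from the maximum eigenvalue; the random-index values are the widely quoted ones from the general AHP literature and are not taken from this paper.

```python
import numpy as np

# Commonly quoted random-index (RI) values for matrix sizes 3..9
# (general AHP literature, not this paper).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A: np.ndarray) -> float:
    """Saaty-style consistency ratio for a reciprocal matrix A (n >= 3)."""
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)
    ci = (lambda_max - n) / (n - 1)   # consistency index
    return ci / RI[n]                 # C.R.; <= 0.10 is the usual threshold

# Example with a hypothetical matrix.
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
print(round(consistency_ratio(A), 3))
```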

Braunschweig (Braunschweig, 2000) discusses Chile’s use of AHP to prioritize seven biotechnology research projects using economic, social, environmental and institutional criteria. To test the sensitivity of the ranking, scenarios using different criteria weights were established. The results showed that the AHP provided a fairly stable rank order when moderate changes were made in the criteria weights.

Scale

The number of elements being compared, that is, the size of the matrix, impacts consistency. Based on Miller's (Miller, 1956) work, which studied the number of things that people can reliably distinguish from one another, Saaty suggests that the number of items being compared should not exceed 9 (7±2) elements. Thus, he proposes hierarchical decomposition, in which the elements are grouped in classes of about 7 elements each, limiting the number of comparisons required (n(n-1)/2 judgments for n elements) and minimizing the number of errors (inconsistent judgments) that could arise.

Exhibit 1: Saaty’s Scale for Comparison

Jensen (Jensen, 1984) uses the 1 to 9 scale of values to highlight an inherent consistency problem. He shows that if a_ik = a_kj = 9, then consistency requires a_ij = a_ik × a_kj = 81, which is not a permissible value (given that the maximum value allowed by Exhibit 1 is 9). Hence, by definition, this entry must be inconsistent.

Uses of AHP

Vargas (Vargas, 1990) notes that the AHP has found uses in a wide range of problem areas, including economics, management, finance, marketing, forecasting, resource allocation, war games, arms control and political candidacy. He attributes its success to its simplicity and robustness. Zahedi (Zahedi, 1986) performs a similar study and notes that these applications share some common features: they are all decision problems, they involve rating alternatives for selection, evaluation or prediction, and all involve qualitative aspects.

Rank Reversal

Another controversial area is that of rank reversal. If the addition of a new alternative causes the existing alternatives to be re-ordered, rank reversal is said to occur. Belton and Gear (Belton & Gear, 1982) show a simple example in which the addition of a new element results in a ranking that is inconsistent with the ranking established without this additional element. Saaty argues that it is the judgment magnitudes for the new alternatives, and the subsequent priorities, that can result in rank reversals. Additionally, he points out that rank reversal can be a good thing in that it shows how a new and important attribute can affect preferences. He cites two examples to support this assertion: one is a production example in which the addition of a new product changes the production quantities (and subsequent order) of the original products; the other involves investment opportunities in which the addition of a new opportunity reverses the way the original opportunities are viewed. Saaty provides the following observations:

  • If a new alternative is very strongly dominated by the least preferred alternative for every criterion, then it is not likely to affect rank order.
  • If a new alternative falls between two specific alternatives for each criterion, then its final rank will be between these two alternatives but rank may be reversed elsewhere.
  • If a new alternative dominates the most preferred alternative for every criterion then, in general, it will not affect rank order.

AHP Example

An example of this approach is shown in this section. Suppose that you are in the market for a new car. You have narrowed your choices down to three, but you cannot decide which car to buy. Suppose your criteria are comfort, price and gas mileage. In this example, we evaluate our goal (buy a car) in terms of our 3 criteria; this completes step 1 of the process. We will have 3 individual matrices, one for each criterion; this is step 2. The AHP matrix for comfort is shown in Exhibit 2. From this exhibit, we see that Car 3 is much, much more comfortable than Car 1. This is the same as saying that Car 1 is much, much less comfortable than Car 3.

Exhibit 2: Comfort Matrix

The eigenvalue can be calculated by finding the roots of the characteristic equation det(A - λI) = 0. Using this eigenvalue, the principal eigenvector can be found by solving (A - λI)x = 0. A very close approximation to the principal eigenvector can be calculated by normalizing each column of A and taking the average of the resultant rows. An example is shown in Exhibit 3. The differences in the magnitudes of the eigenvector weights show that Car 3 (.35) and Car 2 (.346) are similar in terms of comfort and that Car 1 (.305) is the least preferable.
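A short sketch of this column-normalization approximation follows; because Exhibit 2 is not reproduced here, the comfort matrix entries below are placeholders, so the printed weights will not match the values quoted above.

```python
import numpy as np

# Placeholder comfort matrix for Cars 1-3 (the real entries are in Exhibit 2).
A = np.array([
    [1.0, 1/2.0, 1/7.0],
    [2.0, 1.0,   1/2.0],
    [7.0, 2.0,   1.0  ],
])

# Approximate the principal eigenvector: normalize each column to sum to 1,
# then average across each row.
column_sums = A.sum(axis=0)
normalized = A / column_sums
approx_weights = normalized.mean(axis=1)
print(approx_weights)   # approximate relative comfort weights, summing to 1
```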

Exhibit 3: Example Calculation of Comfort Eigenvector

Exhibit 4: Matrices for Mileage and Price

Exhibit 5: Decision Priorities

Example comparisons of the different cars with respect to the other 2 criteria, mileage and price, are shown in Exhibit 4. The criteria themselves also need to be evaluated in terms of preferability or importance (Exhibit 5) to yield a set of decision priorities. Exhibits 4 and 5 also include the respective eigenvector weights in the last column. Exhibit 5 shows that price is our most important criterion and comfort is the least important. A summary matrix comprised of the eigenvectors from Exhibits 3 and 4 is created as shown in Exhibit 6. This completes step 3.

Exhibit 6: Summary

The sum of the products of the values in each row of Exhibit 6 and the corresponding decision priorities from Exhibit 5 yields the overall ranking of each car; this completes the last step of the process. These values are shown in Column 2 of Exhibit 7 and the overall ranking is given in Column 3 of the same exhibit. In this example, Car 1 is the best choice based on the criteria we selected.
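This aggregation can be written as a matrix-vector product, as in the sketch below. Only the comfort weights are taken from the text; the mileage, price and priority values are placeholders standing in for Exhibits 5 and 6, which are not reproduced here.

```python
import numpy as np

# Summary matrix: rows = cars, columns = criteria (comfort, mileage, price).
# The comfort column uses the weights quoted in the text; the other two
# columns are placeholders.
summary = np.array([
    [0.305, 0.40, 0.45],   # Car 1
    [0.346, 0.35, 0.30],   # Car 2
    [0.350, 0.25, 0.25],   # Car 3
])

# Placeholder decision priorities for comfort, mileage and price.
priorities = np.array([0.20, 0.30, 0.50])

# Overall score of each car: sum over criteria of (car weight x priority).
overall = summary @ priorities
ranking = np.argsort(-overall) + 1   # car numbers, best first
print(overall, ranking)
```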

Exhibit 7: Example Results

Empirical Study: AHP in Project Risk Analysis

In this section, we show the results of an AHP approach to project risk management. This pairwise approach has been implemented as a method for ranking risks associated with several projects being constructed at the Department of Energy’s Spallation Neutron Source (SNS) user facility in Oak Ridge, Tennessee. We will compare these results with those from a more qualitative approach.

The qualitative assessment typically used on SNS projects begins with an itemization of all risks. Each risk is assigned one of three likelihood values: very likely represents a probability of occurrence greater than 90%, unlikely represents a probability of occurrence less than 10%, and likely represents all values in between. Based on a specific set of criteria, each risk element is also assigned an importance factor (critical, significant or moderate). As an example, a risk that costs more than $500K would be considered critical. The combination of likelihood and importance results in a category assignment; for example, the combination of very likely and critical ratings would be assigned the category of high.
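A rough sketch of this kind of screening is shown below; only the likelihood thresholds and the very likely/critical example come from the text, and the remaining category assignments are hypothetical placeholders.

```python
# Sketch of the qualitative screening described above. The likelihood
# thresholds and the (very likely, critical) -> high example come from the
# text; every other category assignment here is a hypothetical placeholder.

def likelihood_band(p: float) -> str:
    if p > 0.90:
        return "very likely"
    if p < 0.10:
        return "unlikely"
    return "likely"

CATEGORY = {
    ("very likely", "critical"): "high",      # stated in the text
    ("very likely", "significant"): "high",   # hypothetical
    ("likely", "critical"): "medium",         # hypothetical
    ("likely", "moderate"): "medium",         # hypothetical
    ("unlikely", "moderate"): "low",          # hypothetical
}

def categorize(probability: float, importance: str) -> str:
    return CATEGORY.get((likelihood_band(probability), importance), "medium")

print(categorize(0.95, "critical"))   # -> "high"
```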

The pairwise comparison process was applied using a 1-7 scale. Each project manager evaluated the risk elements under two different criteria, importance and likelihood. Importance relates to the magnitude with which a risk could affect the project; likelihood represents the probability of the risk actually occurring. Likelihood was not evaluated with respect to importance, and importance was not evaluated with respect to likelihood; hence, there is no hierarchical structure involved. The normalized product of the importance and likelihood weights generated via the eigenvector evaluation provides an aggregate risk ranking.
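The sketch below illustrates this aggregation using two hypothetical normalized weight vectors; it is not the study's actual data.

```python
import numpy as np

# Hypothetical normalized eigenvector weights for five risks under the two
# criteria used in the study (importance and likelihood).
importance = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
likelihood = np.array([0.10, 0.35, 0.20, 0.25, 0.10])

# Aggregate score: normalized product of the two weight vectors.
product = importance * likelihood
scores = product / product.sum()

# Rank the risks from highest to lowest aggregate score.
for rank, idx in enumerate(np.argsort(-scores), start=1):
    print(f"rank {rank}: risk #{idx + 1} (score {scores[idx]:.3f})")
```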

Thirty-three risk elements from 5 different projects were evaluated using this method starting in September 2005, and these risks were re-evaluated semi-annually thereafter. In most cases, the risk ranking was performed by the same individual, the project manager. Not only do these data show that this process is capable of eliciting stable information over time, they also show that the project leadership (comprised of the project manager and the project engineer) views risks in the same relative ordering.

Most project managers found that the pairwise comparisons were less time consuming than the more qualitative assessment. This observation supports research results (see, for example, Leach, 2000; Machina, 1987) showing that people find it difficult to assign probability expressions and yet have little to no difficulty comparing two elements with respect to some criterion (see, for example, Vargas, 1990; Zahedi, 1986). Saaty (Saaty, 1977) also noted that, in over 30 applications, people were content with the rankings provided by AHP. Here too, the project managers were quite satisfied with the ordered risk list. In no case did they feel that the ordering was inappropriate.

The results of both assessment methods for 4 successive periods of a particular project are shown in Exhibit 8. The scores are the results of the pairwise comparison (the normalized eigenvector products) and the ranking is the ordering of the scores. The column entitled "Category (UM)" represents the category that resulted from the qualitative process before a proposed mitigation strategy is applied, and the column entitled "Category (M)" represents the category of the residual risk that remains after a proposed mitigation strategy is implemented. Over the course of time, some risks were retired and new risks were added. As an example, Risk #6 was retired in June 2006 and Risks #8 and #9 were added during the January 2007 assessment.

Exhibit 8: Empirical Risk Data Results

Several observations can be derived from Exhibit 8.

  • The rankings from the pairwise comparisons do not always correlate with the qualitative categories, whether mitigated or unmitigated. As an example, one would imagine that a risk ranked high via the AHP would also receive a high classification via the qualitative approach. The color-coded cells of Exhibit 8 highlight some examples: cells in red font identify uncorrelated risk rankings, while cells in green font highlight rankings that are consistent across the two processes. Risks #1 and #6 are examples. In 3 of the 4 semi-annual reviews, Risk #1 was ranked as the number 1 risk via AHP, yet the qualitative process assigned it the category of medium; thus, these entries are highlighted in red font. On the other hand, Risk #6 ranked low via AHP and was assigned the category of low in the qualitative process, and thus it is highlighted in green.
  • One cannot get a ranking from the qualitative categorization process.
  • Assessments by the same person, even when separated in time, are stable; that is, they do not fluctuate wildly over time. This supports the claim that the pairwise values are not generated arbitrarily or randomly.
  • The normalized eigenvector weights of the high priority (top ranked items) are significantly different from the lower priority weights. Hence, even if some inconsistency exists within the decision process, the relative ordering will be maintained because the largest weights are far from the smallest weights.

These empirical data were also reviewed for consistency. Consistency values were calculated for both the Importance matrix and the Likelihood matrix for each project using Saaty’s calculation a_ik × a_kj - a_ij.
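A minimal sketch of this triple-wise check is given below; the matrix values are hypothetical and the tolerance used to flag an inconsistency is not specified here.

```python
import numpy as np

def triple_deviations(A: np.ndarray) -> np.ndarray:
    """Deviation a_ik * a_kj - a_ij for every (i, k, j) triple of A.

    A perfectly consistent matrix gives zero for all n**3 triples; the size
    of the deviations indicates how far the judgments stray from consistency.
    """
    n = A.shape[0]
    devs = np.empty((n, n, n))
    for i in range(n):
        for k in range(n):
            for j in range(n):
                devs[i, k, j] = A[i, k] * A[k, j] - A[i, j]
    return devs

# Example with a hypothetical 3x3 reciprocal matrix.
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
print(np.abs(triple_deviations(A)).max())
```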

There are a large number (n^3) of combinations that need to be evaluated. In the case of the September 2005 risk matrix shown in Exhibit 8, n = 7 and 343 different combinations needed to be assessed. During this evaluation, we realized that Saaty’s algorithm rejected orderings or combinations that we believed to be valid. Our objective was to obtain a ranked risk list; therefore, it was more important to know whether one risk was more important than another than to know how much more important it was. Hence, an alternative algorithm was developed that utilized the ordinal values of a_ik, a_kj and a_ij and was independent of the magnitudes. This was done by evaluating the relationship of the values in a selected pair. As an example, a_ik < a_kj would be assigned an L (for "less than"), two elements that are equal are assigned an E (for "equal"), and a G implies that a_ik > a_kj. In this way, each element was assigned a value and all possible combinations (3^3 = 27) were evaluated. Only 12 of the 27 possible combinations were valid. As an example, if a_ik = a_kj and a_kj = a_ij, then a_ik cannot be greater (or less) than a_ij.

Using Saaty’s criteria, almost 60% of the project values were found to be inconsistent. However, when evaluated against the criteria just described, 90-100% of the assessments were consistent. This shows the impact that the magnitude of the value has on Saaty’s consistency determination, and this finding is consistent with Jensen’s example in which a_ik × a_kj = 81.

Summary

In this paper, Saaty’s Analytic Hierarchy Process (AHP) is applied to project risk management. It was found that this process delivers results that are consistent with the decision maker’s intuition and that the results are consistent over time. The data also show that there can be a big difference between the risks the decision maker is really worried about and the “high” risk items identified through an alternative method. AHP is easy to understand and simple to implement. By offering a unique way to determine “what it is that we should be worrying about”, AHP can serve as a valuable component of a project’s Risk Management portfolio.

References

Belton, V. & Gear, T. (1982) On a Shortcoming of Saaty’s Method of Analytic Hierarchies. Omega, 11(2), 228-230

Braunschweig, T. (2000) Priority Setting in Agricultural Biotechnology Research, Research Report No. 16. Retrieved March 18, 2005, from http://ifpri.catalog.cgiar.org/dbtw-wpd/exec/dbtwpub.dll

Forman, E. H. (1993) Facts and Fictions about the Analytic Hierarchy Process, Mathematical and Computer Modeling, 17(4/5), 19-26

Jensen, R.E. (1984, September) An Alternative Scaling Method for Priorities in Hierarchical Structures. Journal of Mathematical Psychology, 28(3), 317-332

Leach, L. (2000, June) Schedule and Cost Buffer Sizing: How to Account for the Bias Between Project Performance and Your Model. Project Management Journal, 3(2), 34-47

Machina, M.J. (1987) Choice Under Uncertainty: Problems Solved and Unsolved. The Journal of Economic Perspectives, 1(1), 121-154

Miller, G. (1956) The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. The Psychological Review, 63, 81-97

Saaty, T.L. (1977, June) A Scaling Method for Priorities in Hierarchical Structures. Journal of Mathematical Psychology, 15(3), 234-281

Saaty, T.L. & Alexander, L.G. (1981) Thinking with Models. New York: Pergamon

Saaty, T.L. & Vargas, L.G. (1984, June) Inconsistency and Rank Preservation. Journal of Mathematical Psychology, 28(2), 205-214

Saaty, T.L. (1984) The Legitimacy of Rank Reversal. Omega, 12(5), 513-516

Saaty, T.L. (1990, September) How to make a decision: The Analytic Hierarchy Process. European Journal of Operational Research, 48(1), 9-26

Saaty, T.L. & Vargas, L.G. (1991) Prediction, Projection and Forecasting. Norwell, MA: Kluwer Academic Publishers

Saaty, T.L. (1993) What is Relative Measurement? The Ratio Scale Phantom. Mathematical and Computer Modeling, 17(4/5), 1-12

Saaty, T.L. (1998) Reflections and Projections on Creativity in Operations Research and Management Science: A Pressing Need for a Shift in Paradigm. Operations Research, 46(1), 9-16

Saaty, T.L. (2003, February 16) Decision-making with the AHP: Why is the principal eigenvector necessary? European Journal of Operational Research, 145 (1), 85-91

Vargas, L.G. (1990, September 5) An overview of the Analytic Hierarchy Process and its applications. European Journal of Operational Research, 48(1), 2-8

Zahedi, F. (1986) The Analytic Hierarchy Process - A Survey of the Method and Applications. Interfaces, 16(4), 96-108

SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy

© 2007, Barbara Thibadeau
Originally published as a part of 2007 PMI Global Congress Proceedings, Atlanta Georgia
