Integrating performance measures for effective project and program portfolio leadership
Department of Decision Sciences, School of Business, The George Washington University
This paper shows how to use the analytic hierarchy process (AHP) to integrate numerical measures from the earned value management (EVM) method with other evaluation methods to enhance the likelihood of success in exerting leadership in the management of a portfolio of projects and programs. AHP is described and used first to select and then to measure the performance of individual projects and to evaluate their contribution toward an organization’s tactical and strategic objectives. These measures are crucial components in the iterative process of selecting projects and programs for a portfolio; monitoring and, if necessary, controlling their progress; allocating and re-allocating resources to them; and terminating projects and programs when they underperform significantly and are no longer competitive in light of new opportunities, or because of shifts in organizational strategy or tactics. Examples of applying the integrated approach are presented for a large project and a small project. Management action is discussed at the project level and at the organization’s portfolio level.
In many organizations, the selection of projects that constitute an organization’s portfolio—and their regular adjustment, continual refinement, and possible termination—are important, recurring efforts. These efforts involve effective prioritization and adjustments of resource allocations among projects within the portfolio. These processes influence the future of the organization extensively. We show how AHP can be used to integrate project measures from EVM (Anbari, 2003; Fleming & Koppelman, 2005; Project Management Institute, 2005) and other sources to enhance the likelihood of successful portfolio management and leadership. The prioritization and selection of projects for an organization’s portfolio have been discussed elsewhere (Forman & Gass, 2001; Forman & Selly, 2001). We focus on the measurement of individual project performance and the synthesis of the performance of the portfolio’s constituent projects into measures reflecting the performance of units at higher levels in the organization.
Our understanding of a successful project has evolved throughout the past 40 years; Jugdev and Muller (2005) offered a “Retrospective Look” at this evolution. Often judgment occurs after the completion of a project, and we are reminded to differentiate end-project deliverables from the processes, that is, the management required to produce them (Wateridge, 1995, 1998; de Wit, 1988; Lim & Mohamed, 1999). We should also differentiate success criteria—how we measure success—from the factors that generate success (Cooke-Davies, 2002). Leaders need to understand these factors and the relevant criteria, both during the active life of a project and after its completion, so that they can implement their organization’s strategy by guiding a dynamic portfolio of projects. In this paper, we demonstrate that through AHP, disparate measures and managerial judgment can be integrated to synthesize a cohesive view of individual project performance as well as the performance of a portfolio of projects.
The Analytic Hierarchy Process (AHP)
AHP is a method for structuring complexity, measurement, and synthesis. AHP has been applied to a wide range of problems, including selecting among competing strategic and tactical alternatives, allocation of scarce resources, and forecasting. It is based on the well-defined mathematical structure of consistent matrices and the ability to generate true or approximate ratio-scale priorities from them (Mirkin & Fishburn, 1979; Saaty, 1980, 1994). Forman and Gass (2001) discussed the objectives of AHP as a general method for a variety of decisions and other applications, described successful applications of AHP, and elaborated on its efficacy and applicability compared to competing methods. We illustrate where ratio measures produced by AHP are instrumental in deriving sound, mathematically meaningful measures of individual projects as well as measures of a portfolio of projects.
Dyer and Forman (1992) discussed the benefits of AHP as a synthesizing mechanism in group decision-making and pointed out that because AHP is structured yet flexible, it is a powerful and straightforward method that can be brought into almost any group decision support system and applied in a variety of group decision contexts. Archer and Ghasemzadeh (1999) highlighted the importance and recurrence of project portfolio selection in organizations. They indicated that individual techniques are available to assist in this process but that an integrated framework was lacking. They developed a framework that separates the work into distinct stages, allowing users to be free to choose the techniques they find most suitable for each stage or to omit or modify a stage. Al-Harbi (2001) discussed the potential use of AHP as a decision-making method in project management and used contractor prequalification as an example. He constructed a hierarchical structure for the prequalification criteria and potential contractors, and applied AHP to generate a descending-order list of contractors to perform the project. Mahdi and Alreshaid (2005) examined the compatibility of project delivery methods (design-bid-build, construction management, and design-build) with specific types of owners and projects, and used AHP as a multi-criterion decision-making method to assist decision-makers in selecting the proper delivery method for their projects.
Selecting an Organization’s Portfolio of Projects and Programs
Deciding what projects to include in an organization’s portfolio of projects is extremely important and challenging. Forman and Gass (2001) described how AHP is used in allocating scarce resources to optimize the achievement of the organization’s objectives, subject to resource constraints and project dependencies. They pointed out that effective allocation of resources is instrumental to achieving an organization’s strategic and tactical objectives and that resource allocation decisions are an extremely political aspect of organizational behavior. They maintained that a process such as AHP is necessary to measure and to synthesize conflicting objective and subjective information. To achieve that, an organization must be able to identify and structure its goals into objectives and sub-objectives; identify design alternatives such as alternative R&D projects; measure the relative importance of the objectives and sub-objectives and the contribution they realize from each alternative; and find the best combination of alternatives.
Priorities of the organization’s top-level objectives are typically determined by senior executives, who compare the relative importance of the objectives in one or more face-to-face meetings. Judgments can be anonymous at first, and then shared so that individuals can see other perspectives. Because there may be a considerable difference of opinion about the relative importance of objectives, a meeting facilitator can be employed to lead a discussion to bring out what the executives have in mind, including definitions, assumptions, and other information. This discourse often leads to a high degree of consensus. Priorities derived from the complete set of combined judgments for the objectives are calculated with standard AHP mathematics, which uses the geometric mean to calculate participants’ combined judgment. As an example, an IT organization’s goal may be to optimize its “Project Portfolio Performance,” which consists of a top-level cluster of five objectives (with their calculated priorities): (1) Leverage Knowledge (.278), (2) Improve Organizational Efficiency (.269), (3) Maintain Serviceability (.191), (4) Minimize Risk (.181), and (5) Financials (.080). Each of these objectives may include several sub-objectives. For example, Leverage Knowledge may consist of: (1) Vendor/Partner Access, (2) Customer Access/Service, (3) Internal Access, and so on.
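The geometric-mean combination of group judgments can be sketched in a few lines. The judgment values below are hypothetical, not drawn from the example above:

```python
import math

# Hypothetical verbal judgments (on Saaty's 1-9 scale) from three executives
# comparing the relative importance of one pair of objectives.
judgments = [5.0, 3.0, 7.0]

# AHP combines individual judgments with the geometric mean because it
# preserves the reciprocal property of a pairwise comparison matrix: the
# combined judgment of the reciprocals equals the reciprocal of the combined
# judgment, which an arithmetic mean would not guarantee.
combined = math.prod(judgments) ** (1.0 / len(judgments))

reciprocals = [1.0 / j for j in judgments]
combined_recip = math.prod(reciprocals) ** (1.0 / len(reciprocals))

print(round(combined, 3))                   # cube root of 105
print(round(combined * combined_recip, 6))  # 1.0, reciprocity preserved
```
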
The derived priorities within any cluster are called local priorities and always sum to one. Because all these priorities are ratio-level measures, any element may then be further subdivided into smaller elements that conserve the parent element’s total priority, and their fraction of the global priority is easily calculated through simple multiplication. For example, “Leverage Knowledge” represents a global priority of 0.278 of the total priority with respect to “Project Portfolio Performance.” We can further divide this fraction into three sub-elements that have priorities 0.324, 0.276, and 0.400 with respect to “Leverage Knowledge.” If we want to understand these sub-elements as a fraction of the whole (that is, as grandchildren of “Project Portfolio Performance”), we simply multiply their sub-element proportions by the global priority of their parent, “Leverage Knowledge,” to obtain global proportions of 0.324*0.278=0.090; 0.276*0.278=0.077; and 0.400*0.278=0.111.
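The local-to-global multiplication described above can be verified directly, using the numbers from the example:

```python
# Local priorities of the three sub-elements of "Leverage Knowledge"
local = [0.324, 0.276, 0.400]
parent_global = 0.278  # global priority of "Leverage Knowledge"

# Global priority of each sub-element = local priority x parent's global priority
global_priorities = [round(p * parent_global, 3) for p in local]
print(global_priorities)  # [0.09, 0.077, 0.111]

# Because the local priorities sum to one, the children together conserve
# exactly the parent's global priority.
assert abs(sum(p * parent_global for p in local) - parent_global) < 1e-9
```
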
AHP supports Project and Program Alignment with organizational objectives. Evaluating the anticipated benefits of projects toward an organization’s objectives is useful: (1) to decide which projects should be included in the portfolio of projects; and (2) to roll up the individual project’s performance to derive measures of how well the portfolio is performing at various levels of the organization. The lowest-level elements of the organization’s objectives hierarchy (for example, Vendor/Partner Access) are called “covering objectives,” and a project’s anticipated benefit is the sum of its anticipated contributions to those covering objectives. Each of these is in turn the product of the covering objectives’ relative importance and the relative contribution of the project toward that covering objective. The relative contribution of a project toward a covering objective can be evaluated using either pairwise comparisons, or a ratings scale of intensities that possess ratio-level priorities. Measures of the anticipated benefits as well as the priorities of the objectives must be ratio-scale measures if they are to be multiplied and rolled up to derive integrated or synthesized performance measures for higher levels in the organization.
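As a sketch of this anticipated-benefit calculation, the covering-objective weights and contribution values below are hypothetical:

```python
# Hypothetical global importance of three covering objectives, paired with a
# project's relative contribution toward each (all ratio-scale values).
covering = {
    "Vendor/Partner Access":   (0.090, 0.30),
    "Customer Access/Service": (0.077, 0.15),
    "Internal Access":         (0.111, 0.40),
}

# Anticipated benefit = sum over covering objectives of
# (importance of the objective) x (project's relative contribution to it).
anticipated_benefit = sum(w * c for w, c in covering.values())
print(round(anticipated_benefit, 5))
```

The products and their sum are mathematically meaningful only because both factors are ratio-scale, which is why the text insists on ratio-scale priorities throughout.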
Evaluating Project and Program Performance
A meaningful measure of a project’s performance cannot be made in isolation; it must reflect how well the project performs relative to the organization’s goals. It requires an integration of multiple numerical measures as well as factors requiring judgment that may be expressed non-numerically. AHP is well suited to eliciting subjective judgments and producing accurate ratio-scale measures from those judgments, thereby enabling integration of all factors relevant to the performance of a project. Project performance measures have broadened beyond measuring against the planned budget, schedule, and scope. Anbari (1985) maintained that project performance needs to be measured against the quadruple objectives of scope, time, cost, and quality. Customer satisfaction has become an essential ingredient of success, although it “remains a nebulous and complex concept” (Jugdev & Muller, 2005) that might be largely explained simply by bringing projects in at cost (Cioffi & Voetsch, 2006). Measuring project success now demands a “diversified understanding” at both the project management and executive levels of the organization (Jugdev & Muller, 2005). Atkinson (1999) suggested measuring a project’s success based on the technical strength of the project deliverable; direct benefits, that is, to the organization itself; and indirect benefits to a wider stakeholder community. AHP can help with these measures.
Project and Program Performance Measurement Components
Project success is some combination of the project’s management performance, its deliverables, and its contribution to the organization’s objectives. Project performance measurement usually involves multiple measurement components. The complexity and details of these components are, in general, a function of the size of the project as well as its importance to the organization. For large projects, formal, standardized, more objective performance indicators such as those provided by EVM are becoming more common. Kim, Wells, and Duffey (2003) found that EVM is gaining higher acceptance due to favorable views of diminishing EVM problems and improving utilities. They also found that a broader approach considering users, methodology, project environment, and implementation process can improve significantly the acceptance and performance of EVM. EVM performance indicators may be supplemented by other factors that may require judgments that are more subjective. Judgment is also required to integrate various performance components into one measure of a project’s performance. This integration can be accomplished using ratio-scale priorities derived from pairwise comparisons. Small projects may not warrant the effort and expense needed to implement EVM, and one or several factors that are more subjective may play a greater role in evaluating project performance. Instead of using the same set of performance measurement components to evaluate every project, we propose defining a set of measures, each with one or more components, such that the performance of each project is evaluated with that measure (and its constituent components) most suited to the size, impact, type (such as product or service), environment (for example, international or domestic), or other characteristics of the project.
Example of a Measure and its Components for a Large Project
Exhibit 1 shows an example of a large project performance measure, consisting of a hierarchy of measurement components that could be applied to projects that merit the expense. Each of the lowest-level elements in the hierarchy of measurement components is measured objectively or subjectively, and we transform the measure into a value between 0 and 1 using a direct-rating scale, a step function, or an increasing or decreasing utility curve that may be linear, concave, or convex. To integrate or synthesize the measure components for a project, managers must obtain ratio-scale priorities that represent the relative importance of the measure components. These priorities can best be derived using the traditional AHP pairwise comparison process. Humans are more capable of making relative rather than absolute judgments, and much of the AHP process involves making pairwise relative judgments.
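The transformation of a raw measure onto the interval [0, 1] might be sketched as follows; the cutoffs and ranges are hypothetical:

```python
def linear_utility(x, lo, hi, increasing=True):
    """Map a raw measure onto [0, 1] with a straight-line utility curve,
    clamped outside the [lo, hi] range."""
    t = (x - lo) / (hi - lo)
    t = min(max(t, 0.0), 1.0)
    return t if increasing else 1.0 - t

def step_utility(x, thresholds):
    """Map a raw measure onto [0, 1] with a step function.
    thresholds: ascending list of (cutoff, value) pairs."""
    result = 0.0
    for cutoff, value in thresholds:
        if x >= cutoff:
            result = value
    return result

# Hypothetical transform for SPI: 1.0 or better is ideal, 0.7 or worse scores 0.
print(round(linear_utility(0.85, 0.7, 1.0), 3))                  # 0.5
print(step_utility(0.92, [(0.8, 0.5), (0.9, 0.8), (1.0, 1.0)]))  # 0.8
```

A concave or convex curve would replace the straight line with, for example, a power or exponential function of the clamped fraction.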
Exhibit 2 shows an example of the pairwise comparisons of the relative importance of EVM components of a schedule-driven project, such as a project to implement an organization’s compliance with a regulation going into effect in the near future. The diagonal in Exhibit 2 shows the numerical representations of the verbal judgments made by two experts. Because project schedule is so important, the Schedule Performance Index (SPI) was judged “strongly” more important than the Cost Performance Index (CPI). Despite this emphasis, SPI is only “moderately” more important than the Cost Variance (CV), indicating that performance indices are less readily understood than variances. Schedule Variance (SV) was judged “very strongly” more important than CV, because this project is “very strongly” schedule-driven. Nonetheless, SV is only “moderately” more important than the Cost Variance at Completion, because although this project is schedule-driven, expected cost overruns at the end of the project cannot be ignored.
Exhibit 1 – Hierarchy of component measures
The verbal judgments for elements not on the diagonal are not necessary for calculating the relative numerical priorities of measure components. However, they provide redundancy that leads to priorities that better approximate the ratio-scale priorities in the decision-makers’ minds. Saaty (1980) showed that the principal eigenvector of a pairwise verbal judgment matrix often produces priorities that approximate the true priorities seen in ratio scales of common physical parameters such as distance and area. Therefore, given enough variety and redundancy, errors in judgment, such as those introduced by using an ordinal verbal scale, can be reduced greatly (Forman & Gass, 2001).
Exhibit 2 – Pairwise relative comparisons
The priorities resulting from the judgments shown in Exhibit 2 are shown in Exhibit 3. An important advantage of AHP is its ability to measure the extent to which an expert’s judgments are consistent, as shown by the inconsistency ratio. (The inconsistency of this set of judgments is a bit high, but the experts felt that each judgment was warranted and the resulting priorities accurately reflected what they thought at the time.)
Exhibit 3 – Resulting priorities
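The derivation of priorities from a pairwise judgment matrix, together with the inconsistency check just mentioned, can be sketched in pure Python. The 3×3 matrix below is illustrative and is not the set of judgments from Exhibit 2:

```python
# A hypothetical, nearly consistent 3x3 pairwise comparison matrix on
# Saaty's 1-9 scale (reciprocal entries below the diagonal).
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def principal_eigen(A, iters=100):
    """Approximate the principal eigenvector (priorities summing to one)
    and principal eigenvalue of a positive matrix by power iteration."""
    n = len(A)
    v = [1.0 / n] * n
    lam = float(n)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)  # since v sums to one, sum(A v) estimates lambda_max
        v = [x / lam for x in w]
    return v, lam

priorities, lam_max = principal_eigen(A)

# Saaty's consistency index and ratio (random index RI = 0.58 for n = 3);
# a consistency ratio below about 0.10 is conventionally acceptable.
n = len(A)
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58
print([round(p, 3) for p in priorities], round(CR, 3))
```

For a perfectly consistent matrix, lambda_max equals n and the consistency ratio is zero; judgment errors push lambda_max above n.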
Once we have ratio-scale measures of a project’s performance with respect to each of the components and the relative importance of the components, we can roll up the performance to higher levels in the hierarchy. As an example, consider the subcomponents of EVM, their priorities, and performance priorities discussed above. The SV subcomponent has a performance priority of 49% and an importance priority of 46%, which, when multiplied and added to the corresponding products of the other subcomponents, results in a performance priority of 49.92% for the EVM component. We emphasize that the multiplication of the priorities of the components by the project’s performance measures in the roll-up process is mathematically meaningful only because the measures are ratio-scale measures. Moving up one more level in the hierarchy of measure components (Exhibit 1), we see EVM measures integrated or synthesized with the other measure components for the project (which we will call the AS/400 Replacement project, for example). Ultimately, the AS/400 Replacement project attains, say, a 16.15% priority, which represents its relative contribution toward one of the organization’s objectives.
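The EVM roll-up is a weighted sum. Only the SV pair (importance 46%, performance 49%) comes from the text; the other subcomponents’ values below are hypothetical, chosen so that the weighted sum reproduces the stated 49.92% EVM performance:

```python
# Importance priorities (summing to one) and performance priorities per EVM
# subcomponent; only SV's pair of values is taken from the example in the text.
importance  = {"SPI": 0.28, "CPI": 0.10, "SV": 0.46, "CV": 0.06, "VAC": 0.10}
performance = {"SPI": 0.50, "CPI": 0.52, "SV": 0.49, "CV": 0.48, "VAC": 0.53}

# Roll-up: multiply each subcomponent's performance by its importance and sum.
evm_performance = sum(importance[k] * performance[k] for k in importance)
print(round(evm_performance, 4))  # 0.4992
```
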
Example of a Measure for a Small Project
Small projects (in some organizations, $20,000 or less) may not warrant measure components as involved as those shown above for a large project. The simplest measure might be a verbal ratings scale, consisting of rating adjectives such as High Performance, Strong Performance, Moderate Performance, and so forth. The priorities associated with these adjectives must be ratio-scale measures if they are to be combined with the other measures to produce an integrated measure that is mathematically meaningful and in proportion to the project’s performance. This is accomplished by first performing pairwise comparisons of the rating intensities themselves (for example, comparing “High Performance” to “Strong Performance,” and “Strong Performance” to “Good Performance”), which results in ratio-scale priorities for the rating intensities that are then used to evaluate one or more projects.
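A common approximation derives intensity priorities from the row geometric means of the pairwise comparison matrix. The three intensities and the judgment values below are hypothetical:

```python
import math

# Hypothetical pairwise comparisons of three rating intensities on the 1-9
# scale: High vs. Strong = 2, High vs. Moderate = 5, Strong vs. Moderate = 3.
labels = ["High Performance", "Strong Performance", "Moderate Performance"]
M = [
    [1.0, 2.0, 5.0],
    [1/2, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

# Row geometric means approximate the principal eigenvector for
# near-consistent matrices; normalizing makes the priorities sum to one.
gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
total = sum(gm)
priorities = {lab: g / total for lab, g in zip(labels, gm)}
print({lab: round(p, 3) for lab, p in priorities.items()})
```

A small project is then scored by the ratio-scale priority of whichever intensity it is assigned.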
Management Action at the Project and Program Level
The results obtained can be used for management action at the project and program level, to better understand the progress of the project in light of the stated priorities. Consequently, one may analyze tradeoffs among the traditional project dimensions (scope, schedule, budget) and other project objectives (quality, customer satisfaction, repeat business); request additional resources; reassign personnel; crash selected project activities; adjust scope; conduct an audit; and so forth.
Evaluating Portfolio Performance
Organizational leadership at the senior executive level needs to track and understand the performance of the entire portfolio of projects. Projects may be performing well or not so well, and may be relatively important or less important. The question arises—how do we aggregate the performance of individual projects to derive composite measures of performance at higher levels in the organization to produce a “performance dashboard”? The answer lies with the ratio-scale measures of the anticipated project benefits toward the organization’s objectives (that were derived when the projects were considered for funding) and the ratio-scale measures of the actual performance of each of the projects that were selected to be in the organization’s portfolio (such as the performance of the AS/400 Replacement project).
These ratio-scale measures can be summed (“rolled up”) to determine performance toward meeting the higher-level organizational objectives and to obtain a single integrated measure of the performance of the project portfolio. For example, we find that there are two projects that contribute to the Vendor/Partner Access objective: one is the AS/400 Replacement project and the other is, say, the Cisco Routers project, with derived ratio-scale performance measures of 51.80% and 95.17%, respectively. The relative ratio-scale priorities of these two projects (16.15% and 83.85%), which are used to roll up the performances to the next higher level, are determined by normalizing each project’s anticipated contribution to the Vendor/Partner Access objective.
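That roll-up can be sketched as follows, using the numbers from the example (the anticipated contributions are assumed already normalized to the 16.15% / 83.85% weights):

```python
# Normalized anticipated contributions of the two projects toward the
# Vendor/Partner Access objective, and each project's derived ratio-scale
# performance measure.
anticipated = {"AS/400 Replacement": 0.1615, "Cisco Routers": 0.8385}
performance = {"AS/400 Replacement": 0.5180, "Cisco Routers": 0.9517}

total = sum(anticipated.values())
weights = {p: a / total for p, a in anticipated.items()}  # normalize to one

# Objective-level performance = weighted sum of the project performances.
objective_perf = sum(weights[p] * performance[p] for p in weights)
print(round(objective_perf, 4))
```

The same weighted sum, applied level by level, produces the composite performance measures for successively higher organizational objectives.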
The results can be portrayed on a “dashboard” that contains colors according to a selected legend. The colors enable management to get a fast visual impression of performance throughout the organization. However, colors are ordinal measures, somewhat arbitrary and dependent on the ranges selected, whereas the underlying ratio-scale measures of performance are more meaningful. For example, if one measure were just below an arbitrary cutoff and another just above, they would show yellow and green, respectively, even though their performance might be almost the same. Thus, a manager should examine the actual ratio performance values and not just the colors.
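The cutoff artifact can be illustrated with a toy color mapping; the thresholds are hypothetical:

```python
def color(perf, yellow=0.50, green=0.75):
    """Map a ratio-scale performance value to an ordinal dashboard color
    using hypothetical cutoffs."""
    if perf >= green:
        return "green"
    if perf >= yellow:
        return "yellow"
    return "red"

# Two nearly identical performances straddling a cutoff receive different
# colors, which is why the underlying ratio values should also be examined.
print(color(0.749), color(0.751))  # yellow green
```
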
The contribution of a specific objective to organizational performance can differ depending on the objective to which it contributes. For example, the relative contribution of the AS/400 Replacement project’s performance toward, say, the Leverage Proven Technology objective is 48.45%, as compared to only 16.15% toward the Vendor Partner Access objective.
Whereas a picture of the performance toward the organization’s entire hierarchy of objectives would require many dashboard views, a more elaborate “wallboard” view containing the same colors can be posted on the wall of a room to depict the performance of the portfolio of projects toward the entire hierarchy of objectives.
Leadership through Management Action at the Portfolio Level
Projects—which by definition effect change—represent the major tactical mechanisms for implementing an organization’s strategy. Projects exist at all levels in an organization and should align with organizational goals. Thus, the project portfolio should be viewed as an integrated unit that contributes to advancing organizational strategy. Leadership is required to move away from established plans, when such change is needed. The combination of project performance measures and organizational priorities determines the actions to be contemplated. Often resources need to be re-allocated. Project management allows tradeoffs among schedule, budget, and scope while maintaining an integrated project. For example, with careful, iterative planning, schedules can be lengthened or shortened by temporarily removing or adding resources. This resource re-allocation will affect a project’s short-term budget, but it need not always change the total project budget. The priorities of the organization determine the proper mix of short-term and long-term goals, and these change too.
Resource re-allocation can result in termination of projects. Terminating a project before its planned completion may seem to represent an extreme or punitive action, but if a project’s performance measure indicates poor achievement and it is not adequately contributing to important organizational objectives, that project’s resources can best be used elsewhere. The sunk costs of a project should not be allowed to affect an organization’s future. The proper questions to ask are: “What costs are necessary to complete the project?” “What are the currently anticipated benefits?” and “What other project opportunities are competing for scarce resources?” The original plans serve only as mileposts against which to measure progress. An organization’s strategic direction may change. New opportunities continually emerge. Terminating a project forcefully requires courage, but too often projects are allowed to continue until they die by a variety of means such as attrition (Meredith & Mantel, 2006). A decision-maker’s job is to use available (or to-be-available) resources in a way that maximizes the achievement of the organization’s objectives. This maximization can and should include qualitative “morale” costs of terminating projects that are meeting their goals but are no longer competitive for the organization. The mechanism that we described in this paper allows decision-makers to measure their project portfolios against the objectives that they themselves have carefully prioritized.
We have shown how the analytic hierarchy process can be used to (1) measure and integrate project performance based on EVM and other sources, and (2) roll up project performance to derive measures of performance at higher levels in the organization. This ability enhances the likelihood of success in leading project portfolio management. The approach presented in this paper is an extension of the process of selecting projects for an organization’s portfolio. It brings consistency and rationality to the efforts subsequently required: the continual refinement of projects, their effective prioritization, adjustments of resource allocation, and possible termination of projects that are no longer in optimal alignment with the organization’s goals. In a business world that depends more and more on successful project implementation for tactical and ultimately strategic achievement, these important management and leadership efforts are vital to the future of any organization.
Al-Harbi, K. M. Al-S. (2001). Application of the AHP in project management. International Journal of Project Management, 19(1), 19–27.
Anbari, F. T. (1985). A systems approach to project evaluation. Project Management Journal, 16(3), 21–26.
Anbari, F. T. (2003). Earned value project management method and extensions. Project Management Journal, 34(4), 12–23.
Archer, N. P., & Ghasemzadeh, F. (1999). An integrated framework for project portfolio selection. International Journal of Project Management, 17(4), 207–216.
Atkinson, R. (1999). Project management: Cost, time and quality, two best guesses and a phenomenon, it’s time to accept other success criteria. International Journal of Project Management, 17(6), 337–342.
Cioffi, D. F., & Voetsch, J. (2006). A single constraint? Evidence that budgets drive reported customer satisfaction. Manuscript submitted for publication.
Cooke-Davies, T. (2002). The “real” success factors on projects. International Journal of Project Management, 20(3), 185–190.
de Wit, A. (1988). Measurement of project success. International Journal of Project Management, 6(3), 164–170.
Dyer, R. F., & Forman, E. H. (1992). Group decision support with the analytic hierarchy process. Decision Support Systems, 8(2), 99–124.
Fleming, Q. W., & Koppelman, J. M. (2005). Earned value project management (3rd ed.). Newtown Square, PA: Project Management Institute.
Forman, E. H., & Gass, S. I. (2001). The analytic hierarchy process: An exposition. Operations Research, 49(4), 469–486.
Forman, E. H., & Selly, M. A. (2001). Decision by objectives. River Edge, NJ: World Scientific Press.
Jugdev, K., & Muller, R. (2005). A retrospective look at our evolving understanding of project success. Project Management Journal, 36(4), 19–31.
Lim, C. S., & Mohamed, M. Z. (1999). Criteria of project success: An exploratory re-examination. International Journal of Project Management, 17(4), 243–248.
Mahdi, I. M., & Alreshaid, K. (2005). Decision support system for selecting the proper project delivery method using analytical hierarchy process (AHP). International Journal of Project Management, 23(7), 564–572.
Meredith, J. R., & Mantel, S. J., Jr. (2006). Project management: A managerial approach (6th ed.). New York: Wiley.
Mirkin, B. G., & Fishburn, P. C. (1979). Group choice. Washington, DC: V. H. Winston; distributed by Halsted Press.
Project Management Institute. (2005). Practice standard for earned value management. Newtown Square, PA: Project Management Institute.
Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Saaty, T. L. (1994). How to make a decision: The analytic hierarchy process. Interfaces, 24(6), 19–43.
Wateridge, J. (1995). IT projects: A basis for success. International Journal of Project Management, 13(3), 169–172.
Wateridge, J. (1998). How can IS/IT projects be measured for success? International Journal of Project Management, 16(1), 59–63.
© 2007, Frank T. Anbari, Denis F. Cioffi, and Ernest H. Forman
Originally published as a part of 2007 PMI Global Congress Proceedings – Atlanta, GA