Integrating performance measures to exert effective leadership in managing project portfolios


Denis F. Cioffi, PhD
Associate Professor, School of Business,
The George Washington University, Washington, DC, USA

Ernest H. Forman, PhD
Professor, School of Business,
The George Washington University, Washington, DC, USA


The analytic hierarchy process (AHP) is used to integrate measures from the relatively objective earned value management method (EVM) and from other, more subjective evaluation methods to enhance the likelihood of success in leading the management of project portfolios. AHP can be used not only to measure the performance of individual projects but also to evaluate the contribution of these projects toward an organization's tactical and strategic objectives. These measures are crucial components of the iterative process of selecting projects for a portfolio, monitoring and controlling their progress, allocating and re-allocating resources among them, and terminating projects when they under-perform or are no longer competitive in light of new opportunities or because of shifts in strategy or tactics.


In many organizations, the selection of projects that constitute an organization's portfolio, and their regular adjustment, continual refinement, and possible termination are important, recurring efforts. These efforts involve effective prioritization and adjustments of resource allocations among projects within the portfolio, and they often strongly influence the organization's future. In this paper, we show how the AHP can be used to integrate project measures from the EVM and other sources to enhance the likelihood of successful portfolio management and leadership.

Our understanding of a successful project has evolved throughout the past 40 years; Jugdev & Müller (2005) offer a “retrospective look” at this evolution. Often judgment occurs after the completion of a project, and we are reminded to differentiate end-project deliverables from the processes, i.e., the management, required to produce them (Wateridge, 1995; Wateridge, 1998; de Wit, 1988; Lim & Mohamed, 1999). We should also differentiate success criteria — how we measure success — from the factors that generate success (Cooke-Davies, 2002). Leaders need to understand these factors and the relevant criteria both during the active life of a project and after its completion so they can implement their organization's strategy by guiding a dynamic portfolio of projects. Here we demonstrate that through AHP disparate measures and managerial judgment can be integrated to synthesize a cohesive view of individual project performance as well as the performance of a portfolio of projects.

The Analytic Hierarchy Process

The AHP is a method for measurement, for synthesis, and for structuring the complexity found in projects and their objectives and sub-objectives, as well as in the organization's objectives and sub-objectives. The AHP has been applied to a wide range of problems, including selecting among competing strategic and tactical alternatives, the allocation of scarce resources, and forecasting. It is based on the well-defined mathematical structure of consistent matrices and the ability to generate true or approximate ratio scale priorities using them (Mirkin & Fishburn, 1979; Saaty, 1980 and 1994). Forman & Gass (2001) discuss the objectives of AHP as a general method for a variety of decisions and other applications, briefly describe successful applications of AHP, and elaborate on the efficacy and applicability of AHP compared to competing methods.

We will illustrate where ratio measures produced by AHP are instrumental in deriving sound, mathematically meaningful measures of individual projects as well as measures of a portfolio of projects. According to Stevens (1946), there are four levels of measurement. The levels, ranging from lowest to highest in terms of meaning, are nominal, ordinal, interval, and ratio. Each level has all of the properties (both meaning and statistical) of the levels below it, plus additional ones. For example, a ratio measure has ratio, interval, ordinal, and nominal properties. An interval measure does not have ratio properties, but it does have interval, ordinal, and nominal properties. Ratio measures are necessary to represent proportion and are fundamental to good physical measurement. They are also important in combining measures derived from subjective judgments.

Prior Applications of AHP in Project, Program, and Portfolio Management

Dyer & Forman (1992) discussed the benefits of AHP as a synthesizing mechanism in group decision-making and explained why AHP is so well-suited to group decision-making. Because AHP is structured yet flexible, it is a powerful and straightforward method that can be brought into almost any group decision support system and applied in a variety of group decision contexts. Archer & Ghasemzadeh (1999) highlighted the importance and recurrence of project portfolio selection in many organizations. They indicated that many individual techniques are available to assist in this process, but they saw no integrated framework to effect it. They therefore developed a framework that separates the work into distinct stages to simplify the project portfolio selection process. Each stage accomplishes a particular objective and produces inputs for the next stage. Users are free to choose the techniques they find most suitable for each stage or to omit or modify a stage to expedite the process or tailor it to their individual specifications. The framework may be implemented in the form of a decision support system, and Archer & Ghasemzadeh described a prototype system that supports many related decision-making activities.

Al-Harbi (2001) discussed the potential use of AHP as a decision-making method in project management and used contractor prequalification as an example. He constructed a hierarchical structure for the prequalification criteria and the contractors wishing to prequalify for a project. He applied AHP to prioritize prequalification criteria and generated a descending-order list of contractors to select the best contractors for performing the project. He performed a sensitivity analysis to check the sensitivity of the final decisions to minor changes in judgments, and pointed out that AHP implementation would be simplified with Expert Choice software, which is available commercially.

Mahdi & Alreshaid (2005) examined the compatibility of various project delivery methods with specific types of owners and projects. Options for project delivery include design-bid-build, construction management, and design-build methods. Depending on the requirements of the project, one method may be better suited than another. Project requirements should be evaluated to determine the option most likely to produce the best outcome for the owners. Mahdi & Alreshaid used AHP as a multi-criterion decision-making method to assist decision-makers in selecting the proper delivery method for their projects, and they provided an example of selecting the proper project delivery method for an actual project.

Selecting an Organization's Portfolio of Projects

Deciding which projects to include in an organization's portfolio is extremely important and entails a variety of challenges. Forman & Gass (2001) described how AHP is used to allocate scarce resources so as to optimize the achievement of the organization's objectives subject to budgetary constraints and project dependencies:

An effective allocation of resources is key to achieving an organization's strategic and tactical objectives. Information about what resources are available to management is usually easy to determine. Much more difficult to ascertain is the relative effectiveness of resources toward the achievement of the organization's goals, since all organizations have multiple objectives. Resource allocation decisions are perhaps the most political aspect of organizational behavior. Because there are multiple perspectives, multiple objectives, and numerous resource allocation alternatives, a process such as the AHP is necessary to measure and to synthesize the often-conflicting objective and subjective information. An organization must be able to:

  • Identify design alternatives (e.g., alternative research and development (R&D) projects, or operational plans for alternative levels of funding for each of the organization's departments)
  • Identify and structure the organization's goals into objectives, sub-objectives, sub-sub-objectives, and so on
  • Measure (on a ratio scale) how well each alternative contributes to each of the lowest-level sub-objectives
  • Find the best combination of alternatives, subject to budgetary, environmental, and organizational constraints.

We will not discuss in this paper the process of deciding which projects to include in or exclude from an organization's portfolio; instead, we focus on measuring the performance of those projects that have been selected. Portions of the process common to selecting projects and to evaluating their performance toward an organization's objectives are discussed below.

Using AHP to Synthesize and Integrate Project Performance Measures and Project Portfolio Measures

Assessing the overall performance of a project (in the sense of how well the project is performing in relation to all the original goals, not just the formal definition of “earned value”) requires an integration of multiple objective, numerical measures as well as factors requiring subjective judgment that may originally be expressed non-numerically. AHP is well suited to eliciting subjective judgments and producing accurate ratio scale measures from those judgments, thereby enabling integration of all the factors relevant to the performance of a project.

Organizational leadership above the project management level typically lacks an integrated view of the performance of the portfolio of projects being undertaken by the organization. As illustrated below, AHP is also well suited to deriving ratio scale priorities for an organization's hierarchy of objectives. These priorities can be used to roll up the performance measures of individual projects for consideration at an organization's higher levels (such as departments, divisions, and strategic business units).

Evaluating Project Performance

Project performance measures have widened beyond measuring against planned budget, schedule, and scope. For example, customer satisfaction has become an essential ingredient of success, although it “remains a nebulous and complex concept” (Jugdev & Müller, 2005) that might be largely explained simply by bringing projects in at cost (Cioffi & Voetsch, 2006). Whatever the collected criteria, measuring project success now demands a “diversified understanding” at both the project management and executive levels of the organization (Jugdev & Müller, 2005). Atkinson (1999), for example, suggested three categories for measuring a project's success after it has been completed: the “technical strength” of the project deliverable; “direct benefits,” i.e., to the organization itself; and “indirect benefits” to a “wider stakeholder community.” AHP can help with these measures, too.

Measures and Components

Project success, as discussed above, is some combination of the project's management performance, its deliverables, and its contribution to the organization's objectives. We will discuss the contribution to the organization's objectives later in the paper.

Project performance measurement usually involves multiple measurement components. The complexity and details of these components are, in general, a function of the size of the project as well as its importance to the organization. For large projects, formal, standardized, more objective performance indicators such as those provided by the EVM (see Appendix I) are becoming more common. Kim, Wells, & Duffey (2003) found that EVM is gaining acceptance as its perceived problems diminish and its perceived utility improves. They also found that a broader approach considering users, methodology, project environment, and implementation process can significantly improve the acceptance and performance of EVM in different types of organizations and projects. EVM performance indicators may be supplemented by other factors, such as project quality, that may require more subjective judgments. Judgment is also required to integrate the various performance components into one measure of a project's performance. This integration can be accomplished using ratio scale priorities derived from pairwise comparisons, as is typical when using AHP (but not necessarily typical with other methods). Small projects may not warrant the effort (and thus the expense) needed to implement EVM, and one or several more subjective factors may play a greater role in evaluating project performance.

Instead of using the same set of performance measurement components to evaluate every project, we propose defining a set of measures, each with one or more components, such that the performance of each project is evaluated with that measure or measures most suited to the size, impact, type (e.g., product or service), environment (e.g., international or domestic), or other characteristics of the project. Figure 1 shows one such measure, consisting of a hierarchy of measurement components that could (or should) be applied to a class of projects large enough to merit the expense:

Example of a Measure and Its Components for a Large Project

Hierarchy of Component Measures

Figure 1. Hierarchy of Component Measures

Each of the lowest-level elements in the above hierarchy represents something that is measured either objectively or subjectively. In either case, we advise transforming the measure into a value between 0 and 1 using a direct rating scale, a step function, or an increasing or decreasing utility curve that may be linear, concave, or convex. For example, a concave increasing utility curve, such as that shown in Figure 2, might be appropriate for transforming a project's earned value cost performance index into a priority value between 0 and 1, where a cost performance index of 0.1 or less maps to a priority value of 0 and a cost performance index of 2 or more maps to a priority value of 1.0.

Utility Curve for Cost Performance Index

Figure 2. Utility Curve for Cost Performance Index
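A utility transform of this kind can be sketched in a few lines. The square-root shape below is an illustrative assumption, one concave increasing curve among many, and not necessarily the exact curve pictured in Figure 2; the function name is ours:

```python
import math

def cpi_utility(cpi: float) -> float:
    """Concave increasing utility for the cost performance index (CPI):
    0.1 or less maps to 0.0, 2.0 or more maps to 1.0.  The square-root
    shape is one illustrative concave choice."""
    clamped = min(max(cpi, 0.1), 2.0)
    return math.sqrt((clamped - 0.1) / 1.9)
```

An on-budget project (CPI = 1.0) would score about 0.69 under this particular curve; a convex or step-shaped transform would reward or penalize deviations differently.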

Figure 3 shows a linear utility curve for a project's schedule variance percentage. It is defined such that a project that is 200% or more behind schedule has a priority value of zero for this measure, whereas one that is 200% or more ahead of schedule has a value of 100%. As the example in Figure 4 shows, applying this linear function to one of the projects, the AS/400 replacement, which is 15% behind schedule, yields a value of 46%. The priority of the Schedule Variance (49%) will be discussed shortly.

Utility Curve for Schedule Variance Percentage

Figure 3. Utility Curve for Schedule Variance Percentage

Schedule Variance Percentage Data and Performance for the AS/400 Replacement Project

Figure 4. Schedule Variance Percentage Data and Performance for the AS/400 Replacement Project
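The linear function of Figure 3 is simple enough to state exactly. The sketch below (the function name is ours) reproduces the roughly 46% value (0.4625) quoted for the AS/400 replacement project:

```python
def schedule_variance_utility(sv_pct: float) -> float:
    """Linear utility for schedule variance percentage: -200% or worse
    maps to 0.0, +200% or better maps to 1.0."""
    clamped = min(max(sv_pct, -200.0), 200.0)
    return (clamped + 200.0) / 400.0
```

For the AS/400 replacement, `schedule_variance_utility(-15)` gives 0.4625, which rounds to the 46% shown in Figure 4.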

Component Priorities

To integrate or synthesize the various measure components for a project, managers and team members must obtain ratio scale priorities that represent the relative importance of the measures. These priorities are best derived using the traditional AHP pairwise comparison process: humans are more capable of making relative rather than absolute judgments, and much of the AHP process involves making pairwise relative judgments.

In any given organization, some projects are more time-sensitive than others; some are so heavily schedule driven that “time is of the essence” is written into the contract to highlight this mandate. Figure 5 below shows an example of pairwise comparisons of the relative importance of the Earned Value components for projects where meeting schedule is mandatory, such as the Year 2000 (Y2K) systems remediation projects or projects that implement an organization's compliance with governmental regulations going into effect at a specific, near date. These judgments were made by two experts with more than 50 years of combined project management experience; the diagonal in the figure below shows the numerical representations of their verbal judgments, and their rationale follows.

Pairwise Relative Comparisons

Figure 5. Pairwise Relative Comparisons

To start, because project schedule is so important, the Earned Value Schedule Performance Index was judged “strongly” more important than the Earned Value Cost Performance Index. Despite this emphasis, the Schedule Performance Index is only “moderately” more important than Cost Variance because performance indices are less commonly used and therefore less readily understood. Schedule Variance was judged “very strongly” more important than Cost Variance because this project is so “very strongly” schedule driven. Nonetheless, Schedule Variance is only “moderately” more important than the Cost Variance at Completion because, although this project is schedule driven, expected cost overruns at the end of the project cannot be ignored completely.

The verbal judgments for elements not on the diagonal (and not discussed here) are not necessary for calculating the relative numerical priorities of measure components. However, they are important because they provide redundancy that leads to derived priorities that more accurately approximate the ratio-scale priorities in the decision-makers' minds. Although the fundamental verbal scale used to elicit judgments is an ordinal scale, Saaty's (1980) empirical research showed that the principal eigenvector of a pairwise verbal judgment matrix often produces priorities that approximate the true priorities seen in ratio scales of common physical parameters such as distance, area, and brightness (because, as Saaty showed, the eigenvector calculation has an averaging effect, corresponding to finding the dominance of each alternative along all walks of length k as k goes to infinity). Therefore, if there is enough variety and redundancy, errors in judgment, such as those introduced by using an ordinal verbal scale, can be reduced greatly (Forman & Gass, 2001).

The priorities resulting from the judgments shown above are exhibited in Figure 6, below. An important advantage of AHP is its ability to measure the extent to which an expert's judgments are consistent, as shown by the inconsistency ratio in Figure 6. (The inconsistency of this set of judgments is a bit high, but the experts felt that each judgment was warranted and that the resulting priorities accurately reflected what they thought.)

Resulting Priorities

Figure 6. Resulting Priorities
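The underlying arithmetic can be sketched with the standard power-iteration approximation of the principal eigenvector. The 3×3 matrix below encodes hypothetical judgments on Saaty's numerical scale (5 = “strongly,” 3 = “moderately” more important); it is not the matrix of Figure 5, and the function names are ours:

```python
def ahp_priorities(matrix, iterations=100):
    """Approximate the principal right eigenvector of a pairwise
    comparison matrix by power iteration, normalized to sum to 1."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    return v

def consistency_ratio(matrix, priorities):
    """Saaty's consistency ratio CI/RI (defined for n >= 3);
    the RI values are Saaty's random indices for n = 3, 4, 5."""
    n = len(matrix)
    av = [sum(matrix[i][j] * priorities[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(av[i] / priorities[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]
    return ci / ri

# hypothetical judgments: component 1 is "strongly" (5) more important
# than component 2 and "moderately" (3) more important than component 3
judgments = [[1.0, 5.0, 3.0],
             [1/5, 1.0, 1/2],
             [1/3, 2.0, 1.0]]
priorities = ahp_priorities(judgments)
```

For a judgment matrix this close to consistent, the eigenvector result is nearly identical to the row geometric means; commercial tools such as Expert Choice perform the same eigenvector calculation and report the inconsistency ratio alongside the priorities.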

Integrating Component Measures

Once we have ratio scale measures of the project's performance with respect to each of the components, as well as ratio scale measures of the relative importance of the components, we can roll up the performance to higher levels in the component hierarchy (see Figure 7). We emphasize that the multiplication of the priorities of the components by the project's performance measures in the roll-up process is mathematically meaningful only because the measures are ratio scale measures.

Integrated Earned Value Measure of a Project's Performance

Figure 7. Integrated Earned Value Measure of a Project's Performance
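The roll-up itself is a priority-weighted sum applied recursively up the component hierarchy. The sketch below uses hypothetical component names, priorities, and utility scores, not the actual values of Figure 7:

```python
def roll_up(node, scores):
    """Performance of a node in the component hierarchy: a leaf returns
    its utility score; an internal node returns the priority-weighted
    sum of its children (meaningful because all values are ratio scale)."""
    if isinstance(node, str):          # leaf: name of a measured component
        return scores[node]
    return sum(weight * roll_up(child, scores) for weight, child in node)

# hypothetical hierarchy: an earned-value cluster plus a quality measure
components = [(0.5, [(0.6, "spi"), (0.4, "cpi")]),
              (0.5, "quality")]
scores = {"spi": 0.55, "cpi": 0.69, "quality": 0.80}
project_performance = roll_up(components, scores)
```

Here the earned-value cluster rolls up to 0.606, and the project as a whole to 0.703; multiplying priorities by performance scores in this way is legitimate only because both are ratio scale measures.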

Figure 8 shows the earned value measures integrated or synthesized with other measure components for this project (see Figure 1, Hierarchy of Component Measures). The 16.15% priority for this project represents the relative contribution of this project toward one of the organization's objectives and will be discussed below.

Integrated Project Performance Measure

Figure 8. Integrated Project Performance Measure

Example of Measure for a Small Project

Small projects (in some organizations, $20,000 or less) may not warrant measure components as involved as those shown above for a large project. The simplest measure might be a verbal rating scale, consisting of rating adjectives such as High Performance, Strong Performance, Good Performance, Moderate Performance, and so forth. However, the priorities associated with these adjectives must be ratio scale measures if they are to be combined with the other measures to produce an integrated measure that is mathematically meaningful and in proportion to the project's performance. This is accomplished by first performing pairwise comparisons of the rating intensities themselves, e.g., comparing “High Performance” to “Strong Performance” (see Figure 9) and “Strong Performance” to “Moderate Performance” (see Figure 10), which results in ratio scale priorities for the rating intensities (see Figure 11) that are then used to evaluate one or more projects (see Figure 12).

Relative Preference for a High Performance vs. Strong Performance Project

Figure 9. Relative Preference for a High Performance vs. Strong Performance Project

Relative Preference for a Strong Performance vs. Moderate Performance Project

Figure 10. Relative Preference for a Strong Performance vs. Moderate Performance Project

Ratio Scale Priorities for Rating Intensities

Figure 11. Ratio Scale Priorities for Rating Intensities

Rating a Project's Performance with Only One Component Measure

Figure 12. Rating a Project's Performance with Only One Component Measure
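The resulting intensity priorities behave like any other ratio scale. With hypothetical values (not those of Figure 11), normalized so the top rating equals 1.0, a “Strong” project contributes exactly 1.5 times as much as a “Good” one:

```python
# hypothetical ratio-scale priorities for the rating intensities,
# normalized so the top rating equals 1.0
INTENSITIES = {"High": 1.00, "Strong": 0.72, "Good": 0.48, "Moderate": 0.27}

def rate(rating: str) -> float:
    """Return the ratio-scale priority of a verbal rating."""
    return INTENSITIES[rating]
```

Were the priorities merely ordinal (say, 4, 3, 2, 1 assigned arbitrarily), such ratios, and any subsequent weighted roll-up, would be meaningless.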

Management Action at Project Level

The results obtained thus far can be used for management action at the project level, to understand better the progress of the project in light of the stated priorities. Consequently, one may analyze tradeoffs among the traditional project dimensions (scope, schedule, and budget) and other project objectives (quality, customer satisfaction, and repeat business); request additional resources; reassign personnel; crash the schedule; adjust scope; conduct an audit; and so forth.

Evaluating Portfolio Performance: Synthesizing to Derive Performance Measures above the Project Level

Alignment with Strategic Objectives

The benefit that each project brings to the organization must ultimately be measured by its contribution to the organization's objectives, both tactical and strategic. This requires a thorough understanding and structuring of these objectives and the participation of personnel from throughout an organization.

Identifying and Structuring Organizational Objectives

An organization must relate project performance to its strategic objectives. AHP proponents cite the process of clarifying and translating strategic goals into a concrete set of objectives, such as those shown in Figure 13, as a major benefit of the process. Beyond this important benefit, AHP facilitates deriving reliable priorities that precisely communicate an organization's strategy and enable the measurement of overall performance.

Hierarchy of Corporate Objectives

Figure 13. Hierarchy of Corporate Objectives

Evaluating Anticipated Benefits

Evaluating the anticipated benefit of projects toward an organization's objectives is useful for at least two purposes: (1) to decide which projects should and should not be included in the project portfolio—a resource allocation problem that is not discussed in detail in this paper; and (2) to roll up the individual project's performance (discussed above) to derive measures of how well the portfolio is performing at various levels of the organization. This evaluation in turn requires (1) prioritization of the organization's strategic, intermediate, and operational objectives, as well as (2) an evaluation of the project's anticipated benefits to those objectives to which it contributes. Both of these, in turn, must be ratio scale measures if they are to be multiplied and rolled up to derive integrated or synthesized performance measures for higher levels in the organization, such as Vendor/Partner Access, Leveraging Knowledge, and Project Portfolio Performance shown in Figure 13.

Deriving Priorities for Organizational Objectives

Priorities for the elements in the organization's objectives hierarchy are typically derived by teams, whose composition is determined by where in the hierarchy the priorities are being derived. For example, top-level executives (e.g., at the vice-presidential level) make pairwise comparisons of the relative importance of the organization's primary objectives. The actual procedure often occurs at one or more meetings. Electronic keypads can be used to elicit judgments, although this practice is not yet common in some organizations. The judgments are anonymous at first but are then shared so that individuals can see other perspectives. Figure 14 shows judgments about the relative importance of the top-level objectives from five executives. When there is considerable difference of opinion about which objectives are more important, a meeting facilitator can lead a discussion to bring out more fully what the executives had in mind, including definitions, assumptions, and information that might not be commonly available or expressed. This discourse, which most often leads to a high degree of consensus, is an important part of the process. Because of AHP's reciprocity axiom (if A is 5 times B, then B is 1/5th of A), the geometric mean is used to calculate a combined judgment for the group. If desired, a supporting evaluation can be used to weight each executive's judgment based on criteria such as knowledge, experience, and responsibility, but this extra, outside step is rarely practiced because discussion and eventual consensus lead to more buy-in by the participants.

Pairwise Comparisons of Relative Importance of Two Top-Level Objectives

Figure 14. Pairwise Comparisons of Relative Importance of Two Top-Level Objectives
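The geometric mean is the only averaging rule that preserves the reciprocity axiom: combining the judgments for A versus B and then inverting gives the same result as combining the inverted judgments for B versus A. A minimal sketch, with three hypothetical evaluators:

```python
import math

def combine_judgments(judgments):
    """Geometric mean of one pairwise-comparison cell across evaluators,
    the averaging rule consistent with AHP's reciprocity axiom."""
    return math.prod(judgments) ** (1.0 / len(judgments))

# three hypothetical executives judge objective A vs. objective B
a_vs_b = combine_judgments([5, 3, 1/3])
b_vs_a = combine_judgments([1/5, 1/3, 3])
```

An arithmetic mean would break this property: the average of 5, 3, and 1/3 is not the reciprocal of the average of 1/5, 1/3, and 3.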

Priorities derived from the complete set of combined judgments for the objectives in a cluster are then calculated with standard AHP mathematics (using the “normalized principal right eigenvector”). Figure 15, below, shows the calculated priorities for the top-level cluster.

Derived Priorities for Top-Level Corporate Objectives

Figure 15. Derived Priorities for Top-Level Corporate Objectives

The local and global priorities for the organization's objectives are shown in Figure 16. The derived priorities within any cluster (such as the top-level cluster shown in Figure 15) are called local priorities, which always sum to one, and distribute the total cluster priority among its elements. Furthermore, because all these priorities are ratio level measures, any element may then be further subdivided into smaller elements that conserve the parent element's total priority, and their fraction of the global priority is easily calculated through simple multiplication.

For example, “Leveraging Knowledge” represents a global priority of 0.278 of the total priority with respect to “Measuring Project Portfolio Performance.” We can further divide this fraction of the total priority into three sub-elements that have priorities 0.324, 0.276, and 0.400 with respect to “Leveraging Knowledge,” i.e., these new sub-elements together must retain the total priority of their parent and so sum to 1. If, however, we want to understand these sub-elements as a fraction of the whole (i.e., as grandchildren of “Measuring Project Portfolio Performance”), we simply multiply their sub-element proportions by the global priority of their parent, “Leveraging Knowledge,” to obtain global proportions of 0.324 × 0.278 = 0.090, 0.276 × 0.278 = 0.077, and 0.400 × 0.278 = 0.111. We stress again that this calculation is possible only because all these numbers represent ratio scale measures.

Prioritized Corporate Objectives

Figure 16. Prioritized Corporate Objectives
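The local-to-global conversion is a single multiplication, and conservation of the parent's priority follows automatically. The sketch below uses the numbers from the “Leveraging Knowledge” example in the text:

```python
# local priorities of the three sub-elements of "Leveraging Knowledge"
local = [0.324, 0.276, 0.400]
parent_global = 0.278          # global priority of "Leveraging Knowledge"

global_priorities = [p * parent_global for p in local]
```

The local priorities sum to 1, so the global priorities necessarily sum back to the parent's 0.278; no priority is created or lost by subdividing an element.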

Project Alignment

Projects are usually designed with multiple objectives. The organizational benefit of a successful project is determined not just by how much it contributes to these objectives but also by their importance to the organization.

The lowest-level elements of the organization's objectives hierarchy (see, for example, Figure 16) are called “covering objectives,” and a project's anticipated benefit can be evaluated by adding its contributions to these covering objectives. These, in turn, are the product of the covering objectives’ relative importance and the relative contribution of the project. The relative contribution of a project to a covering objective can be evaluated using either pairwise comparisons or a ratings scale of intensities that possess ratio level priorities, similar to that used in evaluating a project's performance (see Figure 11). Figure 17 shows that our example AS/400 Replacements project has a very good (0.722) anticipated contribution toward the Leveraging Proven Technology objective.

Rating the Anticipated Benefit of AS/400 Replacements to Leveraging Proven Technology

Figure 17. Rating the Anticipated Benefit of AS/400 Replacements to Leveraging Proven Technology


With ratio scale measures of the anticipated project benefits toward the organization's objectives and ratio scale performance measures of each of the projects in the organization's portfolio, we can roll up the individual project performances to determine the performance toward meeting the higher-level organizational objectives and thus obtain a single integrated measure of the performance of the entire project portfolio. As can be seen in Figure 18 below, the organization's portfolio of projects in this example performs at 71.77%, or “good.” The “good” rating, colored green, is an ordinal measure and is useful in visually inspecting displays such as that depicted in Figure 18. The ratio scale measure of 71.77% is more meaningful than the color because the ratio represents a proportion (of an ideal performance of 100%) and is not subject to arbitrary settings or changes in the wording or ranges of the performance legend in the display. For example, in Figure 18, lowering the upper limit of the good range from .75 to .70 would change the description to “very good” and the color to blue.
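The distinction between the ratio measure and its ordinal label can be made concrete. The thresholds below are hypothetical, not those of the actual dashboard legend:

```python
# hypothetical legend: each ordinal label applies at or above its threshold
LEGEND = [(0.90, "excellent"), (0.75, "very good"), (0.60, "good"),
          (0.40, "moderate"), (0.00, "poor")]

def label(score: float) -> str:
    """Map a ratio-scale performance score onto an ordinal legend."""
    for threshold, name in LEGEND:
        if score >= threshold:
            return name
    return "poor"
```

Under these thresholds, `label(0.7177)` returns “good”; lowering the “very good” threshold from 0.75 to 0.70 would relabel the very same score. The ratio measure is unaffected by such legend changes, which is why it, and not the color, carries the meaning.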

We stress that the rolling-up calculations are mathematically meaningful only because we have the necessary ratio scale measures. For example, the performance of Vendor Partner Access in Figure 18 is the weighted sum of the AS/400 Replacement Project's 51.8% performance and the Cisco Routers Project's 95.17% performance: (0.518 × 16.15) + (0.9517 × 83.85) = 88.17. The weights, 16.15% and 83.85%, are the relative anticipated benefits of these two projects’ contributions to Vendor Partner Access. In general, many projects may contribute to such an objective.
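The two-line computation below reproduces this weighted sum with the example's own numbers:

```python
# performance and weight (relative anticipated benefit, in percent)
# for the two projects contributing to Vendor Partner Access
projects = [(0.518, 16.15),    # AS/400 Replacement
            (0.9517, 83.85)]   # Cisco Routers

vendor_partner_access = sum(perf * weight for perf, weight in projects)
```

The result, about 88.17, matches the dashboard value; with more contributing projects the sum simply gains terms, one per project.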

Dashboard Expanded to Show Projects Contributing to Leverage Knowledge > Vendor Partner Access

Figure 18. Dashboard Expanded to Show Projects Contributing to Leverage Knowledge > Vendor Partner Access

A different dashboard view, expanded to Minimize Risks > Leveraging Proven Technology, is shown below. The AS/400 Project's performance is the same as in the Vendor Partner Access view, but its relative contribution toward the Leveraging Proven Technology objective is 48.45%, compared with only 16.15% toward the Vendor Partner Access objective.

Dashboard Expanded to Show Projects Contributing to > Minimize Risks > Leveraging Proven Technology

Figure 18. Dashboard Expanded to Show Projects Contributing to > Minimize Risks > Leveraging Proven Technology


Whereas a picture of the performance toward the organization's entire hierarchy of objectives would require many dashboard views like those above, Figure 19's “wallboard” view, which can be posted on the wall of a room, depicts the performance of the portfolio of projects toward the entire hierarchy of objectives.

Wallboard Showing Objectives/Sub-Objectives and All Projects

Figure 19. Wallboard Showing Objectives/Sub-Objectives and All Projects

Leadership through Management Action at the Portfolio Level

In a business world where change is constant, projects, which by definition effect change, represent the major tactical mechanisms for implementing an organization's strategy. Projects exist at all levels in an organization, and at all levels project goals should connect with organizational goals. Thus, just as individual project plans need to be integrated to make the best use of organizational resources, an ensemble of projects in a portfolio should be viewed as an integrated unit that contributes to advancing organizational strategy. When possible, improvements in that unit occur by acting on its components, i.e., the individual projects. Leadership is required to move away from established plans when such change is needed.

The combination of project performance measures and organizational priorities, as described above, determines the actions to be contemplated. Often, resources need to be re-allocated. Modern project management, when properly performed, allows tradeoffs among schedule, budget, and scope while maintaining an integrated project, i.e., a project whose schedule, budget, and scope remain mutually consistent. For example, with careful, iterative planning, schedules can be lengthened or shortened (“crashed,” if necessary) by temporarily removing or adding resources. This re-allocation will affect a project's short-term budget, but it need not change the total project budget. The priorities of the organization determine the proper mix of short-term and long-term goals, and these priorities change, too. Sometimes, instead of re-allocating resources, a project should simply be terminated.

Terminating a project before its planned completion may seem an extreme or punitive action, but if a project's performance measure indicates poor achievement and the project is not contributing to important organizational objectives, its resources can best be used elsewhere. The sunk costs of a project should not be allowed to affect an organization's future. The proper questions to ask are: what costs are necessary to complete the project, what are the currently anticipated benefits, and what other project opportunities are competing for scarce resources? The original plans serve only as mileposts against which to measure progress. An organization's strategic direction may change in response to competition, and new opportunities continually emerge. Decisively terminating a project requires courage, but too often we see projects allowed to continue until they die by attrition or other means. A decision-maker's job is to use available (or to-be-available) resources in a way that maximizes attainment of the organization's objectives. This maximization can and should include the qualitative ‘morale’ costs of terminating projects that are meeting their goals but are no longer competitive for the organization.

The mechanism that we have described in this paper allows decision makers to measure their project portfolios against the objectives to which they themselves have given priority.


Conclusions

The AHP can be successfully used to integrate project measures from EVM and other sources, synthesizing a cohesive view of performance and enhancing the likelihood of success in leading project portfolio management. The approach presented in this paper can first improve the process of selecting projects for an organization's portfolio. It then brings consistency and rationality to the efforts subsequently required: the continual refinement of the projects, their effective prioritization, adjustments of resource allocations among them, and the possible termination of projects that no longer carry sufficient weight toward the organization's goals (some of which may have changed since the original project concept). In a business world that depends more and more on successful project implementation for tactical and ultimately strategic achievement, these important management and leadership efforts are vital to the future of any organization.

Appendix I – The Earned Value Management Method

The EVM method integrates three critical elements in the management of projects: scope, cost, and time. It requires the periodic monitoring of actual expenditures and scope accomplishments, and it supports the evaluation of project performance against schedule and budget. It allows the calculation of cost and schedule differences from the plan, and one may also calculate performance indices that permit linear forecasting of total project cost and duration at completion. EVM can provide early indications of expected project results based on project performance, thus highlighting a possible need for corrective action. Therefore, EVM allows the project manager and project team to adjust project strategy based on cost and schedule requirements, actual project performance and trends, as well as project objectives and the environment within which the project is being conducted. An organization may elect to apply EVM uniformly to all of its projects, or only to projects exceeding its own thresholds for cost and detailed schedule reporting and control.

Key Components of EVM

EVM uses cost as the common measure of project performance:

  • Planned Value (PV) (Anbari, 2003; PMI, 2008 and 2005) is the time-phased budget baseline. It is the scheduled budget for accomplishing the activity, work package, or other project work. It is also called the Budgeted Cost of Work Scheduled (BCWS) (Anbari, 2001; Kerzner, 2009) or the scheduled cost (Cs) (Cioffi, 2006a).
  • The Budget at Completion (BAC) is the total budget baseline for the project, occurring as the last point on the cumulative PV curve, and so holding the highest PV. (Cs1 in Cioffi, 2006a.)
  • Actual Cost (AC) (Anbari, 2003; PMI, 2008 and 2005) represents the actual funds spent to accomplish project work to earn the associated value for either an individual task or the project as a whole; if the latter, it is the cumulative cost for all completed work. This is also known as the Actual Cost of Work Performed (ACWP) (Anbari, 2001; Kerzner, 2009) or just the actual cost, Ca (Cioffi, 2006a).
  • Earned Value (EV) (Anbari, 2003; PMI, 2008 and 2005) is the budgeted cost of the work performed for either an individual task or the project as a whole; if the latter, it is the cumulative value for all the work completed. This is also called the Budgeted Cost of Work Performed (BCWP) (Anbari, 2001; Kerzner, 2009) or budgeted cost (Cb) (Cioffi, 2006a). To obtain the EV for an item, its budget can be multiplied by the proportion of the work that has been completed. (Various schemes are used to estimate this fraction.)

Table 1 shows the Work Breakdown Structure (WBS) of a project with a total budget of BAC = 100 and an earned value of EV = 20 + 20 = 40. This translates project accomplishments from physical units of measure (such as cubic yards of concrete) to financial measures.

Table 1: WBS, Budget, % Complete, and Earned Value ($000)

                       Budget   % Complete   Earned Value

Phase 1
  Work Package 1.1        20          100             20
  Work Package 1.2        40           50             20
Phase 2
  Work Package 2.1
  Work Package 2.2

Total                    100                          40
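The translation in Table 1 from percent complete to earned value can be sketched as follows, a minimal illustration in Python. The Phase 2 work packages have not yet started, so they earn no value, and their individual budgets are not needed for the EV total.

```python
# Earned value of each work package: budget times fraction complete
# (all figures in $000, per Table 1).
work_packages = [
    ("Work Package 1.1", 20, 1.00),  # (name, budget, fraction complete)
    ("Work Package 1.2", 40, 0.50),
]

EV = sum(budget * done for _name, budget, done in work_packages)
# EV = 20 * 1.00 + 40 * 0.50 = 40, matching the table's total
```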

Figure 1: Planned Value, Actual Cost, Earned Value, and Variances

Figure 1 (adapted from Anbari, 2003) illustrates a project in which the budget at completion is $100, and as of the project status date, the planned value is $50, the actual cost is $60, and the earned value is $40.

Alternatives to using the percent complete to determine physical accomplishment include fixed-formula rules such as the 50/50 rule: to calculate the planned value, 50% of an item's budget is recorded when the item's work is scheduled to start, and the remaining 50% when the item's work is scheduled to be completed; to calculate the item's earned value, 50% of its budget is recorded when the work actually starts, and the remaining 50% when the work is actually completed. Similarly, a 0/100, 10/90, or 20/80 rule might be used.
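All of these fixed-formula rules share one form, sketched below in Python. The function name and parameter names are illustrative; `start_share` is the fraction credited at the start, e.g., 0.5 for the 50/50 rule or 0.2 for the 20/80 rule.

```python
def accrued_value(budget, start_share, started, completed):
    """Value credited to an item under an X/Y fixed-formula rule.

    start_share of the budget is credited when work starts; the
    remainder only when the work is complete. Applied to scheduled
    start/completion dates this yields planned value; applied to
    actual dates it yields earned value.
    """
    if completed:
        return budget
    if started:
        return budget * start_share
    return 0.0

# 50/50 rule, for an item budgeted at 40 that has started but not finished:
started_value = accrued_value(40, 0.5, started=True, completed=False)  # 20.0
# 0/100 rule credits nothing until the work is complete:
strict_value = accrued_value(40, 0.0, started=True, completed=False)   # 0.0
```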

Performance Measurement

Cost performance is determined by comparing earned value to actual cost, and schedule performance is determined by comparing earned value to planned value.


The Cost Variance (CV) measures the difference between the budgeted and actual cost of the work performed: CV = EV - AC. For the above project: CV = 40 - 60 = -20. The Schedule Variance (SV) measures the difference between the budgeted cost of the work performed and the budgeted cost of the work scheduled: SV = EV - PV. For the above project: SV = 40 - 50 = -10. Project variances are based on cumulative data: a value of zero indicates on-target performance, a positive value indicates performance better than planned, and a negative value indicates performance worse than planned. The Cost Variance Percentage (CV%) is CV% = CV / EV, and the Schedule Variance Percentage (SV%) is SV% = SV / PV. For the above project: CV% = -20 / 40 = -50%, and SV% = -10 / 50 = -20%.
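The variance arithmetic for the example project can be checked directly; a short Python sketch, with figures in $000:

```python
# Example project from Figure 1: EV = 40, AC = 60, PV = 50.
EV, AC, PV = 40, 60, 50

CV = EV - AC        # Cost Variance:     40 - 60 = -20 (over cost)
SV = EV - PV        # Schedule Variance: 40 - 50 = -10 (behind schedule)

CV_pct = CV / EV    # -20 / 40 = -50%
SV_pct = SV / PV    # -10 / 50 = -20%
```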

Performance Indices

The Cost Performance Index (CPI) is a measure of the conformance of actual cost to the budget of the work performed: CPI = EV / AC. For the above project: CPI = 40 / 60 = 0.67. The Schedule Performance Index (SPI) is a measure of the conformance of actual progress to the schedule: SPI = EV / PV. For the above project: SPI = 40 / 50 = 0.80. (Cioffi (2006b) inverted the ratios and termed them factors.) Performance indices are based on cumulative data, where a ratio that equals 1 indicates on-target performance, greater than 1 indicates good performance, and less than 1 indicates poor performance. These indices can be combined to provide a single indicator of the project's scope, schedule, and budget performance (Anbari, 2001; Cioffi, 2006a; Lewis, 2001; Meredith & Mantel, 2009).
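The corresponding indices for the example project, again as a Python sketch. The product CPI × SPI, one simple way of combining the two into a single indicator, is included only as an illustration; the cited references discuss other combinations.

```python
# Example project from Figure 1: EV = 40, AC = 60, PV = 50.
EV, AC, PV = 40, 60, 50

CPI = EV / AC    # 40 / 60 ≈ 0.67: earning $0.67 of value per $1 spent
SPI = EV / PV    # 40 / 50 = 0.80: progressing at 80% of the planned rate

# One simple combined indicator is the product of the two indices:
CSI = CPI * SPI  # ≈ 0.53
```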


EVM is useful in forecasting the cost and time of the project at completion, based on actual performance up to any given point in the project. Estimates at completion may differ based on the assumptions made about future performance. The assumption generally associated with EVM is that past performance is a good predictor of future performance, and most efficiencies or inefficiencies observed to date will prevail to completion. Using this assumption, the cost estimate at completion is the sum of the cumulative actual cost plus the original budget for the remaining work modified by a performance factor, usually the cumulative cost performance index. For the above project, the estimate at completion is EAC = 60 + (100 - 40) / (40 / 60) = 150. Similarly, the time estimate at completion (TEAC) can be calculated based on the baseline schedule at completion and actual performance (Anbari, 2003). Other assumptions can be made about future performance and can result in significantly different estimates at completion (Anbari, 2003). The Variance at Completion (VAC) measures the difference between the Estimate at Completion (EAC) and the Budget at Completion: VAC = BAC - EAC. For the above project: VAC = 100 - 150 = -50.
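Under that assumption, the forecast for the example project works out as follows; a Python sketch, figures in $000:

```python
# Forecast at completion assuming the cumulative CPI persists:
# EAC = AC + (BAC - EV) / CPI.
BAC, EV, AC = 100, 40, 60

CPI = EV / AC                  # 40 / 60, i.e., 2/3
EAC = AC + (BAC - EV) / CPI    # 60 + 60 / (2/3) = 60 + 90 = 150
VAC = BAC - EAC                # 100 - 150 = -50
```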

The forecasts discussed above are generally linear projections of performance based on data up to the project's status date. They can vary greatly in the early stages of the project and tend to stabilize as the project progresses. EVM provides project managers and organizational leadership with early warning signals that can allow them to take timely actions in response to indicators of poor performance, thereby enhancing opportunities for project success. Such indicators have been found to be reliable as early as 15% into a project. Better planning and resource allocation associated with the early periods of a project might be the cause of this reliability (Fleming & Koppelman, 2005). Cioffi (2006b) shows how to calculate the performance improvement required for the remaining work to achieve the objectives of a project that is not meeting its schedule or budget targets. He suggests a calculation for the point beyond which recovery is highly unlikely or impossible. Mathematical forecasts of cost and schedule should be reviewed with work package managers, project leaders, program directors, and functional managers, to give them an opportunity to provide their subjective forecasts. Both mathematical and subjective forecasts could be included in management reports, to encourage managers to consider appropriate actions, and help avoid arguments over the numbers during portfolio review meetings.

For the good of the organization, project and program forecasts may well be self-defeating prophecies! Large deviations usually attract the attention of the leadership of the organization and result in corrective action. Small deviations are usually left alone. By highlighting such deviations, EVM helps focus leadership's attention on projects or work packages that need it most. As a result, EVM supports management of projects collectively and enhances the management of the enterprise's project portfolio (Anbari, 2001 and 2003), leading to the attainment of strategic goals and sustained competitive advantage.


References

Al-Harbi, K. M. Al-S. (2001). Application of the AHP in project management. International Journal of Project Management, 19 (1), 19–27.

Anbari, F. T. (2003). Earned Value Project Management Method and Extensions. Project Management Journal, 34 (4), 12–23.

Anbari, F. T. (2001). Applications and Extensions of the Earned Value Analysis Method [CD-ROM]. Proceedings of the Project Management Institute Annual Seminars & Symposium, November 1–10, 2001, Nashville, TN, USA. Newtown Square, PA: Project Management Institute.

Archer, N. P., & Ghasemzadeh, F. (1999). An integrated framework for project portfolio selection. International Journal of Project Management, 17 (4), 207–216.

Atkinson, R. (1999). Project management: cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria. International Journal of Project Management, 17 (6), 337–342.

Cioffi, D. F. (2006a). Designing project management: A scientific notation and an improved formalism for earned value calculations. International Journal of Project Management, in press.

Cioffi, D. F. (2006b). Completing projects according to plans: an earned-value improvement index. The Journal of the Operational Research Society, in press.

Cioffi, D. F., & Voetsch, J. (2006). A Single Constraint? Evidence That Budgets Drive Reported Customer Satisfaction, submitted to the European Journal of Operational Research, 10 December 2005.

Cleland, D. I. (Editor). (2004). Field Guide to Project Management, (2nd Ed.). New York: John Wiley & Sons.

Cooke-Davies, T. (2002). The “real” success factors on projects. International Journal of Project Management, 20 (3), 185–190.

de Wit, A. (1988). Measurement of project success. International Journal of Project Management, 6 (3), 164–170.

Dyer, R. F., & Forman, E. H. (1992). Group Decision Support with the Analytic Hierarchy Process. Decision Support Systems, 8(2), 99–124.

Fleming, Q. W., & Koppelman, J. M. (2005). Earned Value Project Management, (3rd Ed.). Newtown Square, PA: Project Management Institute.

Forman, E. H., & Gass, S. I. (2001). The Analytic Hierarchy Process - An exposition. Operations Research, 49 (4), 469–486.

Jugdev, K., & Müller, R. (2005). Project success: A retrospective look at our evolving understanding of the concept. Project Management Journal, in press.

Kerzner, H. (2009). Project Management: A Systems Approach to Planning, Scheduling, and Controlling, (10th Ed.). New York: John Wiley & Sons.

Kim, E. H., Wells, Jr., W. G., & Duffey, M. R. (2003). A model for effective implementation of Earned Value Management methodology. International Journal of Project Management, 21 (5), 375–382.

Lewis, J. P. (2001). Project Planning, Scheduling, & Control: A Hands-On Guide to Bringing Projects In On Time and On Budget, (3rd Ed.). New York: McGraw-Hill.

Lim, C. S., & Mohamed, M. Z. (1999). Criteria of project success: an exploratory re-examination. International Journal of Project Management, 17 (4), 243–248.

Mahdi, I. M., & Alreshaid, K. (2005). Decision support system for selecting the proper project delivery method using analytical hierarchy process (AHP). International Journal of Project Management, 23 (7), 564–572.

Meredith, J. R., & Mantel, Jr., S. J. (2009). Project Management: A Managerial Approach, (9th Ed.). New York: John Wiley & Sons.

Mirkin, B. G., & Fishburn, P. C. (1979). Group Choice. Washington, DC: V. H. Winston; Halsted Press.

Project Management Institute. (2008). A Guide to the Project Management Body of Knowledge (PMBOK® Guide), (4th Ed.). Newtown Square, PA: Project Management Institute.

Project Management Institute. (2005). Practice Standard for Earned Value Management. Newtown Square, PA: Project Management Institute.

Saaty, T. L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.

Saaty, T. L. (1994). How to make a decision: The analytic hierarchy process. Interfaces, 24 (6), 19–43.

Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680.

Wateridge, J. (1995). IT projects: a basis for success. International Journal of Project Management, 13 (3), 169–172.

Wateridge, J. (1998). How can IS/IT projects be measured for success? International Journal of Project Management, 16 (1), 59–63.

© 2010 Project Management Institute