Selecting and Prioritizing Projects with the MESA® (Matrix for the Evaluation of Strategic Alternatives)
Whereas early project management was not overly concerned with resource availability, in recent years the need to focus available resources on key, or significant, projects has grown to the point where it is difficult to undertake a project without sound justification. Among projects competing for the same resources, those that get the highest financial rating, are best marketed, or have the most powerful sponsor will win the required resources and be implemented. Buckley (1998), as well as Amram and Kulatilaka (1999), argue that decisions about strategic projects (or programs) are based on strategic considerations that often have little to do with tangible benefits. My own research (Thiry, 2002) has demonstrated that organizations’ measures of success are seldom linked to tangible benefits that correspond to the initial requirements. Traditional project evaluation or prioritization techniques are almost solely based on accounting and financial measures (Dyson & Berry, 1998). According to Barwise et al. (1989) and Brewer et al. (1999), financial-based tools are ‘myopic’ instruments because they undervalue overall strategic benefits against pure financial measures; they usually focus on strategic convergence (efficiency) rather than promote strategic advantage (effectiveness).
As organizations develop program and portfolio management techniques, the need to prioritize resources across the organization is becoming increasingly urgent. Linking the expenditure of resources to the fulfillment of needs and expectations is essential to creating medium- and long-term satisfaction.
Amram and Kulatilaka (1999) argue that managers do not fully understand selection tools and that their use is often inappropriate. Managers tend to use simple tools such as NPV, ROI, or others as a required ‘organizational ritual’ (Slater et al., 1998), complement them by applying their judgment and intuition (Ward & Grundy, 1996; Buckley, 1998), and then make decisions based upon perceived strategic output and personal charisma (Amram & Kulatilaka, 1999). Strategic projects (or programs) are therefore often managed unsystematically (Buckley, 1998). Such situations cannot be sustained in the long term in light of the increasingly pressing requirement to prioritize limited resources.
A consistent prioritization model based on the satisfaction of needs and the wise use of resources has to be developed. It must take into consideration more than financial feasibility, and consider a systems perspective of organizational effectiveness and competitive advantage, as well as achievability.
Developing a Selection Matrix
Crawford, Hobbs and Turner (2002) state that project classification systems are typically developed to provide guidance on the adoption of appropriate management systems, the selection of project personnel, or the choice of project organization. It is therefore important that the classification or prioritization reflects the objective of the process. A program/portfolio selection matrix needs to classify projects based on their ability to deliver benefits to stakeholders and the effort required to achieve those benefits. The selection matrix presented in this paper was developed specifically for programs – a coherent group of interdependent projects (Aarto, 2001) – although it can also be used for project portfolios – projects that are not necessarily interdependent (ibid.).
The technique used for the selection matrix presented in this paper is based on the classification principle used in risk management for prioritizing risks. Whereas risks are assessed against their probability of occurrence and their impact on project objectives, potential projects/actions will be assessed on their achievability – the equivalent of probability of occurrence – and their contribution to stated benefits – the equivalent of impact on the organization.
Quantitative (financial-based) scoring
There is a wide range of quantitative financial tools for evaluating projects. Dyson and Berry (1998) identify net income, return on investment (ROI), payback period, internal rate of return (IRR), and net present value (NPV). Brealey and Myers (1996) add sensitivity analysis, cost-benefit analysis and others. Barwise et al. (1989) and Buckley and Tse (1996) argue that, although these tools are familiar to all company executives, they are not sufficient in the current business context.
As seen above, managers have a tendency to use simple financial techniques to prioritize projects and other actions in a portfolio or program context. Simple quantitative tools do not take into account the contextual change, which breeds uncertainty (Hertz & Thomas, 1983; Dixit & Pindyck, 1994) and requires flexibility and adaptability (Barwise et al., 1989; Kester, 1993; Bontis et al., 1999). It is also well known that one project can be favored over another simply because of the technique used, and that each technique will supply a perfectly defensible justification, although based on different arguments.
Recently, a few more elaborate and systemic techniques have become quite successful. Stern Stewart & Company introduced ‘Economic Value Added©’ (EVA®) in the late 1980s (Stewart, 1994); this technique aims to go beyond short-term financial performance measures, but achieves best results when integrated into a complete management framework (Pigott, 2000). Kaplan and Norton (1992, 2000) introduced the ‘Balanced Scorecard®’ as a technique for developing corporate strategies and their measures of performance beyond simple financial measures. In Europe, the EFQM (European Foundation for Quality Management) Business Excellence Model© aims for the same goals and sets organizational measures of success as a wide-ranging concept.
Although these more recent techniques provide quantitative measurement data, they rely on well-defined qualitative criteria that represent the objectives of an organization before any attempt is made at quantification. Their emergence demonstrates the need for more wide-ranging evaluation techniques that set qualitative criteria before aiming to measure quantitatively.
Porter (1997) and McGrath (1999) both claim that managers are usually hesitant about setting measurable objectives because they fear judgment. Sadly, managers are usually judged, and rewarded or punished, on short-term results, while a project's actual benefits can only be measured effectively in the medium or long term (Brewer et al., 1999). Cooke-Davies (2002) has clearly linked project benefits to operations and stated that the successful delivery of project outputs is not sufficient to measure success. Benefits are a key element of project success, and therefore of project significance and priority, and benefits delivery goes beyond simple project management. Concentrating on short-term financial evaluation techniques will marginalize the “real” measures of benefits.
Dyson and Foster (1980), as well as others (Guba & Lincoln, 1989; Thiry, 2002), have observed that not all decisions or evaluations can be made based on quantitative data; some data can only be described qualitatively. When PMI revised Chapter 11 of the PMBOK® Guide (PMI, 2000), it switched the emphasis from quantitative analysis to qualitative analysis, the latter being deemed essential while the former becomes subject to resource availability. In any case, it is not possible to make a good quantitative evaluation if it is not supported by a sound qualitative analysis. In summary, qualitative analysis comes first and must clearly identify actual expected benefits and assess resource requirements; then, time and resources permitting, quantitative analysis follows, based on these qualitative factors.
Contribution to benefits / achievability
Following more than 25 years of practice in project and value management, as well as over 10 years’ involvement in program management practice and research, the author has developed a series of measures for the prioritization of projects and other actions that form programs and portfolios. These qualitative measures are based on a value ratio, which translates as: satisfaction of needs / resources used to achieve that satisfaction.
In value management, needs are usually expressed as expected benefits; once agreed among all the key stakeholders, they become the Critical Success Factors (CSFs), which are a qualitative representation of needs and expectations. Resources required to achieve the expected benefits should be assessed against availability; this is the demand/supply equation. In addition to strict resource availability, achievability takes into account a number of other factors that are detailed below; they include financial factors, parameters/constraints, human resources & people factors, and complexity.
Contribution to benefits
The stakeholders’ needs and expectations should be the foundation of the stated expected benefits of a program/portfolio of projects. The author uses a well-proven value management method to identify, define, classify, and agree on expected benefits. This method is called function analysis and is well described by many authors (Kaufman, 1997; Thiry, 1997; Thiry, 2003). It enables a group of stakeholders to identify their needs (expected benefits) in clear terms and classify them by order of importance, to agree on the CSFs that will establish success.
Critical success factors
Rockart (1979) first introduced the term Critical Success Factor (CSF); he divided CSFs into generic industry-based factors and firm-specific factors. In the MESA© process, we aim to define program-specific CSFs. Clarke (1999, p. 139) states that: “if attention is paid to sets of critical elements and their interactions, success is more likely”. CSFs enable the decision-makers to focus on the few factors that will ensure a program's success. In line with Pareto's concept, CSFs are weighted, typically using paired comparison, so that those that will contribute most to success can be identified.
Establishing the CSFs is a key element of the MESA© process, as they will be the measure of the contribution to the overall benefits. A few points are important in establishing CSFs:
- The CSFs have to be program-specific to reflect the needs (expected benefits) of the program's stakeholders;
- They must be derived from the needs established by the stakeholders;
- They must cover the whole range of needs, not be focused on only a few (weighting will prioritize them);
- They should not be too numerous (5-8 is ideal), yet must be detailed enough to enable the establishment of measures.
These CSFs are the basis for the evaluation of the contribution of options to the overall benefits. Their weighting translates the key stakeholders’ perception of their relative importance.
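The paired-comparison weighting described above can be sketched as follows. The CSFs, the stakeholder ranking, and the base point given to each factor (so that no CSF ends up weighted zero) are illustrative assumptions, not part of the MESA© specification:

```python
from itertools import combinations

def paired_comparison_weights(csfs, prefer):
    """Weight CSFs by paired comparison: each CSF starts with one base
    point (an assumption, to avoid zero weights) and earns a point for
    every pairwise contest it wins; points are normalized to sum to 100."""
    points = {c: 1 for c in csfs}
    for a, b in combinations(csfs, 2):
        points[prefer(a, b)] += 1
    total = sum(points.values())
    return {c: round(100 * p / total, 1) for c, p in points.items()}

# Hypothetical CSFs and a stakeholder preference expressed as a ranking
csfs = ["customer retention", "time to market", "regulatory compliance"]
rank = {"regulatory compliance": 3, "customer retention": 2, "time to market": 1}
weights = paired_comparison_weights(csfs, lambda a, b: a if rank[a] > rank[b] else b)
# weights: {'customer retention': 33.3, 'time to market': 16.7, 'regulatory compliance': 50.0}
```

In practice each pairwise preference would come from the stakeholder group rather than from a single ranking; the normalization simply satisfies the rule that the combined weight of all CSFs must be 100.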
MESA© Preparation: Benefits Assessment
The technique uses the following steps:
1. Stakeholder analysis:
Task: Identification and classification of stakeholders.
Output: List of stakeholders divided into influence (impact on program) categories.
2. Function analysis:
Task: Identification of stakeholders’ needs & expectations; classification by importance; agreement on CSFs and weighting of these (combined weight of all CSFs must be 100).
Output: Benefits breakdown structure (see Exhibit 1) and list of weighted CSFs.
3. Options Scoring (First cut):
Task: Score each option on a scale of 1 to 10 against each weighted CSF.
Output: List of scored options (maximum 1000 points).
4. Options Improvement:
Task: Perform value management (VM) and risk management (RM) on selected options.
Output: List of recommended options.
5. Options Scoring (Final cut):
Task: Score recommended options against weighted CSFs.
Output: Final prioritized list of options.
Once the needs have been identified and classified, and the CSFs agreed and weighted, a first scoring is performed to enable the team to eliminate a number of options that are not in line with the program's objectives. Following this scoring, the team performs risk and value management on all the remaining options and incorporates the improvements/risk responses into the revised options, which are then reassessed against the weighted CSFs. This provides a final list of prioritized options that will be assessed for achievability. (Exhibit 2)
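The first- and final-cut scoring reduces to a simple weighted sum; the CSFs, weights, and option scores below are hypothetical examples:

```python
def score_option(scores, weights):
    """Benefits score: sum of (CSF weight x option score on a 1-10 scale).
    Since the CSF weights total 100, the maximum possible score is 1000."""
    return sum(weights[csf] * scores[csf] for csf in weights)

# Hypothetical weighted CSFs (weights sum to 100) and two candidate options
weights = {"improve safety": 40, "reduce cost": 35, "grow market share": 25}
option_a = {"improve safety": 8, "reduce cost": 5, "grow market share": 6}
option_b = {"improve safety": 4, "reduce cost": 9, "grow market share": 7}
print(score_option(option_a, weights))  # 40*8 + 35*5 + 25*6 = 645
print(score_option(option_b, weights))  # 40*4 + 35*9 + 25*7 = 650
```

Note how option B edges ahead despite a weaker score on the heaviest-weighted CSF; the weighting makes such trade-offs explicit rather than leaving them to intuition.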
Exhibit 1: classification of benefits and agreement on CSFs
Benefits (Function) Breakdown Structure
Exhibit 2: Evaluating Options against CSFs:
Achievability
Project categorization has existed for quite a few years. Usually, as seen above, projects are categorized for investment purposes, based on their expected financial output. Recently, new types of categories have been developed to help manage projects and human resources. More specifically, categorization covers issues like size, complexity and familiarity (Crawford et al., 2002) or complexity, uncertainty and pace (Shenhar et al., 2002). This type of categorization is directly linked to the achievability of a program or portfolio, taking into account a wide view of organizational issues. Over years of practice and research, the author has defined a series of possible factors for measuring achievability; one must be careful, though, not to take them for granted or become too dependent on any classification system, for fear of losing flexibility and understanding of its elements (Crawford et al., 2002). Any classification system must be “meaningful” for its users (ibid.).
Although some authors have described feasibility as the “difficulty of realisation and implementation” (Asrilhant & Dyson, 2000), this term is traditionally associated with financial feasibility or financial cost-benefit analysis. Achievability seemed a more appropriate term to describe an organization's capability to undertake a set of projects, taking into account current or future workload. This concept is closely linked to the supply/demand ratio, and therefore in line with current developments in project and general management.
Demand vs Supply
Demand and supply are the key elements of achievability; the resources available must be equal to or greater than the resources required for a set of projects to be achieved successfully. If supply ≥ demand, achievability can vary from very high to medium; if supply < demand, achievability varies from low to very low. The factors used and their descriptions must be relevant to each organization: e.g. total budget (supply) vs. estimated cost of project (demand), expertise of organization (supply) vs. innovativeness of project (demand), etc.
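The supply/demand rule above can be illustrated as a mapping from a supply/demand ratio to an achievability band. The cut-off values below are purely hypothetical; each organization would set its own thresholds per criterion:

```python
def achievability_band(supply, demand):
    """Map a supply/demand ratio to an achievability band.
    Cut-offs are illustrative assumptions: supply >= demand yields
    medium to very high; supply < demand yields low or very low."""
    ratio = supply / demand
    if ratio >= 1.5:
        return "very high"
    if ratio >= 1.2:
        return "high"
    if ratio >= 1.0:
        return "medium"
    if ratio >= 0.8:
        return "low"
    return "very low"

print(achievability_band(supply=120, demand=100))  # high: supply exceeds demand by 20%
print(achievability_band(supply=90, demand=100))   # low: demand outstrips supply
```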
Based on his experience and that of other writers, specifically in the domain of organizational effectiveness, the author has divided achievability into four major areas:
- Financial Factors: the traditional measure of feasibility.
- Parameters and Constraints: the criteria related to pace, uncertainty and size.
- Human Resources and People Factors: all the factors related to communication, competence and familiarity.
- Complexity: the difficulty of “doing the right thing” and delivering actual benefits; also related to familiarity.
These factors are detailed for each organization prior to evaluation and used in a grid. Measurable descriptors are defined to indicate levels of achievability and are set on a non-linear cardinal scale (PMI, 2000, p. 135). They are all detailed below.
Financial Factors
These factors are all related to the difficulty of achieving projects with regard to the overall availability of funds. They include: total estimated capital cost; impact on company cash flow; source of funding; delay in expected return/benefits; and life-cycle cost, if the organization is committed to the operation of the deliverable.
Parameters and Constraints
This set of factors includes all the criteria imposed by the client, the organization's structure, or the project itself; they are: number of members in the team; level of familiarity with the contract type; geographical spread of work; as well as acceptability of schedule and budget.
Human Resources and People Factors
The quality of human resources is crucial to the achievability of a project within a program; the factors listed here all relate to the capability of resources to deliver the product. They are: spread of resources; familiarity with resources; other critical work being undertaken by the organization at the same time; customer perception of the resources allocated; and staff expertise.
Complexity
The last area concerns the degree of difficulty of achieving the project, based on its complexity. Complexity factors include: familiarity with the type of work / innovativeness; interdependency of deliverables; number of stakeholders; stakeholder spread; clarity of objectives, benefits & CSFs; and clarity of the scope statement.
MESA© Preparation: Achievability Assessment
The technique uses the following steps:
1. Identification of achievability criteria:
Task: Agree on the list of criteria that define achievability for the organization, and on descriptors from very low to very high achievability.
Output: Achievability assessment grid (maximum achievability score must be 10).
2. Weighting of achievability criteria:
Task: Agree on the proportional value of each criterion with regard to the organization's capabilities and structures.
Output: Weighted criteria (total weight must equal 100).
3. Scoring of options:
Task: Identify the right descriptor for each option against each criterion and calculate the final achievability score (maximum score must equal 1000).
Output: List of scored options.
The achievability criteria are detailed and weighted for each program/organization prior to evaluation. Measurable descriptors are defined to indicate levels of achievability and are set on a non-linear cardinal scale (PMI, 2000, p. 135). The author uses five weighting descriptors and a scale of 0.625/1.25/2.5/5.0/10.0.
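The three steps above can be sketched as follows. The four criteria and their weights are illustrative assumptions; the 0.625/1.25/2.5/5.0/10.0 descriptor scale is the one quoted above:

```python
# The non-linear cardinal scale quoted above for the five descriptors
DESCRIPTOR_SCALE = {"very low": 0.625, "low": 1.25, "medium": 2.5,
                    "high": 5.0, "very high": 10.0}

def achievability_score(levels, weights):
    """Weighted achievability: each criterion's descriptor maps onto the
    scale above; with weights totalling 100, the maximum score is 1000."""
    return sum(weights[c] * DESCRIPTOR_SCALE[levels[c]] for c in weights)

# Hypothetical criteria weights (sum to 100) and one option's descriptors
weights = {"financial factors": 30, "parameters & constraints": 25,
           "human resources & people": 25, "complexity": 20}
levels = {"financial factors": "high", "parameters & constraints": "medium",
          "human resources & people": "very high", "complexity": "low"}
print(achievability_score(levels, weights))  # 150 + 62.5 + 250 + 25 = 487.5
```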
Combined Benefits/Achievability Scoring
Once project options have been scored against benefits contribution and achievability, the program manager or portfolio/resource manager calculates the combined benefits/achievability score in order to decide which options to implement, or to create a priority list for the allocation of financial and human resources. The matrix is designed much in the same way as an inverted risk assessment matrix. Because contribution to benefits is considered more important than achievability (e.g. an option with a high contribution to benefits and medium achievability will be favored over a highly achievable option with a medium contribution to benefits), the scoring factors for benefits grow exponentially whereas the scoring factors for achievability grow linearly. (Exhibit 3)
Exhibit 3: Matrix for the Evaluation of Strategic Options (MESA©)
All options with a combined score over 0.15 are implemented if the budget allows; all options with a score between 0.05 and 0.15 are implemented if they are synergistic with a high-priority option or have a high marketing value; otherwise they are reexamined to see whether they can be further improved to reach the upper tier of the program. All options that score below 0.05 are rejected because they either do not contribute enough to benefits or are too difficult to implement.
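Since Exhibit 3 is not reproduced here, the scoring factors below are assumptions chosen only to match the description above: the benefits factors double from band to band (exponential) while the achievability factors rise in equal steps (linear), and the 0.05/0.15 thresholds are applied to their product, as in a probability-times-impact risk matrix:

```python
# Assumed scoring factors: exponential growth for benefits bands, linear
# growth for achievability bands (the author's actual factors are in
# Exhibit 3 and may differ).
BENEFIT_FACTORS = {"very low": 0.0625, "low": 0.125, "medium": 0.25,
                   "high": 0.5, "very high": 1.0}
ACHIEVABILITY_FACTORS = {"very low": 0.2, "low": 0.4, "medium": 0.6,
                         "high": 0.8, "very high": 1.0}

def classify(benefits_band, achievability_band):
    """Combine the two factors and apply the 0.05/0.15 decision thresholds."""
    combined = BENEFIT_FACTORS[benefits_band] * ACHIEVABILITY_FACTORS[achievability_band]
    if combined > 0.15:
        return combined, "implement (budget permitting)"
    if combined >= 0.05:
        return combined, "implement if synergistic or high marketing value, else improve"
    return combined, "reject"

# High benefits + medium achievability (0.30) outranks the reverse
# combination (0.20), reflecting the priority given to benefits.
print(classify("high", "medium"))
print(classify("medium", "high"))
print(classify("very low", "low"))  # 0.025 -> reject
```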
Conclusion
MESA© is a decision-making tool that provides guidance for the prioritization or reprioritization of projects that are part of a program or portfolio. It is based on both the project's contribution to the program's success and its achievability within the current workload and available resources. If the methodology described in this paper is thoroughly followed, it generates consistent decision-making, based on stakeholder needs and expectations and on the organization's capabilities. When finalizing the selection process, decision-makers should always take a wide view of organizational issues that includes the whole program, interdependencies between actions, interfaces with other programs, and the organizational context, including competitiveness; specifically, factors like project synergy, marketing value, acceptability, etc.
The MESA© tool reflects the strategic objectives of each organization at the time of the decision. Critical success factors must be directly linked to expected benefits that reflect the stakeholders’ needs and expectations for that program/portfolio/strategy. The author has defined a series of possible factors for measuring achievability; each program/organization must define its own achievability factors, based on an understanding of its own context. The MESA© has to be “meaningful” to its stakeholders.
It is also crucial to involve senior management in the process; this will reduce the probability of high-level decisions overturning investment recommendations. If the process is shared and agreed with the key stakeholders beforehand, the MESA© offers a sound, objective and robust framework that prevents undue political or power-based influence and stabilizes the program or portfolio prioritization and change process.
References
Amram, M., & Kulatilaka, N. (1999). Real Options: Managing Strategic Investment in an Uncertain World (1st ed.). Massachusetts: Harvard Business School Press.
Asrilhant, B. and Dyson, R.G. (2000, June) Converging Decision-Support Tools On Successful Strategic Project Management: From Theory To Practice. 3rd PMI Europe Conference, Jerusalem, Israel.
Barwise, P., Marsh, P. R., & Wensley, R. (1989, September-October). Must finance and strategy clash? Harvard Business Review, 67(5), 85-90.
Bontis, N., Dragonetti, N., Jacobsen, K., & Roos, G. J. (1999). The knowledge toolbox: A review of the tools available to measure and manage intangible resources. European Management Journal, 17(4), 391-401.
Brealey, A. R., & Myers, S. W. (1996). Principles of Corporate Finance (5th ed.). Singapore: McGraw Hill International Editions.
Brewer, P. C., Chandra, G., & Hock, C. A. (1999, Spring). Economic value added (EVA): Its uses and limitations. S.A.M. Advanced Management Journal, 64(2), 4-11.
Buckley, A. (1998). International Investment: Value Creation and Appraisal (1st ed.). Denmark: Copenhagen Business School Press.
Buckley, A., & Tse, K. (1996). Real Operating options and foreign direct investment: A synthetic approach. European Management Journal, 14(3), 304-314.
Clarke, A. (1999). A practical use of key success factors to improve the effectiveness of project management. International Journal of Project Management, 17(3), 139-145.
Cooke-Davies, T. (2002) The “real” success factors on projects, International Journal of Project Management, 20(3), 185-190.
Crawford, L., Hobbs, B. and Turner, R. (2002, July), Investigation of Potential Classification for Projects, PMI Research Conference 2002, Seattle, WA, USA.
Dixit, A. K., & Pindyck, R. S. (1994). Investment under Uncertainty. New Jersey: Princeton University Press.
Dyson, R. G., & Berry, R. H. (1998). The Financial Evaluation of Strategic Investments. In: Dyson, R. G. and O’Brien, F. A. Strategic Development: Methods and Models (pp. 269-297). UK: John Wiley and Sons Ltd.
Dyson, R. G., & Foster, M. J. (1980). Effectiveness in strategic planning. European Journal of Operational Research, 5(3), 163-170.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth Generation Evaluation. Newbury Park, CA: Sage Publications.
Hertz, D., & Thomas, H. (1983). Risk Analysis and Its Applications. London: John Wiley & Sons Ltd
Kaplan, R. S., & Norton, D. P. (1992, Jan-Feb). The balanced scorecard: Measures that drive performance. Harvard Business Review, 70(1), 71-79.
Kaplan, R.S. and Norton, D.P. (2000, Sep.-Oct.) “Having trouble with your strategy? Then map it.”, Harvard Business Review, Harvard College, 167-176
Kaufman, J. (1997) Value Management: Creating Competitive Advantage. Crisp Management Library, Crisp Publications Inc.
Kester, W. C. (1993). Turning Growth Options into Real Assets. In R. Aggarwal Capital Budgeting Under Uncertainty (3rd ed., pp. 187-207). New Jersey: Englewood Cliffs, Prentice Hall.
McGrath, R. G. (1999). Falling forward: Real options reasoning and entrepreneurial failure. Academy of Management Review, 24(1), 13-30.
Pigott, J. (2000, Dec.) Ensuring a Degree of Robustness in your Investment Procedures: The Use of EVA to Build an Effective Governance System. The International Business & Corporate Strategy & Planning Congress 2000, Amsterdam, The Netherlands.
PMI Standards Committee (2000). A Guide to the Project Management Body of Knowledge (PMBOK® Guide). Drexel Hill, PA: Project Management Institute.
Porter, M. (1997). Interview in: The Financial Times, 19 June 1997.
Rockart, J. F. (1979, Mar-Apr). Chief Executives Define Their Own Data Needs. Harvard Business Review, 57(2), 81-93.
Shenhar, A.J., Dvir, D., Lechler, T. & Poli, M. (2002) One Size Does Not Fit All: True for Projects, True for Frameworks, PMI Research Conference 2002, Seattle, WA, USA.
Slater, S. F., Reddy, V. K., & Zwirlein, T. J. (1998). Evaluating strategic investments: Complementing discounted cash flow analysis with options analysis. Industrial Marketing Management, 27(5), 447-458.
Stewart, G. B. III (1994, Summer). EVA: Fact and Fantasy. Journal of Applied Corporate Finance, 7(2), 71-84.
Thiry, M. (1997) Value Management Practice, Project Management Institute, Sylva, NC.
Thiry, M. (2002, June). How can the benefits of PM training programs be improved? 5th PMI Europe Conference, Cannes. Accepted for publication in International Journal of Project Management, Elsevier Science, Oxford. Scheduled date: April 2004.
Thiry, M. (2003) Value Management, Chapter 22 in: Project Management Pathways. High Wycombe, UK: The Association for Project Management.
Ward, K. & Grundy, T. (1996). The Strategic Management of Corporate Value. European Management Journal, 14(3), 321-330.
Proceedings of PMI® Global Congress 2003 – North America
Baltimore, Maryland, USA ● 20-23 September 2003