Program management – benefit measurement evaluation

Abstract

Program benefit measurement evaluation is a challenge for many programs and organizations. Whether due to a lack of clearly defined benefits for the program or a flawed approach to evaluating them, too many programs are viewed as falling short of delivering on their promise.

Program evaluation is about collecting and analyzing information related to a program or some aspects of it, so we can interpret its actual performance, make sense of it, and potentially get better at doing it. There are multiple types of evaluations, including needs assessments, cost/benefit analysis, effectiveness, efficiency, goal-based, process, and outcomes. Many are quantitative in nature, others are more prone to bias, and most are related to each other and somewhat integrated.

This paper deals with some of the challenges associated with performing program measurement evaluation. It reviews the most common and beneficial types of evaluations and the main areas of focus associated with them. It also looks at contributors to a successful evaluation: connecting the process to the program's purpose and objectives; defining the evaluation's focus, scope, and span; and determining why and what we try to evaluate.

Introduction

For the most part, program benefit measurement and evaluation are overlooked. They are often not performed properly, or at all, and when there is an attempt, many organizations do not know where to begin, how to do it, or what to benchmark it against.

There are those who see it as a rather useless activity that generates a lot of data ‘noise’ and unrealistic conclusions. On the other hand, some try to achieve unattainable levels of accuracy, which may result in useless piles of data. Yet current conditions dictate more scrutiny of what we do in the organization and how, and in turn a more pressing need for justification that is both efficient and effective. In simple terms: a growing need for practical and relevant benefit measurement approaches.

Program benefit measurement and evaluation are not about proving success or failure of a program. Most programs will fall somewhere in between the definitions of the two extremes. Our benefit measurement objective is future oriented; how to sustain and, if needed, adjust the program benefits. One major advantage in performing it is the improvement in communication and the creation of a feedback and collaboration-conducive environment among stakeholders.

The process of measuring and evaluating benefits is quite straightforward and it does not need to be expensive or complex. The bigger challenges come from not knowing what we need to measure, or even not having clearly stated program goals and objectives to begin with.

We conduct benefit measurement and evaluation on a regular basis, especially informally. It is now time to formalize it to achieve consistency, clarity, and continuity. The result of an effective and successful evaluation process will be a more strategic-oriented discussion of opportunities, better communication, more productive relationships with the customer and higher returns on our investment.

When referring to evaluation success, our goal is to effectively measure how the program performed in the context, and for the intention, for which it was undertaken, and not necessarily to label the program a success. Indeed, a successful evaluation may well deem the program a failure.

The Importance of Benefit Measurement

Various studies and observations indicate that as many as three quarters of programs are deemed to be failing to deliver the full, or even a satisfying, level of their intended benefits. There are some usual suspects that probably account for a significant share of the root causes of these failures, including unclear or unrealistic expectations of the program, an unfocused evaluation process, and business cases focused on target savings instead of clearly expressed business benefits.

Furthermore, there is generally too much focus on deliverables and capabilities, rather than on the benefits associated with them, and no mechanisms in place to manage interim or final realization and sustainment. This may be driven by multiple causes, although for the most part it appears to be due to the relative ease of measuring the generally more tangible deliverables and capabilities over the more subjective and hard-to-identify benefits and their sustainment.

Many programs adopt the ‘mentality’ of projects, which are often considered finished when their deliverables are complete, without attempting to realize and sustain those deliverables over time, or without checking their (positive or negative) downstream effects. Further, benefits and business objectives need to be defined early, including identifying ownership: who is responsible for planning and managing their achievement, and who is responsible for realizing them.

These foundational aspects of any program reinforce the criticality of structure, accountability, clarity, and discipline in the definition and measurement of program benefits. Alongside these, the context in which the program takes place and the organization's portfolio management should also be considered. Effective benefit realization planning must identify and address the changes that will be required, including any potential resistance along the way.

Without a measurement of the program benefits, we may miss out on many opportunities associated with it, stumble on things that otherwise might be easy to resolve, fail to extend or sustain value created for the organization, and ‘drive blindfolded’ moving forward, without being able to assess how the program did and how to get better in the future.

Challenges

Program evaluation is a systematic method for collecting, analyzing, and using information to provide basic answers about a program. However, there are several challenges associated with the evaluation process, including the scope and span of the measure, its timing (if done during the program), the aspects that are being measured, how indicative the measurements are, and whether they capture the full picture.

In many cases, programs simply span long periods of time and in some cases are viewed as indefinite in duration. In other cases, there are informal attempts to assess a program's effectiveness, supported by questions such as which stakeholders benefit from the program and in what sense, how much the benefits ‘stick,’ how sustainable they are, and whether they align with the original intended benefit for which the program was undertaken and with the organizational strategy.

For the most part, evaluations are not systematic and are often done by evaluators who are not impartial and who have a less than sufficient level of access to information.

Where to Start

More formal evaluations address these same questions, but with a systematic method for collecting, analyzing, and interpreting information. They ask some basic program-related questions and look for their answers, supported by facts and evidence. Whether internal or external, the evaluators need to have some level of relevant experience, domain knowledge, and context of the program and the environment, while maintaining an unbiased view of the organization.

The most basic way to distinguish between evaluation types is to break them down into two layers: program implementation and outcome objective measurement. Implementation refers to what the program plans to do and how it will take place. Outcome objective measurement is about what we expect to happen as a result of the program, including its downstream impact after the program ends and on all aspects of the organization. It almost follows the paths of Quality Assurance and Quality Control: QA checks the process and QC, the results.

A comprehensive evaluation should try to answer both key questions, yet too often programs are evaluated on only one of these aspects; or worse, a program may be successful in attaining its implementation objectives, but there is no answer as to whether its intended outcomes were achieved and whether the program was actually worthwhile. The opposite may also be the case, where a program succeeds in achieving its outcomes, but without information about the implementation process it will be hard to identify the parts of the program associated with these changes or to learn from them moving forward.

Program benefit goals and definitions should be set at the start of the program, along with any measures to evaluate them throughout or at the end of it. Although it is possible to ‘back track’ the evaluation and plan it later in the program's life cycle, it is more effective and efficient to design it into the program's objectives upfront.

There is a rationale for thinking about program benefit measurement earlier on and refining it throughout the program, rather than after it ends: it makes it easier to identify problem areas and attempt necessary changes while the program is still operational and before it is too late. This real-time review process also ensures that program resources and stakeholders are available to provide information for the evaluation and act if needed.

Setting up a benefit measurement evaluation strategy and creating a roadmap will help identify when we should expect benefits to emerge, recognize them as they appear, tie them to the program objectives, and realize and plan to sustain them. With that, we will also be able to identify new benefits, capitalize on (and sustain) those realized, and change or improve areas in need.

Before the Evaluation Process

There are some questions to consider before starting the evaluation process:

  1. Why are we going to evaluate?
  2. Who asked for the evaluation, who is the receiver, and who will use the findings?
  3. What is going to be done with the evaluation's results?
  4. Who is going to be involved in the process, including collection, analysis, reporting, and review?
  5. What information do we need?
  6. Where are we going to look for the information?
  7. How reliable are the sources of information?
  8. Are the timelines and resources dedicated to the process and analysis aligned with the expectations?
  9. How are we going to collect and extrapolate the information?
  10. What are the cost/risk/benefit trade-offs of the effort, given the desired detail level and results?

How to Perform a Program Measurement Evaluation

To make it easier to benchmark and compare programs in the organization, over different time periods or against programs in other organizations, a systematic approach should facilitate structure and consistency in the data collection, analysis, and interpretation. It is easy to take things out of context, yet also quite simple to ensure we are doing it right:

  1. Discover. Review evaluation objectives and prepare for the evaluation
  2. Determine. What to measure, the type of evaluation, its context
  3. Team. Put together an evaluation team
    1. Can be internal or external, depending on the type of organization and the ability to provide an internal team with the resources, support, and ‘teeth’ to ensure the recommendations are taken seriously
    2. Due to the investment and formality involved, evaluations performed by an external team often provide more tangible and lasting recommendations
  4. Plan. Define roadmap, timelines, and methods for design, data collection, analysis, recommendations, and communication
  5. Buy-in. Start the buy-in process and inform stakeholders of their roles, participation level, and timing. Set up expectations: managing realistic expectations and achieving buy-in and understanding of the process is crucial
  6. Collect. Gather data and information. Put together whatever it takes to support the effort
  7. Organize. Classify the information based on its objectives and filter relevant information from any noise, distractions, and unnecessary data
  8. Analyze. Ensure timely and sufficient analysis and seek feedback or additional information when in need. The analysis will verify whether we have what it takes to continue and validate whether we are moving in the right and intended direction
  9. Report. State your findings. Describe the program, present analysis results, reiterate the objectives, and measure whether they have been met, to what extent, and how far we are from achieving both the program's and the evaluation's goals. There should also be some sentiment as to how effective the evaluation process has been
  10. Forward. Make recommendations and state an action plan to ensure benefit realization and sustainment, along with lessons learned and any identified areas for change and tweaking.

Cost

The cost of a program evaluation is often an issue, because no one wants to pay too much for benefit measurement, not to mention the argument over who should pay for it. Failing to perform an evaluation altogether, however, may end up being more costly for the organization in the long haul; building the cost of the evaluation into the program's budget early on is one option. Realistically, the amount of money needed is a function of multiple factors, including which aspects of the program we evaluate, the size of the program, the number of outcomes we set out to assess, and who conducts the evaluation.

Overall, the cost of benefit evaluation may in some cases reach 15% to 20% of the total program budget. It is a balancing act between overspending to the tune of the law of diminishing returns and spending enough to do it right and get real value from the evaluation. Arguably, a successful evaluation of the program will pay for itself through the growth, efficiencies, and improvements it leads to.
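The 15% to 20% upper-bound figure above translates into a simple budget envelope. As a minimal sketch (the percentage range is taken from the text; the $2,000,000 program budget is a hypothetical value for illustration only):

```python
def evaluation_budget_range(program_budget: float,
                            low_pct: float = 0.15,
                            high_pct: float = 0.20) -> tuple:
    """Return the (low, high) evaluation cost envelope for a program,
    using the 15%-20% upper-bound range discussed above."""
    return (program_budget * low_pct, program_budget * high_pct)

# Hypothetical $2,000,000 program budget:
low, high = evaluation_budget_range(2_000_000)
print(f"Evaluation envelope: ${low:,.0f} to ${high:,.0f}")
# → Evaluation envelope: $300,000 to $400,000
```

In practice the percentage applied would be negotiated against the scope, span, and objectives of the evaluation, as discussed below.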

Evaluation costs will vary dramatically, depending on the scope, span, objectives, and ‘efficiency’ level, though generally the rule that ‘you get what you pay for’ applies. For the most part, conducting a low-cost evaluation just in the name of doing something may not even be worth the expense.

Preparing for the Evaluation

Similar to the planning for the program itself, we should conduct the evaluation planning and preparation stage with a high level of focus and detail, to ensure we are setting up the right thing and that we know what ‘right’ means.

The first step is to decide what to evaluate. Is it going to be the entire program or just one or more of its components? The scope and reach of the evaluation will depend on the capacity, resources, and money we allocate toward it. Evaluating only one component, rather than the entire program, should be considered; it is not only easier and lower in cost, but it can also yield a higher level of accuracy. It can be done as part of a phased approach that presents a lower cost/risk option.

The next step is about putting together a framework that includes a clear picture of what we are trying to evaluate, along with structure and a roadmap. It will give stakeholders an idea of what it is all about and involves establishing and identifying a series of premises and assumptions. These concern the program, the evaluation process, the affected stakeholders, and their definitions of the current state and the intended program outcomes and benefits. It might help to include an implementation objective for the overall set of challenges presented. It will also serve as an opportunity to discuss the differences between the program's outputs, outcomes, and benefits, and which of them we are trying to measure.

It is worth stating the obvious and following the SMART principle (specific, measurable, achievable, relevant, time-bound), so there is a way to measure the objectives we evaluate and put them in context. These measurements will turn into performance evaluation criteria through a definition of the desired performance level.
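A measurable objective paired with a desired performance level can be captured as a simple record. The sketch below is illustrative only: the `BenefitMeasure` structure, the benefit name, and the target and actual values are all hypothetical, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class BenefitMeasure:
    """A SMART-style evaluation criterion: a named benefit with a
    measurable target (the desired performance level) and an observed
    actual value."""
    name: str
    unit: str
    target: float  # desired performance level — the evaluation criterion
    actual: float  # measured performance

    def attainment(self) -> float:
        """Fraction of the target achieved (1.0 means fully met)."""
        return self.actual / self.target

    def met(self) -> bool:
        return self.actual >= self.target

# Hypothetical benefit: time saved in order processing, in hours per week.
measure = BenefitMeasure("Order-processing time saved", "hours/week",
                         target=40.0, actual=34.0)
print(f"{measure.name}: {measure.attainment():.0%} of target, met={measure.met()}")
# → Order-processing time saved: 85% of target, met=False
```

The point of the structure is that every benefit carries its own target, so attainment can be reported consistently and benchmarked, as discussed next.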

Our benefit measurement evaluation needs to be benchmarked in order to provide additional validity and context. Considerations for benchmarking data include industry standards, the organization's own performance history, and market specifics.

Types of Evaluations

Of all the types of evaluations, three are more common and attainable, and they usually answer a broader range of organizational and program needs:

  1. Process/implementation based
  2. Goals/objectives/output based
  3. Outcome/benefit based.

Process-Based Evaluations

This evaluation type is about the way the program works and its efficiency in producing results. It is applicable to long-standing programs, or those considered indefinite in duration. It can look into challenging areas and can provide much-needed transparency. This type of evaluation should be performed during the life cycle of the program, but it can also yield benefits if done after program closure. It can also serve as an organizational improvement opportunity by checking the alignment between the organization and the program.

A safe approach is to define what questions we are trying to answer through the evaluation process and what we are trying to find out about the program. Examples of questions and areas to consider:

  1. The criteria customers apply to define the services they need
  2. Internal capabilities and environments that enable delivery of the service
  3. Training and support provided to employees to deliver the service
  4. Expectations from the customer for actions, contribution to the effort, and buy-in
  5. Existing processes the customers and employees follow
  6. Perceived strengths of the program by the customer and internally—are they aligned?
  7. Challenges and complaints throughout the program
  8. Lingering issues that cause bottlenecks and other inefficiencies
  9. Areas for improvement in the program: review process effectiveness, relevance, and efficiency
  10. Criteria to define new needs or services to replace existing ones
  11. Has the program cost more money and/or taken more time than planned to deliver the intended value?
  12. Are there lessons learned that are being collected, applied, and implemented?
  13. Are there additional areas in the organization that benefit from the program?
  14. Do stakeholders feel that components are better off thanks to the program?
  15. Are methodologies, tools, techniques, and standards being introduced consistently by the program and are they used by the components?
  16. The organization — Are resources and budgets managed in a way that supports the program goals?
  17. Priorities — How aligned are the program decisions and practices compared with organizational priorities?
  18. Reporting and communication — Does information flow effectively (to whomever needs it, in a timely fashion), and efficiently (optimized, with less effort) through the program organization?
  19. Risk — Are there a central measurement and authority for risk prioritization and best practices at the program level?
  20. Quality — Are quality standards and measurements facilitated centrally and effectively?
  21. Change — Is there a consistent approach to change control processes and management across the program, with a program level impact assessment?
  22. Economies of scale — Is the program adding value to all levels — the projects, the program, and the organization, and what is this value?

Goals-Based Evaluation

This evaluation checks whether the program meets its predefined goals or objectives and to what extent. There are some questions to keep in mind for this kind of measurement:

  1. Who established the program's goals and objectives?
  2. How are they established and what is the process that takes place?
  3. How far is the program toward achieving its goals?
  4. Are we on target for success in achieving those goals?
  5. If we are, what is it that we are doing right?
  6. If not, how far are we and what are the reasons for the negative variances?
  7. Are there sufficient/relevant processes, support systems, and resources to achieve the goals? If not, what is missing? If yes, what are the causes for our performance variations?
  8. Is there a need to make a priority change toward achieving the goals?
  9. Should the change be on the program/performance level, or the expectations/evaluation level?
  10. Are efforts behind schedule? Over budget? If so, why?
  11. Is there a need to change the timelines, efforts, or expectations?
  12. Should we review the goals, update, and change, add or remove any?
  13. Is there a need to change the way we look at establishing future goals going forward?
  14. Is there a gap among stakeholders between actual and perceived goal delivery levels?
  15. Are the original measures of goals still relevant and achievable?
  16. Is there a gap in the level of delivering on organizational related outcomes versus external outcomes?
  17. Whether the program is on target for achieving its goals or not — what does it mean?
  18. Have all goals been delivered or on track toward successful delivery? What does it mean?
  19. Have there been new goals realized or identified that were not there at the start?
  20. What are the reasons for these gaps?

Outcomes/Benefit-Based Evaluation

Program evaluations with an outcomes and benefits focus attempt to check benefit sustainment and how benefits ‘stick’ to the organization. Beyond checking goal delivery, or the extent of it, this evaluation examines how much of the intended value is actually being realized.

It provides an indication of whether the organization is truly delivering the desired results to the customer. These customer benefits can be measured and viewed as enhanced learning, processes, or conditions, and should not be confused with program deliverables, outputs, or units of service.

It is recommended to first evaluate a smaller program or a component, and to continue from that point based on the results of our measurements. This checks whether the evaluation is on the right track and can also create a benchmark for subsequent portions of it.

There are some considerations and steps to follow:

  1. Review outcomes and benefits as defined for the program
  2. Check if they are still applicable and whether new benefits are/can be realized on the program level
  3. Define the outcomes and benefit to examine
  4. Measure the impact of delivering the program objectives on both the customer and the organization
  5. Inquire as to what is getting done and why from a value perspective
  6. List and prioritize the outcomes and benefits to measure. Apply the 80-20 rule if applicable
  7. List the indicators, values, and data that will help measure the level of achieving the objectives
  8. Create a performance measurement baseline to define goals and targets for measurement
  9. Plan for data collection and identify valid sources of data that will reduce bias and represent the objectives of the evaluation
  10. It may be a chance to review important processes to ensure they are being followed
  11. Review processes to check that they are relevant and produce their intended outcome
  12. Review the efficiency of processes and whether/to what extent they contribute to value creation
  13. Examine the program goals and question whether each goal delivers its intended benefits and to what extent
  14. Check item by item for economies of scope to identify new, originally unintended benefits
  15. Collect the information from all relevant and representing sources. Maintain cost-benefit considerations for information gathering techniques
  16. Check whether the benefits are fully realized and explore ways to extend them or ensure they are sustained to the intended extent or beyond
  17. Perform a cost-benefit analysis to evaluate the justification of benefits
  18. Perform a cost-benefit analysis for the sustainment of benefits
  19. Make sure the analysis and results of the evaluation are not taken out of context
  20. Always look for new ways to evaluate or opportunities to make adjustments to the evaluation process

Relationship between Evaluations

Each of the three types of evaluation outlined above represents an area of focus. This addresses one of the main problems in program evaluation: the lack of focus on what organizations and programs try to evaluate and how. With that in mind, it is important to note that an evaluation type cannot quite stand alone, and there are some areas of overlap and integration among evaluation approaches.

One of the most relevant overlaps is the relationship between process and outcome evaluation. As seen in the items for consideration above, the outcome evaluation contains a look at the process to add validity to the benefits measurement and establish a link between the two for context. Exhibit 1, ‘Relationship between Process and Outcome Evaluation Components,’ illustrates the relationship between the two types of evaluation and how they interact with each other.

Exhibit 1 – Relationship between process and outcome evaluation components

(Adapted from http://www.jbassoc.com/reports/documents/evaluation%20brief%20-%20conducting%20a%20process%20evaluation%20-%20final%E2%80%A6.pdf)

Evaluating and Reporting

When collecting information from the customer, consider the types of benefits realized by both the customer and the organization. These include the learning that results from the program and its benefits, the changes and improvements imparted to skills and behaviors, and how much more effective and efficient both the customer and the organization become in performing their work. A more subjective measure relates to stakeholders' reactions: this is about the overall service and its lasting impacts, and if measured properly it can provide valuable insight into perceptions of and attitudes toward the program.

Similar to any other report, the reach, depth, and scope of the benefit realization report should closely reflect the intended audiences. Participants and impacted parties should be able to review and discuss the report, along with a list of recommendations, action items, and next steps.

Many of the intended benefits will not start to materialize until later in the program's life cycle, and it is therefore important to identify ownership of the benefits realization plan. This involves specifying action owners and timelines, downsides, and projected impact on the program, organization, and customer. The process should include a post-implementation review, to allow time for analysis and a proper evaluation against the original business case.

Additional items should be included in the final report; while not directly part of the actual evaluation, they play an important role in introducing context and relevance. These include background, organizational description, the program under evaluation, program purpose, a list of benefits to be realized, the scope and objectives of the evaluation, costs, timelines (for both program and evaluation processes), executive summary, conclusions, recommendations, and a qualified discussion. Also include supporting information covering goals, methods, analysis, procedures, and sources of information.

Effective Measure

The Benefits Realization Strategy must be connected to, and driven by, the organization's strategic planning and portfolio management. The first step toward achieving this is to establish a framework that defines how benefits should be identified, structured, planned, and realized. It should classify types of benefits and value to the organization, and reference the current strategic goals and objectives. We should not only list the potential benefits identified, but also identify dependencies, to understand where the achievement of one benefit depends on the realization of another.

This discussion of program benefit measurement and realization deals with aspects of program evaluation that, although meaningful, are not financially driven. While financial measures are generally valid, widely accepted, and fairly easy to obtain, they can fall victim to biases, be taken out of context, and mask unrealistic or unreliable inputs. Our ‘preferred’ evaluation options are qualitative in nature.

Delivering Strategic Goals and Objectives

The organization's strategic goals and objectives should be articulated and evident throughout the benefits identification and planning. The business case needs to be evaluated thoroughly to ensure that it is focused on and maximizes delivery or achievement of strategic goals.

During the life of a program it may be necessary to modify the objectives, change priorities, or redefine the desired outcomes in the light of changing circumstances or performance levels. The structure and accountability should continue through and beyond the life of the program, to ensure that the benefits are realized and sustained as intended.

Conclusion

There is no question about the need for program benefit measurement. A range of causes (or excuses), including organizational priorities, (lack of) long-term vision, knowledge of how to properly evaluate a program and against which objectives, and constraints (time, money, resources, and capacity), prevents us from doing it properly, or from doing it at all.

Of the many ways to measure program benefits, a qualitative assessment is the most foundational and provides a higher benefit-cost ratio. This paper has dealt with three main approaches to evaluating programs: through their process/implementation, deliverables/goals, and outcomes/benefits, and has provided guidelines and checklists of consideration items for each one.

Although not simple or low cost to perform, these three techniques help make program evaluation easier and more relevant, especially when performed by knowledgeable evaluators who are familiar with the domain and who focus on data collection and analysis techniques rather than on financial calculations and templates that can be misused or taken out of context.

References

American Evaluation Association. Retrieved from http://www.eval.org/resources.asp

Babbie, E. R. (1990). Survey research methods (2nd ed.). Belmont, CA: Wadsworth.

Berk, R. A., & Rossi, P. H. (1998). Thinking about program evaluation (2nd ed.). Newbury Park, CA: Sage.

Braverman, M. T., Constantine, N. A., & Slater, J. K. (Eds.). (2004). Foundations and evaluation: Contexts and practices for effective philanthropy. San Francisco, CA: Jossey-Bass Publishing.

James Bell Associates. (2009). Evaluation brief: Selecting an evaluation approach. Arlington, VA: Author. Retrieved from http://www.jbassoc.com/reports/summary.aspx

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2012 Ori Schibi
Originally published as a part of the 2012 PMI Global Congress Proceedings – Vancouver, Canada
