This paper encourages moving beyond assessing project execution toward additionally examining whether or not a project has delivered its intended outcome. It offers considerations for assessing the outcome of a project, along with the organizational impact of undertaking outcome measurement. Additionally, it provides tips and techniques for taking a more quantitative approach to assessing the value of a delivered solution.
Introduction
Most project managers, program managers, and business analysts have experience with evaluating whether a project was executed successfully by examining project and program duration and effort, and by explaining variances from estimates. They are also familiar with conducting cost/benefit analyses to justify initiatives. They often compare estimated project costs to actual costs as a measure of success. Yet successful project execution does not guarantee a successful outcome. Too often, feedback after the completion of a project is a wistful comment similar to “it's just what I asked for but not what I wanted.”
PMI's 2014 annual global Pulse of the Profession study revealed that “inaccurate requirements gathering” continues to be a significant cause of project failure. In fact, insufficient requirements elicitation and analysis is also a root cause of other project problems. For example, descoping a project to the point where it can no longer deliver sufficient value is often due to incomplete analysis of the dependencies between requirements.
Many organizations have limited funds for the development of new products and services or the improvement of existing products and services. They have to carefully choose which initiatives they fund. They want to be able to assess the outcomes of the choices they have already made to help inform future choices. Thus, increasingly, project sponsors in the for-profit world and donors in the non-profit world are asking for proof that the initiatives they fund have produced a successful and valuable outcome.
Providing proof of a successful outcome necessitates defining the criteria by which success will be evaluated. Defining what success looks like in terms of measurable evaluation criteria provides multiple advantages to projects:
- Defining evaluation criteria is a complementary activity to requirements elicitation and analysis and test definition. Tackling all three together for each segment of a solution which is to be implemented is very efficient; requirements, testing, and solution evaluation techniques reinforce each other to create greater clarity for all three of these important perspectives.
- Evaluation criteria become part of the “contract” between those who execute on the initiative and those who provide the funding for it. There is less likely to be a mismatch between what was requested and what was delivered when “what success looks like” is clearly understood by all involved.
- Understanding the potential complexity and cost of conducting measurements against evaluation criteria helps all involved manage expectations, be realistic about what success looks like, and be pragmatic about how they will know if they achieved it.
All organizations – from large product-oriented multinational corporations to startups in a garage to service-oriented for-profits and nonprofits – need to make sure that they target their funds to areas which will produce successful outcomes for them and their customers/clients. Solution evaluation needs to include those components which we currently measure, such as project and program costs and any readily quantifiable benefits, but it must also extend beyond these factors to consider and reason about value in an honest and quantitative way. With that in mind, let's consider what project outcomes are and how to measure them.
Project Execution vs. Outputs vs. Outcomes
Ideally, all project activities, project outputs, operational outputs, and project outcomes can be tied (or traced) back to project goals and objectives, which in turn help an organization support its mission and achieve its vision. They can all be measured, but it is the outcomes which are the primary evidence of the project's success. Here's why:
- All of the lifecycle activities which comprise a project are components of its execution. Each one can be tracked and managed, and when all of the activities in a project have been completed, we know that the project is complete and we can assess how well the activities were done, how timely they were, and how much they cost. For example, one can track and manage the actual duration of construction activities as part of building a utility power substation, track how long it took to test new customer support center software, or track how long it took to create the training materials for a new procedure in a homeless shelter. Such measurements are about project progress. With the exception, perhaps, of confirming a faster time to implementation or time to market, which might increase market share or client satisfaction, none of these project activity measurements proves that the project has delivered value or that it was worth the effort to bring the project or program to completion.
- Project outputs are produced by completing project activities. Whether a project focuses on building a utility power substation, building software for a customer support center, or creating a new procedure for handling food distribution in a homeless shelter, there are almost always outputs. Project outputs may take the form of “deliverables,” such as specifications, constructed components, procedures or collateral, completed tests, software bugs identified and fixed, and the actual installation/transition to a production or operational environment, either all at once or in usable segments. Project outputs are often tracked and monitored as part of project execution. Like measurements of project execution, monitoring project outputs tracks project progress and does not usually provide a direct indication of the value created by conducting the project.
- Operational outputs are produced by the completed solution. Examples include: the number of kilowatt hours generated by a substation, the number of customer support calls completed and the average duration of a call, or the number of meals served at a homeless shelter. All of these can be tracked and managed, too.
- Project outcomes are the results and values which the delivered solution enables. Very often, one can think of outcomes in terms of what the solution will enable individuals or an organization or a system to do, such as reduced call time, increased electricity capacity, or more meals served in the shelter. Such project outcomes can be measured by operational outputs. Yet not all outcomes can be measured by outputs, because the output and the outcome are not necessarily the same, especially when the solution serves multiple stakeholders. For example, the output of shorter call times at the call center will help the owners of that center confirm that they are successfully reducing expenses. Yet another expected outcome for the new call center software might be increased customer satisfaction with the service offered by the call center; that satisfaction can't be measured by the length of a call.
Seen in this light, project execution and project outputs, and even operational outputs, are a means rather than an end in themselves. The outcomes are the actual end results, where one finds the value.
The classic high-level outcomes expected by organizations have often been summarized by the mnemonic “IRACIS,” which stands for “Increase Revenue, Avoid Costs, Improve Service.” Thinking about IRACIS is a way to assess operational activities; much of IRACIS can be measured with operational outputs. But some outcomes are not easily measured with operational outputs. In the for-profit world, increased customer satisfaction or retention or referrals are examples of outcomes which do not fit neatly into IRACIS categories. Nonprofit and government organizations want to assess whether the initiatives they institute improve the well-being of their constituents in some way. Improved service supports improving well-being, but does not measure that an improvement in well-being has occurred.
So, project measurement needs to include not only those components which we currently measure, such as project and program costs and any readily quantifiable benefits, but must also extend beyond these factors to consider and reason about value in an honest and quantitative way.
Outcome Measurement Challenges
Outcome measurement is not for the faint of heart. Reasons why outcome measurement is complex include the following:
- Measuring outcomes depends upon being able to come to agreement on what has value. Value is perceived to be difficult to define and assess. Indeed, the notion of value can be very subjective and highly dependent on who defines it and who receives it.
- Much current wisdom asserts either that value is intangible or that it cannot be measured. In some businesses and professions, there is a fear that measuring value might involve weighing the value of a life, which is troubling at best and immoral at worst.
- In order to measure an outcome, measurable evaluation criteria need to be established. This means that while outcome measurement often occurs at the end of a project, or periodically long after the project has completed, thinking about and planning for outcome measurement must be part of initial project planning, when candidate evaluation criteria may not be as well understood.
- Sometimes, the outcome of a project may appear to be out of scope for the project. For example, a project which creates automated software for a call center might reduce the number of staff members needed to operate it or reduce the amount of time per call. These measurable factors are certainly valuable to the call center. But another value of this project lies in increased customer satisfaction with how calls are handled and, consequently, increased customer retention. So, while the project focuses on building and delivering the software, a lot of what makes the project successful depends on whether its usage actually increases customer satisfaction and retention. That usage needs to be kept in mind when building the software and also when evaluating it.
- Development projects and programs can be expensive. Depending on the organization and the kind of data it already tracks and manages, additional data collection which may be needed to measure outcomes can add significantly to project or organizational expenses. Moreover, if assessing whether or not the outcome is successful must be done over a long period of time, smaller – and sometimes larger – organizations can fold before they can determine whether or not they achieved their goals. Organizations which undertake outcome measurement need to balance their “need to know” with the cost and difficulty of conducting evaluations.
A Pragmatic Approach for Defining Outcomes
Outcomes should be clearly, realistically, and honestly defined with as much precision as is necessary. The degree of precision with which an outcome should be defined depends on the kinds of decisions and evaluations which will take the outcome into account and the type of outcome being evaluated.
- All outcomes need to tie back to project objectives, which in turn must tie or trace back to organizational objectives, goals, mission, and vision. If an outcome does not relate to what the organization is trying to accomplish, should time and effort be invested in defining it?
- Many outcomes cannot be precisely defined. When it comes to customer service, customer retention, improved well-being, or increased capabilities, we cannot promise or predict some exact outcome. In these situations, always define the outcome criteria as a range, with a worst case value, a best case value, and a most-likely value. Using ranges to define outcomes helps to manage expectations about them.
- Having a format which tabulates the qualities of service of desired behaviors or capabilities, along with expected performance ranges, can be helpful. One such vendor-agnostic format is Planguage. It is free and available for all to use. This format is well-suited for outcomes which can be measured quantitatively, such as the amount of time it takes for someone (or some system) to complete a task. It is also used to specify criteria for “ilities” related to the performance of tasks, such as usability, reliability, availability, security, scalability, and efficiency/throughput/performance. It could also be used for semi-quantitative measures, such as scaled responses from well-crafted surveys.
- Exhibit 1 provides an annotated example of using the Planguage format to define the usability of the new customer call center software; a brief illustrative sketch follows the exhibit. In this example, an output of the customer call center system is also a desired outcome of the initiative and may indicate whether or not its associated objective had a successful outcome.

Exhibit 1: Annotated example using the free Planguage format to define outcomes.
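To make the range idea concrete without reproducing the exhibit, the following is a minimal sketch in Python of how such a Planguage-style definition, and an evaluation against it, might be expressed. The scale, meter, and numeric levels are assumptions invented for the call center example rather than values from Exhibit 1; the field names (Tag, Scale, Meter, Fail, Goal, Stretch) are keywords drawn from Gilb's Planguage.

```python
# Hypothetical Planguage-style outcome definition for the call center example.
# Field names follow Gilb's Planguage keywords; all values are illustrative assumptions.
usability_outcome = {
    "Tag": "CallHandling.Usability",
    "Scale": "Average minutes for a trained agent to resolve a routine call",
    "Meter": "Call-logging statistics sampled over one month of production use",
    "Fail": 9.0,     # worst acceptable level; slower than this counts as failure
    "Goal": 7.0,     # planned / most likely level
    "Stretch": 5.5,  # best-case ("wish") level
}

def assess(measured_minutes: float, outcome: dict) -> str:
    """Classify a measured value against the defined range (lower is better here)."""
    if measured_minutes > outcome["Fail"]:
        return "outcome not achieved"
    if measured_minutes <= outcome["Stretch"]:
        return "best-case outcome achieved"
    if measured_minutes <= outcome["Goal"]:
        return "planned outcome achieved"
    return "minimally acceptable outcome achieved"

print(assess(6.4, usability_outcome))  # -> planned outcome achieved
```

The value of spelling the definition out this way is that each level in the range is explicit and measurable, so the eventual evaluation becomes a simple comparison rather than a debate.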
- Clearly, any outcomes which would impact decisions related to health or safety need to be defined and measured with as much precision as possible. In such situations, control groups are almost always established to confirm the outcome.
- Some types of outcomes not related to health and safety should still be predicted with as much precision as possible, such as the expected, planned responses of an interactive software system which delivers discrete functionality.
- A format which tabulates actions and expected responses can be helpful for defining outcomes where some degree of precision is required. One such vendor-agnostic format is Given/When/Then. The Given/When/Then format has been embraced by those who use an agile approach to software delivery as a way to define tests for small, discrete pieces of functionality. It is generic enough that it can also be used to describe and define high-level expected outcomes for any type of delivery lifecycle and any kind of outcome which encompasses an action and a response. Exhibit 2 provides an annotated example of using Given/When/Then at a very high level to define the evaluation criteria for the customer call center example which was previously mentioned; a brief illustrative sketch follows the exhibit. Notice that even though this format allows for defining precise outcomes, whenever appropriate, outcomes should be expressed as ranges which include a minimally acceptable (worst case) value, a desired/“wish” (best case) value, and a planned (likely) value. When ranges are used for areas which require great precision, the ranges should be very narrow.
- This format is very well-suited for looking at outcomes which involve some kind of action and some kind of observable response. For outcomes which are not action/response oriented, consider some of the other formats which are provided in this paper.

Exhibit 2: Annotated example of Given/When/Then used to define a high-level outcome.
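To show the shape of such a criterion outside the exhibit, here is a minimal sketch assuming a hypothetical first-call-resolution measure for the call center example; the scenario wording and range values are invented for illustration and are not taken from Exhibit 2.

```python
from dataclasses import dataclass

@dataclass
class OutcomeCriterion:
    """A high-level Given/When/Then outcome paired with a worst/planned/best range."""
    given: str
    when: str
    then: str
    worst_case: float  # minimally acceptable value
    planned: float     # likely value
    best_case: float   # desired / "wish" value

# Hypothetical criterion; all wording and values are illustrative assumptions.
first_call_resolution = OutcomeCriterion(
    given="a customer calls with a routine billing question",
    when="the agent handles the call using the new support software",
    then="the question is resolved during that first call",
    worst_case=70.0,  # percent of such calls resolved on first contact
    planned=80.0,
    best_case=90.0,
)

def minimally_successful(criterion: OutcomeCriterion, measured_percent: float) -> bool:
    """The outcome is at least minimally successful once the worst-case level is reached."""
    return measured_percent >= criterion.worst_case
```

Note that the range travels with the criterion itself, which keeps “what success looks like” visible whenever the criterion is discussed or evaluated.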
- Some outcomes seem so intangible or so long-range that it is tempting to think that they cannot be defined or measured at all. In such situations, a good approach is to look for evidence that the outcome has been or is likely to be achieved. For example, if one were trying to measure whether someone at the homeless shelter has matured enough to achieve income-earning stability, one might look at the number and kind of new skills that person had acquired while on the job, or the person's attendance or absentee rate on the job, and use this information to infer whether or not the person is likely to remain in that job. A benefits worksheet, such as the one in Exhibit 3, can be used to specify indirect evidence of outcomes; a brief illustrative sketch follows the exhibit.
- Using a benefits worksheet allows those who define outcomes and evaluation criteria to think “out of the box” and explore new ways of gathering evidence. However, creating these kinds of criteria is risky unless subject matter experts are deeply involved, along with organizational data stewards.

Exhibit 3: A benefits worksheet for defining outcomes.
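A benefits worksheet is essentially a small table; the sketch below shows one hypothetical row for the shelter example as a Python data structure. The column names and entries are assumptions made for illustration, not the columns of Exhibit 3.

```python
# One hypothetical benefits-worksheet row; column names and values are illustrative.
benefits_worksheet = [
    {
        "desired_outcome": "Shelter residents achieve income-earning stability",
        "indirect_evidence": "New job skills acquired; attendance/absentee rate on the job",
        "data_source": "Employer check-ins and case-worker notes",
        "collection_frequency": "Quarterly, for twelve months after job placement",
        "worst_case": "Still employed at six months",
        "best_case": "Still employed at twelve months with at least one new skill certified",
    },
]
```

Keeping the evidence, its source, and its collection frequency together in one row makes the cost of gathering that evidence easier to see and to challenge.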
A Pragmatic Approach for Measuring Outcomes
Given that outcome measurement is increasingly necessary and is often complex and expensive, here are a few ideas for how to pragmatically, realistically, measurably, and transparently assess value in support of project, program, and portfolio solution evaluation.
- Find ways to be semi-quantitative about qualitative findings.
- Surveys often have open-ended questions in addition to scaled-response questions. Creating a word cloud from the answers to open-ended questions is a powerful way to provide a semi-quantitative sense of what was important to many respondents; a simple sketch of how the underlying word counts can be produced follows Exhibit 4.

Exhibit 4: A word cloud visualizing the top five responses to “How do you feel about the change in call center procedures?”
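As a small illustration of how the counts behind such a cloud might be produced, the sketch below tallies word frequencies from open-ended answers using only the Python standard library; the sample responses and the stop-word list are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical open-ended survey responses about the call center change.
responses = [
    "The new procedures make calls faster and less frustrating",
    "Faster response, but the scripts feel impersonal",
    "Training was rushed; calls are faster once you learn the screens",
]

# Words too common to be informative; this list is an illustrative assumption.
stop_words = {"the", "and", "but", "are", "was", "you", "a", "to", "once", "feel", "make", "less"}

words = re.findall(r"[a-z']+", " ".join(responses).lower())
counts = Counter(word for word in words if word not in stop_words)

# The most frequent terms can then be sized proportionally in a word cloud.
print(counts.most_common(5))
```

The same tallies that drive the cloud also give a quick, semi-quantitative read on recurring themes across respondents.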
- Consider ways to make it easy for those who give feedback to provide it. In today's world, nearly everyone suffers “survey fatigue,” bombarded with surveys from all kinds of organizations which are all trying to get feedback in a way which is convenient and easy for the organization, but time-consuming for the respondent. Sometimes, conducting quick informal feedback events such as setting up a table in a cafeteria and enabling people to provide feedback by “voting” with colored dots, can provide a lot of insight on an outcome with less wear and tear on all involved.
- Consider using data which an organization already captures for other purposes. Many organizations define and collect Key Performance Indicators (KPIs) at an organizational level. Many KPIs relate to operational outputs, but some are outcome-oriented, such as those related to customer satisfaction or to corporate, environmental, or sustainability considerations. If such KPIs are collected and are relevant to a particular project, determine whether the data used to calculate the KPI can be segmented in such a way as to be indicative of the outcome the project is attempting to achieve.
- If the data needed to measure outcomes is not readily available, look for other data which would provide indirect evidence of the outcome. Again, if one wanted to determine something about income-earning stability, looking at the number and kind of new skills that a person had acquired while on the job and that person's attendance or absentee rate on the job, might be a way to infer whether or not the person is likely to remain in that job.
- If the data or indirect evidence needed to measure a desired outcome is not readily available, consider building the cost of acquiring, tracking, managing, and reporting on it into the project. If the cost is very substantial, or the effort is very time-consuming or complex, consider less quantitative ways of getting a sense of the outcome, such as through surveys or feedback events.
Evaluation in Your Organization
All of the concepts and techniques presented in this paper are thought aids. As previously mentioned, how and whether one conducts outcome measurement depends upon the cost of obtaining the information needed to determine whether or not the evaluation criteria are met. The types of evaluations which are conducted also depend upon the degree of precision required, the initiative's size, scope, and complexity, organizational opportunities and constraints, and possibly industry and regulatory constraints. With that in mind, one can use a checklist such as the one in Exhibit 5 as a thought aid when deciding what kinds of outcomes to examine and how to measure them.

Exhibit 5: A partial checklist for choosing outcome measurement practices.
Conclusion
Sometimes a pithy cliché or aphorism says a lot more than a ton of words. In that spirit, here are two to consider which have bearing on the importance of outcome measurement:
“If you don't know where you are going, any road will get you there…” aphorism paraphrased from Lewis Carroll's Alice in Wonderland
“If you don't know where you are going, you'll end up someplace else.”… attributed to Yogi Berra, New York Yankees catcher, manager, and coach.
If you need to evaluate the success of your project initiatives and make decisions about what future initiatives to pursue, time spent defining and measuring outcomes is time well spent. With that in mind, it's important to be realistic and pragmatic about the types of outcomes to define and measure. Optimally, outcome measurement should take its place as a valuable tool in support of project, program, and portfolio decisions and not take on a separate life of its own.
References
Fowler, M. (2013, August 21). Given/When/Then. Martin Fowler. Retrieved from http://martinfowler.com/bliki/GivenWhenThen.html
Gilb, T. (2005). Competitive engineering: A handbook for systems engineering, requirements engineering, and software engineering using Planguage. Waltham, MA: Butterworth-Heinemann.
Gilb, T., & Gilb, K. (2006). Planguage concepts glossary. Retrieved from http://www.gilb.com/tiki-page.php?pageName=Competitive-Engineering-Glossary
Gorman, M., & Gottesdiener, E. (2012). Discover to deliver: Agile product planning and analysis. Sudbury, MA: EBG Consulting.
Gorman, M., & Gottesdiener, E. (2014, June 3). It's time to put value in the driver's seat. Project Times. Retrieved from http://www.projecttimes.com/articles/its-time-to-put-value-in-the-drivers-seat.html
Gorman, M., & Gottesdiener, E. (2014, June 17). Focus on value: 4 factors every team should consider. Success within Requirements. Retrieved from http://ebgconsulting.com/blog/focus-on-value-4-factors-every-team-should-consider/
Hubbard, D. (2014). How to measure anything: Finding the value of intangibles in business (3rd ed.). Hoboken, NJ: Wiley.
Marr, B. (2012). Key performance indicators (KPI): The 75 measures every manager needs to know. Saddle River, NJ: FT Publishing/Pearson.
Project Management Institute. (2014). Business analysis: A practice guide. Newtown Square, PA: Author.
Project Management Institute. (2014, January). Pulse of the profession®: The high cost of low performance. Newtown Square, PA: Author.