Defining an effective program metric
In almost every endeavor we undertake, at some point, we will be asked to describe our progress toward completion. Whether the question is as simple as “How are you coming on the design specs?” or “Daddy, are we there yet?” or as complex as “Where do we stand on a cost basis on the piling installation contract?” we need to be able to respond quickly, accurately, and completely. Our ability to convey this information, as well as the believability of our answer, is dependent upon the measuring tool we use to assess progress.
No single measuring tool has universal applicability, but all possess a set of common characteristics or requirements. This paper will identify the basic characteristics of a time-based Progress Metric, and show how each contributes to the effectiveness of the measuring tool. The seven key elements we will discuss are: the vision of success for the task or project; the purpose of the metric; the path to completion algorithm and achievement of the vision of success; the starting point for the assessment of progress; an actual vs. plan comparison accomplished regularly over a span of time; the range of allowable values for planned or actual progress; and the appropriate level of detail. (See Exhibit 1.)
By examining these basic characteristics, we will provide a useful guide for new Project Managers who may discover new ways to summarize and present project or program information to their management teams. This article may also serve as a refresher for experienced Project Managers, helping them avoid amassing great quantities of data that don't truly help to accurately, effectively, and efficiently portray progress toward success.
The difference in the questions we are asked sometimes leads to the selection of a different measuring tool. The cost basis question requires an in-depth analysis of design costs, move-in and setup costs, contractor demobilization charges, numbers of piles required, number of piles driven successfully, number of failed piles, budget targets, schedule targets, and a myriad of other details. We might need to calculate the Budgeted Cost of Work Scheduled, the Budgeted Cost of Work Performed, the Actual Cost of Work Performed, etc. This type of question requires almost constant monitoring by the members of the Project Office, in anticipation of the question, or the answer will be greatly delayed waiting for the counting and accounting to occur.
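The earned-value terms above follow a simple arithmetic pattern. As an illustration only, with hypothetical pile counts and dollar figures (none of these numbers come from an actual contract), the variances and performance indices can be sketched as:

```python
# Illustrative earned-value calculation for a piling contract.
# All quantities and dollar figures below are hypothetical.

def earned_value_summary(bcws, bcwp, acwp):
    """Return schedule/cost variances and performance indices."""
    return {
        "schedule_variance": bcwp - bcws,  # negative => behind schedule
        "cost_variance": bcwp - acwp,      # negative => over budget
        "spi": bcwp / bcws,                # schedule performance index
        "cpi": bcwp / acwp,                # cost performance index
    }

# Example: 60 of 100 piles planned by this date, 55 actually driven,
# at a budgeted $1,000 per pile, with $62,000 actually spent.
summary = earned_value_summary(bcws=60_000, bcwp=55_000, acwp=62_000)
print(summary)
```

A negative schedule variance combined with a CPI below 1.0 tells the Project Office the work is both behind plan and over budget, before senior management ever asks the question.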
A “How are you coming?” question may allow you to use a “How You Doing?” meter (see Exhibit 2), and it almost always elicits a response of “Fine!” The scaling on this meter, however, is not terribly refined, and requires interpretation by the receiver of our assessment: how close to “Awful” can you get and still be “Fine?” How close to “Awful” are we really? If we're “Great,” does that mean we're done, or just that we're right on track for where we should be? Reliance on the information from the “How You Doing?” meter depends upon the integrity, track record, and expert authority of the analyzer. The one thing this assessment has in common with the detailed analysis previously introduced is its reliance on preparing an answer ahead of the question.
A Project Manager needs to anticipate the questions that will come from senior management about his or her project, and develop sufficient data to support the answers. As Project Managers, we are entrusted with someone else's resources (time, money, personnel, reputation) and asked to use them effectively to accomplish a stated goal. We should not be surprised that they would want to know how well we are using those resources and how close we are getting to being done. Standard Project Management tools like schedules, budgets, and specifications give us valuable and viable information about what we hope to accomplish, and can be used to record what we have consumed in the process, but can fall short of describing where we are versus where we need to be. Other program-specific, work-in-progress, earned-value measurements can provide more focused, more in-depth examinations of projects, subprojects, or specific project tasks. These measurements can be used to ensure a consistent evaluation process is used each time and to increase the believability of the resulting assessment. Depending on the level of detail incorporated into the measurement, it may also help the Project Manager identify the area(s) that require extra attention to maintain or restore progress toward the goal.
Elements of a Time-Based Progress Metric
So, what is a Progress Metric? For the purpose of this paper, we will define a Progress Metric as any presentation of data (usually “written” or “publishable,” including, of course, electronic media) that measures and/or assesses progress toward a stated goal, where the measurement or assessment is made regularly and periodically, and compared to an anticipated rate of progress that was defined during the planning phase of the task or program. Progress Metrics should be incorporated into the Communications Plan for the project. A Progress Metric can be either quantitative or qualitative in design. In either case, the key elements discussed below are applicable.
Exhibit 1. Elements of a Time-Based Progress Metric
Exhibit 2. A “How You Doing?” Meter
Single point-in-time measurements and reports certainly have merit, and do provide management with information useful in the decision-making process, but we will not consider them to be stand-alone Progress Metrics. Rather, we will treat them as input data to a Progress Metric. By this limitation on our working definition, we will not discuss the elements of a bar graph or pie chart, but will address only the characteristics of a measuring tool that is used repeatedly over a span of time.
Key Elements of a Time-Based Progress Metric
The key elements of a Progress Metric are its heart and soul, and are independent of its appearance. Whether our metric is a detailed calculation of costs, graphed over time, climbing toward some target maximum, or a reading and reporting of our progress using our “How You Doing?” meter, we can define each of the key elements. For more qualitative responses to questions from management, we may not realize that these elements exist. The more thorough we are, however, in recognizing these elements and expressing them, the more understandable and believable our response will be.
Definition (Vision) of Success
In Lerner and Loewe's Paint Your Wagon, a musical comedy about the California Gold Rush, the miners sing, “Where am I going, I don't know. When will I get there, I ain't certain. All I know is I am on my way.” This may be a reasonable sentiment for a group of itinerant prospectors, each hoping to find his own pot o' gold wherever it may be. It isn't, however, an acceptable approach to project management. In fact, Dr. Harold Kerzner states almost the exact opposite. Among fundamental lessons for management, Kerzner lists, “Establish and use planning and control systems as the focal point of project implementation: Know where you're going; know when you've gotten there” (Kerzner, 2001, p. 464).
Exhibit 3. A Linear Path to Completion
Before we can report on progress or formulate a Progress Metric, we must have a clearly defined vision of success. If we're trying to measure our progress toward a goal, we must be able to articulate that goal. Everyone on the program team needs to recognize, understand, and accept the stated goal as the objective of his or her efforts. The goal must be unambiguous and unchanging. For a Progress Metric to be effective, our assessment against the goal must be made in the same way, each time we look at our progress.
Since we can create a Progress Metric for a single task, for a group of related tasks, for a whole project, or even a group of related projects within a program, the clear statement of our vision of success is critical as it bounds the scope of work we are trying to accomplish. Just as “scope creep” can be a contractor's undoing in a fixed-price contract, an incomplete or variable vision of success will leave the project team frustrated, as they are left to wander aimlessly after an elusive “pot o' gold,” never knowing when they will “get there,” knowing only that they are on the way. A well-defined vision of success ensures that the project team can bring the task at hand to closure.
Purpose of the Metric
Underlying all of the other elements of a Progress Metric is its purpose. We need to know what we are trying to measure, what we are trying to control, and who will benefit from having the information about our progress, in order to identify why we need a Progress Metric. Gathering, arranging, plotting, and printing data are all greatly enhanced by the tools available to today's Project Manager—computers, spreadsheets, and statistical software packages let us grind out reams and reams of reports. But just because we can, doesn't mean we should. We need to know how the metric will enhance our ability to manage our program; the metric has to have a purpose.
Progress Metrics are intended to provide the data necessary to adequately answer some question from senior management. We identify the purpose of the metric by the question we intend to answer. By relating the purpose of our metric to a question we anticipate being asked, we also begin to describe the audience for the metric. “How are you doing?” is a different question when asked by the Director of Engineering than it is when asked by the Chief Financial Officer. The Progress Metrics used to answer these questions likely will measure against different parameters, and serve different purposes.
Exhibit 4. A Theoretical “S” Curve Path to Completion
While they are not unrelated, we should make the distinction between the metric's purpose and the vision of success. Purpose relates to the metric document itself, describing the management function that will be served by the information contained in the metric. Vision of success is related to the physical work of the program, describing what we intend to accomplish by the actual program task or subproject. If our vision of success anticipates being ready to issue construction contracts, the metrics we prepare for the Director of Engineering will address “Design Completion” or “Number of Bid Packages Written and Assembled.” The metrics for the CFO will review “Engineering Costs Expended” or an assessment of how much of the construction cost estimate is based on detailed engineering.
A secondary feature of a metric's purpose is its relevance as a key indicator of our ability to achieve the vision of success. If we are trying to assess our progress toward “Design Complete,” tracking the number of open compatibility issues (actually, incompatibility issues) may be a much better indicator of our potential for success than counting the number of drawings released. The number of drawings is too open-ended a list to be effective: are 20 drawings enough to complete the design? Do we need more than 10? Incompatibility issues, on the other hand, should be a declining number, reaching zero when all of the component parts of our design fit together in a harmonious package. “Design Complete” can only be declared after we have eliminated design incompatibilities.
Path to Completion
Knowing where we are going allows us to chart a course for how we're going to get there. The path to completion embodies all of the Project Manager's art in planning. Building from the overall planning process that laid out the original program schedule, we need to examine the exact process by which a task will be accomplished, not just its starting and ending dates. Our Progress Metric will be used to assure senior management that we are making adequate progress toward completion at any moment in time, so we need to accurately identify how progress will be made.
Within the body of the Progress Metric document, the path to completion is the graphical representation of the algorithm by which we can calculate progress. Theoretically, this can be expressed:

y = f(x1, x2, …, xn, t)
Exhibit 5. A “Real World” Path to Completion
where y is our parameter of success (e.g., percent complete, total dollars expended, miles traveled); x1, x2, …, xn are our measurable variables; and t is time.
If we could accomplish our task simply by driving at a constant rate of speed, or by merely passing through the days of a week, our path to completion would be linear (see Exhibit 3). In the project world, we might be more apt to model our progress as almost linear, but with a slower ramp-up period at the beginning of the task and ramp-down period at the end of the task, leading to the traditional “S-curve” (see Exhibit 4). In the real world, when constructing our metric, we will need to build in any of the slow-down periods we are likely to encounter, regardless of whether they are at the beginning, end, or middle of our task (see Exhibit 5). A construction project, running from May 1 to October 1, could make reasonably constant progress each week, but will likely exhibit a no-activity plateau around Memorial Day, Labor Day, and the Fourth of July. In plotting the path to completion for our Progress Metric, we should show these anticipated slowdowns.
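The holiday-plateau idea can be sketched numerically. In this hypothetical model, each working week contributes an equal increment of progress and holiday weeks contribute none; the week count and holiday placement are assumptions for illustration, not data from the paper:

```python
# Sketch of a "real world" path to completion: constant weekly progress
# from May 1 to October 1, with no-activity weeks assumed around
# Memorial Day, the Fourth of July, and Labor Day.

def planned_path(n_weeks, holiday_weeks):
    """Cumulative planned percent complete, week by week."""
    increments = [0.0 if w in holiday_weeks else 1.0 for w in range(n_weeks)]
    total = sum(increments)
    path, done = [], 0.0
    for inc in increments:
        done += inc
        path.append(round(100.0 * done / total, 1))  # cumulative % complete
    return path

# 22 weeks, with hypothetical plateaus in weeks 4, 9, and 18.
plan = planned_path(22, holiday_weeks={4, 9, 18})
print(plan[:6])  # a plateau shows up as two consecutive equal values
```

Plotted over time, this produces the stepped curve of Exhibit 5 rather than the smooth line of Exhibit 3 or the theoretical "S" of Exhibit 4.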
As another example, if our Progress Metric were addressing the costs of an activity, we would build in a spike at the beginning to cover mobilization costs, a spike at the end for move-out costs, and a more rapid rate of expenditure through the holiday periods if we choose to pay our way out of the anticipated holiday slowdowns using overtime premiums.
Starting Point for the Assessment of Progress
An essential precursor to knowing where you are going is knowing where you are now. If the goal is to build a sandwich, the program team that already has peanut butter in the pantry has a different starting point than the team that has none. Similarly, different tasks (or groups of tasks) will have different predecessors, so our initial assessment of progress must take this into account.
A Progress Metric does not need to begin at zero. Often we encounter tasks during the design of a new product that are iterative in nature rather than strictly linear. The tool we develop to measure progress toward “Design Complete” needs to give us some credit for each loop through the design spiral. This enables us to build in some initial value for accomplishing preceding tasks that provide information necessary to enter the first cycle of the spiral. Similarly, when a program “disaster” strikes, we may be able to help the design team come back to even keel, if we can show them that not everything has changed: “Square One” will still hold all or much of the preliminary information that kicked off the current task, and we are better off than 0% complete.
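A minimal sketch of a non-zero starting point, assuming a hypothetical credit-weighting scheme in which completed predecessor work earns initial credit toward the total:

```python
# Sketch: a metric that starts above zero because completed predecessor
# work earns initial credit. The credit weights are hypothetical.

def percent_complete(initial_credit, earned, remaining):
    """Initial credit counts in the total, so the metric starts above 0%."""
    total = initial_credit + earned + remaining
    return 100.0 * (initial_credit + earned) / total

# Before the first design-spiral loop: 15 credits of predecessor work
# in hand, nothing earned yet, 85 credits of spiral work remaining.
print(round(percent_complete(15, 0, 85), 1))  # starts at 15.0%, not 0%
```

After a program “disaster,” the same calculation shows the team how much of “Square One” is still intact.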
Actual vs. Plan Comparison
The utility of a Progress Metric arises out of the regular reevaluation of our actual progress, and comparison of that value to our plan. Within the body of the Progress Metric document, the plan line is initially created using the path to completion algorithm we defined for ourselves. This line will not change throughout the useful life of the metric, unless some significant change in the underlying activity causes us to reassess and redefine our vision of success and then re-plot the path to completion. The actual line on the Progress Metric charts the results of a periodic recalculation of the path to completion algorithm, using current point-in-time assessments of the measurable variables.
If our assessment shows that the program is tracking along the original path to completion, we will want to stand back and let it proceed at its own pace. If we see our progress trending behind the path to completion, we can assess the elements of progress and apply management resources to the appropriate area(s) to return the program to a suitable rate of progress.
Conversely, if we see our measured progress running well ahead of our targeted path to completion, we need to look closely at the elements of success to make sure we have not allowed one area to proceed too quickly. Being ahead of the target curve may be good news, but it may also lead to a higher incidence of rework as the rest of the team catches up, or to the imposition of new constraints on the work of the slower elements. As keepers of the Progress Metric, when actual progress is significantly faster than the anticipated target, we may need to examine our progress assessment to ensure that the project team is working to the full scope of work described by the vision of success. If we inadvertently (or deliberately) skip steps, we may show an artificially inflated rate of progress. In this case, we need to redirect the team's efforts to ensure completion of the whole task.
Actual progress must be measured using the same algorithm each time we make the assessment, the same algorithm we used to compile the path to completion. We will generate consistent measurements and secure a meaningful comparison of actual vs. planned progress only if we have a constant standard of measure.
Another utility of the Progress Metric is its ability to indicate the need for, and to help define, a project recovery plan. Using the actual vs. plan comparison element of the Progress Metric, we can identify a trend line extending from our actual rate of progress. If this trend line points far to the right on our timeline, we know we need to look for a recovery plan. We can use our path to completion algorithm to guide development of the recovery plan, by determining which of the measurable variables are lagging behind, and which specific elements can possibly be accelerated. We will, in effect, develop a new path to completion curve by re-forecasting how we can manage the progress elements differently to increase the slope of our actual progress line.
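The trend-line idea can be sketched with an ordinary least-squares fit through recent readings; the sample readings and the 20-week planned finish below are hypothetical:

```python
# Sketch: fit a straight trend line through recent actual-progress
# readings and extrapolate to 100% to estimate the completion week.

def forecast_finish(weeks, actuals, target=100.0):
    """Least-squares slope/intercept, then solve for the target week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_a = sum(actuals) / n
    slope = (sum((w - mean_w) * (a - mean_a) for w, a in zip(weeks, actuals))
             / sum((w - mean_w) ** 2 for w in weeks))
    intercept = mean_a - slope * mean_w
    return (target - intercept) / slope  # week at which trend hits target

# Four recent readings, roughly 2.5% per week, against a 20-week plan.
finish = forecast_finish([10, 11, 12, 13], [25.0, 27.5, 30.0, 32.5])
planned_finish = 20
if finish > planned_finish:
    print(f"Trend reaches 100% at week {finish:.0f}; recovery plan needed")
```

Here the trend points far to the right of the planned finish, which is exactly the signal that a recovery plan, and a re-forecast path to completion, is required.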
Range of Allowable Values
In the discussion (and Exhibits 3, 4, and 5) above, we have described the path to completion as a single, geometric curve: a thin line on our graph. In reality, we must account for a range of values that will be arranged above and below the theoretical path to completion. Predicting percent complete for a series of tasks, using their early start dates, will yield a curve that appears on the graph to the left of a curve calculated using the late starts for those same events. Forecasting a project's spending plan will yield a band of possible values about the target line equal to the allowances built into individual cost estimates.
In addition to the possible range of values associated with our plan line, we also need to be mindful of the error tolerance associated with the measurement of our actual progress. Every measurement has an associated error term. If we use a ruler divided into eighths of an inch to measure the length of a line, we know our measurements should be accurate to ± 1/16 inch. Inherent in the measurement of actual progress is a similar error tolerance, which we must account for when we view the actual vs. plan comparison. If the measurement of actual progress falls above or below the path to completion, but is within the range of allowable values, we must exercise caution before taking any corrective actions in order to avoid over-steering a program that is actually on course.
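A minimal sketch of this guard against over-steering, assuming a hypothetical ±5-point allowable band around the plan line:

```python
# Sketch: flag a deviation only when the measured actual falls outside
# the allowable band around the planned value. The +/-5-point band and
# the sample readings are assumptions, not values from the paper.

def needs_attention(planned, actual, tolerance=5.0):
    """True only when actual is outside planned +/- tolerance."""
    return abs(actual - planned) > tolerance

# Within the band: hold steady rather than over-steer.
print(needs_attention(planned=40.0, actual=37.0))  # False
# Outside the band: time to investigate.
print(needs_attention(planned=40.0, actual=32.0))  # True
```

The width of the band should reflect both the plan's early-start/late-start spread and the measurement error of the actual readings.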
The outlook of the assessor can significantly affect the error tolerance. If we are dealing with extremely optimistic or pessimistic people, their input to the progress calculation will tend to bias the result to their way of thinking. The pessimist will always say you're behind schedule until the day after you're done. The optimist will predict a rosier outcome than you can achieve. Recognizing these traits can assist the Project Manager in reviewing the actual vs. plan comparison, and determining whether or not additional action is required.
Appropriate Level of Detail
The final element of a Progress Metric is its level of detail. The level of detail is dependent upon what question the metric is trying to answer, and for whom the information is intended. When a friend asks you “How are you feeling?” it is appropriate to respond, “Fine!” When your doctor asks that same question, you need to be more forthcoming with a significantly more detailed listing of symptoms. A friend who requires additional information will engage you in more conversation; your doctor will ask a series of probing questions and take physical measurements to more accurately discover your condition.
Progress Metrics are meant to describe the health of a task or project. As Project Managers, we must decide if we're trying to answer our friend's question, or our doctor's, and prepare an adequate response. If we want to provide an overview of the progress of our design, we may be able to look at the progress of only certain long lead components. If we need to assess our ability to deliver a complete design package, we may have to assess every part, every drawing, every mating element.
Evaluating an Effective Time-Based Progress Metric
What is an Effective Metric?
In a world of computers and calculators, of Administrative Assistants and Management Analysts, finding something to count, measure, track, graph, publish, copy, and distribute is relatively easy. Finding something that actually tells you something worth knowing takes a little more thought. An effective metric will provide useful information to project or senior management, to allow them to assess the likelihood of success in their undertaking, or to discover areas of concern early enough to take remedial action. An effective metric is a key tool in proactive project management; it is not simply a record of time passed or resources consumed (Kerzner, 2001, p. 937).
As an integral part of the Communications Plan, an effective Progress Metric must possess the characteristics of project information identified by William Skimin (1998, p. 2). The metric must be timely. It must accurately represent the current state of the project. It must be relevant, concise, and focused on its specific task or program area. It must be neutral and consistent with all other approved project data. It must be accepted as accurate by those responsible for project execution. It must be presented in a format that aids in decision-making.
As Project Managers, we should evaluate every metric we propose to use to ensure that it will be effective. Preparing great volumes of reports, just because we can, isn't an appropriate use of our time and talent, nor of our support and material resources. Answering, “Fine,” even when we can, probably provides an insufficient amount of information to management. By anticipating the questions we will be asked, by carefully considering how best we can answer those questions, and by recognizing and expressing the seven elements of a Progress Metric before we allow the metric to take form, we can utilize metrics that have an inherent value.
Conclusion
Progress Metrics can be an extremely powerful tool used by Project Managers to monitor and control their projects and programs. We have defined Progress Metrics to be any presentation of data measuring and/or assessing progress toward a stated goal, where the measurement or assessment is made regularly and periodically, and compared to an anticipated rate of progress that was defined during the planning phase of the task or program.
Progress Metrics are characterized by seven key elements:
• Vision of Success for the Task or Project
• Purpose of the Metric
• Path to Completion and achievement of the vision of success
• Starting Point for the Assessment of Progress
• Actual vs. Plan Comparison accomplished regularly over a span of time
• Range of Allowable Values for Planned or Actual Progress
• Appropriate Level of Detail.
An effective metric will provide useful information to project or senior management, allowing them to assess the likelihood of success in their undertaking, or to discover areas of concern early enough to take remedial action. Like other elements of project information, the Progress Metrics must be timely, accurate, relevant, and accepted as meaningful by those responsible for project execution.
A Project Manager can develop new Progress Metrics to help the management team make decisions about the progress of their program. To ensure that the new metric will be effective, the Project Manager can rely upon the seven key elements of a metric to help avoid amassing great quantities of data simply because it's available, and to focus on only the measurements that are key indicators of success.
References
Coleman, James H. 2000. Using Cumulative Event Curves on Automotive Programs. Ann Arbor, MI: Integrated Management Systems Inc.
Coleman, James H. 2000. Tracking the Elusive “Big Long Bar.” IMSI News and Views, Edition 5. Ann Arbor, MI: Integrated Management Systems Inc.
Kerzner, Harold. 2001. Project Management, A Systems Approach to Planning, Scheduling, and Controlling, Seventh Edition. New York: John Wiley & Sons.
PMI Standards Committee. 1996. A Guide to the Project Management Body of Knowledge. Upper Darby, PA: Project Management Institute.
Skimin, William. 1998. “Where Are We in Trouble?” Providing Information for Better Project Decisions. Proceedings of the 29th Annual Project Management Institute 1998 Seminars & Symposium, Long Beach, California. Newtown Square, PA: Project Management Institute.
Thomas, A. J. 1999. Project Manager's Desk Reference Guide to the Project Management Body of Knowledge & Review for the PMI® Examination, Eighth Edition. Denver, CO: The Hampton Group Inc.
Proceedings of the Project Management Institute Annual Seminars & Symposium
October 3–10, 2002 • San Antonio, Texas, USA