Quantitative project management--use of metrics for effective project management
Apolak Borthakur, Senior Manager Product Line, BMC Software
Nilesh Mate, Senior Project Controller, BMC Software
This paper relies on the following basic hypotheses: quantification is required to “measure” performance; analysis of those measures provides insights that point towards corrective action; and this process leads to improved project performance.
This idea is hardly new, but even seasoned project managers often neglect metrics. Even where metrics are applied, they tend to be applied mechanically, without regard to the specific needs of the project. This paper will describe the metrics used, the methods adopted for measurement and analysis, and the benefits that have been observed. Though most of these measures are applicable to all types of projects, they are derived from applying them in IT/software project environments.
The paper will also cover some insights and do's and don'ts that the authors have developed while applying these measures in practice. In conclusion, the authors contend that quantification and measurement are the key to project performance. Significant improvements have been observed by following a quantitative approach to project management.
Quantitative project management involves the use of measurements (metrics) to help effective management of projects. Introduction of metrics as a tool for project management involves the following steps.
- Defining “what” to measure. Measures that provide the maximum insight into the objectives, and bear the strongest correlation with them, should be used.
- Defining the measurement framework. This involves deciding the tools to be used, the frequency at which observations will be collected, and assigning responsibilities for measurement and presentation.
- Collecting and presenting the data. This is getting down to brass tacks, where project controllers collect the information and present it to management in an easy-to-digest format.
- Analyzing the data and initiating corrective action where necessary.
- Reviewing the effectiveness of the measurement and corrective actions by comparing “before” and “after” numbers. This is critical to ensure that measurement does not become a mere “overhead” activity.
The following sections describe these steps in detail, alongside the measures that the authors have found to have the maximum impact.
Designing a metrics program
Predictive and Corrective metrics
Some measures help you forecast or identify a trend. They are very useful because they give early warning that some parameter of the project is going astray. We call them “predictive” metrics.
Other measures are taken during the closing phase of projects or modules. They help in setting benchmarks for the future and assimilating lessons learnt. We call these “corrective” metrics.
While “corrective” metrics have their own value, “predictive” metrics are a lot more useful for project managers.
Metrics have to be closely aligned with the major objectives of the project. The metrics program that will be described in this paper had the following objectives.
- Predictability (performance against plan)
- Quality (lower number of observed defects or “non-conformance to specs”)
- Responsiveness (speed of responses and resolutions given to stakeholders)
- Efficiency (better utilization of available resources)
- Productivity (enhancing the output of the existing resources)
The project management office defines metrics correlated with the above objectives and tracks them regularly. It further helps in analyzing the trends and variances and implementing corrective actions.
Determining the mechanism of data capture and presentation
After figuring out “what” to measure, one needs to determine “how” to measure it. This is dictated by the management tools in use at the organizational level. Some project managers are lucky enough to have a single system that enables recording and querying and functions as a full-fledged Management Information System at the same time. It is more common to find that project controllers have to go to several different systems that cater to time reporting, scheduling, defect tracking, helpdesk, etc. in order to generate sensible data.
The process of capturing the information and collecting and analyzing the measures needs to be designed and understood by everybody. One should automate these processes to the extent possible to prevent errors and save time and effort on the part of the project management team.
Effort/Cost and Schedule variances
The objective of these measures is to track the progress of the project against plan. Of all the predictive metrics we have collected and presented to our teams, charts like the one shown in Exhibit 1 below have had the maximum impact.
Here, we simply take the planned amount of work completed, compare it with the actual work completed, and plot both as a time series for a given project. Both figures are readily available from any standard project scheduling software.
Very often, teams (or even senior managers in charge of projects) do not know how to read where the project stands with regard to plan, leading to very generic “we are on track” status reports. By adding a quantitative measure with a visual indicator, we make a big impact on the thinking of the project team and management.
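The planned-versus-actual comparison behind such a chart can be sketched in a few lines of Python (the snapshot layout and the 5-point threshold are our own illustration, not from the paper):

```python
def status_rows(snapshots, threshold=5):
    """Turn weekly (label, planned %, actual %) snapshots into report rows.

    Flags a week as BEHIND when actual completion trails plan by more
    than `threshold` percentage points (the threshold is our own choice).
    """
    rows = []
    for week, planned, actual in snapshots:
        flag = "BEHIND" if planned - actual > threshold else "on track"
        rows.append((week, planned, actual, flag))
    return rows

# Hypothetical weekly percent-complete figures exported from a scheduling tool
print(status_rows([("W1", 10, 8), ("W2", 25, 18), ("W3", 40, 30)]))
```

The same rows can be fed straight to a charting library to produce the visual indicator described above.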
In addition, we also use the following equation to determine the effort (or cost as the case may be) and schedule variances for all the identifiable work units in a project.
Variance (%) = (Planned - Actual)/Planned * 100
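As a minimal sketch, the variance formula translates directly into code (the function name and example figures are our own):

```python
def variance_pct(planned: float, actual: float) -> float:
    """Effort/cost or schedule variance as a percentage of plan.

    Positive means under plan; negative means an overrun.
    """
    if planned == 0:
        raise ValueError("planned must be non-zero")
    return (planned - actual) / planned * 100

# 200 person-days planned, 230 actually spent -> a 15% overrun
print(variance_pct(200, 230))  # -15.0
```

The same function serves for cost variance by passing monetary amounts instead of effort.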
Quality metrics
In an environment where cost and time tend to be uppermost in every planner's mind, quality is often a soft target that can get compromised along the way. Quality assurance should therefore always be on the radar of project managers as an activity that continues throughout the execution of a project.
The use of a good defect tracking system greatly facilitates the preparation of quality metrics. These systems enable logging of defects and tracking them to closure. Typically, these tools also have query interfaces that can help generate relevant information.
Trends in defect detection and closure
One of the more illustrative graphics on our status reports is a chart that indicates how many defects were detected on a project, broken down by status and plotted against time in the form of a stacked chart, as shown in Exhibit 2.
The various elements of the stacked chart show the status. Obviously, as the project approaches closure, one would expect defects to move from the “open” and “fix available” statuses to either “deferred” or “closed” – preferably a lot more “closed” than “deferred”. One would also expect the total length of the bar to level off as the quality stabilizes. This chart and the related data provide timely information about quality issues on the project.
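The counts feeding such a stacked chart can be produced with a simple aggregation over a defect-tracker export (the tuple layout and sample statuses here are illustrative):

```python
from collections import Counter

def status_breakdown(defects):
    """Count defects per (snapshot, status) pair for a stacked chart.

    `defects` is a list of (snapshot_label, status) tuples, e.g. rows
    returned by a defect-tracker query.
    """
    return Counter(defects)

sample = [("W1", "open"), ("W1", "open"), ("W1", "closed"),
          ("W2", "open"), ("W2", "closed"), ("W2", "closed")]
counts = status_breakdown(sample)
print(counts[("W2", "closed")])  # 2
```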
Drilldown into quality issues
It is often necessary to drill down further into the quality data. One of the most common drilldown features is the classification of defects into specific categories. For instance, our reports often contain the defect statistics broken down by product feature as shown in Exhibit 3. This enables identification of the problem areas and root causes.
Project teams can quickly draw some very useful inferences from this type of information. For instance, it would appear from the above chart that the core logic is the major culprit in terms of defects. But if the project team knows that the user interface is actually only about 10% of the project's functionality, yet accounts for nearly a third of total defects, then clearly that is the area to focus on for major quality improvements.
For the summary as well as the drilldown, one can further classify the defects by “severity” and “priority” to gain further insights into the nature and seriousness of quality issues.
Defect density
In determining defect density, we normalize the defects reported by the overall size of the project to enable project-to-project comparisons. Defect density is calculated by the simple formula below.
Defect Density = Total defects found/Measure of the size
While counting the total defects found, we often take a sum weighted by the severity or priority of the defects. For instance, critical defects are weighted higher than low-priority defects. For the size measure, we often go for something simple such as the total planned effort in person-days. In our experience, this works well, because the planned effort reflects both the “size” and “complexity” of the project. Of course, one could substitute more sophisticated size measures; in the case of software projects, the denominator could be “function points added” or “lines of code written”.
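A sketch of the weighted defect-density calculation under these assumptions (the severity weights are an illustrative choice, not prescribed here):

```python
# Illustrative severity weights -- the actual scheme is a project choice.
SEVERITY_WEIGHT = {"critical": 5, "major": 3, "minor": 1}

def defect_density(severities, planned_effort_person_days):
    """Severity-weighted defects per person-day of planned effort."""
    weighted = sum(SEVERITY_WEIGHT[s] for s in severities)
    return weighted / planned_effort_person_days

# 2 critical + 3 minor defects over 100 planned person-days
print(defect_density(["critical", "critical", "minor", "minor", "minor"], 100))
# 0.13
```

Substituting function points or lines of code for the denominator requires no change to the function itself.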
Responsiveness metrics
Simply put, responsiveness tracks the speed with which an issue is responded to and addressed. Here, we use a helpdesk system to log “tickets” and track them to closure, recording all the actions taken on each ticket. This facilitates the generation of all the responsiveness metrics.
This is tracked using a chart similar to Exhibit 4, which tracks the age of cases that are still open. This, combined with the arithmetic mean of the age of open cases, gives us a very good indication of the “wait time” customers face before their issues are resolved.
While the above data focuses mostly on “open” tickets, we also look at the cumulative data to reflect on the rate of closing tickets over a period of time, as in Exhibit 5, where we track the number of cases assigned to the group and resolved up to that point. Ideally, the two lines should be converging. In the exhibit below, we find that the resolution rate is not keeping pace with the rate at which tickets are being raised, which is obviously a matter of concern.
These graphics are combined with numbers such as the “Average Turnaround Time”, where we track the amount of time (or effort) it took to resolve a particular ticket.
Average Turnaround Time = Effort (or time) spent on resolutions/Total number of tickets resolved
Here too, one could weight the ticket count by the severity or priority of the tickets.
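A minimal sketch of a severity-weighted turnaround calculation, computing mean resolution time per (weighted) ticket (the weighting scheme and hour figures are illustrative):

```python
def average_turnaround(tickets):
    """Mean resolution time per (weighted) ticket.

    `tickets` is a list of (hours_to_resolve, weight) pairs, where the
    weight can encode severity or priority.
    """
    total_hours = sum(hours for hours, _ in tickets)
    weighted_count = sum(weight for _, weight in tickets)
    return total_hours / weighted_count

# Two ordinary tickets (weight 1) and one critical ticket (weight 2)
print(average_turnaround([(4, 1), (6, 1), (10, 2)]))  # 5.0
```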
Efficiency metrics
Since we deal with software projects, the most important efficiency measure for us is how efficiently we use the existing manpower. A simple measure that we use is to calculate (on a predictive as well as corrective basis) the number of person hours spent on planned engineering activities. The planned engineering activities are derived from the project schedule, whereas the amount of time worked is derived from the time reporting system.
Utilization (%) = Planned engineering effort/Total time spent on the project * 100
In the numerator, we only consider “planned” effort, thus making sure that effort or schedule overruns will not inflate the utilization artificially.
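A sketch of the utilization calculation (parameter names are our own):

```python
def utilization_pct(planned_engineering_hours, total_hours):
    """Share of reported time spent on planned engineering work.

    Only planned effort enters the numerator, so effort overruns
    cannot inflate the figure artificially.
    """
    return planned_engineering_hours / total_hours * 100

# 1200 planned engineering hours out of 1600 hours reported
print(utilization_pct(1200, 1600))  # 75.0
```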
To provide further insights into how the available productive effort is being utilized, we use the time reporting system to generate statistics about the “break-up” as shown below in Exhibit 6.
We find that such a view often surprises managers who think they “know” what their team is doing. For instance, the above team kept complaining that they were running behind schedule because of the large amount of time they had to spend on “customer support”. By analyzing the above report, we found that only 7% of their time was actually devoted to this activity.
This also serves as a planning input, as we learn how much time is “normally” required for research or support activities, which tend to fall outside the main project schedule.
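The break-up view can be produced by a simple aggregation over time-reporting rows (category names and figures are illustrative, loosely echoing the customer-support example above):

```python
from collections import Counter

def time_breakup(entries):
    """Percentage of total reported time per activity category.

    `entries` is a list of (category, hours) rows, e.g. exported from
    a time-reporting system.
    """
    totals = Counter()
    for category, hours in entries:
        totals[category] += hours
    grand = sum(totals.values())
    return {cat: round(hrs / grand * 100, 1) for cat, hrs in totals.items()}

rows = [("engineering", 70), ("customer support", 7),
        ("research", 13), ("meetings", 10)]
print(time_breakup(rows))
```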
Productivity metrics
For productivity, we take the efficiency measures further by determining how much “output” was produced by the time individuals or teams spent on particular activities. We have tried the following measures for quantifying output.
- Features added
- Defects raised/fixed
- Use cases developed/tested
- Function points developed
- Lines of code produced
It must be admitted however, that it is a challenge to arrive at a metric that is truly objective and comparable across projects, because each project has its unique environment.
Analysis of data
The interpretation and analysis of the data is often more difficult than the design and measurement phase. Some common guidelines that we have evolved after years of working in this area are as follows.
Fix the cause, not the symptom
Especially in programs with high visibility, the focus of any analysis quickly shifts towards “fixing the numbers” rather than solving the core issue. We often come under pressure to amend the formulae used for calculation, or to drop some data from reporting, in order to prevent escalations. While these suggestions sometimes have merit, we encourage people to find the root causes and fix those, rather than try to make the charts look good.
Tailor the presentation to the audience
In some cultures, criticism puts people on the defensive. Some people may take any critical observation rather personally. The analysis should steer clear of provoking such extreme responses. This may mean that the presentation style be amended to suit the personality of the team or the individual managers.
Avoid jumping to conclusions
Sometimes, people running a metrics program can get carried away by the power of some observations and call for drastic changes. One needs to bear in mind that until the causal analysis is done and approved by an empowered team, it is premature to discuss corrective actions.
Share data with discretion
The power of numbers is often underestimated. Very often, numbers get misused in inter-team and inter-business unit politics. Therefore, discretion should be used in sharing the data with other parties. Ideally, until the analysis is completed, only the concerned managers should be involved in the distribution.
Use statistical techniques
Use common statistical tools to make sense of data that has a high variability. One may need to carefully weed out the extreme data points to get sensible trends.
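One simple way to weed out extreme points before trend analysis (the median ± k·σ rule and the sample data are our own illustration; an IQR fence or control-chart limits are common alternatives):

```python
import statistics

def trim_outliers(values, k=1.5):
    """Drop points farther than k sample standard deviations from the median."""
    if len(values) < 3:
        return list(values)
    mid = statistics.median(values)
    spread = statistics.stdev(values)
    return [v for v in values if abs(v - mid) <= k * spread]

weekly_variance = [4, 5, 3, 6, 48, 5]   # one freak data point
print(trim_outliers(weekly_variance))   # [4, 5, 3, 6, 5]
```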
Benefits observed
We have been using metrics for a number of years now for planning, execution and control of projects. We have noticed significant improvements in project outcomes by running an effective metrics program and coupling it with analysis and corrections. Some of the observations are as follows.
- Nearly 50% improvement in predictability metrics
- Defect densities have come down by 30 to 50%
- Responsiveness has gone up by up to 50%
- Utilization rates have increased by between 30% and 50%
- In all cases without exception, we have been able to provide early warning of schedule or cost overruns, quality issues or the possibility of low utilization. This is something that senior management really appreciates.
Future directions
The material used for the presentation is part of an ongoing activity that constantly calibrates the effectiveness of metrics in improving project performance. The correlation between the following parameters is of particular interest to us and the subject of ongoing research.
Correlation between working styles and performance
This deals with questions that relate to individual working styles and their impact on project performance. The following examples come to mind.
- How does the amount of time spent in the office correlate with work output?
- What is the impact of high utilization rates (translating into workload) on quality measures, or on soft measures such as “employee morale” or employee turnover rate?
Correlation between metrics
Here, we try to identify a correlation between the metrics to lead to inputs that may be valuable at the planning stage. For example, does a high resource utilization rate lead to riskier projects (increase the probability of missing deadlines)? Is there an “optimum” resource utilization rate that one should try to achieve?
Correlation between “improvements” and metrics
This goes back to confirming the theory that active collection, reporting and analysis of data leads to improvement in performance. The improvement needs to be quantifiable as well; otherwise one has to revisit the measures used and the methods of analysis.
© 2005, Sandeep Shouche
Originally published as a part of 2006 PMI Global Congress Proceedings – Bangkok, Thailand