New horizons in performance management
Project implementation requires a multidimensional understanding of the dynamics of management models. Any planning and monitoring exercise that is single-dimensional in representation is bound to falter in the real, three-dimensional world. Since every project is a process of inquiry, it should allow the Project Manager to analyze, experiment and innovate within limits. This can only happen if we understand that performance management is not about precision in adherence: projects rarely keep variances at zero, and Schedule and Cost Performance Indices hardly ever equal 1! We need space to play with, yet our boundaries have to be objectively apparent.
Acceptance of “managed non-conformance” integrates some valuable insensitivity from statistical process control into the project performance baseline. Only then can we manage the project so that it has the highest probability of success.
This paper delves into structuring a multidimensional performance management model out of the “Planned Value” and “Earned Value” parameters of the Earned Value model. All comparisons between the planned and earned values can then be viewed in a three-dimensional perspective: the graphics are generated out of statistically quantified growth patterns in the second and third dimensions. Here is a tool to record the experiences of variations and their management on multiple levels and at different scales, building and allowing the project life cycle to develop through iterative processes of modeling, quantifying and visualizing.
The Background to Earned Value
The earned value method of performance measurement had its origin in the CSCSC (Cost Schedule Control Systems Criteria) developed in the USA. As of today it is the best industry standard tool for effectively integrating cost, schedule and technical performance management. There are two major objectives of an Earned Value system: to encourage contractors to use effective cost and schedule management control systems and to permit the customer to be able to rely on timely data produced by those systems for determining the contract status.
EV Quantification: three basic indicators quantify the Earned Value concept. They are the Actual Cost of Work Performed (ACWP), the Budgeted Cost of Work Performed (BCWP) and the Budgeted Cost of Work Scheduled (BCWS).
• ACWP—Represents the costs actually incurred in accomplishing the work performed within a given time period.
• BCWP—Is the Earned Value. It signifies the value of completed work. BCWP is derived by determining the budget for all completed work, including the completed portions of in-progress work. In contrast to the traditional measurements of actual costs against the budget, earned value is the performance indicator of both cost and schedule. This dual characteristic of the BCWP provides the required integration. Instead of merely stating whether or not money is being spent as fast as it was planned to be spent, BCWP when compared to ACWP indicates whether the progress achieved is worth the money spent.
• BCWS—Indicates where the project manager planned to be by a certain date. It is the indicator of planned progress—the performance measurement baseline.
These three values, determined at each reporting period and plotted cumulatively, provide a very precise picture of the performance of the work package against the budget and the schedule.
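In code, the three indicators combine into the standard variances and indices; a minimal sketch with hypothetical cumulative values:

```python
# Hypothetical cumulative values at a reporting period (currency units).
BCWS = 1000.0  # budgeted cost of work scheduled (planned progress)
BCWP = 900.0   # budgeted cost of work performed (earned value)
ACWP = 950.0   # actual cost of work performed

# Variances: negative values signal trouble.
cost_variance = BCWP - ACWP      # is the progress worth the money spent?
schedule_variance = BCWP - BCWS  # are we ahead of or behind the plan?

# Indices: a value of 1 means performance exactly on plan.
CPI = BCWP / ACWP  # cost performance index
SPI = BCWP / BCWS  # schedule performance index

print(cost_variance, schedule_variance)  # -50.0 -100.0
print(round(CPI, 3), round(SPI, 3))      # 0.947 0.9
```

Here both indices fall below 1, showing the dual cost-and-schedule character of BCWP that the text describes.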
The present environment of Earned Value Management, as now defined in the profession, provides a good and simple integration of cost and time. However, as our projects go through extensive mutation, rarely keeping variances at zero, the inadequate robustness of the model becomes apparent. EVM only considers deviations from a single-dimensional Budgeted Cost of Work Scheduled (BCWS). For the world to accept EVM as a management science in itself, the model needs to stretch into newer dimensions and horizons. Success need not only be related to the deterministic approach of pursuing a numerical goal.
Toward a Sensible Solution
The model reads occurring deviations only, but does not tell us when corrective actions are required to get back on course. Corrective action should only be directed at significant or important variances; it will not be productive to attempt corrective actions on all variances, which may be temporary trends. This paper shows an evolution of the BCWS, from a single-dimensional line first to a two-dimensional polyline and then to a three-dimensional polyline, as the EV model strives to integrate the missing dimensions.
Exhibit 1. Example of Shewhart Chart Data
The Second Dimension: Definition of Thresholds for BCWS
These thresholds should be statistically defined; they cannot be set arbitrarily as some percentage of allowable variation agreed to between the parties in the contract. Only a quantitative evaluation will make the EVM process seamlessly applicable across applications. Effectively, the process will then have a built-in “insensitivity”: as long as the cost and schedule deviations are within the threshold or tolerance lines, no specific corrective action is warranted.
Development of such threshold and tolerance lines can be derived from the basic principles of Statistical Quality Control, or Statistical Control Chart theory in general. Control charts were developed as a technique for ensuring that a process remained in statistical control, but not necessarily for ensuring that every item produced was within the tolerance limits set by the specifications. Consider a process that produces items, one of whose dimensions is of primary importance. Statistical control is achieved if the items are produced with a constant mean and variance, the variance being due to chance (non-assignable) causes.
So the basic concept is that all processes, outputs and systems exhibit variability; no two things are exactly the same. Shewhart calls the sources of variability chance and assignable causes, and Deming calls them common and special causes. What control charts do is separate the signal of variability due to assignable causes from the background noise of variability due to chance causes. When only chance causes are at play, the variation follows known patterns of statistical distributions. If a set of data is analyzed and its pattern of variation is shown to conform to such statistical patterns as are produced by chance, we can assume that only chance (common) causes are operating on the system. In such a situation the process is said to be under statistical control (Levine et al., 1995). To construct a control chart, we require an estimate of central tendency, the mean, and an estimate of variability, the standard deviation.
According to Shewhart control chart theory, the means of samples of size n are distributed normally with mean m and standard deviation

s = w̄ / (dₙ√n), where

w̄ = mean sample range
dₙ = a constant for the given sample size n
n = sample size.

Using the fact that a variable which is normally distributed has 99.8% of its values lying less than 3.09 standard deviations from its mean, it is expected that 99.8% of the sample means would lie between

m − 3.09s and m + 3.09s

These are the lower and upper action limits.

Again, using the properties of the normal distribution, it may be expected that 95% of the sample means would lie between

m − 1.96s and m + 1.96s

These values are known as the lower and upper warning limits.
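A minimal sketch of these limits in Python, assuming the process spread is estimated from the mean sample range via Hartley's dₙ constants (the numeric inputs are illustrative):

```python
import math

# Hartley's d_n constants (range-to-sigma conversion) for small sample sizes.
D_N = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def shewhart_limits(m, mean_range, n):
    """Action (99.8%) and warning (95%) limits for sample means."""
    sigma = mean_range / D_N[n]  # estimated process standard deviation
    s = sigma / math.sqrt(n)     # standard error of the sample mean
    return {
        "action": (m - 3.09 * s, m + 3.09 * s),
        "warning": (m - 1.96 * s, m + 1.96 * s),
    }

# Illustrative process: mean 100, mean sample range 10, samples of size 3.
limits = shewhart_limits(m=100.0, mean_range=10.0, n=3)
print(limits)
```

Points falling outside the warning band flag a possible drift; points outside the action band call for investigation of an assignable cause.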
Application to Costs
When we try to apply this to costs, the first thing to understand is that cost as a standard for control cannot be a specific value; it has to take the form of a frequency diagram, from which the limits of variation can be determined. If we extend the Shewhart chart method by analogy to the various costs constituting a reporting period, we realize that there is a distinct difference. Unlike the Shewhart chart methods of the production industry, what is finally monitored in a cost distribution over a period of time is not the mean of the costs of the different activities but their sum total. Exhibit 1, a small example of the calculation of the various limits, illustrates the point. Hence the formula used for deriving the control limits in Shewhart control charts needs to be modified.
Exhibit 2. Distribution of Costs on Activity Schedule
In Exhibit 1, the mean of “l” observations is “m,” and each of these “l” observations is the mean of a sample of size “n.” The mean “m” has to be modified to form the sum of “l” observations.
Adding “l” such frequencies whose mean is “m,” the formula for the control limits becomes:

l·m ± 3.09 · √l · w̄ / (dₙ√n)

(since variances, and not standard deviations, can be added up)

Similarly, the warning limits get modified to:

l·m ± 1.96 · √l · w̄ / (dₙ√n)
Since this method considers the ranges of samples of size “n,” it is important that the various ranges are of compatible order, so that the mean range of the samples has significance. This is ideal when the constituent costs are from similar or homogeneous activities, indicating the same order of quantity of work involved. Activities from the same or similar work packages are good examples: say, RCC work on floors one to ten, when the floor sizes are similar. Let's call this Method 1. However, when dealing with non-homogeneous activities, or when summing up across work packages of dissimilar nature, it is better to add up the various variances rather than considering the mean range of samples.
So, the control limits will be:

l·m ± 3.09 · √(σ₁² + σ₂² + … + σₗ²)

And the warning limits will be:

l·m ± 1.96 · √(σ₁² + σ₂² + … + σₗ²)

Let's call this Method 2.
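The two methods can be sketched as follows; the function names and inputs are illustrative, with `ranges` holding per-activity sample ranges and `variances` per-activity variances for the l activities summed in a period:

```python
import math

Z_ACTION, Z_WARNING = 3.09, 1.96
D3 = 1.693  # Hartley's constant for sample size n = 3

def method1_limits(total_mean, ranges, n=3):
    """Method 1: homogeneous activities -- use the mean sample range."""
    l = len(ranges)
    s = (sum(ranges) / l) / (D3 * math.sqrt(n))  # std error of one activity mean
    half = math.sqrt(l) * s                      # variances of the l activities add
    return {
        "action": (total_mean - Z_ACTION * half, total_mean + Z_ACTION * half),
        "warning": (total_mean - Z_WARNING * half, total_mean + Z_WARNING * half),
    }

def method2_limits(total_mean, variances):
    """Method 2: non-homogeneous activities -- add the individual variances."""
    half = math.sqrt(sum(variances))
    return {
        "action": (total_mean - Z_ACTION * half, total_mean + Z_ACTION * half),
        "warning": (total_mean - Z_WARNING * half, total_mean + Z_WARNING * half),
    }
```

When every activity genuinely has the same spread, the two methods agree; Method 2 is the safer default for dissimilar work packages.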
Exhibit 3. Method 1
Exhibit 4. Method 2
The example project in Exhibit 2 shows activities and their quantities and hence costs distributed monthly. The cumulative total of these costs when plotted on a time vs cost graph will constitute the initial Budgeted Cost of Work Scheduled (BCWS).
In order to find the threshold limits, each of these costs has to be distributed as a frequency; we need to make a frequency diagram of sample size n. We can consider normal distributions at a 95% level of confidence and start with a ±15% variation on either side of the mean, the point estimate. Each cost estimate is thus made into a sample of size 3, with a lower and an upper limit estimated at 95% confidence. So in our calculation “n” will be equal to 3, and “l” will vary between months: if a month has five activities in it, “l” will be equal to 5, and if it has two, “l” will be equal to 2.
Exhibit 3 shows part of a worksheet that calculates the action limits for cumulative costs in different months. The cumulative mean range of a month is the mean of all component ranges from the beginning of the project until that month. This reflects Method 1.
Exhibit 4 shows part of a worksheet that calculates the action limits for cumulative costs in different months by Method 2. The cumulative grand variance of a month adds all component variances from the beginning of the project until that month. The graphical BCWS, drawn as a cumulative S-curve, was obtained by both Method 1 and Method 2; in our example project, the two turned out more or less similar.
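A worksheet of this kind can be sketched as follows, using Method 2 and the ±15%-at-95%-confidence reading of each cost estimate described above (the monthly activity costs are hypothetical):

```python
import math

# Hypothetical monthly activity costs; each month may contain several activities.
monthly_costs = [
    [200.0, 300.0],          # month 1: two activities (l = 2)
    [150.0, 250.0, 100.0],   # month 2: three activities (l = 3)
    [400.0],                 # month 3: one activity (l = 1)
]

cum_mean, cum_var = 0.0, 0.0
for month, costs in enumerate(monthly_costs, start=1):
    for mu in costs:
        sigma = 0.15 * mu / 1.96  # +/-15% read as a 95% confidence interval
        cum_mean += mu
        cum_var += sigma ** 2     # Method 2: component variances accumulate
    half = math.sqrt(cum_var)
    # month, cumulative BCWS, lower and upper action limits
    print(month, cum_mean,
          round(cum_mean - 3.09 * half, 1),
          round(cum_mean + 3.09 * half, 1))
```

Plotting the three columns against time gives the cumulative S-curve with its action band.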
Exhibit 5. The Two Schedules With Different Probability Levels
The most important conclusion from this exercise emerged in the finding that although we started with a 15% allowance in the variation of each component cost (at 95% confidence), the final allowable variance in the total cost worked out to only 4% at a 99% confidence level (see Exhibit 5, Initial BCWS).
The fact that the action limits run roughly parallel on both sides of the BCWS indicates that they maintain the same absolute distance from a higher cumulative cost (at the end of the project) as from a lower cumulative cost (at the initial stages of the project). This means a lower percentage of variation for the higher costs than for the lower costs. This also brings us to the point where we need to optimize the allowable variations in the component and total costs by a sensitivity analysis. It turned out to be an iterative process, in which we determined an end contingency and distributed it. The most important insight was that a 10% contingency on the total cost allowed us a much bigger contingency in the component costs.
Dealing With Contingency
Normally the distribution of contingency is a subjective issue, in the hands of the project manager. Here, however, it becomes part of the planning process, distinctly linked with the allowable threshold values of costs. This part of the contingency is something the planning process foresees: design and estimating variances, differences in site conditions, weather conditions, and probable delays and costs all fall within the reasons for a foreseen contingency. Once the total foreseen contingency is fixed, it can be distributed equally between component activities or work packages, or unevenly for special reasons. The following is a statistically defined sensitivity analysis between the end contingency and the component contingencies. In the case of even distribution of component contingencies,
σᵢ = c·μᵢ / 3.09

where c is the constant contingency (as a fraction of the mean cost μᵢ) allowed in each activity, and

C·μ = 3.09 · grand σ, where grand σ = √(σ₁² + σ₂² + … + σₗ²)

where C is the contingency allowed on the total cost and μ = mean of total cost.

For example, for a 10% contingency on the total cost at 99.8% confidence,

grand σ = 0.10·μ / 3.09

and at the 95% confidence level,

grand σ = 0.10·μ / 1.96

In the case of unequal contingencies within work packages or activities,

grand σ = √((c₁·μ₁/3.09)² + (c₂·μ₂/3.09)² + …)

So, we need to do a sensitivity analysis to find c₁, c₂, c₃, etc., the separate contingencies for separate activities.
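Under the even-distribution assumption the confidence factor cancels, and the component contingency c implied by a total contingency C reduces to C·μ divided by the root-sum-square of the component means. A minimal sketch (the activity sizes are hypothetical):

```python
import math

def even_component_contingency(component_means, total_contingency):
    """Constant per-activity contingency c implied by total contingency C,
    assuming independent activity costs at a common confidence level."""
    mu_total = sum(component_means)
    rss = math.sqrt(sum(m ** 2 for m in component_means))  # root-sum-square
    return total_contingency * mu_total / rss

# Ten equal-sized activities: a 10% total contingency allows ~31.6% each.
mus = [100.0] * 10
c = even_component_contingency(mus, 0.10)
print(round(c, 3))  # 0.316
```

This makes the paper's finding concrete: the more independent components there are, the larger the component allowance a modest total contingency supports.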
There is a part of this foreseen contingency that needs to be dealt with through an understanding of probabilities. Suppose we see two possibilities in the schedule beyond a certain time, due to weather conditions. In our example project we were running into the monsoon (rainy) season during the middle of the program, which would affect some prime concreting activities and their curing times. So the bar chart was loaded with both the first and second schedules at 50% probability levels. This consideration of a foreseen vital variable and its probability brings us to a parallel yet concurrent consideration of two different schedules.
In Exhibit 5, the delayed schedule is arrived at by reducing the respective costs by 50% wherever the two schedules overlap. The two schedules can be given two distinctly different probabilities also.
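One reading of this 50% weighting is a probability-weighted expected cost wherever the two schedules overlap; a minimal sketch with hypothetical monthly figures:

```python
# Hypothetical monthly planned costs: original and monsoon-delayed schedules.
schedule_a = [100, 200, 300, 200, 100, 0, 0]
schedule_b = [100, 150, 200, 200, 150, 150, 50]
P_A, P_B = 0.5, 0.5  # both schedules loaded at 50% probability

# Expected monthly cost is the probability-weighted average of the two.
blended = [P_A * a + P_B * b for a, b in zip(schedule_a, schedule_b)]

# Cumulative blended BCWS.
bcws, running = [], 0.0
for cost in blended:
    running += cost
    bcws.append(running)
print(bcws)  # [100.0, 275.0, 525.0, 725.0, 850.0, 925.0, 950.0]
```

Unequal probabilities for the two schedules amount to changing `P_A` and `P_B` while keeping their sum at 1.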
The selection of these kinds of “vital few” variables, and their quantification in terms of their probabilistic effects on the project BCWS, loads the baseline with all uncertainties and their consequences. This helps us present our project life cycle as a walk-through experience even before it has started. The “vital few” parameters thus optimized convert the BCWS into a band or polyline (a line with a thickness), so that the system now responds to deviant performance in a non-interacting way.
Apart from the foreseen contingency, which should be built into the schedule and planning, some amount of unforeseen contingency should be kept at hand as a Management Reserve. Exhibit 5 shows the performance of the BCWP vis-à-vis the BCWS band made out of the two schedules. The significance of the shortfall in the contractor's performance is seen as the BCWP crossed the band and the second schedule actually took over. The project still had cost and time overruns. The schedule contingency was fully used up, but some costs could be saved, thanks to a probabilistic understanding of the project and our robust process design, which had a built-in insensitivity to surprises and deviations from standard operating conditions.
Exhibit 6. EVM is a Three-Dimensional Process
Lessons From a Developing Economy—Do Not Pursue the Nominal Value as Celebrated in the EVM Metrics—De-Emphasize Perfection
Performance measurement in a resource-constrained environment can no longer be “focused on quality conformance” (Tapiero, 1996), but on optimum performance. A good process control mechanism is more important than achieving the target on the line, although the target is highly likely to be achieved as a corollary when such a system is in place. As Deming (1997) says, “A numerical goal accomplishes nothing. Only the method is important, not the goal.” As the construction management profession struggles to establish credibility in a world of unmet targets, nothing can be more important than an innovative, honest and well-meaning effort to improve the process, especially when the system is often incapable of meeting its goals due to various constraints.

The author's experience in various projects in a developing economy has reinforced the fact that processes are often in statistical control yet fail to deliver the time and cost objectives, because the demand-supply imbalance of such an economy generates all-pervasive corruption and collusion. The resultant uncertainty and unpredictability of the process cause variations that are way beyond acceptable control limits. The system expectation, however, tends to ignore the unquantified effects of these uncertainties, focusing on narrow specification limits. In such cases, some basic optimization and a study of the process show the systems to be predictably faltering, which in itself is a state of statistical control. As Deming (1997) says, “a process could be in statistical control yet 100% defective. To go about reduction of fires, treating every fire as if it arose from a special cause, an accident, is totally different from regarding it as a product of a stable process. This supposition that every fire is an accident may well block the road to reduction in the number of fires.” The solution lies only in the improvement of the process.
Thus the first step is for EVM to accept “non-conformance.” The BCWS as a nominal value in terms of time and cost is rather simplistic, and not worthwhile to pursue as an absolute number at the beginning of the process cycle. But it still remains relevant because as the cycle matures, and we make the system come near to its goal, we will be in a position to deliver the nominal value.
The Third Dimension: Conflict of Interests
The traditional approach to EVM and statistical control just explained ignores the element of conflict. Although the customer normally thinks that the contract document resolves all conflicts, in reality it is only a beginning: conflict is all-pervasive. Negotiations happen throughout the project duration, and unless the basis for such negotiations is “best for everybody concerned” (Deming, 1997), the conflict of financial interests remains the third unquantified dimension in most contracts. Both customers and contractors expect “risk protection to ensure that they obtain what they expected at the time of the transactions.” In order to stabilize both their operating environments, the contract should protect both parties and reduce the uncertainty they face. The third dimension of the BCWS accepts the conflicts and understands the need to cooperate, coordinate activities and reach a greater level of performance in a win-win relationship. Exhibits 6, 7 and 8 show the BCWS extruded to the third dimension, the differing heights suggesting different conflict levels at various stages of the project.
Oftentimes the contractor operates in a multiproject environment, and his profitability at any time is determined by how all the projects are doing. A detrimental effect in one could snowball into others when his rate of return on investment and working capital is affected. This induces a new negotiating and maximizing attitude in an otherwise “in control” project. The mitigation and optimization of these conflicts within the contractual framework is the third dimension, which determines the success of project performance. Owners and contractors are not necessarily always in dispute, but they do have different objectives in balancing costs against the progress and quality of contracted work (Wearne, 1989). No metrics can help the management of a situation that has its roots in the profitability of diverse projects and/or the overall operations of the parties, which is beyond a specific contract drawn up for a specific project. Hence, mitigating this conflict within the process envelope is the final challenge.
Exhibit 7. Views of the BCWP and BCWS Band
Exhibit 8. Views of the BCWP and BCWS Band
Deming, W. E. (1997). The new economics. Cambridge, MA: MIT Center for Advanced Educational Services.
Ghose, S. (1989). Development of an integrated approach to project monitoring. New Delhi: School of Planning and Architecture.
Levine, D., Ramsay, P., & Berenson, M. (1995). Business statistics for quality and productivity. Prentice Hall International.
Peters, G. (1981). Project management and construction control. Construction Press.
Pilcher, R. (1985). Project cost control in construction. London: Collins.
Tapiero, C. S. (1996). The management of quality and its control. London: Chapman & Hall.
Wearne, S. (1989). Control of engineering projects. London: Thomas Telford.
Proceedings of the Project Management Institute Annual Seminars & Symposium
September 7–16, 2000 • Houston, Texas, USA