# The time dependence of CPI and SPI for software projects

**Administrative Sciences Department, Metropolitan College, Boston University**

*Abstract*

This paper proposes a formal method for including time dependence in Earned Value Management (EVM). The model requires three parameters: the reject rate for completion of activities, the cost overrun parameter, and the time to repair the rejected activities. The parameters map directly to the fundamental “triple constraints” of scope, cost, and schedule. Time-dependent expressions for the planned value, earned value, and actual cost are derived, along with the cost performance index (CPI) and the schedule performance index (SPI). We apply the model to the well-established Putnam-Norden-Rayleigh (PNR) software labor rate profile, which is known to describe many types of large software and information technology projects. The standard PNR derivation is re-cast in a form suitable for EVM. We then apply the model to a well-known software data set, and demonstrate how to estimate the three project parameters early in the project's life, which allows us to make predictions of the project's final cost and schedule. These predictions turn out to be more reliable than standard Cost-to-Complete calculations: they converge faster to the correct answer with much less variability.

*Introduction*

**The Problem and its Importance**

Earned Value Management (EVM) provides project managers with early warning signals of project trouble, and such indicators were found to be reliable as early as 15% into a project (Christensen & Heise, 1992). This was later validated using contracts from the Department of Defense (DOD) Defense Acquisition Executive Database (DAES):

“DOD experience in more than 400 programs since 1977 indicates without exception that the cumulative CPI does not significantly improve during the period of 15% through 85% of the contract performance; in fact it tends to decline.”

This was true regardless of the type or phase of the defense contract, weapon system, or the military service involved. Therefore, a significant overrun that continues more than 20% into the project indicates that the project is unlikely to meet its budgetary goals, and customers can reliably conclude the project is in trouble. However, the sting is in the tail: On most contracts, the CPI tends to decline so that things will only get worse!

This also indicates that time dependence is a property of the CPI, and so a goal of this paper is to improve the theory of EVM by incorporating time dependence into the definitions of all quantities.

The *Practice Standard for Earned Value Management* (Project Management Institute, 2005) provides a guide to the principles of EVM and its role in facilitating effective project management. Marshall (2006) suggests that EVM is an effective project management methodology, and that EVM can provide significant positive predictors of project success. EVM metrics were also shown to be important contributors to the successful administration of contracts (Marshall, Ruiz, & Bredillet, 2008). Christensen (1994) has demonstrated the general accuracy of the estimate at completion (*EAC*) and that performance measurement data have predictive value.

On the other hand, Kim (2000) found that the main reasons for not adopting EVM were that it was “not needed on small projects and that it was hard to apply.” However, Kim also pointed out that computer tools and training significantly improved the acceptance and performance of EVM, and that “the literature suffers from an over reliance on anecdotal data.”

The key concept is the earned value (EV), which converts project accomplishments from physical units of measure (e.g., miles of roadway or deliverables completed) to financial units (e.g., dollars or labor hours). EVM defines the planned value (PV) as the time-phased budget baseline, and the actual cost (*AC*) as the cumulative cost spent to a given point in time to accomplish an activity, work package, or project.

Whether a project is on schedule at a particular date is determined by comparing the planned value to the earned value, where value is considered to be “earned” by the completion of measurable deliverables. For example, if at some time four deliverables were planned and three were actually completed (thus earning value), then the ratio of earned value to planned value is 3/4 = 0.75. This is known as the schedule performance index (*SPI = EV/PV*), and it is intuitively obvious that a value of *SPI* < 1.0 represents a project that is behind schedule. The schedule variance (*SV*) is another measure of the conformance of the earned progress to the planned progress: *SV = EV − PV*.
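
The arithmetic of these two indicators is simple; a minimal Python sketch of the worked example above, valuing each of the four deliverables at a hypothetical $10,000:

```python
def spi(ev: float, pv: float) -> float:
    """Schedule performance index: SPI = EV / PV."""
    return ev / pv

def sv(ev: float, pv: float) -> float:
    """Schedule variance: SV = EV - PV."""
    return ev - pv

# Three of four planned deliverables completed, each worth $10,000:
earned, planned = 30_000, 40_000
print(spi(earned, planned))  # 0.75 -> behind schedule
print(sv(earned, planned))   # -10000 -> $10,000 of planned value not yet earned
```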

A criticism of EVM is that *SPI* and *SV* are inherently functions of time, but the form of the time dependence is unknown. This can most easily be seen by examining the behavior of *SPI* toward the end of the project. As the last few activities are completed, the earned value approaches the planned value, i.e., *EV → PV*, and therefore,

*SPI = EV/PV → 1.0*

(Kerzner, 2006; Lipke, 2003). This is true even if the project is late, in which case *SPI → 1.0* after the planned completion date. A similar argument shows that *SV → 0*.

If we measure *SPI < 1.0* at some point in time, we would like to know how late the project is going to be at the end by analyzing the behavior of *SPI(t)* over time. The challenging question is, therefore:

*Exactly how does SPI(t) change over time?*

This issue has been addressed in two ways:

1) Converting *SV* into time units (Anbari, 2003)

This involves defining the average actual cost spent per time period, called the spend rate (*ACRate*), and the average planned value per time period, called the planned value rate (*PVRate*). *PVRate* is defined as the baseline budget at completion (*BAC*) divided by the baseline schedule at completion (*SAC*). One can now divide *SV* by *PVRate* to convert *SV* into time units, which is referred to as the time variance (*TV*). The usefulness of *PVRate* is that it translates *SV*, which is in currency units, into *TV*, which is in time units.
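
As a sketch (with hypothetical numbers), the conversion of *SV* into time units is a one-line calculation:

```python
def time_variance(sv: float, bac: float, sac: float) -> float:
    """TV = SV / PVRate, where PVRate = BAC / SAC is the average
    planned value per time period."""
    pv_rate = bac / sac
    return sv / pv_rate

# Hypothetical project: BAC = $120,000 over SAC = 12 months, SV = -$20,000.
print(time_variance(-20_000, 120_000, 12))  # -2.0 -> two months behind plan
```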

2) Measuring the time delay on the earned value cost curve (Fleming & Koppelman, 2005).

This is referred to as the Schedule Variance Method, SVM. *TV* is measured graphically by drawing a horizontal line from the intersection of the *EV* curve to the *PV* curve, and interpreting the distance on the horizontal time axis as a measure of the schedule delay (or acceleration).
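
The graphical construction can be mimicked numerically: find the time at which the baseline planned for the value earned to date, interpolating between *PV* samples. This sketch assumes the PV baseline is available as (time, value) pairs; the data are hypothetical:

```python
from bisect import bisect_left

def earned_time(pv_curve, ev_now):
    """Invert the cumulative PV curve: return the time t' at which the
    plan called for value ev_now.  pv_curve is a list of (t, PV(t))
    pairs with PV non-decreasing; linear interpolation between samples."""
    times, values = zip(*pv_curve)
    i = bisect_left(values, ev_now)
    if i == 0:
        return times[0]
    if i == len(values):
        return times[-1]
    t0, t1, v0, v1 = times[i - 1], times[i], values[i - 1], values[i]
    return t0 + (t1 - t0) * (ev_now - v0) / (v1 - v0)

# Hypothetical baseline; at week 6 the project has earned only 30 units:
baseline = [(0, 0), (2, 10), (4, 25), (6, 45), (8, 70)]
tv = earned_time(baseline, 30) - 6   # negative => schedule delay
```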

Vanhoucke & Vandevoorde (2007) extensively reviewed the accuracy with which the above methods forecast the total project duration, and concluded that Method #2 generally outperforms other forecasting methods. This is not surprising because SVM is the only technique that uses instantaneous metrics, and the only one that continually re-estimates the change in schedule based on the project data to date. This is in contrast to *PVRate*, which is defined in terms of two global quantities, both of which are constant (BAC and SAC). However, both approaches are retrospective, and neither defines how the parameters should evolve over time, nor do they make any predictions about the future cost and schedule.

Vanhoucke & Vandevoorde (2007) conclude that graphs of CPI and SPI over time provide valuable information about trends in project performance. When corrective managerial actions are implemented, the changes in the behavior of the indexes are assumed to reflect the impact of the management actions. Anbari (2002; 2003) suggests that such graphs can be very effective in project reviews, and provides detailed, worked examples. His graphical tools for assessing project performance trends generally enhance the effectiveness of the approach. However, according to Anbari, “EVM has not been widely used to estimate the total time at completion, total project duration, or schedule.”

The rest of the paper is organized as follows. We discuss the relevant literature in the next section. Next, the EVM model is presented based on the well-established PNR labor curve, which applies to complex software projects. The time-dependent cost and schedule performance indexes, *CPI(t)* and *SPI(t)*, are calculated. We demonstrate the model's practical applicability by estimating the project's final cost and schedule for a software project with real-world data. We find that the projections of the model are more accurate than traditional Cost-to-Complete calculations. Finally, we provide some conclusions and project management implications.

**Relevant Literature**

Putnam (1978) pioneered the use of labor rate curves for software projects. The PNR curve applies to many types of software projects, particularly embedded software systems (Boehm, 1981). The PNR curve works reasonably well for many types of large software systems, but not so well for those that are incrementally developed (Conte, Dunsmore & Shen, 1986).

Cioffi (2005) proposed that a differential equation frequently used in ecology can reproduce project labor ‘S’ curves. The quantities that determine the progress of the ecology solution are the birth rate, the death rate, and the carrying capacity of the environment. Cioffi speculated as to “whether or not useful management analogs of these quantities exist.”

Parr (1980) had already analyzed very similar curves, which bear his name. While Parr analyzed software projects, his work could have wider relevance if his labor curves are interpreted in terms of project parameters: Defining a project as starting out with a fixed number of problems to be solved, and ending with decisions made to produce a workable product. Parr's curve is different from the PNR curve in that progress is determined by tree-diagram dependencies: activities cannot be started until activities in the parent nodes are completed.

Parr's and Cioffi's curves are mathematically quite similar, e.g., they are both symmetric about the labor peak. For software projects, however, this is not a good representation, because software labor profiles typically have a long tail that represents the testing and debugging process.

Basili & Beane (1981) carefully analyzed the PNR and Parr labor rate curves, and concluded that while the Parr curve often fits the raw data better, noise in the data makes it difficult to choose between the models. They also noted that the Parr curve requires four parameters to PNR's two, so it is not surprising that it is a better fit. Unfortunately, the parameters in the Parr curve are not easily determined, and so the curve is not very effective for resource estimation. This may help to explain why the Parr curve fell out of favor. It would be useful to determine if Cioffi's curve, which mathematically is very similar to Parr's, suffers from the same drawbacks.

When plotted cumulatively over time, all of the above labor rate profiles result in the typical ‘S’ curve. However, it has been pointed out that using *instantaneous* labor rate curves is frequently more useful than using the cumulative “S” curves (Cioffi, 2005). For example, management actions are often reflected in the labor rate curve as clear departures from expected behavior, while the cumulative nature of the ‘S’ curve washes out such deviations.

Cioffi (2006b) also proposed a revised formalism for EVM in an attempt to address the problem that “the historically arcane terminology and calculational notation have stood as roadblocks to its embrace by the management community.” Cioffi demonstrated how to combine the three elements of budget, schedule, and scope by using cost as the common exchange medium. He thus reduced “effort” to a common basis: cost. In an interesting application, he calculates the point beyond which recovery of a late project is highly unlikely.

*The Cost and Schedule Performance Indexes*

The *CPI* is a measure of the conformance of the actual work completed (measured by its earned value) to the actual cost incurred. The *SPI* is the ratio of the earned value to the planned value. In this paper, we will explicitly indicate that all quantities are functions of time, so that the definitions become:

*CPI(t) = EV(t)/AC(t),  SPI(t) = EV(t)/PV(t)*  (1)

**Estimates of Cost and Time to Complete**

The estimated cost to complete the remainder of the activities is called the estimate to complete (ETC), while the estimate of the final cost at completion is called the estimate at completion (EAC). The inverse of the *CPI* formula can be used in forecasting (Anbari, 2003). For example, dividing the value remaining to be earned by the current *CPI* gives a prediction of the remaining cost: the estimate to complete (*ETC*), which assumes that work performance will continue at the same rate. Adding *ETC* to the actual current cost to date, *AC(t)*, gives a prediction of the final budget.
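
The forecasting rule described above reduces to two lines of Python; the status numbers are hypothetical:

```python
def etc(bac: float, ev: float, cpi: float) -> float:
    """Estimate to complete: the value not yet earned, scaled by
    the cost efficiency achieved so far."""
    return (bac - ev) / cpi

def eac(ac: float, bac: float, ev: float, cpi: float) -> float:
    """Estimate at completion: sunk cost plus the CPI-adjusted remainder."""
    return ac + etc(bac, ev, cpi)

# Hypothetical status: BAC = 100, EV = 40, AC = 50, so CPI = 0.8.
print(eac(ac=50, bac=100, ev=40, cpi=40 / 50))  # 50 + 60 / 0.8 = 125.0
```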

In general, methods for calculating the *EAC* depend on the assumptions made about the future performance of the project vs. the historical, established performance to date. *A Guide to the Project Management Body of Knowledge (PMBOK® Guide)* – Third Edition (2004) provides three approaches, based on three different sets of assumptions: (1) when the original estimates are flawed; (2) when past performance is not a good predictor of future performance; and, (3) when past performance is a good predictor of future performance.

*A Model for EVM*

We begin with the analysis of a PNR labor rate profile. Parr (1980) first derived the analytical formula for the PNR curve, as follows: A project involves solving some fixed set of problems, and *W(t)* denotes the proportion of the problems already solved at time *t*. *W(t)* is normalized to 1.0 at the end of the project, and the completion of the project is simply the exhaustion of the problem space. The “skill” available for solving problems is denoted as *p(t)*, and Parr proposed that the rate at which project progress is made should be proportional jointly to the skill, *p(t)*, and the amount of the tasks left to be solved, *1 − W(t)*:

*dW/dt = p(t)·[1 − W(t)]*  (2)

It is then argued that the choice of the learning curve that best fits observations from actual projects is linear, *p(t) = t/T²*. From this, it follows that the rate of doing work should be the PNR curve; integrating Equation 2 gives:

*W(t) = 1 − K·exp(−t²/2T²)*  (3)

where *T* is the time of the manpower peak, and *K* is a constant of integration, which can be determined by requiring that the total labor used on the project is *N* man-years. Parr then assumed that the above rate of completing work, *Ẇ(t)*, is also the rate at which management applies personnel resources. This results in the well-known PNR labor rate curve:

*Ẇ(t) = (N·t/T²)·exp(−t²/2T²)*  (4)

In this paper, we will make a slightly different assumption: that the rate of doing work, *W(t)*, represents the rate of completion of project activities. When a project is planned, the time-phased budget is developed by summing the time-phased labor contributions of the scheduled activities, and we refer to this as the labor curve. We can choose to measure *W(t)* in currency units, in which case it represents the completion rate of activities in currency units. If the project is in the planning phase, then the rate of completion of activities, *W(t)*, is simply the planned value over time, *pv(t)*.

**Planned Value**

Using the above derivation of the planned value in terms of the PNR completion rate of activities, we have:

*pv(t) = (N·t/T²)·exp(−t²/2T²)*  (5)

where the total number of activities is *N*, and the time of the labor peak is *T*. The cumulative planned value, *PV(t)*, is defined as the cumulative sum of the instantaneous planned value, *pv(t)*:

*PV(t) = ∫₀ᵗ pv(t′) dt′ = N·[1 − exp(−t²/2T²)]*  (6)
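
As a sketch, the two planned-value curves are easy to evaluate in Python, assuming the standard PNR form with total labor *N* and peak time *T*:

```python
import math

def pv_rate(t: float, n: float, peak: float) -> float:
    """Instantaneous planned value: pv(t) = (N t / T^2) exp(-t^2 / 2T^2)."""
    return (n * t / peak**2) * math.exp(-t**2 / (2 * peak**2))

def pv_cum(t: float, n: float, peak: float) -> float:
    """Cumulative planned value: PV(t) = N [1 - exp(-t^2 / 2T^2)]."""
    return n * (1.0 - math.exp(-t**2 / (2 * peak**2)))

# With N = 100 activities and the labor peak at T = 10 weeks,
# pv(t) peaks at t = T and PV(t) -> N for large t.
```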

The instantaneous planned value is plotted in Figure 1 as the solid curve. The cumulative planned value is plotted in Figure 2, and shows the typical ‘S’ curve.

**Figure 1: Instantaneous Planned Value, Earned Value, and Actual Cost.**

**Figure 2: Cumulative Planned Value, Earned Value, and Actual Cost.**

**Earned Value**

We now assume that a constant fraction, *r*, of the activities that are supposed to be completed at time, *t*, are rejected for some reason and require extra work. Project managers should be able to estimate the reject rate early in the project as the initial activities are completed, and some are rejected. We assume that the reject rate is constant over the life of the project, which appears reasonable. For example, software project data suggests that error rates remain constant over the life of a project (Basili & Beane, 1981).

The completion of rejected activities will be delayed, by a delay time, τ, which is also assumed to be a constant. That is, on average the rejected activities all take the same length of time to repair. Unlike the rejection rate, we have been unable to find reliable data on the average delay experienced by individual activities. Therefore, the reasonableness of this assumption needs to be evaluated by comparing real project data with the predictions of this model.

An example of the process is as follows: Suppose that at the end of week 2 (*t* = 2), 5 activities were due to be completed, and one of these activities was rejected for some reason (*r* = 20%). This activity will be delayed for rework, and take an extra three weeks to be completed (τ = 3). The rejected activity will be completed in week five, when it will finally earn its planned value.

The earned value is the value of the successfully completed activities. In each interval, a fraction, *r*, of the activities is rejected, and so the remaining fraction, *(1 − r)*, is completed and earns value. At the beginning of the project, in the interval *t < τ*, the earned value is the fraction of activities that are successfully completed:

*ev(t) = (1 − r)·pv(t),  t < τ*  (7)

For *t > τ*, the earned value has two contributions: the fraction of activities successfully completed at *t*, and those from *t − τ* that were delayed and are now complete:

*ev(t) = (1 − r)·pv(t) + r·pv(t − τ),  t > τ*  (8)

We can now calculate the cumulative earned value by integrating the instantaneous values. The integration must be computed separately in each time interval:

*EV(t) = (1 − r)·PV(t),  t < τ*  (9)

*EV(t) = (1 − r)·PV(t) + r·PV(t − τ),  t > τ*  (10)
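
The two-interval rule for the earned value can be sketched directly from the cumulative PV curve (PNR form assumed, as above):

```python
import math

def pv_cum(t: float, n: float, peak: float) -> float:
    """Cumulative planned value for the PNR profile."""
    return n * (1.0 - math.exp(-t**2 / (2 * peak**2)))

def ev_cum(t: float, n: float, peak: float, r: float, tau: float) -> float:
    """Cumulative earned value: the accepted fraction (1 - r) earns value
    on schedule; the rejected fraction r earns its value tau periods later."""
    if t < tau:
        return (1 - r) * pv_cum(t, n, peak)
    return (1 - r) * pv_cum(t, n, peak) + r * pv_cum(t - tau, n, peak)

# With r = 0.5 and tau = 5, EV lags PV early on but approaches N at the end.
```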

We note that for large *t*,

*EV(t) → (1 − r)·N + r·N = N*  (11)

Equation 11 merely says that at the end of the project, all of the activities are completed. However, the final earned value is not equal to the final planned value until after the planned end of the project. Since we assume that the reject rate is constant over the life of the project, any activities rejected near the planned end of the project will be completed after the planned end of the project. This can be seen in Figure 1, where the earned value curve is delayed at the end of the project relative to the planned value curve.

If extra activities had been added (i.e., scope creep occurred), then the earned value would approach a higher value than the planned value. Since we assumed that the number of activities did not change, the total earned value must be the same as the total planned value, and this is confirmed by Equation 11. This assumption is not terribly restrictive, and we will examine it in the final section.

Figure 1 shows the instantaneous earned value compared to the planned value over the entire life of the project. The earned value immediately falls relative to the planned value as activities are rejected; it eventually catches up, but not until after the planned end of the project. The cumulative versions are shown in Figure 2. Both planned and earned value curves approach the same value because the total number of activities in the project remains the same. The earned value is initially delayed because some of the activities are rejected and credit for their completion is only earned when they are finally completed.

**Actual Cost**

The instantaneous actual cost, *ac(t)*, includes the work performed on both the activities that were successfully completed and those that were rejected. For *t < τ*, the actual cost is simply the planned value. Some of the activities are rejected, but the costs to repair them will be incurred in the future. Therefore, *ac(t) = pv(t)* for *t < τ*.

For the interval *t > τ*, the activities that were rejected at time *t − τ* will now be successfully completed. The fractional extra work required is *c*, so the instantaneous actual cost is:

*ac(t) = pv(t) + c·r·pv(t − τ),  t > τ*  (12)

The term *r·pv(t − τ)* represents the fraction of activities rejected previously, and the constant, *c*, accounts for the additional work required to repair those previously rejected activities. A value of *c* = 1.0 represents a 100% overrun for each of the rejected activities.

An example of the process is as follows: At week 2 (*t* = 2), five activities were due to be completed. One of these activities was rejected for some reason (*r* = 20%). This activity is delayed for rework, takes an extra three weeks, and is successfully completed in week five (τ = 3). The actual cost incurred in week five includes two contributions: 1) the planned labor amount for week five; and 2) the extra cost of the activity rejected in week two, which consumes an extra 25% (*c* = 0.25).

The instantaneous actual cost as a function of time over the entire life of the project is also shown in Figure 1. The cumulative actual cost is found by integrating the above equations in each interval:

*AC(t) = PV(t),  t < τ*  (13)

*AC(t) = PV(t) + c·r·PV(t − τ),  t > τ*  (14)

The cumulative actual cost as a function of time is shown in Figure 2 for a value of *c* = 0.33. The actual and planned costs are identical up to time *t* = τ. After that, the actual cost increases as the cost of the rejected activities accumulates. The planned and earned values do not depend on the cost overrun parameter, *c*, only the actual cost depends on *c.*
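
The actual-cost rule follows the same pattern; a sketch under the same PNR assumption, with the repair premium *c* applied to the delayed work:

```python
import math

def pv_cum(t: float, n: float, peak: float) -> float:
    """Cumulative planned value for the PNR profile."""
    return n * (1.0 - math.exp(-t**2 / (2 * peak**2)))

def ac_cum(t: float, n: float, peak: float, r: float, tau: float, c: float) -> float:
    """Cumulative actual cost: the baseline spend plus the extra fraction c
    spent on the rejected share r, incurred tau periods late."""
    if t < tau:
        return pv_cum(t, n, peak)
    return pv_cum(t, n, peak) + c * r * pv_cum(t - tau, n, peak)

# For large t, AC approaches N (1 + r c), the predicted total cost:
total = ac_cum(1_000, 100, 10, r=0.5, tau=5, c=0.33)   # ~116.5 for N = 100
```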

The project's total actual cost is the total cost of the entire project, which is predicted to be:

*AC(t → ∞) = N·(1 + r·c)*  (17)

Therefore, once we estimate the parameters, *r* and *c*, we can estimate the final total cost of the project. Equation 17 is an estimate for the project's final cost and we note that this estimate is a constant because it only depends on the fundamental project parameters, which are assumed to be constants. We also note that the final cost does not depend on the delay parameter, τ, and so it is possible to estimate the final cost from an early determination of the parameters, *r* and *c*.

*Cost and Schedule Performance Indices*

We can calculate *CPI(t)* and *SPI(t)* from Equation 1 since we know the time dependence of the cumulative planned value from Equation 6, the earned value from Equations 9 and 10, and the actual cost from Equations 13 and 14. *SPI(t)* depends only on the repair delay, τ, and the reject rate, *r*, while *CPI(t)* also depends on the cost overrun parameter, *c*. Figure 3 shows *CPI(t)* and *SPI(t)* for τ = 5, *r* = 0.5, and *c* = 0.33. *CPI(t)* immediately falls and remains low over the entire life of the project. In comparison, *SPI(t)* falls initially, but then slowly climbs back to 1.0 at the end of the project, as expected.
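
Putting the pieces together, the two time-dependent indexes can be sketched as follows (PNR planned-value curve assumed):

```python
import math

def _pv(t: float, n: float, peak: float) -> float:
    """Cumulative planned value for the PNR profile."""
    return n * (1.0 - math.exp(-t**2 / (2 * peak**2)))

def cpi_spi(t: float, n: float, peak: float, r: float, tau: float, c: float):
    """CPI(t) = EV(t)/AC(t) and SPI(t) = EV(t)/PV(t) from the model."""
    pv = _pv(t, n, peak)
    delayed = _pv(t - tau, n, peak) if t >= tau else 0.0
    ev = (1 - r) * pv + r * delayed
    ac = pv + c * r * delayed
    return ev / ac, ev / pv

# With r = 0.5 and tau = 5, both indexes start at 1 - r = 0.5;
# SPI climbs back to 1.0 while CPI levels off at 1 / (1 + r c).
```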

Figure 4 shows the shape of the *CPI(t)* curve when *r* and τ are constant and the cost overrun parameter varies. It is clear that many different shapes for *CPI(t)* are possible, consistent with the data of Christensen & Heise (1992).

**Figure 3. CPI and SPI(t) for reject rate, r = 0.5, delay parameter, τ = 5, and cost overrun rate, c = 0.25.**

**Figure 4. CPI(t) for reject rate, r = 0.5, delay parameter, τ = 5, and various values of the cost overrun rate, c.**

The performance over time of the two indexes is interesting. *CPI(t)* immediately falls to **(1 – r)** because in each interval, including the first few, some activities are rejected. As the rejected activities are completed after time, **t > τ**, costs are incurred and the *CPI(t)* slowly changes, depending on the value of the cost overrun parameter, *c*.

Figure 4 shows that for small values of the cost overrun parameter, *c*, *CPI(t)* gradually rises, but for larger values, it falls. The value *c* = 1.0 denotes a doubling of the cost of the rejected activities. We remind the reader that even though *CPI(t)* is changing, the underlying model parameters are constant: the cost overrun, *c*, the reject rate, *r*, and the schedule delay to complete the repair, τ.

The behavior of *SPI(t)* is quite different. While *SPI(t)* also initially falls to *(1 − r)*, after the time delay, τ, it begins to climb, eventually reaching 1.0 at the end of the project, as it should. We note that the final value of *SPI → 1.0* is an explicit property of this model.

Some authors (in particular, Fleming & Koppelman (2005) and Vanhoucke & Vandevoorde (2007)) have claimed that since *SPI → 1.0* at the end of the project, it is not useful as an indicator over the last third of the project. Figures 4 and 6 show exactly how *SPI(t)* changes over time. Therefore, since we now understand the structure of the time dependence of *SPI(t)*, we can compare the actual performance of the *SPI* to its projected performance. Managerial actions should result in changes to the fundamental parameters, i.e., a lowered reject rate, a reduced cost overrun, and a decreased time to repair. Genuine managerial improvements in the performance of the project, therefore, will be reflected in detectable changes in the shapes of the *CPI(t)* and *SPI(t)* curves. In the absence of genuine improvements, the curves will follow their established time-dependent patterns.

The initial and final values of *CPI(t)* and *SPI(t)* can be calculated. Table 1 summarizes the starting and ending values for *CPI(t)* and *SPI(t)*.

**Table 1. Initial and final values for the cost and schedule performance indexes.**

| Parameter | t → 0 | t → ∞ |
| --- | --- | --- |
| *CPI* | 1 − r | 1/(1 + rc) |
| *SPI* | 1 − r | 1 |

*Analysis of a Software Project*

We now show how the process described above works in practice by estimating the three model parameters early in a project. We will use historical data from a well-known software project, which we will refer to as the ‘calibration project’ (Warburton, 1983). We then estimate the final value of the project's cost and schedule. The goal is to determine whether early in the project life cycle we can accurately estimate the cost overrun and schedule delay, and compare the answers of the model to more traditional methods.

Figure 5 shows the instantaneous planned and actual expenditures. The smooth dotted curve is the planned expenditure and the other represents the actual expenditures from time sheet data. Figure 6 shows the progress of the deliverables, which was reconstructed after the project was finished. However, we can use the data as if it were accumulated in real-time as the project proceeded. The difference between the smooth planned curve in Figure 5 and the spiky curve in Figure 6 represents the earned value for deliverables. That is, value was only earned when the deliverables were 100% complete, at which point the project earned the value of the deliverable.

**Figure 5. Calibration project. Instantaneous labor curve for software calibration project.**

**Figure 6. Calibration project. Planned value (PV) and earned value (EV)**

Figure 7 shows the cumulative data from Figure 6. Since we are interested in predicting the final cost from early project data, we have presented just weeks one through 20. It is immediately apparent that the cumulative version smooths out the data.

We now have the planned and earned values, as well as the actual cost. Therefore, we can calculate the *CPI* and *SPI*, which are shown in Figure 8.

**Figure 7. Calibration project. Cumulative planned value (PV), earned value (EV), and actual cost (AC) for the early stages of the project: weeks 1 – 20.**

**Figure 8. Calibration project. CPI and SPI.**

**Figure 9. Calibration project. CPI and SPI for the first 20 weeks.**

**Predicting the Cost-to-Complete (CTC)**

We now use the EVM model to estimate the final project cost and compare it to the standard approach recommended in the *PMBOK ^{®} Guide* (PMI, 2004):

The CTC is estimated from the current actual expenditures to date, *AC(t)*, plus the value remaining to be earned divided by the *CPI*. This latter term assumes that progress in the future will be the same as the historical progress, an assumption verified by Christensen (1994). For the calibration project, the CTC is shown in Figure 10. At each point in time, the current value of the *CPI* is used, along with the current actual cost, to calculate the predicted final value of the CTC.
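
The week-by-week recalculation can be sketched as follows; the cumulative status data are hypothetical, not the calibration project's:

```python
def ctc_series(bac: float, ev: list, ac: list) -> list:
    """Traditional final-cost prediction at each status date:
    EAC_k = AC_k + (BAC - EV_k) / CPI_k, with CPI_k = EV_k / AC_k."""
    predictions = []
    for ev_k, ac_k in zip(ev, ac):
        cpi_k = ev_k / ac_k
        predictions.append(ac_k + (bac - ev_k) / cpi_k)
    return predictions

# Four weeks of hypothetical cumulative data against a budget of 100:
weekly = ctc_series(100, ev=[10, 22, 35, 50], ac=[12, 26, 41, 58])
# Each entry is that week's forecast of the project's final cost.
```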

Next we estimate the parameters for the EVM model. From the number of modules requiring rework, we estimated the reject rate, *r* = 1.0. This was straightforward to estimate: The project manager simply recorded the number of modules sent back for rework. Surprisingly, almost all of the modules were delivered late, and required extra work to finish, resulting in a reject rate of 100%!

Also, the project manager recorded the amount of extra work required to complete the rejected modules, which gave an estimate of the cost overrun parameter, which for the calibration project was fairly consistent at *c* = 0.33. That is, all of the modules required an extra 33% to complete. Finally, the project manager (PM) estimated the average time delay associated with the completion of the rejected activities. Initially, the first few modules were late by just one to two weeks. However, after about eight to 10 weeks, the delays were quite significant and eventually settled in at around 10 weeks (τ = 10).
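
The estimation procedure the project manager followed can be sketched as a simple aggregation over module records; the record format and the numbers below are hypothetical illustrations, not the original study's data:

```python
def estimate_params(modules: list) -> tuple:
    """Estimate (r, c, tau) from early module-completion records.
    Each record notes whether the module was rejected and, if so, the
    fractional extra cost and the delay (in weeks) to finish the rework."""
    rejected = [m for m in modules if m["rejected"]]
    r = len(rejected) / len(modules)
    if not rejected:
        return r, 0.0, 0.0
    c = sum(m["extra_cost_frac"] for m in rejected) / len(rejected)
    tau = sum(m["delay_weeks"] for m in rejected) / len(rejected)
    return r, c, tau

# Hypothetical early records: two of four modules reworked.
records = [
    {"rejected": False},
    {"rejected": True, "extra_cost_frac": 0.30, "delay_weeks": 9},
    {"rejected": False},
    {"rejected": True, "extra_cost_frac": 0.36, "delay_weeks": 11},
]
r, c, tau = estimate_params(records)   # r = 0.5, tau = 10.0, c ~ 0.33
```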

Figure 10 compares the prediction of the EVM model with the traditional CTC estimate for the first 25 weeks of the project. Both estimates converge to the actual final cost after about 20 weeks, or 15% to 20% of the way through the project. However, the EVM model appears to have much less scatter, and converges faster to the true final cost. In fact, by week 10, the EVM model has a prediction within about 5% of the correct answer.

**Figure 10. Comparison of predictions of the final cost using CTC and EVM models. The EVM model converges faster to the correct value than the traditional CTC.**

For completeness, the predicted labor curve that results from using the EVM model parameters is shown in Figure 11. The model predicts the shape of the actual labor curve quite well, which means that it is useful to the PM in determining future labor needs. Those needs are significantly different from the originally planned curve. While the CTC curve predicts the final cost, it does not say anything about the details of the labor curve, i.e., it does not predict the needed staff.

The predictions of the EVM model appear to be more useful than those of the CTC. The EVM model predictions converge faster to the true final cost. Also, one can use the model parameters to predict a new labor profile, which can be used for staff planning.

**Figure 11. Calibration project. Prediction of labor curve (denoted as EVM) based on early estimation of the model parameters.**

*Conclusions*

A major challenge in project management is balancing the “triple constraints” of scope, cost, and schedule. In that context, the three parameters of the model presented here map directly to the fundamental triple constraints. The first parameter, the reject rate, *r*, is a measure of the scope quality, as it characterizes the rate at which scope activities do not meet their designed objectives, and must be reworked. The second model parameter, *c*, measures the cost overrun and so maps directly to the project cost constraint. The third model parameter, τ, describes the time delay associated with repairing the rejected activities and so maps directly to the project's schedule constraint. Thus the three model parameters directly characterize the fundamental triple constraints of scope, cost, and schedule. There are no extraneous or phenomenological parameters.

The EVM model provided here establishes the time dependence of *CPI(t)* and *SPI(t).* For many types of large projects the typical cumulative labor profile seems to result in an ‘S’ curve (Christian & Kallouris, 1991). The application of ‘S’ curves for cash flow projections can achieve accuracies of over 90%, and the shape of the S-curve budget vs. time is a quick way to judge performance (Singh & Lakanathan, 1992). This means that for most projects, the labor profiles will be quite similar to those of Figure 2, which suggests an intriguing property of the model: The behavior of the *CPI(t)* and *SPI(t)* curves over time for most projects should be similar to those shown in Figure 4. Therefore, as long as the manpower curves resemble a general ‘S’ shape, the curves for *CPI(t)* and *SPI(t)* should be relatively independent of the precise form of the labor rate profile.

We have assumed that the project parameters are all positive. The reject rate, *r*, is inherently positive, and industrial data strongly suggests that project error rates do indeed remain constant over time (Basili & Beane, 1981), so the assumption that *r* is constant is reasonable. A positive value for *c* results in a cost overrun, but the model works just as well, without change, if *c* is negative, representing a cost underrun.

However, the model will not work as presented for schedule accelerations, i.e., negative values of the delay parameter, τ, because the derivation contains terms that assume a positive repair delay. More work is required to determine whether the model can be generalized to include schedule accelerations. We have been unable to obtain data on the average delay experienced by individual activities, so a topic for future research is to evaluate the reasonableness of the assumptions by conducting more calibrations of the model with actual project data, and to determine its range of applicability. Nevertheless, using one well-known project data set, we have shown that the model's predictions are useful, and surprisingly accurate.

By analyzing early project data as the first few activities are completed, we demonstrated that PMs can estimate the values of the three model parameters. Predictions of the final cost and schedule are then available and, allowing for noise in the data, remain constant for the life of the project, a useful property of the model. The three parameters are therefore in some sense fundamental, while quantities such as *CPI(t)* and *SPI(t)* are derived functions of time and should not be regarded as fundamental.
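As a hypothetical illustration of such early estimation (the function and variable names are ours, not the paper's), the three parameters could be read off the first few completed activities roughly as follows: *r* from the observed reject fraction, *c* from the unit cost overrun, and τ from the mean repair delay of the reworked activities.

```python
def estimate_parameters(completed, rejected, planned_cost, actual_cost, repair_delays):
    """Hypothetical early-project estimators for the three model parameters:
    r   = fraction of completed activities that were rejected,
    c   = fractional cost overrun on the work done so far,
    tau = mean delay (in time units) to repair a rejected activity."""
    r = rejected / completed
    c = (actual_cost - planned_cost) / planned_cost
    tau = sum(repair_delays) / len(repair_delays) if repair_delays else 0.0
    return r, c, tau

# Example: 40 activities delivered, 6 rejected; costs in labor-months.
r, c, tau = estimate_parameters(40, 6, planned_cost=120.0,
                                actual_cost=138.0, repair_delays=[1.5, 2.0, 2.5])
print(r, c, tau)  # 0.15 0.15 2.0
```

Once estimated, the three values feed directly into the model's expressions for the final cost and schedule, which is what makes the early-data calibration useful.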

We also assumed that there was no scope creep, i.e., that the number of activities did not change. This assumption is not terribly restrictive. The PM will know quite early on, with the delivery of the first few modules, if the scope is increasing. If so, one simply re-estimates the parameter *N*, the total number of modules, and uses the EVM model with the new value of *N.* Some adjustment to the time of the labor peak, *T*, will probably also be required, but this will depend on the reaction of the customer to the proposed increase in cost and schedule. The important point is that the PM will be able to analyze potential options from a well-defined and validated model.

The model predicts the time-dependent behavior of *SPI(t)*, and the property that *SPI(t)* → 1 is an explicit feature of the model. PMs can therefore measure *SPI(t)* over time and distinguish between inherent changes (those due to the time dependence of *SPI*) and genuine changes in project performance, which result from significant managerial actions that improved the reject rate, the cost overrun rate, or the time to repair.
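A minimal numerical sketch of why *SPI(t)* → 1 (our illustration, not the paper's derivation): if the earned value is assumed, for simplicity, to be the planned S-curve delayed by τ, then SPI(t) = EV(t)/PV(t) dips early but climbs back toward 1 as both curves saturate at the total budget. All names and parameter values (K, a, tau) here are illustrative assumptions.

```python
import math

def pv(t, K=100.0, a=0.02):
    """Planned value: cumulative PNR-style effort, K*(1 - exp(-a*t^2))."""
    return K * (1.0 - math.exp(-a * t * t))

def spi(t, tau=2.0):
    """Illustrative SPI(t) = EV(t)/PV(t), with earned value modeled
    (a simplifying assumption) as the plan delayed by tau. Valid for t > 0."""
    return pv(max(t - tau, 0.0)) / pv(t)

# SPI dips early in the project but climbs back toward 1 as both
# EV and PV saturate at the total budget K.
for t in (3, 5, 10, 20, 40):
    print(t, round(spi(t), 3))
```

The practical point is the one made above: a low early SPI partly reflects this built-in time dependence, so a PM should judge performance against the predicted *SPI(t)* curve rather than against the constant benchmark of 1.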

The EVM model provides a useful contribution to a PM's toolbox: the ability to determine how measured values for *CPI(t)* and *SPI(t)* can lead to an estimate of a project's final cost overrun and schedule delay. Despite the preliminary nature of the model presented here, a number of interesting features emerged. While much research remains to be done, it appears that the EVM model is a useful starting point and shows promise.

*References*

Anbari, F. T. (2002). *Quantitative Methods for Project Management* (2nd ed.) (No 4). New York: International Institute for Learning.

Anbari, F. T. (2003). Earned Value Project Management: Method and extensions. *Project Management Journal, 34* (4), 12.

Basili, V. R., & Beane, J. (1981). Can the Parr curve help with manpower distribution and resource estimation problems? *The Journal of Systems and Software, 2*, 59-69.

Boehm, B. (1981). *Software Engineering Economics.* Englewood Cliffs, NJ: Prentice Hall.

Christensen, D. S. (1994). Using performance indices to evaluate the estimate at completion. *Journal of Cost Analysis and Management, Spring*, 17-24.

Christensen, D. S., & Heise, S. R. (1992). Cost Performance Index Stability. *National Contract Management Journal, 25* (1), 7-15.

Christian, J., & Kallouris, G. (1991). An expert system for predicting the cost-time profiles of building activities. *Canadian Journal of Civil Engineering, 18*, 814.

Cioffi, D. F. (2005). A tool for managing projects: An analytic parameterization of the S-curve. *International Journal of Project Management, 23*, 215-222.

Cioffi, D. F. (2006a). Completing projects according to plans: An Earned-Value improvement index. *Journal of the Operational Research Society, 57*, 290-295.

Cioffi, D. F. (2006b). Designing project management: A scientific notation and an improved formalism for earned value calculations. *International Journal of Project Management, 24*, 136-144.

Conte, S. D., Dunsmore, H., & Shen, V. Y. (1986). *Software Engineering Metrics and Models.* Menlo Park, CA: Benjamin/Cummings Publishing Company, Inc.

Fleming, Q. W., & Koppelman, J. M. (2005). *Earned Value Project Management* (3rd ed.). Newtown Square, PA: Project Management Institute.

Kerzner, H. (2006). *Project Management: A systems approach to planning, scheduling, and controlling* (9th ed.). New York: John Wiley & Sons.

Kim, E. H. (2000). *A study on the Effective Implementation of Earned Value Management Methodology.* Doctoral Thesis, The George Washington University, Washington, DC.

Lipke, W. (2003). Schedule is different. *The Measurable News*, 31-34.

Marshall, R. A. (2006). *The contribution of Earned Value Management to project success on contracted efforts: A quantitative statistics approach within the population of experienced practitioners*. Doctoral Thesis, Lille Graduate School of Management.

Marshall, R. A., Ruiz, P., & Bredillet, C. N. (2008). Earned value management insights using inferential statistics. *International Journal of Managing Projects in Business, 1* (2), 288-294.

Parr, F. N. (1980). An alternative to the Rayleigh curve model for software development effort. *IEEE Transactions on Software Engineering, SE-6* (3), 291-296.

Project Management Institute. (2004). *A Guide to the Project Management Body of Knowledge (PMBOK® Guide)* (3rd ed.). Newtown Square, PA: Project Management Institute.

Project Management Institute. (2005). *Practice Standard for Earned Value Management.* Newtown Square, PA: Project Management Institute.

Putnam, L. H. (1978). A general empirical solution to the macro software sizing and estimating problem. *IEEE Transactions on Software Engineering, 4* (4), 345.

Singh, S., & Lakanathan, G. (1992). Computer-based cash flow model. *Proceedings of the 36th Annual Transactions of the American Association of Cost Engineers, R.5.1-R.5.14*.

Vanhoucke, M., & Vandevoorde, S. (2007). A simulation and evaluation of earned value metrics to forecast the project duration. *Journal of the Operational Research Society, 58*, 1361-1374.

Warburton, R. D. H. (1983). Managing and predicting the costs of real-time software. *IEEE Transactions on Software Engineering, SE-9* (5), 562-569.

© 2010 Project Management Institute. All rights reserved.
