### Abstract

The variation between a project's actual schedule and its planned schedule is measured by its schedule variance (SV), the difference between the earned value (EV, the value of work actually performed) and the planned value (PV): SV = EV – PV. However, SV is expressed in monetary units (e.g., dollars), which makes it difficult to interpret as a variance in the schedule, which should presumably be measured in time units, such as days or months. Several authors have proposed a “time-based” earned schedule (ES), which is easier to interpret.

Unfortunately, no generally available formulas exist for calculating ES, which makes it difficult to compute. We remedy this shortcoming by presenting formulas to calculate ES. The formulas are relatively straightforward, and are illustrated by applications in two industries: construction and software development. The results and conclusions from the two industries are quite similar, and indicate that as long as project labor curves follow the general “S” shape, the formulas should have wide applicability.

The first useful result is that the formulas show how the cost and schedule variances evolve over time. That is, they demonstrate how ES performs over the life of a project, and explain its behavior. They also allow the calculation of Actual Costs, Planned Value, and Earned Value over time. Therefore, one can also easily calculate the cost performance index, *CPI(t)*, and schedule performance index, *SPI(t)*, both of which are also functions of time.

The ability to make schedule forecasts without performing a complete bottom-up analysis has been long desired by project managers. That deficiency is eliminated: project managers can develop early schedule estimates, and then track the schedule's actual performance as the project evolves. While much work remains to be done, it appears that the formulas are a useful starting point and show great promise.

### Introduction

Anbari (2003) described the major aspects of the earned value method (EVM), including some extensions not found in the simplified guide to EVM terminology provided in *A Guide to the Project Management Body of Knowledge (PMBOK® Guide)* (Project Management Institute, 2000). EVM uses cost, or some other reasonable substitute, as the common measure of project performance for both cost and schedule parameters, and has wide applicability to both public and private sector projects. On the other hand, non-users of EVM often indicate that the method is hard to use (Fleming & Koppelman, 2000; Kirn, 2000).

The key concept is the earned value, EV, which converts project accomplishments from physical units of measure (e.g., miles of roadway or deliverables completed) to financial units (e.g., dollars or labor hours). EVM also defines the planned value (PV, the time-phased budget baseline) and the actual cost (AC, the cumulative cost spent to a given point in time to accomplish an activity, work package, or project). AC is determined as the cost to earn the related value.

### From EV to TV

A project's cost performance is measured by comparing EV to AC, while schedule performance is measured by comparing EV to PV. The schedule variance, SV, is a measure of the conformance of the actual progress to the planned progress: SV = EV – PV. A major criticism of the standard EVM is that the schedule variance is measured in cost units, not time. This issue has been addressed in two ways:

- Converting the SV into time units (Anbari, 2003)
- Measuring the time delay on the cumulative cost curve (Fleming & Koppelman, 2000)

The first of the above approaches involves defining the average AC per time period, called the spend rate (AC Rate), and the average PV per time period, called the planned value rate (PV Rate). PV Rate is defined as the baseline budget at completion (BAC) divided by the baseline schedule at completion (SAC): PV Rate = BAC/SAC. The usefulness of PV Rate is that it translates SV into time units: dividing SV by PV Rate yields the time variance, TV = SV / PV Rate (Anbari, 2003).
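As a worked illustration of this conversion, consider the following sketch; all figures are hypothetical:

```python
# Hypothetical project figures, for illustration only.
BAC = 1_000_000.0   # baseline budget at completion ($)
SAC = 20.0          # baseline schedule at completion (weeks)
EV = 350_000.0      # earned value at the status date ($)
PV = 400_000.0      # planned value at the status date ($)

pv_rate = BAC / SAC   # planned value rate: $50,000 per week
SV = EV - PV          # schedule variance, in dollars: -$50,000
TV = SV / pv_rate     # time variance: -1.0, i.e., one week behind schedule
```

The negative TV expresses the same shortfall as SV, but in weeks rather than dollars.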

In the second approach, TV is measured graphically by drawing a horizontal line from the intersection of the EV curve with the status date to the PV curve and reading the distance on the horizontal time axis (Fleming & Koppelman, 2000). Both of the above approaches have the desirable result of defining the schedule variance in time units, which is a more useful characteristic of a schedule over-run.

However, both of the above approaches suffer from a serious drawback. Both assume that all of the parameters are independent of time. For example, the PV Rate is calculated as the total budget divided by the total schedule (both at completion), and is assumed to be constant over the life of the project. When one divides the current SV (at time, *t*) by the PV Rate, one is assuming that the average PV Rate applies for all time.

The assumption that the parameters are independent of time is manifestly not true. This can easily be seen by examining the behavior of SV at the end of the project. As all of the activities are completed, the earned value approaches the planned value. More precisely, EV → PV, and SV = EV – PV → 0. SV is, therefore, inherently a function of time, and to emphasize the fact we will denote it as *SV(t)*. In fact, it is easy to see that all quantities in EVM are functions of time.

Accepting that the parameters are functions of time presents a dilemma: At the end of the project, the schedule over-run is a definite number, such as 6 weeks, and is manifestly a constant. (Of course, the schedule may under-run, but for simplicity we will refer to an over-run.) How do we estimate the constant, eventual, final schedule over-run from quantities that vary over time? An even more intriguing question is, Can we estimate the constant value for the final schedule over-run early in the project? That is, given some data on CV and SV, can we predict the eventual final schedule over-run? Finally, can we estimate the over-run in time units?

For a schedule over-run prediction to be reasonable, one should expect its estimate to remain constant over the life of the project. This suggests that TV should be a constant, in which case PV Rate should be a function of time, as indicated in equation (1):

PV Rate(t) = SV(t) / TV (1)

Knowing that SV(t) is inherently a function of time, then for TV to be a constant, PV Rate must also be a function of time. If so, then how does one calculate PV Rate(t)? An analogous argument shows that the cost variance must also be a function of time, CV(t).

In this paper, we will make some progress in answering these questions by developing a model that predicts the behavior of the EVM parameters over time. This can even be done early on in the project's life. What emerges is a prediction for the eventual schedule over-run for the entire project, and in time units.

### CPI and SPI

The cost performance index (CPI) is a measure of the conformance of the actual work completed (measured by its earned value) to the actual cost incurred: CPI = EV / AC. The schedule performance index (SPI) is a measure of the conformance of actual progress (earned value) to the planned progress: SPI = EV / PV. In both of the above formulas, a value of 1.0 indicates that the project performance is on target. When CPI or SPI are greater than 1.0, this indicates better-than-planned project performance, while CPI or SPI less than 1.0 indicates poorer-than-planned project performance. The formulas used to calculate the CPI and SPI indices are generally based on cumulative costs.

The inverses of the above formulas are used in forecasting (Anbari, 1980; Egan, 1982; Cioffi, 2002; Webster, 2002). Dividing the budget at completion by the current CPI gives a prediction of the final cost if performance continues at the same rate. A similar computation can be performed using the SPI.
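A worked illustration of these forecasting ratios, with hypothetical status figures:

```python
# Hypothetical status figures, for illustration only.
BAC = 1_000_000.0   # budget at completion ($)
SAC = 20.0          # schedule at completion (weeks)
CPI = 0.80          # current cost performance index
SPI = 0.90          # current schedule performance index

EAC = BAC / CPI     # cost estimate at completion if CPI persists: $1,250,000
TEAC = SAC / SPI    # time estimate at completion if SPI persists: ~22.2 weeks
```

Both forecasts assume the current index values persist for the remainder of the project, which is precisely the assumption questioned below.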

However, just as the schedule variance is a function of time, SV(t), by analogous reasoning, the schedule performance index must be a function of time also, SPI(t). And of course, this also means that the cost performance index must be a function of time as well, CPI(t).

Graphs of the variances and performance indices over time provide valuable information about trends in project performance. When corrective actions are implemented, the changes in the behavior of the indexes reveal the impact of the changes. Such graphs can be very effective in project reviews. However, examining and analyzing the changes in the parameters raises the question of how they change over time. For example, “What is the baseline change in CPI over time, and how does it compare to the measured performance?”

### Cost and Schedule Forecasting

The estimated cost to complete the remainder of the activities for a project is called the estimate to complete (ETC), while the estimate of the final cost at completion is called the estimate at completion (EAC). Methods for calculating EAC depend on the assumptions made about the future performance of the project versus the historical, established performance to date. The *PMBOK® Guide* provides three approaches, based on three different sets of assumptions:

- When the assumptions underlying the original estimate are flawed
- When past performance is not a good predictor of future performance
- When past performance is a good predictor of future performance

Many formulas have been proposed to calculate the EAC, and under a variety of assumptions (Anbari, 2003; Fleming & Koppelman, 2000; Kerzner, 2006). However, according to Anbari (2003), “EVM has not been widely used to estimate the total time at completion, total project duration, or schedule … based on actual performance up to a given point in the project.” Using reasonable assumptions, Anbari provided formulas for the project's time estimate at completion (TEAC) and time variance at completion (TVAC), based on the baseline schedule at completion (SAC) and the actual performance up to a given point in the project (Anbari, 2001, 2002).

The difficulty with all of the above formulas is that they assume that the values for CPI and SPI are constant. Even if one assumes that the current project performance is an excellent predictor of future performance (assumption #3), one still needs to assume that 1) the current values of CPI and SPI are constant, and 2) they are representative measures of the entire future performance of the project. When graphs of CPI and SPI are changing over time, which is the usual case, the critical question becomes “How do CPI and SPI evolve over time?”

The importance of this question can be seen by reviewing the behavior of the SPI. As noted by Fleming and Koppelman (2000) and Kerzner (2006), at the end of the project, the SPI always approaches 1.0. This is the simple result of the completion of all proposed deliverables; that is, as each activity completes, the earned value becomes equal to the planned value. This is true even if the project is late, in which case the SPI still approaches 1.0, but after the planned completion date. We would like to know how late the project is going to be by decoding the behavior of SPI(t) over time.

### Construction Labor Curves

We now turn to analyzing labor curves that are typically applied in the construction industry. Wideman (1994, 2001, 2004) provided labor curves for a profitable civil contract which was predominantly formwork and concrete placing. The data is redrawn in Exhibit 1, which shows a histogram of the production workforce over the 38-week project duration. There is an initial period of build-up, a period of peak loading, followed by a period of progressive demobilizing.

Allen (1979) suggested that a simple trapezoidal figure can be used as an excellent approximation to the actual labor loading. The trapezoidal profile, which is also shown in Exhibit 1, consists of a linear ramp-up to a peak after 50% of the schedule, a constant peak load ending after 75% of the schedule, and a ramp-down to the end. In Exhibit 1, the agreement between the actual project data and the approximate loading is quite striking. In fact, the estimate of the total labor from the trapezoidal curve is within 2% of the actual value.

Warburton (2008) developed a model using the equations for the trapezoidal curve, which is denoted as *c(t)*. The peak value of the labor curve is *P*, and the project ends at time *T*_{e}. The ramp-up spans the interval [0, *T*_{e}/2]. Using Allen's rule #5 (the peak labor loading occurs from 50% to 75% of the project), the constant section spans the interval [*T*_{e}/2, 3*T*_{e}/4], and the ramp-down spans [3*T*_{e}/4, *T*_{e}]. Thus, the equation for the trapezoidal curve is (Warburton, 2008):

c(t) = 2Pt/T_{e} for 0 ≤ t ≤ T_{e}/2

c(t) = P for T_{e}/2 < t ≤ 3T_{e}/4 (2)

c(t) = 4P(T_{e} – t)/T_{e} for 3T_{e}/4 < t ≤ T_{e}
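The piecewise curve can be sketched directly; the parameter values below are illustrative, not the Wideman data:

```python
def c(t, P=100.0, Te=38.0):
    """Trapezoidal labor curve: linear ramp-up to the peak P at Te/2,
    constant until 3*Te/4, then a ramp-down to zero at Te.
    P and Te are illustrative values."""
    if t < 0 or t > Te:
        return 0.0
    if t <= Te / 2:
        return 2 * P * t / Te
    if t <= 3 * Te / 4:
        return P
    return 4 * P * (Te - t) / Te

# The peak is reached at half the schedule and held until three quarters.
assert c(19.0) == 100.0 and c(28.5) == 100.0 and c(38.0) == 0.0
```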

### Planned Value

The instantaneous rate at which activities are planned to be completed is defined as the instantaneous planned value, *PV*_{I}(*t*):

PV_{I}(t) = c(t) (3)

As activities are staffed, there is no guarantee that they are completed on time, and so equation (3) represents the planned completion of activities. Therefore, the curve for *PV*_{I}(*t*) follows the same curve as the instantaneous labor rate, *c(t)*. Traditionally, the Planned Value, *PV*(*t*), is defined as the cumulative sum of the previous, instantaneous *PV*_{I}(*t*), and so is defined as:

PV(t) = ∫_{0}^{t} PV_{I}(t′) dt′ (4)

The instantaneous and cumulative planned values are plotted in Exhibit 2 (the dotted lines). The traditional cumulative version (Exhibit 2b) shows the typical “S” curve associated with cumulative costs. According to investigations by Singh and Lakanathan (1992), the application of “S curves” for cash flow projections can achieve accuracies of approximately 88–97%, and the shape of the S-curve budget versus time is a quick way to judge performance. This is precisely the objective of macro estimation methods: an early estimation of project performance from overall system parameters.
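The cumulative curve can be sketched numerically from the instantaneous one; the parameter values here are assumed for illustration:

```python
P, Te = 100.0, 38.0   # illustrative peak staffing and schedule length

def c(t):
    # trapezoidal instantaneous labor rate
    if t < 0 or t > Te:
        return 0.0
    if t <= Te / 2:
        return 2 * P * t / Te
    if t <= 3 * Te / 4:
        return P
    return 4 * P * (Te - t) / Te

def pv(t, dt=0.001):
    # cumulative Planned Value: midpoint-rule integral of c from 0 to t
    steps = round(t / dt)
    return sum(c((k + 0.5) * dt) for k in range(steps)) * dt

# pv traces the familiar "S" curve; its final value is the area of the
# trapezoid, 5*P*Te/8 = 2375 for these parameters.
```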

The Warburton model assumes that, as the project proceeds, not all activities will be completed on time: a fraction of the activities that are supposed to be completed at time, *t*, are rejected for some reason and require extra work. Project managers should be able to estimate this parameter early on in the project, as the initial activities are completed and any over-runs noted. Software project data suggest that error rates remain constant over the life of a project, so the availability of an early estimate of the rejection rate is a reasonable assumption.

The completion of rejected activities will be delayed, and it is assumed that this delay is a constant amount, *τ*. That is, each activity that is rejected is delayed the same amount. Unlike the rejection rate, there is little data on the average delay experienced by individual activities. Therefore, the reasonableness of this assumption needs to be evaluated by comparing real project data with the model.

### Actual Cost

The instantaneous Actual Cost, *AC*_{I}(*t*), includes the work performed on both the activities that were successfully completed and those that were rejected. At time *t*, the activities that were rejected at time *t* – τ will be successfully completed. Therefore, the instantaneous Actual Cost at time *t* is:

AC_{I}(t) = c(t) + α c(t – τ) (5)

The parameter, *α*, represents the average product of the rejection rate and the work required to fix the problem. We see from equation (5) that *α* is the fractional extra work required for each planned activity. Using this model, Warburton calculates the cumulative Actual Cost, *AC*(*t*), by integrating, exactly as in equation (4). The cumulative Actual Cost as a function of time is shown in Exhibit 2 (solid line). The Actual Cost increases as the cost of the rejected activities accumulates. The Actual Cost as *t* → ∞ represents the total cost of the project, which is (Warburton, 2008):

AC(t → ∞) = (1 + α) PV(t → ∞) = (1 + α) BAC (6)

This is reasonable, because it says that when the cost over-run in each activity is on average a fixed percentage, the end result is a cost over-run for the entire project by the same percentage. It is interesting to note that in this model, the total project cost does not depend on the time delay, *τ*, but only on the fraction of activities that were rejected.
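This over-run result can be checked numerically. A sketch, with assumed values for α, τ, and the trapezoidal parameters:

```python
alpha, tau = 0.10, 2.0   # assumed rework fraction and repair delay
P, Te = 100.0, 38.0      # illustrative labor-curve parameters

def c(t):
    # trapezoidal planned labor rate
    if t < 0 or t > Te:
        return 0.0
    if t <= Te / 2:
        return 2 * P * t / Te
    if t <= 3 * Te / 4:
        return P
    return 4 * P * (Te - t) / Te

def ac_i(t):
    # planned work at t, plus rework of the activities rejected at t - tau
    return c(t) + alpha * c(t - tau)

dt = 0.001
planned = sum(c((k + 0.5) * dt) for k in range(round(Te / dt))) * dt
actual = sum(ac_i((k + 0.5) * dt) for k in range(round((Te + tau) / dt))) * dt
# actual / planned is 1 + alpha: the over-run fraction, independent of tau
```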

### Earned Value

The Earned Value is the value, or cost, of the successfully completed activities. In each interval, a fraction, *α*, of the activities is rejected, so the remaining fraction (1 – *α*) is completed and earns value. Also, previously rejected activities that are completed in this interval earn value. Therefore, the instantaneous Earned Value is:

EV_{I}(t) = (1 – α) c(t) + α c(t – τ) (7)

Exhibit 3a shows the instantaneous Earned Value (solid line) as a function of time as compared to the Planned Value (dotted line). The EV is initially delayed relative to the planned value, but eventually catches up. The earned value is slightly delayed at the end, representing a schedule over-run.

The cumulative Earned Value (EV) is found by the same process as for the cumulative Actual Cost—integrating. Exhibit 3b shows the cumulative Earned Value as a function of time (solid line) as compared to the cumulative Planned Value (dotted line). Both curves approach the same value as *t* → ∞ because the total number of activities in the project remains the same. The earned value is delayed because some of the activities were rejected and credit for their work was only earned when they were finally completed after the delay time.

If extra activities had been added (scope creep occurred), then the earned value would approach a higher level than the planned value. Since the number of activities has not changed, the total earned value must be the same as the total planned value. This is confirmed in the Warburton model by the following relations, which show the Planned and Earned Values as *t* → ∞:

PV(t → ∞) = EV(t → ∞) = BAC (8)
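This equality can also be checked numerically; again, the parameter values are illustrative:

```python
alpha, tau = 0.10, 2.0   # assumed rework fraction and repair delay
P, Te = 100.0, 38.0      # illustrative labor-curve parameters

def c(t):
    # trapezoidal planned labor rate
    if t < 0 or t > Te:
        return 0.0
    if t <= Te / 2:
        return 2 * P * t / Te
    if t <= 3 * Te / 4:
        return P
    return 4 * P * (Te - t) / Te

def ev_i(t):
    # value earned by first-time completions plus late rework completions
    return (1 - alpha) * c(t) + alpha * c(t - tau)

dt = 0.001
planned = sum(c((k + 0.5) * dt) for k in range(round(Te / dt))) * dt
earned = sum(ev_i((k + 0.5) * dt) for k in range(round((Te + tau) / dt))) * dt
# earned equals planned: the delayed activities eventually earn full credit
```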

### CPI(t) and SPI(t)

The Cost Performance Index (CPI) is defined as the ratio of Earned Value to Actual Cost, while the Schedule Performance Index (SPI) is defined as the ratio of cumulative Earned Value to cumulative Planned Value (PMI, 2000). Both CPI and SPI are traditionally defined in terms of the cumulative values. However, from equations (3)–(5), one can immediately see that these quantities are functions of time, and so:

CPI(t) = EV(t)/AC(t), SPI(t) = EV(t)/PV(t) (9)

The behavior of CPI(t) and SPI(t) is shown in Exhibit 4. The performance over time of the two indexes is different and quite interesting. CPI(t) immediately falls to 1 – *α* because in the first time interval, some activities are rejected. As the rejected activities are completed after time *t = τ*, credit is earned, and the CPI rises slightly, but it remains low over the entire life of the project. At the end of the project, CPI(t) does not approach 1.0; it approaches the value shown in equation (10), which depends directly on the rejection rate.

The behavior of SPI(t), however, is quite different. It also falls immediately to 1 – *α*, but it eventually climbs back to 1.0 at the end of the project, as it should. However, SPI(t) reaches the value 1.0 only after the projected scheduled completion. The actual values at the end of the project are:

CPI(t → ∞) = 1/(1 + α), SPI(t → ∞) = 1 (10)

A criticism of the use of SPI is that since it approaches 1.0 at the end of the project, it is not useful over the last third of the project (Corovic, 2007; Fleming & Koppelman, 2003; Lipke, 2003, 2004; Henderson 2004; Vandevoorde & Vanhoucke, 2006). However, Exhibit 4 shows precisely how and when SPI(t) approaches 1.0. Knowing the precise time-dependent behavior of SPI(t) somewhat blunts this criticism, because with the above model one can compare the actual performance of SPI(t) to the projected performance. The criticism that the value of SPI(t) is in monetary units is still valid, and we will address this later.
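Under the model's assumptions, the full time histories of CPI(t) and SPI(t) can be simulated; a sketch with illustrative parameters:

```python
alpha, tau = 0.10, 2.0   # assumed rework fraction and repair delay
P, Te = 100.0, 38.0      # illustrative labor-curve parameters

def c(t):
    # trapezoidal planned labor rate
    if t < 0 or t > Te:
        return 0.0
    if t <= Te / 2:
        return 2 * P * t / Te
    if t <= 3 * Te / 4:
        return P
    return 4 * P * (Te - t) / Te

dt = 0.01
pv = ac = ev = 0.0
cpi, spi = [], []
t = 0.0
while t < Te + tau:
    mid = t + dt / 2
    pv += c(mid) * dt
    ac += (c(mid) + alpha * c(mid - tau)) * dt
    ev += ((1 - alpha) * c(mid) + alpha * c(mid - tau)) * dt
    cpi.append(ev / ac)
    spi.append(ev / pv)
    t += dt
# Early on (t < tau) both indices sit at 1 - alpha; at the end, CPI(t)
# approaches 1/(1 + alpha) while SPI(t) climbs back to 1.0, but only
# after the scheduled completion Te.
```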

### Software Labor Curves

Putnam (1978) pioneered the use of the Norden-Rayleigh curve to describe the number of people working on complex software projects. The Putnam-Norden-Rayleigh curve, now known as PNR, appears to apply to many types of software projects, particularly embedded software systems (Warburton, 1983). The number of people working on a project as a function of time, *m(t)*, is given by:

m(t) = (K/T²) t exp(–t²/2T²) (11)

*T* is a constant that denotes the time at which the number of people is at the maximum—the labor peak. *K* is a constant that can be determined by the condition that the total cost of the project, i.e., the total number of man-years, is *N*. A comparison of the PNR and trapezoidal labor curves is shown in Exhibit 5. The instantaneous labor rates are shown in Exhibit 5a, while the cumulative values are shown in Exhibit 5b, both of which show the typical “S” shape. The Warburton (2008) model can be applied to the PNR curve exactly as was done for the trapezoidal labor curve. The planned value, actual cost, and earned value are computed using integrals as before, just replacing the *c(t)* of the trapezoidal curve with *m(t)* from equation (11).
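A sketch of the PNR curve, using one common parameterization in which staffing peaks at *t = T* and the total effort integrates to *K* (the numerical values are assumed for illustration):

```python
import math

K, T = 2375.0, 15.0   # assumed total effort and time of peak staffing

def m(t):
    # one common parameterization of the Norden-Rayleigh staffing curve:
    # staffing peaks at t = T, and total effort integrates to K
    return (K / T**2) * t * math.exp(-t**2 / (2 * T**2))

# the staffing peaks at t = T ...
assert m(T) > m(T - 0.5) and m(T) > m(T + 0.5)
# ... and the cumulative labor (an "S" curve) approaches K
dt = 0.01
total = sum(m((k + 0.5) * dt) for k in range(round(6 * T / dt))) * dt
```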

Exhibit 6 compares the curves that result for *CPI(t)* and *SPI(t)* from the two models. The parameters have been selected so that both models have the same total cost and the same labor peak. Despite the apparently quite different labor curves, the behavior of *CPI(t)* and *SPI(t)* over time is remarkably similar in the two cases—see Exhibit 6.

Christian and Kallouris (1991) established that for most projects the typical cumulative labor profile is an “S” curve. This suggests an intriguing and potentially significant property of the Warburton model:

*The behavior of the CPI(t) and SPI(t) curves over time for most projects should be similar to those shown in Exhibit 6*.

Therefore, as long as a project's labor curve is the usual “S” shape, the conclusion is that the shape of the *CPI(t)* and *SPI(t)* curves will follow that shown in Exhibit 6.

### Early Estimation of Cost and Schedule Over-runs

We now turn to estimating, in the early stages of a project, the final values of the project's cost and schedule over-runs. In the early stages of a project, the labor curve is linear, and given by the first part of equation (3). In this region, the planned value, actual cost, and earned value are easily computed, and are all functions of time:

PV(t) = P t²/T_{e} (12)

AC(t) = P t²/T_{e} + α P (t – τ)²/T_{e} (13)

EV(t) = (1 – α) P t²/T_{e} + α P (t – τ)²/T_{e} (14)

(The terms in (t – τ) are zero for t < τ.)

### Calculation of the Cost Variance, *CV(t)*

Using the standard definition of *CV(t)*, we have from equations (14) and (13):

CV(t) = EV(t) – AC(t) = –α P t²/T_{e} (15)

Rearranging equation (15) gives the value for the cost over-run parameter, α, which is a constant:

α = –CV(t) T_{e}/(P t²) = –2 CV(t)/(S_{PV} t²) (16)

To calculate α, we need the ratio of *P* to *T*_{e}. This ratio can be found by noting that the slope of the Planned Value labor curve is 2*P*/*T*_{e}. Therefore, in the early stages of a project we can estimate the slope of the planned value curve, which we denote as *S*_{PV} = 2*P*/*T*_{e} in equation (16). So we can calculate *α* from the time-dependent cost variance, *CV(t)*, according to equation (16). Notice that the delay parameter, τ, does not occur here. We emphasize that α is a constant, and is a global property of the entire project.
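The recovery of α from an early cost variance can be sketched as follows. Here the “observed” CV(t) is generated from the model itself, so the estimate is exact; real data would be noisy:

```python
P, Te = 100.0, 38.0   # illustrative labor-curve parameters
alpha_true = 0.10     # the value we hope to recover

S_pv = 2 * P / Te     # slope of the early, linear planned-value curve

def cv_observed(t):
    # "observed" cost variance, generated from the model itself
    # (real data would be noisy): CV(t) = -alpha * P * t**2 / Te
    return -alpha_true * P * t**2 / Te

t = 5.0               # an early status point, well inside the linear region
alpha_est = -2 * cv_observed(t) / (S_pv * t**2)
# alpha_est recovers alpha_true exactly for noise-free data
```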

### Calculation of the Schedule Variance, *SV(t)*

Using the standard definition of *SV(t)*, we have from equations (14) and (12):

SV(t) = EV(t) – PV(t) = (α P/T_{e})[(t – τ)² – t²] = Q(τ² – 2tτ) (17)

which, for a measured value of *SV(t)*, is a quadratic in τ:

Qτ² – 2Qtτ – SV(t) = 0 (18)

For convenience, we have defined the constant Q = αP/T_{e}, which depends on α, known from equation (16). The quadratic in equation (18) has a straightforward solution:

τ = t – √(t² + SV(t)/Q) (19)

where the negative root is chosen so that 0 < τ < t (for a late project, SV(t) is negative, so the square root is real and less than t).

Therefore, we can calculate the schedule over-run, τ, from *SV(t)*. We emphasize again that τ is a constant, and is the prediction of the total schedule delay for the project; that is, it is a global property of the entire project and does not depend on time.

In the real world, there will of course be noise in the data. However, if we can obtain reasonable values for CV(t) and SV(t), then we can estimate values for α and τ, and hence the cost and schedule over-runs for the project. Both of these estimates will remain constant over time, and so are genuinely useful predictions of the cost and schedule over-runs. Further, the schedule over-run parameter is in time units.
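The corresponding recovery of τ from SV(t) can be sketched the same way, again with a model-generated, noise-free “observation”:

```python
import math

P, Te = 100.0, 38.0          # illustrative labor-curve parameters
alpha, tau_true = 0.10, 2.0  # rework fraction and the delay to recover

Q = alpha * P / Te           # the convenience constant that depends on alpha

def sv_observed(t):
    # "observed" schedule variance, generated from the model itself
    # (real data would be noisy): SV(t) = Q*(tau**2 - 2*t*tau)
    return Q * (tau_true**2 - 2 * t * tau_true)

t = 5.0                      # an early status point (tau < t < Te/2)
tau_est = t - math.sqrt(t**2 + sv_observed(t) / Q)
# tau_est recovers tau_true exactly for noise-free data
```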

### An Example of the Estimation of CPI(t) and SPI(t)

We now show how the model is used in practice. In the early stages of a project, the planned value curve is known. As the project proceeds, and deliverables are accumulated, one can determine the actual cost and assess the corresponding earned value. Exhibit 7 shows three samples of such a process, at times 5, 7, and 9.

In Exhibit 7a, we have used instantaneous values, while in Exhibit 7b we have plotted the same data using cumulative values. One characteristic that is immediately obvious from Exhibit 7 is that the calculation is much easier to accomplish using the instantaneous values. A number of features of the instantaneous representation make it more practical to use: the linearity of the curves makes the data easier to analyze (it is always harder to estimate quantities from curves), and the greater deviations of the actual costs and earned values on the linear chart make them easier to recognize and compute.

Once we have the values for the planned costs, actual costs, and earned values, it is straightforward to compute the values for *CV(t)* and *SV(t)* at each of the points in Exhibit 7. Then, using the slope of the planned value curve, *S*_{PV}, we use equation (16) to calculate *α*, and equation (19) to calculate τ. At this point, we have estimates for the eventual total cost over-run and the total schedule delay.

### Estimating *CPI(t)* and *SPI(t)*

Exhibit 8 shows the estimates for the values of *CPI(t)* and *SPI(t)* corresponding to the same points as in Exhibit 7. The dotted lines show how *CPI(t)* and *SPI(t)* change over time, even when the over-runs in both cost and schedule are in fact constant. One can see that, based on the three estimated values, it might be hard to decide what the overall CPI and SPI for the project might be. The inherent curvature of the lines would make it difficult to predict the future values of these quantities. Only through the knowledge of the model presented here do we understand that the curves in *CPI(t)* and *SPI(t)* do in fact represent constant values for the over-run in both cost and schedule.

Looking at Exhibit 8, one sees that the three estimates for *CPI(t)* appear to be rising. An inexperienced project manager might interpret this to mean that things are improving. In fact, nothing has changed, and the project has an over-run in both cost and schedule. The “apparent” slight improvement over times 3–9 is due to the fact that some activities were not completed during times 0–3. These have now been completed and credit is being earned for them. However, a constant fraction of activities are in fact delayed in each time interval, and *CPI(t)* will soon level off as seen in Exhibit 6.

On the other hand, *SPI(t)* rises steadily, eventually reaching 1.0. However, as seen in Exhibit 6, *SPI(t)* will only reach 1.0 (denoting the completion of all activities) after the scheduled completion time. The project is running late.

### Conclusions

EVM provides project managers with triggers or early warning signals of project trouble. Such indicators have been found to be reliable as early as 15% into a project. Better planning and resource allocation associated with the early periods of a project might be the cause of this reliability (Fleming & Koppelman, 2000).

However, these warning signs are time-dependent, and must be interpreted with great care. Until now, there was no way to determine this time dependence. The extension of the Warburton model provided here establishes that the behavior of *CPI(t)* and *SPI(t)* over time can be calculated. Labor curves in two diverse industries (construction and software) produce very similar curves for *CPI(t)* and *SPI(t)*, leading to the tentative conclusion that *CPI(t)* and *SPI(t)* are relatively independent of the precise form of the project's labor curve. This agrees with Christian and Kallouris's (1991) observation that most project curves follow “S” shapes. This suggests that the model has wide applicability.

Exhibit 8 shows that estimates of *CPI(t)* early on in a project are inherently a function of time. An inexperienced project manager might interpret this to mean that while the project did not start well, the rise in CPI(t) indicates that improvements have been made. In fact, nothing has changed, and the project has an over-run in both cost and schedule. Exhibit 8 also shows that *SPI(t)* is rising steadily, which again might be interpreted as an improvement in the project's progress. This is not so, as the project is running late. Exhibit 8 shows that interpreting measured values for *CPI(t)* and *SPI(t)* is tricky unless one appreciates their inherent time-dependent behavior.

The Warburton model requires two constants: the cost over-run rate and the time delay to repair the rejected activities. Because not all activities will be completed on time, a constant fraction of them are assumed to be rejected and require extra work. By plotting the instantaneous labor curve, project managers should be able to estimate the required parameters early on in the project, as the first few activities are completed.

More complex issues need to be included in the model and are topics for future research. The most interesting issue is that of “scope creep,” which increases the number of activities and so increases both the actual cost and the earned value, which in turn affects both *CPI(t)* and *SPI(t)*. Also, the model assumes that activities are independent, which is clearly not true, as the critical path depends on the connection between activities. This might be addressed by assuming that the delay in one activity results in the delay of other activities further down the path. This should magnify the effect of the schedule delay and will presumably make the model more sensitive to the schedule delay parameter.

A project manager's toolkit should include a method for determining how particular values for *CPI(t)* and *SPI(t)*, measured at some definite point in time, can be used to estimate the eventual cost over-run and the schedule delay. Despite the simplicity of the Warburton model, a number of interesting features emerged. While much work remains to be done, it appears that the model is useful and shows some promise.