Project management theory

deriving a project's cost and schedule from its network structure

Roger D.H. Warburton, PhD, PMP

Associate Professor, Department of Administrative Sciences, Metropolitan College, Boston University

Denis F. Cioffi, PhD

Associate Professor, Department of Decision Sciences, The George Washington University School of Business

It is rarely explained that the ubiquitous estimate at completion (EAC) assumes a linear cumulative labor curve. This is an example of Koskela and Howell's (2002) criticisms that project management is a "narrow" theory (i.e., it is linear) and that it is "implicit" (i.e., the linearity is rarely acknowledged). We address these issues by proposing a theory that begins with the explicit assumption that activities are related by sequential dependencies, which is the traditional assumption in the project network diagram. By adding a few reasonable assumptions, what emerges is a practically useful relation between the project's network structure and the labor rate profile, which becomes a fundamental project observable, a term we use in the formal scientific sense. The integration of the labor rate results in a familiar S-shaped curve, which provides a much more realistic project management model. We show how to characterize the parameters of the S-curve labor profile in terms of the cost and schedule. We validate the predictions of our theoretical model against a real-world project, and the agreement between theory and practice provides preliminary empirical validation of the theory and shows that it is possible to create a theory of project management that has immediate practical benefits. Finally, we explore some of the implications of the theory.

Keywords: project management theory; S-curves; labor profiles; cost and schedule prediction.

1 Introduction

The definition of a project explicitly includes the idea that each one is unique, and yet concepts such as the critical path method (CPM) and earned value management (EVM) are claimed to have universal applicability. There is thus an unstated assumption that there exists some sort of common, underlying theory so that CPM and EVM have legitimate, universal applicability. As Koskela & Howell (2002) observe, however, if there is such a theory, it is "implicit and narrow"(p. 293).

The estimate at completion (EAC) is a good example: EAC is ubiquitous in the literature, but it is rarely explained that it assumes a cumulative labor cost profile that is linear over time. This is a good example, therefore, of the Koskela & Howell (2002) criticism: Project management is a narrow theory (only linear), and it is implicit because the assumptions, such as linearity, are rarely explicitly acknowledged.

In practice, EAC has been shown to be reliable and work well (Vanhoucke & Vandevoorde, 2006; Christensen, 1993). However, why should such a narrow (i.e., linear) theory work so well when real-world labor curves are usually presented as S-shaped? Further, the problems of applying EVM to schedule prediction are well known (Marshall, 2006; Book, 2003; Book, 2006), and several efforts have arisen to address this issue (Lipke, 2003; Kerzner, 2006). For example, the earned schedule formulae are empirical and, to our knowledge, there are not even linear models to support the earned schedule estimation formulae. For schedule estimation, therefore, one might argue that there is not even an "implicit and narrow" theory.

We address these issues by proposing a formal theory and begin with the assumption, which was highlighted by both Turner (1993) and Koskela & Howell (2002), that project activities are related by sequential dependencies. We add this inter-relatedness of activities to the definition of a project, which we consider as a system. From this we inherit theoretically powerful concepts, such as observables, which, we propose, in the project management context are quantities such as the cost and schedule.

Koskela & Howell (2002) also point out that project management is dominated by management as planning, the dispatching model, and the thermostat model. These criticisms are well known (Johnston & Brennan, 1996), and one of the most compelling is that the emphasis is on planning, with little offered on execution. The traditional relation between planning and execution is through authorizations (the dispatching model), a concept that explicitly separates planning from execution and feels rather more like the scheduling of manufacturing, which is routine and inherently not a project.

Also, project control is reduced to a primitive thermostat model, in which deviations from set parameters are corrected. This requires several assumptions (Hofstede, 1978): that there is a performance standard that can be defined and measured; that there is a causal relation between management actions and project outcomes; and that management actions can return the project to the desired state.

However, without a theory that explains the explicit relations between observables, there is no guarantee that any specific action by a project manager will fix, or even affect, the observable in question. Thus, a shortcoming of the thermostat model is that without an underlying theory, the causes of problems remain unknown. A more sophisticated model should be based on the scientific method. We address these issues by proposing a formal theoretical foundation for projects.

Both Turner (1993) and Koskela and Howell (2002) observe that there is little to report on theories of project management and that either there are none, or they are not believed to be significant. In this paper, we address these criticisms directly by exploring the development of a theory of project management. We first motivate the development of such a theory by comparing the state of the art in project management with previous successful theories in other disciplines.

We begin with the explicit assumption that project activities are related by sequential dependencies, which is the traditional assumption. What emerges is a practically useful relation between the project's network structure and the labor rate profile. Thus, we are able to establish a relation between the theoretical structure of the project and practical quantities, such as the cost and schedule.

Different theoretical structures behave differently over time. Therefore, matching different project types to appropriate theoretical models should lead to a better understanding of project behavior over time.

We derive a fundamental relation between the project network structure and the labor rate profile, which then becomes a legitimate project observable. The labor rate profile is the basis of the cost and schedule and, hence, project management. The integration of the labor rate results in a familiar S-shaped curve, which provides a much more realistic project management model than the current linear version.

A theme of project management is that change is inevitable. We propose that real-world changes during execution occur in the activities and their inter-connections. That is, changes occur in the project's network structure. Therefore, updates to the project plan have to be made manually, derived after-the-fact from the updated network. It is no wonder that plans are hard to maintain. Providing a project management theory, therefore, has the potential to light the way toward better tool building.

The structure of the paper is as follows. After a review of the relevant literature, we develop an analogy between other theories and project management (section 2). We briefly describe the Putnam-Norden-Rayleigh (PNR) model, which defines project observables (section 3). Then we present some simple examples that show how the labor profile can be derived from the network structure (section 4).

Using a few simple, reasonable assumptions about the relations between activities, we derive an S-shaped labor curve from a general network structure (section 5). We show how to characterize the resulting S-curve's parameters in terms of observable project characteristics. We compare the predictions of our theoretical model to a real-world project. Agreement between theory and real-world practice provides a preliminary empirical validation of the theory and adds weight to the idea that it is possible to create a theory of project management that has immediate practical benefits. Finally, in section 6, we explore some of the implications of the theory.

1.1 Relevant Literature

From a theoretical perspective, we are aware of the following efforts that have contributed to understanding the structural foundation of projects:

  • The Putnam-Norden-Rayleigh (PNR) Model. Putnam (1978) proposed an analytical formula for labor cost rates over time for software projects, which is now known as the PNR model. The PNR model is well-established, is realistic, is not too complex and is still used on software projects to estimate costs and schedules and to monitor and control them during execution (Moore, 1999; Boehm, 1981). However, nothing about the PNR model is specifically tied to software projects.
  • The Parr Model. Parr (1980) developed what might be described as the first foundational theory of software development. A software project is assumed to start out with a fixed number of problems to be solved, and it ends when all problems are solved. The problems are nodes that can have dependent nodes that cannot be started until the predecessor nodes are complete. This model is not specific to software projects because the relations between the nodes are simply finish-to-start constraints. From these assumptions, Parr derived a labor cost curve.

Parr's elegant and compelling model blazed a trail toward an understanding of the importance of the network structure and its relation to observable project parameters.

In their original form, however, both models have problems: The PNR model has a suspect underpinning (the so-called linear learning assumption), and the Parr model's parameters are not obviously related to observable project characteristics.

Vanhoucke and Vandevoorde (2006) reviewed the accuracy of forecasting methods, concluding that graphs of cost performance index (CPI) and schedule performance index (SPI) over time provide valuable information about trends in project performance. They then invoked the assumption that when corrective managerial actions are implemented, the changes in the behavior of the indexes reflect the impact of those management actions (Vanhoucke & Vandevoorde, 2007).

Recently, Vanhoucke (2012) used Monte-Carlo simulation on fictitious and empirical project data to attempt to understand why EVM works so well for some projects and fails so miserably on others. Interestingly, Vanhoucke established that, because of merge bias, networks with greater parallelism show more variability in their results than networks with a more serial structure. Elshaer (2013) seemed to confirm this by suggesting that schedule prediction often fails when incorrect warnings emerge from non-critical activities. That is, more parallelism means more non-critical activities and, therefore, more activity delays that are not on the critical path and that will not affect the schedule unless they end up on the critical path.

The Oxford English Dictionary project appeared to follow the same labor cost curve for decades, suggesting that there was indeed an underlying and, perhaps, fundamental structure to the project and that it remained constant over the project's life. Interestingly, industrial data strongly suggests that project properties, e.g., error rates, do indeed remain constant over long project times (McGarry, Pajerski, Page, Waligora, Basili & Zelkowitz 1994). This suggests that there may be observables that are predictable and measurable over the life of the project.

2 Project Management Theory

We believe it is useful to compare the development of a project management theory with another successful theory, that of thermodynamics. There are parallels between the two disciplines and, reasoning by analogy, we will be able to make some comments about the state of the art of project management.

With the assumption of observable, measurable parameters that remain predictable over the life of the project, management assumes it can predict behavior from initial data. Given that assumption, a fundamental theory typically arises in three stages: First, an understanding of observable quantities grows and precise definitions emerge. Next, simple static relations between the observables are discovered and, finally, dynamic laws are proposed and validated. These dynamic laws explain the behavior of the observables of the system over time.

A simple example from operations is queuing theory (Hopp & Spearman 2011). Examples of observables are the number of people in the queue, the average time spent in the queue, the number of servers, etc. It is well-known that the single queue, multi-server arrangement is the one in which the people in the queue spend, on average, the least time. While operations theories are useful guides, they suffer from a mismatch in goals with project management. Much of operations theory is concerned with manufacturing, which is not a project. Therefore, we need a theory better matched to the unique, dynamic, and changeable nature of projects.

We suggest that a more appropriate analog might be the development of thermodynamics in the nineteenth century. The first formal relation between the observables was Boyle's Law (1676), followed by Charles's Law (1780s). These represent the second stage in the development of a theory, the discovery of static relations between observables.

The theory became dynamic with the invention of the steam engine, which was patented by James Watt in 1781, but its principles were not understood until the mid-nineteenth century when the development of a set of dynamic observables (e.g., work and entropy) stimulated the key idea: efficiency depends on the temperature difference between the engine and its environment. Only after that understanding does one know which variables are important and which actions will get the system back to its desired state.

This is schematically illustrated in Figure 1. The second law of thermodynamics (written below the steam engine in the illustration) establishes that the essential quantity in the efficient dynamic control of a steam engine is the temperature difference. Therefore, one knows which variables are important and which actions will efficiently get the system back to its desired state.

The final stage in the development of thermodynamics came at the end of the nineteenth century when statistical mechanics emerged and provided the relation between the underlying microscopic structure (the gas molecules) and the system's observable macroscopic parameters, i.e., temperature.

Figure 1: Thermodynamics allows for the efficient control of engines, which is described by the second law. A system's observable is the actual temperature, Ta. The analogous project management system is the network, whose observable is represented by the actual cost, Ta. In both cases, the thermometer represents the idea that observables are measurable and are a basis for controlling the system.

The analogous ideas in the project management domain are shown on the right hand side of Figure 1. While we are not proposing anything as grand as the thermodynamics of project management, we can use it as a guide and begin to formulate equivalent concepts.

First, we require a definition of the observables associated with a project, and likely candidates are the cost and schedule. These are indeed macroscopic variables of interest and are measurable but, as yet, we have no theory to justify the assertion that they are the relevant parameters for managing a project. In section 3, we show that the PNR model establishes that the labor cost is, indeed, an observable in the formal sense.

Next, we propose that the architecture of the project, as represented by the network diagram on the right hand side of Figure 1, is the analog of the thermodynamic system. According to the Oxford English Dictionary, the definition of a system is: A set or assemblage of things connected, associated, or interdependent, so as to form a complex unity. This definition is reasonable because a project consists of interrelated activities that form an entity, i.e., the network structure.

According to the standard project management definition, activities consume time, and so, formally, the network is defined to consist of the set of activities, their execution time (planned and actual), their links and interdependencies. There may also be hierarchical relations between activities, i.e., modules may be made up of several sub-components.
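To make this definition concrete, the following is a minimal sketch (our illustration, not code from the paper) of a project network as just described: a set of activities, each with planned and actual execution times, connected by finish-to-start predecessor links. The class and field names are ours.

```python
# Minimal sketch of the network definition above (illustrative names, not from the paper).
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Activity:
    name: str
    planned_duration: float                  # planned execution time, e.g., in weeks
    actual_duration: Optional[float] = None  # filled in during execution
    predecessors: List[str] = field(default_factory=list)  # finish-to-start links

@dataclass
class ProjectNetwork:
    activities: Dict[str, Activity] = field(default_factory=dict)

    def add(self, activity: Activity) -> None:
        self.activities[activity.name] = activity

    def descendants(self, name: str) -> List[str]:
        """Activities that can only start after `name` is complete."""
        return [a.name for a in self.activities.values() if name in a.predecessors]

net = ProjectNetwork()
net.add(Activity("design", planned_duration=4))
net.add(Activity("build", planned_duration=8, predecessors=["design"]))
net.add(Activity("test", planned_duration=3, predecessors=["build"]))
print(net.descendants("design"))  # ['build']
```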

3 PNR Defines Observables

Putnam pioneered the idea of using labor rate profiles to describe software projects (Putnam, 1978). In the PNR model, a project is the completion of a fixed number of activities, and n(t) denotes the cost of the activities completed at time t, with the total cost of the project denoted by N. The completion of the project is simply the exhaustion of the activities to work on, so at the end of the project, n(t) → N.

One can also write the PNR model in terms of the number of activities completed rather than their cost, and one can easily change back and forth between these quantities—see Cioffi (2006a, 2006b). As the project proceeds, the number of activities may change as higher level activities are broken down into smaller, more manageable entities. However, we assume no scope creep, so the total cost does not change and N is a constant.

In the PNR model, the labor cost rate is:

$$p_{\mathrm{PNR}}(t) = \frac{N\,t}{T_p^{2}}\,\exp\!\left(-\frac{t^{2}}{2T_p^{2}}\right) \tag{1}$$

where Tp is the time of the peak in the labor rate curve, see Figure 2. The number of activities, N, is directly related to the cost. Therefore, without loss of generality, we can interpret the number of activities, N, as the total cost. The cumulative cost over time is given by the integral of (1).

$$C_{\mathrm{PNR}}(t) = \int_0^{t} p_{\mathrm{PNR}}(t')\,dt' = N\left[1 - \exp\!\left(-\frac{t^{2}}{2T_p^{2}}\right)\right] \tag{2}$$

At the peak in the labor rate curve, t = Tp, the cumulative labor is CPNR(Tp) = 0.39N, which is approximately 40% into the project. The end of the project is at approximately 2.5 Tp. Therefore, the two observables in the PNR model (N and Tp) are directly related to the most important practical project quantities, the cost and schedule.
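As an illustration (not part of the original paper), these two PNR facts can be checked numerically from equations (1) and (2); the values of N and Tp below are the planned figures used later for the project in Figure 2.

```python
# Sketch: evaluating the PNR labor rate (1) and cumulative labor (2).
import numpy as np

N, Tp = 1200.0, 46.0  # total cost (person-weeks) and peak time (weeks), the planned values

def pnr_rate(t):
    """Equation (1): the PNR (Rayleigh) labor rate."""
    return N * t / Tp**2 * np.exp(-t**2 / (2 * Tp**2))

def pnr_cumulative(t):
    """Equation (2): cumulative labor, the integral of (1)."""
    return N * (1.0 - np.exp(-t**2 / (2 * Tp**2)))

print(pnr_cumulative(Tp) / N)        # ~0.39: about 39% of the cost is spent by the peak
print(pnr_cumulative(2.5 * Tp) / N)  # ~0.96: the project is essentially complete near 2.5*Tp
```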

We present data for a real-world project in Figure 2, one that is known to be well represented by the PNR model (Warburton 1983, 2011). The project's planned labor rate (smooth dashed curve) had a peak at Tp = 46 weeks, so that the end of the project was estimated as 2.5 Tp = 115 weeks.

Figure 2: PNR labor rate curves. The plan for the project (dashed line) has a peak at Tp = 46 weeks. The actual cost (jagged line) was fit to a PNR curve with peak at Te = 56 weeks (solid smooth curve).

The planned total cost was N = 1,200 person weeks. Once the project began, the actual costs (jagged line) quickly deviated significantly from the plan and a new PNR curve was fit to the actual, emerging data. The new (actual) peak was estimated at Ta = 56 weeks, and the new estimated (actual) cost was 1,600 person weeks.

That is, there was a 22% delay in the schedule and a cost overrun of 33%. Using the PNR model, these final values can be accurately estimated after only 15–20 weeks, which is about 15% of the way through the project. The key point is that the PNR model tells us that the two important observables that characterize a project are the total cost and the time of the peak in the labor rate curve.
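The mechanics of that early estimate can be sketched as follows (synthetic data standing in for the project's actual weekly records): the PNR curve of equation (1) is fit by least squares to the early labor rates, yielding updated values of N and Tp and hence a revised finish near 2.5 Tp.

```python
# Sketch: re-estimating N and Tp from early labor-rate data (synthetic stand-in for actuals).
import numpy as np
from scipy.optimize import curve_fit

def pnr_rate(t, N, Tp):
    return N * t / Tp**2 * np.exp(-t**2 / (2 * Tp**2))

weeks = np.arange(1, 21)                 # the first 20 weeks of the project
actuals = pnr_rate(weeks, 1600.0, 56.0)  # noise-free stand-in for the measured weekly rates

# Least-squares fit, starting from the original plan (N = 1200, Tp = 46).
(N_hat, Tp_hat), _ = curve_fit(pnr_rate, weeks, actuals, p0=(1200.0, 46.0))
print(f"estimated cost ~ {N_hat:.0f} person-weeks, peak ~ {Tp_hat:.1f} weeks, "
      f"finish ~ {2.5 * Tp_hat:.0f} weeks")
```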

While the PNR labor profile is a successful, practical tool for managing projects, it is not clear how the labor profile is related to the underlying network structure. The Parr model addresses this criticism.

4 From Architecture to Labor Rate

The network diagram fulfills the system definition since its predecessor-successor constraints determine the interactions of the activities (e.g., when they may be started and when they may be performed in parallel). We define activities as nodes, of which there are two types:

  1. A leaf node, which has no descendants; and,
  2. A non-leaf node, which has descendants, and so the completion of the activity opens more nodes to work on.

The project starts out with a fixed number of activities to be completed, and it ends when all the activities are completed. (We assume no scope creep.) The labor rate falls at the end of the project because the team runs out of activities to work on. Progress on the project thus depends on both the number of activities and the relations between them. In conventional project management language, the network is usually represented by an Activity on Node network (or an equivalent Activity on Arrow network), with each node corresponding to an activity.

4.1 Simple Examples

Figure 3 shows a portion of a network diagram in which the dependencies between the nodes are characterized by the special case of a binary tree and where the probability of a leaf node is 50%. Therefore, on average, half of the nodes are leaf nodes, and non-leaf nodes have exactly two descendants.

Figure 3: The descendants of nodes for the case where the branching ratio is k = 2, and the probability of a leaf node is 50%.

We see that before the nodes were completed, there were four nodes, and afterwards there were also four nodes. Therefore, for this particular network structure, the average number of nodes, or activities, is constant over time. If the nodes consume roughly the same resources, the average labor rate on the project is a constant.

This simple example illustrates that it is possible to derive a relation between the project's network structure and its macroscopic observables. The network is characterized by the branching ratio and the leaf probability. The system's macroscopic observable is the resulting labor rate. This is encouraging because it shows that we can derive the relation between the project's observables from its network structure.

Figure 4 illustrates the case where there are three descendant nodes (k = 3) and a lower probability (25%) of a leaf node occurring. We observe for future reference that when the branching ratio is k, and a non-leaf node is completed, the change in the number of open activities is k-1. That is, one non-leaf activity is completed and k descendants are opened up for work.

Figure 4: The descendants for the case where the probability of a leaf is 25%, and the branching ratio, k=3.
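A small simulation (our illustration, not from the paper) makes the point of Figures 3 and 4 concrete: each completed activity is a leaf with probability p (removing one visible node) or a non-leaf (removing one node and opening k descendants), so the expected change per completion is (k − 1)(1 − p) − p. For k = 2 and p = 0.5 this is zero and the visible count stays roughly constant; for k = 3 and p = 0.25 it is 1.25 and the count grows.

```python
# Sketch: evolution of the number of visible activities for branching ratio k, leaf probability p.
import random

def simulate_visible(k, p, completions, start=4, seed=1):
    random.seed(seed)
    visible = start
    for _ in range(completions):
        if visible == 0:
            break                      # no activities left to work on
        if random.random() < p:
            visible -= 1               # leaf node completed: nothing new opens
        else:
            visible += k - 1           # non-leaf completed: k descendants open, 1 node closes
    return visible

print(simulate_visible(k=2, p=0.5, completions=50))   # hovers near the starting value
print(simulate_visible(k=3, p=0.25, completions=50))  # grows, roughly 1.25 per completion
```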

5 Derivation of the Labor Rate

We now consider the general case by making specific the concepts described by Turner (1999): Projects are characterized by decomposition and dependence between activities. At time t, we denote the number of completed activities as c(t). Resources can only be usefully applied to nodes that are not yet completed and that are available to be worked on (i.e., their predecessors are complete). These are referred to as the visible activities, V(t).

The assumption of finish-to-start constraints is the simplest case. Network diagrams may contain other types of constraints (e.g., finish-to-finish) and this is discussed in section 6, Conclusions.

Next we consider how the values of c(t) and V (t) change when a single activity is completed. We define an interval of time, [t0, tn], that encloses the completion of just one activity. Then, in that interval, the number of completed activities, c(t), simply increases by one:

$$c(t_n) = c(t_0) + 1 \tag{3}$$

The corresponding change in the number of visible activities, V (t), depends on whether the activity that was just completed was a leaf node or not. If the activity was a leaf node, there are no descendant activities to work on. On the other hand, if the completed activity was not a leaf node, the activity had descendants and completing it made new activities available for work, i.e., visible. If the branching ratio is k and the probability of a leaf node is p(t), after the completion of a node, the number of new, visible activities is:

$$V(t_n) = V(t_0) + (k-1)\,[1 - p(t)] - p(t) \tag{4}$$

After the completion of a leaf node, the number of visible activities declines by 1, with probability p(t). After the completion of a non-leaf node, the number of visible activities increases by k-1 with probability, 1-p(t).

Now we need an assumption about the relative probabilities of these two alternative paths. Typically, activities in the early stages of the project open up new areas for subsequent work, and so the earlier a node appears in the project, the more likely it is to have descendants. That is, the probability of a leaf node early on is smaller. Also, towards the end of the project, there are fewer and fewer activities to be worked on and the team is completing more leaf nodes. Both of these ideas suggest that the later in the project the activity is completed, the more likely it is to be a leaf node.

We can capture these ideas with the assumption that the probability that the most recently completed activity is a leaf node is proportional to the number of activities completed. Therefore, denoting the total number of activities in the project by N, the probability of a leaf node is:

$$p(t) = \frac{c(t)}{\beta} \tag{5}$$

where β is a constant that will turn out to be determined by the total cost of the project—see Appendix. Substituting this in (4) gives the change in the number of visible activities as:

$$V(t_n) = V(t_0) + (k-1) - \frac{k\,c(t)}{\beta} \tag{6}$$

Management has the option of increasing or decreasing progress by supplying or withdrawing resources. But these resources must be applied in an efficient manner because simply adding resources is not useful (Brooks, 1975). Rapid progress by the application of significant resources is possible only when there are activities to work on (i.e., they are visible). In practice, a resource (or possibly a team of resources) can only be assigned to visible (uncompleted) activities.

Applying less labor means that some visible activities that are available to be worked on will not be assigned labor. Applying more labor is inefficient as only the specific, visible activities can actually be worked on.

Therefore, we are expressing, as a precise analytic relationship, the idea that one increases efficiency by matching the labor to the number of visible activities. Project managers understand this informally, but here we are demonstrating it formally. Efficiency is maximized when the labor rate matches the visible activity rate.

Formally, the rate at which labor can be usefully applied to the project is proportional to the visible activities, V(t). To obtain useful analytic results, the discrete model is converted to its continuous-time analog and the method for solving the resulting equations is given in the Appendix. The result is the following labor rate profile:

$$p_k(t) = \frac{N\,\alpha(k-1)\,e^{-\alpha(k-1)(t-T)}}{\left[1 + e^{-\alpha(k-1)(t-T)}\right]^{2}} \tag{7}$$

where T is the time of the peak in the labor rate curve, α is a parameter that determines the shape of the curve, and the subscript, k, denotes the branching ratio, i.e., number of descendants of a non-leaf node. Examples of this labor rate profile are shown in Figure 5. As a = α(k -1) increases, the curve's peak narrows. All curves have the same peak time in the labor rate, the same total area, and, therefore, the same total cost.

The cumulative labor, which is an S-curve, is the integral of (7):

$$C_k(t) = \frac{N}{1 + e^{-\alpha(k-1)(t-T)}} \tag{8}$$

Figure 5: Labor rate profiles described by (7). As a = α(k-1) increases, the curve narrows but all curves have the same total area, and, therefore, the same total cost.
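The behavior shown in Figure 5 can be reproduced directly from equations (7) and (8); the short sketch below (with illustrative values of N and T) confirms that curves with different a = α(k − 1) share the same peak time, reach half the total cost at the peak, and all approach the same total cost N.

```python
# Sketch: equations (7) and (8) for several values of a = alpha*(k-1).
import numpy as np

def labor_rate(t, N, a, T):
    """Equation (7): the labor rate profile."""
    x = np.exp(-a * (t - T))
    return N * a * x / (1.0 + x) ** 2

def cumulative(t, N, a, T):
    """Equation (8): the cumulative labor S-curve."""
    return N / (1.0 + np.exp(-a * (t - T)))

N, T = 1600.0, 56.0                      # illustrative total cost and peak time
t = np.linspace(0.0, 300.0, 6001)
for a in (0.03, 0.05, 0.10):
    rate = labor_rate(t, N, a, T)
    print(f"a={a:.2f}: peak at t ~ {t[np.argmax(rate)]:.0f} weeks, "
          f"C(T) = {cumulative(T, N, a, T):.0f}, C(300) = {cumulative(300.0, N, a, T):.0f}")
# Every curve peaks at t = T, has spent N/2 = 800 by the peak, and approaches N = 1600.
```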

Since (8) is an S-curve, it is a much better model for a project than the linear model that is the foundation of much of A Guide to the Project Management Body of Knowledge (PMBOK® Guide) – Fifth Edition. Equation (8) is similar to that of Cioffi (2005), a relation that should be explored in future research.

We now determine the parameters: k is the average branching ratio and so is easily determined by analyzing the network structure. One method of determining α is to fit the curve in (7) to actual (or projected) project data. This was done for the project data in Figure 2 and a least squares fit gave the value α(k – 1) = 0.048. The resulting labor rate curve is shown as a dotted line in Figure 6. The fit using (7) is somewhat better than that of the PNR model because of the additional parameter, α.
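The fitting step just described can be sketched as follows (synthetic data standing in for the Figure 2 actuals; the generating values below are only illustrative): a least-squares fit of equation (7) recovers N, a = α(k − 1), and T.

```python
# Sketch: estimating a = alpha*(k-1), N, and T by fitting equation (7) to labor-rate data.
import numpy as np
from scipy.optimize import curve_fit

def labor_rate(t, N, a, T):
    x = np.exp(-a * (t - T))
    return N * a * x / (1.0 + x) ** 2

weeks = np.arange(1, 61)
observed = labor_rate(weeks, 1600.0, 0.048, 56.0)   # synthetic stand-in for weekly actuals

(N_hat, a_hat, T_hat), _ = curve_fit(labor_rate, weeks, observed, p0=(1200.0, 0.05, 46.0))
print(f"N ~ {N_hat:.0f}, alpha*(k-1) ~ {a_hat:.3f}, peak T ~ {T_hat:.0f} weeks")
```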

The important result is that we characterized the network structure in terms of finish-to-start constraints from which we derived the labor rate cost profile for the project. Further, Figure 6 shows that the theoretically derived labor rate profile is a good match to a real-world project, i.e., we constructed a theory and validated it against real world data.

Figure 6: Labor rate profiles: PNR plan (dashed), PNR actual (solid jagged), PNR predicted (solid smooth), and equation (7) (dotted), which has a slightly better fit than PNR because of the extra parameter, α.

6 Conclusions and Theoretical Implications

We focused our definition of a project on the inherent interrelations between activities and formally treated a project as a system. We then made some reasonable assumptions about how the project activities were related and what emerged was the labor rate profile. Thus, we related the project's network structure to its observables, such as the cost and schedule. Interestingly, the cumulative labor profile that emerged was S-shaped and so represents a significant improvement in the theoretical modeling of projects, which to date has been almost entirely based on linear theories.

At any time, only a subset of activities can be worked on, the so-called visible activities. Therefore, management should assign resources only to visible activities because assigning resources to activities not ready to be worked on is clearly wasteful. This makes precise the requirement, proposed by Koskela & Howell (2002), that the "basic thrust is to eliminate waste" with the goal that unnecessary work is not done.

The proposed theory is based on finish-to-start constraints. However, a network structure can also be built using other constraints (e.g., finish-to-finish, start-to-finish, etc.) It is a topic for further research to determine how to incorporate other types of constraints into the model.

A theory is nice to have, of course, but the interesting question is, "Is it practical?" We believe that Figure 6 begins to provide a positive answer to that question. The theory presented here is no more complex than the PNR curve, which is widely used for estimation and tracking. It also provides insights into many areas of project management. Table 1 summarizes Koskela and Howell's (2002) proposed goals and their possible satisfaction by the theory proposed here.

Table 1: The goals of a project management theory.

We compared our proposed theory to a well-known, real-world project and the results were encouraging. Specific project data were measured early on and the theory provided accurate estimates for the total cost and final schedule. Therefore, we validated the predictions of our theoretical model against a real-world project and the agreement between theory and practice provided a preliminary empirical validation of the theory and added weight to the idea that it is possible to create a theory of project management that has immediate practical benefits.

We have validated the theory against one project, which, while interesting and encouraging, leaves open the question of the theory's general applicability. More research is required to answer this question.

Koskela & Howell (2002) also suggest that a theory should be prescriptive, and we believe that our proposed theory satisfies that condition: Given reasonable assumptions, we were able to predict the labor cost and schedule from the network structure. Our theory should also support rational control of projects during execution by determining the impact of proposed network changes on the system's observables.

According to Koskela & Howell (2002), the empirical evidence suggests it is impossible to maintain an up-to-date plan. Therefore, a future research goal is for the plan to be derived systematically from the network structure and, hopefully, automatically.

We suggest that during the execution of real world projects, it is the network structure that changes and that the cost and schedule impacts follow. That is, the plan follows from the network, whereas the PMBOK® Guide – Fifth Edition suggests that plans are input to execution. While plans are developed before execution, the key is to recognize that the network structure is an important entity. When the network changes, it is conceivable that tools can automatically derive an updated plan. For example, the theory predicts how a change to the branching ratio during execution will affect the cost and schedule.

The proposed theory might also improve project management processes. For example, an improved appreciation of the importance of the network structure suggests that more attention be paid to it during scope development. That may lead to better insights into cost and schedule estimation and more effective control during execution.

Several issues might be explored in future research efforts. First, the effect of uncertainty in costs and schedules, which results in noise in the data, will affect the accuracy of the model's predictions. When such noise is a factor, does the current model perform better than existing models? Second, one might explore whether the S-shaped curve is more effective than the currently used linear versions for project estimation and control. A related issue is to determine what types of network structures correspond to which types of projects and/or industries, so that the appropriate model can be applied to a project. Finally, it would be interesting to include the dynamic nature of project management, i.e. the impact of changes to activities, their network interdependencies, their execution times, and their costs.

In conclusion, we demonstrated that there is a fundamental relation between a project's network structure (the inter-related nature of project activities) and its labor rate profile, which, in turn, determines the cost and schedule. The proposed model also informs managers about how to calibrate projects in terms of observables, such as the total cost and the final schedule. We proposed a theory, measured observables, validated the theory by comparing its predictions to a real-world project, and used the theory to guide future improvements.

Mathematical Appendix: Solving the Equations

Our model is based on Parr (1980), although we use a different approach to the tree dependencies that, we believe, is more applicable to project management. We have also updated the terminology to modern project management language.

We showed that when a single activity is completed in the interval [t0, tn], the number of completed activities simply increases by one: c(tn) = c(t0) + 1. The corresponding change in the number of visible activities, V(t), depends on whether the activity just completed was a leaf node or not: After completion of a leaf node, the number of visible activities declines by 1, with probability p; and, after completion of a non-leaf node, the number of visible activities increases by k−1 with probability 1−p, which led to (6). The continuous form of (6) is:

$$\frac{dV_k(t)}{dt} = \left[(k-1) - \frac{k\,c(t)}{\beta}\right]\frac{dc(t)}{dt} \tag{9}$$

where β is a constant. If there is no scope creep, the project consists of a fixed number of activities to be completed. As a result, eventually the entire project ends, with no further activities left to complete.

We assume that it is efficient only to apply resources when there are activities to work on, i.e., they are visible. This suggests that the rate at which labor can be usefully applied to the project is proportional to Vk(t). When resources are applied in this optimal way, activities will be completed at a rate proportional to Vk(t), which implies,

$$\frac{dV_k(t)}{dt} = \alpha\left[(k-1) - \frac{k\,c(t)}{\beta}\right]V_k(t) \tag{10}$$

where α is the constant of proportionality. Since the rate at which nodes are completed is dc(t)/dt, we have that in a well-managed project, where sufficient resources are available to work on the visible activities,

$$\frac{dc(t)}{dt} = \alpha\,V_k(t) \quad\Longrightarrow\quad V_k(t) = \frac{1}{\alpha}\,\frac{dc(t)}{dt} \tag{11}$$

Substituting this in (10) gives:

$$\frac{d^{2}c(t)}{dt^{2}} = \alpha\left[(k-1) - \frac{k\,c(t)}{\beta}\right]\frac{dc(t)}{dt} \tag{12}$$

which, upon integration, immediately gives:

$$\frac{dc(t)}{dt} = \alpha\left[(k-1)\,c(t) - \frac{k}{2\beta}\,c(t)^{2}\right] \tag{13}$$

We rearrange this as follows:

$$\frac{dc(t)}{(k-1)\,c(t) - \dfrac{k}{2\beta}\,c(t)^{2}} = \alpha\,dt \tag{14}$$

The denominator of the left hand side is a quadratic expression in c(t), which can be converted to partial fractions and integrated, giving:

$$c(t) = \frac{2\beta(k-1)/k}{1 + A\,e^{-\alpha(k-1)t}} \tag{15}$$

where A is a constant of integration. Using the argument from the PNR model, we set the constant A = exp(α(k − 1)T), where T is the peak in the labor rate curve. That T is indeed the peak in the labor rate curve is easily proved by differentiating (15) twice and setting the result to zero. Thus,

$$c(t) = \frac{2\beta(k-1)/k}{1 + e^{-\alpha(k-1)(t-T)}} \tag{16}$$

The constant, β, is now determined by demanding that the total labor converge to N at the end of the project, which requires 2β(k − 1)/k = N. Therefore, the cumulative labor, which is an S-shaped curve, is:

$$C_k(t) = \frac{N}{1 + e^{-\alpha(k-1)(t-T)}} \tag{17}$$

Finally, we differentiate (17) to get the labor rate:

$$p_k(t) = \frac{dC_k(t)}{dt} = \frac{N\,\alpha(k-1)\,e^{-\alpha(k-1)(t-T)}}{\left[1 + e^{-\alpha(k-1)(t-T)}\right]^{2}} \tag{18}$$

These are the equations given in section 5.
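As a consistency check (a SymPy sketch, not part of the original derivation), one can verify symbolically that the logistic form of (17) satisfies the differential equation (12) once β is fixed by 2β(k − 1)/k = N, and that its derivative reproduces the labor rate (18).

```python
# Sketch: symbolic check of the appendix equations.
import sympy as sp

t, T, N, alpha, k = sp.symbols('t T N alpha k', positive=True)
beta = k * N / (2 * (k - 1))                       # from 2*beta*(k-1)/k = N
C = N / (1 + sp.exp(-alpha * (k - 1) * (t - T)))   # cumulative labor, equation (17)

# Equation (12): d^2c/dt^2 = alpha*[(k-1) - k*c/beta]*dc/dt ; the residual should vanish.
residual = sp.diff(C, t, 2) - alpha * ((k - 1) - k * C / beta) * sp.diff(C, t)
print(sp.simplify(residual))                       # 0

# Differentiating (17) reproduces the labor rate of (18) (and hence (7)).
x = sp.exp(-alpha * (k - 1) * (t - T))
labor_18 = N * alpha * (k - 1) * x / (1 + x) ** 2
print(sp.simplify(sp.diff(C, t) - labor_18))       # 0
```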

Boehm, B. (1981). Software engineering economics. Englewood Cliffs, NJ: Prentice Hall.

Book, S. (2003). Issues associated with basing decisions on schedule variance in an earned value management system. National Estimator, Fall, 11–15.

Book, S. (2006). Earned schedule and its possible unreliability as an indicator. The Measurable News, Spring, 24–30.

Brooks, F. (1975). The mythical man-month: Essays on software engineering. Boston, MA: Addison-Wesley Publishing Co.

Christensen, D. S. (1993). The estimate at completion problem: A review of three studies. Project Management Journal 24(1), 37–42.

Cioffi, D. F. (2005). A tool for managing projects: An analytic parameterization of the S-curve. International Journal of Project Management, 23, 215–222.

Cioffi, D. F. (2006a). Completing projects according to plans: An earned-value improvement index. The Journal of the Operational Research Society, 57, 290–295.

Cioffi, D. F. (2006b). Designing project management: A scientific notation and an improved formalism for earned value calculations. International Journal of Project Management, 24, 136–144.

Elshaer, R. (2013). Impact of sensitivity information on the prediction of project's duration using Earned Schedule method. International Journal of Project Management, 31(4), 579–588.

Hofstede, G. (1978). The poverty of management control philosophy. Academy of Management Review, 3(3), 450–461.

Hopp, W. J. & Spearman, M. L. (2011). Factory Physics (3rd ed.). Long Grove, IL: Waveland Pr. Inc.

Johnston, R. B. & Brennan, M. (1996). Planning or organizing: The implications of theories of activity for management of operations. Omega, International Journal of Management Science, 24(4), 367–384.

Kerzner, H. (2006). Project management: A systems approach to planning, scheduling, and controlling (9th ed.). New York, NY: John Wiley & Sons.

Koskela, L. & Howell, G. (2002, June). The Underlying Theory of Project Management is Obsolete. Proceedings of the PMI Research Conference, Seattle, Washington, USA, pp. 292–302.

Lipke, W. (2003). Schedule is different. The Measurable News, Summer, 31–34.

Marshall, R. A. (2006). The contribution of earned value management to project success on contracted efforts: A quantitative statistics approach within the population of experienced practitioners, PhD thesis, Lille Graduate School of Management.

Moore, C. R. (1999). Performance measures for knowledge management. In J. Liebowitz (Ed.), Knowledge Management Handbook (pp. 6-1–6-12). Boca Raton, FL: CRC Press.

McGarry, F., Pajerski, R., Page, G., Waligora, S., Basili, V., & Zelkowitz, M. (1994). Software process improvement in the NASA software engineering laboratory, Technical Report, CMU/SEI-94-TR-22, ESC-TR-94–022.

Parr, F. N. (1980). An alternative to the Rayleigh curve model for software development effort. IEEE Transactions on Software Engineering, 6(3), 291–296.

Project Management Institute. (2013). A Guide to the Project Management Body of Knowledge (PMBOK® Guide) – Fifth edition. Newtown Square, PA: Author.

Putnam, L. H. (1978). A general empirical solution to the macro software sizing and estimating problem. IEEE Transactions on Software Engineering, 4(4), 345–361.

Turner, R. (1993). The handbook of project-based management. London, UK: McGraw-Hill.

Turner, R. J. (1999). Project management: A profession based on knowledge or faith? (editorial). International Journal of Project Management 17(6), 329–330.

Vanhoucke, M. & Vandevoorde, S. (2006). A comparison of different project duration forecasting methods using earned value metrics. International Journal of Project Management, 24, 289–302.

Vanhoucke, M. & Vandevoorde, S. (2007). A simulation and evaluation of earned value metrics to forecast the project duration. Journal of the Operational Research Society, 58, 1361–1374.

Vanhoucke, M. (2012). Measuring the efficiency of project control using fictitious and empirical project data. International Journal of Project Management, 30(2), 252–263.

Warburton, R. D. H. (1983). Managing and predicting the costs of real-time software. IEEE Transactions on Software Engineering, 9(5), 562–569.

Warburton, R. D. H. (2011). A time-dependent earned value model for software projects. International Journal of Project Management, 29, 1082–1090.

Roger D. H. Warburton is an Associate Professor in the Department of Administrative Sciences at Boston University's Metropolitan College. He earned a doctorate in astrophysics from the University of Pennsylvania. He teaches project management and supply chain management, both online and in the classroom. Dr. Warburton lectures internationally about supply chains and outsourcing, demonstrating that US manufacturing can be competitive and relentlessly challenging the obsession with manufacturing everything offshore. He also conducts research in earned value management, attempting to establish the underlying theory and also provide examples and guidance to practicing professionals.

©2014 Project Management Institute Research and Education Conference
