Analysis of resource buffer management in critical chain scheduling

Oya Tukel and Walter Rom

Eli Goldratt introduced the Critical Chain Project Management (CCPM) concept in his book Critical Chain (1997). CCPM is considered an application of the Theory of Constraints (TOC) to project scheduling, and many successful implementations have been reported in the literature (see, for example, Mabin & Balderstone, 1999). The Critical Chain is “the set of tasks which determines overall project duration, taking into account both precedence and resource dependencies” (Newbold, 1998). These tasks form the longest path in the network, composed of sections that are technologically sequenced and sections that are resource dependent.

As mentioned by Patrick (1999), with “critical chain scheduling,” the focus shifts from assuring the achievement of task estimates and intermediate milestones to assuring the delivery date of a project. This requires a mechanism known as buffer management, which protects critical chain tasks from uncertainty by concentrating safety where it is needed rather than spreading it around. The concentrated safety that protects the project promise date from variations in the critical chain is called the project buffer. The concentrated safety that is added to the places where chains of non-critical tasks feed into a critical chain task is called the feeding buffer.

The last component of buffer management is called the resource buffer, which is defined as the resource alert systems or effective prioritization of resource attention that will assure that the resources are ready when it is time to work on a critical chain task. Contrary to the project and feeding buffers, resource buffers are not safety times added to the project, and they do not change the elapsed time of the project (Patrick, 1999; Goldratt, 1997).

There has been recent interest in the CCPM literature in analyzing the impact of buffers on critical chain schedules. Specifically, several approaches have been suggested, such as the Cut and Paste Method (C&PM) and the Root Square Error Method (RSEM) for determining the feeding and project buffer size. Herroelen and Leus (2001) report that the C&PM seriously overestimates the buffer sizes, while RSEM performs better, especially for larger projects. In a recent study by Tukel, Rom, & Eksioglu (2003), an in-depth analysis of the relationships between buffer sizing techniques and project performance is provided, and two new methods for determining feeding buffers are presented. One method uses resource tightness, while the other uses network density in determining the buffer size.

Although a literature on buffer management has been developing recently, there has been no work, to our knowledge, investigating ways of integrating resource buffers into CCPM. With this study, we attempt to close this gap by offering four alternative methods for integrating resource buffers into critical chain scheduling. The methods monitor the project at certain time intervals, using the Earned Value philosophy (Fleming & Koppelman, 1996), and adjust resource effort based on individual task progress or the progress of the whole project. The effectiveness of each method is tested using a simulation study. In the next sections, we describe these methods and the simulation study. The last section is devoted to conclusions and further research directions.

Resource Buffers

While feeding buffers make sure that the work is available, resource buffers make sure that the resources are available to do the work (Newbold, 1998). There are several suggestions regarding how to deal with resource buffers, although none of them are systematic. For example, Goldratt (1997) suggests that resource buffers can be reminders that start a week before the expected start time of a critical task and are repeated several times until the task actually starts. Similarly, Newbold (1998) suggests that the simplest solution is to treat the resource buffer as just a wake-up call, which alerts resources to be ready to work on the critical chain tasks when needed. In both suggestions, the assumption is that people know that, when the time comes, they must drop everything and work on the critical chain tasks. Patrick (1999) claims that resource buffers—that is, “work-coming alerts”—might not even be necessary, because both feeding and project buffers prevent resource delays from impacting the project promise date.

There are, however, several implementation problems that could arise when dealing with resources as suggested. Typically, many parallel projects are in progress when the warning systems are activated. The resource (typically the workforce), which is shared among projects, might have several critical tasks coming up shortly and might be delayed in completing the critical task currently being worked on. Questions then arise as to which critical task should receive priority and whether the workforce should be allowed to stop work on the current critical task in order to start a different one. Another issue is that the Project Manager (PM) can plan the project buffer and the feeding buffer ahead of time, but not the resource buffer. He or she will not know exactly how effective the warnings will be until the implementation of the project proceeds. This might result in inferior plans, with excessive feeding and project buffers that, in turn, will result in a longer-than-necessary promise date. This necessitates a dynamic process that adjusts resource usage and availability (capacity) in order to minimize the effects of delays on project completion time. The methods we develop in the next section offer four different ways to accomplish this. The methods are based on two fundamental monitoring approaches: monitor the progress of individual tasks or a subset of tasks, or monitor the progress of the entire project at certain time milestones.

Methods for Managing Resource Buffers

All the proposed methods start with developing a critical chain schedule, given by Steps 1 through 6 below.

Step 1—Determine the fifty-percent-duration estimate for each task.
Goldratt’s assertion is that when the initial safe estimates for task durations are made by the project team members, they reflect a comfortable cushion, approximately the same as the expected task duration. Accordingly, finding the critical chain of a project begins with removing these cushions from the task durations, leaving the average durations to be used as expected task durations in the planning stage.

Step 2—Push all the tasks as late as possible, subject to precedence relations (i.e., determine the late-finish network).

Step 3—Eliminate resource contentions by re-sequencing the tasks.
There is no specific procedure offered by Goldratt (1997) for resolving resource contentions, although Leach (2000) recommends resolving contentions starting with the conflicting tasks that are closest to project completion time, or the ones that show the most conflict. Resource contentions can also be removed by applying any of the resource-constrained project scheduling heuristics, as well as exact solution procedures such as branch and bound (Herroelen & Leus, 2001).

Step 4—Identify the critical chain as the longest chain of dependent events for the feasible schedule that was identified in Step 3.
If an exact solution procedure is used in Step 3, the length of the critical chain is equal to the shortest project makespan. However, if a heuristic procedure is used, the shortest makespan is not guaranteed. Regardless of the method used, there could be ties for the longest chain, in which case an arbitrary choice is made between them (Herroelen & Leus, 2001).
Once the critical chain schedule is obtained, the buffers can be added to protect the critical chain and thus the project completion time.

Step 5—Add the feeding buffers wherever a non-critical task feeds the critical chain, and offset the tasks on the feeding chain by the size of the buffer.

Step 6—Add the project buffer to the expected project completion time to determine the promise date for the customer.
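To make Step 2 concrete, a minimal computational sketch is given below. It assumes task data in plain dictionaries and that the 50% duration estimates of Step 1 are already in hand; the resource leveling of Step 3 and the buffer insertion of Steps 5 and 6 are not shown, and the function and variable names are illustrative only.

def late_start_schedule(durations, successors, horizon):
    """Step 2 sketch: push every task as late as possible, subject only to
    precedence relations (a backward pass from a chosen project horizon).

    durations  -- dict: task id -> 50% duration estimate (from Step 1)
    successors -- dict: task id -> list of immediate successor ids
    horizon    -- late finish imposed on tasks with no successors
    """
    order = _topological_order(durations, successors)
    late_finish, late_start = {}, {}
    # Walk the network backwards so every successor is scheduled before
    # its predecessors.
    for i in reversed(order):
        succ = successors.get(i, [])
        late_finish[i] = horizon if not succ else min(late_start[j] for j in succ)
        late_start[i] = late_finish[i] - durations[i]
    return late_start, late_finish

def _topological_order(durations, successors):
    """Kahn's algorithm on the precedence network."""
    indegree = {i: 0 for i in durations}
    for succ in successors.values():
        for j in succ:
            indegree[j] += 1
    frontier = [i for i, deg in indegree.items() if deg == 0]
    order = []
    while frontier:
        i = frontier.pop()
        order.append(i)
        for j in successors.get(i, []):
            indegree[j] -= 1
            if indegree[j] == 0:
                frontier.append(j)
    return order

Steps 3 and 4 would then re-sequence this late-start schedule to remove resource contentions and read off the critical chain, after which the buffers of Steps 5 and 6 are inserted.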

This critical chain schedule determines the project timeline through the inclusion of feeding and project buffers. The timeline developed at the planning stage is used as a baseline schedule at the implementation stage. At that stage, a choice of monitoring criteria is used to determine whether the use of resource buffers should be initiated. Unlike project and feeding buffers, resource buffers are added in the form of additional resource availability, or increased effort, which results in shorter task durations. The challenge facing the project manager at this stage is to know whether to initiate this additional effort. In the following, four methods that the project manager could use are considered. The notation needed to present the methods is given first.

Notation:  
I  the set of project tasks, I = {0, 1, …, m+1}
EAC  expected total project effort, EAC = Σi∈I Σl di * ril, where
di  the expected duration of activity i
ril  the usage of resource type l by activity i
t  the time intervals, t = 1, …
kv  the monitoring points, v = 1, 2, …, V
dia  the actual duration of task i
Si  the starting time of task i
CC  the set of critical chain tasks
AS(t)  the set of tasks that are active at time t
Remain_di(t)  the remaining duration of task i at time t
AS_EXP(t)  the set of tasks that are expected to be active at time t
C(t)  the set of tasks completed before time t
C_EXP(t)  the set of tasks expected to be complete before time t
AEi(t)  the actual effort consumed by task i until time t
EEi(t)  the expected effort consumed by task i until time t
SUCC(i)  the set of succeeding tasks of task i
EE(t)  the expected project effort until time t, EE(t) = Σi∈C_EXP(t) EEi(t) + Σj∈AS_EXP(t) EEj(t)
AE(t)  the actual project effort until time t, AE(t) = Σi∈C(t) AEi(t) + Σj∈AS(t) AEj(t)
SVi(t)  the schedule variance of task i at time t, SVi(t) = AEi(t) - EEi(t)
SV(t)  the schedule variance of the project at time t, SV(t) = AE(t) - EE(t)
SC(t)  semi-complete set; the set of tasks that are active and half complete at time t
SCC(t)  semi-complete critical set; the set of critical chain tasks that are active and half complete at time t
ρ  proportional reduction in task duration

Method 1

This method monitors the execution of the project at a predetermined set of milestones based on the total expected and actual effort spent. The total expected effort spent on a task is calculated as the sum of the expected duration of the task multiplied by the resource usage of the task, summed over all resource types. Then the total expected effort, EAC, spent implementing a project is the sum of the total expected effort spent on all tasks. The milestones are then reached when the actual effort spent is equal to (or slightly exceeds) the corresponding fraction, kv , of the total expected project effort. The fractions are chosen by the PM and reflect how frequently the monitoring is desired. For example, with V=3, a natural choice might be k1=0.25, k2=0.50, and k3=0.75. At each of these milestones, the following actions are taken: The tasks which are active but behind schedule are speeded up by reducing their remaining durations by proportion ρ, through the use of resource buffers. The succeeding tasks of any late tasks are similarly speeded up by the same proportion ρ.

Pseudo-code:

Calculate EAC
t = 1 (time = start)
v = 1 (first monitoring interval)
While m+1 is not completed (i.e., the project is not completed)
    Calculate AE(t) (the actual project effort performed until time t)
    If AE(t) ≥ kv * EAC then
        Determine the active set: AS(t) = {i ∈ I | Si < t and i is not complete}
        For all i ∈ AS(t)
            Calculate SVi(t)
            If SVi(t) < 0 then (the task is behind schedule)
                Activate resource buffers for task i:
                    Remain_di(t) ← (1 - ρ) Remain_di(t)
                For all j ∈ SUCC(i)
                    dj ← (1 - ρ) dj
            Else continue
        v = v + 1
    Else continue
    t = t + 1
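A minimal Python rendering of this monitoring loop is sketched below. It assumes a hypothetical project object that exposes the quantities defined in the notation (expected_total_effort for EAC, actual_effort for AE(t), active_set for AS(t), schedule_variance for SVi(t), and so on); these method names are illustrative and are not part of the original formulation.

def run_method_1(project, milestones, rho):
    """Method 1 sketch: at each effort milestone k_1..k_V, crash the active
    tasks that are behind schedule (and their successors) by proportion rho."""
    eac = project.expected_total_effort()              # EAC
    t, v = 1, 0                                        # current period; next milestone index
    while not project.is_complete():                   # m+1 not completed
        if v < len(milestones) and project.actual_effort(t) >= milestones[v] * eac:
            for i in project.active_set(t):            # AS(t)
                if project.schedule_variance(i, t) < 0:     # SV_i(t) < 0: behind schedule
                    project.crash_remaining(i, t, rho)      # Remain_d_i(t) *= (1 - rho)
                    for j in project.successors(i):         # also speed up successors
                        project.crash_duration(j, rho)      # d_j *= (1 - rho)
            v += 1
        project.advance(t)                             # simulate work during period t
        t += 1
    return project.completion_time()

With milestones = [0.25, 0.50, 0.75] and rho = 0.20, this corresponds to the parameter settings used later in the simulation study.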

Method 2

With this method, the monitoring is done based on individual task progress. During the implementation of the project, at the halfway point of each task a check is made to see if it is taking longer than planned. If the task is behind schedule, we use resource buffers to reduce its remaining duration by proportion ρ.

Pseudo-code:

t = 1
While m+1 is not completed
    Determine the semi-complete set: SC(t) = {i ∈ I | Si < t and i is half complete}
    For all i ∈ SC(t)
        If SVi(t) < 0 then
            Activate resource buffers for task i:
                Remain_di(t) ← (1 - ρ) Remain_di(t)
    t = t + 1

Method 3

The implementation of method 3 is the same as for method 2, except that the monitoring is done at the halfway point of each task on the critical chain.

Pseudo-code:

t = 1
While m+1 is not completed
    Determine the semi-complete critical set: SCC(t) = {i ∈ I | Si < t and i is half complete and i ∈ CC}
    For all i ∈ SCC(t)
        If SVi(t) < 0 then
            Activate resource buffers for task i:
                Remain_di(t) ← (1 - ρ) Remain_di(t)
    t = t + 1

Method 4

This method can be considered a hybrid of methods 1 and 2. At the halfway point of each task, we check whether the project as a whole is on schedule by comparing the total actual project effort performed so far with the total project effort expected to this point. If the project is behind schedule, that is, if SV(t) < 0, we speed up the remainder of this task by proportion ρ.

Pseudo-Code:

t = 1
While m+1 is not completed
    Determine the semi-complete set: SC(t) = {i ∈ I | Si < t and i is half complete}
    For all i ∈ SC(t)
        If SV(t) < 0 then
            Activate resource buffers for task i:
                Remain_di(t) ← (1 - ρ) Remain_di(t)
    t = t + 1
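Methods 2, 3, and 4 share the same halfway-point monitoring loop and differ only in which tasks are checked and in whether the task-level or the project-level schedule variance triggers the crashing. A single parameterized sketch, using the same hypothetical project interface as the Method 1 sketch above, could look as follows.

def run_halfway_methods(project, rho, critical_only=False, use_project_sv=False):
    """Sketch of Methods 2-4.

    Method 2: critical_only=False, use_project_sv=False  (check SV_i(t) for every
              task reaching its halfway point)
    Method 3: critical_only=True,  use_project_sv=False  (critical chain tasks only)
    Method 4: critical_only=False, use_project_sv=True   (check project-level SV(t))
    """
    t = 1
    while not project.is_complete():                       # m+1 not completed
        for i in project.semi_complete_set(t):             # SC(t)
            if critical_only and i not in project.critical_chain():
                continue                                   # restrict the check to SCC(t)
            behind = (project.project_schedule_variance(t) < 0      # SV(t) < 0
                      if use_project_sv
                      else project.schedule_variance(i, t) < 0)     # SV_i(t) < 0
            if behind:
                project.crash_remaining(i, t, rho)         # Remain_d_i(t) *= (1 - rho)
        project.advance(t)                                 # simulate work during period t
        t += 1
    return project.completion_time()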

In all the methods described above, either the schedule variance of individual tasks or the schedule variance of the entire project is tracked. The schedule variances are calculated as described in A Guide to the Project Management Body of Knowledge (PMBOK® Guide; PMI, 1996): the scheduled project or task effort is compared against the effort performed. In practice, there can be different reactions to the occurrence of variances; for example, the threshold for initiating action may be allowed to change over the duration of the project. In new product development, large variances are allowed during the earlier phases of project implementation, while during the later phases the threshold for action is smaller (Kerzner, 2000). In other instances, variances may be corrected over the project timeline through a baseline schedule change. The methods described in this study manage schedule variances by initiating resource buffering while the baseline schedule remains unchanged.

Simulation Study

Dataset

To test the performance of the resource buffering methods, we conducted a simulation study using the Patterson data set (Patterson, 1984). This data set consists of 110 projects, with the number of activities ranging from 7 to 51 and requiring a maximum of three different types of resources. There are 53 projects with 22 or fewer activities and 10 projects with 51 activities; the rest are roughly equally divided between 27 and 35 activities. The data set also includes the minimum-makespan, resource-feasible, early-start schedules.

The early-start schedules given in the data set were used to derive the late-start schedules. This was done by shifting activities as late as possible without violating resource feasibility. The activities that have the same early start and late start then form the critical chain, because they constitute the longest resource-feasible sequence of activities in the network. Next, the feeding paths are identified and the feeding buffers are added using the Adaptive Procedure with Resource Tightness (APRT) as the buffer sizing method (Tukel et al., 2003). This method determines the feeding buffer size based on the resource tightness of the project, with larger buffers chosen for projects with little excess capacity. The insertion of feeding buffers causes the scheduled start times for activities on the feeding paths to be offset from their late start by the buffer size. Once all the feeding buffers are inserted, the planned project completion time is finalized and the plan is ready to be implemented. Typically at this stage, the PM adds the project buffer to determine the promise date to the customer. The implementation of the plans is simulated using the parallel schedule generation scheme developed by Kolisch and Hartmann (2000). In addition to the four resource buffering methods, we also implemented the projects with no resource buffers as a benchmark.

Figure 1 summarizes the relevant characteristics of the Patterson data set. In the second column, we report the mean and the standard deviation of resource tightness, and in the third column we report the mean and standard deviation of network density for each group. The density of a network (the ratio of the total number of precedence relationships to the total number of tasks) and the resource tightness (the ratio of total resource usage to total resource availability) are calculated in the same way as reported in Tukel, Rom, and Eksioglu (2003).

The resource tightness for each group is around 0.74, indicating that there is moderate resource usage. In general, the networks in the Patterson data set are not dense, with average density around 0.18. For both measures, the averages for the various problem sizes are similar. As can be seen from the standard deviation values, however, there are considerable differences among the individual problems. The average number of feeding buffers that are added to each project, averaged over each group, is reported in the last column of Figure 1.
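For reference, the two characteristics reported in Figure 1 can be computed directly from the raw project data. The sketch below assumes precedence relations given as (predecessor, successor) pairs and per-period availabilities Rl held constant over the planned makespan; the exact denominator convention for resource tightness follows Tukel, Rom, and Eksioglu (2003), so the version here is an approximation.

def network_density(num_tasks, precedence_pairs):
    """Density = total number of precedence relationships / total number of tasks."""
    return len(precedence_pairs) / num_tasks

def resource_tightness(durations, usage, availability, planned_makespan):
    """Tightness = total resource usage / total resource available.

    durations        -- dict: task i -> expected duration d_i
    usage            -- dict: (task i, resource l) -> r_il
    availability     -- dict: resource l -> units available per period R_l
    planned_makespan -- planned schedule length (assumed availability horizon)
    """
    total_usage = sum(durations[i] * usage[(i, l)] for (i, l) in usage)
    total_available = sum(availability.values()) * planned_makespan
    return total_usage / total_available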

Figure 1. Problem characteristics

Generation of Random Durations

We assumed that the task durations come from a right-skewed distribution and used the lognormal distribution to generate actual durations. With this distribution, it is possible to vary the standard deviation independently of the mean. This allowed us to simulate projects with varying levels of uncertainty in task durations while leaving the mean durations unchanged. Herroelen and Leus (2001) and Tukel, Rom, and Eksioglu (2003) also use the lognormal distribution, and it is recommended for simulation studies by Law and Kelton (1991). Accordingly, we generate a normal random variable Y with mean μ = ln(di) - σ²/2 and standard deviation σ, where di is the expected duration of task i as given in the Patterson data set. The actual duration of task i is then determined as exp(Y). The choice of σ determines the variance of the actual duration of task i. The variance, VARi, is calculated as (Tukel et al., 2003):

VARi = di² * (exp(σ²) - 1)

We used three choices for σ: 0.3, 0.6, and 0.9. Figure 2 gives the standard deviations for the lognormal distribution for a variety of values of σ and mean task durations. Note that the lognormal distribution has the property that the standard deviation increases as the mean increases. Especially for the larger values of σ, the standard deviations of the task duration are very large compared to the mean.
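The duration generator described above takes only a few lines; the sketch below uses NumPy and is parameterized by the expected duration di from the data set and the chosen σ.

import numpy as np

def actual_duration(d_i, sigma, rng=None):
    """Draw an actual task duration from a lognormal distribution whose mean
    equals the expected duration d_i."""
    rng = rng or np.random.default_rng()
    mu = np.log(d_i) - sigma**2 / 2      # chosen so that E[exp(Y)] = d_i
    y = rng.normal(mu, sigma)            # Y ~ Normal(mu, sigma)
    return float(np.exp(y))              # actual duration = exp(Y)

# The implied variance matches the formula above:
# VAR_i = d_i**2 * (np.exp(sigma**2) - 1)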

Figure 2. Standard deviations of randomly generated durations for various values of sigma

Experimental Layout and Performance Indicators

One hundred replications of each of the 110 problems in the Patterson data set were generated, and means were computed for a variety of project characteristics and performance indicators. One hundred replications were deemed adequate because the results remained stable when a larger number of replications was tried. For each problem and for each value of sigma, one hundred replications were generated, and each of the four resource buffering methods was tested as well as the benchmark, for a total of 110,000 problem instances.

The following performance indicators are computed and reported in the tables:

Proportional difference between the actual and planned project completion times: The planned makespan is calculated after adding the feeding buffers to the feeding chains and offsetting the start and completion times of tasks accordingly. The actual makespan is calculated by generating durations from a lognormal distribution, including both feeding and resource buffers, and implementing the project using a parallel schedule generation scheme. Then, for each project, the proportional difference is calculated as:

Difference=(actual makespan – planned makespan)/planned makespan

Utilization rate: For each project the overall utilization rate of resources after implementation is calculated as:

utilization = (capacity used)/(total capacity available)

utilization = (Σi∈I Σl dia * ril) / (actual makespan * Σl Rl)

where Rl indicates the units of resource l available, and the rest of the notation is defined before.

Total reduction in task duration (C): For each project it is calculated as the sum of the reductions in the task durations due to the use of resource buffers:

C = Σi∈I (dia - dia*)

where dia* is the actual duration of activity i after resource buffering.

Amount of resource buffer consumed (XR): For each project, this is calculated as the total amount of additional resource capacity consumed in reducing the task durations.

Change in project completion time (ΔT): For each project it is calculated as the difference between the actual project completion time with and without the use of resource buffers.
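Most of these indicators translate directly into code. The sketch below covers the proportional difference, the total reduction C, the completion-time change ΔT, and the utilization rate; the per-period availability convention in the utilization denominator is an assumption, and XR is omitted because its exact accounting depends on how the additional resource capacity is charged.

def proportional_difference(actual_makespan, planned_makespan):
    """Difference = (actual makespan - planned makespan) / planned makespan."""
    return (actual_makespan - planned_makespan) / planned_makespan

def total_duration_reduction(actual_durations, crashed_durations):
    """C = sum over tasks of (d_ia - d_ia*), the reduction obtained from resource buffers."""
    return sum(actual_durations[i] - crashed_durations[i] for i in actual_durations)

def completion_time_change(makespan_without_buffers, makespan_with_buffers):
    """Delta T = reduction in actual completion time due to resource buffers."""
    return makespan_without_buffers - makespan_with_buffers

def utilization(actual_durations, usage, availability, actual_makespan):
    """Utilization = capacity used / total capacity available (assumed here to be
    R_l units of each resource l per period over the actual makespan)."""
    used = sum(actual_durations[i] * usage[(i, l)] for (i, l) in usage)
    available = sum(availability.values()) * actual_makespan
    return used / available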

Computational Results

The computational results are summarized in Figures 3-5 and Figures 8-11. Figure 3 summarizes the proportional difference between the actual and planned project completion times, averaged over each problem size category. The results of the four resource buffering methods (S1-S4) are reported, as well as the no-resource-buffer case (S5), for low (σ=.3), moderate (σ=.6), and high (σ=.9) levels of uncertainty in task durations. For all the methods tested, the proportional reduction in activity duration when resource buffering is used (ρ) is set at 0.20; and for method 1, the number of monitoring intervals V is set to 3, with k1=0.25, k2=0.50, and k3=0.75.

In general, the results indicate that the resource buffers assist in meeting planned completion times, regardless of the choice of resource buffering method. As the problem size increases, the average difference between the actual and planned completion times widens. Some of the average differences between the completion times are negative, indicating that the average actual project completion time is shorter than planned.

Among the resource buffering methods, method 2 performs the best in terms of meeting the planned schedule. The actual project completion time with this method is within 5% of planned completion time, regardless of level of uncertainty or problem size. The other methods also provide robust results when compared to having only feeding buffers to protect against uncertainty. Having only the feeding buffers may result in as much as a 20% difference between the actual and the planned project completion time.

The second row of each cell estimates the standard deviation of the proportions. Generally, the standard deviations increase as sigma increases, but decrease as the problem size increases. Together, the average proportions and their standard deviations can be used to determine project buffer size. For instance, the project buffer size can be determined by using the average proportion late plus two standard deviations. Thus, using method 2 (S2), the average project buffer size for large problems (with 51 activities) with low uncertainty (σ =0.3) can be determined as 0.059 + 2(0.037)= 0.133, or a 13.3% project buffer.
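The buffer-sizing rule used in this example is simply the mean proportional lateness plus two standard deviations; a one-line helper reproduces the 13.3% figure.

def project_buffer_fraction(mean_lateness, sd_lateness, k=2):
    """Project buffer as a fraction of the planned makespan: mean + k standard deviations."""
    return mean_lateness + k * sd_lateness

# Example from the text (method 2, 51 activities, sigma = 0.3):
# project_buffer_fraction(0.059, 0.037)  ->  0.133, i.e., a 13.3% project buffer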

In Figure 4, average resource utilization rates are presented. Regardless of the method used, the utilization rates fall between 53% and 65%, indicating the availability of excess resource capacity, although method 2 has the highest utilization rates. Compared to the resource tightness values given in Figure 1, which were computed assuming deterministic task durations, the utilization rates in Figure 4 are 20% lower due to the longer-than-expected project completion times.

Figure 5 summarizes the effects of the use of resource buffers on project performance. Specifically, the total reduction in task durations (C), the total amount of resource buffer consumed (XR) to achieve these reductions, and the changes in project completion time (ΔT) are reported. The ΔT values are determined by taking the difference between the actual project completion times with and without resource buffers. The ratios of amount of resource buffer consumed to the change in project completion time, as well as the ratios of total reductions in task durations to the change in project completion time, are also reported and presented graphically in Figures 8-11.

Among all the methods tested, regardless of level of uncertainty, method 2 requires the highest amount of resource buffer while providing the most reduction in actual project completion time, and thus provides the smallest gap between the actual and planned project completion times. The maximum reduction in task durations also occurs with this method and the C/ ΔT ratio ranges between 1.42 and 1.92. The ideal result would be to have this ratio equal to 1, indicating that every reduction in task duration results in the same amount of reduction in project duration.

Figure 3. Average differences between the planned completion times and the actual completion times

Figure 4. Average utilization rate using actual completion times

Figure 5. Average amount of crashing (C), amount of additional resources (XR), and amount of reduction in project completion time (ΔT)

In general, the graphs in Figures 8-11 indicate that as the project size increases, the effectiveness of resource buffering decreases slightly. However, as uncertainty in task duration increases, the relative effectiveness of using resource buffers also increases. The most effective resource buffering, when there is high uncertainty, is with method 4. Although method 2 requires the highest amount of resource buffering, the ratio of resource buffer used to the change in project completion time is very comparable to the other methods. Method 1 has the highest XR/ΔT ratio.

Figure 8. XR/ΔT ratio for Method 1.

Figure 9. XR/ΔT ratio for Method 2.

Figure 10. XR/ΔT ratio for Method 3.

Figure 11. XR/ΔT ratio for Method 4.

Statistical Analysis

The results of the simulation study are also analyzed using ANOVA. Figure 6 summarizes the results. The dependent variable is the gap between the planned and actual project completion times. The explanatory variables are the level of uncertainty, the project complexity, the planned project completion time, the resource buffer methods, and the interactions between the buffering methods and the uncertainty and between the buffering methods and the planned project completion time. As can be seen from the table, the R-square is 0.63 and the F values are highly significant, indicating that the gap between the planned and actual completion times is related to the explanatory variables. Both interactions are significant. The interaction with the planned project completion time indicates that, as the planned project completion time increases, the performance of the resource buffer methods changes; in particular, the relative performance of method 2 is better as the planned project completion time gets larger. The interaction between the level of uncertainty and the buffering method is also significant, indicating that, as the level of uncertainty changes, the relative performance of the resource buffer methods also changes. Again, with high uncertainty, method 2 provides the smallest gaps.

In the second part of the analysis, we developed a prediction equation for each of the methods, using complexity and the planned project completion time. The equations are listed in Figure 7. With these equations, a project manager would be able to predict the actual completion time of a project after the project planning stage is complete. The project buffer can then be set accordingly. The coefficient on the planned project completion time for method 2 is the lowest (0.146), which means that, as the planned completion times increase, the actual project completion times increase at a slower rate compared to the other methods.

Figure 6. ANOVA

Figure 7. Prediction equation for each of the methods.

Conclusions

Most projects are managed by carefully watching the calendar, comparing where we are today against the baseline schedule (Patrick, 1999). In many cases, however, the reliability of project performance, in terms of meeting promise dates, is impacted not only by effective planning but also by resource behavior. The extensive computational testing reported in the previous section clearly indicates how project performance is affected by the level of resource usage, and the importance of providing a procedure that the PM can follow during implementation regarding the allocation of resource buffers. The results also show that the inclusion of feeding and project buffers assists the PM in dealing with uncertainty, but is insufficient for delivering the project within the planned time frame. The four resource buffering methods we offer in this study provide more effective project management when it comes to meeting deadlines. Based on the results presented in the tables, the best strategy for monitoring project progress is to monitor each task and react with additional resources if it is running late, regardless of the project status. The next-best strategy would be to use the same monitoring scheme, but only for the critical chain tasks. Following these strategies can result in an actual project completion time that will be, on average, within 5% of the planned completion time. In addition, for projects similar to the ones in the Patterson data set, the PM should plan for using roughly two-thirds of the available resources and allocate one-third to be available for resource buffers.

The simulation study helped us identify other advantages of using resource buffers in CCPM. For instance, when there is high uncertainty in task durations, the use of resource buffers is a more effective way to deal with delays. Also, resource buffers reduce the need for project buffers because the actual project completion times are relatively close to planned project completion times, regardless of project size and level of uncertainty.

Recognition of risk and uncertainty in project management was the starting point of CCPM when it was introduced by Goldratt in 1997. Recent studies in buffer management that offer alternative ways of sizing and locating the buffers in a critical chain schedule have made the approach more comprehensive. What has been lacking is a method for protecting the schedule against uncertainty through the use of resource buffers. This study addresses this deficiency in the literature and provides directions for future research in CCPM.

References

Fleming, Q. W., & Koppelman, J. M. (1996). Earned value project management. Upper Darby, PA: Project Management Institute.

Goldratt, E. M. (1997). Critical chain. Great Barrington, MA: North River Press.

Herroelen, W., & Leus, R. (2001). On the merits and pitfalls of critical chain scheduling. Journal of Operations Management, 19, 559-577.

Kerzner, H. (2000). Project management: A systems approach to planning, scheduling, and controlling. Hoboken, NJ: John Wiley & Sons.

Kolisch, R., & Hartmann, S. (2000). Experimental evaluation of state-of-the-art heuristics for the resource-constrained project scheduling problem. European Journal of Operational Research, 127, 394-407.

Law, A. M., & Kelton, W. D. (1991). Simulation modeling and analysis. New York: McGraw Hill.

Leach, L. P. (2000). Critical chain project management. Boston: Artech House.

Mabin, V. J., & Balderstone, S. J. (1999). The world of the theory of constraints. Boca Raton, FL: St. Lucie Press.

Newbold, R. C. (1998). Project management in the fast lane: Applying the theory of constraints. New York: St. Lucie Press.

Patrick, S. F. (1999). Critical chain scheduling and buffer management: Getting out from between Parkinson’s rock and Murphy’s hard place. PM Network, 13, 57-62.

Patterson, J. (1984). A comparison of exact procedures for solving the multiple constrained resource project scheduling problem. Management Science, 30, 854-867.

Project Management Institute. (1996). A guide to the project management body of knowledge (PMBOK® guide). Upper Darby, PA: Author.

Tukel, O. I., Rom, W., & Eksioglu, S. (2003). An investigation of buffer sizing techniques in critical chain scheduling. European Journal of Operational Research (accepted for publication).

©2006 Project Management Institute
