# Managing risks in projects with decision technologies

**Timothy J. Lowe, University of Iowa; Richard E. Wendell, University of Pittsburgh**

Chapman and Ward (1997) define project risk as a “threat to success” of the project. They investigate the roots of uncertainty through a systematic analysis of their *Project Definition Process*—providing answers to the following six questions: (1) Who are the parties involved? (2) What do the parties want to achieve? (3) What is it the parties are interested in? (4) How is it to be done? (5) What resources are required? And, (6) when does it have to be done? Further, they state that the *purpose* of risk management is to improve project performance via a systematic identification, appraisal, and management of project related risk. Alternatively, Kangari and Boyer (1989) define risk management as a systematic approach to risk identification, goal description, risk sharing and allocation, risk evaluation, and risk minimization and response planning.

Thus, under these broad definitions of risks and quite general descriptions of risk management, it is clear that risk avoidance/risk mitigation programs must be multidimensional. These programs often include good management practice, leadership and human resource issues, as well as scheduling, contingency planning and buffer management (buffer sizing and placement). The focus of this paper is this latter set of programs in that we analyze the way that a project team can utilize quantitative planning tools to contain project risk and to hedge against its impact on success. We are mostly concerned with *schedule risk*—the uncertainty of project completion time. We wish to point out early on that our work in this area has just begun, but it appears to be a fruitful area for future research.

Hulett (1995) outlines seven sources of schedule risk:

1. Lack of a realistic schedule developed to a level of detail that accurately reflects how the work will be done, with fully developed work scopes and sequential logic.

2. Inherent uncertainty of the work arising from advanced technology, design and manufacturing challenges, and external factors including labor relations, etc.

3. Complexity of projects, which requires coordination of many contractors, suppliers, government entities, etc.

4. Estimates prepared in early stages of a project with inadequate definition of the work to be performed, and inaccuracies or optimistic bias in estimating activity durations.

5. Over-use of directed (constraint) dates, perhaps in response to competitive pressures to develop aggressive, unrealistic schedules.

6. Project management strategies favoring late start scheduling or fast track implementation.

7. Lack of adequate float or management reserve.

Hulett argues that the key to risk management is the quantification of risk, and the use of software tools to reduce the impact of risk on project schedules. We agree! Indeed, with the increasing power of the computer, with better and easier-to-use software, with more and better data available, and with increasing pressures to manage projects effectively, we believe that using decision technologies to manage risk in projects is now an important part of project risk management. Accordingly, it is the focus of this paper.

## Background

Probably dating back to the first use of the Critical Path Method (CPM) for project scheduling, project managers have realized the shortcomings of CPM for dealing with uncertainties that are inherent in any plan. As pointed out by Hulett (1995, 2000), the project duration calculated by CPM is accurate only if everything goes according to plan. This is rare in real projects. Furthermore, given uncertain activity duration times, the completion date provided by CPM is often not even the most likely project completion date, and the critical path identified by traditional CPM techniques may not be the path most likely to delay the project, and hence the one most in need of management attention. The reasons for this phenomenon are well known and include the facts that CPM uses only point estimates of duration times (thereby ignoring duration variances), and that only the longest path in the project network (using these point estimates) is used to compute the project length.

To circumvent the above difficulties with CPM, several methods have been investigated to obtain more realistic estimates of project completion time. Bendell et al. (1995) give a nice review of some of these methods. Probably the earliest attempt is what has become known as the *PERT method,* whereby three duration estimates (optimistic, most likely, and pessimistic) are used for each activity. The critical path is then found using the most-likely duration values, and the probability distribution of project completion time is taken to be the distribution provided by the distribution of the sum of random variables describing the durations of activities on the critical path. Difficulties with this approach include the assumption that activity durations are *independent* variables, and that durations of non-critical activities (even if their durations have a large variance) are ignored. In spite of these difficulties, the PERT method seems to be widely adopted and is a feature included in most project management software systems.
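The classical three-point calculation described above is easy to sketch in a few lines. The activity data below are hypothetical, and the normal approximation of the path length is the usual PERT heuristic (independent activities, beta-distribution mean and variance formulas):

```python
import math

# Hypothetical critical-path activities: (optimistic, most likely, pessimistic)
critical_path = [(2, 4, 9), (5, 7, 12), (3, 5, 8)]

# Classical three-point (beta) approximations of each activity's mean and variance
means = [(a + 4 * m + b) / 6 for a, m, b in critical_path]
variances = [((b - a) / 6) ** 2 for a, m, b in critical_path]

# Treating activity durations as independent, the path length is approximately
# normal with the summed mean and variance
mu = sum(means)
sigma = math.sqrt(sum(variances))

def prob_complete_by(t):
    """P(path length <= t) under the normal approximation."""
    return 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))

print(f"expected path length: {mu:.2f} days (std dev {sigma:.2f})")
print(f"P(complete within 18 days) = {prob_complete_by(18):.3f}")
```

Note that this sketch exhibits both weaknesses mentioned above: the activities are assumed independent, and any path other than the nominal critical one is ignored.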

Another approach, often referred to as the *analytical approach,* involves the computation of the cumulative distribution function (CDF) of project completion time as a multiple integral (Ringer, 1969). Due to the complexity of the computations, this approach is feasible only if the network is small and the probability density functions of the activities are in analytical form. When exact methods are impractical, Ringer has proposed computer-based *numerical integration* methods to approximate the completion time CDF.

Another approximation method, called the *moments method,* is proposed by Sculli (1983) and it depends on being able to compute the first four central moments of the distribution of the sum, and maximum of activity times. In this method, the project network is progressively reduced to a single arc, by collapsing serial and parallel arcs. Davis and Stephens (1983) have developed computer software to support this approach. Bendell et al. (1995) developed the moments method in the special case where activity times are Erlang distributed. Gong and Hugsted (1993) proposed a method they call *backward-forward uncertainty estimation* as a means to include non-critical activity time uncertainties in the risk analysis of a project network.

Other methods have also been proposed in the literature for managing risk. These include managerial approaches such as the *Planned Contingency Allowance* (PCA) technique proposed by Eichhorn (1997) using “unders” to offset “overs” as proposed by Ruskin (2000), or the application of the *Theory of Constraints* to project management proposed by Goldratt (1997); as well as more analytical approaches such as those proposed by Gong and Rowlings (1997), and Gong (1997).

*Computer simulation* is another approach that is becoming even more popular as desktop computing power increases and special simulation software becomes available at an affordable price. Early simulation approaches include those of Van Slyke (1963), Gray and Reiman (1969), and Burt and Garman (1971a, 1971b). In this approach, on each iteration the duration of each activity is sampled from its distribution, and the resulting values are used to compute the longest path in the network. This exercise is repeated a large number of times and a distribution of project completion times is then developed. In addition, other useful information, such as the fraction of times a given activity appears on the critical path, is gathered. Clearly, for the technological reasons mentioned above, this approach to estimating the distribution of project completion time will continue to become more popular (for example, see Levine, 1996; Gump, 1997). For a further discussion of simulation in project risk management see Grey (1995).
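A minimal sketch of this simulation approach follows, using a hypothetical three-task network (tasks A and B in parallel, C after both) with triangular durations; the network, duration parameters, and thresholds are illustrative, not taken from any example in this paper:

```python
import random

random.seed(42)  # reproducible runs

# Hypothetical network: tasks A and B run in parallel, and C starts only
# after both finish.  Durations are triangular: (low, mode, high).
A, B, C = (4, 6, 12), (5, 7, 9), (3, 5, 10)

def draw(low, mode, high):
    return random.triangular(low, high, mode)

N = 20_000
completions = []
a_critical = 0  # runs in which A, rather than B, determines C's start

for _ in range(N):
    da, db, dc = draw(*A), draw(*B), draw(*C)
    completions.append(max(da, db) + dc)  # longest path through the network
    if da >= db:
        a_critical += 1

mean = sum(completions) / N
print(f"estimated mean completion time: {mean:.2f} days")
print(f"A lies on the critical path in {a_critical / N:.1%} of runs")
print(f"estimated P(completion > 16 days) = {sum(t > 16 for t in completions) / N:.3f}")
```

The criticality index (the fraction of runs in which an activity is on the longest path) is exactly the kind of by-product information the text mentions, and is something CPM and PERT cannot provide.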

Simister (1994), whose paper provides a nice overview of various project risk analysis and management techniques, reported on the results of a mail survey of various methods that expert practitioners use to manage project risk. Although the number of respondents to his survey was quite modest, he concluded that computer applications, e.g., packages such as @RISK, are used by the majority of practitioners. He also concluded that one of the simplest of all possible techniques (checklists) was the most favored of all techniques suggested in his survey instrument. We remark that the results of that survey may be quite different now (in the year 2000) than in the year of his survey (1994).

Indeed, recent advances in such software have made them even more relevant and user-friendly. As an example, the @RISK add-on for Microsoft Project 98 now allows “conditional branching” in the simulation analysis of project schedules. By conditional branching we mean that specific branches of the decision tree are “sampled” only if certain conditions are met. However, the user must specify the branching rules *prior* to the simulation run.

Specifying branching rules is not easy. Yet it is a fundamental problem that must be faced. In this paper we investigate the use of decision theory (Jones, 2000; Johnson & Schou, 1990) to make such decisions. Specifically, the problem we are concerned with involves contracting opportunities to reduce task times on various project activities. These contracts involve financial commitments and in some cases lead-times to accept contract terms. Also, task times are uncertain but have known probability distributions. Thus, we are concerned with the decision process of crashing activities with the overall objective of minimizing expected project cost. A part of the planning process is to determine if, and when, to elect the crashing option for various tasks. Thus, the setting is ripe for the use of decision theory.

In the next three sections we consider a relatively simple (core) project with the characteristics mentioned above. We show that when crash-decision lead-times are zero, that the problem is quite easily solved. However, when lead-times are positive, the problem becomes much more challenging. We use the core project to demonstrate the relationship of (optimal) expected project cost to various problem parameters. Our analysis allows us to gain insight into such problems, but clearly only scratches the surface of possible research efforts. In section “Research Issues and Practical Considerations in Utilizing Decision Analysis,” we discuss research issues as well as practical issues in utilizing this approach. In the last section of the paper, we give some concluding observations and discuss other risk containment strategies that are possible to use in this problem environment.

## A Tree Characterization of Crashing Options in a Serial Project

Consider a serial project where the durations of each task are either short or long, each occurring with a given probability. In addition to these normal durations, suppose that we have the option to crash each task. In particular, suppose that the crash decision for each task is simple—to crash it or not to crash it. Crashing a task costs a given amount of money, called its crash cost, above and beyond the normal cost of the task. As with normal durations, the durations of a task under crashing are either short or long. For simplicity of presentation, we assume that the probabilities of long and short durations under crashing are the same as in the normal case. However, under crashing the duration of a task in a “crash mode” is generally smaller than its respective duration in a “normal mode.” A decision to crash a task must be made in advance of the start of the task by a minimum amount of time, called its lead-time. Exhibit 1 illustrates such a situation when lead-times are zero.

A serial project having three tasks is depicted in Exhibit 1. To interpret the exhibit, consider task B. This task will take either two or 13 days to complete in the normal mode, each with a probability of .5. In a crash mode, under which an additional $50 crash cost is incurred, it will take either one or seven days, each again with a probability of .5. Since B has a lead-time of 0, a decision to crash B can be made when task B begins.

Exhibit 2 depicts a decision tree for this example. (Actually because of size constraints, Exhibit 2 only gives the top half of the tree corresponding to not crashing task A. The bottom half, corresponding to crashing A, is similar.) We illustrate the notation in this tree for task B (with the notation for other tasks being analogous): “B no” designates a decision not to crash B and “B yes” means to crash it; “B NS” designates a normal, short duration, and “B NL” designates a normal, long duration; “B CS” designates a short duration under crashing while “B CL” means a long duration under crashing.

Suppose that the project in Exhibit 1 has a finishing target time of 18 days and that a project duration exceeding this target will incur a penalty cost of $30 per day. Further, suppose that there are no other relevant costs and that the objective is to make crash decisions so as to minimize expected total cost. Using standard decision analysis, it can be verified that the optimal solution is to crash task B under all conditions, and only (as a contingency plan) to crash C in the following situations: when the duration of task A is long; or when the duration of task A is short but the duration of task B is long.
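In the zero-lead-time case, the backward folding just described can be sketched as a short recursion: at the start of each task, knowing the elapsed project time, choose the cheaper of crashing or not crashing. Task B's durations and $50 crash cost follow the text; since Exhibit 1 is not reproduced here, tasks A and C below are hypothetical stand-ins, so the resulting cost is illustrative only:

```python
# Zero-lead-time crashing by backward induction.  Each task is
# (normal_short, normal_long, crash_short, crash_long, crash_cost), with
# short and long durations each occurring with probability 0.5.
TASKS = [
    (3, 10, 2, 6, 40),   # task A (hypothetical)
    (2, 13, 1, 7, 50),   # task B (durations and $50 crash cost from the text)
    (4, 12, 3, 8, 60),   # task C (hypothetical)
]
TARGET, PENALTY = 18, 30  # $30 per day beyond an 18-day target

def best_cost(tasks, elapsed):
    """Minimum expected cost when the crash decision for each task is made
    at its start, knowing the elapsed project time so far."""
    if not tasks:
        return PENALTY * max(0, elapsed - TARGET)
    ns, nl, cs, cl, cc = tasks[0]
    rest = tasks[1:]
    normal = 0.5 * best_cost(rest, elapsed + ns) + 0.5 * best_cost(rest, elapsed + nl)
    crash = cc + 0.5 * best_cost(rest, elapsed + cs) + 0.5 * best_cost(rest, elapsed + cl)
    return min(normal, crash)

print(f"optimal expected cost: ${best_cost(TASKS, 0):.2f}")
```

Because each crash decision here can wait until the task's start, the recursion naturally produces contingency plans of the kind described above (e.g., crash a later task only if earlier tasks ran long).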

Note that each path in the decision tree in Exhibit 2 has one decision node at the start of every task—namely whether or not to crash it. We now add the possibility of a lead-time requirement that changes this characteristic.

Let everything be the same as in Exhibit 1 except that the lead-time of task B, as well as task C, is six days. Thus, if a decision to crash C is made at the start of task C then a six-day delay will occur before task C can begin. On the other hand, if the decision to crash C is made at the beginning of the project, then there might be little or no lead-time delay, but short durations for A and B could make crashing C unnecessary or uneconomical.

As evident from this example, a crashing decision under a lead-time is not just “whether to crash or not to crash,” but also “when to crash.” Making this decision requires knowing when information is available to the decision-maker about task durations.

In our analysis of such a problem, we assume that the decision-maker does not know which of the two possible task times (long or short) will occur until the short task time of the respective task has elapsed. (However, our modeling approach is valid under other scenarios regarding points in time when it is known which task times will apply.) Under the above assumption, we have the following result regarding points in time when crash decisions are to be considered.

### Sufficiency Result

For each task j, it is sufficient to consider the decision to crash the task at the beginning of the project, and at those points in time represented by the end of short duration completions of each task i preceding task j. Furthermore, crashing task j at the decision point associated with a predecessor task i need only be considered if the duration of task i turns out not to be short.

This result follows from the fact that new information relevant to a crash decision will occur at only those points in time identified in the result. If the duration of a predecessor task i was short, then this outcome cannot motivate crashing task j, since a short duration is the best outcome that could have occurred. As an illustration, in Exhibit 2 it will be sufficient to consider the decision to crash task C at the beginning of the project, at the point in time representing a short duration of task A if it happens that the duration of task A is long, and at the point representing a short duration of task B if it happens that the duration of task B is long.

Using the above sufficiency result, we can characterize the set of crashing decisions as a decision tree. Exhibit 3 denotes a decision tree for a simpler version of Exhibit 2—an example using only tasks A and B. In addition to the previous notation, we let “B LTD” designate the lead-time delay in the start time of B due to B’s lead-time. (We will define the concept of a lead-time delay shortly, but for the moment it is convenient to think of it as a potential delay in project length due to the fact that the crashing decision has a positive lead-time.) Further, we sometimes break a long duration of a task into two components, where the duration of the first component equals the short duration of the task and where the duration of the second component equals the difference between its long and short durations. For example, “A1 CL” is the first component of A under a crash mode and its duration equals 5, whereas “A2 CL” is the second component whose duration equals 10 (i.e., 15 − 5). Finally, note that the durations of the “tasks” in a tree corresponding to lead-time delays of tasks need to be computed. We address this issue in the next section of the paper.

Observe that Exhibit 3 is a *general tree* for two-activity serial projects (under our assumptions) since it includes all the logical possibilities that can occur. Indeed, this general tree includes all possible choices for probabilities, costs, lead-times, etc. Furthermore, the pattern of such a general tree is clear for any serial project. At the beginning of the project, crash decisions are made (either to crash or not to crash at this time) for each task in the project. Then, we have the first activity duration. If this duration is short, it is immediately followed by the second activity duration. If the first activity duration is long, this fact will become known only at the point in time when its short duration has elapsed. At this point, we reconsider crashing decisions for all future tasks that we have not already decided to crash. We let *T* denote this general tree.

Not surprisingly the general tree for the three tasks in Exhibit 2 is much larger and, therefore, is difficult to display in a figure. Exhibit 4 gives a *section* of this tree corresponding to when no tasks are crashed at the beginning of the project. Since we can choose to either crash or not to crash each of three tasks at the beginning of the project, observe that Exhibit 4 represents just one of eight (= 2^{3}) sections of the decision tree for Exhibit 2.

The tree in Exhibit 2 can be viewed as a special case of the general tree when lead-times are zero. Further, other choices of lead-times can yield other trees that are special cases of the general tree. In this paper, we focus on the general tree since it is applicable to any serial problem.

## Analysis of the Decision Tree

First, we introduce some notation. With *T* as the general decision tree for a project, let *P* denote a path in *T,* where a path is a sequence of edges of *T* from the root of *T* to a tip (terminal point) of *T*. What is the duration of the path *P*? Clearly this will be the sum of the durations of the tasks plus any lead-time delays along the path. Determining the delays requires some calculations, which we consider next.

Observe that lead-time delays can only occur for tasks that are crashed. Along a path *P* in *T* we start with task A and then iteratively consider B and then C, etc. Suppose that A is crashed along P. If the lead-time for task A is positive, then A’s lead-time delay equals its lead-time (since we assume that the project starts as soon as possible). Otherwise, A’s lead-time delay is zero. In general, for any crashed task j on path *P* we compute the elapsed time along *P* from the point where the crash decision is made until the time that the predecessor tasks of j are completed. This elapsed time is, of course, simply the sum of the durations of the events (including any lead-time delays) along the path from the point of the crash decision to (and including) the immediate predecessors of task *j*. If the lead-time of task j is greater than this elapsed time, then the lead-time delay equals the difference (lead-time − elapsed time). Otherwise, the lead-time delay is zero. In Exhibit 4, observe that the lead-time delay of task B on path 19 (the 19^{th} path from the top) is 1. Here the corresponding elapsed time is 5 and the lead-time of B is 6, so that the lead-time delay is 6 − 5 = 1.

We now consider a companion concept of lead-time delay. Again, as above, for each crashed task j along *P* we iteratively compute the difference between elapsed time and lead-time. If the lead-time is less than the elapsed time, the difference (elapsed time − lead-time) is called the lead-time slack of j along *P*. If the lead-time is not less than the elapsed time, then we define the lead-time slack as 0. For a task not crashed along *P* we say that its lead-time slack along *P* is ∞. Let S(j, *P*) denote the *lead-time slack* of task j along *P*. As an illustration, observe that the lead-time slack of task C along path 5 in Exhibit 4 is 5 since the elapsed time is 11 and the lead-time is 6. Also note that for any task j, at most one of lead-time delay and lead-time slack is positive.
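Both quantities reduce to a clipped difference between lead-time and elapsed time, as this small sketch (using the numeric examples just given) shows:

```python
def lead_time_delay(lead_time, elapsed):
    """Start delay when the lead-time exceeds the elapsed time from the
    crash decision to the completion of the task's predecessors."""
    return max(0, lead_time - elapsed)

def lead_time_slack(lead_time, elapsed):
    """How much the lead-time could grow before it starts delaying the
    task along this path."""
    return max(0, elapsed - lead_time)

# Examples from the text (Exhibit 4):
print(lead_time_delay(6, 5))    # task B on path 19: delay of 1
print(lead_time_slack(6, 11))   # task C on path 5: slack of 5
```

The clipping at zero in both functions makes the final observation immediate: for a given lead-time and elapsed time, at most one of the two can be positive.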

For given lead-times, the duration at each terminal point on the tree is simply the sum of the durations of the tasks along the corresponding path. This duration may result in some indirect or penalty cost, which can be added to the total cost of crashing tasks along the corresponding path. Then, the expected cost of the project can be determined by doing the standard backward folding of the decision tree *T,* yielding optimal decisions at the decision nodes. (Of course, some decision nodes may have alternative optimal decisions.)

Consider an optimal solution to the decision problem where exactly one branch out of each decision node is specified. Let T* denote a *strategy subtree* consisting of those paths *P* where all decision branches on each such path correspond to this optimal solution. Observe that T* specifies the optimal decision to be selected at each decision point that can be attained with a positive probability. To illustrate, consider the following example.

Suppose that indirect costs are related to project duration as given in Exhibit 5. It can be shown that the optimal solution in this situation is A no, B no, and C no at all decision points, except for the case where both A NL and B NL occur. In that case, we choose C yes. The optimal strategy subtree T* for this is the union of paths 1, 2, 3, 4, 7, 8, 11, and 12 in Exhibit 4. The expected cost of this strategy is $407.50.

Consider some path *P* ∈ T*. We now define the *minimum lead-time slack of task j,* denoted S(j), as: S(j) = Min{S(j, *P*) for *P* ∈ T*}.

Note that S(j) gives the *upper limit of increase* in the lead-time of task j before any increase in the cost of the project will be incurred. For task C in this example, the minimum lead-time slack is 5 since only two paths in *T** have a crashed duration for C (paths 11 and 12 in Exhibit 4) and the slack along each is 5. Thus, the cost impact of an increase in the lead-time of task C is 0 up to an increase of 5.
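As a sketch, S(j) is simply a minimum over the per-path slacks, with paths along which the task is not crashed contributing infinite slack:

```python
import math

def min_lead_time_slack(slacks):
    """S(j): the minimum of S(j, P) over the paths P of the strategy
    subtree; tasks not crashed along a path contribute infinite slack."""
    return min(slacks)

# Task C in the example: crashed only on paths 11 and 12 (slack 5 on each);
# on the other six optimal paths it is not crashed (slack = infinity).
print(min_lead_time_slack([math.inf] * 6 + [5, 5]))  # -> 5
```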

## Impact of Changes in Lead-Time and Indirect Cost

For an increase in lead-time of a task beyond its slack, we can expect the project cost to increase. What is the nature of this increase? Of course, with the indirect cost table given in Exhibit 5, the expected cost will be a nondecreasing, nonconvex, piecewise-linear function of lead-time. However, even if indirect cost increases as a linear function of project duration, the following exhibit illustrates that expected project cost is still a nondecreasing, nonconvex, piecewise-linear function of lead-time.

Consider a modification of the second example where the indirect cost is $30 per day times project duration. The solution (strategy) that minimizes expected cost is to crash A, B, and C at the beginning of the project, corresponding to an expected cost of $705. This cost is derived as follows. The cost of crashing all three tasks is $225. Since tasks A and B are crashed, if they both realize short durations, task C still cannot start until time 6 since C’s lead-time is 6. With this fact, it can be verified that the expected project duration for this strategy is 16 days, leading to an indirect cost of $480. For this strategy, activity C has a lead-time slack of 1, which is evident since activity B’s lead-time of 6 means that C cannot start until day 7 at the earliest and since the lead-time of C is 6. For increases in C’s lead-time beyond 7, cost will increase, as depicted in Exhibit 6.

The shape of this cost curve can be readily understood. With a lead-time of 6 for C, tasks A, B and C are all crashed in a minimum-cost solution. Beyond 7, increases in the lead-time of C will result in an increased project duration in the case when A has a duration of 5 and B has a duration of 1. But this case occurs with a probability of .25 and so expected cost will increase at a rate of .25($30) = $7.50 per day. Cost will continue increasing at this rate until the lead-time of C is 13, at which point the expected cost is $750. At this point delay will occur when B takes on either a long or a short duration, so the cost will begin to increase at .5($30) or $15 per day. At the point when the lead-time is 13 2/3, the optimal solution changes to one where only B and C are crashed. Now, with A not crashed, the project length will increase only when the durations of A and B are both short. This occurs with probability of .25, and so the rate of increase in expected project cost is again $7.50 per day. This rate will continue until the lead-time is 16 1/3, at which point we have an alternative optimal solution of crashing A and B, but not C. Above this point, C will remain “uncrashed” and so further increases in C’s lead-time will not affect the expected project cost. Thus, the expected project cost stays constant at $780, starting at a lead-time of 16 1/3.
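The breakpoints worked out above can be collected into a small piecewise-linear evaluator; the segment data below simply transcribe the numbers from this example:

```python
# Breakpoints of expected project cost versus task C's lead-time, transcribed
# from the worked example: flat at $705 up to a lead-time of 7, rising at
# $7.50/day, then $15/day, then $7.50/day, and flat at $780 from 16 1/3 on.
SEGMENTS = [                    # (lead_time_at_start, cost_at_start, slope)
    (6.0, 705.0, 0.0),
    (7.0, 705.0, 7.5),
    (13.0, 750.0, 15.0),
    (13.0 + 2 / 3, 760.0, 7.5),
    (16.0 + 1 / 3, 780.0, 0.0),
]

def expected_cost(lead_time_c):
    """Nondecreasing, piecewise-linear expected cost curve (Exhibit 6)."""
    for start, cost, slope in reversed(SEGMENTS):
        if lead_time_c >= start:
            return cost + slope * (lead_time_c - start)
    return SEGMENTS[0][1]  # below the first breakpoint the curve is flat

for lt in (6, 10, 13, 14, 20):
    print(lt, expected_cost(lt))
```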

In general, the shape of the expected cost curve as in Exhibit 6 can be explained as follows. Consider any path *P* in some strategy subtree. As the lead-time of an activity j on the path increases, the length of the path will remain constant until its lead-time slack S(j, *P*) becomes 0. Then the path length, and hence the cost (under the assumption of linear indirect cost), will increase linearly. Since the expected cost of a solution strategy is the expected cost over the paths in its strategy subtree, the expected project cost of each strategy will be a nondecreasing, piecewise-linear, convex function of lead-time. To find an optimal strategy, the backward folding of the tree finds the minimum-cost solution over all such strategies. Thus, the general shape of the cost curve will be the minimum of these convex functions, which is generally nonconvex. Typically, it will be flat until slack is exhausted, then increase in a piecewise-linear, nonconvex fashion, and then become flat again when the selected task is not crashed in an optimal strategy—hence the “S-shape” in Exhibit 6.
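A two-strategy toy example (with made-up numbers) confirms that the pointwise minimum of convex piecewise-linear curves need not be convex:

```python
# Two convex piecewise-linear strategy costs as functions of a lead-time x:
f = lambda x: max(0.0, 7.5 * (x - 7))   # crash strategy: flat until slack runs out
g = lambda x: 30.0                      # no-crash strategy: constant cost

h = lambda x: min(f(x), g(x))           # envelope chosen by folding back

# Midpoint convexity would require h(9.5) <= (h(7) + h(12)) / 2; it fails:
print(h(7), h(9.5), h(12))
```

The envelope h is flat, then rising, then flat again: exactly the “S-shape” just described.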

We now turn our attention to the impact of changes in project indirect/penalty costs. In what follows, we systematically examine the impact of changes in the cost per day rate on the overall project cost.

Consider a modification of the second example where crashing costs and lead-times are fixed, but project indirect cost is simply proportional to the project duration, namely it equals an indirect cost rate times duration. If this rate equals 0, then obviously the optimal solution is to crash nothing and to incur a project cost of 0. As the indirect cost rate increases, crashing of tasks may be optimal. Exhibit 7 illustrates how expected project cost changes as this indirect cost rate changes. Observe that it is an increasing, concave, piecewise linear function.

The rationale for the shape of Exhibit 7 is readily apparent. As the indirect cost rate increases from 0, expected project cost will increase with a slope equal to the expected project length of 29 under regular, i.e., noncrashed, durations for the tasks. This continues until the indirect rate reaches $14.28, at which point we have an *alternative* optimal solution, which involves crashing task B. Since under this solution the expected project duration is 25.5, this becomes the slope of the expected project cost until the indirect rate reaches $15. Then a new solution, crashing tasks B and C, becomes optimal with an expected project duration of 20.5, which becomes the new slope until the indirect rate reaches $22.22. After this, the optimal solution is to crash all tasks and the expected project duration, or slope, is 16.

In general, the shape of this cost curve is not surprising. Indeed for each strategy the expected cost increases as a linear function. Since the folding back process effectively finds the minimum over all strategy subtrees, it involves taking the minimum of a collection of linear functions—which is an increasing, piecewise linear, concave function.
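This minimum-of-linear-functions structure can be checked numerically. The expected durations (29, 25.5, 20.5, 16) and the $0, $50, and $225 fixed costs below are quoted from the example; the $125 fixed cost of the “crash B and C” strategy is our inference, chosen so that the quoted breakpoints ($14.28, $15, $22.22) work out:

```python
# Each strategy is (fixed crash cost, expected project duration).
STRATEGIES = [
    (0.0, 29.0),    # crash nothing
    (50.0, 25.5),   # crash B
    (125.0, 20.5),  # crash B and C (fixed cost inferred)
    (225.0, 16.0),  # crash A, B, and C
]

def expected_cost(rate):
    """Folding back picks the cheapest strategy at each indirect cost rate,
    i.e., the minimum of a collection of linear functions of the rate."""
    return min(fixed + rate * duration for fixed, duration in STRATEGIES)

for rate in (0, 10, 15, 22.22, 30):
    print(f"rate ${rate}/day -> expected cost ${expected_cost(rate):.2f}")
```

Because each strategy's cost is linear in the rate, the lower envelope is necessarily increasing, piecewise linear, and concave, with kinks at the quoted breakeven rates.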

## Research Issues and Practical Considerations in Utilizing Decision Analysis

Herein we showed how the problem of choosing among speedup options in a serial project can be characterized and solved using decision analysis. To do this we considered a core problem having two events for each task and two crash options for each decision. We saw that the problem with lead-times is significantly more complex than the one without lead-times in that the decisions expand from “speedup/don't speedup” to “when to make such decisions.” The problem has some fundamental properties that we discussed, including lead-time slack and the cost impact of changes in lead-time as well as indirect costs. The purpose of the paper is not only to consider a solution of the serial problem, but also to lay a foundation for future work in this area.

As for future work, much needs to be done. While the core problem and the corresponding decision analysis can be readily extended to situations with more than two events for each task and with more than two options for each decision, the corresponding complexity of the decision tree increases quickly. Also, as the number of tasks increases, the size of the tree grows exponentially. Thus, decision analysis can quickly become unwieldy for even modest-sized problems.

One avenue of future research is to find methods for reducing the size of the decision tree prior to analysis. It may be possible to do some form of decision tree reduction by developing domination conditions (conditions that eliminate segments of the decision tree).

Of course, in practice other issues may mandate generalizing the problem to incorporate other aspects such as stochastic lead-times, resources, and risk aversion. And finally there is the question of how to deal with nonserial projects (having parallel paths). Here again we can, in concept, utilize the decision analysis approach, but the complexity of the analysis may make this prohibitive. On this latter point, one possible approach is to reduce a project to a serial path using an approach analogous to classical PERT. However, just as in classical PERT, such an approach may yield suboptimal answers since it ignores other paths that are critical with a nonzero probability.

Given such limitations of decision analysis, future research should include the possible utilization of other approaches such as stochastic optimization, dynamic programming, and heuristic methods.

## Other Risk Management Approaches and Concluding Observations

While this paper focuses on decision technologies in dealing with risk, we recognize that they are just one set of tools among many. Indeed, in the development and delivery of project management training for research and development personnel at Bandag, Inc., one of the authors of this paper helped to develop a Project Management Planning Guide for the management of schedule risk in the development of new products and services. The Guide is a very simple document (two pages) with the purpose of instilling a common project planning discipline in the company. The Guide is a dynamic document in that action items are triggered throughout the life of the project. Also, several steps of the Guide are intended to reduce project schedule risk, e.g., involving the project sponsor early in the development process (to avoid surprises later), ensuring that the project team is involved throughout the planning process, scheduling regular review meetings, etc. Bandag found this disciplined approach to project planning useful and has embedded the Guide in its corporate Product Development Process.

We believe that much can be learned by studying risk analysis techniques currently used in finance. Motivated in large part by financial risk management, we suggest a variety of strategies including: diversifying risks, transferring risks to contractors, purchasing insurance, building in slack (buffers), obtaining more information about uncertainties, controlling the outcome (e.g., through project design), and building in redundancy.

Consider, in particular, the use of diversification in portfolio composition. The notion is to spread the risk among several investments. To carry this concept over to project management, the natural thought is to diversify by selecting more than one “doer” (contractor) when there is uncertainty in task completion times. However, we now show by simple example that diversification is not always the best thing to do.

Let A, B and C denote three tasks, where the completion time for each task is either two or four days with probability 0.5 for each event. We assume that if the tasks are done by a single contractor, the completion time distributions are perfectly correlated; if done by different contractors, the task time distributions are independent.

If the project involves doing the tasks in series, i.e., A then B then C, it is optimal to diversify (assign each task to a different contractor): the project is then completed in either six or 12 days with probability (0.5)^{3} = 12.5% each, and in either eight or 10 days with probability 37.5% each. By assigning all tasks to a single contractor, the completion time would be (because of perfect correlation) either six or 12 days with probability 0.5 for each event. Note that the *expected* completion time is the same in both cases (nine days), but diversification gives rise to a (desirable) lower variance of completion time. The analogy to financial portfolio theory is that in this project with serial tasks, total project time is the *sum* of the task times.
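The serial-case probabilities above can be checked by exhaustively enumerating the equally likely outcomes. The following sketch (in Python; not part of the original paper) computes the completion-time distribution under diversification and under sole-sourcing:

```python
from itertools import product

# Each of tasks A, B, C takes 2 or 4 days with probability 0.5.
# Serial project: total duration is the SUM of the three task times.

def dist_of(combine, outcomes):
    """Map each equally likely outcome through `combine` into a
    {duration: probability} distribution."""
    d = {}
    for times in outcomes:
        t = combine(times)
        d[t] = d.get(t, 0.0) + 1.0 / len(outcomes)
    return d

# Diversified: three independent contractors -> 8 equally likely outcomes.
diversified = dist_of(sum, list(product([2, 4], repeat=3)))
# Sole-sourced: perfectly correlated task times -> only two outcomes.
sole_source = dist_of(sum, [(2, 2, 2), (4, 4, 4)])

print(diversified)   # {6: 0.125, 8: 0.375, 10: 0.375, 12: 0.125}
print(sole_source)   # {6: 0.5, 12: 0.5}
```

Both distributions have the same mean of nine days, but the diversified distribution concentrates probability on the middle durations, which is the lower-variance outcome described in the text.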

Alternatively, suppose the tasks are done in parallel, i.e., A and B and C can begin at the same time. Project completion time is then the *maximum* of the three task completion times. In this case, it is optimal to assign all three tasks to a single contractor: project completion time is either two or four days with probability 0.5 for each event, while if independent contractors are chosen, the completion time is four days with probability 87.5% and two days with probability 12.5%. Thus, non-diversification (sole-sourcing) reduces the expected completion time: three days versus 3.75 days.
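The parallel-case arithmetic can be verified the same way by enumeration, replacing the sum with the maximum. A sketch (in Python; not part of the original paper):

```python
from itertools import product

# Parallel project: duration is the MAX of the three task times
# (each task takes 2 or 4 days with probability 0.5).

def dist_of(combine, outcomes):
    """Map each equally likely outcome through `combine` into a
    {duration: probability} distribution."""
    d = {}
    for times in outcomes:
        t = combine(times)
        d[t] = d.get(t, 0.0) + 1.0 / len(outcomes)
    return d

# Diversified: independent contractors -> 8 equally likely outcomes.
diversified = dist_of(max, list(product([2, 4], repeat=3)))
# Sole-sourced: one contractor, perfectly correlated -> two outcomes.
sole_source = dist_of(max, [(2, 2, 2), (4, 4, 4)])

mean = lambda d: sum(t * p for t, p in d.items())
print(diversified)                        # {2: 0.125, 4: 0.875}
print(mean(diversified), mean(sole_source))  # 3.75 3.0
```

Only the all-tasks-finish-early outcome (probability 0.5^3 = 12.5%) gives a two-day project under diversification, which is why sole-sourcing has the lower expected duration here.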

The intent of the above example is to show that diversification is not always the best strategy. Whether to diversify depends upon project task dependencies as well as other factors, such as the independence of completion time distributions among contractors. As with other risk strategies, the decision of what to do is not easy.

In this paper we have advocated the use of readily available decision technologies to manage schedule risk in projects. Although our focus has been somewhat narrow, i.e., confined to schedule, we believe that the use of modern decision technology tools can be useful to a project team as it plans and executes a project. In conclusion, we hope that this paper stimulates research in utilizing the advances in both computer hardware (the ability to solve larger problems more quickly) and decision technology software (algorithms for solving large, complex optimization problems) to assist project planners in dealing with risk.

### References

Bendell, A., Solomon, D., & Carter, J.M. (1995). Evaluating project completion times when activity times are Erlang distributed. *Journal of the Operational Research Society, 46,* pp. 867–882.

Burt, J. M., & Garman, M.B. (1971a). Monte Carlo techniques for stochastic PERT network analysis. *INFOR, 9,* pp. 248–262.

Burt, J.M., & Garman, M.B. (1971b). Conditional Monte Carlo: A simulation technique for stochastic network analysis. *Management Science, 19,* pp. 207–217.

Chapman, C., & Ward, S. (1997). *Project risk management: Processes, techniques, and insights.* New York: John Wiley & Sons.

Davis, C.S., & Stephens, M. (1983). Approximate percentage points using Pearson curves. Algorithm AS1. *Applied Statistics, 32,* pp. 322–327.

Eichhorn, B. (1997, October). Manage contingencies, reduce risk: The PCA technique. *PM Network,* pp. 47–49.

Goldratt, E. (1997). *Critical chain.* Great Barrington, MA: North River Press.

Gong, D. (1997). Optimization of float use in risk analysis-based network scheduling. *International Journal of Project Management, 15,* pp. 187–192.

Gong, D., & Hugsted, R. (1993, August). Time-uncertainty analysis in project networks with a new merge-event time-estimation technique. *International Journal of Project Management, 11,* pp. 165–173.

Gong, D., & Rowlings, J.E. (1997). Calculation of safe float use in risk-analysis-oriented network scheduling. *International Journal of Project Management, 13,* pp. 187–194.

Gray, C.F., & Reiman, R.E. (1969, March). PERT simulation: A dynamic approach to the PERT technique. *Journal of Systems Management,* pp. 18–23.

Grey, S. (1995). *Practical risk assessment for project management.* West Sussex, England: John Wiley & Sons.

Gump, A. (1997, July). Scheduling high-tech projects. *PM Network,* pp. 15–17.

Hulett, D.T. (1995, March). Project schedule risk assessment. *Project Management Journal,* pp. 21–31.

Hulett, D.T. (1996, July). Schedule risk analysis simplified. *PM Network,* pp. 23–30.

Hulett, D.T. (2000, February). Project schedule risk analysis: Monte Carlo simulation or PERT? *PM Network,* pp. 43–47.

Johnson, G.A., & Schou, C.D. (1990, June). Expediting projects in PERT with stochastic time estimates. *Project Management Journal, 2,* pp. 29–33.

Jones, E.F. (2000, February). Risk management—Why? *PM Network, 14,* pp. 39–42.

Kangari, R., & Boyer, L.T. (1989, March). Risk management by expert systems. *Project Management Journal,* pp. 40–48.

Levine, H.A. (1996, April). Risk management for dummies, Part 2. *PM Network,* pp. 11–14.

Ringer, L.J. (1969). Numerical operators for statistical PERT path analysis. *Management Science, 16,* pp. B136–B143.

Ruskin, A.M. (2000, February). Using unders to offset overs. *PM Network, 14,* pp. 31–37.

Sculli, D. (1983). The completion time of PERT networks. *Journal of the Operational Research Society, 34,* pp. 155–158.

Simister, S.J. (1994). Usage and benefits of project risk analysis and management. *International Journal of Project Management, 12,* pp. 5–8.

Van Slyke, R.M. (1963). Monte Carlo methods: The PERT problem. *Operations Research, 11,* pp. 839–860.

Proceedings of PMI Research Conference 2000
