Design of experiments for project management (part II)
Editor's Note: This is the second part of an article in a series which introduces a relatively new concept in project management, design of experiments. (Illustrations are numbered sequentially throughout all parts.) It is a direct result of the submission of a paper, “Optimization in Project Coordination Scheduling Through Application of Taguchi Methods,” by Mike Santell, John Jung, and Jay Warner for consideration for publication in the Project Management Journal. The reviewers concluded that it was appropriate for publication, except that PMI readers lacked the background in statistical design of experiments needed to adequately understand it. Dr. Jay Warner graciously agreed to write a series of “tutorials” for publication in PM NETwork to prepare readers for the more technically oriented paper, which will appear in the September issue of the PMJ. In addition, Dr. Warner will present a workshop at the PMI ’92 S/S for those who wish to learn more about design of experiments in general and Taguchi methods and concepts in particular. He is also writing a book, A2Q—Approach to Quality, which covers this and other applications of statistical analysis for production of goods and services.
J.C. Warner, Milwaukee School of Engineering, Warner Consulting, Inc., Racine, Wisconsin
In the first of this tutorial series we explained the need for and developed the concept of design of experiments (DoE) as applied in project management. An example will illustrate how DoE can be applied to obtain insights into the impacts of alternative resource application rates and plans to identify the most effective solutions to a scheduling problem.
PROJECT COORDINATION EXAMPLE
The concepts discussed previously can be understood best through a complete computational analysis of a problem. Generally today these computations are performed by computers, but still the best way to understand the concepts and calculations is through an example. For simplicity, consider a case involving only three factors.
The problem is taken from the opening case in the article published in the May PM NETwork. The computer analysis of a project indicates that the project will not be completed by the required date: with the given resources it will simply take too long. We wish to reduce that time by either adding specific resources or changing the plan, both of which will add costs to the project. To reduce the completion time we could add a product engineer, add a technician, or purchase rights to a patented device instead of developing a functionally similar component independently. Each of these changes will add direct costs to the project. Neither the amounts of reduced project time nor the direct costs are clear. Which to recommend?
Table 2. Controlled Factor Summary
|Factor|Low (−)|High (+)|
|---|---|---|
|Engineer Staff Level|2|3|
|Technician Staff Level|6|7|
|Obtain Latching Device|develop|purchase|
In every project, a list of possible changes should be developed, with as many changes as possible written out. The record in my experience is 108; most analysts stop around 20. From this list a reasonable number for testing is selected, usually between 3 and 15.
The measures of results for this example will be the computed completion time and costs for each alternative. If management places a specific value on improved completion time, then the comparison of costs and benefits can be made directly.
The possible conditions selected are summarized in Table 2, listing the individual factors and two possible conditions for each.
The measured responses are project completion time, in days, and project costs, which will be discussed later. I now set up eight different scenarios for the project, covering all eight possible combinations of the three factors at the two levels shown in Table 2. All else in the PERT/CPM calculation remains constant. The computer determines the completion time for each combination. These eight combinations, with results, are listed in Table 3.
These eight responses can be represented as the eight corners of a cube, Figure 8.
The two levels of engineering staffing are represented by conditions, or points, on the left and right faces of this cube. By comparing the completion times across the left and right faces we can determine the average time for completion when two engineers or three engineers are available. In this case, the average with two engineers is (142 + 108 + 98 + 128)/4 = 119 days, and with three engineers it is (129 + 75 + 74 + 124)/4 = 100.5 days, so adding another engineer will, on average, reduce the project time by 18.5 days.
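The face-average arithmetic above is easy to verify in a few lines of code. In this sketch, the assignment of the eight times to individual cube corners (in particular, to the develop and purchase faces) is deduced from the face averages quoted in the text; the article itself reports only the averages.

```python
# Completion times (days) at the eight cube corners, keyed by
# (engineers, technicians, source). Corner values are deduced
# from the face averages given in the text.
times = {
    (2, 6, "develop"): 128, (2, 6, "purchase"): 142,
    (2, 7, "develop"): 98,  (2, 7, "purchase"): 108,
    (3, 6, "develop"): 124, (3, 6, "purchase"): 129,
    (3, 7, "develop"): 74,  (3, 7, "purchase"): 75,
}

def face_average(position, level):
    """Average completion time over the four corners where one
    factor (engineers=0, technicians=1, source=2) is held fixed."""
    face = [t for key, t in times.items() if key[position] == level]
    return sum(face) / len(face)

two_eng = face_average(0, 2)     # 119.0 days
three_eng = face_average(0, 3)   # 100.5 days
print(two_eng - three_eng)       # 18.5 days saved by the third engineer
```

The same function gives the technician faces (130.75 and 88.75 days) and the develop/purchase faces (106 and 113.5 days) discussed below.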
Table 3. Test Conditions and Results
|Condition|Eng. Staff|Tech. Staff|Source|Time, days|Costs, $K|
|---|---|---|---|---|---|
|1|2|6|develop|128|158|
|2|3|6|develop|124| |
|3|2|7|develop|98| |
|4|3|7|develop|74|102|
|5|2|6|purchase|142| |
|6|3|6|purchase|129| |
|7|2|7|purchase|108| |
|8|3|7|purchase|75| |
Figure 8. Completion times (boldface) at the eight corners of the cube.
For the technician staffing, with six technicians the project will be completed in (142 + 129 + 124 + 128)/4 = 130.8 days, on average, while seven technicians will bring the project in at (108 + 75 + 74 + 98)/4 = 88.8 days. Adding a technician will substantially cut project time.
Comparison of the bottom face average (106 days) with the top face average (113.5 days) shows that purchasing the patented device instead of developing a new one will increase the completion time. This is true without regard to the engineering and technician resources available. The exact reasons are buried in the technical details of the project; if we wish, we now have a tight focus with which to examine them.
If we look only at the two staffing factors, we can see that when both an engineer and technician are added, the completion time drops further. This can be seen best in a response surface plot, shown in Figure 9.
With two engineers, increased technician activity reduces completion time. With three engineers, however, the addition of technicians has an even greater impact. Adding both together is more beneficial than each addition alone. This is the result of an interaction between these two factors.
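This interaction can be made concrete with a short calculation. As above, the corner values are deduced from the face averages quoted in the text; averaging over the develop/purchase axis leaves a 2×2 square in the two staffing factors.

```python
# Average completion time (days) over develop/purchase for each
# (engineers, technicians) combination; corner values deduced from
# the face averages quoted in the text.
times = {
    (2, 6): (128 + 142) / 2,  # 135.0
    (2, 7): (98 + 108) / 2,   # 103.0
    (3, 6): (124 + 129) / 2,  # 126.5
    (3, 7): (74 + 75) / 2,    #  74.5
}

# Days saved by the seventh technician at each engineering level:
saving_2eng = times[(2, 6)] - times[(2, 7)]  # 32.0 days
saving_3eng = times[(3, 6)] - times[(3, 7)]  # 52.0 days

# Half the difference is the conventional interaction effect: the
# added technician is worth far more when the third engineer is
# also present.
interaction = (saving_3eng - saving_2eng) / 2  # 10.0 days
print(saving_2eng, saving_3eng, interaction)
```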
For any given set of factor conditions, the effect of a small change in each factor can be seen in a “linear graph,” Figure 10. These figures, computed by Catalyst [2], include the effect of interactions.
When the analysis is performed dynamically on the computer, the settings, indicated by the thin vertical lines in each graph, can be moved about. As this is done, the slopes of the effects change, reflecting the impact of interactions.
Graphical analysis, such as performed in Figure 10, works well with two or even three factors, but with a more typical situation, and when precision estimates are to be included, computer software is highly desirable. Many software packages are available for IBM mainframe, VAX, PC-DOS and Macintosh computers. In my opinion, by far the most effective and easy to use software for this type of analysis is Catalyst. It treats the calculations as the tools they are, staying quietly in the background while providing easily interpreted results.
The added resources to reduce the completion times add costs. If the additional cost is too great, then the benefits could be outweighed, and management would rightfully decide that the shorter times are not worth the expense.
For this example simple conditions were used to illustrate one way to account for costs. Engineers were expensed at $60K per year, and technicians at half that. A fixed cost of $150K per year (50 percent of the labor cost for two engineers and six technicians) was added. The cost of purchasing the device outside was placed at $25K. Costs were incurred only during the project; as soon as the project was complete, the personnel would be reassigned to another project. Each cost was then converted to dollars per day, and the total cost for each of the eight conditions determined. These total costs are given in Table 3.
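The cost bookkeeping can be sketched as follows. The fixed charge is computed from the 50-percent rule given above; the 365-day-year conversion is an assumption made here, which reproduces the $158K base-case figure quoted later to within rounding, so treat the helper as illustrative rather than as the article's exact model.

```python
# Cost model from the text, in $K. The 365-day year is an assumption
# made for this sketch; it reproduces the quoted $158K base case to
# within rounding, but other conditions may differ slightly.
ENGINEER = 60.0                                 # $K/year per engineer
TECHNICIAN = ENGINEER / 2                       # $K/year per technician
FIXED = 0.5 * (2 * ENGINEER + 6 * TECHNICIAN)   # 150.0 $K/year overhead
PURCHASE = 25.0                                 # one-time device purchase

def project_cost(engineers, technicians, source, days):
    """Total project cost in $K for a given staffing and duration."""
    daily = (engineers * ENGINEER + technicians * TECHNICIAN + FIXED) / 365
    return daily * days + (PURCHASE if source == "purchase" else 0.0)

print(round(project_cost(2, 6, "develop", 128)))  # 158, the original plan
```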
The cost data can be analyzed in exactly the same way as for the completion times. Placing total project costs on the respective corners of the cube, averaging to find the values for each face, and making comparisons, all proceed as before. The analytical results by the computer are shown in Figure 11.
Table 4. Information Obtained from the Designed Experiment

|Type of Information|Example from This Study|
|---|---|
|Determination of significant effects|Reduced project time from additional staff; cost increase due to additional engineer|
|Magnitude of the main effects|Amount of time increase from purchased device; completion time saving from added technician|
|Magnitude of the interaction effects|Time and cost value of added engineer and technician together|
|Degree of project improvement|Projected best completion time of 71 days|
The total cost of the project was $158K with two engineers and six technicians as originally planned. This cost can be reduced to $102K by the addition of one technician and one engineer. The cost of additional staff will be offset by the shorter project time for everyone. Figure 11 shows clearly that the additional technician has the greatest impact, while the additional engineer also cuts costs. If only an engineer is added, the net cost of the project goes up, as shown in Figure 12. The cost benefits from the third engineer stem largely from the synergistic effect of adding technicians and engineers together.
The cost reductions enjoyed in this example stem from the fact that the project time, and hence total cost, is reduced more than the additional staff cost per day. In some cases we cannot enjoy such benefits, and the value of the project completed at different times must be developed. That is to say, we need a full cost/benefit analysis, with both costs and benefits denominated in the same units, dollars. Such a sophisticated analysis would include the benefits of reduced completion time on production from the investment, market share and profits. The earlier the project is completed the sooner the investment can begin producing. Early market entry is generally considered the more profitable period.
Suppose that each day reduced from the project time is worth $1000 to the company. Then we can translate the reduced time into increased revenue, and complete the benefit-cost comparison. In such a case the units should be either income or some type of profit, depending on the sophistication of the data.
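Using the figures quoted in the text (and assuming the original plan corresponds to the 128-day develop condition), the benefit-cost comparison is a short calculation:

```python
# All values in $K; $1000 per day saved = 1.0 $K/day.
DAY_VALUE = 1.0
base_days, base_cost = 128, 158.0  # two engineers, six technicians
best_days, best_cost = 74, 102.0   # one engineer and one technician added
schedule_benefit = (base_days - best_days) * DAY_VALUE  # 54.0 $K
cost_saving = base_cost - best_cost                     # 56.0 $K
print(schedule_benefit + cost_saving)  # 110.0 $K total net gain
```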
BENEFITS OF DESIGN OF EXPERIMENTS
The end result of a complete designed experiment is that the effects of controllable variables become clear. From each experiment we obtain the data as shown in Table 4.
The improvement in resource allocation for this example is clear. The same project is completed in less time with lower net cost. The benefits of the earlier completion add to the improvement.
One major value of a designed experiment is that the data is “used” more than once. The visual analysis makes clear that each of the eight data points is used for determining each effect. There are eight possible effects (overall average, E, T, S, ET, ES, TS, and ETS), so each measurement is used eight times. As a consequence, the results apply generally throughout the “factor space” enclosed by the cube. Thus, all the results and benefits listed above apply with statistically sound confidence across the conditions studied. If we can only find an additional engineer for half time instead of full time, we can estimate quickly the value of that addition, based on the data already available.
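The half-time-engineer estimate mentioned above amounts to linear interpolation within the factor space. A minimal sketch, which assumes a linear engineer effect and ignores the interaction terms:

```python
# Face averages from Figure 8 (days); interpolation assumes a linear
# engineer effect and ignores the engineer-technician interaction.
avg_2eng, avg_3eng = 119.0, 100.5
effect_per_engineer = avg_3eng - avg_2eng       # -18.5 days per engineer
half_time = avg_2eng + 0.5 * effect_per_engineer
print(half_time)  # 109.75 days expected with 2.5 engineers, on average
```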
Another class of factors, mentioned earlier, consists of those which are outside anyone's control but which may still affect the project. Since they are not controlled, they are usually left to cause managerial ulcers, or ignored. For example, the original concept may turn out to be unworkable, or a key person may be sick for a month.
Factors out of our control are called “noise” factors. If we knew which ones were most likely to cause grief, we could at least keep an eye toward that area. If we could allocate managerial focus to more significant “noise” variables, we could perhaps take steps to minimize their impact. This too, can be done with a suitable experimental design.
More Complex Designs and Objectives
Typical projects entail more complex experimental designs. The basic principle of selecting conditions for study based upon the analysis to be performed remains the same, with equally powerful results. Furthermore, it is not necessary to make a measurement at every possible condition. In general, complex designs test only a carefully selected fraction of the possible conditions, with little loss in accuracy. This economy of data collection does not sacrifice the global nature of the conclusions: the results are still valid across the entire range of conditions studied.
Problems and Objectives
The project coordinator has specific objectives which the analysis is to answer, and the choice of experimental design controls the mathematical potential and limitations of the results. These objectives include locating the major stumbling blocks to the project and minimizing the time to completion and/or project costs. As discussed later, one can also seek to minimize the risk, or unforeseen variability, of a project. The major stumbling blocks are those factors which have the greatest impact on the project's objectives. The minimum time or cost stems from the specific condition that yields the most favorable result. Unwanted variability results from events which were not, or could not be, anticipated, but which have significant negative impact on the project results.
The choice of experimental design controls not only the number of tests which must be made and the amount of study required, but also how well the desired information is developed. There is a clear trade-off between information and analysis effort.
For example, if the objective is to determine which factors are most likely to improve results, then we might sacrifice information about interactions and some precision generally, and choose a design with seven factors and only eight trials. If we were concerned about possible interactions and accurate magnitudes of specific improvements, we might choose a design with five factors in sixteen trials.
The individual responses shown above are linear. If we believed that increasing from six to seven to eight technicians produced diminishing returns, then we could select a design that utilized three or more levels, and agree to spend the extra time to perform roughly twice as many trials as necessary for two levels of each factor. The choice depends on the analytical objective, initial assumptions and available resources to perform the study.
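The screening design mentioned above (seven factors in eight trials) can be generated from a 2³ full factorial. The generators used here (D = AB, E = AC, F = BC, G = ABC) are the standard textbook choice for a 2^(7−4) design, not one specified in the article.

```python
from itertools import product

# Build the eight runs: columns A, B, C come from the full factorial,
# and D..G are aliased with their interactions via the generators
# D=AB, E=AC, F=BC, G=ABC.
runs = [(a, b, c, a * b, a * c, b * c, a * b * c)
        for a, b, c in product((-1, 1), repeat=3)]

assert len(runs) == 8
# Every one of the seven factor columns is balanced: four runs at
# the low (-1) level and four at the high (+1) level.
for col in range(7):
    assert sum(run[col] for run in runs) == 0
```

With only eight runs the seven main effects are estimable, but each is confounded with several two-factor interactions; that confounding is the information sacrificed for economy.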
Since the experimental design approach provides much more information about the project than ever before, we can now expect answers to questions that previously were restricted to idle debate. DoE will easily find out whether an unavoidable delay in a task will seriously upset the plan. For example, if development of a component hinges on a new concept which may not work out, is there an alternative, and will this impact the total project? The answer is not intuitively obvious. Perhaps an outside purchase will free up engineering time for another task. This becomes particularly relevant when the level of analysis is at the organization, i.e., multi-project, level as opposed to the single project level.
To do this, we set up a design that takes into account both the controllable factors, such as discussed above, and the noise factors. A set of “crossed arrays” will show not only which noise factors can cause serious project delays, but also whether any interactions between the noise and control factors are significant. Armed with this information, a manager can keep a weather eye on the most critical uncontrolled items, and take action when they first arise.
In this and the previous article published in the May issue, we discussed the basic concepts and application of DoE. In the next article we will explore the concepts and methodologies of Taguchi, a Japanese engineer and statistical analyst who has, through his philosophy on uses of DoE, contributed significant insights into quality improvement in volume manufacturing. As project managers become more familiar with DoE and Taguchi, similar contributions for improved project management are expected.
2. Available from BBN Software, Cambridge, MA.
Jay Warner is assistant professor of mechanical engineering at the Milwaukee School of Engineering and principal scientist with Warner Consulting, Inc. He received his B.S. in physics from the University of Massachusetts-Amherst, and his Ph.D. in metallurgy from Iowa State University. He implemented use of Taguchi's methods for experimental design throughout a large multi-plant company, writing analytical software and holding in-house symposia, and promoted the “Quality Revolution” in a $100 million firm through companywide training and individual project support. He has authored the A2Q® Method, which focuses on sound statistical methods to realize available opportunities for improvement.
JULY 1992 pm network