Design of Experiments for Project Managers
Editor's Note: This is the third article in a series that introduces a relatively new concept in project management, design of experiments. It is a direct result of the submission of a paper, “Optimization In Project Coordination Scheduling Through Application of Taguchi Methods,” by Mike Santell, John Jung, and Jay Warner, which will appear in the September issue of the Project Management Journal. In addition, Dr. Warner will present a workshop at the PMI ‘92 S/S for those who wish to learn more about design of experiments in general and Taguchi methods and concepts in particular. He is also writing a book, A2Q—Approach to Quality, which covers this and other applications of statistical analysis for production of goods and services. We thank Dr. Warner for this special effort for our benefit.
In the previous two articles we discussed the basic ideas and applications of DoE. In this article we will explore the ideas and methodologies of Taguchi, a Japanese engineer and statistical analyst who has, through his philosophy on uses of DoE, contributed significant insights into quality improvement in volume manufacturing. As project managers become more familiar with DoE and Taguchi, similar contributions for improved project management are expected.
Suppose a project has been assigned and developed. A PERT/CPM network is in place and the milestone dates computed. At this point managers usually want to know two things: how the completion time can be reduced, and the chances that the project will run seriously overtime. In cases of a new product introduction, we know that the first product on the market enjoys a distinct advantage over the competition. Thus, reduced completion time may have value far in excess of the additional direct costs of the project. In such cases the analytical objective is to minimize the project completion time. The DoE procedures described in the previous tutorial can apply to this problem.
In addition, management needs to know how much confidence to place on the completion date—is it likely to come in much later? Uncontrolled changes, which by definition are beyond control of the project coordinator, can wreak havoc on the best laid plans of mice and managers, and due to penalty clauses, can be very expensive.
In the case of one aerospace program, the project coordinator explained that they had the project cost under control, but they wanted to be very confident that it would come in by the promised date. If an analyst could predict which activities were most likely to delay the program significantly, then they could prepare appropriately. There is a great deal of difference between a project that we expect to complete in 200 days, with a 10 percent probability that it will take over 222 days, and another that we expect to complete in 200 days, with a 30 percent probability that it will take over 222 days.
VARIATION IN COMPUTER SIMULATION TRIALS
The discussion up to now has assumed that measurements which come from a computer simulation are absolutely accurate. To get at uncontrolled variation we need to introduce some kind of “random” variation. Let me review briefly the problem of variation, or error, in physical measurements.
In the physical world nominally identical objects do not yield identical performance. A product designer must consider two possible sources of variation. First, devices may produce different results due to changes in “controllable” factors. For example, increasing the thickness of a steel section will increase the stiffness of a beam. The designer knows this, and can control the stiffness of the beam through the design specifications. Control factors and their effects were discussed in the previous tutorial session. Second, different results also stem from changes not under the control of the designer. For example, variations in a weld may influence the stiffness of the structure. In the least controlled case, the product performance may depend on the conditions under which the customer uses the device. These uncontrolled factors are called noise factors.
In one specific case, standard components in an electronic furnace controller functioned down to 40°F. A foundry using these controllers shut down for an extended Christmas break during a particularly cold winter. Upon their return, the operators could not restart the furnaces because the shop temperature had fallen to 32°F. Operating conditions outside the designer's control caused the loss of product performance.
NOISE FACTOR EFFECTS
Individual activities do not always occur at the pace and time specified in the schedule. In other words, they are subject to noise effects. Depending on the location of a given task in the network, an unanticipated delay may have no effect, or it may delay the whole program. A fortuitous foreshortening may help, or it may have no impact. Changes in each activity will have similar, unknown effects. (A corollary of Murphy's Law, which says that all unexpected project changes cause delays, turns out not to be universally true.)
Noise factors can lead to a concept similar to the electronic “signal to noise” ratio, which compares the controlled change, the signal, to the uncontrolled change, the noise. The statistical signal to noise ratio, promulgated by Taguchi, relates closely to the average (response due to controlled factors) divided by the standard deviation.
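As a rough illustration, this statistic can be computed directly from a set of simulated completion times. The numbers below are invented for illustration; only the idea of the ratio of average to standard deviation comes from the discussion above.

```python
import math

# Hypothetical completion times (days) for one set of control factor
# settings, observed under varying noise conditions (invented values).
times = [198.0, 204.0, 201.0, 199.0, 206.0, 200.0, 203.0, 197.0]

n = len(times)
mean = sum(times) / n
# Sample standard deviation: the spread caused by uncontrolled (noise) factors.
std = math.sqrt(sum((t - mean) ** 2 for t in times) / (n - 1))

# Signal to noise in the sense described above: the controlled response
# (average) relative to the uncontrolled variation (standard deviation).
sn_ratio = mean / std
print(f"mean = {mean:.1f} days, std dev = {std:.2f} days, S/N = {sn_ratio:.1f}")
```

A larger ratio means the planned schedule dominates the unplanned variation.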
The designer who would reduce noise effects may tighten product tolerances, and issue detailed instructions and disclaimers with the product. But the variation remains, no matter how we try to avoid deleterious operating conditions. A better procedure is to select control conditions which are less sensitive to noise effects.
Noise Effects as Distributions
It was to determine the overall impact of such uncontrolled changes that Monte Carlo techniques were developed. By considering a large number of activity-level changes, an overall estimate of completion time and the likely range of possible completion times can be developed. A histogram of the calculated completion times produces a distribution of possible times. Figure 13 shows a hypothetical example.
If we could run this project three thousand times, each time with the same people, plans, and activities, then most of the time the project would be completed in about 200 days. In 291 computer simulations of this project, 9.7 percent of the 3000 trials, the completion time was more than 222 days. Since in reality only one team can do the project, and then only once, we say there is a 9.7 percent (round to 10 percent) probability that the project will take over 222 days. Likewise, by counting the simulation trials we find that 2094 out of 3000, or 69.8 percent, are completed before 208 days. Thus we say there is a 70 percent chance that the project will be completed by 208 days. The median, 199.5 days, indicates that half the time the project will take 200 or more days to complete.
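A Monte Carlo run of this kind is easy to sketch. The network below is hypothetical (two parallel paths with triangular duration distributions), not the project of Figure 13; it only shows how the overrun probability and median are counted from the trials.

```python
import random
import statistics

random.seed(42)  # reproducible trials

def simulate_completion():
    """One Monte Carlo trial of a small, invented two-path network.
    Durations (days) are drawn from triangular distributions."""
    design = random.triangular(80, 130, 95)    # activity on path A
    build = random.triangular(70, 120, 90)     # follows design on path A
    path_b = random.triangular(150, 230, 190)  # independent parallel path
    return max(design + build, path_b)         # done when both paths finish

trials = [simulate_completion() for _ in range(3000)]

# Count the trials that exceed a threshold to estimate the overrun risk.
p_over_222 = sum(t > 222 for t in trials) / len(trials)
print(f"median completion: {statistics.median(trials):.0f} days")
print(f"P(completion > 222 days): {p_over_222:.1%}")
```

The histogram of `trials` corresponds to the distribution sketched in Figure 13.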
As the number of trials reaches infinity, the histogram becomes a distribution, which describes the envelope of the histogram. There are sound statistical principles which explain why the distribution should be near to a normal distribution most of the time. An analysis of the variations due to noise effects can clearly indicate the differences in the 90 percent probable completion date between two projects with the same estimated completion date, as shown in Figure 14.
Generating distributions from many random trials is inherently inefficient, with typically 3000 “runs” per network. The equivalent procedure in manufacturing is to make many measurements of large numbers of parts, trusting that random variation and large sample sizes will cover the full range of possibilities. Specific causes of variation often get overlooked in the mound of data. Likewise, Monte Carlo tests do not reveal the individual effects of specific activities. Nonetheless, if the analyst is aware that uncontrolled events in one activity may have dramatic effects on the project, management can be forewarned and forearmed.
[Table 5. Inner array conditions: columns Condition, Eng. Staff, Tech. Staff, Source]
DoE for Noise Effects
Again, DoE has superior means for evaluation of individual effects. Taguchi calls the method “inner and outer arrays,” while Box and Fung refer to “transmitted variation.” For the analyst, Taguchi's method is somewhat easier to conceptualize and set up, while Box's method is more efficient.
The term inner and outer array stems from the method of display. An array represents the list of conditions under study, reproduced here from Table 3 in the second tutorial session (shown as Table 5).
Table 5 illustrates an array containing only controlled factors. Each row represents one test condition, or one corner of the factor cube, such as Figure 8, shown previously.
We now set up a second array, consisting entirely of noise factors, such as possible completion times for specific activities. The factors for the outer array are selected from all the activities in the project based upon the engineers' assessment of likely difficulty. Table 6 summarizes three such activities.
For simplicity I choose a design with three factors and two levels each. Each activity may be completed ahead of or behind the plan.
The outer, or noise factor, array presents the conditions that combine the three noise factors, Table 7.
Each row of this outer array is now one set of test conditions for the noise factors. At each inner array test condition we now perform all the noise factor tests; in this case there are eight possible noise factor combinations. Any set of conditions can make up the outer array, so long as the set forms an orthogonal array.
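For a two-level, three-factor outer array, the full 2^3 factorial is one convenient orthogonal choice. A minimal sketch using the levels of Table 6:

```python
from itertools import product

# Noise factor levels (completion times in weeks) from Table 6.
noise_levels = {
    "P": [8, 14],    # Special Machine Mfg.
    "Q": [2.5, 5],   # Sample Build (20 Units)
    "R": [3, 7],     # Time, Temp Study
}

# A full 2^3 factorial is orthogonal: every pair of columns contains each
# combination of levels equally often.
outer_array = [dict(zip(noise_levels, combo))
               for combo in product(*noise_levels.values())]

for i, row in enumerate(outer_array, 1):
    print(f"condition {i}: {row}")
```

The eight rows printed here play the role of the eight outer array conditions of Table 7.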
|Activity||Description||Completion Time, Weeks (Level 1)||Completion Time, Weeks (Level 2)|
|P||Special Machine Mfg.||8||14|
|Q||Sample Build (20 Units)||2.5||5|
|R||Time, Temp Study||3||7|
[Table 7. Outer array conditions: each of the eight rows gives one combination of levels for noise factors P (Special Machine Mfg.), Q (Sample Build, 20 Units), and R (Time, Temp Study)]
The total design can be displayed as crossed inner and outer arrays. “Crossing” the inner and outer arrays for this example, Figure 15, produces 8 x 8 = 64 possible conditions, or individual networks, each slightly different from the others.
For example, the condition in the upper left corner of the crossed array, marked 1, has the settings shown in Table 8. The cell marked 22 has the conditions shown in Table 9.
Computation determines the completion date for each network. While 64 network computations may seem large, it is far less than 24,000 (3000 for each corner of the control cube) for an analysis using random selection of noise factor levels. The number of network computations will always remain much smaller than a random selection method, because the latter requires large sample sizes to assure a full distribution of completion times. By contrast, a DoE approach specifically selects a few conditions that provide the same complete range of possibilities.
|Control Factor||Description and Level||Noise Factor||Description and Level|
|A||Eng. Staff Level: 2||P||Build Special Machine: 8 weeks|
|B||Tech. Staff Level: 6||Q||Sample Build: 2.5 weeks|
|C||Component Source: Dev.||R||Time, Temp Study: 3 weeks|
|Control Factor||Description and Level||Noise Factor||Description and Level|
|A||Eng. Staff Level: 2||P||Build Special Machine: 14 weeks|
|B||Tech. Staff Level: 7||Q||Sample Build: 5 weeks|
|C||Component Source: Dev.||R||Time, Temp Study: 3 weeks|
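Crossing the two arrays amounts to pairing every inner-array row with every outer-array row. A sketch with generic 0/1 level codes (the codes stand in for the actual factor settings):

```python
from itertools import product

# Inner (control) and outer (noise) arrays, each a full 2^3 factorial,
# written with generic level codes 0 and 1.
inner = list(product([0, 1], repeat=3))  # control factors A, B, C
outer = list(product([0, 1], repeat=3))  # noise factors P, Q, R

# Each (control, noise) pair defines one complete network to evaluate.
crossed = [(control, noise) for control in inner for noise in outer]
print(f"{len(inner)} x {len(outer)} = {len(crossed)} networks")
```

Cells 1 and 22 of Figure 15, detailed in Tables 8 and 9, are simply two of these 64 pairs.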
Graphically, a series of small cubes representing the noise effects can illustrate these designs. Each small cube is placed at a corner of a large cube, representing control factors, as shown in Figure 16.
The large cube represents the control factors, with each factor extending in the three axis directions, respectively. The small cubes represent the noise factor variations. Strictly speaking, the small cubes should appear in three new dimensions, since there are three new factors, P, Q, and R. However, six dimensional graphics software is not yet available. Figure 17 is a detail view of a small cube.
The results for each row in Figure 15 represent the possible outcomes for a given set of control factor settings and various noise factor levels. Thus each row constitutes measurements near a corner of the large cube, and the differences between these measurements can be treated as statistical variation. Thus, the average and standard deviation of each row indicates the estimated completion and spread in the completion date.
We can thus determine, using standard DoE methods, the conditions that will produce the shortest completion time, as well as those conditions that will yield the most consistent completion time. In terms of the discussion starting this session, we can determine which control factor settings will be least likely to suffer unforeseen delays, i.e., those settings that have the smallest standard deviation. In the manufacture of physical products, Taguchi argues persuasively that reducing the standard deviation is as valuable as reducing the average.
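The row statistics can be sketched directly. The completion times below are invented; the point is that the preferred control condition may be the one with the smallest spread, not necessarily the smallest average.

```python
import statistics

# Hypothetical crossed-array results (days): one row per control condition,
# one column per noise condition. All values invented for illustration.
results = [
    [198, 205, 201, 199, 210, 202, 207, 200],  # control condition 1
    [192, 215, 196, 194, 222, 198, 218, 195],  # control condition 2
    [203, 206, 204, 203, 208, 205, 207, 204],  # control condition 3
]

for i, row in enumerate(results, 1):
    mean = statistics.mean(row)   # estimated completion date
    std = statistics.stdev(row)   # spread due to noise factors
    print(f"condition {i}: mean = {mean:.2f} days, std dev = {std:.2f} days")
```

In this invented data, condition 3 has a slightly later average completion but by far the smallest standard deviation, so by the argument above it may be the most dependable choice.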
Improved stability of the project estimate may be worth a great deal to the corporation for penalty clause avoidance and credibility with clients. As a possible case, some additional staff could be assigned on a case by case basis to those activities which unexpectedly need more resources.
In the simulation the observed completion time variations at each setting of control factors are due to specific changes in the noise factor levels, so analysis can also determine which factors cause larger variations. Which array is inner and which outer is mathematically arbitrary. One can thus analyze the noise factors to determine which has the greatest impact on the project results, independent of the control factor settings.
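A noise factor's main effect can be estimated as the average result at the factor's second level minus the average at its first. The outer array and completion times below are invented to show the mechanics:

```python
from itertools import product

# Level codes (0/1) for noise factors P, Q, R: a full 2^3 outer array.
outer = list(product([0, 1], repeat=3))
# Invented completion times (days), one per outer-array condition.
y = [196, 203, 198, 205, 210, 219, 212, 221]

effects = {}
for j, name in enumerate("PQR"):
    hi = [t for row, t in zip(outer, y) if row[j] == 1]
    lo = [t for row, t in zip(outer, y) if row[j] == 0]
    # Main effect: mean at the high level minus mean at the low level.
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    print(f"noise factor {name}: main effect = {effects[name]:+.1f} days")
```

In this invented data, factor P dominates: its unplanned variation shifts completion by about 15 days on average, so P is the activity to watch most closely.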
There are also interaction effects between control factors and noise factors. In complex multi-program situations it is entirely possible that a serious technical delay in one activity can have an impact on other projects. For example, a technical roadblock in the development of a component may force purchase of outside technology. In turn, the purchase releases engineering resources for other activities, thus reducing completion times in other areas. One of the references documents such a case. Occasionally, if a specific activity gets delayed, the efficient route to project completion may be to reduce resources! Interaction terms are of great interest when they are sizable, since there is no reasonable way to predict their presence a priori.
Number of Trials Required
The number of trials in the inner-outer array approach can get large rapidly, since the total is the product of the inner array rows and outer array rows. The alternative procedure by Box and Fung is more mathematically rigorous, and offers significant savings in cases of very large arrays. They obtained equivalent results with 288 trials, compared to 1296 using crossed arrays. The differences are subtle. The Box approach uses experimental design methods to make only those measurements necessary to obtain the specific factor effects and interactions desired. Visually the difference appears only in the shape of the small cubes of Figure 16.
VALUE OF DETAILED ANALYSIS TO MANAGEMENT
Such a full analysis of noise factors pinpoints those specific activities that may have significant impacts on the program, both positive and negative. The project coordinator now knows which activities to watch most closely during program reviews. Management can change resource allocations when the uncontrolled events appear, well before their impact causes serious delays in the total program. Key activities are not necessarily only those which lie on the critical path, especially when other activities compete for the same resources.
CONSIDERATIONS OF DOE APPLIED TO PROJECT MANAGEMENT
As we have seen above, the mathematics of DoE is well developed. Successful application to program management, or any specific problem, entails some additional requirements for the technical-statistical interface.
Confidence in PERT/CPM Analysis
First, the analyst and manager must have faith in the PERT/CPM analysis and process. If one chart is not credible enough to make decisions based on its recommendations, then the accumulation of more will not change minds. Experience with the project described in the referenced paper was very positive. The development group whose projects were the subject of the analysis remained on the predicted track through many individual projects.
Clear Objective Statement
Second, a clear objective statement is necessary. Reducing project time or variability is not necessarily the same as minimizing project cost. Where two or more simultaneous goals exist, there are algorithmic means to balance conflicts.
Selection and Specification of Factors
Third, selection and specification of factors must be done carefully. A group review procedure assures that selected factors are central to the total project, not simply one engineer's viewpoint. In particular, one should select noise factors from those offered by the individuals performing the activities. These people best understand the minute workings of the activities necessary to assess which are most likely to go astray.
Cost of DOE Implementation
Every gain has its price, and DoE is no exception. For maximum results, the analyst must think through the project, select potential management changes and determine a significant form of result. Then the network must be exercised, once for each condition selected by the DoE model. All this detailed analysis is a challenge to those who prefer to grasp answers intuitively. From the viewpoint of management, however, analytical thinking time costs far less than no analysis at all. Improved decision-making confidence is worth a great deal, in addition to the value of shorter project completion times. On a more personal level, understanding the project's likely behavior grants to the analyst a “knowledge-power” authority unknown before.
A project, as represented in a PERT/CPM network, is a system that may be examined in the same way as a “thingie” machine. One can make changes in those factors under management control and measure a result. Design of Experiments methodology describes specific changes, so that one may find the effect of individual management changes. DoE can realize the most efficient analytical procedure, one measurement for each factor effect. It is effectively the only way to determine interactions when the result of changing two factors at once does not equal the sum of the results from each change separately. Once the outcomes of potential management changes are available, management has the information to decide which changes will best achieve the objectives.
All projects are subject to unplanned events. DoE can help managers “expect the unexpected.” Not only does the control-noise factor analysis reveal the overall probabilistic spread in the project outcome, but it also pinpoints the specific unexpected noise factors that cause the greatest delays. Thus, management effort can focus much more effectively. The “ulcer level” of the project is reduced sharply. As analysts, our objective remains the same—improved project control and improved project payoff. Design of Experiments can make major improvements in our effectiveness.
1. M.P. Santell, J.R. Jung, and J.C. Warner. 1987. Optimization in Project Coordination Scheduling Through Application of Taguchi Methods. PMI 19th Annual Symposium (October 2), Milwaukee, WI. Drexel Hill, PA: The Project Management Institute.
5. Taguchi, Genichi; Clausing, Don, technical editor; Tung, Louise Watanabe, English translator. 1987. System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs. White Plains, NY: UNIPUB/Kraus International Publications, p. 625 ff.
6. Taguchi, Genichi. 1986. Introduction to Quality Engineering. Asian Productivity Organization. White Plains, NY: UNIPUB/Kraus International Publications, p. 101.
7. Box, G., and Fung, C. February 1986. Center for Quality and Productivity Improvement, Report #8, Madison, WI.
8. Taguchi, Genichi. 1986. Op. cit., p. 13 ff.
Jay Warner is assistant professor of mechanical engineering at the Milwaukee School of Engineering and principal scientist with Warner Consulting, Inc. He received his B.S. in physics from the University of Massachusetts-Amherst, and his Ph.D. in metallurgy from Iowa State University. He implemented use of Taguchi's methods for experimental design throughout a large multi-plant company, writing analytical software and holding in-house symposia, and promoted the “Quality Revolution” in a $100 million firm through companywide training and individual project support. He has authored the A2Q® Method, which focuses on sound statistical methods to realize available opportunities for improvement.
AUGUST 1992 pm network