Design of Experiments for Project Managers
Editor's Note: This is the first article in a series that introduces a relatively new concept in project management, design of experiments. It is a direct result of the submission of a paper, “Optimization in Project Coordination Scheduling Through Application of Taguchi Methods,” by Mike Santell, John Jung, and Jay Warner for consideration for publication in the Project Management Journal. The reviewers concluded that the paper was appropriate for publication, except that PMI readers lacked the background in statistical design of experiments to adequately understand it. Dr. Jay Warner graciously agreed to write a series of “tutorials” for publication in the PM NETwork to prepare readers for the more technically oriented paper in the PMJ. That paper will appear in the September issue of the PMJ. In addition, Dr. Warner will present a workshop at the PMI ‘92 S/S for those who wish to learn more about design of experiments in general and Taguchi methods and concepts in particular. He is also writing a book, A2Q—Approach to Quality, which covers this and other applications of statistical analysis for the production of goods and services.
J.C. Warner, Milwaukee School of Engineering, Warner Consulting, Inc., Racine, Wisconsin
Consider for a moment a (purely hypothetical) situation. Upper management has charged you with planning and coordinating a major development project. People will be hired, and equipment purchased and installed, all with the intent to produce a product or a new product plan. Together with key people, you have developed the core of the project: a network plan, Figure 1.
Each task has been carefully estimated, and linkages with other projects have been delineated. The necessary people skills are known, and the people available with each skill have been identified.
However, when you put all the information into the computer, it reports that the shortest completion date is 20 percent over the due date required by upper management, a month too long. What do you do? It would be unprofessional, not to mention professionally risky, to promise anything and spend extra time planning excuses when the results don't come in. Much better would be to conduct analyses which lead to specific recommendations including:
Optimum materials resource allocation. The problem outlined above is a specific rendition of the larger issue, how to optimize resource allocation to reach an ideal combination of low cost and high benefit.
Optimum staffing level. For example, when we ask what the optimum staffing level is for a product development project, we wish to determine whether adding an engineer's or technician's time will materially reduce completion time. Adding a technician could have little effect if there were not sufficient activity to accomplish. Conversely, reducing staff by one individual might save costs with no loss of result.
Figure 1. New Product Development Program
Effects of “unanticipated” problems. Two kinds of changes can occur in a project plan. “Controlled” changes are planned for, and are made by adjusting resource amounts or distributions. “Uncontrolled” changes are just that—out of the control of the project coordinator.
If a delayed task is on the critical path of a project, then the whole project has been delayed. When a number of simultaneous projects use the same resources, a delay at one point can prevent availability of a resource necessary in another project, even if the delayed task is not on a critical path. The complexities easily surpass human understanding.
What we need is a prediction, ahead of time, pinpointing which specific tasks or sub-projects are most likely to cause problems if the individual task gets into trouble. Then we can focus management attention on the tasks that are most sensitive. If something is unavoidably delayed, we will know well ahead of time what downstream impacts will occur, and we can take action to correct the situation early.
ALTERNATE METHODS OF SOLUTION
Since you, as program coordinator, have plenty of resources at your disposal, we can assume that the entire project outline is available on a computer. Thus, you can make trial adjustments in the plan to see what will reduce the completion time or cost. There are four basic approaches: trial and error, Monte Carlo simulation, sensitivity analysis, and design of experiments.
Trial and Error
When first faced with the situation described, a typical response is to try something. One can often see where major tasks are located, and each item on the critical path can be examined. The choice of what to change, and by how much, is fundamentally a matter of intuition, i.e., experience and judgment. The difficulty is that prior experience usually does not cover this exact situation, and may not even approximate the decisions which must be made now. The number of options, and combinations of options, even if restricted to the critical path, can easily become overwhelming. No wonder some analysts rely upon inspired guesses.
When the individual tasks have been defined by the specific subject experts, you could easily wind up in a shoving match to see who can make, or resist, a change with the most determination. The technical feasibility of these changes gets lost in the dust.
Monte Carlo Simulation
To avoid some pitfalls of such a “cut and try” approach, the Monte Carlo simulation method can be used. The time estimate for each task is adjusted randomly within a reasonable range. When each task has been so adjusted, the completion time is re-computed. This is repeated many times, so that a final average completion time represents an average of many possible scenarios, and we can have considerable confidence in the likelihood of the predicted outcome.
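The procedure just described can be sketched in a few lines of code. Everything in this sketch is invented for illustration: the task names, the precedence links, the nominal durations, and the plus-or-minus 25 percent spread are assumptions, not data from any real project.

```python
import random

# Hypothetical task network: task -> (predecessors, nominal duration in days).
# Names and durations are invented for illustration only.
TASKS = {
    "design":    ([],                   20),
    "prototype": (["design"],           15),
    "test":      (["prototype"],        10),
    "document":  (["design"],            8),
    "release":   (["test", "document"],  5),
}

def completion_time(durations):
    """Earliest finish of the whole network (length of the longest path)."""
    finish = {}
    for task in TASKS:  # insertion order lists predecessors first
        preds, _ = TASKS[task]
        start = max((finish[p] for p in preds), default=0.0)
        finish[task] = start + durations[task]
    return max(finish.values())

def monte_carlo(trials=1000, spread=0.25, seed=42):
    """Average completion time with every task duration varied randomly
    within +/- spread of its nominal estimate, re-computed many times."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        durations = {t: d * rng.uniform(1 - spread, 1 + spread)
                     for t, (_, d) in TASKS.items()}
        total += completion_time(durations)
    return total / trials

print(monte_carlo())
```

The average over many trials gives a trustworthy estimate of the likely completion date for this fixed plan; what it does not give, as discussed next, is guidance on how to change the plan.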
Monte Carlo methods provide virtually no information about how the estimated completion time would change if any controlled factor were adjusted. Therefore, we cannot tell how to reduce the total project time. What happens if the project manager chooses not to develop a component, and buys the engineering from outside? Can we significantly reduce the completion time by adding a few key people? Which people? Monte Carlo analysis cannot tell; the simulations must be run again for each change. The computation time effectively means we cannot consider even fifty changes.
We must change the project model and compute both cases to determine whether a controlled change is worthwhile. One way to structure the repeat trials is called sensitivity analysis. A single factor is adjusted slightly from the initial condition, and the computation (sometimes the full Monte Carlo procedure) is repeated. Repeating this procedure for a number of factors tells how much change in the completion date is likely for a small change in each factor. We can then determine which factors are significant, and focus on them.
The cost in CPU time can be enormous when Monte Carlo runs are used, since three runs are required for each factor. In addition, sensitivity analysis cannot detect synergistic, or interaction, effects between factors. If an engineer is added to the staff, and a component is purchased instead of designed, will the total time saved be equal to, less than, or more than the sum of the two individual savings? We cannot tell by sensitivity analysis.
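A one-factor-at-a-time sensitivity check can be sketched as follows. The completion-time formula, the factor names, and the base point are all hypothetical stand-ins; a real study would rerun the project model rather than evaluate a formula.

```python
# Hypothetical model: completion time (days) as a function of two resource
# levels. The formula, including its interaction term, is invented purely
# for illustration.
def completion(engineers, technicians):
    return 120 - 8 * engineers - 5 * technicians + 1.5 * engineers * technicians

def one_at_a_time(base, step=1.0):
    """Classic sensitivity analysis: perturb each factor in turn from the
    base point and report the resulting change in the response."""
    base_time = completion(**base)
    effects = {}
    for name in base:
        trial = dict(base)       # copy, so other factors stay at the base
        trial[name] += step
        effects[name] = completion(**trial) - base_time
    return effects

print(one_at_a_time({"engineers": 2.0, "technicians": 3.0}))
```

Because of the interaction term, the computed effects are valid only at the chosen base point: move the technician level and the apparent engineering effect changes, which is exactly the blind spot described above.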
We need something that requires less computer time (trials) than Monte Carlo, which will also give sound information about each of the factors, as well as the synergistic effects between them.
Design of Experiments (DoE)
The answer is statistical design of experiments, or DoE. By careful examination of the mathematics used for the analysis of measurements, we can predetermine which measurements are necessary for answering the questions posed above. Only the key measurements and no more will provide the answers. The impact of each tested factor, and the relative influence of that factor, is obtained. The synergistic effect between factors, if any, can be obtained at the same time. In principle, one can obtain one piece of information about a factor for each test performed.
The mathematics for such breakthrough methods has been available since 1935 [2]. Modern explications are readily available [3, 4, 5]. In the last decade it has been “discovered” for American manufacturing and development applications. It is especially suitable in simulations, such as computer-aided design and project management, where an overwhelming number of factors can be adjusted. This report will present an overview of experimental design sufficient to understand an analysis. In the September PMJ, a paper by Mike Santell, John Jung, and this author will demonstrate a complete case history of an application in project management.
Concepts and Principles
The basic approach of design of experiments is that measurements will be compared to draw generalized conclusions about the behavior of the system upon which the measurements are made. For our purposes, a system can be any process in which there are a number of potential controls and inputs with a given output.
Figure 2. Thingie Manufacturing Machine
DoE has been applied to manufacturing (for example, to improve casting finish and dimensions), to product design (to increase torque strength in wrenches and to design overload heating elements), and to service operations (the format of instruction sheets, the productivity of an engineering development group, and a marketing study for an industrial capital good).
For ease of discussion, let's first examine the manufacture of thingies. The machine for producing thingies is shown in Figure 2.
A thingie is a common household device that is manufactured in quantity, similar to a ratchet wrench, an engine block, or a plastic bottle. Since everyone is familiar with thingies it is unnecessary to go into the technical details of their manufacture or use. Suffice it to say that in production there is a starting material, input at the left in Figure 2. There is a lever on the front of the machine, which can be set left or right, called position - or +. There are some handles on top to control different parts of the process, which can be turned to different positions. Each of these controls is a variable, or factor.
The system requirements for application of experimental design are:
- The factors involved must be controllable,
- The factors must be adjustable to at least two separate values,
- Each factor setting must be completely independent of the others,
- The output must be measurable in a quantitative form, and
- It must be possible to perform trials.
Most of these requirements are obvious and easily met. In a computer simulation the variables are specified by the input data, and can be changed. Factor independence means that a control can be set to any desired level without adjusting another. This independence results in orthogonal behavior, Figure 3, and it is virtually an absolute requirement for the success of a designed experiment. Quantitative output means we have to measure the output numerically, an obvious result with computer simulations. It must be possible to perform trials, or experiments, on the system in question. It is not reasonable to have eight design groups develop the same product, but we can model eight different PERT/CPM plans by computer and compare results. Clearly, these restrictions are not severe. The vast majority of manufacturing and service processes fit into the requirements for a factorial design experiment.
Consider a simplified hypothetical project. Each CPM simulation of the project predicts a completion date and cost. Adjustments are made in the resources available and further calculations made. A comparison of the results from the first and second conditions tells us which resources produce faster completion times and at what cost.
Clearly, deductions from a collection of randomly selected conditions will be difficult, and untrustworthy. Adjusting one resource level at a time will provide results, but the procedure is sluggish at best. Conclusions are valid only for the specific settings of the other variables, and synergistic effects will remain hidden. The nature of the conclusions must be hedged severely. The computed improvements are only valid when all else is fixed at one setting.
With two independent factors the simplest approach is to set each factor at two settings, a low and high position, Figure 4.
Two factors (such as engineering and technician labor resources) have been set at each of two settings, a low side and a high side. The trial conditions, and the results, can also be shown in tabular form, Table 1.
Table 1. Two Factor Factorial Design, Completion Time
| Trial | Engineering Resources (persons/day) | Technician Resources (persons/day) | Completion Time (days) |
At each condition listed, i.e., each corner of the square, measurements are made of the output, or response.
The situation is shown as the square on the horizontal plane in Figure 5 with the output shown by the response surface in the third dimension.
The conditions for shortest completion time are easily seen. We can determine whether a factor causes a constant trend in the response. In Figure 5, when technician resources are low (right rear side of the square base), increasing engineering resources raises the completion time! When technician resources are high, increasing engineering time has the opposite effect. We cannot discuss the effect of engineering time unless we also know the amount of technician resources. This is called an interaction, or synergistic effect. Where the response surface is twisted, an interaction exists.
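The arithmetic behind Figure 5 can be illustrated with four corner measurements. The numbers below are hypothetical, chosen only to mimic the twisted response surface just described.

```python
# Completion times (days) at the four corners of the two-factor square,
# keyed by (engineering, technician) settings. Invented for illustration.
corners = {
    ("-", "-"): 100,
    ("+", "-"): 108,   # adding engineers alone makes things worse
    ("-", "+"):  95,
    ("+", "+"):  85,   # adding engineers helps when technicians are high
}

def effect(factor_index):
    """Main effect: average response at '+' minus average response at '-'."""
    plus  = [t for k, t in corners.items() if k[factor_index] == "+"]
    minus = [t for k, t in corners.items() if k[factor_index] == "-"]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

def interaction():
    """E x T interaction: half the difference between the engineering
    effect at high technicians and the engineering effect at low technicians."""
    e_at_t_plus  = corners[("+", "+")] - corners[("-", "+")]
    e_at_t_minus = corners[("+", "-")] - corners[("-", "-")]
    return (e_at_t_plus - e_at_t_minus) / 2

print(effect(0), effect(1), interaction())
```

With these numbers the engineering main effect is small, but the large interaction shows that the engineering effect cannot be discussed without also stating the technician level.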
Analysts who persist in one-factor-at-a-time procedures will never discover interactions. We need to learn whether interactions are significant, and which ones are large, to better control and understand the project, and to help the managers perform their jobs better. Furthermore, we need to get this information in a timely, efficient manner.
For three factors the square in Figure 4 becomes a cube, with the eight possible settings at the corners, Figure 6.
The + and - signs give, in order, the Engineers (E), Technicians (T), and component source (S) settings. The left side of the cube represents four test conditions, ---, --+, -++, and -+-, or E- T- S-, E- T- S+, E- T+ S+, and E- T+ S-. Among these four conditions there are an equal number with T+ and T-, two each, and an equal number with S+ and S-. Only factor E is the same for all four conditions on the left face of the cube, at the E- level. On the right side the four test conditions are +--, +-+, +++, and ++-. Again, there are an equal number of T+ and T- conditions and of S+ and S- conditions, while factor E is constant at the E+ level. Examine the conditions on the front and rear faces, and satisfy yourself that only the factor T setting is constant on each. The bottom and top faces of the cube bear the same relationship to factor S. The respective figures are shown in Figure 7.
By writing the project completion time in the corners of the cube we can make quick assessments for reducing that time. The average time on the left face can be compared with the average on the right face.
By comparing other cube faces one can determine the other single factor effects of technician resources and component sourcing on completion time. One can obtain the magnitudes of all three pair-wise interactions and the single three-factor interaction. Together with the grand mean of all the data, there are eight results from the eight measurements. If PERT type calculations are used, statistical confidence intervals for each effect can also be obtained.
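The face-averaging procedure can be sketched directly. The eight corner times below are hypothetical, invented solely to make the arithmetic concrete.

```python
# Invented completion times (days) at the eight cube corners,
# keyed by (E, T, S) settings. For illustration only.
times = {
    ("-", "-", "-"): 110, ("-", "-", "+"): 104,
    ("-", "+", "-"): 100, ("-", "+", "+"):  96,
    ("+", "-", "-"): 112, ("+", "-", "+"): 102,
    ("+", "+", "-"):  94, ("+", "+", "+"):  82,
}

def face_average(axis, sign):
    """Average completion time over the four corners of one cube face."""
    face = [t for key, t in times.items() if key[axis] == sign]
    return sum(face) / len(face)

# Single-factor effects: average of one face minus average of the
# opposite face, for each of the three factors.
for axis, name in enumerate(["E (engineers)", "T (technicians)", "S (source)"]):
    print(name, face_average(axis, "+") - face_average(axis, "-"))
```

Each printed value is one of the single-factor effects described above; the pair-wise and three-factor interactions come from analogous signed averages over the same eight numbers.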
This segment of this tutorial on design of experiments and Taguchi methodology presents the nature of the problem and the analytical process. In the next article, an example will be analyzed using this methodology. In a subsequent article, we will discuss Taguchi concepts and methodology which open new doors to statistical analysis.
Note: There are designs that do not use orthogonally selected factor settings. However, those settings are selected with equal care for slightly different objectives, so the general admonition, that one should select one's settings to mathematically suit the specific goal in mind, is retained.
1. Santell, M.P., J.R. Jung, and J.C. Warner. 1987. Optimization in Project Coordination Scheduling Through Application of Taguchi Methods. PMI 19th Annual Symposium (October), Milwaukee, WI. Drexel Hill, PA: The Project Management Institute.
2. Fisher, R.A. 1935. The Design of Experiments. Edinburgh, Scotland: Oliver & Boyd.
3. Box, George E.P., William G. Hunter, and J. Stuart Hunter. 1978. Statistics for Experimenters. New York: John Wiley & Sons, Inc.
4. Box, George E. P., and Norman R. Draper. 1987. Empirical Model Building and Response Surfaces. New York: John Wiley & Sons, Inc.
5. Taguchi, Genichi; Don Clausing, Technical Editor; Louise Watanabe Tung, English Translator. 1987. System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs. White Plains, New York: UNIPUB, Kraus International Publications.
Jay C. Warner is assistant professor of mechanical engineering at the Milwaukee School of Engineering and principal scientist with Warner Consulting, Inc. He received his B.S. in physics from the University of Massachusetts-Amherst, and his Ph.D. in metallurgy from Iowa State University. He implemented the use of Taguchi's methods for experimental design throughout a large multi-plant company, writing analytical software and holding in-house symposia, and promoted the “Quality Revolution” in a $100 million firm through companywide training and individual project support. He has authored the A2Q® Method, which focuses on sound statistical methods to realize available opportunities for improvement.
MAY 1992 pm network