Structural factors as predictors of U.S. defense acquisition project outcomes

An exploratory analysis


Naval Postgraduate School Monterey, CA


This paper explores relationships between the structural factors and the outcomes of U.S. defense acquisition projects. Periodic status reporting for these projects requires data on outcome measures, such as cost variance from baseline, along with data on structural factors, such as the organization managing the project, the project’s complexity, the product or commodity under acquisition, and the project phase. Regression analysis of reported data from 1997 through 2010 examines the research hypothesis that these structural factors are correlated with project outcomes. While the results are inconclusive, the analysis serves to promote debate on how these perennially problematical projects may be analyzed and managed.

Keywords: defense acquisition; project outcomes; cost variance; project baseline


Perennial criticisms of large complex defense acquisition projects (e.g., GAO, 2010; 2011a) support the argument that project management theory is impoverished (Koskela & Howell, 2002). The numerous managerial and policy reform initiatives undertaken over the past several decades have not, according to knowledgeable observers, led to better cost, schedule, and performance outcomes (Assessment Panel, 2005; McKinney, Gholz, & Sapolsky, 1994; Munechika, 1997). Acquisition reform has been described as futile (Chin, 2004) and little more useful than “rearranging the deck chairs on the Titanic” (Miller, 2010). Reform’s apparent failure suggests deficiencies in the underlying theories of defense acquisition project management, for example, in the presumed reasons for and causes of success and failure.

One such taken-for-granted belief is that the structural factors of an acquisition project (i.e., those factors that contribute to the project’s form) are correlated with the program’s cost, schedule, and performance outcomes. This presumption is evident in U.S. Department of Defense (DOD) policy requirements that acquisition project managers must provide periodic status reports on key cost, schedule, and performance metrics, along with information on key project structural factors. From this perspective, the reported structural factors are presumed to provide information to help explain the reported project outcomes and, to the extent possible, interventions can be designed to shape those factors and thus influence outcomes.

This presumption has remained largely without validation (see, for example, Berteau et al., 2011; Sadeh, 2000), mainly due to the absence of accessible data on defense projects. In the last few years, however, the Defense Acquisition Management Information Retrieval (DAMIR) initiative has provided access to historical project data that contain potential outcome measures and factors, which make this type of analysis possible (DOD, 2011).

Purpose and Method

This paper seeks to shed light on this implicit theory of defense acquisition project management. It employs data from DAMIR to undertake exploratory analysis of several structural factors to assess the extent to which they are correlated with project outcomes. These structural factors include those that would be likely choices as factors in any acquisition project—the project’s organization; the type of equipment or system under acquisition (e.g., aircraft, ship); the project’s degree of complexity; and the project’s life-cycle phase (i.e., whether in development or production)—and for which multiple years of data are available. The paper’s research hypothesis—that the structural factors are correlated with project outcomes—is examined through regression analysis. Regarding outcomes, two dependent variables, one specific and one general, are employed to make the analysis more robust.


Regarding theory, this paper makes several contributions. First, it investigates the extent to which the claims of Koskela and Howell (2002) are borne out in the context of acquisition project management. Second, it adds to the body of scholarship on the management of large complex projects, an important segment in the field. Third, it is unique in its focus on structural factors as potential predictors of project outcomes; it therefore contributes to theories on both factors and measures of project success. Finally, it introduces readers to DAMIR and its data, and illustrates the sort of analysis that DAMIR enables. Although access to DAMIR is currently limited to those who have an official need for its data, this criterion includes scholars who perform sponsored research for any DOD organization. As DAMIR matures, access may become less restricted.

From the standpoint of practice, investigations like this one have the obvious benefit of making possible the prediction of outcomes of new projects based on particular configurations of structural factors—assuming they are correlated—along with insertion of appropriate managerial interventions to avert unfavorable outcomes. On the other hand, if these factors and outcomes are uncorrelated, managers could focus on other factors as predictors. Regarding wider institutional reform, the conclusions of such analyses could motivate changes such as realignments of organizational responsibilities for certain types of projects, revisions to the extent of oversight and reporting for programs that have certain structural features, and expansion (or contraction) of data collection and reporting requirements on outcome measures and factors of interest.


Following this introduction, the paper gives brief reviews of the history of defense acquisition reform policy and the relevant project management literature on project outcomes and factors for predicting those outcomes. It then develops hypotheses and models suggested by that discussion, describes the data to be employed, presents the statistical analyses, and interprets the results. The paper concludes with a summary of findings, policy recommendations, and recommendations for further study.


Policy Review

Explicit attention to defense acquisition reform was evident as early as the 1950s, with the activities of the second Hoover Commission, which comprehensively reviewed federal bureau functions, organization, and policies, paying special attention to economy and efficiency of operations, including acquisition projects in the DOD (MacNeil & Metz, 1956). This focus on defense acquisition coincided roughly with the emergence of modern project management, because complex weapons projects such as the Manhattan Project and the Atlas project demanded new management knowledge, skills, and tools (Baumgardner, 1963, 1979; Kwak, 2005).

Continuing concern with acquisition outcomes was evident during the 1970s, with President Nixon’s Blue Ribbon Defense Panel and the congressionally chartered Commission on Government Procurement, both of which addressed similar concerns as those of the Hoover Commission (McKinney, Gholz, & Sapolsky, 1994).

The 1980s saw a series of similar reform-oriented studies, including President Reagan’s Private Sector Survey on Cost Control (Grace Commission, 1984); his Blue Ribbon Commission on Defense Management, also known as the Packard Commission (Blue Ribbon Commission, 1986); and Secretary of Defense Cheney’s 1989 Defense Management Review (Mavroules, 1991, p. 18). Efforts to implement reforms identified by these and other studies included Deputy Secretary of Defense Carlucci’s Defense Acquisition Improvement Program (Munechika, 1997, pp 6–9), as well as legislation intended to reduce layers of bureaucracy (Horgan, 1995) and to make the acquisition workforce more professional (Mavroules, 1991).

Despite this history of reform efforts, acquisition outcomes continue to be judged, on the whole, as less than favorable (Assessment Panel, 2005). Since the early 1990s, the Government Accountability Office (GAO) has produced reports on the individual outcomes of major defense acquisition programs (MDAPs1), and it continually assesses defense acquisition as a “high risk” area of government operations (GAO, 2008).

Literature Review

Reasons for the problematical nature of defense acquisition reform have been well-documented in Fox’s two books (1974, 1988), which remain the most influential and comprehensive studies of defense acquisition management in the United States. Fox’s analyses, however, preceded the availability of data on individual program outcomes and factors; thus, he was unable to explore systematically how those might be related.

Project management literature has much to say regarding project outcomes and factors for success. First, the question of defining and measuring project success, especially for complex projects with diverse stakeholders, has received much attention (Pinto & Slevin, 1988; Gansler, 2011, pp. 129–234). Agreement on criteria for success is elusive, extending only to general standards such as overall mission accomplishment and stakeholder satisfaction (Murphy, Baker, & Fisher, 1974; Baker, Murphy, & Fisher, 1988). This problem is exacerbated by the lack of data on achievement of cost, schedule, and quality targets, whether because of proprietary concerns (mainly in the private sector) or simply because data have not been systematically collected. As a result, most studies rely on stakeholders’ subjective assessments to determine program outcomes (Crawford, 2002).

Second, factors for project success have for the most part been proposed either as general factors or as specific factors for particular projects (Baker, Murphy, & Fisher, 1983; Cleland & King, 1983; Shenhar, Levy, & Dvir, 1997); according to Cooke-Davies (2002), they have resisted identification. Examples of factors that scholars have attempted to link with project success include: use of project management techniques (Morris, 2002; Ibbs & Reginato, 2002), project planning (Dvir, Raz, & Shenhar, 2003), the project manager’s personality (Dvir, Sadeh, & Malach-Pines, 2006), behavioral and organizational factors (Hyväri, 2006a; 2006b; Zimmerer & Yasin, 1998), and contract type (Sadeh, 2000).

Pinto and Slevin (1987) have provided the most well-known framework of critical factors for project success, the applications of which appear frequently in the literature (Pinto & Slevin, 1989; 1990; Finch, 2003). These factors, however, deal mainly with project processes and capacity—for example, mission, management support, communications, personnel, which can vary over the project life cycle (Pinto & Prescott, 1988), rather than structural factors that are the subject of this paper and which remain relatively stable throughout a project’s life.

A project’s performance is typically judged against its initial baseline estimates, which are critical in large and complex defense acquisition programs characterized by high levels of strategic and technological uncertainty (Fox & Miller, 2006). Uncertainty creates incentives for “strategic optimism” (Flyvbjerg, Holm, & Buhl, 2002), for “gaming” performance objectives (that is, changing behaviors to focus on hitting targets at the expense of achieving desired outcomes; Bevan & Hood, 2006), or for strategically buffering the program from scrutiny (Oliver, 1991). Poor initial estimates contribute significantly to program variances (Bertison & Davis, 2008; Feuring, 2007; Harrington, Morgenstern, & Nelson, 2000; Quirk & Terasawa, 1986). In essence, the initial program baseline may constitute a figurative “rigged deck” when it comes to evaluating program success.

Regarding MDAPs in particular, Berteau et al. (2010, 2011) have addressed some structural factors (e.g., organization, contract type) in examining cost overruns, which is obviously one possible measure of outcomes. Their examination did not, however, consider other significant structural factors, nor did it test the significance of correlations through tools such as regression. Brown’s recent research (2011) examines outcomes in relation to MDAP interdependencies; although extremely promising, this work relies on data that are not yet easily accessible.

To summarize, the outcomes of defense acquisition programs continue to be perceived as unfavorable, and little is understood about possible correlations between a program’s outcomes and its structural features.

Research Hypothesis and Variables

The discussion up to this point has provided motivation for the following research hypothesis: Program structural factors are correlated with program outcomes. This section presents the development of variables for regression models, along with hypotheses for analysis.

Source of Data

DAMIR contains required periodic reports, dating from 1997 to the present, on the performance of defense acquisition programs. One such periodic report for MDAPs is the Selected Acquisition Report2 (SAR), which includes data for several potential structural factors and outcomes. Variables of interest were selected based on the author’s judgment of their potential for correlation, as well as on their availability in DAMIR.

Independent Variables—Structural Factors

From data that are readily accessible3 in DAMIR, four structural factors were selected to explore their possible correlations with outcomes: the managing organization (its component); the type of equipment or system under acquisition (its commodity); the degree of complexity (expressed in terms of unit cost quartile); and the MDAP’s life-cycle phase; these are further explained below. That these factors are required in SARs reveals certain assumptions, for example, that the factors provide useful managerial information, that programs may be managed differently depending on the values these factors take, or that the outcomes of programs depend on these factors.


Component

This variable denotes the program’s managing organization within the DOD. Component is a categorical variable with one of four nominal values: Air Force, Army, Navy, or DOD (in cases of programs in which more than one of the other three components share cooperative managerial responsibilities4). Because the components all have different organizational structures, regulations, and managerial processes for defense acquisition program management, program outcomes may vary according to component.


Complexity

Complexity has been identified as an important factor with different manifestations in program management (Fox & Miller, 2006; Brown, Potoski, & Van Slyke, 2009; Whitty & Maylor, 2009). Unfortunately, SAR formats do not provide an easy way to capture data that reflect these ideas of complexity. Accordingly, program acquisition unit cost (PAUC)5 is used as a proxy for the complexity of the system to be acquired. Systems with higher unit costs (e.g., ships, aircraft) tend to have higher levels of complexity, and those with higher complexity may carry higher risks, hence greater potential for worse outcomes than those with lower complexity. For simplicity, this is a categorical variable that takes on the nominal value (Q1–Q4) of the population quartile in which a program’s PAUC falls.
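As a sketch, the quartile assignment can be reproduced with the Python standard library; the function name and the sample PAUC values below are the author's illustration, not DAMIR or SAR field names.

```python
from statistics import quantiles

def pauc_quartile(pauc, population):
    """Return the population quartile label (Q1-Q4) for a program's PAUC."""
    q1, q2, q3 = quantiles(population, n=4)  # cut points at the 25th/50th/75th percentiles
    if pauc <= q1:
        return "Q1"
    if pauc <= q2:
        return "Q2"
    if pauc <= q3:
        return "Q3"
    return "Q4"

# Hypothetical PAUC values (US$M) for a small population of programs
paucs = [0.5, 1.2, 3.4, 8.0, 15.0, 42.0, 90.0, 210.0]
print(pauc_quartile(3.4, paucs))  # a low-to-mid unit cost falls in Q2 here
```

In a full analysis the cut points would be computed once over the whole MDAP population, then applied to each program's reported PAUC.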


Commodity

Programs have varying levels of risk associated with the type of equipment or system under acquisition, which may be due to factors such as technological maturity and the state of the U.S. industrial base, with concomitant potential effects on outcomes. This categorical variable takes on a nominal value for one of the following eight program commodities:

  • aircraft (fixed- and rotary-winged, including unmanned aerial vehicles);
  • ships (surface craft and submarines);
  • ground vehicles (tracked and wheeled);
  • missiles (air and surface);
  • munitions (bombs, artillery projectiles, warheads);
  • command, control, communications, and intelligence (C3I; radios, information systems);
  • space (satellites, launch vehicles); and
  • other (systems, such as those for soldier support, that do not fit the other commodities).
Program phase

This variable has one of two nominal values: development or production. It is associated with the relative maturity and risk of the program, which may be reflected in the program’s outcomes. Specifically, programs in the early phase (development) have a higher level of uncertainty—thus higher risk—than those in the later phase (production).

Dependent Variables—Outcomes

The criticisms of defense acquisition cited above emphasized unfavorable outcomes at the level of individual programs, such as cost increases, schedule slips, and performance shortfalls. Accordingly, in this study, dependent variables are outcomes for individual acquisition programs and are functions of the independent variables for those programs. For robustness, two outcomes are selected. The first is represented by a specific continuous variable that is suitable for multiple linear regression, and the second by a more general categorical variable suitable for binary logistic regression. These are described further below.

Percent unit cost variance (PUCV)

At the beginning of an acquisition program, initial baseline cost estimates are established in several categories both for total costs and unit costs (e.g., PAUC mentioned earlier). Cost estimates are updated for each SAR, and any variances from the baselines are reported therein. These variances are reported in dollars and in percentages, as well as in base year and inflation-adjusted dollars.

Because total costs and unit costs can both differ significantly among programs (e.g., the unit cost for a missile program may be tens of thousands of dollars, whereas for a ship, perhaps hundreds of millions of dollars), variances expressed in dollar terms can be misleading. Accordingly, this study uses the percentage measure (PUCV) as a dependent variable, with PUCV defined as the value reported in each periodic SAR. For example, if in year 1 the estimated baseline unit cost was US$1 million, and in year 2 the estimate rose to US$1.1 million, PUCV was a positive (and unfavorable) 10%.6

Occasionally, unit cost variance appears excessive, as in the case of program cancellation, when all program costs would be allocated to perhaps only a few developmental platforms. Accordingly, this study sets plus and minus 100% variance as upper and lower limits and eliminates any data points outside these limits as outliers.7
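A minimal sketch of the PUCV computation and the outlier screen described above (the function names are illustrative, not SAR fields):

```python
def pucv(baseline_unit_cost, current_unit_cost):
    """Percent unit cost variance; positive values are unfavorable."""
    return 100.0 * (current_unit_cost - baseline_unit_cost) / baseline_unit_cost

def within_limits(variance_pct, limit=100.0):
    """Keep only observations within +/-100% variance; the rest are outliers."""
    return -limit <= variance_pct <= limit

print(round(pucv(1.0, 1.1), 2))  # baseline US$1M rising to US$1.1M -> 10.0
print(within_limits(250.0))      # e.g., a cancelled program -> False
```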

Program breach

A breach occurs, and must be reported in the SAR, when an estimate of a cost, schedule, or performance parameter is determined to be significantly less favorable than its baseline estimate. Breaches fall into two main categories: Nunn-McCurdy breaches8 and acquisition program baseline (APB) breaches.

A Nunn-McCurdy breach (GAO, 2011b) occurs and must be reported when a major increase (ranging from 15% to 50%, depending on the specific criteria used) is determined to have occurred in a program’s current estimate of unit cost from the baseline estimate. A Nunn-McCurdy breach is an indicator of potentially serious problems with a program, and reporting a breach triggers a variety of actions by higher authorities, including required reports to Congress on corrective actions and possible program termination.

An APB breach occurs and must be reported when the current estimate for any of a program’s key parameters is determined to exceed its threshold (or minimally acceptable) value as defined in the APB.9 Because APB thresholds are the minimally acceptable values of the most important program parameters, the determination that any might not be achieved represents a serious issue, requiring the attention of higher authorities, along with the appropriate corrective action.

The BREACH variable has binary nominal values: “yes” (if a program has experienced any type of breach during a SAR reporting period) or “no” (if not).
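The coding of BREACH can be sketched as below. This is a deliberate simplification: `nm_threshold` stands in for the Nunn-McCurdy criteria, which in practice range from 15% to 50% depending on the baseline used, and `apb_breach` abstracts away which APB parameter exceeded its threshold.

```python
def breach_flag(unit_cost_growth_pct, apb_breach, nm_threshold=15.0):
    """Code BREACH as 'yes' if any breach occurred during the reporting period.

    nm_threshold is a simplified stand-in for the Nunn-McCurdy criteria;
    apb_breach is True if any APB parameter exceeded its threshold value.
    """
    nunn_mccurdy = unit_cost_growth_pct >= nm_threshold
    return "yes" if (nunn_mccurdy or apb_breach) else "no"

print(breach_flag(18.0, apb_breach=False))  # unit-cost growth alone -> 'yes'
print(breach_flag(5.0, apb_breach=True))    # an APB breach alone -> 'yes'
```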

Although PUCV is a specific cost variable, BREACH is general in nature and accounts for schedule and performance effects, in addition to cost effects, of sufficient importance to be reported to higher authorities. These two dependent variables provide a robust range of responses, from specific to general, and also cover each of the three major project outcomes of cost, schedule, and performance.

Null hypotheses

The following null hypotheses are tested: There is no relationship between program outcomes—either PUCV or BREACH—and:

  • H10: Component
  • H20: Commodity
  • H30: Complexity
  • H40: Phase

Data Summary

This section gives an overview of data corresponding to the variables of interest. The data include all acquisition programs that were designated as MDAPs and that submitted a SAR from 1997–2010.10 Minitab 16 was used for all figures and analyses.

Table 1 lists numbers of SAR submissions and MDAPs by commodity and component.

                 SAR Submissions                    MDAPs
Commodity        Air Force  Army  DOD  Navy         Air Force  Army  DOD  Navy
Space                  128     0    0    15                16     0    0     2
Aircraft               145    81    9   145                19    13    1    17
Missile                 35    66    8    71                 3     8    4     8
C3I                     26    93   42    52                 7    27    5    12
Ship                     0     0    0   128                 0     0    0    18
Munitions               39    38    0    31                 6     6    0     3
Ground Vehicle           0    59    0    13                 0     7    0     2
Other                   15     5    6     2                 1     1    3     1

Table 1 – Number of SAR submissions and MDAPs by commodity and component.

Figures 1 and 2 provide bar charts depicting percentages of SAR submissions per PAUC quartile (representing complexity) according to component and commodity. As expected, Navy and Air Force programs, representing a preponderance of ship, space, and aircraft programs, appear to have the highest levels of complexity.


Figure 1 – PAUC quartile by component.


Figure 2 – PAUC quartile by commodity.

Figure 3 shows numbers of SAR submissions for programs by phase and component.


Figure 3 – SAR submissions by phase (D = Development; P = Production) and component.

Table 2 provides summary statistics for PUCV.

Variable   N      Mean    Std Dev   Median
PUCV       1,168  -0.39   15.78     0.0

Table 2 – Summary statistics for percent unit cost variance (PUCV).

Figures 4 and 5 show a scatter plot of PUCV and a bar chart of BREACH counts,11 by reporting period in the time frame of interest (1997–2010). These outcomes appear relatively uncorrelated with time, or perhaps grow slightly more unfavorable, a circumstance that supports the perception, noted above, that acquisition reform efforts over the years have not proved successful.


Figure 4 – PUCV, 1997-2010.


Figure 5 – Count of breaches, 1997–2010.

Regression Analysis

This section presents regressions for the dependent variables (program outcomes) and the independent variables (structural factors of interest). First, the multiple linear regression results for PUCV are given, along with general comments on the adequacy of the model. Second, the logistic regression results for BREACH are given, again with commentary on the model. Finally, each of the structural factors is discussed individually regarding its contributions to the two models.

The general model for the outcome—PUCV or BREACH—of a program during time period t is:

Outcomet = f(component, PAUC quartile, commodity, phaset, outcomet-1)

Component, PAUC quartile, and commodity remain unchanged over a program’s life, while phase may change; where appropriate, dummy variables are created for the categorical predictors. The model also includes a lagged dependent variable to account for the prior period’s outcome.
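The dummy-variable and lag constructions described above can be sketched as follows; the helper functions are illustrative, not the author's actual code.

```python
def dummies(value, levels, reference):
    """One-hot encode a categorical predictor, dropping the reference level."""
    return {level: int(value == level) for level in levels if level != reference}

def lagged(series):
    """Shift a program's outcome series one period; the first period has no lag."""
    return [None] + list(series[:-1])

components = ["Air Force", "Army", "DOD", "Navy"]
print(dummies("Navy", components, reference="Air Force"))  # {'Army': 0, 'DOD': 0, 'Navy': 1}
print(lagged([2.0, 3.5, -1.0]))                            # [None, 2.0, 3.5]
```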

Multiple Linear Regression

For each outcome, the model for program i and period t is given by:

PUCVit = αit + β1COMPONENT + β2PAUC QUARTILE + β3COMMODITY + β4PHASEit + β5PUCVit-1 + εit
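As a sketch of how such a model might be fit, the following uses NumPy least squares on synthetic data (a hypothetical phase dummy and a lagged-PUCV column, not DAMIR data); with a noiseless response the fit recovers the chosen coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
phase_dev = rng.integers(0, 2, n)     # 1 = development, 0 = production (synthetic)
pucv_lag = rng.normal(0.0, 15.0, n)   # prior-period PUCV (synthetic)

# Design matrix: intercept, phase dummy, lagged dependent variable
X = np.column_stack([np.ones(n), phase_dev, pucv_lag])
beta_true = np.array([0.4, 1.8, 0.55])
y = X @ beta_true                     # noiseless, so OLS recovers beta exactly

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 2))
```

In the actual analysis the design matrix would also carry the component, commodity, and PAUC-quartile dummies, and y would be the reported PUCV.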

Table 3 provides unstandardized factor coefficients from progressive regressions for PUCV.

Predictor                                      Coefficient (Standard Error)
Constant                                        0.381
PUCV Lag                                        0.546
Component (Reference is Air Force)
  Army                                         -1.491
  DOD                                           0.412
  Navy                                          0.450
Commodity (Reference is Space)
  Aircraft                                      0.951
  C3I                                          -1.749
  Ground Vehicle                                0.615
  Missile                                       0.199
  Munitions                                     0.226
  Ship                                          1.963
  Other                                        -0.989
Phase: Development (Reference is Production)    1.846**
PAUC Quartile (Reference is Q1)
  Q2                                            1.943
  Q3                                            6.650***
  Q4                                            4.996***
R-squared (progressive models)                  .277, .279, .283, .286, .300
F statistic (progressive models)                381.37, 96.28, 35.37, 32.88, 28.08
1. *, **, *** indicate significance at .90, .95, and .99 levels, respectively.
2. Number of observations: 998 (when the lag factor is included); 1,168 (when the lag factor is omitted).
3. Variance inflation factors for the full model ranged from a maximum of 3.87 to a minimum of 1.06 (M = 2.16, SD = .89), indicating that multicollinearity was not high.
4. The Durbin–Watson statistic (2.16) for the full model indicates the absence of autocorrelation in the sample.

Table 3 – Regression results for PUCV.

Discussion of Results

As expected, the lag variable in each iteration is significantly related to the dependent variable (p < .01) and accounts for most of its variance. However, the structural factors of interest explain less than 3% of the variance in the outcome, a finding that does not serve to refute the null hypotheses. Overall, this model must be judged to have very little explanatory power for PUCV.

As for the individual factors, the results for component might not have been predicted. Historically, the Air Force has been acknowledged as having the most well-developed institutional program management processes, followed by the Army, and then the Navy with the least developed processes (HASC, 1990, pp. 32–45). Accordingly, Air Force programs might be presumed to have better program outcomes than Army and Navy programs, and Army programs might be presumed to have better outcomes than Navy programs. (No such presumption could be made for DOD programs.) The negative coefficients for Army and Navy programs, however, indicate that PUCV for those components tends to be lower than in Air Force programs.

Regarding the other factors, several points can be made. First, the signs of the coefficients for phase and PAUC quartile are in the expected directions: PUCV in the production phase is expected to be lower than in development, and PUCV is expected to increase as PAUC increases. Second, when a program is in the top two quartiles (Q3 and Q4), the relationship to PUCV is significant; thus, higher complexity is significantly related to unfavorable outcomes. Third, phase is significantly related to outcomes; developmental programs are associated with unfavorable outcomes. Finally, little can be said regarding commodity, other than the obvious conclusion that it has little relationship to, or explanatory power for, the dependent variable.

Binary Logistic Regression

In general, this model predicts the odds of a breach’s occurrence compared with a breach’s nonoccurrence. For each outcome, the model for program i and period t is given by:

ln(odds(BREACHit)) = αit + β1COMPONENT + β2PAUC QUARTILE + β3COMMODITY + β4PHASEit + β5BREACHit-1
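In this model, exponentiating a fitted coefficient yields its odds ratio; for instance, the BREACH-lag coefficient of 1.960 reported in Table 4 corresponds to an odds ratio of exp(1.960) ≈ 7.10. The short sketch below illustrates that conversion, along with the standard log-odds-to-probability transform.

```python
import math

def odds_ratio(coef):
    """Exponentiating a logit coefficient gives the multiplicative change in odds."""
    return math.exp(coef)

def probability(log_odds):
    """Convert log-odds to a predicted probability of a breach."""
    return 1.0 / (1.0 + math.exp(-log_odds))

print(round(odds_ratio(1.960), 2))  # 7.1, matching the BREACH-lag odds ratio in Table 4
print(probability(0.0))             # log-odds of 0 -> probability 0.5
```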

The results of binary logistic regression for BREACH include tables of coefficients12 (Table 4), goodness-of-fit tests (Table 5), observed and expected frequencies (Table 6), and measures of association (Table 7).

Predictor                                      Coefficient (Standard Error)   Odds Ratio   95% CI Lower–Upper
Constant                                       -0.198
BREACH Lag (Yes)                                1.960***                      7.10         5.33–9.45
Component (Reference is Air Force)
  Army                                         -0.018                         0.98         0.62–1.54
  DOD                                           0.529                         1.70         0.68–4.25
  Navy                                         -0.123                         0.88         0.58–1.34
Commodity (Reference is Space)
  Aircraft                                     -0.296                         0.74         0.41–1.35
  C3I                                          -0.577                         0.56         0.28–1.14
  Ground Vehicle                               -0.211                         0.81         0.35–1.88
  Missile                                      -0.805**                       0.45         0.22–0.89
  Munitions                                    -1.158***                      0.31         0.14–0.69
  Ship                                         -0.0384                        0.96         0.46–2.00
  Other                                        -1.384**                       0.25         0.07–0.95
Phase: Development (Reference is Production)    0.302*                        1.35         1.00–1.83
PAUC Quartile (Reference is Q1)
  Q2                                           -0.741***                      0.48         0.29–0.79
  Q3                                           -0.575**                       0.56         0.33–0.97
  Q4                                           -0.685**                       0.50         0.28–0.92
1. *, **, *** indicate significance at .90, .95, and .99 levels, respectively.
2. Counts: Yes – 474 (event); No – 534

Table 4 – Logistic regression results for BREACH: Coefficients.

Method Chi-square DF p
Pearson 169.288 130 0.012
Deviance 192.236 130 0.000
Hosmer-Lemeshow 12.245 8 0.141

Table 5 – Goodness-of-fit tests.

Group            1     2     3     4     5     6     7     8     9    10   Total
BREACH = Yes
  Observed      14    23    25    27    37    60    70    96    71    51     474
  Expected    16.6  22.9  25.7  29.4  33.3  57.1  70.0  85.6  81.6  51.9
BREACH = No
  Observed      87    79    78    81    69    41    31    22    36    10     534
  Expected    84.4  79.1  77.3  78.6  72.7  43.9  31.0  32.4  25.4   9.1
Total          101   102   103   108   106   101   101   118   107    61

Table 6 – Observed and expected frequencies: BREACH.

Pairs        Number    Percent     Summary Measures
Concordant   193,415   76.4        Somers’ D: 0.54
Discordant    56,189   22.2        Goodman-Kruskal Gamma: 0.55
Ties           1,048    1.4        Kendall’s Tau-a: 0.27
Total        253,116   100.0

Table 7 – Measures of association: BREACH and predicted probabilities.

Discussion of Results

Regarding the model’s predictive qualities, the goodness-of-fit tests, with p-values ranging from 0.000 to 0.141, indicate that the model does not fit the data well. This conclusion is supported by the poor fit between observed and expected frequencies in several groups. The ratio of concordant to discordant pairs and the measures of association indicate low to moderate predictive ability.

For the factors of component and phase, the results reflect generally the same conclusions as the linear regressions above. Specifically, the effect of component is mixed, with both Navy and Army programs showing slightly better odds of “no breach” than the Air Force. Development programs have higher odds of breach than production programs.

For commodity, however, the results are different from the linear model. Here, space programs have the highest odds of a breach, perhaps an intuitive result, given their complexity.

The significant results for the PAUC quartile are not intuitive, because they show that programs in quartiles 2 through 4 have about half the odds of a breach as programs in quartile 1. One would not expect that the lowest cost programs would have the highest odds of a breach. A count of breaches in each PAUC quartile confirmed, however, that quartile 1 contained the highest percentage of breaches. This issue is discussed further in the concluding section below.

Finally, as in the linear regressions, the factor for lag dominates other variables in its levels of significance, its odds ratios, and its contributions to the predictive power of the models. To illustrate, when the logistic regression for BREACH was run again with all variables except lag, the measures of association ranged from only 0.12 to 0.24, indicating this particular model’s poor predictive quality.



Summary of Findings

The null hypotheses were framed as: There is no relationship between program outcomes—either PUCV or BREACH—and:

  • H10: Component
  • H20: Commodity
  • H30: Complexity
  • H40: Phase

Component

The null hypothesis H10 is not rejected. No significant relationship was found, and the conventional wisdom about which components would have more and less favorable outcomes was not supported.


Commodity

The two models showed inconsistent results with few cases of significance. Accordingly, H20 is not rejected.


Complexity

The linear and logistic regressions gave different results for the two outcomes. In the linear model, higher PUCV was significantly associated with higher PAUC quartiles. In the logistic model, lower odds of a breach were significantly associated with higher PAUC quartiles. This suggests that (1) although programs with higher complexity (i.e., higher PAUC) have a greater likelihood of unit cost increases, those increases do not necessarily result in breaches; and (2) the breaches in quartile 1 are for reasons other than unit cost (e.g., schedule slips or performance shortfalls). Accordingly, H30 is rejected, and the research hypothesis accepted.

In practical terms, however, this finding may have little utility. Essentially, it says that, regardless of complexity, all programs are likely to have unfavorable outcomes: breaches for less complexity and cost increases for more complexity. Thus, the complexity factor doesn’t help managers to identify potentially problematic programs.

Phase
In both models, the phase of a program is a significant factor in its outcome. Developmental programs have greater likelihoods of PUCV increases and of breaches. This is consistent with project management concepts of the life cycle as representing a roadmap of activities and events that progressively increase knowledge and reduce risk in a project. H40 is rejected, and the research hypothesis accepted.

Predictive Ability of the Models

Despite the significant findings for two of the four factors, neither model explains more than a very small percentage of the variance in the outcome measures. These models must be judged to have insufficient predictive power to be useful to defense acquisition managers and policymakers.

Discussion
Although such an inconclusive analysis renders firm conclusions and recommendations problematical, it should generate useful debate, and it suggests several meaningful points.

First, more analysis is needed. DAMIR’s capabilities will likely be essential in this task, in that it provides a rich mine of historical data that will continue to be collected for the foreseeable future. Although this study relied on only four data elements from only one set of reports (i.e., the SAR), DAMIR offers much more data for sophisticated analysis by astute investigators.

Second, this study’s models had no meaningful power to explain, and thus predict, program outcomes. Clearly, further research is needed to enable reasonably reliable predictions of a program’s future outcomes.13 This reinforces the bulk of the project management literature’s findings on the difficulty of identifying significant factors in project outcomes.

This research challenge is magnified for defense acquisition. Scholars (Fox & Miller, 2006; Kronenberg, 1990) have noted the broad range of political, economic, technological, and managerial factors, as well as their myriad interactions, which may influence program outcomes. To the extent that these factors change over time and in response to environmental influences, every program is unique in the factors that determine its outcomes. Because the circumstances of each program are complex, unique, and unstable, so too are the reasons for success and failure.

If valid, this perspective would explain the poor predictive power of this study’s models, but it also has significance for managers and policymakers, who would hope to have some ex ante understanding of the conditions under which any particular defense acquisition program is likely to fail. If predictions of outcomes are not possible, then neither are specific prospective interventions to avert failure. Rather, managers and policymakers must plan for and attempt to account for all eventualities; thus, defense acquisition program management has become highly regulated and closely monitored, according to some, increasingly and excessively so (Dillard, 2005).

Alternatively, other outcomes could be used to measure acquisition program performance. The project management literature reviewed earlier noted difficulties in using outcome metrics such as cost, schedule, and quality for anything other than simple projects. Yet the DOD continues to focus primarily on those three metrics for its largest acquisition programs. Considering the significant numbers of unfavorable cost variances and breaches documented in this study, the extent to which these metrics have any real managerial or policy effectiveness is questionable.

To complement these traditional project metrics, other appropriate measures might be developed to reflect the levels of political and institutional interest in defense acquisition programs. These might account for performance in process and governance areas such as transparency, accountability, and regulatory compliance. They might also account for effects on the nation’s economy or the industrial base. They could also account for contributions to military capability, an obvious measure against which programs are not currently assessed. Identification of these outcome measures would, of course, be accompanied by development of criteria for success and failure in each area, as well as of the factors that would enable each.

Such changes would obviously entail a major shift from the implicit foundations of the traditional project management orientation for defense acquisition programs. Yet, as Koskela and Howell (2002) have argued, these foundations appear to be inadequate, and without such changes, the path of failed acquisition reform may continue in the foreseeable future.

References
Assessment Panel. (2005). Defense acquisition performance assessment: Executive summary, December.

Baker, B., Murphy, D., & Fisher, D. (1988). Factors affecting project success. In Project management handbook (2nd ed.), D. Cleland & W. King (eds.), pp. 902–919. New York: Van Nostrand Reinhold.

Baumgartner, J. (1963). Project management. Homewood, IL: Irwin.

Baumgartner, J. (1979). Systems management. Washington, DC: Bureau of National Affairs.

Berteau, D.J., Ben-Ari, G., & Sanders, G. (2010). Cost and time overruns for major defense acquisition programs. Center for Strategic & International Studies, Washington, DC.

Berteau, D.J., Ben-Ari, G., Sanders, G., Hofbauer, J., Ellman, J., & Morrow, D. (2011). Cost and time overruns for major defense acquisition programs. Center for Strategic & International Studies, Washington, DC.

Bertisen, J., & Davis, G. A. (2008). Bias and error in mine project capital cost estimation. The Engineering Economist, 53(2), 118–139.

Bevan, G., & Hood, C. (2006). What’s measured is what matters: Targets and gaming in the English public health care system. Public Administration 84(3), 517–538.

Blue Ribbon Commission. (1986). Report to the President on defense management. Washington, DC: Government Printing Office.

Brown, M. (2011). Acquisition risks in a world of joint capabilities. Acquisition Research Program Report, UNC-AM-11-162, August 1. Naval Postgraduate School.

Brown, T.L., Potoski, M., & Van Slyke, D.L. (2009). Contracting for complex products. Journal of Public Administration Research & Theory, 20, i41–i58.

Chin, W. (2004). British weapons acquisition policy and the futility of reform. Burlington, VT: Ashgate Publishing.

Cleland, D.I., & King, W.R. (1983). Systems analysis and project management. New York: McGraw Hill.

Cooke-Davies, T. (2002). The “real” success factors on projects. International Journal of Project Management, 20, 185–190.

Crawford, L. (2002). Profiling the competent project manager. In The frontiers of project management research, D. Slevin, D. Cleland, & J. Pinto (eds.). Newtown Square, PA: Project Management Institute.

Defense Acquisition Guidebook. (2011). Section 2.1.1 The acquisition program baseline (APB), July 29. Retrieved from

Department of Defense (DOD). (2011). Air Force print news: Defense Acquisition Management Information Retrieval Website. Defense AT&L. December 6.

Dillard, J. (2005). Controlling risk in defense acquisition programs: The evolving decision review framework. International Public Management Review, 6(2), 72–86.

Dvir, D., Raz, T., & Shenhar, A. (2003). An empirical analysis of the relationship between project planning and project success. International Journal of Project Management, 21(2), 89–95.

Dvir, D., Sadeh, A., & Malach-Pines, A. (2006). Projects and project managers: The relationship between project managers’ personality, project types, and project success. Project Management Journal, 37(5), 36–49.

Feuring, J. (2007). The impact of human capital on the cost of Air Force acquisition programs (No. AFIT/GCA/ENV/07-M5). Air Force Institute of Technology, Wright-Patterson Air Force Base, OH, School of Engineering and Management.

Finch, P. (2003). Applying the Slevin-Pinto project implementation profile to an information systems project. Project Management Journal, 34(3), 32–39.

Flyvbjerg, B., Holm, M., & Buhl, S. (2002). Underestimating costs in public works projects: Error or lie? Journal of the American Planning Association 68(3), 279–295.

Fox, J.R. (1974). Arming America: How the U.S. buys weapons. Cambridge, MA: Harvard University Press.

Fox, J.R. (1984). Revamping the business of national defense. Harvard Business Review (Sept–Oct), 63–70.

Fox, J.R. (1988). The defense management challenge: Weapons acquisition. Boston: Harvard Business School Press.

Fox, J.R., & Miller, D. (2006). Challenges in managing large projects. Ft. Belvoir, VA: Defense Acquisition University.

Gansler, J. (2011). Democracy’s arsenal: Creating a twenty-first-century defense industry. Cambridge, MA: MIT Press.

Government Accountability Office (GAO). (2008). Defense acquisitions: Results of annual assessment of DOD weapon programs. GAO-08-674T. April. Washington, DC: GAO.

Government Accountability Office (GAO). (2010). Defense acquisitions: Assessments of selected weapon programs. GAO-10-388SP. March. Washington, DC: GAO.

Government Accountability Office (GAO). (2011a). Defense acquisitions: Assessments of selected weapon programs. GAO-11-233SP, March. Washington, DC: GAO.

Government Accountability Office (GAO). (2011b). Trends in Nunn-McCurdy cost breaches for major defense acquisition programs. GAO-11-295R, March. Washington, DC: GAO.

Grace Commission. (1984). President’s private sector survey on cost control: Summary report, January.

Harrington, W., Morgenstern, R. D., & Nelson, P. (2000). On the accuracy of regulatory cost estimates. Journal of Policy Analysis and Management, 19(2), 297–322.

Horgan, L. E. (1995). Panacea or Pandora’s box: The program manager reform. Administration and Society, 27(1), 107–126.

House Armed Services Committee (HASC). (1990). The quality and professionalism of the acquisition workforce. Print No. 10. May 8. Washington, DC: Government Printing Office.

Hyväri, I. (2006a). Project management effectiveness in project-oriented business organizations. International Journal of Project Management, 24(3), 216–225.

Hyväri, I. (2006b). Success of projects in different organizational conditions. Project Management Journal, 37(4), 31–41.

Ibbs, C. & Reginato, J. (2002). Measuring project management’s value. In The frontiers of project management research, D. Slevin, D. Cleland, & J. Pinto (eds.), 177–186. Newtown Square, PA: Project Management Institute.

Koskela, L. & Howell, G. (2002). The underlying theory of project management is obsolete. Proceedings of the PMI Research Conference, 293–302.

Kronenberg, P.S. (1990). Public administration and the Defense Department: Examination of a prototype. In Refounding public administration, G. Wamsley et al. (eds.), 274–306. Newbury Park, CA: Sage.

Kwak, Y.H. (2005). Brief history of project management. In The story of managing projects: an interdisciplinary approach, E. Caryannis, Y.H. Kwak & F. Anbari (eds.), 1–9. Westport, CT: Praeger.

MacNeil, N., & Metz, H. (1956). The Hoover report, 1953–1955. New York: Macmillan.

Mavroules, N. (1991). Creating a professional acquisition workforce. National Contract Management Journal, 24(2), 15–33.

McKinney, E., Gholz, E., & Sapolsky, H. (1994). Acquisition Reform–Lean 94-03. Lean Aircraft Initiative. Cambridge, MA: Massachusetts Institute of Technology.

Miller, T.H. (2010). Rearranging the deck chairs on the Titanic: Why does acquisition reform never work? Defense AT&L, November-December, 27–30.

Morris, P. (2002). Research trends in the 1990s. In The frontiers of project management research, D. Slevin, D. Cleland & J. Pinto (eds.), 31–56. Newtown Square, PA: Project Management Institute.

Munechika, C. (1997). Acquisition reform: This, too, shall pass. Air Command and Staff College Research Paper AU/aCsC/0385/97-03, Maxwell AFB, AL.

Murphy, D., Baker, B., & Fisher, D. (1974). Determinants of project success. Boston: National Aeronautics and Space Administration.

Oliver, C. (1991). Strategic responses to institutional processes. Academy of Management Review, 16(1), 145–179.

Pinto, J. K., & Prescott, J. E. (1990). Planning and tactical factors in the project implementation process. The Journal of Management Studies, 27(3), 305–328.

Pinto, J. K., & Slevin, D. P. (1987). Critical factors in successful project implementation. IEEE Transactions on Engineering Management, EM-34(1), 22–27.

Pinto, J. K., & Slevin, D. P. (1988). Project success: Definitions and measurement techniques. Project Management Journal, 19(1), 67–72.

Pinto, J. K., & Slevin, D. P. (1989). Critical success factors in R&D projects. Research Technology Management 32(1), 31–66.

Quirk, J., & Terasawa, K. (1986). Sample selection and cost underestimation bias in pioneer projects. Land Economics, 62(2), 192–200.

Sadeh, A. (2000). The role of contract type in the success of R&D defense projects under increasing uncertainty. Project Management Journal, 31(3), 14–22.

Shenhar, A.J., Levy, O., & Dvir, D. (1997). Mapping the dimensions of project success. Project Management Journal, 28(2), 5–13.

Whitty, S.J., & Maylor, H. (2009). And then came complex project management (revised). International Journal of Project Management, 27(3), 304–310.

Zimmerer, T. W., & Yasin, M. M. (1998). A leadership profile of American project managers. Project Management Journal, 29(1), 31–38.

Keith F. Snider is Professor of Public Administration and Management in the Graduate School of Business and Public Policy at the Naval Postgraduate School, Monterey, California, USA. He received his PhD in Public Administration and Public Affairs from Virginia Tech. His teaching and research interests lie in the areas of defense acquisition policy, defense project management, and public administration theory and history. Dr. Snider’s publications on these topics appear in several scholarly journals and books. Since 2003, he has served as Principal Investigator for the Acquisition Research Program at NPS, managing DOD-sponsored research projects conducted at NPS and other universities.

1 From this point forward, defense acquisition projects will be referred to as “programs” or MDAPs, in keeping with usual DOD usage. MDAP is defined in 10 U.S.C. 2430 as a DOD acquisition program that is designated as such by the Secretary of Defense, is not a highly sensitive classified program, and is estimated to require an eventual total expenditure for research, development, test, and evaluation of more than US$365 million or an eventual total expenditure for procurement of more than US$2.19 billion (both in FY 2000 constant dollars).

2 10 U.S.C. 2432 requires the Secretary of Defense to submit a SAR to Congress for all MDAPs. The SAR reports the status of total program cost, schedule, and performance, as well as unit cost breach information. SARs are submitted annually and, on an exception basis, quarterly when estimates for some cost and schedule parameters exceed their targets.

3 The factors are included in standard SAR formats and require little or no manipulation or transformation.

4 For example, the F-35 Joint Strike Fighter is being acquired for use by both the Air Force and the Navy; thus, as a joint program, its component is designated as “DOD.”

5 Defined as the estimated cost of development, procurement, and construction necessary to acquire a system, divided by the total number of fully configured items (to include research and development units) to be bought through the life of the program.
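This definition amounts to a simple division; a hypothetical illustration (all figures invented):

```python
# Program Acquisition Unit Cost (PAUC), per the definition above:
# total acquisition cost divided by total fully configured units.
# All figures are hypothetical.
development_cost = 4_500   # US$ millions (includes R&D units)
procurement_cost = 18_000  # US$ millions
construction_cost = 500    # US$ millions
total_units = 400          # fully configured items, incl. R&D units

pauc = (development_cost + procurement_cost + construction_cost) / total_units
print(f"PAUC = US${pauc:.1f} million per unit")  # US$57.5 million
```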

6 Many circumstances could cause such a result, such as (1) cost increases due to poor initial estimates; (2) cost increases due to desired and beneficial changes (e.g., increased weapon system capability); or (3) changes in the quantity to be procured. Discussion of the reasons for program cost growth, of appropriate cost metrics, and of the idea that cost growth in some circumstances may be justifiable and worthy is reflected in the DOD’s responses to the GAO’s recent critical assessments of acquisition programs. See, for example, GAO (2011a, pp. 153, 179–180).

7 One program, National Missile Defense, was excluded due to its extremely high unit cost (in excess of US$20 billion), which is attributable both to its very complex “system of systems” nature and to its production as one single system (i.e., a quantity of one). Additionally, this program submitted only one SAR, in 1999. Other programs are also very complex, with very low production quantities (e.g., Warfighter Information Network-Tactical); however, they typically have submitted SARs in several years. For this reason, they are included in the analysis of this study.

8 Named for Senator Sam Nunn and Rep. David McCurdy, co-sponsors of the original 1982 legislation (10 U.S.C. 2433 and 2435).

9 10 U.S.C. 2435 and 2220 provide statutory basis for APBs for major programs. The APB is established at program initiation and represents its approved description in terms of key cost, schedule, and performance parameters, expressed as both objective (desired) and threshold values. Examples of key parameters that might be included in an APB are (Defense Acquisition Guidebook 2011):

  • Cost – estimated total cost for each appropriation type (e.g., procurement, military construction), unit cost, ownership cost
  • Schedule – projected dates for major milestones, major tests, and initial operational capability
  • Performance – projected system attributes such as operational availability, range, airspeed, accuracy

10 This study did not include DOD’s chemical demilitarization programs. Although these are sometimes included in lists of acquisition programs and are required to submit SARs, they differ significantly in that they are focused on destroying weapons, rather than acquiring them.

11 The bars with the highest counts correspond to annual SAR submissions in December. The bars with lower counts represent exception SARs, as described earlier. Annual SARs were not submitted in either 2000 or 2008.

12 Each estimated coefficient represents the log of the odds of an event, where odds are defined as the ratio of the event’s probability of occurrence to its probability of nonoccurrence. (Here, the “event” is an unfavorable outcome, i.e., occurrence of a breach.) In Table 4, the coefficient of 1.960 for BREACH Lag indicates a positive relationship between that factor and BREACH. Regarding interpretation of odds ratios, those close to one indicate a minimal effect on the dependent variable. The odds ratio of 7.10 may be interpreted as follows: if a breach occurred in the prior period (the lag), the odds of a breach occurring in the present period increase by a factor of 7.1.
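The arithmetic in this footnote can be checked directly: exponentiating the reported coefficient recovers the odds ratio. The coefficient is the 1.960 quoted above from Table 4; the baseline odds are invented for illustration:

```python
import math

# A logistic-regression coefficient is the log of an odds ratio;
# exponentiating it recovers the ratio itself.
coef_lag = 1.960                     # BREACH Lag coefficient (Table 4)
odds_ratio = math.exp(coef_lag)
print(f"odds ratio = {odds_ratio:.2f}")   # ≈ 7.10

# Effect on the odds: if the baseline odds of a breach were, say, 0.25
# (a 20% probability), a prior-period breach would raise them to:
new_odds = 0.25 * odds_ratio
new_prob = new_odds / (1 + new_odds)      # odds back to probability
print(f"probability after a prior breach ≈ {new_prob:.2f}")
```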

13 For example, Brown’s (2011) work examining program interdependencies might contribute a better factor than PAUC to represent program complexity.

©2014 Project Management Institute Research and Education Conference



Related Content

  • PMI White Paper

    Agile Regulation member content open

    By National Academy of Public Admiistration | PMI The National Academy of Public Administration recently presented the results of a year-long effort to identify the Grand Challenges in Public Administration.

  • Project Management Journal

    Mixed-Methods Research for Project Management Journal® member content locked

    By Jiang, James | Klein, Gary | Müller, Ralf We continue our series of editorials providing guidance for future submissions to Project Management Journal® (PMJ).

  • Project Management Journal

    Servant Leadership and Project Success member content locked

    By Nauman, Shazia | Musawir, Ata Ul | Malik, Sania Zahra | Munir, Hina Employing self-determination and social identification theories, we examined how servant leadership, which focuses on employees’ needs, influences project success.