Program Risk Management
How It Is Done in Major Defense Programs
Howe School of Technology Management,
Stevens Institute of Technology, NJ, USA
The Project Management Institute (PMI), in A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (PMI, 2005), defined project risk as “an uncertain event or condition that, if it occurs, has a positive or a negative effect on at least one project objective, such as time, cost, scope, or quality. A risk may have one or more causes and, if it occurs, one or more impacts.” A similar definition of project risk was suggested by the U.K. Association for Project Management (Simon, Hillson, & Newland, 1997). Because risks have impacts on project objectives, project teams must be aware of those risks, and action plans should be developed in response to them.
Even though risk, by definition, has either negative or positive impacts on project objectives, many project teams put more emphasis on managing the negative risks (threats) than on managing the positive risks (opportunities). While the negative risks cannot be ignored, not paying attention to the positive risks may result in the loss of opportunities to enhance project success (Hillson, 2002; Olsson, 2007). To change the mindset of practitioners and to make the discipline broader and more inclusive of opportunity management, Ward and Chapman (2003) suggested that “project uncertainty management” should be practiced instead of project risk management.
In program management, the literature on program risk management is limited. To manage program risks, different authors have suggested processes and methodologies similar to the ones used for project risk management. Since a program is “a group of related projects managed in a coordinated way to obtain benefits and control not available from managing them individually” (PMI, 2005), the question is whether program risk management is simply the management of the risks of the individual related projects. More study should be conducted on risk management at the program level.
To address these issues, a preliminary study was conducted to investigate program risk management. In particular, we looked at how it was done (our research question). Our sample in this study included 12 major defense programs from five organizations. The majority of the programs were based on contracts with the U.S. government. From the program reports (submissions for the Aviation Week 2007 Program Excellence Award), we used content analysis to observe patterns of program risk management in these programs and to contrast the findings with the literature.
Literature on risk management mostly discusses processes and methodologies for managing project risks. While some studies proposed ways to identify or assess project risks for project selection purposes (e.g., Miller & Lessard, 2001; Flyvbjerg, 2006), others proposed processes and methodologies for use during project management (e.g., Chapman & Ward, 2000; Vose, 2000; Cooper, Grey, Raymond, & Walker, 2005) and system operation (e.g., Baron & Pate-Cornell, 1999). For project management in general, the PMBOK® Guide suggests that a project risk management process should include risk management planning, risk identification, qualitative and quantitative risk analysis, risk response planning, and risk monitoring and control (PMI, 2005). This process is similar to the processes proposed by the Australian and New Zealand Standard (AS/NZS 4360), the U.K. Association of Project Management's Project Risk Analysis and Management (PRAM) Guide, and the U.K. Office of Government Commerce (OGC) Management of Risk (M_o_R) guideline (Cooper et al., 2005).
Risk management planning is the process of deciding how to approach and conduct risk management activities for a project. The risk management plan, the output of this step, includes decisions on the methodology, roles and responsibilities, budget, time, risk categories, definitions of risk probability and impact, the probability and impact matrix, stakeholders' tolerances, the reporting format, and the tracking method that will be used for project risk management (PMI, 2005). Of these items, the literature discusses methodology, risk categories, and risk tolerance in particular.
In terms of the methodology, Raz and Michael (2001) suggested a list of risk management tools that are widely used and those that are associated with effective project risk management. They stated that tools and techniques such as risk impact assessment, risk classification, and ranking of risks contribute to effective risk management and project performance. For the categories of risks, Keizer and Halman (2007) suggested 12 risk categories of radical innovation projects, which include, for example, organization and project management, commercial viability, consumer acceptance and marketing, product family and brand positioning, and product technology. For IT projects, Taylor (2006) proposed project management, relationships, solution ambiguity, and business environment as categories. In construction, Datta and Mukherjee (2001) suggested external and immediate project risks as the main categories. Technical, political, social, economic climate, and domestic climate risks fall under external risks. Conceptual difficulty, mode of contract, project management, and failure by contractors are considered as immediate risks. Elkington and Smallman (2002) studied risk management in the utilities sector. In terms of risk tolerance, Kwak and LaPlace (2005) pointed out that multiple perspectives should be used when developing risk tolerance levels. They argued that the risk management plan should address the tolerance levels not only specific to the firm, but also with regard to the key participants and stakeholders of projects.
Risk identification is an iterative process used to determine the risks that might impact the project and to document their characteristics. In general, the risk identification process leads to the analysis of risks either qualitatively or quantitatively, or both. In addition, the identification of a risk may suggest its root cause and a response plan that should be recorded for further analysis (PMI, 2005).
The output of this step is a risk register, consisting of the list of identified risks, potential responses, root causes of risks, and so forth. In their study of radical innovation projects, Keizer and Halman (2007) listed the 10 most frequently identified risks, which include meeting consumers' needs, organization and management of the project, stability of the product, quality and safety requirements of the production system, and supplier management. Zou, Zhang, and Wang (2007) suggested key risks in construction projects in China.
Qualitative risk analysis includes methods for prioritizing the identified risks for further action, such as quantitative risk analysis or risk response planning (PMI, 2005). Several methodologies have been proposed in the literature for risk assessment and prioritization (e.g., Baccarini & Archer, 2001; Zeng, An, & Smith, 2007). In general, the identified risks are prioritized based on their probability of occurring and the corresponding impact on project objectives if they occur. Jaafari (2001) suggested that the evaluation of risk should go beyond on-time and within-budget project delivery and include the impact of risk on long-term business objectives. Risk tolerance levels, identified in the risk management plan, are the thresholds for determining whether a risk has a high, medium, or low rating. Risks with a low rating of probability and impact are sometimes not rated further; instead, they are included in a watchlist for future monitoring. Quantitative risk analysis is performed on some high-rated risks to analyze their effect and to assign a numerical rating to those risks (PMI, 2005). Techniques such as Monte Carlo simulation and decision tree analysis are generally used in the quantitative risk analysis process.
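To make the Monte Carlo technique mentioned above concrete, the following minimal sketch simulates total program cost from three-point estimates. The work-package names and dollar figures are hypothetical, invented purely for illustration; they do not come from any program or standard discussed here.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical three-point cost estimates (low, most likely, high) in $M.
estimates = {
    "design":      (10, 12, 18),
    "integration": (5, 8, 15),
    "testing":     (4, 6, 12),
}

def simulate_total_cost(trials=10_000):
    """Monte Carlo: sample each work package from a triangular
    distribution and sum, yielding a distribution of total cost."""
    totals = []
    for _ in range(trials):
        total = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in estimates.values())
        totals.append(total)
    return sorted(totals)

totals = simulate_total_cost()
p50 = totals[len(totals) // 2]          # median total cost
p80 = totals[int(len(totals) * 0.8)]    # 80th-percentile total cost
print(f"Median total cost: ${p50:.1f}M; 80th percentile: ${p80:.1f}M")
```

The spread between the median and the 80th percentile is one way such an analysis assigns a numerical rating to aggregate cost risk.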
Risk response planning is a process of developing options and determining actions in response to the identified risks, mostly those with a high or medium rating. In other words, it is a process of developing strategies to enhance opportunities (positive risks) and reduce threats (negative risks) to the project objectives. PMI (2005) suggested avoid, transfer, and mitigate as strategies for negative risks, and proposed exploit, share, and enhance as strategies for positive risks. Acceptance is a strategy that can be used for both negative and positive risks. For IT projects, Taylor (2006) found that four types of strategies were used: control, negotiation, research, and monitoring. In construction, Datta and Mukherjee (2001) suggested nine response plans, corresponding to the interpretation of the risk management matrix (high-medium-low on external and immediate project risks). They proposed, for example, that if the ratings of both external and immediate risks are high, the project team should consider abandoning the project. If the external project risk is high but the immediate project risk is low, the team may reconsider the project proposal, develop alternatives, or transfer risks. If both external and immediate project risks are low, the team may consider planning for contingency and go ahead with the project. Dillon, Pate-Cornell, and Guikema (2005) proposed a mathematical model for the optimal use of budget reserves to minimize technical and management failure risks during complex project development.
After risk identification, assessment, and development of response plans, the next step in the process is risk monitoring and control. In essence, it is a process of identifying, analyzing and planning for newly arising risks. It also helps keep track of the identified risks and those on the watchlist, reanalyzing existing risks and monitoring trigger conditions for contingency plans. In addition, the execution of risk responses is also reviewed to evaluate their effectiveness during risk monitoring and control (PMI, 2005).
The main objective of this research is to investigate program risk management in major defense programs run by for-profit organizations. With this research objective, our focus was on exploring the process of program risk management in such a setting.
Data source: In 2004, Aviation Week & Space Technology magazine initiated an Annual Program Excellence Award for the aerospace and defense industry. Since then, each year, major aerospace and defense contractors are encouraged to submit their candidates for the award based on a common framework. About 20 programs are then nominated and evaluated by an independent award team. Programs are evaluated based on four major areas, each including several specific criteria. The areas and their relative weights are: Value Creation (10%), Organizational Processes (30%), Addressing Complexity (30%), and Metrics (30%). Program risk management is reported in the Organizational Processes area.
Sample: Based on information availability, we investigated 12 programs from five organizations that were nominated for the award in 2007 (see Table 1). Several of the programs studied were from the same organization. Investigating the risk management process of those programs ensures the consistency of the findings (the process) within those organizations. In addition, studying programs from different organizations helped ensure the validity of the findings across organizations.
Table 1. Samples
| Program name | Program definition | Organization | Customer |
| --- | --- | --- | --- |
| TBMCS | Modernization of the Theater Battle Management Core Systems | Lockheed Martin | U.S. Air Force |
| F-22 | Development to sustainment of the F-22 air fighter | Lockheed Martin | U.S. Air Force |
| MH-60 | Development of the common cockpit for MH-60 platforms | Lockheed Martin | U.S. Navy |
| THAAD | Development of the Terminal High Altitude Area Defense, a key element of the Ballistic Missile Defense System | Lockheed Martin | Missile Defense Agency |
| GBAS | Development of the Ground-Based Augmentation System to replace the current low-visibility landing system | Honeywell | FAA |
| GEnX | Development of pneumatic valves for control of the various functions of the GEnX aircraft propulsion gas turbine engine | Honeywell | General Electric |
| IPIC | ICBM Prime Integration Contract: system engineering and integration support for the Intercontinental Ballistic Missile Program | Northrop Grumman | U.S. Air Force |
| E-2C | Full-scale production of the E-2C Hawkeye early warning and control platform | Northrop Grumman | U.S. Navy |
| DSP | Defense Support System: total system support for the Defense Support Program | Northrop Grumman | U.S. Air Force |
| EA-18G | Development of the next-generation electronic warfare aircraft EA-18G Growler | Boeing | U.S. Navy |
| FCS | Lead system integrator of the Future Combat Systems program | Boeing | U.S. Army |
| SM-6 | Development of the Standard Missile-6, an extended-range active missile | Raytheon | U.S. Navy |
Content analysis: Using the nomination reports, we conducted content analysis (Bauer & Gaskell, 2000) to investigate program risk management. In particular, we performed content coding to extract the information from each report. Based on the coding, we were able to group the information regarding program risk management into several topics. For each topic, we then performed cross-case analysis to identify whether any pattern emerged across multiple cases. In the next section, we discuss the findings by topic, namely attitude toward risk management, risk management boards, the risk management process, and the use of a database.
Findings and Discussions
Table 2 summarizes the patterns that emerged from the content analysis.
Table 2: Research Evidence From Content Analysis
| Issue | Number of programs | Programs that especially highlight the issue |
| --- | --- | --- |
| **Attitude toward risk management** | | |
| Corporate culture, policy, and procedure support risk management | 12 | TBMCS, MH-60, SM-6 |
| Risk management as part of program performance | 12 | MH-60 |
| Everyone is an active participant in risk management | 12 | SM-6 |
| Opportunity management is actively practiced | 4 | TBMCS, THAAD, EA-18G, FCS |
| **Risk management boards** | | |
| Team-level board: managing risks at the lowest possible level | 12 | F-22, THAAD, E-2C, EA-18G |
| Program-level board: providing oversight and program-wide coordination | 12 | F-22, SM-6, E-2C, TBMCS |
| Members of the boards are from different parties: customer, contractors | 12 | TBMCS, EA-18G, E-2C, THAAD |
| **Risk management process** | | |
| Formal and standard process | 12 | MH-60, F-22, IPIC, SM-6 |
| Risks are identified at all levels | 12 | TBMCS, F-22, MH-60 |
| Qualitative risk analysis is performed for prioritizing risks | 12 | EA-18G, MH-60, GBAS, IPIC, FCS, THAAD |
| Quantitative risk analysis is performed | 1 | MH-60 |
| Risk response plan is developed | 12 | MH-60, THAAD, DSP, SM-6 |
| Risk monitoring and control are done at both the team and program levels | 12 | SM-6, IPIC, F-22, THAAD, E-2C, EA-18G |
| **The use of a database** | | |
| Database is used to facilitate risk management | 4 | TBMCS, F-22, THAAD, SM-6 |
Attitude Toward Risk Management
Based on the content analysis, we found that all the programs we studied had a positive attitude toward risk management. In fact, risk management was considered an important part of program management. All 12 programs had a well-defined risk management process. The primary objectives of the process are (1) to identify critical areas and risk events, both technical and nontechnical, and take necessary action to handle them before they become serious cost, schedule, or performance problems; and (2) to ensure that all participants (including the customers and subcontractors) remain aware of program risks and their potential consequences so that the risks can be continuously monitored and effectively mitigated. At the corporate level, program risk management has been recognized and integrated into corporate procedures and culture to ensure that risk management is practiced in every program by everyone.
The program manager of the MH-60 program reported that risk management has always been an integral part of program performance management. “Risk management evaluates threats to technical, cost, and schedule performance and integrates risk reduction actions into our cost and schedule systems.” In addition, a corporate policy has been issued to emphasize the implementation of risk management on every program. The program manager of the Theater Battle Management Core Systems (TBMCS) reported using a proactive partnership with the customer to provide early identification and management of program risks. For the MH-60 and Standard Missile-6 (SM-6) programs, the team members (including the subcontractors) are encouraged to recognize that risks occur and that it is all right to identify potential risk areas without any fear of retribution. As part of the corporate culture, the programs motivate the teams to take responsibility for risk management. As a result, “all the members, at any time, can raise a risk in any forum for evaluation and disposition,” reported the program manager of the SM-6.
In addition to managing negative risks, some of the programs in our study also practice opportunity management. In such programs, risk and opportunity were managed together using the same process, reviewed by the same boards, and recorded in the same database. The program manager of TBMCS noted that “the risk management process is also intended to identify opportunities for potential cost savings, schedule enhancements, improvements to the management and engineering processes/design, and scope growth to the contract.” Along the same line, the EA-18G program manager reported that “the opportunity management practice focuses on identifying and capturing opportunities for improvement in cost, schedule, and/or performance over and above current plan. This approach encourages each team member to apply innovative thinking to improve existing task, develop corresponding improvement plans, and share the ideas with key decision makers to ensure that opportunities are realized.”
While many projects and programs have struggled with risk management, the finding that program risk management has been rigorously practiced in these major defense programs is very encouraging. It shows that risk management has become an integral part of these organizations and that the practice involves people from different organizational levels, supported by the policies, procedures, and culture of the organization. What we have learned from these organizations could be considered a best practice against which other organizations can benchmark their own risk management for further improvement.
It is even more encouraging to find that opportunity management has also been practiced in some of the programs. This finding is rather surprising, since the practice of opportunity management in the general project/program management community is rare (Hillson, 2002; Ward & Chapman, 2003; Olsson, 2007). We learned from the cases that some of the programs have extended the standard risk management practice to incorporate a return on risk investment. To do so, once a risk is identified, a cost risk exposure is calculated for every technical, cost, or schedule risk. Proactive risk mitigations are then initiated and tracked based on cost-to-benefit potential. Resources are dynamically allocated to those risk mitigations that offer high expected returns. Similar methods can also be applied to opportunity management. The program manager of THAAD reported that “to date, THAAD has invested $40M on 80 separate risk reduction activities that have reduced cost exposure of $117M.” The program manager of EA-18G noted that “opportunity management has returned several million dollars to the program management reserve pool, avoided millions more in cost growth and improved schedule margin in several critical areas.”
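The return-on-risk-investment logic described above can be sketched in a few lines. The mitigation candidates, their costs, and the exposure reductions below are entirely hypothetical, and the ranking rule (expected exposure reduction per dollar invested) is one plausible reading of "cost-to-benefit potential," not the programs' actual method.

```python
# Hypothetical mitigation candidates:
# name -> (cost of mitigation in $M, expected reduction in cost risk exposure in $M)
candidates = {
    "redundant supplier":   (2.0, 7.5),
    "early prototype test": (4.0, 9.0),
    "added design review":  (0.5, 1.2),
    "spare hardware buy":   (3.0, 3.3),
}

def rank_by_benefit_cost(cands):
    """Rank mitigation investments by expected exposure reduction per
    dollar invested, so limited reserves flow to the highest returns."""
    return sorted(cands.items(),
                  key=lambda kv: kv[1][1] / kv[1][0],
                  reverse=True)

for name, (cost, reduction) in rank_by_benefit_cost(candidates):
    print(f"{name}: invest ${cost}M for ${reduction}M expected "
          f"exposure reduction (ratio {reduction / cost:.2f})")
```

Under this rule, a cheap review with a modest payoff can outrank a costlier test with a larger absolute payoff, which is exactly the dynamic-allocation behavior the programs describe.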
In the next sections, we discuss how program risk management is done, starting with who is involved in program risk management. Note that we use the term risk management to represent both risk and opportunity management.
Risk Management Boards
Since the programs we studied were major defense programs consisting of extended teams, we found evidence that the program stakeholders were involved in risk management on at least two levels: the team level and the program level. Each level forms its own risk management board. At the team level, the teams take responsibility for the program risks in their areas. Risk management at the program level is performed to facilitate the coordination and oversight of the overall program risks. The program manager of F-22 referred to this setting as “a tiered risk management board.”
Based on the key program deliverables or products, the programs we studied were organized using the integrated product team (IPT) structure. In general, members of each IPT may come from one or different organizations: government, contractor, subcontractors, suppliers, or others. In essence, each team is considered a subproject team within a program. At the team level, risk management was performed by each IPT.
Each IPT is responsible for risk identification and assessment, including the development of mitigation plans with respect to the risks in its area (the risk management process is discussed in the next section). The team is also responsible for highlighting risks that warrant elevation to the Program Risk Management Board (PRMB). In the case of F-22, the program uses “Risk Integrators throughout the IPT structure to coordinate risk management activities within that specific IPT.” For the E-2C program, in addition to the PRMB, each IPT has its own risk management board to address its internal risks. The EA-18G program uses the systems engineering team to assist in risk management. In particular, since each IPT has to present a weekly risk status at a program management meeting, the systems engineering team keeps the IPTs on track by providing weekly notification of late action plans.
At the program level, the majority of the programs we studied reported having a formal PRMB. Since these programs were major government-contracted defense programs, the members of the board generally come from the organization executing the program (the prime contractor) and the government (the customer). In other words, the typical joint PRMB consists of the program managers (from both the contractor and the customer), the chief engineer, and the IPT leads. As we found in our study, the chairperson of the board can be the program manager or the chief engineer (e.g., F-22 and SM-6). In many cases, the board is jointly chaired by both program managers, from the contractor and the customer (e.g., E-2C and TBMCS).
The responsibility of the board is to oversee the risk management process. It also ensures team-wide coordination and the mitigation of issues before they become problems. To do so, the board meets regularly. In some of the programs (e.g., E-2C), the board meets monthly to review and discuss new potential risks and to manage existing risk mitigation efforts. In the EA-18G program, the board (called the Program Risk Advisory Board, or PRAB) convenes quarterly for a detailed two-day review of all program-level risks and opportunities. At the review, the team members who are responsible for particular risks report their team's progress against a step-by-step plan for mitigating the likelihood and/or consequence of the risks. The board provides oversight by identifying additional mitigation steps, approving closure, or suggesting alternate plans. In addition to risk management, this review activity also ensures that leadership gathers regularly to take a holistic view of program execution.
Our finding here shows the practicality of program risk management. Since each program is very large and involves many teams, it is practical that, at the team level, each team should exercise its own risk management. By doing so, program-wide risks are reviewed and mitigated at the lowest possible level. At the program level, the use of the program review board helps ensure team-wide coordination. In other words, the program risk review board should oversee, among other risk items, the risks associated with the integration, interactions, and interdependencies of subprojects. In terms of the composition of the board, our findings support including members from different roles (customers, contractors, suppliers, etc.). This brings the benefit of multiple perspectives in risk management, which eventually leads to risk management that is meaningful for all parties. The literature (e.g., Kwak, 2005; Manley, Shaw, & Manley, 2007) also supports the use of multiple perspectives in risk management.
Risk Management Process
All the programs in our study have a formal and standard risk management process. The informants indicated clearly that their risk management process is comprehensive, proactive and forward looking. Since these programs were based on contracts with the Department of Defense (DoD), their risk management processes were aligned with DoD policy.
We found that the typical steps in the risk management process include risk identification, risk assessment, risk handling, risk surveillance, and risk closure. The program manager of F-22 indicated that their risk management system addresses “both pre-contract and post-contract phases, accounting for changes in the manner that risk issues are surfaced, monitored, and mitigated.”
Risk Identification: We found that, typically, risks are identified at all levels (IPT and program levels) and throughout the entire program life cycle. For the early stage of the contract, the program manager of MH-60 reported that the contract requirements, Statement of Work (SOW), specifications and other Request for Proposal (RFP) materials, as well as the internally developed work breakdown structure (WBS), Integrated Master Schedule (IMS), and program plans were examined. This initial assessment identified areas where the team should focus on improving its performance position to meet technical requirements, cost targets, and schedule goals, while balancing all three elements. In the MH-60 case, the risk identification process also includes a review of the WBS against the internal risk taxonomy matrix. This matrix, in fact, serves as a checklist of risk categories, which include requirements, design, integration and test, management processes, program constraints, production, and logistics (including obsolescence).
Risk Assessment: To identify risk priority, risks were assessed by using both qualitative and quantitative approaches. While all of the programs we studied reported using a qualitative risk assessment, the MH-60 program used both qualitative and quantitative risk assessment methods.
For the qualitative risk assessment, we found that risks were classified by likelihood and impact levels. The likelihood represents the probability of risk occurrence. For the impact level, the adverse trends in performance-measuring parameters resulting from the risks are measured and predicted. In the programs we studied, those parameters were defined with respect to the technical, cost, and schedule dimensions. In terms of assessment scales, while some of the programs (e.g., EA-18G and MH-60) used a scale of 1 to 5 to assess likelihood (Remote to Near Certain) and impact (Low to High), the program manager of the Ground-Based Augmentation System (GBAS) reported using “High-Medium-Low” scales. In either case, we found that explicit operational definitions for both likelihood and impact were used to ensure a consistent evaluation standard. After the assessment, risks were placed in a matrix. This risk matrix facilitates risk prioritization as well as the group review and discussion of the risks and the corresponding step-by-step mitigation schemes. For MH-60, the program manager indicated that the cumulative effect of all of the risks on program cost in dollars is provided by a summation of the individual factored cost exposures and added that “cost exposure is reviewed in a pre-mitigation and a post-mitigation basis, enabling the program to review the predicted reduction of cost exposure.”
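A qualitative rating scheme of the kind described above can be illustrated with a minimal sketch. The 5 x 5 scoring, the High/Moderate/Low thresholds, and the register entries are assumptions made for illustration, not the definitions used by any of the programs studied.

```python
def risk_rating(likelihood, impact):
    """Map a 1-5 likelihood score and a 1-5 impact score to a
    High/Moderate/Low rating via a conventional 5x5 matrix.
    The thresholds here are illustrative only."""
    score = likelihood * impact
    if score >= 15:
        return "High"
    if score >= 6:
        return "Moderate"
    return "Low"

# Hypothetical register entries: (risk name, likelihood 1-5, impact 1-5)
register = [
    ("sensor integration slip", 4, 4),
    ("supplier part shortage",  3, 2),
    ("software defect backlog", 2, 2),
]
for name, p, i in register:
    print(f"{name}: {risk_rating(p, i)}")
```

In practice the operational definitions behind each scale step (what counts as "Near Certain," what counts as impact level 4) are what make such a matrix consistent across evaluators, as the programs emphasized.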
In addition to the qualitative risk assessment, the MH-60 program also employed the quantitative risk assessment. In particular, the risk likelihood was evaluated in terms of percent and the impact level was identified in terms of dollars (cost) or days (schedule). Factored cost or schedule exposure is defined as the product of the likelihood and impact in dollars or days. The MH-60 program manager also pointed out that each risk analysis included the determination of a root cause. “By categorizing the root cause, potential mitigation actions become more evident and more effective. For example, if a root cause is traceable to timing and information deficiencies, scheduling and management reserve or parallel development actions are evident potential abatement approaches.”
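The factored-exposure arithmetic reported for MH-60 (likelihood times impact, summed before and after mitigation) can be sketched as follows; the risk names, probabilities, and dollar impacts are hypothetical.

```python
# Hypothetical quantified risks: probability of occurrence and cost
# impact in $M, before and after planned mitigation.
risks = [
    # (name, p_pre, impact_pre, p_post, impact_post)
    ("late avionics delivery", 0.40, 5.0, 0.15, 4.0),
    ("qualification retest",   0.25, 2.0, 0.10, 2.0),
]

def factored_exposure(p, impact):
    """Factored cost exposure = likelihood x impact ($M)."""
    return p * impact

pre = sum(factored_exposure(p, c) for _, p, c, _, _ in risks)
post = sum(factored_exposure(p, c) for _, _, _, p, c in risks)
print(f"Pre-mitigation exposure:  ${pre:.2f}M")
print(f"Post-mitigation exposure: ${post:.2f}M")
print(f"Predicted reduction:      ${pre - post:.2f}M")
```

Comparing the pre- and post-mitigation sums gives exactly the "predicted reduction of cost exposure" the program manager described reviewing.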
Risk Handling: After the program risks were evaluated and prioritized, the action plans were developed for responding to the moderate and high risks. Avoidance, mitigation, and the use of contingency (acceptance) were the common action plans. Low risks were maintained on a watch list, reviewed regularly (quarterly, in the case of MH-60) for changes, and closed when no longer applicable. The program manager of THAAD reported that it was the responsibility of the IPTs to develop risk-handling options to mitigate the risks and monitor the effectiveness of the selected handling options. “The key to success of the risk management effort is the identification and allocation of the resources required to implement the developed risk-handling options.”
Risk Surveillance: At the team level, the risk management board of each IPT continually tracked high- and moderate-risk items. For a particular risk item, once the board agrees that the risk level has changed from moderate or high to low, that item is placed on the watch list and monitored for changes. The board also monitors whether any risk items need additional funding from the management reserve, whether any risks warrant elevation to the PRMB, and whether there are any newly identified risks. At the program level, the PRMB meets regularly (monthly, in the case of E-2C) to review and discuss new potential risks and to manage existing risk mitigation efforts. In the EA-18G program, each IPT presents a weekly risk status at a program management meeting, while the PRAB meets quarterly for a two-day review of all program-level risks. In terms of risk closure, closure criteria were developed to evaluate risk items. According to the criteria, if risks (especially the ones on the watch list) are assessed as no longer a factor, they are closed and removed from the watch list.
Based on our analysis, the program risk management process used by the 12 programs we studied was similar to the processes suggested by PMI (2005), the Australian and New Zealand Standard (AS/NZS 4360), the U.K. Association of Project Management's Project Risk Analysis and Management (PRAM) Guide (1997), and the U.K. Office of Government Commerce (OGC) Management of Risk (M_o_R) guideline. This process, in fact, was used mainly for managing risks at the IPT level. At the program level, risk management practice consists mostly of providing oversight. However, since a program consists of many subprojects, risk surveillance/monitoring must be done rigorously at the program level to ensure program-wide coordination. As discussed earlier, the appropriate use of risk management boards helps facilitate this coordination. The use of a risk database (discussed in the next section) also helps significantly.
The Use of a Risk Database
Our analysis shows that a risk management database was used in the programs we studied. We found that the database can be as simple as a set of risk matrix spreadsheets or as sophisticated as an online data repository system. In the database, risks are described, catalogued, updated, tracked, and so forth. In general, besides the use of the database to ensure (a) integrated risk management, (b) risk verification by the risk management boards, (c) the risk scoring system, and (d) the establishment of metrics and closure criteria for mitigation tracking, additional purposes of the database were (1) to serve as the central location for obtaining risk assessment data for the program, (2) to enable the team to respond quickly to emerging risks, (3) to facilitate shared monitoring of risks affecting multiple subsystems (IPTs), and (4) to be the tool used to communicate risk areas and status to the chief engineers and the program leadership council. In the F-22 program, the risk management database integrates key elements of the risk management process, metrics, team-partner roll-up, real-time updating, multiple-site accessibility, template integration, and scalable databases. In the MH-60 program, the program manager noted that its database has "an automated notification feature where any team member has the ability to enter or status risks and the owner/actionee receives an immediate notification so they are aware of the updates and can access the information directly."
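The core of such a database is a shared register of risk records with a scoring scheme and an update-notification hook of the kind the MH-60 program manager described. The sketch below is a hypothetical illustration, not the programs' actual system; the field names, the 1–5 likelihood/consequence scales, and the `notify` callback are our assumptions:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Callable

@dataclass
class RiskRecord:
    risk_id: str
    ipt: str                 # owning integrated product team
    description: str
    owner: str
    likelihood: int          # assumed scale: 1 (remote) .. 5 (near certain)
    consequence: int         # assumed scale: 1 (negligible) .. 5 (severe)
    history: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # simple likelihood x consequence score used for prioritization
        return self.likelihood * self.consequence

class RiskRegister:
    """Central repository: any member may update a risk; its owner is notified."""

    def __init__(self, notify: Callable[[str, str], None]):
        self._records: dict[str, RiskRecord] = {}
        self._notify = notify

    def add(self, record: RiskRecord) -> None:
        self._records[record.risk_id] = record

    def get(self, risk_id: str) -> RiskRecord:
        return self._records[risk_id]

    def update(self, risk_id: str, note: str, **changes) -> None:
        record = self._records[risk_id]
        for attr, value in changes.items():
            setattr(record, attr, value)
        record.history.append((date.today().isoformat(), note))
        self._notify(record.owner, f"{risk_id}: {note}")  # immediate notification
```

In practice the `notify` callback would send e-mail or post to the program's portal; keeping the audit trail in `history` supports the mitigation-tracking metrics mentioned above.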
Given the high complexity and long duration of these programs, we agree that it is practical to use a risk database as part of program risk management. The database provides an opportunity to coordinate risk management across the program (among different IPTs and the program boards). In addition, with an online database, program risks can be documented and maintained irrespective of geographical location. This finding, in fact, supports the study of Patterson and Neailey (2002).
What Have We Learned?
The findings from this study suggest several implications for program risk management, especially for government-contracted programs. Practitioners may use such information for benchmarking and further improving program risk management in their organization. Researchers may use our findings as a basis for future study.
To be effective in program risk management, the findings show that risk (and opportunity) management should be a part of the organization's policy, procedures, and culture. In particular, it should be part of program performance management. Even though every member of the program is encouraged to be an active participant in risk management, in terms of governance, a program should implement tiered risk management boards at the team (subproject) level and the program level. The team-level board is responsible for managing risks in its area. The members of the board may come from different parties, such as the customer, contractor, subcontractors, or suppliers. Because a program consists of multiple teams, having systems engineers help coordinate risk management may be practical. At the program level, the board should oversee program risk management. While the team-level boards are responsible for managing risks in their areas, the program board should also emphasize managing risks with respect to system integration. The members of the program board should likewise come from different parties, and the board should be co-chaired by the program managers from the customer and the contractor.
The findings also suggest that effective risk management practice needs a formal and standard risk management process. The process should include risk identification, risk assessment (qualification and quantification), risk handling (response planning), risk surveillance (monitoring and control), and risk closure. This process should be used to manage both risks and opportunities. For risk identification, risks should be identified at all levels. The implementation of the team-level board encourages the identification of risks at the lowest possible level. As already mentioned, the program-level board, in addition to providing oversight, should focus on the identification of risks associated with system integration, interdependencies, and interactions. After identification, risks should be assessed, at least qualitatively, to establish their priority. Then an appropriate response plan should be developed. If risks are assessed quantitatively to identify their impact in dollars or days, and the investment in the response plans is recorded, the return on risk investment can be calculated after the impact of the residual risks is assessed. For risk surveillance, as already mentioned, the tiered risk management boards should be implemented. While the program board provides oversight, risks belonging to each team are monitored by that team. However, some risks may warrant elevation to the program board. The risks identified by the program board may not be managed by the board per se; the board may assign those risks to the respective program teams. Since the programs we studied involve high levels of system complexity, it is practical to implement an online database system to facilitate integrated and real-time risk management. We summarize what we have learned in this research into a conceptual model for program risk management, shown in Figure 1.
Figure 1. A Conceptual Model for Program Risk Management
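The return-on-risk-investment calculation mentioned above is not spelled out in the programs' reports. One plausible formulation, with entirely hypothetical figures, treats risk exposure as probability times monetary impact and measures the net exposure reduction per dollar invested in the response plan:

```python
def risk_exposure(probability: float, impact_dollars: float) -> float:
    """Expected monetary impact of a risk: probability times consequence."""
    return probability * impact_dollars

def return_on_risk_investment(exposure_before: float,
                              exposure_after: float,
                              mitigation_cost: float) -> float:
    """Net reduction in expected exposure per dollar spent on the response plan."""
    return (exposure_before - exposure_after - mitigation_cost) / mitigation_cost

# Hypothetical example: a 60% chance of a $1M overrun is mitigated down to a
# 20% residual chance at a mitigation cost of $100k.
before = risk_exposure(0.6, 1_000_000)   # $600,000 expected exposure
after = risk_exposure(0.2, 1_000_000)    # $200,000 residual exposure
rori = return_on_risk_investment(before, after, mitigation_cost=100_000)
# (600,000 - 200,000 - 100,000) / 100,000 = 3.0
```

The same arithmetic works with schedule impact in days; the key prerequisites, as the text notes, are quantitative assessment before and after mitigation and a record of what each response plan cost.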
The objective of this study was to investigate program risk management. Based on the reports submitted for the Aviation Week 2007 Program Excellence Award, we conducted content analysis to explore risk management of 12 major defense programs from five organizations. The majority of these programs (11 programs) were based on government contracts.
We found that, with support from organizational policy, procedures, and culture, risk and opportunity management have been practiced rigorously in these programs. The members of the programs have positive attitudes toward risk and opportunity management. In addition, a formal and standard risk management process and methodologies, including a risk database, have been used. Because little research has been done on program risk management, these findings contribute a program risk management process derived from real-life contexts. Researchers may use these findings as a basis for future study. For practitioners, our findings may serve as best practices that other programs can benchmark against for further improvement. Because the sample includes only contracted programs from the aerospace and defense industries, practitioners applying these findings in different settings may have to adapt them to their programs.
Given its preliminary nature, this study has some limitations. We recognize that the content analysis was based on self-reported documents, and the information in those documents may not represent what was done in reality. However, with our research design, which analyzed the patterns emerging across multiple cases, our findings were cross-validated. Nonetheless, further research, such as case studies, should be conducted to further investigate the risk management practices of these programs.
Baccarini, D. & Archer, R. (2001). The risk ranking of projects: A methodology. International Journal of Project Management, 19(3), 139.
Baron, M. M. & Pate-Cornell, M. E. (1999). Designing risk-management strategies for critical engineering systems. IEEE Transactions on Engineering Management, 46(1), 87.
Bauer, M. W. & Gaskell, G. (Eds.) (2000). Qualitative researching with text, image, and sound: A practical handbook. London: Sage Publications.
Chapman, C. & Ward, S. (2000). Estimation and evaluation of uncertainty: A minimalist first pass approach. International Journal of Project Management, 18(6), 369.
Cooper, D., Grey, S., Raymond, G., & Walker, P. (2005). Project risk management guidelines. West Sussex, U.K.: Wiley.
Datta, S. & Mukherjee, S. K. (2001). Developing a risk management matrix for effective project planning – An empirical study. Project Management Journal, 32(2), 45.
Dillon, R. L., Pate-Cornell, M. E., & Guikema, S. D. (2005). Optimal use of budget reserves to maximize technical and management failure risks during complex project development. IEEE Transactions on Engineering Management, 52(3), 382.
Elkington, P. & Smallman, C. (2002). Managing project risks: A case study from the utilities sector. International Journal of Project Management, 20(1), 49.
Flyvbjerg, B. (2006). From Nobel Prize to project management: Getting risk right. Project Management Journal, 37(3), 5.
Hillson, D. (2002). Extending the risk process to manage opportunities. International Journal of Project Management, 20(3), 235.
Jaafari, A. (2001). Management of risks, uncertainties and opportunities on projects: Time for a fundamental shift. International Journal of Project Management, 19(2), 89.
Keizer, J. A. & Halman, J. I. M. (2007). Diagnosing risk in radical innovation projects. Research Technology Management (September-October), 30-36.
Kwak, Y. H. (2005). Examining risk tolerance in project-driven organization. Technovation, 25, 691-695.
Manley, R. C., Shaw, W. H., & Manley, T. R. (2007). Project partnering: A medium for private and public sector collaboration. Engineering Management Journal, 19(2), 3-11.
Miller, R. & Lessard, D. (2001). Understanding and managing risks in large engineering projects. International Journal of Project Management, 19, 437-443.
Olsson, R. (2007). In search of opportunity management: Is the risk management process enough? International Journal of Project Management, 25(6), 579.
Patterson, F. D. & Neailey, K. (2002). A risk register database system to aid the management of project risk. International Journal of Project Management, 20, 365-374.
Project Management Institute. (2005). A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (2004 ed.). Newtown Square, PA: Project Management Institute.
Raz, T. & Michael, E. (2001). Use and benefits of tools for project risk management. International Journal of Project Management, 19, 9.
Simon, P., Hillson, D., & Newland, K. (Eds.) (1997). Project risk analysis and management (PRAM) guide. U.K.: The Association for Project Management.
Taylor, H. (2006). Risk management and problem resolution strategies for IT projects: Prescription and practice. Project Management Journal, 37(5), 49-63.
Vose, D. (2000). Risk analysis. Chichester, U.K.: Wiley & Sons.
Ward, S. & Chapman, C. (2003). Transforming project risk management into project uncertainty management. International Journal of Project Management, 21, 97-105.
Zeng, J., An, M., & Smith, N. J. (2007). Application of a fuzzy based decision making methodology to construction project risk assessment. International Journal of Project Management, 25(6), 579.
Zou, P. X. W., Zhang, G., & Wang, J. (2007). Understanding the key risks in construction projects in China. International Journal of Project Management, 25(6), 601.
© 2008 Project Management Institute