The project risk maturity model – assessment of the U.K. MoD's top 30 acquisition projects

Graham Lovelock, Risk Management Team Leader, Procurement Development Group, DPA

Abstract

Following a successful pilot study of the HVR Project Risk Maturity Model (RMM), the UK MoD Defence Procurement Agency (DPA) commissioned a full programme of risk management capability assessments for the Integrated Project Teams (IPTs) responsible for its 30 major military equipment acquisition projects. This paper reports the findings from these assessments and shows how they were used to identify and prioritize process improvements at both project and corporate levels. It also reports on the findings from subsequent assessments (completed in October 2003), which measure the extent to which these improvements have been achieved.

The authors believe that this is the first occasion on which a Risk Maturity Model has been used to assess an organization's projects on this scale. The paper concludes with a number of significant lessons learned. These lessons include common areas of weakness in the risk management process and the ways in which an organization's executive can use maturity models to set targets and gain assurance as to the effectiveness of the processes that its project teams deploy.

Introduction

This paper reports on the way in which a project Risk Maturity Model (RMM) has been used to assess the risk management capability of project teams within the UK MoD's Defence Procurement Agency (DPA). The purposes of the paper are to:

  • show how RMM measurements can be used to identify and monitor improvements to an organization's risk management at both project and “corporate” levels, and
  • report findings on common weaknesses that may also be significant to other large organizations.

The paper starts by explaining the context in which the RMM has been used. It then describes how the RMM is structured and how its measurement capability was proven in a DPA pilot study. Finally, following a fuller roll-out of RMM assessments, the paper discusses the lessons learned about the DPA's risk management processes and how these lessons have been addressed with corrective action.

Smart Procurement and the CADMID Project Cycle

The Smart Procurement initiative was introduced by the UK MoD in 1999. Its aim was to simplify the procurement process and to gain better-focussed control over the major project decision points, and to this end the CADMID (Concept, Assessment, Demonstration, Manufacture, In-service, Disposal) cycle was introduced. Smart Procurement mandates two major decision points: Initial Gate, prior to the Assessment phase, and Main Gate, prior to full-scale development in the Demonstration phase. The most critical point is Main Gate approval, since this is the time at which the UK Government commits funding to the full project acquisition cost. The MoD's Investment Approvals Board (IAB) is responsible for decisions at the two gate approvals. To help formulate its decisions, the IAB is provided with a business case that includes an assessment of the project's exposure to cost and schedule risk. Main Gate approval should only be given if risk has been reduced to an acceptable level.

At both Initial Gate and Main Gate, three Confidence Figures are presented for the overall project cost and schedule implications of continuing the project to the start of its In-service phase. The three Confidence Figures provide a forecast of the 10th, 50th and 90th percentile project outcomes and are based on the output of quantitative cost and schedule risk analysis, using as input three-point estimates for all scheduled activities, anticipated costs, and schedule and cost risk impacts. A relatively high degree of uncertainty is expected at the end of the Concept phase, so the Confidence Figures should be relatively wide at Initial Gate. Risk reduction during the Assessment phase should then narrow the spread of the Confidence Figures to an acceptable range before Main Gate approval is sought. Exhibit 1 illustrates this principle by showing the Confidence Figures estimated for a project and comparing the project outcome with the forecast at Main Gate.

Exhibit 1 – Confidence figures at Initial and Main Gate Approvals

Only 10% of projects should exceed the envelope defined by the schedule estimates, and 10% should exceed the equivalent envelope for costs. Since project cost and schedule performance are positively correlated, it can be expected that a number of these will be the same projects. If the two sets of overruns were entirely separate, 20% of projects would breach at least one envelope; if they coincided completely, only 10% would. If the risk-based forecasts are accurate, between 80% and 90% of all projects should therefore be delivered within their forecast envelopes for both schedule and cost.

The DPA Risk Maturity Model Programme

In 2001, the Risk Team within the DPA Procurement Development Group (PDG) was asked to identify a process for measuring the risk management capability of DPA Integrated Project Teams (IPTs). DPA IPTs are responsible for managing equipment acquisition projects up to the start of the In-service phase of the CADMID cycle.

The need to measure risk management capability was recognised when evidence from several sources showed that Main Gate approval had been granted to a number of projects on which risk subsequently proved to be too high. As a result, the proportion of projects exceeding their envelope of cost and/or schedule estimates was higher than forecast. The implication was that either risks were not being adequately managed following Main Gate approval or the risk assessments provided at Main Gate were unrealistic due to deficiencies in the analysis (Hopkinson, 2001).

HVR Consulting Services was contracted by the PDG Risk Team to measure IPT risk management capability using the HVR Risk Maturity Model. The resulting RMM assessment programme has evolved through the following four phases:

  1. Pilot Study – August to December 2001
  2. Assessment of 30 Major Projects – April 2002 to March 2003
  3. Reassessment of Major Projects – April to October 2003
  4. Full roll-out to all large DPA projects approaching Main Gate – from October 2003.

During this period, expertise in the audit process used to conduct assessments has been transferred from HVR to the PDG Risk Team. As a consequence, the DPA is now self-sufficient in the use of this technique.

The HVR Project Risk Maturity Model

The Project RMM is a computerized tool developed from the generic Risk Maturity Model proposed by David Hillson (1997). A number of other sources were used in developing the model, including:

  • The PRAM Guide, published by the Association for Project Management (1997).
  • The “Turnbull Guidance” (Internal Control: Guidance for Directors on the Combined Code).
  • Insights provided by papers, principally from the International Journal of Project Management, and books, most notably Project Risk Management: Processes, Techniques and Insights (Chapman & Ward, 1997).
  • The experience of risk management consultants working for HVR Consulting Services.

The model has been structured to give an overall project assessment of risk management capability at one of four levels. These were defined by Hillson's 1997 paper “Towards a Risk Maturity Model”. The model has also been structured to indicate the areas of the project's risk management process that should be prioritized for improvement. This is achieved through the assessment of six risk management “perspectives” as illustrated in Exhibit 2.

Exhibit 2 – RMM Results for a Pilot Study Project

The Project RMM contains 46 questions. For each question the model offers four alternative responses, corresponding to the four levels of RMM capability. In order to select a response, the project has to satisfy all the criteria contained in that response and also satisfy or exceed all criteria contained in the responses for lower RMM levels. Weighting values within the model assign more significance to some questions than others and also allow for the fact that certain questions contribute to the measurement of more than one perspective. In other words, the response to a single question may provide different information about the risk management process depending on the perspective from which it is viewed. The product is designed to be a multi-dimensional model that avoids ambiguity by making a single-level assessment of overall risk management capability (Chapman & Ward, 2003).

Pilot Study Assessments August 2001 – December 2001

An RMM pilot study of eight IPTs was conducted in the period September – December 2001. The pilot study projects were selected to represent a broad range in terms of size, project phase and equipment type. Another key factor for selection was each IPT's reputation for management effectiveness, including its ability to deliver on schedule. The inclusion of both “good” and “poor” projects allowed the RMM to be calibrated (Hopkinson & Brown, 2002).

Exhibit 2 shows the RMM output for one of the IPTs assessed during the pilot study. The following paragraphs explain how this output should be interpreted.

The bars on the histogram represent the IPT's risk management capability as measured from six perspectives. Three of these perspectives (Risk Identification, Risk Analysis and Risk Mitigation) represent the core risk management process. Each of the three plays a critical role in the process. A weakness in any one of them undermines the risk management process as a whole.

The other three perspectives (Stakeholders, Project Management and Risk Management Culture) measure aspects of the process that are also critical to capability. The project stakeholders include its customers, its subcontractors and the Executive of the company or organisation to which it belongs. Each has an important role in defining the project objectives and ensuring that risk is borne by the party that is best placed to manage it. The project management perspective includes an assessment of the effectiveness of the risk review processes and the extent to which risk management is integrated with other project control processes such as planning and cost control. Finally, the risk management culture perspective is designed to assess the project team's ability to work collectively on the management of risk at source. Key questions concern the quality of internal communication and the willingness of individuals to identify risks and carry out the actions needed to mitigate them.

The RMM's overall assessment of a project's risk management capability is based upon which of the six perspectives has the lowest value. This is considered to be the weak link in the chain and, therefore, the aspect of the risk management process for which improvements should be prioritized.

The RMM results for IPT 1 (Exhibit 2) show that it had a Level 2 risk management capability. For this to be raised to Level 3, recommendations were made to improve its processes for risk analysis and risk mitigation.

The RMM results for this IPT and for the other pilot study assessments were generally judged to be fair and correct by both the DPA PDG team and the participating pilot study IPTs. Lessons learned from the pilot study were used to make some minor alterations to the model. On this basis, the RMM was judged to have been calibrated and, therefore, suitable for rolling out to projects on a larger scale.

Major Project Report RMM Assessments April 2002-March 2003

Following the pilot study, a decision was taken to roll out the RMM assessment process to all of the projects in what is referred to as the Major Projects Report (MPR) population. This is a collection of thirty projects, selected on the basis of size and procurement significance, whose progress is reviewed annually by the UK National Audit Office. The current population includes projects for all three of the UK's armed services, covering an initial acquisition value totalling approximately £60 billion.

By the end of March 2003, RMM assessments had been completed for all but one of the thirty MPR projects. Exhibit 3 summarizes the results obtained.

Exhibit 3 – Summary of 29 RMM Assessments April 2002 – March 2003

The results showed that significant improvements would be required on the majority of projects in order for the DPA to achieve its target for the MPR population to be at RMM Level 3. All IPTs were given prioritized recommendations for process improvement. Another approach used by the PDG Risk Team to drive improvements was to organize learning-from-experience sessions, at which weaker-performing projects were able to learn from projects that had strengths where they had weaknesses. The 29 assessments had confirmed the earlier pilot study observation that the relative strengths and weaknesses of IPTs' risk management capability varied significantly. It was therefore possible to identify which IPTs were particularly capable in each aspect of the risk management process and to use their practice as a model for other IPTs to follow.

Three Common Risk Management Process Weaknesses

Beyond variations in the patterns of risk management capability, three significant areas of weakness were found to be common to the great majority of IPTs. These were:

  1. Weak application of processes for the quantification of overall project risk for cost and schedule.
  2. Inadequate engagement with stakeholders on risk issues, most commonly with the IPT's “Customer 2” organizations (the eventual users of the equipment).
  3. Weak translation of risk mitigation plans into implemented action, often associated with a failure to review progress against the implementation of such actions.

These three weaknesses were recognised to be of strategic importance to the risk management capability of the DPA at a “corporate” level. The way in which each of the three was acted upon is described in the following paragraphs.

Weak application of quantitative risk analysis processes can be expected to result in unrealistic three-point estimates, and hence unrealistic Confidence Figures, at the critical programme approval points of Initial Gate and Main Gate. In all cases where this weakness applied, the consequence was that three-point estimates were unrealistically narrow and, arguably, biased towards optimism. This weakness was of fundamental importance to the decision-making process that underlies Smart Procurement: if figures describing the exposure of projects to risk are unreliable, how can the IAB make a judgement as to the readiness of projects to proceed to the next stage of the CADMID cycle?

The corrective actions taken to address this weakness have included the establishment of two specialist teams: one to provide guidance and advice on the three-point estimating process that projects are expected to apply, and the other to provide hands-on support to project teams in the creation of estimates and in the use of computer-based analytical tools to produce the Confidence Figures. These teams, numbering 15 people in total, are available to support teams across the whole of the MoD and are not restricted to DPA projects alone. On-line access to detailed guidance on three-point estimating is provided for all MoD teams at http://www.ams.mod.uk/ams/content/docs/risk/webpages/3ptestim.htm. The computer tools have been provided corporately and support all elements of the risk management process.

Inadequate engagement on risk issues with Customer 2 stakeholders can be expected to result in project trade-offs that do not adequately reflect the requirements of the armed services. This is a difficult issue, since many IPTs manage projects that have a number of Customer 2 interfaces. Engagement with several Customer 2 organizations creates a complex stakeholder environment that exposes projects to the risk of frequent changes to the technical requirements. However, it has been recognised that the consequences of this risk are lower if the Customer 2 organizations are actively engaged in the risk management process during the main risk reduction phases of the CADMID cycle, prior to Main Gate approval. As a consequence, the level of Customer 2 engagement has been increased during the Concept and Assessment phases.

Weakness in translating risk mitigation plans into implemented action is, in the opinion of the authors, a common problem with project risk management processes that is not confined to the DPA or the defence industry. This opinion is based on discussions with other risk practitioners in professional forums. For example, the UK Association for Project Management's (APM) Special Interest Group (SIG) in risk management held a seminar on this issue in 2001. Nor is the problem necessarily confined to projects. At a presentation to the APM Risk SIG, a corporate risk manager for a large UK listed company admitted that his company had previously been “very good at risk identification, risk assessment and risk filing”!

The evidence emerging from RMM assessments suggested that IPTs were able to identify risks, assess them and identify risk mitigation responses more effectively than they were able to actually implement the responses. Yet the implementation of responses is a critical link in the risk management process. The obstacles to implementing risk mitigation responses were often associated with planning inertia or the prioritization of short-term objectives at the expense of longer-term project outcome. Correction of these defects requires commitment from the IPT's leadership, and many IPT RMM assessment reports identified this as a key recommendation for improvement. Progress against the implementation of these recommendations was reviewed during the RMM reassessments described in the next section.

RMM Reassessments April 2003-October 2003

A new objective was set for the MPR population to have reached a risk management capability of RMM Level 3 or above by October 2003. To this end, full reassessments were made of the 17 IPTs that had failed to achieve RMM Level 3 or higher at their first assessment. These reassessments included a review of progress against the actions recommended in the reports from the first assessment. The results showed that all but two IPTs had achieved the improvement required to reach the minimum level of risk management capability. Exhibit 4 compares the overall RMM levels measured at the first and second assessments.

Exhibit 4 – Comparison of Second RMM assessments with Initial Assessments for 17 IPTs

Exhibit 4 provides confirmation that the actions taken by the DPA have mostly been effective in raising the standards with which risk management is conducted by its major project IPTs. Fifteen of the 17 reassessed projects achieved RMM Level 3 or above, and four of these moved to Level 4 during the course of the year. A key advantage of using the Risk Maturity Model has been that improvements in risk management capability have been measured.

Moving forwards – A Risk Maturity Model Policy for all DPA projects

For strategic management purposes, DPA projects are banded into four categories dependent on total value. Most of the MPR population falls into the highest category, Category A, although some fall just below this. At the bottom end of the scale are the Category D projects, valued at £20 million or less. As from April 2004, projects in Categories A to C seeking Main Gate approval will be expected to have demonstrated that their risk management capability is at RMM Level 3 or higher. This policy has been designed to ensure that projects submitting business cases for IAB approval do so with risk data that meets a minimum level of integrity.

Conclusions

This paper has described the way in which a Risk Maturity Model has been used to measure the risk management capability of project teams within a large and complex organisation. A consequence of the measurement process has been that it has been possible to prioritize actions for improvement at a project and a corporate level. Measurement has also allowed the effectiveness of these improvements to be monitored and reviewed.

The authors believe that the 29 assessments conducted represent the largest and most comprehensive survey of risk management practice within a single organisation published to date. Three common points of weakness were found that had to be addressed at a “corporate” level. These same points of weakness may well exist in other project-based organizations.

References

Chapman, C. & Ward, S. (1997). Project Risk Management: Processes, Techniques and Insights (1st Edition). Chichester, UK: John Wiley & Sons Ltd.

Chapman, C. & Ward, S. (2003). Project Risk Management: Processes, Techniques and Insights (2nd Edition). Chichester, UK: John Wiley & Sons Ltd.

Comptroller and Auditor General (2004). Ministry of Defence Major Projects Report 2003. London, UK: National Audit Office.

Simon, P., Hillson, D. & Newland, K. (1997). Project Risk Analysis and Management (PRAM) Guide. High Wycombe, UK: Association for Project Management.

Hillson, D. (1997, March). Towards a Risk Maturity Model. International Journal of Project and Business Risk Management, 1(1).

Hopkinson, M. (2000, May). Using Risk Maturity Models. Kluwer's Risk Management Briefing, Issue 40, 4-8.

Hopkinson, M. (2000, November). The Risk Maturity Model. Risk Management Bulletin, 5(4), 25-29.

Hopkinson, M. (2001, June). Schedule Risk Analysis: Critical Issues for Planners and Managers. PMI Europe Conference, London, UK.

Hopkinson, M. & Brown, R. (2002, November). Measuring Risk Maturity in UK MoD Projects. Project Manager Today Risk Management Conference, London, UK.

Turnbull, N. et al. (1999). Internal Control: Guidance for Directors on the Combined Code. London, UK: Institute of Chartered Accountants.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2004 HVR Consulting Services Ltd.
Originally published as a part of 2004 PMI Global Congress Proceedings – Prague
