Practical implications of governance frameworks for public projects


Ole Morten Magnussen, Researcher, Concept Program, Norwegian University of Science and Technology

Helene Glasspool, Management School, Southampton University, UK


There are various governance frameworks set up by different authorities and governments for public projects, but the effects of these on the development of project plans and estimates are not clear. Such frameworks are set up as self-evidently appropriate, but we know that apparently self-evident correctness sometimes does not apply to complex projects (Malgrati & Damiani, 2002). The effect of bias in the estimates, such as optimism bias and strategic under-estimation, is known (Flyvbjerg et al., 2003); however, it is not clear how the various governance frameworks exacerbate or ameliorate these effects, and this is crucial to understanding how to tackle this problem, rather than resorting to the “sledgehammer” of simply adding factors onto estimates.

Supported by PMI, the Concept Programme (Norway) and Southampton University are currently undertaking a study analysing the frameworks for front-end appraisal and governance of public investment projects. The aim of this work is to examine how the governance regimes for major investment projects in different countries affect project performance, and to compare this with the frameworks' intended effects. It focusses on cost and time management, considering how and why underestimation occurs rather than simplistically comparing estimates with out-turns, which would not distinguish under-estimation in the early governance phase from execution-phase effects. The study also examines four case studies, to see how the implemented governance frameworks actually affect project management and how consistent they are with the stated aims. There is only room to report a small part of this study, so this paper concentrates on how the frameworks work out in practice, and on the consistency between the frameworks and good project management.

This was a small study undertaken to find initial results: a very small number of case studies in just two countries. Norway and the UK were chosen as each having a fairly new yet well-established public-sector project governance framework. It emerged during the UK study, however, that defence projects (the largest public projects) were governed under a different framework from other UK public projects, so it was decided to study one defence project and one civil project in each country. Similarity between the projects in each country was sought but, as in most case-study research, access was difficult and to a certain extent we had to accept the projects that were available. The study was organised as follows. First, the literature gave us a theoretical underpinning for governance in general. This enabled us to specify and structure the characteristics of a public project governance framework, which were used in semi-structured interviews with experts in these frameworks. Analysis of the differences between the frameworks then gave the foundations for the case studies.

Governance is a term with many meanings and usages. Corporate governance has various models in different countries, which can be categorised as shareholder-value systems (US, UK, Canada), where only shareholders are legitimate stakeholders, and communitarian systems, which hold other constituencies such as employees, banks, and the community to be legitimate stakeholders (including “family-based” systems, e.g., Asia Pacific) (see e.g., Detomasi, 2006). For us, public governance “refers to the formal and informal arrangements that determine how public decisions are made and how public actions are carried out, from the perspective of maintaining a country's constitutional values in the face of changing problems, actors, and environments” (OECD, 2005). As far as projects are concerned, Governance of Projects comprises those areas of governance (public or corporate) that are specifically related to project activities; good project governance ensures that relevant, sustainable alternatives are chosen and delivered efficiently (based on APM, 2002). There seem to be three main goals: choosing the right projects, delivering the chosen projects efficiently, and ensuring projects are sustainable. The second of these goals, delivering the projects efficiently, is important to avoid wasting (public) resources and involves the framework established around project execution; this is governance of projects. Choosing the right projects (to ensure the right objectives are achieved) and ensuring the projects (strictly, the goals and effects of the projects) are sustainable is governance through projects: the context in which the critical decisions are made. This is the true Governance of Projects at a public or corporate level.

The Development of the Frameworks

The UK and Norway are similar countries in many ways. The UK, however, has a much larger economy with much more limited public funds, and higher unemployment. In the UK there was naturally a motivation for putting emphasis on “value for money” from the start; in Norway the focus from the beginning was directed against cost overrun: a control measure to ensure realistic budgets and a good basis for project execution. The initiatives in both countries are based on a wish to improve governance in a wide sense. The political backgrounds in the two countries seem similar, apart from the Nordic/Scandinavian social welfare tradition and the Anglo/American strong market orientation. The UK has a strong public administration tradition and a large, influential Civil Service; Government business is divided into Departments, with responsibility for a project lying entirely within one Department. In Norway the Sectoral Ministry is responsible for large investment projects.

UK Office of Government Commerce (OGC)

In the late 1990s, Peter Gershon of GEC was asked by the then Prime Minister to look at procurement in government. Gershon wrote an influential report (Gershon, 1999) covering general commodity procurement and projects, and was asked to set up the OGC in 2000, pulling together staff from various agencies. The report led to the establishment of the OGC (2004) Gateway Process™ with six well-defined, standardised and documented Gateways: Gateway Review 0 looks at strategic management at the programme level (repeated several times where appropriate), and Gateways 1 to 5 operate at the project level, covering different stages of the project life-cycle. Private-sector engagement comes from the use of consultants with private-sector experience who have been individually accredited by OGC for Gateways. The six Gateways span from the Ministerial level all the way down to suppliers; governance at the Parliament/Government level is provided by mechanisms outside the scope of this study.

Later came categorisation (based on high political significance; riskiness of the programme; and cost, a lower-level criterion): the top level is the “Top 20” Mission Critical projects, for which the OGC also sits on the project board. The next level is “High Criticality”; for these, Gateway reviews have to use senior people or even entirely independent reviewers. Different rules apply to “Medium” and “Low Criticality” projects. Later still, a general concern for better programme management gave rise to the development of small “Centres of Excellence” as part of the framework, bringing “best practice” to the Department, acting as a liaison point within a Department for the OGC, and reporting directly to the Permanent Secretary (head of the Department). More recently a Project Initiation Process has emerged. The espoused aim of the framework is specifically for OGC to achieve financial savings (on commodities and projects combined, according to procedures laid down by the National Audit Office). OGC is currently reforming, becoming a smaller, more focused organisation, reducing staff by almost half and taking on new challenges. The OGC currently works by influence; its recommendations have not to date been mandated (although this is set to change); this is the traditional UK civil service culture. The OGC does not consider individual project reports from Gateways; rather, it looks for systemic trends. Reports on a particular project go only to OGC and the sponsor (in PRINCE2™ the “Senior Responsible Owner” or SRO), although special reports on the top “mission critical” projects go to the Prime Minister's Office. There are a substantial number of people involved in implementing the framework and giving advice.

UK Ministry of Defence (MoD)

The one major section of the UK public sector that uses a different framework is the MoD. The MoD has always had an “extended life-cycle,” starting very early and ending very late. The framework came in as the relationship with industry changed, becoming more cooperative and ensuring that both the whole industrial base and UK sovereign capability are considered. Contracting defence budgets gave motivations for Value for Money (and for getting more accurate predictions). The CADMID process, part of SMART acquisition, came in around 1998 following McKinsey work, which also showed the need for a “stronger customer” within MoD (known as “Capability Management,” led by a Deputy Chief of Defence Staff). The framework is anchored within the MoD Main Board. Following the McKane report (Ministry of Defence, 2006) the procurement and logistics agencies were unified into “DE&S.” This enables the other espoused goal of the framework: to manage the MoD's projects as a single portfolio to get the best capability for the MoD as a whole. The UK MoD system works with different types of projects, each having a different categorisation. There are two Gates: the Initial Gate to release funds for assessment, and the Main Gate to release funds for the main project. Projects go to the Investment Appraisal Board via two routes simultaneously: from the advocate of the project (the SRO) and via “independent” scrutiny (within MoD but independent of the project). (A Foundation Review is also being brought in.) The system is vertically integrated in that Gates look at the entire project, including the industrial base. Each project is undertaken by an “Integrated Project Team” (IPT), responsible for the project to the SRO but responsible overall within DE&S. Thus, the MoD considers the whole portfolio of projects; the “Capability” customer considers the programme; and the IPT the individual project. The Chief of Defence Materiel reports against corporate targets on DE&S overall performance.


Norway Ministry of Finance Quality Assurance (QA) Scheme

The triggering incident in Norway was a series of unsuccessful major projects and repeated project overspends during the 1980s and 1990s. Peder Berg, Deputy Secretary General of the Ministry of Finance, led a government committee investigating a number of project cases (Berg et al., 1999). The Ministry of Finance initiated the development of an obligatory Quality Assurance Scheme in 2000, with mandatory external assessment of projects (performed by consultant companies) before the financing decision by Parliament (mandatory for all state-financed projects over NOK 500 million / £42 million, excluding Oil & Gas). The goal was to ensure improved quality-at-entry. It was a bottom-up process within the Ministry, with Peder Berg as a driving force, although the decision to introduce the governance framework was made by the Prime Minister's office. In 2005 a second generation of the framework appeared, reflecting the need to do something at an earlier stage. The same entity is responsible for the framework for all sectors (with few exceptions), which is expected to give the same governance across sectors. For both generations of the QA regime the intention was to establish a system in which politics and administration are well divided, with the interplay between the two sides well understood.

The whole framework is a control measure. Control rules are documented in the contracts between the Ministry of Finance and the consulting companies, and the control object is the set of documents assessed in the regime. There are two gateways. QA1 focusses on the rationale of the project, that is, the early choice of concept and strategy and the decision to initiate project pre-planning (using a compulsory dossier of four documents), looking at many alternatives. QA2 concerns the decision to finance the project (looking at one alternative only), controlling the Project Management Plan with several sub-documents and a focus on cost. QA1 and QA2 give a tool for control from the top (Parliament—Government—Ministry—Agency); vertical integration stops downward at the agency level, and the private sector is not addressed. There are several coordination Forums where the Ministry of Finance gathers key interested people for discussions, often resulting in a common understanding and definition of terms and professional standards. The Concept Research Programme supports the development of the regime and studies the practices of the Agencies and consultants.

Comparison of Frameworks

The three initiatives seem to have been prompted by similar developments and similar motivations; the OGC and Norwegian initiatives are both anchored at the top political level and organised under the Ministry of Finance. OGC goals are more explicit, administratively focused, and measured in terms of money; in Norway the goals are more clearly politically anchored, without specifying the expected effect of implementation. All frameworks sought to include transparency (openness for scrutiny, maximum openness about the basis for decisions), learning, willingness to change, the setting of common, high professional standards, political anchoring of the framework at a high level, and non-political QA/Gateway review. The process of development, however, was genuinely different. In Norway the initiating process was bottom-up, as was the implementation of the improvement. In the UK both processes were top-down, as was the implementation of the management system.

Different strategies were chosen: Norway broke with tradition and introduced a new arrangement, while the UK built on tradition. The Norwegian and MoD frameworks are mandatory; the OGC framework currently works by influence, although that is set to change. The Norwegian framework is a bottom-up process of learning from cases, transferring experience to other sectors and building “the new profession”; the OGC framework is, to some extent, a top-down introduction of a common “quality system.” Both Norway and the OGC have established a support organisation looking for systemic trends: in the UK as a permanent public administration entity; in Norway as an external research programme. The MoD reports on systemic trends at a top level. The OGC looks only at systemic trends; Norway and the MoD also look at single cases. Norway has a centralised coordination forum, while the OGC has established distributed “Centres of Excellence” (the MoD is already a single, organised entity).

Comparing the frameworks highlights some differences. Vertical and horizontal integration differ. A notable characteristic of the Norwegian framework is its simplicity (a more macro-analytic perspective). Comparison of the framework components shows some of the same characteristic simplicity; the UK side is more comprehensive and suited to more detailed control measures at a lower hierarchical level (more micro-analytic from a PPM point of view). The organisation implementing the UK governance framework also supplies the management system, answering the question “how to achieve,” whereas the Norwegian framework only answers “what to achieve.” External consultants are used in both countries, but in Norway companies are assigned, while in the UK it is individuals. The Norwegian framework is mandatory, so the consultants are not the ones who have to persuade the agencies and project organisations. In the UK, the assessment requires only a small amount of effort from senior consultants; review roles are defined in detail, and there is a standard report format. In Norway the QA team (with roles agreed in the Forum) performs a complete independent analysis of the project over many months. For the MoD, assessments are effectively internal to the MoD; roles and the dossier format are defined in detail. In Norway the control measures are focussed on cost and risk (initially at least, but moving more toward benefit and value), whereas the UK side is focussed on the business case and value for money. The Norwegian life-cycle chooses the concept and strategy very early; the MoD has an even more extended life-cycle with very early gateways.
For cost and time estimation, the OGC framework is complex, complete, and detailed; the MoD framework is a high-level approach linked to concrete guidelines; and the Norwegian approach is the simplest. What is perhaps most remarkable at this level is the Norwegian framework's scarce reference to time planning, even though it is the only system that carries out full independent cost estimates.

Four Cases

Norway Defence Case (Skjold)

The Skjold class Fast Patrol Boat (FPB) project encompasses the construction of a new vessel, weapon systems, personnel training, and logistics and support. The project includes building a series of six vessels and is an example of a complex defence procurement project. From an overall perspective the Skjold project is currently on budget and schedule. None of the vessels has yet been delivered to the end user, but the construction phase is proceeding as planned. The complexity of the decision-making process, the technology, and the contractual arrangements proved a challenge for the quality assurance in this case.

Fast patrol boats have long been part of the Norwegian Navy's strategy. In the early 1990s a need to update the current vessels and start planning for the next generation was identified. A pre-series vessel, HNoMS Skjold, was completed in 1999 as a separate project. The principal decision to establish the Skjold class FPBs as a part of the Norwegian Navy was made in a broad political compromise in the Norwegian parliament in June 2001. The recommendation from the Ministry of Defence, however, was not to pursue the Skjold project further: the Chief of Defence had concluded in a Defence Study (2000) that the investment and operating costs of the proposed fleet of FPBs should not be prioritised, considering the other investments and current liabilities of the Norwegian Defence.

During the planning phase of this project, the experience from updating the previous class of vessels and from the development and building of the pre-series vessel (the prototype) was important. The result was a unique vessel hardly comparable to any other. The pre-project documents were subject to a Quality Assurance 2 (QA2) in 2002, which concluded that the project was well planned and prepared to go into the next phase. In 2003 the Norwegian parliament finally decided to execute the project. Again the Chief of Defence appealed to the Parliament not to make this decision but to wait for the next long-term plan, due the following year. The appeal was not heeded and execution started. The process described here seems to uncover some weaknesses in the quality assurance at the time. Some indications were as follows:

  • The basic need of the project was not part of the QA assignment. (This was not introduced until later, as a part of QA1, in 2005.)
  • No independent cost estimation was done. The analysis was based only on the project's own cost data, primarily the First Target Price from the supplier consortium.
  • The QA2 report (March 2002) commented that the documents produced at the time were of good quality up to the stage of entering a contract, but that the project was not prepared to enter the execution phase (p. 9). There was no Project Control Plan established at the time of the QA2; therefore, this could not be controlled. This was in an early phase of the Norwegian QA scheme, and the practical procedures were not yet established as a common basis (this happened later, in 2003).
  • Due to ongoing negotiations, the QA consultants were not allowed access to the suppliers' personnel. This cut them off from a prime source of information and calls into question the timing of the QA2 itself.

These weaknesses were due to the governance framework being less than mature, combined with the special situation of analysing a unique, highly complex project within a sensitive Defence sector whose culture, for obvious reasons, is not known to be very open about sharing information. The quality assurance documented here is representative of its time.

The impact of the QA2 was limited in this case. No significant changes to cost estimates or schedules were made, the project organisation did not develop or produce any new or specially adapted documents, and the analysis did not identify any new risk elements. However, the process of the QA2 gave reassurance that the project was well planned. As previously noted, the project is proceeding according to plan, so there is nothing to indicate this was not a sound conclusion.

The potential impacts of the QA were thus not realised in this case. The most important aspect illustrated by this case is that no matter how clear the professional advice for or against the project, and whatever the result of extensive use of rational methods, the final decision is a political one. This is not altered by the QA2 or any other control instrument. And this is how it should be: it is anchored deep in the democratic system and the governance framework.

UK Civil Case (2MS)

The UK Home Office began a procurement process in 1996 after a review of its accommodation concluded that its existing estate needed to be refurbished. By 1998, the Home Office had obtained three competing bids, each proposing the existing building at 2 Marsham Street (2MS) as temporary accommodation during the refurbishment. Annes Gate Property plc (AGP), however, made a developed and costed variant bid for a new building at 2MS, and it was this plan that was adopted. Two bidders submitted Further Best and Final Offers, and AGP's turned out to be the winning bid. The Project Team probed the bid extensively, covering the history of the company and previous similar projects, risk, the detailed resourced programme, and a Quantity Survey-type analysis of the price.

The first Gateway Review of the contract was a Gateway 3, in January 2001, in the lead-up to placement of the contract (only around eight months after the foundation of the OGC). The aims of the Review were (very briefly) to confirm the business case and benefits plan in the light of the final tender, to confirm that the plan should deliver the specified outcomes and Value for Money, and to ensure controls were in place. Outstanding issues were examined at a Supplementary Gateway 3, and further issues later arose that led to a Further Supplementary Gateway 3 in August 2001. At this point the Home Office was starting a relationship with a very experienced and sophisticated bidder, and having expert support was valuable for the project.

Analysis following advice from consultants and correspondence with the National Audit Office indicated that using the UK Private Finance Initiative (PFI) would give the best value, and in March 2002 the Home Office signed a 29-year contract with AGP for funding the demolition, design, and construction of the new accommodation on the site and the provision of associated services. Because this was a PFI project, the authors do not have access to detailed time and cost estimates (although no increase in price has been reported to date). It is interesting, however, that a Parliamentary enquiry later identified evidence of optimism bias: over-estimation of the reductions in staff numbers expected from outsourcing, efficiency gains, and changes in working practices.

During the contract, internal governance was managed through an ongoing Project Board; this decided when Gateways were to be held and tracked the external governance processes. Key risk areas or issues could be tracked here, such as uncertainties in the numbers of staff actually going into the building. A Gateway 4 review was held in January 2002 (“slightly early” within the process at the request of the Treasury, implying that they held a watching brief). This had 15 specified purposes, including checking that the current phase of the contract was properly completed, that the business case was still valid and unaffected by events (reflecting some of what a Gateway 0 might be expected to investigate), and looking particularly at risks. (There was also more than one Gateway 4.) There was a separate PFI contract for the information technology provision in the building; however, the governance mechanism appears to have been single-project-based, and the governance of this linked project was not clear.

External governance operated through the Home Office's Audit and Assurance Unit, and beyond the Home Office through the National Audit Office (NAO). The NAO was able to come in between contract signature and the start of construction, and issued a favourable report in July 2003, particularly on the nature of the PFI contract. The NAO exists to scrutinise public spending on behalf of Parliament (independent of Government), and the report was taken up by the key Public Accounts Committee (PAC), which met in November 2003 to look into Value for Money, including the running costs, financing, numbers of staff, and refinancing charges. The most senior members of the Home Office and the contractors were called to give evidence. The hard-hitting report made recommendations on the under-forecasting of staff numbers, the identification of wider business benefits from the move to the new building, a specific financing issue, and the disposal of the existing estate. PAC reports are taken extremely seriously by the Civil Service. This being a very visible public project, there was also considerable interest more generally within Parliament, and a succession of Parliamentary Questions, some covering fundamental issues in the project but many on much more detailed issues only tangential to its success.

The building handover was completed on time in January 2005 amid considerable publicity. The Home Office then began paying AGP a monthly charge for the building and services amounting to £311 million (net present cost) over the life of the project.

Norway Civil Case (IFI2)

The IFI2 project comprises the construction of a new building for teaching, research, and ICT operations at the Department of Informatics at the University of Oslo (UiO). The building's planned gross area is around 28,250 square metres. The current base estimate for the building is NOK 1 040 million, and the current total budget is NOK 1 080 million (both at 2006 price level). The need for new facilities for the UiO Department of Informatics was explicitly mentioned in a Government proposition to the Parliament in 1998. The driving forces seem to have been the Department of Informatics' expressed need for more space, closely aligned with government strategies to strengthen research and higher education in ICT. In 1999 the Research Council of Norway (NRC) ordered a design proposition for a new building, which was presented in 2000. The initial plans included a 10,000 square metre extension of an existing building, financed by the NRC; the new facilities would then be rented by UiO. In 2001, following discussions on the level of rent in this alternative, the Parliament decided to put the new building on the list of prioritised state building projects. This meant 100% state funding of the new building and execution by Statsbygg (the Directorate of Public Construction and Property). The project thus went into a new planning phase, and in 2002 Statsbygg launched a design contest for the IFI2 building. The pre-project was completed in February 2004, but the project had to wait for funding until the parliamentary decision to finance and execute it was made in May 2005. According to the current schedule, the new building will be completed in 2010.

The early stages of development show that issues concerning the execution model and funding sparked some discussion, but there seems to have been no disagreement about the basic need for the project.

The cost focus is prevalent in the QA2 analysis performed in 2004. Without performing a complete independent cost estimation, the quality assurance confirmed the project's cost estimates and assumed budget needs. The corresponding uncertainty analysis of the cost placed the market situation at the top of the list of identified uncertainty elements. Due to price increases in the construction market, the budget of the project has since been raised, first by authorisation to use the contingency reserve and later by a regular budget increase in November 2007. Should the success of the QA be judged by its ability to accurately predict costs, or just by its ability to identify the most important risks? The project has not been completed, so the real accuracy of the cost estimate remains to be seen. Because of the wait for funding, the project came to market roughly a year later than had been assumed, facing a totally different market situation when the decision to finance and execute it was made, yet the uncertainty analysis had not been updated. Cost uncertainty analyses cannot be regarded as relevant without continuous updates.

There is no doubt that the control focus is prevalent in the QA2 assignment. The main argument is that there should be a review of cost, schedule, and other key areas before the decision to finance and execute large public projects is taken. The subordinate agency does not generally oppose this, but its assessment of the QA and its output in this project is that it was redundant and costly. In this case the QA did not lead to budget changes or other direct changes to the decision basis. This is perhaps why one of the project interviewees suggested that “The QA is done more to relieve the Ministry of Finance than to help Statsbygg.” However, there seems to be a great deal of consensus on how the process itself should be described: interviewees from both the project organisation and the QA team described the exchange of information as excellent and the interaction between the involved parties as very good.

On the question of the output of the process, we observe that the project organisation expects rather more than a “scratch of the surface” of the project documents. The QA2 cost focus is described as important by interviewees from the project organisation, but in their view the control of cost should also include an assessment of technical solutions and their cost-effectiveness. This would require a different focus in the analysis, from control of numbers to evaluation of technical solutions, and thus more technical skill from those responsible for conducting the analyses.

UK Defence Case (NEADS)

Ground Based Air Defence (GBAD) is an important defence against an increasing range of low-level airborne threats (e.g., helicopters, unmanned air vehicles, and cruise missiles). To be effective, as well as a weapon there needs to be the ability to take and fuse data from multiple sensors, form a reliable picture of the situation, identify the target, and control the weapon. UK defence currently has two GBAD weapons in its armoury, “HVM” and “Rapier,” which are to be replaced. A “future GBAD” Integrated Project Team (IPT) was set up following a 1994 NATO feasibility study that suggested an £8bn solution, which seemed reasonable to the independent MoD estimators based on historic data (the basis by which this group estimates); a funding line for this amount was “endorsed,” meaning it was accepted as a programme (i.e., it appears in the long-term funding plans, but no Gate has yet been passed).

There were two stages to the project: (1) the air-defence Command, Control, Communication & Intelligence capability, under which UK air defence assets would be integrated, and (2) the defensive missile and battle-space management (including sensors and data fusion). The Business Case for the first was prepared with a budget of approximately £1bn, and Initial Gate was passed in 2001, allowing an Assessment Phase (and a concept phase for the second stage). In 2003, however, there was a general funding reduction. The Customer had to cut funding; he had two other programmes within his remit at that point, both well advanced with a lot of money committed, and at least one clearly politically very sensitive. He therefore cut the £1bn to £200M, to give only limited situational awareness. This very basic version of the first stage is now coming to the end of its second assessment phase, with a Main Gate around March 2008 and a forecast in-service date of 2010.

NEADS (Network Enabled Air-Space Defence & Surveillance) was established as the remaining capability, with a slightly limited budget; it is currently in the Concept phase (which began Oct. 2006), expecting an Initial Gate in 2009 and a Main Gate in 2012, with a Planning Assumption for Service Entry of 2020. When the funding was cut (see above), there was a “cancellation charge,” giving the opportunity for some industrial work, which was used to develop the missile (without necessarily the user requirement, system requirement, capability gap analysis, a concept of employment, etc.—let alone the bulk of NEADS: the sensors, data-fusion, and communications). The budget for the concept stage of NEADS was also cut substantially in 2003/2004, but a Technology Demonstration Programme has been placed which finishes shortly.

From the point of view of the MoD, the strategic need for this capability is clear: there are two weapons that will be going out of service. The main drivers of decision-making here, which affected the project process fundamentally, therefore appear in the first instance to be three-fold: the essential In-Service Date for the capability, due to obsolescence of current equipment; the needs of the UK industrial base; and cost restrictions, which have tightened during the course of the project so far. Three other drivers also come into play: political sensitivity, opportunistic behaviour, and the requirement to keep some UK sovereign aspects.

In considering the impact of the MoD governance framework on the project, there are two notable aspects of the NEADS project: the organisation takes less interest than it might because the project is part of an IPT in which the other projects are nearer to Gates; and the project is in the early stages, where a project within DE&S perhaps finds it easier to minimise visibility and get the task done. The NEADS project has so far been governed through a mixture of internal (to the IPT) and external assurance. It has not undertaken any OGC Gateway reviews so far, and no Foundation Review was undertaken; it will be reviewed by an internal board prior to submission for the Initial Gate expected in 2009. The NEADS project is an example of a complex defence development project. It illustrates why emphasis is needed on the concept stage of such projects: while there may be a clear understanding of the requirement, the best way to fulfil that requirement in a cost-constrained environment is a highly complex and changing decision. It perhaps illustrates the need for structured governance processes in the long period up to the MoD Initial Gate, during which the project can travel a long and winding path. It also illustrates the difference between a straightforward statement of “the” governance process, and the actuality in projects operating over a number of years within an environment of changing political and cost priorities.


Conclusions

A number of conclusions can be drawn from these cases:

  • It is clear from these cases—particularly in NEADS—that the complexity (of decision-making, technology, and contracts) shows why a governance framework is important, although it is also clearly (from Skjold) a challenge for quality assurance.
  • The governance mechanism throughout appears to have been project-based; the governance of linked projects was not clear (e.g., in 2MS), and, more importantly, there are clear issues in the division of a programme into projects (e.g., NEADS). Having said that, in the OGC methodology (2MS) it was clear that Gateways could include study of business-case issues.
  • The cases illustrate why emphasis is needed on the concept stage of such projects. For Skjold, neither the basic need of the project nor the value perspective was included in the QA. NEADS illustrates the need for structured governance processes in the long period up to the first decision.
  • The amount of cost and time information available in all of these cases was very limited (for 2MS, particularly because of the PFI nature). However, we do know that the cost analyses presented for IFI2 by the project organisation and the QA team were very close. It is interesting that it was another governance mechanism (Parliament) that picked up the optimism bias for 2MS (and in benefits, rather than costs). For Skjold, despite being within the Norwegian QA system, there was no independent cost estimation.
  • In general, the cases have illustrated the difference between a straightforward statement of “the” governance process, and the actuality in projects. This is particularly seen (as positive and negative) in NEADS, but 2MS shows also flexibility in the OGC system, where Gateways can be repeated and moved.
  • Transaction cost economics highlights the power of the suppliers. For 2MS, having expert support from those experienced in dealing with sophisticated and experienced contractors appears to have been valuable; Skjold had the problem that the QA consultants were not allowed access to the suppliers' personnel.
  • For 2MS, the effect of other governance processes—the NAO, Parliament, and the Treasury—was interesting.
  • It is clear from these cases that the control focus is prevalent in the QA system—and there are similar hints in the MoD system; this is less true for the Gateways, which are more “friendly” gates.
  • Having said that, it is also clear that the QA process gives reassurance (for Skjold and IFI2); for both Skjold and 2MS the project went according to plan, making it more difficult to draw conclusions about the effect of the framework on the project.
  • Finally, the Skjold project shows that no matter what the professional advice or the result of rational methods, the final decision is a political one.

References

APM. (2002). Directing change: A guide to governance of project management. Association for Project Management.

Berg, P., & Kvarsvik, T. (Co-ordinators). (1999). Styring av statlige investeringer. Sluttrapport fra styringsgruppen for prosjektet for styring av statlige investeringer [Governance of state investments. Final report from the steering group for the project on governance of state investments]. Finansdepartementet.

Detomasi, D. A. (2006). International regimes: The case of Western corporate governance. International Studies Review, 8, 225–251.

Flyvbjerg, B., Bruzelius, N., & Rothengatter, W. (2003). Megaprojects and risk: An anatomy of ambition. Cambridge, UK: Cambridge University Press.

Gershon, P. (1999). Review of civil procurement in central government. London: HM Treasury, April 1999.

Howard, C. (2007). Experiences of implementing Gateway reviews. Presentation at Project Controls EVA 12, annual conference of the APM Special Interest Group on Earned Value Management, London.

Malgrati, A., & Damiani, M. (2002). Rethinking the new project management framework: New epistemology, new insights. In Proceedings of the PMI (Project Management Institute) Research Conference, Seattle, 2002 (pp. 371–380).

Ministry of Defence. (2006). Enabling Acquisition Change: An Examination of the Ministry of Defence's Ability to Undertake Through Life Capability Management. A report by the Enabling Acquisition Change Team Leader, June 2006 (the McKane Report). London: Ministry of Defence.

OECD. (2005). Modernising government: The way forward. Paris: OECD.


