Introduction
PMI's premier standard, the PMBOK® Guide, is concerned with the processes and knowledge areas for managing a single project. That is very different from developing the organizational capabilities that underpin the enterprise-wide processes for managing the totality of projects in an organization and linking those projects closely to the corporate strategy. To address this broader concern, PMI® chartered the Organizational Project Management Maturity Model Program, a group of projects tasked with creating an organizational project management maturity model (OPM3) as a PMI standard, and volunteers have been working on it since 1998 (see www.pmi.org/opm3). Unfortunately, there is no consensus as to the contents of such a standard, or even the principles on which such a standard should be constructed. The market supports many existing models, with more appearing all the time, and books on the subject are now starting to appear, each taking a different approach. Consequently, the OPM3 program's teams have had to undertake research to establish a sound basis for the model at the same time as developing the model itself as a standard.
In Search of Maturity
There is a growing recognition that project management involves more than the skillful and competent management of individual projects. It also requires a set of systems, processes, structures and capabilities that enable an organization to undertake the right projects, and to support them organizationally. Increasingly, organizations are seeking to understand these additional capabilities, and to assess their maturity using a variety of project management maturity models.
Unfortunately, there is no consensus as to the contents of an organizational project management maturity model standard, or even the principles on which such a standard is constructed. Some 30 existing models serve the market, with more appearing all the time. Books on the subject are now starting to appear (e.g., Kerzner 2001; Knutson 2001). Each of them takes a different approach, and there is, as yet, no equivalent to PMI's PMBOK® Guide. Managing the totality of projects in an organization is very different from managing individual projects, and no standards exist to guide organizations in their development of requisite organizational capabilities.
Since 1998, PMI has been developing a standard in the form of the Organizational Project Management Maturity Model (OPM3) describing the incremental capabilities that result in project management maturity, and the exposure draft is due to be published during the second quarter of 2002. A core challenge facing the OPM3 teams is determining how to lead the profession toward consensus on this important standard, in the face of such observed diversity of thinking in the marketplace.
The team's response has been to incorporate a number of research studies into the scope of the program, in order to ascertain the extent of existing approaches to organizational project management maturity. These studies have been intimately bound into the program strategy, and this paper focuses on three of them: establishing the scope of existing maturity models, establishing the appropriate scope for OPM3, and assessing the profession's understanding of how different elements of the maturity model interact with and relate to each other. Papers published elsewhere (Mechler 2001; Schlichter 2001) deal with other aspects of OPM3.
Existing Approaches to Organizational Project Management Maturity
The concept of organizational “maturity” has been popularized through the very successful “Capability Maturity Model” for software, developed by the Software Engineering Institute of Carnegie Mellon University between 1986 and 1993. Integral to the model is the concept that organizations advance through a series of five stages to maturity: initial level, repeatable level, defined level, managed level and optimizing level. “These five maturity levels define an ordinal scale for measuring the maturity of an organization's software process and for evaluating its software process capability. The levels also help an organization prioritize its improvement efforts” (Paulk 1993). The “prize” for advancing through these stages is an increasing “software process capability,” which results in improved software productivity.
Since software is developed through projects, it is natural that the concept of organizational maturity would migrate from software development processes to project management, and this has been reflected in an interest in applying the concept of “maturity” to software project management (Morris 2000). Possibly as a result of this, a number of project management maturity models appeared during the mid-1990s that were more heavily influenced by the thinking of the project management profession. For example, Ibbs and Kwak (1997) used one of these models in their attempt to demonstrate the organizational benefits of project management. This particular model from IPS, along with others such as those from ESI/George Washington University and Kerzner (2001), incorporates elements from the PMBOK® Guide.
Project management maturity is also being assessed as part of an organization's overall assessment of the quality of its business processes, using models such as the Baldrige National Quality Award (see www.quality.nist.gov) or the European Foundation for Quality Management's “Business Excellence” model (see www.efqm.org/imodel/model1.htm).
Against this background, the OPM3 Program created the Model Review Team (MRT) to review the breadth and variety of approaches that are currently available to organizations seeking to assess the maturity of their project management processes.
The MRT co-leads, with input from the OPM3 Guidance Team (GT), developed a set of questions to provide a framework for the review. There were a total of 25 questions covering five areas:
• The scope of the model being reviewed, including its boundaries, focus, origin and purpose
• The capabilities of the model under review, including such topics as its coverage of the PMBOK® Guide, the capabilities it contains, the extent to which paths to maturity are modeled, the working definition of “maturity,” and linkages to project success
• How assessment of maturity is carried out, including the assessment process and whether or not organizations can “self-assess”
• The basic structure of the model, including whether it is “staged” or “continuous,” and whether prerequisites are defined
• Whether or not the model contains an implementation plan to assist organizations to become more “mature.”
Seventeen review teams were initially identified and mobilized, one for each of a representative selection of 17 of the 27 models that had been located. Each review team was composed of three volunteers (one knowledgeable about the model, one familiar with the model and one other). Using the Framework & Review Questions, each volunteer performed an independent model review. The three volunteers then worked as a team to develop a consensus model review document. The co-leads of the MRT used a qualitative research software program, NVivo, to assist with the characterization and comparison of the seventeen models. Using NVivo, “categories” were developed and assigned to sections of text in the model review documents, each “category” corresponding directly to all or a part of a Review Question.
Since several of the models reviewed are proprietary and not available within the public domain, questions of commercial sensitivity and ethics were dealt with by identifying each model with a coded reference known only to the MRT co-leads. The encoded report itself has been circulated only to the leadership of the OPM3 program. Two further models that are in the public domain have since been assessed in the same manner.
As a result of this activity, the OPM3 leadership has concluded that the provision of a PMI standard for organizational project management maturity will benefit PMI's stakeholders, since the existing range of models leaves questions about project management maturity unanswered for the profession and the stakeholders that it serves.
Establishing the Content of the Maturity Model
There is a difference between concluding that there is a need for a model and knowing precisely what such a model should contain. It is this latter problem that has given rise to the second of the three research studies covered by this paper. The approach for identifying the contents of the model was predominantly that of a “critical realist,” recognizing that there is both an element of “socially constructed” reality regarding how project management capabilities are developed in an organization, and some elements of physical external reality. This was reflected in the project work breakdown structure. “Whereas the first phase is geared towards a social process that involves noting patterns in responses from people and developing consensus around principles, the second phase is organized by an engineering process. This involves observing results, generating hypotheses, and developing formulae that can be tested for repeatability and predictability” (Schlichter & Skulmoski 2000).
Mapping the Field Through an Organizational Survey
With this in mind, the appropriate initial approach to the research (Cooke-Davies 2000) involved the design of a qualitative questionnaire, which was completed in the first quarter of 2000. This was then distributed to 200 companies that had either volunteered to participate in the survey or that employed a large number of the 140 volunteers who composed the OPM3 program's teams at that time. After background questions designed to establish the organizational and market context in which the respondent was working, a total of 61 questions were asked in eight categories:
• Project management and the organization
• Project and organizational success
• Project managers
• Initiation of projects and teams
• Use of methods, methodologies and systems
• Project failure
• Developing capability
• Exploring maturity.
A volunteer administered each questionnaire as an interview, and a total of 47 responses were received. The intention was to analyze the results qualitatively using NVivo, and from this to deduce the possible shape of a subsequent, more quantitatively oriented questionnaire.
Unfortunately, at this point the team learned something about the difference between research undertaken by volunteers and research undertaken by a university department or commercial organization. The analysis, which takes much more time than the collection of data, became bogged down by a change in the circumstances of a key person and by the need to find one or more volunteers suitably skilled in the art of qualitative analysis. This analysis is still not complete, and the frustrations caused by this delay led to both the OPM3 program strategy and the research methods being modified.
Tapping “Subject Matter Expertise” Using a Delphi Approach
The strategy, up to this point, had largely reflected a classic “waterfall” development approach: initial research was to feed into design, design into build and test, and so on. In parallel with the frustrations that were being experienced in analyzing the qualitative research, there was also increasing pressure from the PMI Board, through the Members Advisory Group, to do everything possible to accelerate the project timetable.
The response of the OPM3 leadership team was to modify its strategy in two ways: to move away from the “waterfall” development model towards a strategy that owes more to “rapid prototype development,” and to involve members of the project management profession as “subject matter experts” more closely in both the research and the design of the model.
Methodologically, there is justification for the inclusion of more input from members of the project management professional community, which “is both the ‘custodian’ of the project management worldview, and also the group of people who, by the nature of their employment, are charged with delivering the practical results of employing the ‘worldview’ to deliver economic and social benefits through the effective management of individual projects” (Cooke-Davies 2000, p. 125).
In practice, the technique adopted to gather this input was a modified Delphi technique. The technique was designed from the start so that, as the number of respondents increased in subsequent rounds, the analysis could be automated and the results made immediately available to the team without the delay of waiting for skilled input.
Identifying the Content of a Model: Delphi Round 1
The Delphi technique is a means of establishing consensus among a group of people who share a common interest but have differing perceptions and areas of expertise. It works by gathering anonymous input and then feeding the collective results back to participants, in such a way that each can refine his or her response in the light of the feedback received from others. Any search of the Internet to ascertain the methodological basis of Delphi will quickly reveal two things.
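The feedback step itself is straightforward to automate. The following is a minimal sketch, assuming panelists rate each item on a numeric scale and are shown the group median and interquartile range before revising their answers in the next round; the function name and sample data are illustrative and not part of the OPM3 tooling:

```python
from statistics import quantiles

def delphi_feedback(ratings):
    """Summarize one round of anonymous ratings for feedback to panelists.

    ratings: the numeric scores gathered from the panel for one item.
    Returns the group median and interquartile range, which each
    panelist sees before revising his or her answer in the next round.
    """
    q1, q2, q3 = quantiles(ratings, n=4)  # quartiles of the group response
    return {"median": q2, "iqr": (q1, q3)}

# Example: eight panelists rate one proposed element on a 1-9 scale.
round_1 = [7, 8, 5, 9, 6, 7, 3, 8]
print(delphi_feedback(round_1))  # {'median': 7.0, 'iqr': (5.25, 8.0)}
```

Convergence between rounds can then be tracked by the narrowing of the interquartile range as respondents revise their answers.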
Firstly, there is consensus among dozens of sites that “the Delphi technique was developed in the 1950s by Rand Corporation as a valuable tool for modeling future scenarios. Originally used for military purposes, it was quickly adapted to other fields of research and is now used all over the world. The Delphi process has been employed with great success for new product development, sales and marketing research, evaluation of management methods, in demographic predictions, and in financial arenas. By focusing on evolving trends rather than existing conditions, it is particularly effective in reviewing the complex subjects businesses are currently grappling with as they interact with the future” (MG Taylor 2001).
Secondly, there is a large body of detractors of the technique who feel that, in a number of specific high-profile decisions affecting the democratic process, “the technique was perfected to literally dictate desired outcomes. Through very subtle manipulations a process was set in motion to separate supporters from detractors of the official, desired outcome, the pre-determined position” (Park Cities Taxpayers Association 2001).
It was established that safeguards could be put in place to prevent any such abuse of the technique, and the first round was launched in August 2000.
In the first round, which was open-ended and qualitative, members of the OPM3 Guidance Team were invited to offer their suggestions for the “elements” that constituted a mature organizational project management system. Definitions of maturity were provided. This resulted in some 80 suggestions, which were then consolidated to eliminate duplication and, as far as possible, to ensure that each element contained only ideas distinct from those in any other element. As a result, 59 elements were identified. No attempt was made to group them in any way, nor, at this stage, to impose definitions on the use of any terminology. Consequently, there was a certain amount of duplication and imprecision about the elements, which the teams would remove in later rounds.
Confirming the Elements of Maturity: Delphi Round 2
All volunteers in the OPM3 team (some 200 people at the time) were invited to review each of the 59 elements, and to indicate the extent of their agreement (using a four-point Likert scale) with three statements:
• This element is a factor that contributes to an organization's project management maturity
• This element is a factor that an organization can implement directly, without any prerequisites
• Performance criteria can be established for this element, so that the effectiveness of its implementation can be measured.
The questionnaire was distributed as an Excel workbook (with no macros, so as not to fall foul of some corporate firewalls), and automation occurred in a master Excel spreadsheet. Some 40 responses were received, indicating overwhelming support for the relevance of the elements and suggesting some additional ones. As responses were received they were placed in a specific folder on a server, where an “intelligent” Excel workbook containing macros “read” each file in the folder and exported it to a second folder. Once the first folder was empty, the results were displayed as a series of charts.
Answers to the first question ranged from a score of +87% agreement for the highest-scoring element (the way in which an organization manages change through projects is reviewed periodically, both via internal feedback and review mechanisms and via external comparisons) to +13% agreement for the lowest (the organization has established a multi-management-level reporting structure). The mean for all 59 elements was +63%, and the median +67%.
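The signed percentages reported above suggest a net-agreement measure. The following is a minimal sketch, assuming net agreement is the percentage of respondents choosing the top two points of the four-point scale minus the percentage choosing the bottom two; the team's actual Excel formula may have differed, and the element identifiers and scores below are hypothetical:

```python
def net_agreement(scores):
    """Net agreement for one element on a four-point Likert scale,
    where 1-2 indicate disagreement and 3-4 indicate agreement.
    Returns (% agreeing - % disagreeing) as a signed percentage."""
    agree = sum(1 for s in scores if s >= 3)
    disagree = len(scores) - agree
    return round(100 * (agree - disagree) / len(scores))

# Hypothetical answers to the first statement for two elements,
# one score per respondent.
responses = {
    "element_01": [4, 4, 3, 4, 3, 4, 2, 4, 3, 4],
    "element_02": [2, 3, 1, 2, 3, 2, 4, 1, 2, 3],
}
for element, scores in responses.items():
    print(element, f"{net_agreement(scores):+d}%")  # element_01 +80%, element_02 -20%
```

On this reading, an element on which every respondent agrees scores +100%, and one on which opinion is evenly split scores 0%.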
In response to the second question, respondents identified about one third of the elements as ones that an organization can implement directly, and about one third as requiring prerequisites; on the remaining third, opinion was divided.
The third question revealed that a majority of respondents believed that performance criteria could be established for 56 of the 59 elements. A further ten elements were also suggested for inclusion.
The conclusion from the second round of the Delphi was that the elements included in Round 2 formed a good starting point for the OPM3 program's design teams (called Design Cells) to utilize in their first iteration of the model.
Identifying Relationships Between Elements of the Model
Both of the first two rounds of the Delphi studiously avoided imposing any structure on the elements that were to be contained within the model. There were two reasons for this.
Firstly, sorting the elements into a number of distinct categories was likely to encourage respondents to view the elements through the lens of a mental model suggested by the association that they individually make with the category title. As Peter Senge observes, “two people with different mental models can observe the same event and describe it differently, because they've looked at different details” (Senge 1990, p. 175). The general approach that has been adopted is that it is less distracting to “group items together” to form categories than to suggest categories in the first place.
Secondly, the act of categorizing elements has far-reaching consequences for the infrastructure that inevitably arises to support the categorization (Bowker & Star 1999). With this in mind, the act of categorization should be performed in a way that is consistent with a potential PMI standard—in other words, it should involve the judgment of the project management profession.
“Clustering” the Elements by Analyzing “Affinity”
One appropriate opportunity to assess the judgment of the profession on how to cluster the 59 elements presented itself at PMI's annual Seminars and Symposium, which took place in Houston shortly before the second round of the Delphi was ready for distribution. In two working sessions at the Standards Open Working Day, between 50 and 60 members of PMI attending S&S were given definitions of the 59 elements and were asked to group them into “clusters” of elements that “made sense” to them as belonging logically together. Five separate groups worked on the task, and so five separate sets of answers were obtained.
These five sets of answers were then compared, and “clusters” of elements created that were, by common consent among the five groups, composed of elements that “intuitively” belonged together. The common “theme” that united each cluster of elements was then sought and given as a title for the cluster. Each cluster subsequently became the content addressed by a single Design Cell, staffed by subject matter experts and tasked with designing the path (or paths) to maturity within its particular cluster of elements. The ten “themes” that appeared to lie beneath the intuitive groupings of elements are listed below (a sketch of the comparison step follows the list):
• Standardization and integration of methods and processes
• Performance and metrics
• Commitment to the project management process
• Business alignment and prioritization
• Continuous improvement
• Success criteria for continuation or culling
• People and their competence
• Allocating people to projects
• Organizational “fit”
• Teamwork.
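As a rough illustration of the consensus comparison described above (which was in fact performed by hand), the following sketch merges two elements into the same cluster only when every group placed them together; the group partitions and element identifiers are hypothetical:

```python
from itertools import combinations
from collections import Counter

def consensus_clusters(partitions):
    """Merge elements that every group placed in the same cluster.

    partitions: one partition per working group, each given as a
    list of clusters, each cluster a set of element identifiers.
    """
    n_groups = len(partitions)
    together = Counter()  # how many groups paired each two elements
    for partition in partitions:
        for cluster in partition:
            for a, b in combinations(sorted(cluster), 2):
                together[(a, b)] += 1

    # Union-find over the pairs grouped together by common consent.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for (a, b), count in together.items():
        if count == n_groups:  # unanimous agreement only
            parent[find(a)] = find(b)

    clusters = {}
    for element in parent:
        clusters.setdefault(find(element), set()).add(element)
    return list(clusters.values())

# Three illustrative groups clustering five hypothetical elements.
groups = [
    [{"E1", "E2"}, {"E3", "E4", "E5"}],
    [{"E1", "E2", "E3"}, {"E4", "E5"}],
    [{"E1", "E2"}, {"E3"}, {"E4", "E5"}],
]
print(consensus_clusters(groups))  # e.g., [{'E1', 'E2'}, {'E4', 'E5'}]
```

Elements that no other element accompanies unanimously (E3 above) are left unclustered, mirroring the manual search for groupings that all five groups could endorse.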
Confirming Influences: Delphi Round 3
With the formal endorsement of the elements that had been identified to date in Delphi Round 2, and after defining the elements more tightly, the OPM3 teams sought additional input from the profession to identify relationships among the elements. Relationships among elements describe how parts of a mature project organization interact and imply a sequence in which capabilities should be developed.
The idea of Delphi Round 3 is to “pave the way” for the acceptance of OPM3 as a standard during the exposure draft round (described in Appendix A of the PMBOK® Guide). Delphi Round 3 will obtain input from a wide range of project managers regarding their intuitive understanding of the “elements” of a mature project organization.
It will ask three questions:
• Which of the elements that have previously been identified fit most logically into which “clusters”?
• Which of the elements directly influence which others?
• What role does each of the elements play in the system; i.e., which are “drivers,” which are “dependent” and so on?
The methodology behind Delphi Round 3 is known as “MicMac” (a French-language acronym—see below for a brief description of the technique). This methodology requires each element to be placed in both the rows and the columns of a matrix, and each respondent to place a “1” in any cell where the “row element” is thought to directly “influence” the “column element.” It was felt that a 59×59 matrix was just about feasible.
Conducting a Pilot Study
Before circulating the Delphi Round 3 to several thousand members of the profession, a pilot exercise was carried out among the OPM3 leadership—both members of the guidance team and the newly appointed Design Cell leaders. In the course of preparing and conducting the pilot, three things happened.
• Firstly, the team reviewing the elements discovered that many elements contained multiple strands of thought, and this made it difficult to decide whether or not a relationship of influence existed between any pair of elements or parts of an element. As a result, a small team of four people worked to “parse” each element into its constituent distinct ideas, and then to remove duplication from among the resulting list of ideas. This led to a new list of 146 elements.
• Secondly, the test of the 146×146 matrix showed that a combination of boredom and fatigue meant that people attempting to complete the matrix “ran out of steam” partway through. This had the effect of biasing the results in favor of those elements in the higher rows at the expense of those in the lower rows, and thus rendered the “MicMac” analysis invalid.
• Thirdly, while conducting the Delphi Round 3 pilot study, the Design Cell leaders had begun to model paths to maturity for the clusters of elements assigned to their respective Design Cells. No standard format was in use across the Design Cells, and it was quickly agreed that a standard work process and format were necessary in order to integrate the work of multiple Design Cells. Such a process was developed using a series of Excel workbooks and “Metis” software (see www.metis.no). The first step in this “workflow” is that each Design Cell team creates a partial “MicMac” showing the influences exerted on all 146 elements in the total model by those elements that are assigned to its own Design Cell.
In view of these three sets of events, the Delphi Round 3 will now be carried out with the originally planned audience (a wide cross-section of project managers) and will answer the same three questions as planned, but with a matrix of elements that contains no more than 40 rows and columns, and ideally nearer to 30. These elements will be a subset of the 146 elements that are determined to exert particular influence, as established by combining the partial “MicMacs” carried out by the Design Cells. The precise number of elements will be decided by an analysis of the combined partial “MicMacs.” The Design Cells and the Engineering team will incorporate the results of Delphi Round 3 into a maturity model prototype they are developing using Metis.
Identifying Influences Using “MicMac”
“MicMac” is a technique developed by Michel Godet (1991, 1994, 1997) to assist with the structural analysis (SA) of a system. It enables the variables within a system (in this case the “elements” of the model) to be placed in one of five categories:
• Driving variables are the least dependent of the whole plane; they are therefore the ones that most condition the rest of the variables, the most explanatory.
• Link variables are at once driving variables and dependent variables; any action on them will alter other variables but will also have an effect on themselves, which could reinforce or cancel out the initial action. The more link variables there are, the more unstable a system will be.
• Dependent variables have the least drive of the whole system and therefore will always be affected by the variables of the two previous groups.
• Excluded variables are the ones that have low drive and are not very dependent at the same time; this is so because they are very stable variables or fairly autonomous from the rest of the system, which means they can be left out of the analysis.
• Pivot variables are variables that have low drive and/or are not very dependent, which makes it fairly difficult to foresee their behavior (Godet 1994).
This categorization will be carried out automatically as a part of the Delphi Round 3.
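As a rough illustration of how that automatic categorization might work, the following sketch classifies variables from a binary direct-influence matrix, using the mean of each measure as the threshold between “high” and “low.” It is a simplification of Godet's influence/dependence plane: the full “MicMac” also captures indirect influence by raising the matrix to successive powers, and the “pivot” category is omitted here. The element names and matrix are hypothetical:

```python
def micmac_classify(matrix, names):
    """Classify variables from a binary direct-influence matrix.

    matrix[i][j] == 1 means variable i directly influences variable j.
    Driving power is the row sum; dependence is the column sum. Each
    variable is compared with the mean of both measures.
    """
    n = len(matrix)
    driving = [sum(row) for row in matrix]
    dependence = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    drive_cut = sum(driving) / n
    dep_cut = sum(dependence) / n

    result = {}
    for i, name in enumerate(names):
        high_drive = driving[i] > drive_cut
        high_dep = dependence[i] > dep_cut
        if high_drive and high_dep:
            result[name] = "link"        # drives others and is driven
        elif high_drive:
            result[name] = "driving"     # conditions the rest of the system
        elif high_dep:
            result[name] = "dependent"   # affected by the other groups
        else:
            result[name] = "excluded"    # stable or autonomous
    return result

# Four hypothetical elements and their direct influences.
names = ["A", "B", "C", "D"]
matrix = [
    [0, 1, 1, 1],  # A influences everything else: a driver
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],  # D influences nothing: highly dependent
]
print(micmac_classify(matrix, names))
# {'A': 'driving', 'B': 'driving', 'C': 'dependent', 'D': 'dependent'}
```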
Conclusion
The Delphi process provides an important link between the subject matter expertise of the Design Cells and a larger sample of the project management profession, which must be captured in the interest of establishing OPM3 as a “standard” and not simply an interesting or novel design. The process is being conducted so that large numbers of replies can be processed automatically, since experience gained from earlier surveys suggested that analysis is much more difficult to conduct on a volunteer basis than data capture.
The first two rounds of the Delphi resulted in the identification of 146 elements of a mature project organization. A “MicMac” analysis will focus the teams on a subset of elements that drive the larger system of elements. A wider cross-section of project managers will provide input via Delphi Round 3 regarding how elements of the subset should be grouped and how they relate to each other. The OPM3 Program's Design Cells will use the results of Delphi Round 3 to guide them as they incorporate all 146 elements into a maturity model prototype using Metis software.
The research underpinning the OPM3 development represents an interesting series of studies, involving volunteers drawn from the project management profession and ensuring that, when the exposure draft is submitted to PMI and exposed to the whole profession, the standard will be founded on a firm footing of research.