Longitudinal analysis of project management maturity

Mark Mullaly

The assessment of organizational capabilities is a core dimension of organizational learning and improvement. As organizations strive to attain and retain competitive advantage, an understanding of their capabilities and how these compare with those of competitors and best-in-class organizations is essential. Within the project management sphere, assessment frameworks have become increasingly prevalent, in particular through the development and application of project management maturity models. The majority of these frameworks have been developed in the last 3 to 5 years.

While there has been a growing emphasis on assessing project management capabilities, little data is available to provide an objective understanding of the current state of practice. What data does exist tends to be proprietary in nature, and as a result there is little information in the public domain that organizations can utilize to understand and benchmark their capabilities. More particularly, there is very little longitudinal data on project management capabilities and performance over time. Since principles of organization development require the ability to progressively evaluate progress (Nielsen and Kimberly 1976), the lack of a longitudinal view of project management capability seriously curtails the ability of organizations to identify key drivers of project management improvement.

This paper provides a review of the literature associated with organizational assessment, identifying the core dimensions that assessment frameworks need to address. It discusses the rise of maturity models as a means of assessing specific functional capabilities within organizations, and explores the history and development of one particular model for assessing project management maturity. While the model discussed in this paper is proprietary in nature, the paper explores the results of a series of benchmarking studies that have placed assessment results in the public domain. Most importantly, it provides an initial longitudinal analysis of the changes in capabilities and performance of organizations in managing their projects.

Approaches to Organizational Assessment

Organizations continually engage in assessment activities. According to Nielsen and Kimberly (1976), this is a product of the organizational need to rationally search for opportunities for continued improvement, assign priorities and make decisions. They identify five core requirements for assessment:

  • The availability and interpretation of information, in a usable form and at the time it is required.
  • An understanding of what is being assessed, with clear goals for the assessment and defined knowledge of the outcomes and consequences that result from the assessed resources and procedures.
  • The availability of relevant and appropriate measures of the consequences being assessed.
  • A data collection strategy by which to gather the appropriate measures.
  • An assumption of the cause and effect relationships that define the beliefs and support the decisions to be made as a result of the assessment.

While these cause and effect relationships are often over-simplified, and the causal linkages are generally more complex than the decisions and beliefs imply, from the perspective of Nielsen and Kimberly they are nonetheless necessary in order to make meaningful interpretations.

The increasing prevalence of knowledge work creates additional challenges in assessment (Tuttle and Romanowski 1985). While the underlying performance of an organization can be measured along five dimensions (efficiency, productivity, effectiveness, quality and quality of work life), these become less easily measured as the complexity of work increases and its tangible nature declines. Direct outcomes, where there is a direct relationship between output and outcome, rely more upon measures of efficiency, productivity and quality. Indirect outcomes, where there is greater variability in potential outputs and greater complexity in choosing the right output for the desired outcome, place much greater importance on the assessment of effectiveness, and to a lesser extent productivity and efficiency.

While the core emphasis of all assessment is on organizational learning (Hellsten and Wiklund 1999), the majority of assessment frameworks draw on the underlying principles of total quality management, rooted in the Plan-Do-Check-Act cycle of Deming (1993). These assessments are typically divided into two key assessment processes: audit and self-assessment (Karapetrovic and Willborn 2001). Audits collect and compare data against a reference standard, evaluating the degree to which the criteria have been fulfilled, while self-assessments are designed to evaluate the strengths, weaknesses and opportunities for improvement against a number of dimensions. Audits are primarily designed to support an external driver of compliance, while self-assessments are more typically internally focused on improvement.

Karapetrovic and Willborn suggest that assessments not only provide a means of performance measurement but, by their nature, are also enablers of improved performance, particularly in the case of self-assessments. To be effective in driving improvement, however, requires two dimensions: the delivery of the survey data itself, and the impact and actions resulting from its delivery (Conlon and Short 1984). Conlon and Short stated that the way in which information is delivered is an important determinant of the effectiveness of an assessment. Effectiveness has been found to increase with member involvement, where the receiving audience is prepared for the assessment feedback and where they are able to understand and take action on the data received.

A large number of assessment frameworks have been adopted organizationally in recent years. Audit frameworks are generally tied to external quality standards such as the various versions of ISO 9000. Tools for self-assessment are also rooted in total quality management, generally based upon quality award criteria, such as the Malcolm Baldrige National Quality Award (MBNQA) and the European Quality Award (EQA). While these provide general frameworks for evaluating organizational effectiveness, the risk with any assessment is that it will lead to a long list of strengths and weaknesses that are not tied to any specific strategies leading to sustainable competitive advantage (Duncan, Peter et al. 1998). Furthermore, customers of assessment frameworks need to cast a critical eye upon what each assessment offers, recognizing that their underlying approaches and ability to support differentiation vary significantly (Biazzo and Bernardi 2003).

Role of Maturity Models

Models such as those used by the MBNQA and EQA support an overall assessment of organizational excellence. Similar forms of self-assessment addressing specific functional areas of concern have been popularized through the various maturity models. The concept of maturity models has been familiar for some time; however, their popularization as a means of assessment is more recent. One of the best-known maturity models, originally referred to as the Capability Maturity Model for Software and developed by the Software Engineering Institute (SEI) of Carnegie Mellon University, has significantly increased awareness and acceptance of the concept. Originally released in 1991, the CMM-SW popularized the concept of maturity models as consisting of a series of levels across a number of capability areas (Humphrey 1992). Since the popularization of the CMM and its siblings by the SEI, a variety of maturity models have been developed to support a range of functions, including innovation (Aiman-Smith, Goodrich et al. 2005), strategic management (De Vries and Margaret 2003), contract management (Garrett and Rendon 2005) and even more specific purposes such as the use of Enterprise Resource Planning software (Holland and Light 2001).

The application of maturity models to project management is comparatively recent. Despite their relative novelty, a large number of models have been released in recent years (Ibbs and Kwak 2000; Skulmoski 2001; Cooke-Davies and Arzymanow 2003; Hillson 2003; Jachimowicz 2003; Sawaya and Trapanese 2004). Many of those developed have adopted the framework and structure originally established by the CMM, with 5 levels and a number of capability areas as the focus for assessment. These maturity models have varying levels of formality, and there is little documentation in the public domain regarding their structure, contents, assessment approach or results. Even less information is available as to the degree to which maturity models actually support improvement in project or organizational results. The most widely known study of the relationship between maturity and organizational results (Kwak and Ibbs 2000) demonstrated no statistically significant correlation between process maturity and project results, although an anecdotal link was claimed despite the lack of hard results.

In evaluating the use and effectiveness of project management maturity models, Jugdev and Thomas (2002) found that the correlation between process capability and project success claimed by many maturity models has not been substantiated. For prospective customers seeking a relevant assessment framework, the failure of any one model to achieve widespread acceptance is equally problematic. Building upon the observations of Duncan, Peter et al., the larger concern is the ability of project management maturity models to offer a demonstrable means of competitive advantage. While it can be argued that maturity models have in fact helped to elevate the discussion of project management and raised awareness of its contribution to organizational success (Jugdev and Thomas 2002), there is still very little empirical information currently available to support their use. No recognizable standard has emerged to assess project management practices, and in particular there is little to no evidence-based data to support assessment and improvement using the available models. What information does exist regarding organizational capabilities tends to be proprietary and therefore not publicly available, and in particular there has typically been no longitudinal data available.

One Approach To Assessing Project Management Maturity

While project management maturity models have not fully demonstrated their contribution, the insights that can be derived from them can still be of value. This paper provides a comparison over time of how organizations have been assessed against one project management maturity model. The value of conducting this longitudinal analysis is that it provides an initial understanding of how the application of project management in organizations may have changed over time, and of the corresponding impact these changes have had on the organizations making them.

The data in this paper derives from a benchmarking initiative conducted by a project management consulting company since 1997. Over the six years that the organization has conducted the benchmarking, the results have increasingly suggested a link between the improvement of project management capabilities in organizations and the delivery of successful project results. As well, the findings for each year have identified practices that have had a strong correlation with improvements in demonstrated maturity as defined within the underlying maturity model. Comparing the results over the six years for which data exists provides insights into underlying trends and the impacts of these trends on organizations.

The results of this benchmarking effort have been published in the public domain since the first year, with the executive summary of each year’s research published on Interthink’s web site (www.interthink.ca). The most recent results included in this analysis (Mullaly 2004) reflect the consolidated findings of the 2003 study. Until now, the results have been limited to the findings for the year in which the benchmarking survey has been conducted, with no longitudinal analysis of the resulting data. This paper provides this longitudinal perspective by evaluating the changes in results and their underlying causes over the period that this benchmarking has been conducted.

The Maturity Model

The findings within this paper are derived from a maturity model initially developed in 1993. Similar in structure to many others, this model originally drew its inspiration from the framework and assessment approach of the CMM (Humphrey 1992). The structure of 5 levels defined within the CMM has been adapted to provide relevance for project organizations, resulting in the following descriptors:

  • Level 5. A fully mature project organization, with processes consistently applied throughout the organization as part of the overall management process.
  • Level 4. A mature project management process applied consistently on all projects, with project management recognized as a formal management discipline.
  • Level 3. An organization with a defined and integrated project management process that is consistently applied on each project.
  • Level 2. Some project management capabilities defined, but not consistently applied.
  • Level 1. A fully ad hoc project management capability; no consistent or repeatable processes.

In addition to the levels of maturity defined above, the other dimensions of assessment comprise 12 capability areas that reflect the aspects of project management practice within the organizations being assessed. Within each capability area, a number of capabilities are defined that represent how each might be carried out in an organizational context. Within each capability, a number of practices are identified which align with each of the levels within the maturity model.
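
This hierarchy can be pictured as nested data. The sketch below is illustrative only: the names and example content are hypothetical, since the actual model is proprietary and its contents have not been published.

    from dataclasses import dataclass, field

    @dataclass
    class Capability:
        """A capability within a capability area; practices are keyed by level."""
        name: str
        # Maps a maturity level (1-5) to the practices expected at that level.
        # Not all levels need to have practices associated with them.
        practices_by_level: dict[int, list[str]] = field(default_factory=dict)

    @dataclass
    class CapabilityArea:
        """One of the 12 capability areas assessed by the model."""
        name: str
        capabilities: list[Capability] = field(default_factory=list)

    # Hypothetical example content for a single capability area.
    planning = CapabilityArea(
        name="Project Planning, Scheduling & Budgeting",
        capabilities=[
            Capability(
                name="Schedule Development",
                practices_by_level={
                    1: ["Schedules built ad hoc by individual project managers"],
                    2: ["A scheduling practice exists but is inconsistently applied"],
                    3: ["A defined scheduling process is applied on every project"],
                },
            )
        ],
    )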

One of the early challenges recognized in using the model was the lack of comparative information to support assessment of an organization's practices against those of other organizations, as well as the difficulty in quantifying the impact of improvements on an organization's practices. In an effort to provide a context by which comparisons could be better established, a survey instrument was developed to support assessments against the model, and public benchmarking activities were subsequently initiated by the firm.

The model offers a number of advantages and disadvantages that need to be understood in evaluating the data presented here.

  • The model itself is proprietary, and has never been published. While the benchmarking results have been placed in the public domain, there has been no empirical verification or validity assessment of the constructs within the model, which were developed primarily through expert opinion, analysis of existing project management standards and testing and validation of the principles with consulting customers. However, the results of over 60 organizational assessments have been reviewed directly with project management stakeholders, and findings of the model have shown a high level of face validity in describing the practices and capabilities.
  • The maturity model aligns with the principles of a self-assessment model as described by Karapetrovic and Willborn (2001). It does not offer a prescriptive model of project management, but allows organizations to evaluate their relative strengths and weaknesses against a range of practices.
  • The focus of the model is one of promoting understanding and improvement. As per the analysis of Tuttle and Romanowski (1985), project management aligns with their definition of an indirect outcome, and therefore the model has been designed primarily as a measure of effectiveness. Productivity and efficiency are not factors that are measured within the model, and both quality and quality of work life are evaluated only to a lesser degree.
  • The assessment of effectiveness is not as directly linked to project success as would generally be desired. While the underlying assumption of the model is a correlation between process maturity and project success, the benchmarking results to date are primarily based upon self-report data. Only a subset of responses has been verified through interviews and follow-up reviews of defined and applied practices. Because of the difficulty of normalizing project measures across organizations, project success as measured by delivery against time, budget, effort estimate, customer expectation and customer satisfaction has been evaluated by self-reported ranges of result (>-25%, -25% to -10%, -10% to +10%, +10% to +25% and >+25% of target); a bucketing sketch follows this list.
  • While the model embeds an understanding and assumptions of cause and effect, these are inherent in the model itself and are applied equally to all organizations. As per Nielsen and Kimberly (1976), these are not adapted or tailored to the individual contexts, needs or goals of individual organizations, although the nature of the results do enable organizations to define their specific project management improvement goals relative to the reported results.
  • The model could be considered analogous to a Level 3 assessment model as described by Biazzo and Bernardi (2003). While not a framework for quality awards, the model adopts a similar framework in that it is not simply prescriptive, as a Level 1 or 2 instrument would be, and it allows organizations to evaluate their capability against a range of potential practices. It is also not as open and fluid as a Level 4 instrument, in that it does not allow for diagnosis or design without relying on judgment criteria, and it certainly cannot be associated with the openness of Level 5 practices; the majority of project management organizations would not have sufficiently structured processes to support the causal analysis this level requires.
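
As a rough illustration of the self-reported ranges referenced above, the following sketch buckets a project's variance from its target into the five reported bands. The function name and the treatment of boundary values are assumptions; the original survey's exact wording is not reproduced here.

    def result_band(actual: float, target: float) -> str:
        """Classify delivery variance against target into the study's bands.

        Negative variance means delivery under target (e.g. under budget).
        The assignment of exact boundary values to bands is an assumption.
        """
        variance = (actual - target) / target * 100.0  # percent deviation
        if variance < -25:
            return "> -25%"
        elif variance < -10:
            return "-25% to -10%"
        elif variance <= 10:
            return "-10% to +10%"
        elif variance <= 25:
            return "+10% to +25%"
        else:
            return "> +25%"

    # Example: a project budgeted at 400,000 that costs 450,000 is +12.5%,
    # landing in the "+10% to +25%" band.
    print(result_band(450_000, 400_000))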

Even taking into account these factors, the use of the model as a framework to assess project management maturity over time still offers significant value in that it has been used to collect a large base of benchmark data, predominantly from North American organizations, over a period of six years. Within this base of data, a number of organizations have participated over several years, offering further insights into the impact of longitudinal changes within a specific organization as well as the overall trends of the study as a whole.

Participants

Data presented in this paper is derived from a public benchmarking project. This effort was initially limited to organizations in Canada, but was subsequently expanded to North America in 1998 and worldwide in 2001. Over 550 organizations and 2,500 individuals have participated in the study since its launch in 1998.

Participant organizations were solicited through direct mail, email, advertisements and editorial articles to contribute to the study. No compensation was provided to organizations for participating, although organizations with more than 10 participants were offered a complimentary customized briefing of their individual results. For the first two years of the study, the Project Management Institute made available its mailing list within Canada to invite participants; in subsequent years, direct mailings were primarily drawn from the firm's contact database.

Organizations were encouraged to have a cross-section of participants contribute to the survey, including project managers, sponsors, stakeholders and team members. This diversity helped ensure a more balanced view of how projects are conducted, capturing both the perceptions of project managers in describing their activities and the degree to which these activities are actually observed by other participants.

In general, participant organizations were those that already had some form of organizational project management capability under development, drawn from a wide array of industries.

Procedures

In responding to the survey, participants completed a 135-question multiple-choice survey, describing practices they utilized or observed in their most recent project. Participants could provide more than one answer, but were asked to rank multiple answers relative to the degree to which the described practice reflected how projects were actually managed.

The survey results were mapped to the maturity model, so that the described practices were each related to a particular level within each of the defined process capabilities. Not all capabilities have practices associated with all levels, and multiple practices described within a question could be associated with the same level. The results for each level were averaged using a weighted formula, so that for an organization to be ranked at a Level 2, for example, all of the practices associated with Level 1 and Level 2 at a minimum needed to be met.
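
The weighting formula itself has not been published. The sketch below illustrates only the gating rule described above, under the simplifying assumption that a level counts as met when all of the practices associated with it are observed; the names are hypothetical.

    def assessed_level(practices_met: dict[int, bool], max_level: int = 5) -> int:
        """Return the highest level for which that level and every lower
        level with defined practices are fully met.

        practices_met maps each level that has associated practices to
        whether all of that level's practices were observed; levels with
        no associated practices are absent and treated as satisfied.
        """
        level = 1  # Level 1 (fully ad hoc) is the floor for every organization.
        for candidate in range(2, max_level + 1):
            # Gating rule: a level is awarded only if it and all lower levels hold.
            if all(practices_met.get(lv, True) for lv in range(2, candidate + 1)):
                level = candidate
            else:
                break
        return level

    # Example: all Level 2 practices met, Level 3 not met -> assessed at Level 2.
    print(assessed_level({2: True, 3: False}))  # prints 2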

Where organizations had more than one respondent, the results of each respondent were averaged to produce the overall results for the organization. As well, the standard deviation of responses was assessed to understand the relative consistency and range of differences in responses across participants in each individual organization. The composite organizational results were then used for all subsequent data analysis in presenting study findings.
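
A minimal sketch of this aggregation step follows, assuming a hypothetical per-capability-area scoring structure; the study's actual variables and weighting are not published.

    from statistics import mean, stdev

    def aggregate_organization(
        respondent_scores: list[dict[str, float]],
    ) -> dict[str, dict[str, float]]:
        """Combine per-respondent capability-area scores into a composite
        organizational result, with standard deviation as a consistency check."""
        composite = {}
        for area in respondent_scores[0]:
            values = [r[area] for r in respondent_scores]
            composite[area] = {
                "mean": mean(values),
                # stdev needs two or more respondents; 0.0 flags a single response.
                "stdev": stdev(values) if len(values) > 1 else 0.0,
            }
        return composite

    # Example: three respondents scoring two capability areas.
    scores = [
        {"Project Planning": 2.4, "Risk Management": 1.8},
        {"Project Planning": 2.6, "Risk Management": 2.1},
        {"Project Planning": 2.5, "Risk Management": 1.5},
    ]
    print(aggregate_organization(scores))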

In addition to describing their project management practices, participants also responded to a number of demographic questions about themselves, their organization and their most recent project. In all instances, participants were asked to respond to the survey in the context of this project, in order to best reflect the most current practices within their organization.

Longitudinal Analysis

Demographic Changes

In understanding changes in results over time, it is first important to understand what changes have been reflected in the respondents to the study and the organizations they belong to. The following sections provide a summary of the changes and trends in demographic information.

Respondents

Year    # Respondents    # Organizations
1998    280              67
1999    337              63
2000    152              22
2001    246              89
2002    342              96
2003    579              89

Table 1 – Summary of respondents by year

Table 1 shows that the number of respondents year over year has been reasonably consistent, with the exception of the year 2000. An average of approximately 300 participants and 70 organizations per year contributed to the study (omitting the anomalous years from the average). The figures for 2000 were much lower because a number of organizations declined to participate, citing in particular the focus of their project management staff on Year 2000 remediation efforts. Since that time, the number of individual participants has been growing while the number of organizations remains fairly static. This is largely the result of a larger number of participants per organization responding, with organizations encouraged to provide multiple respondents in order to present as relevant a picture of their processes and capabilities as possible.

For participants, overall experience in project management has remained relatively static year over year, with results consistently reflecting a broad range of experience in managing projects. On average, 32.5% of respondents report less than 5 years of experience, 27.2% report between 5 and 10 years and 40.3% report more than 10 years of experience.

What has changed from year to year is the project management training reported. While the numbers without formal training and with a bachelor's degree in project management or a related discipline have remained fairly static, there has been a slight increase in those reporting a master's degree in project management or a related discipline. There has also been a relatively large increase in those reporting a certificate in project management, rising from a low of 11.4% in 1998 to a high of 26.3% in 2002, with an average of 23.5% of respondents over the last three years.

Organizations

The size of participating organizations has remained extremely consistent from year to year, with the majority of organizations (greater than 70% in all but one year) having more than 1,000 employees. The exception was 2001, which saw a higher percentage of smaller organizations.

The structure of participating organizations has shown a much greater degree of variation than other dimensions. While incorporated, publicly traded organizations have routinely represented a significant percentage of participants, privately-held organizations have declined in overall presence in the study, from an initial proportion of nearly a third of all organizations to just above 10% in the most recent two years. Government participation has shown the most dramatic shift. While 1999 saw a significant proportion of governmental organizations participating, this rapidly decreased in 2000 but has since grown every year; in the most recent year, government organizations together with crown corporations represented over half of all organizations.

Changes To Overall Maturity

Anecdotally, organizational emphasis on project management has been increasing. Intuitively, one would therefore expect relatively stable to somewhat increasing levels of maturity to be exhibited in organizations as they continue to invest in developing and improving their project management capabilities.

Figure 1 – Overall levels of maturity of participating organizations

However, analysis of the overall levels of maturity for organizations shows a surprising and counterintuitive shift, as shown in Figure 1. For the first two years of analysis, the proportions of organizations assessed overall at Levels 1, 2 and 3 were relatively constant: approximately 30% of organizations were at Level 1, two-thirds of organizations reflected a Level 2 capability (all capability areas assessed at or above Level 2) and approximately 5% of organizations were assessed at Level 3. Since 2000, there has been a marked increase in organizations assessed at Level 1, with the percentage exceeding 60% in every subsequent year. In the period between 2001 and 2003, the average percentage of Level 1 organizations was 70.1%.

As a result of the increase in organizations at Level 1, there has been a corresponding decrease over time in organizations evaluated at Level 2 or above. Level 3 organizations declined to 0% by 2003; the average proportion of Level 3 organizations in the first three years was 5.1%, compared with an average of 1.1% in the last three years. The proportion of Level 2 organizations has been surprisingly steady, with close to 30% of organizations at Level 2 since 2000.

Figure 2 – Overall maturity by capability areas of participating organizations

The trends viewed at a macro level in the overall assessment results are also reflected when viewing the overall results by year against each of the capability areas. The results for 1998 and 1999 show overall maturity approximately ½ a level greater than the results for all subsequent years. Beginning in 2000, the average results by capability area are surprisingly consistent, particularly in the process capability areas: the variation between years is typically less than ¼ of a level overall from 2000 to 2003.

What is most significant in these findings is the absence of any gradual shift in results – the 1998 and 1999 results are remarkably consistent, as are those from 2000 through 2003, but there is a significant drop-off in maturity from the 1999 to the 2000 results that remains through all subsequent years. The challenge is to understand the reason for this significant shift. Within the demographics of respondents and respondent organizations, there is minimal indication that any real change in maturity should be reasonably expected. While there was a decline in the number of organizations participating in the study in 2000, subsequent years have seen participation levels similar to those seen in 1998 and 1999. As well, while there has been an increase in government participation and a decrease in participation by privately-held incorporated organizations, this has been a progressive change year-by-year, which does not explain the rapid change or the subsequent static profile of maturity in each year.

There have also been no significant changes to the assessment survey, or to the means by which survey results are evaluated, that would explain this shift. The few changes to the survey that did occur in the period being explored were clarifications of terminology in a small number of individual questions, rather than any significant restructuring of the survey.

The source of the change appears to be external, and attributable to one of two factors: either there has been a material change in how organizations approach project management since 2000, or there has been a significant shift in the composition of organizations participating in the study since that time. The challenge in interpreting the underlying reason for this shift is that there is minimal data to support resolving the question one way or the other.

While a significant change in the overall approach to project management could plausibly be the result of changes in the business environment (as a reaction to the management efforts associated with Year 2000, as well as a response to the economic downturn that occurred throughout 2001 in many regions), the data appears too neat, in terms of the suddenness of the change and its subsequent consistency, to support this as an explanation.

A more likely and reasonable interpretation is a change in the organizations participating in the benchmarking study. While some organizations have been consistent supporters of and participants in the benchmarking study from the outset, for the most part there tends to be a turnover of approximately 50% in participating organizations from year to year. For the first two years of the study, participants were solicited in part through the Project Management Institute mailing list; invitations to participate were therefore received by individuals who had self-selected into the project management community, and whose organizations could as a result arguably be expected to demonstrate a higher degree of maturity than the general population. While the community within PMI has still received invitations through its chapters, this has been a less effective and reliable means of reaching prospective respondents, and the primary means of soliciting participants is now direct mail to an extensive database of contacts maintained by the consulting firm.

As discussed previously, the data does not make it possible to definitively establish the reason for this shift in overall maturity. Further analysis in future years, following up with previous participants and longitudinal assessment of individual companies within the data set would all help to better understand what is influencing the changes being observed within the results.

Changes Within Organizations

While not all organizations that have contributed to the benchmarking effort have participated in multiple years, a sufficient number have done so to enable an initial longitudinal analysis at this level. The following sections provide comparisons across multiple years for three different organizations that have participated in the study for at least two years.

Communications

Figure 3 – Maturity results – Communications organization

Figure 3 illustrates one of the participating organizations that provides the greatest understanding of how changes in maturity can occur as organizational priorities change. The graph provides an overview of the results for the organization for each year that it has participated in the study. The organization is a significant player within the telecommunications field; it has participated in the study each year since 1998, and provides an interesting case study in the evolution of process maturity.

Interestingly, the process capability areas show a fairly consistent maturity of process from 1998 through 2001, with the Project Initiation; Project Planning, Scheduling & Budgeting; and Project Management capability areas scoring at or above Level 3. For these four years, there is very little variation in process results, except for a slight improvement in Project Tracking after 1998, a slightly greater degree of maturity for Project Planning, Scheduling & Budgeting in 1999 and a shift in the degree of maturity associated with Program Initiation. There is larger variation in the organizational capability areas, with a particular improvement in Risk Management in 2000 and 2001 over the results for 1998 and 1999.

What is particularly noteworthy in this organization's results is the significant decline in overall demonstrated process maturity that occurred in the 2002 and 2003 results. The 2002 results show a decline of nearly ½ a level in the process capability areas and approximately ¼ of a level for the six organizational capability areas. From 2002 through 2003, there was a subsequent drop of an additional ½ a level in almost all capability areas. Finally, the results for the Organization capability area provide almost a leading indicator of potential problems; there is a decline in maturity in almost every year for which the study has been conducted.

In discussing these results with the organization, an interesting profile emerged of why the observed decline occurred. The economics of the telecommunications industry have been challenging for a number of years. 2002 saw extensive layoffs, particularly of senior staff, resulting in many senior project managers leaving the organization. Finally, the organizational emphasis placed on managing effectively by projects disappeared; instead, the philosophy shifted from ensuring a formal approach to one of 'get it done.' As a result, organizational participants strongly recognized the shift in maturity reflected within their organization's results.

Government

Figure 4 – Maturity results – Municipal government organization

The organization shown in Figure 4 has a similar profile in its results, though not as drastic as that of the previous organization. The organization is a municipal government, with participation in the benchmarking study co-ordinated by the PMO within the organization for each year except 1999. The highest demonstrated maturity of the organization is again reflected in 1998 and 2000; later years demonstrate a corresponding reduction in overall maturity. From 2001 through 2003, there is again a remarkable consistency in terms of process, in part attributable to a common methodology introduced in late 2000; ironically, this consistency appears to have come at the expense of greater maturity, as the overall scores are lower than they were prior to the methodology being introduced. After 2001, some capability areas demonstrated improvement, in particular Risk Management and Technology, as well as the Organization capability. Interestingly, the net effect of the improvements from 2001 to 2003 has only been to restore these capability areas to their levels at the outset. For the process capability areas, the demonstrated maturity remains approximately ½ a level below the results for 1998.

In reviewing the results with the organization, the results demonstrated within the benchmark study showed a high degree of face validity. While recent years have focussed on improving the areas of Risk Management, Technology and Organization, there has been little co-ordinated focus on the processes since 2000, resulting in correspondingly lower results than in 1998 and 1999. The decline has also been attributed to the champion of the improvement effort retiring from the organization.

Transportation

Figure 5 – Maturity results – Transportation engineering organization

The last organization is a much more straightforward example, and one that reinforces what is typically expected in comparing the results of organizations year over year. As can be seen in Figure 5, there were few changes in the process capability areas between the 2002 and 2003 studies, with the results varying by less than ¼ of a level in each year. There has, however, been a significant improvement in Risk Management of almost ½ a level over previous years. This result could reasonably be predicted given the emphasis the organization has placed on improving and formalizing its risk management approach since the presentation of the initial findings in 2002.

While there is less variation in the results and a smaller number of time periods reflected than for the previous examples, this organization demonstrates the impacts that should appear from period to period where conscious improvement efforts are undertaken in response to participating in the study. Given the emphasis on project management within this organization, improvement in overall maturity can be expected in future years.

Conclusions

The previous section provides an assessment of organizational project management capability using descriptive statistics. While the results that are conveyed are individually interesting, it is helpful to explore how they contribute to our overall understanding of organizational project management. In this section, we summarize the overall results, provide an assessment of the implications and meaning that can be interpreted from these results, and explore how a better understanding could be developed through subsequent research efforts.

Observed Results

The analysis produces some interesting observations, many of which were not expected at the outset. The key observations of the study are summarized below:

  • Overall, there has been minimal change within the demographics of the individual and organizational participants with two specific exceptions. The overall composition of individual participants is relatively consistent, with the only change being an increase in the education of participants – particularly as demonstrated by either a certificate or a master’s degree in project management or a related field. For organizations, there has been a greater emphasis on government organizations and proportionately less participation from privately-held organizations. This gradual shift in demographics does not explain the rapid shift in results being demonstrated, however.
  • Even with the shift in reflected maturity, interesting questions are raised about the profiles of both sets of organizations: those participating in the 1998-1999 period and those participating in 2000-2003. The demographic information suggests some changes that could have contributed: with the growth in governmental participation, there could have been a corresponding decline in overall assessed maturity. As well, the increase in education suggests a greater awareness of the project management discipline, which could result in a more accurate perspective of actual practices being provided. Finally, the change in marketing away from the membership of PMI could indicate that subsequent respondents had less of a vested interest in ensuring assessed project management capability was high.
  • Since the initial decline reflected in the third year of the analysis, there have been few meaningful changes in assessed maturity; the overall maturity of organizations since then has been relatively static. Given the stated focus that many organizations have on improving their project management capabilities, this raises significant questions as to the degree to which these efforts are occurring or are having a demonstrable impact. One possible explanation worthy of consideration is that the observed changes reflect a subsequent abandonment of the capabilities created through the significant investment in project management leading up to the Year 2000 projects.
  • Within individual organizations that have participated in multiple years, the results of improvement efforts as well as changes in strategy are strongly reflected in changes to assessed maturity. The impacts of the changes reflected within the study are also being confirmed as valid in follow-up consultations with the organizations.

Implications

The observations and conclusions raised by this study present some interesting implications for project management in organizations. Even if the shift in maturity reflected in the results is due to a change in those organizations choosing to participate, rather than a real change in how project management is practiced, it raises questions regarding the on-going commitment of organizations to project management. The implication would be that those organizations that have been assessed as being more mature no longer see value and relevance in benchmarking their capabilities. While intuitively one would expect the opposite to be true, the individual case studies provide two instances that illustrate the tendency of organizations to make significant investments in project management only to subsequently abandon them.

As well, the lack of any significant improvement in maturity over the subsequent four years of the study has implications for the improvement efforts being conducted. On an anecdotal basis, many organizations profess to be making conscious efforts to improve their project management capabilities. While within individual organizations there are some impacts on assessed maturity in response to specific improvement efforts, there does not appear to be a meaningful overall improvement as a community in how project management is practiced. Similar to the 'productivity paradox' of IT investments, this raises questions as to the degree to which improvement efforts are in fact being undertaken, and the effectiveness of these change programs in bringing about real improvements in maturity and capability.

Overall, these results lead to a question about the role that project management has for organizations: whether it is viewed as a strategic enabler, a core competency or simply a fad whose time has come and gone. The most likely current answer, based upon both the stated commitment of organizations to improve and the evidenced commitment of time and effort in participating in benchmarking exercises such as this, is that project management is viewed as important by organizations but has not fully developed as an organizational capability. As a result, its role as a strategic enabler or core competency is still in question, and is something that organizations will need to answer in the coming years.

Opportunities For Further Refinement

While the descriptive statistics within this paper provide some useful insights into the changes that have occurred, and not occurred, in how participant organizations approach developing and applying project management, the analysis raises as many questions as it answers, if not more. Many of the questions arise from an inability to support further analysis of underlying causes; this is in part a challenge created by the benchmarking effort being largely survey-based, without follow-on interviews, focus groups or case studies to provide additional context in understanding the choices and influences of organizations.

Some specific opportunities for further improvement of this analysis include:

  • Providing a more rigorous and comprehensive statistical analysis of the underlying data. While this paper provides a descriptive analysis of the available statistics, a more in-depth statistical analysis could provide additional information to support or refute the inferences being drawn here.
  • Encouraging repeat participation from a greater number of organizations, providing a greater understanding of the longitudinal changes that occur within a sufficiently large subset of participating organizations.
  • Including a greater degree of follow-up with participants to verify and validate the results being self-reported. This could include the definition of more concrete and specific measures for project results, allowing a more granular understanding of the association between described project management processes and delivered project results. The limitation on gathering this information, however, is the degree to which organizations track and are able to segment their project results to align with the defined measures.
  • Building a greater understanding of the dynamics occurring within organizations. More of this context would help answer many of the questions about why changes were observed that could not be answered with the current dataset.
  • Looking at the outside influences on the organization, in terms of its economic environment, marketplace and competitive pressures. Understanding the market context would provide a better understanding of how these factors also influence the observed results.

While the longitudinal analysis in this paper provides some valuable understanding of the context of projects in organizations, the answers it provides are still fragmented and incomplete. By incorporating these changes into future versions of the study, it is hoped that greater relevance and insight will result that can constructively contribute to organizations better defining and realizing their project management goals. In the meantime, this public benchmarking initiative continues as it originated: as a means to help organizations and project managers develop greater understanding of our complex world of work. The author wishes to thank those individuals and organizations that have contributed their time and effort to the benchmarking study over the years, and who in doing so have made the analysis in this paper possible.

Aiman-Smith, L., N. Goodrich, et al. (2005). “Assessing your organization’s potential for value innovation.” Research Technology Management 48(2): 37.

Biazzo, S. and G. Bernardi (2003). “Organisational self-assessment options: A classification and a conceptual map for SMEs.” The International Journal of Quality & Reliability Management 20(8/9): 881.

Conlon, E. J. and L. O. Short (1984). “Survey feedback as a large-scale change device: an empirical examination.” Group & Organization Studies (pre-1986) 9(3): 399.

Cooke-Davies, T. J. and A. Arzymanow (2003). “The maturity of project management in different industries: An investigation into variations between project management models.” International Journal of Project Management 21(6): 471.

De Vries, H. and J. Margaret (2003). “The development of a model to assess the strategic management capability of small- and medium-size businesses.” Journal of American Academy of Business, Cambridge 3(1/2): 85.

Deming, W. E. (1993). The new economics: for industry, government and education. Cambridge, Massachusetts Institute of Technology.

Duncan, W. J., M. G. Peter, et al. (1998). “Competitive advantage and internal organizational assessment.” The Academy of Management Executive 12(3): 6.

Garrett, G. A. and R. G. Rendon (2005). “Managing contracts in turbulent times: The contract management maturity model.” Contract Management 45(9): 48.

Hellsten, U. and P. S. Wiklund (1999). “Self-assessment as a facilitator for learning.” Quality Congress. ASQ’s … Annual Quality Congress Proceedings: 423.

Hillson, D. (2003). “Assessing organisational project management capability.” Journal of Facilities Management 2(3): 298.

Holland, C. P. and B. Light (2001). “A stage maturity model for enterprise resource planning systems use.” Database for Advances in Information Systems 32(2): 34.

Humphrey, W. (1992). Introduction to software process improvement. Pittsburgh, Software Engineering Institute, Carnegie Mellon University.

Ibbs, C. W. and Y. H. Kwak (2000). “Assessing project management maturity.” Project Management Journal 31(1): 32.

Jachimowicz, V. A. (2003). “Project management maturity model.” Project Management Journal 34(1): 55.

Jugdev, K. and J. Thomas (2002). “Project management maturity models: The silver bullets of competitive advantage?” Project Management Journal 33(4): 4.

Karapetrovic, S. and W. Willborn (2001). “Audit and self-assessment in quality management: Comparison and compatibility.” Managerial Auditing Journal 16(6): 366.

Kwak, Y. H. and C. W. Ibbs (2000). “Calculating project management’s return on investment.” Project Management Journal 31(2): 38.

Mullaly, M. E. (2004). Executive Summary of the 2003 Organizational Project Management Baseline Study. Edmonton, Interthink Consulting Incorporated.

Nielsen, W. R. and J. R. Kimberly (1976). “Designing assessment strategies for organization development.” Human Resource Management (pre-1986) 15(1): 32.

Sawaya, N. and P. Trapanese (2004). “Measuring project management maturity.” SDM 34(1): 44.

Skulmoski, G. (2001). “Project maturity and competence interface.” Cost Engineering 43(6): 11.

Tuttle, T. C. and J. J. Romanowski (1985). “Assessing performance and productivity in white-collar organizations.” National Productivity Review 4(3): 211.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

©2006 Project Management Institute
