Project Management Practice, Generic or Contextual: A Reality Check

Is project management a generic process or a contextual discipline? This article addresses the question through a study of the commonalities and variations involved in practicing project management, in which the authors surveyed 753 project managers about the tools and techniques they most often use to manage different types of projects. The study seeks to identify the practices that are shared and those that differ across project types and contexts. The article lists the study's research questions and reviews the literature on the generic and contextual natures of managing projects. It outlines the study's methodology and survey parameters and discusses two key concerns in selecting and applying generic practices: organizational support and autonomous use. It then analyzes the survey responses, listing the tools used most and least often and identifying the significant contextual differences between project types.
ABSTRACT

The purpose of this research is to contribute to a better understanding of project management practice by investigating the use of project management tools and techniques and the levels of support provided by organizations for their use. The study examines both general levels of use and variations among project types and contexts. Many aspects of project management practice are common to most projects in most contexts, while others vary significantly among different types of projects and among projects in different contexts. The purpose of this paper is to present empirical results that show both the common elements and the significant variations. The paper is based on a survey of 750 project management practitioners. The use of tools and techniques is seen here as an indicator of the realities of practice. The study found some aspects of practice to be common across all types of projects and all contexts, but on this background of similar patterns of practice, several statistically significant differences have also been identified. The primary focus of this paper is on these variations in practice.

KEYWORDS: application area practices; body of knowledge; context; generic practices; professional practice; project types; tools and techniques

INTRODUCTION

The present paper is based on a large-scale survey of 750 experienced project management practitioners. Some results of this investigation were presented in Besner and Hobbs (2006)—namely, an examination of the perceived value of project management practices and their potential contribution to project success. The focus is now on the examination of contextual differences in project management practice.

A clear understanding of the state and the evolution of professional practice is particularly important to the future development of the field of project management. Directly observing what project managers do, and how they put their knowledge and competencies into action, is a means to understand their practice. These observations provide necessary foundation material for the conceptualization of practice and for theory building. As Blomquist and Nilsson (2006, p. 1) state, “The practice approach is not a substitute to present theorizing but rather a complement that brings substance.” The present paper examines an important and readily observable aspect of project management practice: the use of the tools and techniques that are specific to the field.

A rich array of project management tools and techniques has emerged from practice, as witnessed by the content of A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (PMI, 2004). Identifying the set of project-specific tools and techniques is an important part of defining the frontiers of the profession.

The primary purpose of the PMBOK® Guide is to identify that subset of the Project Management Body of Knowledge that is generally recognized as good practice. “Identify” means to provide a general overview as opposed to an exhaustive description. “Generally recognized” means that the knowledge and practices described are applicable to most projects most of the time, and that there is widespread consensus about their value and usefulness. “Good practice” means that there is general agreement that the correct application of these skills, tools, and techniques can enhance the chances of success over a wide range of different projects. Good practice does not mean that the knowledge described should always be applied uniformly on all projects; the project management team is responsible for determining what is appropriate for any given project. (PMI, 2004, p. 3, emphasis added)

The PMBOK® Guide does not focus only on tools and techniques, but from this perspective it can be seen as providing an inventory of generally applicable and generally valued tools and techniques. This inventory is an important starting point for understanding project management practice. However, it gives no indication of the relative importance of the different tools and techniques, nor of the ways in which their use varies with context and project type.

Within the endeavor of “rethinking project management,” Morris, Crawford, Hodgson, Shepherd, and Thomas (2006) explored the development and updating of formal BOKs (PMBOK, APMBOK, P2M, etc.) and their contribution to project management professionalization and evolution. They point out that empirical evidence regarding the said “generally accepted” body of knowledge is missing. They conclude, “Research has a major role in challenging, shaping and populating such standards” (p. 718). Considering the current activity in generating new standards, and the upgrading of existing ones, they insist that the timing for additional research is “highly appropriate” (p. 718).

The present research addresses the following questions:

  • What is the extent of use of the different project management tools and techniques?
  • What is the level of support for their use provided by organizations?
  • How does the use of tools vary in different types of projects and different contexts?
  • What should be the priorities of practitioners and organizations when choosing to invest in the development of project management tools and techniques?

The examination of the use of tools and techniques by practitioners and their perceptions of these tools and techniques can be viewed as a means for studying the present state of professional practice. In an applied field such as project management, the examination of professional practice can be seen as an examination of the field itself. The results of the examination can, therefore, provide insights into the present state of the field and identify possible future developments, while at the same time providing guidance to practitioners and organizations.

The Literature Review

The reader is referred to Besner and Hobbs (2006) for relevant literature about the value of project management practices and about project management tools and techniques’ use and usefulness. The review here is limited to the most recent developments on those questions and to the literature specifically related to variations in practice by project type and by context.

Wirth (1992) argued that project management is largely generic, that is to say, applicable to many industries with little adaptation. The very existence of the PMBOK® Guide is itself an illustration of the generic nature of project management, although the PMBOK® Guide does emphasize that adaptation is required (PMI, 2004, p. 3). On the other hand, there has been increasing interest in the study of variations in project practice across different types of projects and different contexts. The proliferation of Specific Interest Groups within PMI, and the publishing of the Government, Construction, and US DoD Extensions to the PMBOK® Guide (PMI, 2002, 2003a, 2003b), are also clear indications of variations in project management by application area. This is often seen as a way to develop the field of project management beyond generic knowledge and practice. Payne and Turner (1999) and Shenhar (1998) have shown that project management practices do vary significantly from one type of project to the next. Crawford, Hobbs, and Turner (2005, 2006) have shown that organizations divide their projects into categories in order to apply different tools, techniques, and approaches to different types of projects. They showed that one of the primary reasons organizations create systems for categorizing projects is to adapt their project management methods to the specific requirements of each type of project. This adaptation has both an operational aspect, as organizations seek to improve project performance, and a strategic aspect, as it allows firms to differentiate themselves in competitive markets.

There is, therefore, widespread recognition of the variability of project management practice by project type and by application area and other contextual factors. The questions now deal with the extent of the variation, the project characteristics and project environments that are associated with the greatest variability, and the detailed identification of which practices vary in which contexts. The present paper contributes to the debate in two ways: (1) it demonstrates the existence of both generic project management practice and significant differences in practice and (2) it investigates the specific similarities and differences among project management practices on different types of projects and in different contexts.

There is considerable research studying project management tools and techniques or, more generally, project management practice. The vast majority of this research focuses on one specific project management practice. In general, the selected aspect is of interest to the author and to the reader, who both seek to identify and understand its usefulness and value. However, this body of research does not allow for a comparative evaluation of the relative use and usefulness of those practices. Some research does compare a number of practices, most often in a specific context. Winch and Kelsey (2005), McMahon and Lane (2001), Raz and Michael (2001), Zeitoun (2000), Hargrave and Singley (1998), and Thamhain (1998) focused on specific application areas, knowledge areas, or a specific aspect of the use of tools (e.g., the impact of a specific knowledge area's set of tools in relation to project success, or the barriers to the use or implementation of practices). Very few adopt, as does the present research, a wider view and attempt to identify the general use and usefulness of project management practices (Loo, 2002; Milosevic & Iewwongcharoen, 2004; White & Fortune, 2002). Looking at the larger picture allows the present research to analyze contextual differences in project management practice.

Methodology

Methodologically, the research is consistent with the approaches used by those that have investigated the use of project management tools empirically. Furthermore, by gathering information on the types of projects being managed and the contexts in which they are being carried out, this study is also able to identify both generic aspects of practice common across projects of different types and in different contexts and variability in practice across both project types and differing contexts.

Variations in the use of project management tools across project types and contexts will be considered here as a means of studying variation in project management practice. The use of tools is not, in itself, project management practice. Yet, the use of tools and techniques is one important aspect of practice, and one that is observable and measurable. This investigation was conducted using a Web-based survey.

Design of the Questionnaire

The Web-based questionnaire first gathers demographic information on the respondents (position, education, level of experience, etc.) and then information on industry, organizational context, and project characteristics (Crawford, 2000). The last part of the questionnaire is composed of a series of questions designed to investigate the 70 tools and techniques chosen for the study. These questions measure the extent of actual use by practitioners and the extent of support by their organization. Each is measured on a 5-point Likert scale from no use or support to very extensive use or support.

The list of tools and techniques used in the survey was drawn from the PMBOK® Guide, the works cited earlier, and other sources. The authors selected the tools and techniques that they consider to be identified with the practice of project management. The research reported in the literature review most often included both very general concepts and processes (e.g., training programs, performance measurement) and very specific tools (e.g., WBS, project charter) but does not make an explicit distinction between tools and processes. The present research investigates only tools and techniques that are project-specific and well known. It does not investigate general processes. Restricting the investigation to well-known tools and techniques specific to project management ensures that the questionnaire will be well understood by practitioners.

The authors considered tools and techniques to be those things that project management practitioners use to “do the job” or to “execute a process.” Metaphorically, the tools are used to “execute the recipe” or to “play the score.” They are concrete and specific means by which to apply rules and principles. The tools and techniques are closer to day-to-day practice, closer to the things people do, closer to their tacit knowledge. Koskinen, Pihlanto, and Vanharanta (2003) have investigated the use of tacit knowledge in a project context and have concluded that “tacit knowledge equals practical know-how” (p. 281). An experienced cook can give details about a recipe, but it is only by observing the cook in the kitchen using their tools that one can really learn what has to be done.

A list of 70 project management tools and techniques was prepared in line with the approach described earlier. The tools were then sorted to approximately follow the project life cycle, but in order to help respondents make clear distinctions, tools with similar names or related meanings were placed next to each other in the list. For example, “critical path” was placed next to “critical chain.” The complete list is presented in Table 1. A definition of each of the tools and techniques was provided. The primary sources of definitions were the PMBOK® Guide and Max Wideman's Comprehensive Glossary of Project Management Terms (2003). The list was completed with definitions by the authors.

Soliciting Practitioners

The Web-based questionnaire was pretested to evaluate ease of understanding and the time required to complete. Practitioners were then invited to complete the questionnaire. The solicitation of practitioners was done in part through different networks to which the authors have access. The large majority of the questionnaires were completed by Project Management Professional (PMP®) credential holders responding to an invitation from the PMI Research Department to participate in the study. Practitioners—that is to say, those who are neither consultants, trainers, nor academics—were solicited. The 753 practitioners who completed the questionnaire had the following demographics:

  • Male (67%)
  • Aged 30–50 (74%)
  • Average work experience: 7 to 8 years
  • Current primary role:
    • Project team member (8%)
    • Project manager (51%)
    • Program manager/director (24%)
    • Other (17%)

The respondents work on projects that produce different types of products, as indicated below. Data were gathered on both the sector of activity of the respondent's organization and the type of deliverable produced by the respondent's primary project. An individual working in the financial services or defense industries, for example, may be working on information technology (IT) projects. The percentages below pertain to project type, not industry.

  • Engineering and Construction: 12.3%
  • Business Services: 11.8%
  • IT and Telecommunications: 58.6%
  • Industrial Services: 4.2%
  • Other: 13.1%

 

  • Activity list
  • Baseline plan
  • Bid documents
  • Bid/seller evaluation
  • Bidders conferences
  • Bottom-up estimating
  • Cause and effect diagram
  • Change request
  • Client acceptance form
  • Communication plan
  • Configuration review
  • Contingency plans
  • Control charts
  • Cost/benefit analysis
  • Critical chain method and analysis
  • Critical path method and analysis
  • Customer satisfaction surveys
  • Database for cost estimating
  • Database of contractual commitment data
  • Database of historical data
  • Database of lessons learned
  • Database of risks
  • Decision tree
  • Earned value
  • Feasibility study
  • Financial measurement tools
  • Gantt chart
  • Graphic presentation of risk information
  • Kick-off meeting
  • Learning curve
  • Lesson learned/post-mortem
  • Life Cycle Cost (“LCC”)
  • Milestone planning
  • Monte-Carlo analysis
  • Network diagram
  • Parametric estimating
  • Pareto diagram
  • PM software for cost estimating
  • PM software for monitoring of cost
  • PM software for monitoring of schedule
  • PM software for multi-project scheduling/leveling
  • PM software for resource leveling
  • PM software for resource scheduling
  • PM software for simulation
  • PM software for task scheduling
  • Probabilistic duration estimate (PERT Analysis)
  • Product Breakdown Structure
  • Progress report
  • Project charter
  • Project communication room (war room)
  • Project Web site
  • Quality function deployment
  • Quality inspection
  • Quality plan
  • Ranking of risks
  • Re-baselining
  • Requirements analysis
  • Responsibility assignment matrix
  • Risk management documents
  • Scope statement
  • Self directed work teams
  • Stakeholders analysis
  • Statement of work
  • Team building event
  • Team member performance appraisal
  • Top-down estimating
  • Trend chart or S-Curve
  • Value analysis
  • Work authorization
  • Work Breakdown Structure

Table 1: The 70 project management tools in alphabetical order.

Data Analysis

Two data analysis approaches were used extensively. First, tools were rank-ordered according to average levels of use, both for the entire sample and for subpopulations defined by project characteristics and contextual variables. This approach identified the most and least used tools in the entire sample and in each subpopulation. Second, statistically significant differences in levels of use were sought across different subpopulations. The statistical significance reported in this paper is based on t-tests used to verify differences between means and on chi-square tests for contextual differences.
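As an illustration of these two approaches, the sketch below shows how the rank-ordering and the subgroup comparisons could be computed. It is a minimal sketch only, assuming a hypothetical `responses` table with one 1–5 use score per tool per respondent and illustrative column names; it is not the authors' actual analysis procedure.

```python
# Minimal sketch of the two analysis approaches, assuming a hypothetical pandas
# DataFrame `responses` with one row per respondent and a "<tool>_use" column
# (Likert score 1-5) for each of the 70 tools, plus contextual columns.
import pandas as pd
from scipy import stats

TOOLS = ["progress_report", "kick_off_meeting", "gantt_chart"]  # illustrative subset of the 70

def rank_by_mean_use(responses: pd.DataFrame) -> pd.Series:
    """Rank-order tools by average reported use (the basis of Table 2)."""
    means = {tool: responses[f"{tool}_use"].mean() for tool in TOOLS}
    return pd.Series(means).sort_values(ascending=False)

def compare_subpopulations(responses: pd.DataFrame, group_col: str, a, b) -> pd.DataFrame:
    """t-test of mean use for every tool between two subpopulations (e.g., small vs. large projects)."""
    grp_a = responses[responses[group_col] == a]
    grp_b = responses[responses[group_col] == b]
    rows = []
    for tool in TOOLS:
        t, p = stats.ttest_ind(grp_a[f"{tool}_use"].dropna(),
                               grp_b[f"{tool}_use"].dropna(), equal_var=False)
        rows.append({"tool": tool, "mean_a": grp_a[f"{tool}_use"].mean(),
                     "mean_b": grp_b[f"{tool}_use"].mean(), "p_value": p})
    return pd.DataFrame(rows).sort_values("p_value")
```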

The Examination of Usage Levels for the Entire Sample

The rank-ordering of tools and techniques by decreasing levels of use produces the list presented in Table 2. An examination of Table 2 reveals large variations in use levels among the different tools in the project management toolbox. They are, for the most part, what one might expect. In fact, throughout the data analysis, the face validity of the results is confirmed by the very few counterintuitive results encountered. The labels used as headings in Table 2 are those used to qualify the levels of use in the questionnaire.

The interpretation of this table is rather straightforward; the tool used most extensively is the progress report, while the one used least often is the Monte Carlo analysis. In interpreting Table 2, it must be remembered that small changes in average use can change the rank-ordering. The exact position in this list is, therefore, not meaningful.

 

| From Limited to Extensive Use | From Very Limited to Limited Use | Less than Very Limited Use |
|---|---|---|
| Progress report | Contingency plans | Life Cycle Cost (“LCC”) |
| Kick-off meeting | Re-baselining | Database of contractual commitment data |
| PM software for task scheduling | Cost/benefit analysis | Probabilistic duration estimate (PERT) |
| Gantt chart | Critical path method and analysis | Quality function deployment |
| Scope statement | Bottom-up estimating | Value analysis |
| Milestone planning | Team member performance appraisal | Database of risks |
| Change request | Team building event | Trend chart or S-Curve |
| Requirements analysis | Work authorization | Control charts |
| Work Breakdown Structure | Self directed work teams | Decision tree |
| Statement of work | Ranking of risks | Cause and effect diagram |
| Activity list | Financial measurement tools | Critical chain method and analysis |
| PM software for monitoring of schedule | Quality plan | Pareto diagram |
| Lesson learned/post-mortem | Bid documents | PM software for simulation |
| Baseline plan | Feasibility study | Monte-Carlo analysis |
| Client acceptance form | Configuration review | |
| Quality inspection | Stakeholders analysis | |
| PM software for resource scheduling | PM software for resource leveling | |
| Project charter | PM software for monitoring of cost | |
| Responsibility assignment matrix | Network diagram | |
| Customer satisfaction surveys | Project communication room (war room) | |
| Communication plan | Project Web site | |
| Top-down estimating | Bid/seller evaluation | |
| Risk management documents | Database of historical data | |
| | PM software for multi-project scheduling/leveling | |
| | Earned value | |
| | PM software for cost estimating | |
| | Database for cost estimating | |
| | Database of lessons learned | |
| | Product Breakdown Structure | |
| | Bidders conferences | |
| | Learning curve | |
| | Parametric estimating | |
| | Graphic presentation of risk information | |

Table 2: The 70 tools in decreasing order of average use.

Project Management Software

For some members of the project management community, the expression “project management tool” refers to computer-based tools and specific software products. The term is used here in a broader sense analogous to the use found in the PMBOK® Guide. There has been considerable interest in the evaluation of these computer-based tools over the years (PMI, 1999). Research has also been conducted on the use and usefulness of these specific tools (Fox & Spence, 1998). From the point of view of the present research, these tools are among the many found in the project management toolbox and they are included in the investigation. Rather than investigate specific software products, the present study uses a more generic approach, identifying eight functionalities often served by project management software. These are presented in alphabetical order in Table 1 and are placed according to their level of use in Table 2.

The eight functionalities of project management software vary greatly in their frequency of use. PM software for task scheduling is the third most frequently used tool overall. This very frequent use for task scheduling corresponds to the authors’ observations in many organizations. In fact, the three most frequent usages are related to scheduling; the uses for monitoring of schedule and for resource scheduling are further down the list of the most frequently used tools. Use for simulation, on the other hand, is near the very bottom of the list. The other four uses of project management software are in the middle column. Overall, the use of project management software decreases for more complex usages.

The Most Extensively Used Tools

The list of the most extensively used tools is composed of very well-known and widely used tools. There are few surprises here. However, a comparison with the content of the PMBOK® Guide reveals noteworthy discrepancies. The second most used tool, the kick-off meeting, is not found in the PMBOK® Guide. This briefing on the goals of the project, involving stakeholders and participants (Wideman, 2003), was introduced into the project management field by Hamburger (1992) and others early in the 1990s. It has since become one of the most widely used project management tools. This anomaly would be worth investigating by future PMBOK® Guide update teams. Future update teams should also consider including requirements analysis, as it too is among the most widely used tools but is not included in the current version of the PMBOK® Guide.

The Least Used Tools

The list of tools at the other end of the spectrum, those with “less than very limited use,” requires some interpretation. The use levels of these tools are very low. Several factors may explain the presence of a tool on this list.

Individuals can use some tools without any organizational investment or support. The use of a Gantt chart, for example, does not require any specialized resources. However, the use of databases does require significant organizational resources and support. All of the database tools are in the bottom half of the overall list, and two are in the last column. The difficulty of using these tools without organizational support may explain, at least in part, their low use levels.

All the tools on the list have been in wide circulation for over 10 years, with the exception of the critical chain method and quality function deployment (QFD). The relatively recent arrival of these tools on the project management scene may, at least partially, explain their low use levels.

Low-use scores may indicate that a tool is not applicable in a respondent's context. The survey asked the participants to indicate if a tool was not applicable to their work. Table 3 lists the tools that were identified as “not applicable” (N/A) by more than 15% of the respondents. Overall, the percentages are relatively low, showing that all tools are applicable most of the time. More specifically, the table shows that 24% to 32% of the respondents consider the bidding tools as not applicable. The bidding process is clearly not applicable in all project contexts and is used much more in specific industries. Further analysis reveals that bidding tools are indeed used more in engineering and construction projects, large projects, and projects with external customers.

One could argue that the very poor perception of these tools is caused by ignorance, but the data suggest otherwise. The respondents were invited to indicate when they had insufficient knowledge of a tool or technique to have an opinion as to the effect of more extensive or better use. Because each participant indicated insufficient knowledge of at least one tool, it is reasonable to suppose that the scores are a good indication of the participants' views and not an indication of inadequate knowledge of the tools themselves. Table 3 also presents the percentage of respondents who indicated that they had no opinion; only tools for which “no opinion” responses exceeded 10% are shown. The overall low percentage of respondents with no opinion confirms that, as intended, well-known tools and techniques were chosen.

The Monte Carlo analysis is also high on this list, but its low use and lack of applicability are not explained by contextual factors. In all the different analyses of the data, this tool is always the least used. A word search in the PMBOK® Guide revealed three references to this technique, each of which implies typical and frequent use; it is also included in the glossary. Likewise, the PMBOK® Guide makes several references to decision trees, even devoting a subsection to them. Those references also imply frequent use.

 

| Tool | % N/A | Tool | % No Opinion |
|---|---|---|---|
| Bidders conferences | 32 | Monte-Carlo analysis | 14 |
| Bid/seller evaluation | 30 | Pareto diagram | 14 |
| Monte-Carlo analysis | 26 | Critical chain method and analysis | 13 |
| Bid documents | 24 | Control charts | 13 |
| Product Breakdown Structure | 24 | Decision tree | 13 |
| Trend chart or S-Curve | 23 | Value analysis | 12 |
| Pareto diagram | 23 | Trend chart or S-Curve | 12 |
| Life Cycle Cost (“LCC”) | 22 | Quality function deployment | 12 |
| Database of commitment data | 22 | Cause and effect diagram | 12 |
| Critical chain method and analysis | 21 | Self directed work teams | 12 |
| Control charts | 21 | Learning curve | 11 |
| Quality function deployment | 21 | Parametric estimating | 11 |
| Cause and effect diagram | 21 | | |
| Value analysis | 20 | | |
| PM software for simulation | 19 | | |
| Parametric estimating | 19 | | |
| Learning curve | 19 | | |
| Decision tree | 19 | | |
| PERT Analysis | 18 | | |

Table 3: Left: Percentage of respondents indicating a tool is not applicable to their work. Right: Percentage of respondents with “no opinion.”

Four tools associated with the quality management movement show “less than very limited use” and are also on the list of tools perceived as nonapplicable in more than 15% of cases; these are the quality function deployment, control charts, cause and effect diagrams, and Pareto diagrams. Other tools associated with the quality movement are among the most extensively used: quality inspection and customer satisfaction surveys. The quality plan received a score just above the average. A summary analysis of the PMBOK® Guide reveals that the PMBOK® Guide has subsections devoted to the Pareto diagram, control chart, and cause and effect diagram. It also includes the first two in the glossary.

The results here question how typical and frequent the use of these techniques really is. Future PMBOK® Guide update teams might reevaluate the inclusion of these techniques in the PMBOK® Guide, or at least more appropriately qualify their frequency of use by project management practitioners.

The Tools in the Middle Column of Table 2

The center column of Table 2 contains a long list of tools whose use levels are neither very high nor very low. The vast majority are well known and often encountered tools. The discussion below focuses on examples of tools whose presence in this list might be surprising or out of line with the content of the PMBOK® Guide. Stakeholder analysis and earned value are mentioned very frequently in the PMBOK® Guide but are reported here as having respectively “limited” and “very limited use.”

The infrequent use of stakeholder analysis might be interpreted to mean that this type of analysis is only used in specific phases or certain project activities and not in others. It could be a very important analysis, the results of which are critical to project success, but be done infrequently. The results of a stakeholder analysis often include very sensitive information. For this reason, it may be unwise to document this information. It is possible that many practitioners do informal or intuitive stakeholder analysis but do not document the results, and therefore did not report use of this tool. Because stakeholder analysis is the tool most likely to produce sensitive information, this concern is less likely to apply to other tools.

The infrequent use of earned value does not seem to be explainable by the same logic. The infrequent use of earned value may indicate that this tool is not perceived as being as useful as its presence in the PMBOK® Guide might suggest. Further analysis found that earned value is associated with large projects and with engineering and construction projects. These are the types of projects where this technique was first developed. The results here indicate that it remains more prominent on these types of projects.

The feasibility study is another tool whose presence in this middle list merits some comment. There are seven mentions of feasibility studies in the PMBOK® Guide. However, in four of these, the PMBOK® Guide indicates that the feasibility study may be outside the project life cycle. The term is never associated with expressions such as “typical,” “usual,” or “common,” three terms that are used in the PMBOK® Guide to qualify Monte Carlo and decision tree analysis. The feasibility study is not included in the glossary. The results here indicate that feasibility studies, while not a frequently used tool, are used more frequently than many of the other tools and techniques prominent in the PMBOK® Guide. Here again, future PMBOK® Guide update teams should take this into consideration.

The Effects of Organizational Support

Table 4 presents the summary of the results relative to use and organizational support. The table presents the tools that received the highest and the lowest scores for each of these variables. It also reports an additional variable computed from the scores of the two main questions: the use of tools without organizational support or “autonomous use.” The analysis to this point has focused on the extent of use of the different tools. The following section of the paper examines the organizational support and autonomous use of project management tools.

The Level of Organizational Support for Use of Tools

Tools and techniques are used within an organizational context. The organization can provide support for the use of some tools and techniques, and often does. Considering critical factors in project success, Milosevic and Patanakul (2005) argued that the support of the organization in selecting the set of “mutually compatible” tools that will be used “consistently” by project teams can be essential, especially in the presence of less-experienced project managers.

The level of support was measured by a specific question on support for each tool or technique and also by general questions on the organizational context, including questions related to the existence of well-documented methodologies and the level of project management maturity within the organization. The maturity variable will be examined more closely below. The study evaluated the level of support for two reasons: (1) to gather information as to the perceived level of support being provided and (2) to control for the effects of support on use.

The correlation analysis showed a very strong relationship between use levels and the levels of organizational support (Pearson's correlation r = 0.97). This reveals that practitioners use tools for which their organizations provide support and that organizations support tools that practitioners use. There seems to be a healthy equilibrium between practitioners’ desire to use specific tools and the tools that organizations choose to support. Because of the strong correlation between organizational support and use levels, this relation is omnipresent in the different analyses. It is always part of the explanation for particular use levels. It will, therefore, not be highlighted in subsequent analysis but should be assumed to be present in all cases.
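The correlation reported above can be reproduced, in principle, by correlating the per-tool mean use score with the per-tool mean support score. The sketch below assumes the same hypothetical `responses` table as the earlier sketch, with illustrative "<tool>_use" and "<tool>_support" columns; it is not the authors' actual script.

```python
# Sketch of the use/support correlation across the 70 tools, assuming hypothetical
# "<tool>_use" and "<tool>_support" columns (Likert scores 1-5) in `responses`.
from scipy import stats

def use_support_correlation(responses, tools):
    mean_use = [responses[f"{t}_use"].mean() for t in tools]
    mean_support = [responses[f"{t}_support"].mean() for t in tools]
    r, p = stats.pearsonr(mean_use, mean_support)  # the paper reports r = 0.97
    return r, p
```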

Autonomous Usage

At first sight, the organizational support column in Table 4 seems very similar to the use column, but some interesting differences exist. In order to highlight those differences, a derived variable was calculated as the difference between the level of use and the level of organizational support. This variable, called “autonomous use,” represents the use that would be observed with the organizational support removed. It is a difference in score, not a difference in rank; it can be expressed as follows:

Autonomous use = Level of use − Level of organizational support
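As a minimal sketch, the derived variable could be computed per tool as follows, again assuming the hypothetical `responses` table used in the earlier sketches; a positive score indicates use beyond the level of support.

```python
# Sketch of the "autonomous use" score: reported use minus reported organizational
# support, averaged per tool. Column names are illustrative assumptions.
import pandas as pd

def autonomous_use(responses: pd.DataFrame, tools) -> pd.Series:
    scores = {t: (responses[f"{t}_use"] - responses[f"{t}_support"]).mean() for t in tools}
    return pd.Series(scores).sort_values(ascending=False)  # high = used with little support
```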

If someone considers a tool intrinsically valuable, that person would try to use it even if it were not supported by the organization. However, not all tools are easy for individual practitioners to use without the support of their organizations. Individual practitioners can use all of the tools that score high for autonomous use with little or no organizational support. These are all tools with relatively high use levels.

Several factors can come into play to produce low levels of autonomous use. Some tools are very difficult for individual practitioners to use on their own initiative. These tools require significant organizational support. All five databases specified in the survey are in the list of tools with low autonomous use. Their level of use can be explained by the support they receive from the organization. Some other tools are almost never used and receive no organizational support; these include the following: the trend chart or S-curve, Pareto diagram, control chart, quality function deployment, Monte Carlo analysis, and project management software for simulation. Interestingly, the widely used customer satisfaction survey is also on the list of tools with the lowest scores for autonomous use. This indicates that practitioners use this tool a great deal but only because their organizations push them to do so. In fact, completing such surveys is very often an ISO requirement reinforced by organizational procedures and controls.

In conclusion, many different circumstances can produce low levels of autonomous use, depending on the characteristics of the tools and the practitioner perceptions of their usefulness. On the other hand, practitioners choosing to use tools they feel provide value to them in the management of their projects can best explain high levels of autonomous use.

Is Project Management Practice Generic or Specific to Different Contexts and Different Project Types?

If project management practice is generic, then the pattern of practice observed for the entire population will be similar to the patterns observed across differing contexts and different types of projects. If project management practice is context-specific or specific to differing project types, then statistically different usages will be found when comparing across contexts and project types. Somewhat paradoxically, the data gathered during this study lend support to both positions; on a background of similar patterns of practice, several statistically significant differences have been identified.

Table 4: The highest and lowest scoring tools on use and organizational support.

What Is Similar?

Splitting the sample in two, using contextual variables or project characteristics, and then rank-ordering the tools by the level of use produces lists that are surprisingly similar. Rank-ordering is not, however, the best measure of similarity and does not readily lend itself to statistical verification. Visual inspection of the different lists, nevertheless, indicates that the most often used and the least often used tools are virtually the same in the split and full samples. For example, the 10 most commonly used and least used tools in the overall sample are among the 15 most and least used tools in almost all the subpopulations. The few exceptions are related to project types and will be discussed in the sections below. We conclude, therefore, that there is a common pattern of use across the project management community, a commonality that spans differences in context and project type. The use levels that are shown in Table 2 are, therefore, typical of use levels in all contexts and all types of projects and constitute the generic pattern of practice.

What Are the Differences?

The investigation of differences in context and project type reveals several statistically significant and important differences. A t-test was used throughout to identify dissimilarities in use across subgroups of responses. Among the different subgroupings, two stand out as showing the greatest difference in use levels. Comparisons made across organizations of different levels of project management maturity reveal statistically significant differences in use for all project management tools. Comparisons on the basis of project size also identified large numbers of statistically significant differences in average use; 64 of the 70 tools showed higher use levels on larger projects. The following sections present the analyses of these and other variations in context and project type.

Relationships Among Contextual Variables

A relationship might exist between level of organizational maturity and project size. The authors checked for such a relationship using cross-tabulations and Chi-square statistics. The association between size and maturity level is statistically significant but relatively weak.1 The same concern exists with respect to relationships between any of the contextual variables in this study. These relationships have been examined and will be reported where they are noteworthy. Unless it is stated otherwise, the reader should assume that the influences of different contextual variables reported here are independent of each other.
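A check of this kind can be sketched as a cross-tabulation followed by a chi-square test, as below; the grouping columns are illustrative assumptions, not the authors' actual variable names.

```python
# Sketch of the association check between two contextual variables, e.g., project
# size group and organizational maturity group (illustrative column names).
import pandas as pd
from scipy import stats

def association_check(responses: pd.DataFrame, var_a: str = "size_group", var_b: str = "maturity_group"):
    table = pd.crosstab(responses[var_a], responses[var_b])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    return chi2, p  # a significant but weak association is reported for size vs. maturity
```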

Variation by Organizational Project Management Maturity Level

The primary objective of this paper is not the study of organizational project management maturity. However, some characteristics of mature organizations have been identified. As noted earlier, mature organizations tend to have larger projects. Mature organizations also tend to be larger. Organizations with external customers tend to be more mature, as well. As would be expected, they also have better defined projects.

Respondents rated the level of maturity of their organizations on a scale of one to five, similar to the Software Engineering Institute's Capability Maturity Model (CMM) scale. The responses were recoded into two groups—those reporting maturity levels one and two and those reporting level three and above—thus dividing the sample into almost equal groups (56% and 44%). The test of the differences in the mean use revealed statistically significant differences for all of the tools, all tools being used more often on projects in the context of a mature organization than in less mature organizations. For 66 tools, the level of statistical significance is p < 0.001; for four tools, the level is between p = 0.001 and p = 0.013.
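The maturity comparison described above can be sketched as a binary recoding followed by per-tool t-tests; the same procedure applies to the project-size comparison reported later. Names and thresholds below are illustrative, not the authors' scripts.

```python
# Sketch of the maturity comparison: recode the 1-5 maturity rating into two groups
# and count the tools whose mean use differs significantly between them.
from scipy import stats

def maturity_differences(responses, tools, alpha=0.001):
    low = responses[responses["maturity"] <= 2]   # levels 1-2
    high = responses[responses["maturity"] >= 3]  # levels 3-5
    significant = []
    for t in tools:
        _, p = stats.ttest_ind(high[f"{t}_use"].dropna(),
                               low[f"{t}_use"].dropna(), equal_var=False)
        if p < alpha:
            significant.append(t)
    return significant  # the paper reports 66 of 70 tools at p < 0.001
```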

There are, therefore, very significant differences in the practice of project management in mature and less mature organizations. However, it is especially interesting to observe that the most frequently used and the least frequently used tools in both mature and less mature organizations are virtually the same. The generic practice of project management identified in Table 2 is present in both mature and immature organizations. The difference between project management practice in mature and less mature organizations is first and foremost in the overall frequency of use of project management tools.

Comparison of the levels of autonomous use shows that 18 of the 70 tools are used with significantly more autonomy in the low maturity group. No tools are used with significantly more autonomy in mature organizations. Low maturity organizations offer less support. In order to do their jobs, project managers must, therefore, take more individual initiative to use project management tools beyond the level supported by the organization. This is again a perfectly understandable result, which confirms the reliability of the data set.

 

| Category | Tools |
|---|---|
| Scope planning and control, and contract management | Statement of work; Work Breakdown Structure; Milestone planning; Baseline plan; Re-baselining; Work authorization; Change request; Configuration review; Client acceptance form; Bid documents; Database or spreadsheet of contractual commitment data |
| Cost estimating | Top-down estimating; Parametric estimating; Bottom-up estimating; Database for cost estimating; PM software for cost estimating |
| Historical data | Database of lessons learned; Database of historical data |
| Cost and schedule control | Trend chart or S-Curve; Earned value; PM software for monitoring of cost |
| Quality control | Quality plan; Quality inspection; Customer satisfaction surveys |
| Risk management | Database of risks; Risk management documents |

Table 5: Tools used more on projects for external customers.

Project Size Makes a Difference

Project size is the second variable for which marked differences in use were observed for a very large number of tools. Dollar value was used as the metric. This is not a perfect measure of size, but it is an approximation for which data can easily be gathered. The responses were divided into two groups, less than and greater than $1,000,000, which produced a 46%/54% split. Analysis of this grouping revealed statistically significant differences in average use for 64 of the 70 tools, with larger projects using tools more often than smaller projects in all cases. The level of significance was p < 0.001 for 50 tools and between 0.001 and 0.038 for the other 14 tools. Here again, the same pattern of most often and least often used tools found in the entire sample, and shown in Table 2, was found on both large and small projects, indicating the same overarching pattern of generic practice.

Data on project duration were also collected. A large number of tools were found to be used more often on projects lasting over one year; in total, 42 statistically significant differences in use were found. As might be expected, a strong relationship was found between project duration and dollar value. Both are measures of project scope and, as such, produce a similar effect on project management practice, part of which is an increased use of a large number of project management tools.

The top 10 significant differences in tool use between large and small projects include four tools for monitoring and controlling, three functionalities of project management software, and two tools for managing risks. The greater use of tools on large projects is in part explained by the fact that some tools are considered nonapplicable (N/A) to small projects. The proportion of N/A responses is significantly greater on small projects for 41 tools, while no tool shows a greater proportion of N/A responses on large projects. In conclusion, the project management toolbox of 70 tools and techniques examined in this study is much better adapted to the needs of larger projects.

Projects for Internal or External Customers

Projects for internal customers are managed quite differently from those for external customers. The study identified 27 tools for which use levels are significantly higher on projects for external customers. These more extensively used tools are presented in Table 5. The issues of scope planning and control, contract management, cost estimating, cost and schedule control, quality control, and risk management are more prevalent on projects for external customers, as shown by the specific set of tools that are used more extensively on them.

Of the 70 tools in this study, cost/benefit analysis is the only one that is used more extensively on projects for internal customers. This tool is used in project evaluation activities typical of the customer perspective during the identification phase of the project life cycle. These activities are more prevalent in internal projects and are often carried out in the customer organization before the project is let out to an external supplier. Overall, the project management toolbox seems to be much better adapted to the specific requirements of external projects.

The Level of Uncertainty in the Project Definition

The study also investigated differences in project management practice between well-defined and ill-defined projects. Only three tools showed significantly different levels of use: the project charter, databases for cost estimating, and databases of lessons learned. All are used more on well-defined projects. The charter is usually established at the outset of the project, and it is more difficult to write and gain approval of a project charter on ill-defined projects. This may explain the lower level of use. It may also indicate that the use of the project charter is better adapted to well-defined projects.

The cost of a well-defined project can be estimated using a detailed analysis of the project content. This analysis can draw upon historical data. This may explain the more extensive use of databases for cost estimating in this type of project. Databases of lessons learned are often structured using a detailed set of project characteristics. If the project is ill defined, these may be more difficult to use. Another plausible explanation for the less intensive use of databases of lessons learned in ill-defined projects may be that the key issue is more a process of uncovering what needs to be done than a question of finding the best way to do it. Lessons learned deal more with finding better ways of doing things than with the identification of what needs to be done. No tool was found to be used more on ill-defined projects. The traditional project management toolbox does not seem to contain tools that are especially well adapted to the needs of managers of this type of project.

Project Familiarity and Similarity

Some organizations manage projects that are quite similar to each other, while other organizations manage projects that differ significantly from each other. The study identified three tools that are used more in contexts where projects differ from one another: the project charter, the ranking of risks, and the decision tree. The decision tree is used infrequently in all contexts but is used more extensively where projects differ. In such contexts, the project charter is used more extensively to better define the specific requirements of each project, particularly for projects that can be well defined, as presented earlier. The more extensive use of the ranking of risks is indicative of a greater focus on risk identification in less familiar projects. The decision tree is likely to be used in situations where more analysis is required to define the project, which is the case in contexts where projects are less familiar.

Differences Across Product Types

As was indicated earlier, the sample is weighted toward IT projects but includes sufficient respondents reporting on engineering and construction and business services projects to allow comparisons among the three. This was done using three separate tests, each of which compared one type of project against the rest of the entire sample. This produced three sets of comparisons: engineering and construction (E&C) against the rest of the population, IT against the rest, and business services (BuS) against the rest. Each of the comparisons identified statistically significant differences in use for several tools. In each case, some of the tools were used more often and others were used less often. The comparisons among the three domains show complex relationships.
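Each one-vs-rest comparison can be sketched as below, recording for every significantly different tool whether the project type uses it more or less often than the rest of the sample; column names and the project-type label are illustrative assumptions.

```python
# Sketch of a one-vs-rest comparison (e.g., E&C projects against the rest of the
# sample), the basis of the counts in Table 6 and the directions in Table 7.
from scipy import stats

def one_vs_rest(responses, tools, type_col="project_type", project_type="E&C", alpha=0.05):
    in_type = responses[responses[type_col] == project_type]
    rest = responses[responses[type_col] != project_type]
    results = []
    for t in tools:
        _, p = stats.ttest_ind(in_type[f"{t}_use"].dropna(),
                               rest[f"{t}_use"].dropna(), equal_var=False)
        if p < alpha:
            direction = "more" if in_type[f"{t}_use"].mean() > rest[f"{t}_use"].mean() else "less"
            results.append((t, direction, p))
    return results
```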

Projects that deliver different types of products have several characteristics that discriminate among them. E&C projects are of higher average monetary value and longer duration than the other types of projects. The IT projects, on the other hand, are of significantly shorter durations: 75% last under one year, compared to 59% for the rest of the sample. As would be expected, E&C projects were more often for external customers, while BuS projects were mostly for internal customers. The project managers of business services projects work on a wider variety of projects than the others.

The results can be summarized in several ways. Table 6 presents the number of significant differences in use that were identified for each type of project. Some of the differences involve using a particular tool more often than the rest of the sample and others, less often. Only tools with usages that differ with a significance level of p < 0.05 are presented and discussed.

The much larger sample size for IT projects may explain the larger number of tools for which statistically significant differences in use were identified. One might expect that the tools that are used more in IT would be used less on the other product types and vice versa. However, the relationship is more complex.

Comparisons between two types of projects are much easier to present and understand. Comparing three very different sets of practices across a large number of variables at the same time is more difficult. Comparing E&C projects with IT projects highlights a few similarities and many differences. The BuS projects are, in turn, quite different from either of these other two types. The comparisons will, therefore, be made first between E&C projects and IT projects, which will be followed by comments on BuS projects and how they differ from the other two types. The comparisons among the three are presented in Table 7.

 

| Number of Tools Used | E&C | IT | BuS |
|---|---|---|---|
| More often | 20 | 22 | 5 |
| Less often | 4 | 9 | 9 |
| Total differences | 24 | 31 | 14 |

Note. E&C = engineering and construction, IT = information technology, BuS = business services.

Table 6: Comparison of the number of tools used significantly more and less often.

| Tools and Techniques | E&C | IT | BuS |
|---|---|---|---|
| Scope and requirements definition | | | |
| Scope statement | more * | more * | |
| Requirements analysis | less * | more * | |
| Project charter | less | | more * |
| Cost/benefit analysis | | | more * |
| Stakeholders analysis | | | more |
| Contract award | | | |
| Bid documents | more * | less | |
| Bidders conferences | more | less | |
| Bid/seller evaluation | more * | less | |
| Organizing | | | |
| Communication plan | less | more * | |
| Project communication room (war room) | | more | |
| Project Web site | | more | |
| Kick-off meeting | | more * | |
| Responsibility assignment matrix | | more * | |
| Self directed work teams | | less | |
| Team building event | less | | more * |
| Planning and control metrics | | | |
| Financial measurement tools | more * | less | more * |
| Estimating cost | | | |
| Database for cost estimating | more * | less | |
| Top-down estimating | more * | | |
| Parametric estimating | more | | |
| PM software for cost estimating | more | | |
| Planning | | | |
| Quality plan | more * | more | less |
| Baseline plan | | more * | |
| PM software for task scheduling | | more * | less * |
| PM software for resource scheduling | | more * | |
| PM software for resource leveling | | more | |
| Critical path method and analysis | more * | | less |
| Control | | | |
| Progress report | | more * | |
| Change request | | more * | |
| Configuration review | | more | less |
| PM software for monitoring of schedule | | more * | less * |
| PM software for monitoring of cost | more | | less |
| Earned value | more | less | |
| Trend chart or S-Curve | more | less | |
| Quality inspection | more * | | less |
| Control charts | more | | |
| Work authorization | more * | | |
| Client acceptance form | | more * | |
| Re-baselining | | | less |
| Design | | | |
| Quality function deployment | more | | |
| Value analysis | more | less | |
| Risk | | | |
| Risk management documents | | more * | |
| Contingency plans | | more * | |
| Ranking of risks | | more | |

Note. E&C = engineering and construction, IT = information technology, BuS = business services. The * indicates tools that are among the most frequently used on each type of project. Those without a * remain at lower use levels.

Table 7: Significant differences in usage across three types of projects

Care must be exercised in interpreting the results portrayed in Table 7. One must avoid confusion between the identification of the more frequently used tools and the identification of tools for which significant differences in use levels have been identified. Table 7 focuses on the latter. The fact that a tool is not mentioned does not mean that it is not used. It means that its use is neither greater nor less than the use observed in the overall sample. The “Gantt chart,” for example, is not mentioned in Table 7. This does not mean that it is not used. Quite to the contrary, it remains among the most frequently used tools on all types of projects. The same is true for five other tools on the list of the most frequently used presented in Table 2—milestone planning, statement of work, activity list, lessons learned/post-mortem, and customer satisfaction survey. Tools appear in Table 7 because significant differences in their use have been identified. A small star has been inserted in the table to identify the tools that are among the most frequently used on each type of project. Those that do not have a small star remain at lower use levels. Thus, a particular tool may be used relatively more frequently and still show limited use, indicated by “more” without a star. Likewise, a tool used relatively less frequently can still be among the most frequently used tools, as indicated by the note “less *.”

Comparisons Between Engineering and Construction Projects and IT Projects

Contrasting use between E&C projects and IT projects can be observed for 10 tools. This is to say that these tools are used significantly more on one type of project than in the rest of the total sample and significantly less on the other type. Requirement analysis and the tools related to bidding are examples of this phenomenon. Contrasting differences highlight very important distinctions between the two types of projects.

In Table 7, the tools have been grouped according to the purposes they serve in managing projects. Five tools have been grouped under “scope and requirements definition”; three of them concern E&C and IT. Both E&C and IT projects use the scope statement more often than the rest of the sample. Additional analysis reveals that this tool is used significantly more often on IT projects than in E&C (p = 0.005; this information is not shown in Table 7).

However, the two types of projects contrast in their use of requirements analysis, IT using it more often than the rest of the sample and E&C, less often. In E&C projects, requirements are often established and documented before the project is initiated. Requirements also have a tendency to remain stable throughout the project life cycle. In this type of project, the mandate is to meet the prespecified requirements. This contrasts with IT projects, in which analysis to develop the requirements is often a long and complex process with several iterations. Identification, elaboration, validation, reevaluation, and revalidation of the requirements are important aspects in managing the project. These take place over time, as functionality is progressively defined.

The project charter is another tool used to define project scope. This tool is used less in E&C than in the rest of the sample. E&C projects rely more heavily on contracts to specify the mandate and less on project charters, as can be seen in the next group of tools.

Awarding contracts by competitive bidding is a very important aspect of project management in E&C and very much less so in IT projects. The three tools in the survey dealing with bidding show contrasting usages between the two types of projects. These tools were identified as being used significantly more often in E&C and less often in IT projects. Contracts provide a powerful coordinating mechanism in E&C projects.

IT projects seem to rely more on tools for communicating and for organizing to ensure coordination. Communication plans show contrasting use, with more use in IT and less in E&C than in the overall sample. The communication room (or “war room”) and the project Web site are also powerful communication tools, and both are used more in IT projects. The responsibility assignment matrix and the kickoff meeting are both communication and organizational tools; both are also used more in IT projects. The increased use of organizational and communication tools may be an adaptation to more complex organizational environments and to the association of IT projects with organizational change. The indication that self-directed teams are used less on IT projects than in the rest of the sample is an unexpected result; it is one of the few such results encountered during data analysis.

Both IT and E&C projects make extensive use of tools for planning and control. However, the tools that show markedly greater use are different for each type of project. Only the quality plan is common to both. This is consistent with the concern for managing scope, as seen with the use of the scope statement.

An examination of the entire set of tools for planning and control reveals that E&C projects are more centered on cost in general and estimating in particular, while IT projects are more focused on schedule and resource allocation. The contrasting usages of financial measures and of databases for cost estimating, as well as the contrast between the use of project management software for monitoring cost in E&C and for monitoring schedule in IT, highlight this difference. Earned value and trend charts, or S-curves, are also cost-control-oriented tools associated in practice with each other. Both of these tools are associated with the management of tangible deliverables, such as those found in E&C, which may explain the contrast in use between the two types of projects. The greater use of the client acceptance form in IT projects is consistent with the progressive definition of requirements and the multiple partial deliveries found in these projects. Meanwhile, the greater use of work authorizations on E&C projects is consistent with the preplanned work and contractual relations found on this type of project. The fact that people on IT projects are more familiar and more at ease with IT products does not seem to influence the use of project management software and database tools, as they are used on both types of projects. The contrast is rather between use for planning and controlling cost on the one hand and for planning and controlling schedule and resources on the other.

Risk management is an area in which IT projects show greater use of tools and techniques, with three tools being used more often than in E&C. Conversely, E&C shows greater use of value analysis and quality function deployment (QFD), tools historically associated with the design of tangible products; the use of value analysis, in particular, contrasts between the two types of projects.

Both E&C and IT projects make extensive use of project management tools and techniques. However, there are marked differences when the two types of projects are compared with each other. Further analysis revealed that significantly fewer N/A responses were reported for 39 tools by those working on IT projects, indicating that the content of the project management toolbox is perceived as more broadly applicable to IT projects.
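
For readers who want to see how such an applicability comparison might be run, the following is a minimal sketch based on counts of N/A responses for a single tool; the counts, group sizes, and the two-proportion z-test are illustrative assumptions, not figures or procedures taken from the study:

```python
# Illustrative sketch only: compare the share of N/A responses for one tool
# between IT respondents and the rest of the sample.
# The counts below are invented for illustration; they are not study data.
from statsmodels.stats.proportion import proportions_ztest

na_counts = [12, 60]        # N/A responses: [IT group, rest of sample]
group_sizes = [240, 510]    # respondents answering the item in each group

z_stat, p_value = proportions_ztest(count=na_counts, nobs=group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A significantly lower N/A share in the IT group would suggest the tool is
# perceived as applicable to IT projects.
```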

Comparisons With Business Services Projects

E&C and IT projects call for significantly greater use of a relatively large number of project management tools, only two of which are common to both, and many of which show contrasting use. The comparison between these two types of projects on the one hand and business services projects on the other reveals significant differences in project management practices. Only five tools are used more on business services projects than on projects of other types. Only one of these, financial measurement, is also used more on E&C projects, and none are shared with IT projects. Furthermore, no tools that are specific to planning and control are used more often on business services projects than on other types of projects, and nine such tools are used less often. Overall, business services projects make less use of project management planning and control tools than do other types of projects. Further analysis reveals that a significant portion of the content of the project management toolbox is less applicable to business services projects; significantly greater numbers of N/A responses were reported for 19 tools by this group.

However, business services projects make greater use of stakeholder analysis to analyze their complex organizational environments. Cost-benefit analysis is used in project evaluation and selection, which is consistent with increased use of financial measures. The documentation of the project's mandate using the project charter is more important in this type of project. The team and team-building activities are also more important in business services projects.

The examination of the three types of projects for which the sample provides sufficient data has revealed significantly different patterns of practice in each. This is strong support for the idea that different types of projects are managed differently. The analysis presented here adds a great deal of empirically grounded detail to the comparison of project management practices across these three types of projects. As can be seen in Table 7, comparisons of project management practice across sectors are complex. It is easy to state that E&C projects are different from IT projects, and this is true. However, these two types of projects share significant characteristics that simplistic contrasts do not bring to light.

Conclusion

Project management has been recognized as a specialized field within the area of management. Identifying what defines project management in terms of practice is an important aspect of establishing the field. As project management matures, the field is moving beyond a uniform, generic description of practice, often by evoking differences among project types and contexts. The results presented here support both the image of project management as a field with relatively uniform generic practices and the existence of significant differences across project types and contexts.

This research has identified a common pattern of practice across the project management community, a commonality that spans differences in context and project characteristics or type. The most often and the least often used tools are almost invariably the same regardless of project characteristics and contexts. Project management practice, therefore, has a strong generic component.

The applicability of the generic project management toolbox to different types of projects, particularly IT projects, has been called into question in recent years. The results of this study tend to refute the claim that the project management toolbox is not applicable to IT projects. The respondents from this type of project reported the tools were relevant and showed high use levels. This is not to say that their importance and level of use are adequately represented in the PMBOK® Guide.

Significant variability in project management practice has also been identified. Large projects and projects in mature organizations make much greater use of traditional project management tools and techniques. The examination of project management practices across three types of projects (E&C, IT, and BuS) has revealed very different patterns of practice in each. However, no simple relationship was found among use levels in the three types of projects. Quite to the contrary, the comparisons across project types are multifaceted and complex. The differences in use levels of tools are indicative of important differences in practice. For example, the way project scope and requirements are managed varies considerably between E&C projects on the one hand and IT projects on the other. How requirements are established, at what stage in the project, and their stability over the life cycle are quite different in each of these two types of projects. (See Besner and Hobbs [2006] for an analysis of variation in practice across project phases.) Furthermore, the focus on the planning and control of cost in E&C projects is replaced by focus on resource allocation and schedule in IT projects. The management of business service projects is quite different from both of these. Here, the traditional project management tools for planning and control are used less often, and more emphasis is placed on the strategic front-end and on team building. Sorting what is similar and what is different requires an in-depth analysis. We should, therefore, be wary of summary claims of similarity and difference.

The investigation showed that project management tools are generally used less on small projects. There may be a need to develop new tools or adapt the old ones for use on smaller projects. Cost/benefit analysis is the only tool that was shown to be used more on internal projects, and no tool was shown to be used more on ill-defined projects. The toolbox of well-known project management techniques is clearly oriented toward large, well-defined projects with external clients. If the sample here is indicative of the present state of the field of project management, there are as many small, ill-defined and/or internal projects as there are large, well-defined projects with external clients. Project management practice needs to be better adapted to their specific needs.

The present study has made a distinction among eight different functionalities of project management software. It has shown that the use of the different functionalities varies enormously. It is, therefore, inappropriate to consider project management software as a single tool with homogeneous use. The decision to implement or support the use of project management software should take an approach that discriminates among these varied uses.

The results of this investigation support the general approach taken by PMI's Standards Department in producing a generic standard with extensions to cover what is specific to different contexts and different project types. However, these results also call into question some of the content of the PMBOK® Guide, which uses the criterion of “generally recognized” for inclusion, meaning “applicable to most projects most of the time, and that there is widespread consensus about their value and usefulness” (PMI, 2004, p. 3). The results of this survey show that several well-known tools receive less than “very limited use.” Given that the vast majority of respondents are PMP credential holders and that the low use levels were also found in mature organizations, these low use levels call into question the relevance of some of these tools and the place they occupy within the literature in general and the PMBOK® Guide in particular. Likewise, one might expect that tools with high use scores would have prominence within the PMBOK® Guide. Most of the highly used tools identified in this study do have a prominent position. The exceptions, however, merit further examination.

This paper has presented an empirical examination of project management practice. This type of research can provide a “reality check” for a field dominated by professional opinion, offering a means of validating or calling into question some “accepted truths.” It can also add rich detail to general ideas, such as the variation of project management practice by project type. The aim has been to contribute to a better understanding of the reality of project management practice and to its future development.

References

Besner, C., & Hobbs, J. B. (2006). The perceived value and potential contribution of project management practices to project success. Project Management Journal, 37(3), 37–49.

Blomquist, T., & Nilsson, A. (2006). Project as practice: Making project research matter. Proceedings of the VII IRNOP Research Conference, Xian, China (pp. 540–549).

Crawford, L. H. (2000). Profiling the competent project manager. Proceedings of PMI 1st Research Conference, Paris, France (pp. 3–15).

Crawford, L. H., Hobbs, J. B., & Turner, R. (2005). Project categorization systems: Aligning capability with strategy for better results. Newtown Square, PA: Project Management Institute.

Crawford, L. H., Hobbs, J. B., & Turner, R. (2006). Aligning capability with strategy: Categorizing projects to do the right projects and to do them right. Project Management Journal, 37(2), 38–51.

Fox, T. L., & Spence, J. W. (1998). Tools of the trade: A survey of project management tools. Project Management Journal, 28(3), 20–27.

Hamburger, D. (1992). Project kick-off: Getting the project off on the right foot. International Journal of Project Management, 10(2), 115–122.

Hargrave, B. L., & Singley, J. (1998). PMBOK: A guide for project management in the next century. Proceedings of the 29th Annual PMI Seminars and Symposiums, Long Beach, CA.

Koskinen, K. U., Pihlanto, P., & Vanharanta, H. (2003). Tacit knowledge acquisition and sharing in a project work context. International Journal of Project Management, 21, 281–290.

Loo, R. (2002). Working towards best practices in project management: A Canadian study. International Journal of Project Management, 20(2), 93–98.

McMahon, P., & Lane, J. D. (2001). Quality tools/techniques and the project manager. Proceedings of the 33rd Annual PMI Seminars and Symposiums, Nashville, TN.

Milosevic, D. Z., & Iewwongcharoen, B. (2004). Management tools and techniques: The contingency use and their impacts on project success. Proceedings of the 3rd PMI Research Conference, London, England.

Milosevic, D. Z., & Patanakul, P. (2005). Standardized project management may increase development projects success. International Journal of Project Management, 23, 181–192.

Morris, P. W. G., Crawford, L., Hodgson, D., Shepherd, M. M., & Thomas, J. (2006). Exploring the role of formal bodies of knowledge in defining a profession—The case of project management. International Journal of Project Management, 24, 710–721.

Payne, J. H., & Turner, J. R. (1999). Company-wide project management: The planning and control of programmes of projects of different types. International Journal of Project Management, 17, 55–60.

Project Management Institute (PMI). (1999). Project management software survey. Newtown Square, PA: Author.

Project Management Institute (PMI). (2002). Government extension to a guide to the project management body of knowledge (PMBOK® guide) (2000 ed.). Newtown Square, PA: Author.

Project Management Institute (PMI). (2003a). Construction extension to a guide to the project management body of knowledge (PMBOK® guide) (2000 ed.). Newtown Square, PA: Author.

Project Management Institute (PMI). (2003b). US DoD extension to a guide to the project management body of knowledge (PMBOK® guide) (2000 ed.). Fort Belvoir, VA: Defense Acquisition University Press.

Project Management Institute (PMI). (2004). A guide to the project management body of knowledge (PMBOK® guide) (3rd ed.). Newtown Square, PA: Author.

Raz, T., & Michael, E. (2001). Use and benefits of tools for project risk management. International Journal of Project Management, 19, 9–17.

Shenhar, A. J. (1998). From theory to practice: Toward a typology of project management styles. IEEE Transactions on Engineering Management, 45(1), 33–48.

Thamhain, H. J. (1998). Integrating project management tools with the project team. Proceedings of the 29th Annual PMI Seminars and Symposiums, Long Beach, CA.

White, D., & Fortune, J. (2002). Current practice in project management—An empirical study. International Journal of Project Management, 20(1), 1–11.

Wideman, M. (2003). Comprehensive glossary of project management terms. Retrieved February 1, 2003, from www.maxwideman.com/pmglossary/

Winch, G. M., & Kelsey, J. (2005). What do construction project planners do? International Journal of Project Management, 23, 141–149.

Wirth, I. (1992). Project-management education: Current issues and future trends. International Journal of Project Management, 10, 49–54.

Zeitoun, A. (2000). Raising the bar in project management awareness and application. Proceedings of the 31st Annual PMI Seminars and Symposiums, Houston, TX.

Claude Besner, PMP, holds a degree in architecture, an MBA, and a PhD in management. He is a professor in the Management and Technology Department at the University of Quebec at Montreal (UQAM). He is a past director of the UQAM Master's Program in Project Management; this program is accredited by PMI's Global Accreditation Center. He also has more than 15 years of experience as a consultant and project director. He is active internationally in the project management community and has presented papers at both research and professional conferences organized by project management organizations worldwide. He is responsible for research on the subjects of processes, practices, and new project management tools within the UQAM project management research chair.

Brian Hobbs, PMP, holds an MBA and PhD in industrial engineering and is a Project Management Professional (PMP) certified by the Project Management Institute (PMI). He has been a professor at the University of Quebec at Montreal (UQAM) in the Master's Program in Project Management for more than 20 years. He is very active internationally in both the project management professional and research communities. He served a 3-year term on PMI's Standards Members Advisory Group (MAG) ending in 2002 and joined the Research MAG in July 2006. He is a reviewer for both the Project Management Journal and the International Journal of Project Management. He has presented many papers at both research and professional conferences worldwide.

1. The large sample size in this study allows for statistical treatment of many relationships, which is a significant asset. However, care must be taken when interpreting the statistical results. In studies with large sample sizes, a relatively weak relationship can be statistically significant. For example, high-maturity organizations execute more large projects than small ones (the split is 60/40), while low-maturity organizations execute an equal number of large and small projects (50/50). A statistically significant relationship exists between project size and level of organizational maturity, larger projects being associated with more mature organizations. The statistic for this relation is p = 0.006, meaning that a relationship of this strength would be observed by chance only about 6 times in 1,000 if no real relationship existed. The relationship is, therefore, statistically highly significant. But as this example shows, some significant relations can be unimportant when considering an individual case, because the relation between maturity and size will not be present in every individual situation. Nevertheless, the large sample size facilitates the identification of important and strong tendencies and the analysis of more detailed questions.
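
To make the distinction between statistical significance and practical importance concrete, the following is a hedged numerical sketch. The group sizes (375 organizations per maturity level) are assumed here for illustration only; they reproduce the reported 60/40 versus 50/50 split and yield a p-value close to the reported 0.006, together with a small effect size (Cramér's V of about 0.10):

```python
# Illustrative sketch: statistical significance versus practical effect size.
# Group sizes (375 per maturity level) are assumed for illustration only;
# they reproduce the 60/40 vs. 50/50 split described in the footnote.
import numpy as np
from scipy.stats import chi2_contingency

#                 large  small
table = np.array([[225, 150],    # high-maturity organizations (60/40)
                  [187, 188]])   # low-maturity organizations (~50/50)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
cramers_v = np.sqrt(chi2 / n)    # for a 2x2 table, Cramér's V equals phi

print(f"p = {p:.3f}, Cramér's V = {cramers_v:.2f}")
# Prints roughly p = 0.005 and V = 0.10, close to the reported p = 0.006:
# a highly significant but weak association, as the footnote argues.
```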

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI.

Project Management Journal, Vol. 39, No. 1, 16–33
© 2008 by the Project Management Institute
