Profiling the competent project manager
A major concern of the field of project management and a recurring theme in the literature is that of project success. There are two major strands to this concern—how success is judged (success criteria), and the factors that contribute to the success of projects (success factors). Closely associated with this is concern for the competence of the project manager. On the one hand, the competence of the project manager is in itself a factor in successful delivery of projects, and on the other, the project manager needs to have competence in those areas that have the most impact on successful outcomes.
The importance of the project manager in the delivery of successful projects has generated a considerable amount of rhetoric and a smaller body of research-based literature dealing with the knowledge, skills and personal attributes required of an effective project manager. With a few notable exceptions, findings have been based on opinions, primarily of project managers.
At the same time, concern for the competence of project managers has fuelled interest in the development of standards and certification processes that can be used for assessment, for recognition and as a guide for development of project management competence. Standards include those relating primarily to what project managers are expected to know, such as A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (1996), the IPMA's Competence Baseline (1999), and the APMBoK (1996); and standards that address what project managers are expected to be able to do, such as the occupational or performance based competency standards of Australia and the United Kingdom. The process for development of these standards has primarily involved extensive consultation with industry and participation of experienced project personnel in identifying what they consider project managers need to know and to be able to do in order to be effective in delivering successful project outcomes. Some attempt has been made in the standards (ICB and APMBoK) to identify personal characteristics of effective project managers but this has played only a minor role, with the major attention being given to required knowledge and skills rather than personality characteristics and behaviors.
This paper presents a review and analysis of research-based literature concerning the criteria by which project success is judged; the factors that contribute to the success of projects; and the knowledge, skills and personal attributes of project managers that are expected to lead to achievement of successful project outcomes. Analysis of data on the project management practices and perceived performance ratings of over 350 project personnel from three countries is then presented. The analysis suggests that there is little direct relationship between perceived performance in the workplace and the level of project management knowledge and experience reported against either project management standards (the PMBOK® Guide and the Australian National Competency Standards for Project Management) or previous research findings.
Project Success Criteria
There is a considerable volume of literature dealing with project success, and it tends to fall into three categories: literature dealing primarily with the criteria by which project success is judged; literature concerned primarily with the factors contributing to the achievement of success; and literature that confuses the two.
Although not strongly supported by empirical research, there are many articles that address the issue of project success criteria, including a number of papers presented at the PMI® Seminar/Symposium held in Montreal, Canada in 1986, which focused on this theme.
These papers tend to agree on a number of issues: firstly, that project success is an important project management issue; secondly, that it is one of the most frequently discussed topics; and thirdly, that there is a lack of agreement concerning the criteria by which success is judged (Pinto & Slevin, 1988; Freeman & Beale, 1992; Shenhar, Levy et al., 1997; Baccarini, 1999). A review of the literature further reveals a high level of agreement with the definition provided by Baker, Murphy and Fisher (1988), that project success is a matter of perception and that a project will be most likely to be perceived as an “overall success” if:
“…the project meets the technical performance specifications and/or mission to be performed, and if there is a high level of satisfaction concerning the project outcome among key people on the project team, and key users or clientele of the project effort” (p. 902).
Exhibit 1. Project Success Factors Identified in the Literature—Ranked by Frequency of Mention
NB Not all 24 literature-derived factors are listed here.
Furthermore, there is general agreement that although schedule and budget performance alone are considered inadequate as measures of project success, they are still important components of the overall construct. Quality appears intertwined with issues of technical performance, specifications, and achievement of functional objectives and it is achievement against these criteria that will be most subject to variation in perception by multiple project stakeholders.
Project Success Factors
The work of Murphy, Baker, and Fisher (1974), using a sample of 650 completed aerospace, construction and other projects, with data provided primarily by project managers, remains the most extensive and authoritative research on the factors contributing to project success. Their work has been cited and used in the majority of subsequent research papers concerning project success. Ten factors were found to be strongly linearly related to both perceived success and perceived failure of projects, while twenty-three project management characteristics were identified as being necessary but not sufficient conditions for perceived success (Baker, Murphy et al., 1988).
Important work was conducted on project success factors in the 1980s, notably by Pinto and Slevin (1987, 1988) and Morris and Hough (1993). Both studies draw on the research of Murphy, Baker and Fisher (1974) and have been regularly cited in later work. While Morris and Hough (1993) drew primarily on literature and case study analysis of major projects, Pinto and Slevin (1987, 1988) based their findings on the opinions of a usable sample of 418 PMI members responding to questions asking them to rate the relevance to project implementation success of 10 critical success factors (Slevin & Pinto, 1986) and four additional external factors.
Further studies aimed at identifying factors contributing to the success and in some cases, the failure, of projects (Ashley, Lurie et al., 1987; Geddes, 1990; Jiang, Klein et al., 1996; Zimmerer & Yasin, 1998; Lechler, 1998; The Standish Group, 2000; Whittaker, 1999; Clarke, 1995, 1999) used methodologies similar to that of Pinto and Slevin, with findings based on ratings or in some cases rankings of success factors by project personnel, general managers or other professionals. Beale and Freeman (1991) identified 14 variables that affect project success from a review of 29 papers. Wateridge (1996) identified eight most often mentioned success factors from a review of literature reporting results of empirical research relating to IS/IT projects.
Using the 10 critical and 23 necessary success factors identified by Baker, Murphy, and Fisher (1988) as the starting point, the findings of the 12 other studies listed above were analyzed and compared. Similar factors were grouped and then ranked according to the number of times they were mentioned across the 13 studies. Factors receiving the least number of mentions were progressively grouped with the most directly related factor receiving a higher number of mentions, and the factors re-ranked. This procedure was conducted iteratively, resulting in the emergence of 24 success factors. Rankings were based on the number of mentions identified over all 13 studies, and calculated separately for those studies relating primarily to engineering and construction projects (n=7) vs. IS/IT projects (n=6) and for those studies conducted pre-1995 (n=6) and post-1995 (n=7). This was done to see whether there was any change in the results concerning the most mentioned project success factors across industries, and with the development and more widespread adoption of project management. 1995 was adopted as the break point as reports published prior to that date primarily related to studies conducted in the 1980s. The results of this analysis are shown in Exhibit 1.
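As a sketch of the tallying procedure just described, the following Python fragment groups synonymous factor labels and ranks factors by the number of studies mentioning them. The study names, factor labels and groupings are illustrative placeholders, not the actual data from the 13 studies.

```python
from collections import Counter

# Illustrative stand-ins for the factors mentioned by each study
# (the real analysis drew on the findings of 13 studies).
studies = {
    "study_a": ["planning", "monitoring", "client consultation"],
    "study_b": ["planning", "communication"],
    "study_c": ["monitoring", "planning", "top management support"],
}

# Synonymous labels grouped under a single factor, mirroring the
# iterative grouping step described in the text.
groupings = {"client consultation": "stakeholder management"}

def rank_by_mentions(studies, groupings):
    """Count how many studies mention each (grouped) factor, then
    rank factors by that frequency of mention."""
    counts = Counter()
    for factors in studies.values():
        # A study contributes at most one mention per factor.
        for factor in set(groupings.get(f, f) for f in factors):
            counts[factor] += 1
    return counts.most_common()

ranking = rank_by_mentions(studies, groupings)
# "planning" is mentioned by all three illustrative studies,
# so it heads the ranking.
```

In practice the grouping table would be revised iteratively, with the least-mentioned factors folded into their closest higher-ranked relatives and the counts recomputed, until a stable set of factors emerges.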
A strong and interesting result of the analysis was the importance of Planning and of Monitoring and Controlling at the integrative level, rather than at the detailed level of specialist scope, time, cost, risk and quality planning; Monitoring and Controlling of risk was the only specialist area mentioned within the top three ranking categories. Stakeholder Management (Other) encompasses stakeholder issues external to the parent and client organizations, including environmental and political issues, and it seems intuitively correct that this would rank highly for the success of Engineering and Construction projects. The post-1995 increase in mentions of Communication, Strategic Direction and Team Selection, and the decrease in importance of Technical Performance, are of interest and appear attributable, at least in part, to the application of project management beyond its strong Engineering and Construction origins.
Exhibit 2. Project Manager Competence Identified in the Literature—Ranked by Frequency of Mention
NB Not all 24 literature-derived constructs are listed here.
With the possible exception of Organizational Support, Organization Structure and Team Selection, the factors identified in Exhibit 1 call directly upon the competence of the project manager. Although Organizational Support is a factor that can be addressed by people other than the project manager, a competent project manager could be expected to understand that support of the organization is required to enhance the likelihood of project success and use interpersonal and other skills to achieve it. Similarly, the competent project manager can exert influence over the way in which the project team is structured and how it relates to the structure of the parent organization and others. Team Selection draws together factors relating to capability and experience of the project manager and team for the project and is therefore a factor that is directly concerned with project management competence.
This review of research-based literature concerning project success factors therefore clearly demonstrates agreement that the competence, or knowledge, skills and attributes of the project manager, are critical to project success.
Project Manager Competence
Interest in the role of the project manager and aspects of competence in that role can be traced back to an article by Gaddis in the Harvard Business Review of 1959 (Gaddis, 1959) and another Harvard Business Review article, by Lawrence and Lorsch, in 1967 on the “New management job: the integrator.” Since then, much has been written in project management texts (Kerzner, 1998; Meredith & Mantel, 1995; Dinsmore, 1993; Turner, 1993; Pinto, 1998), magazine articles (Dewhirst, 1996) and journal articles (Einsiedel, 1987) about what it takes to be an effective project manager, culminating with Frame's work on Project Management Competence published in 1999.
The primary research-based reports on the subject began to appear in the early to mid-1970s, based on the investigations of Thamhain, Gemmill and Wilemon into the skills and performance of project managers (Cleland & King, 1988; Gemmill, 1974; Thamhain & Gemmill, 1974; Thamhain & Wilemon, 1977; Thamhain & Wilemon, 1978). This research, plus work by Posner (1987) in the 1980s, Gadeken in the early 1990s (1990, 1991), Ford and McLaughlin (1992) and, more recently, Zimmerer and Yasin (1998), together with a major literature-review-based study by Pettersen (1991), constitutes the primary research contribution to understanding of project management competence.
As with studies concerning project success factors, research-based literature on aspects of project management competence draws primarily upon the opinions of project managers and others concerning the knowledge, skills and personal attributes required by effective project personnel (Posner, 1987; Thamhain, 1991; Ford & McLaughlin, 1992; Wateridge, 1996; Zimmerer & Yasin, 1998).
Gadeken's work (Gadeken & Cullen, 1990; Gadeken, 1991) is based on critical incident interviews with 60 U.S. and 15 U.K. project managers from Army, Navy and Air Force acquisition commands. The findings relate solely to personal attributes, identifying six behavioral competencies that distinguished outstanding project managers from their peers; five demonstrated at a slightly lower level of significance; and seven that were demonstrated but with no significant differences between outstanding and average performers. This remains the most important work on behavioral competencies of project managers, but the results should be treated with some caution due to the focus on acquisition projects within the armed forces.
Pettersen (1991) conducted a major literature review concentrating on American texts to develop a list of predictors, defined in task-related terms, intended for use in selection of project managers.
Morris (2000) reports on the work of the Centre for Research in the Management of Projects at UMIST, on behalf of the Association for Project Management and a number of leading U.K. companies, which focuses on the knowledge required by project managers. Findings are based on interviews and data collection in over 117 companies, seeking their opinion as to the topics they considered project management professionals should know and understand in order to be considered competent.
The same process as outlined for analysis of the research-based literature concerning project success factors was applied to the eight studies mentioned above. Reflecting the strong links between project success factors and project manager competence, the 10 critical and 23 necessary success factors identified by Baker, Murphy and Fisher (1988) were again used as the starting point. The same 24 categories or concepts that emerged from the analysis of success factors emerged from the analysis of findings concerning the knowledge, skills and personal attributes identified as important to effective project management performance. Only one change was made: Organizational Support was renamed Stakeholder Management (Parent Organization) in the list of project manager competence factors.
The project management competence factors were ranked according to the number of mentions identified over the eight selected studies, and separately for those studies conducted pre-1995 (n=4) and post-1995 (n=4). A breakdown for Engineering and Construction vs. IS/IT is not provided as there were only two studies that related directly to IS/IT. 1995 was adopted as a break point as reports published prior to that date primarily related to studies conducted in the 1980s or very early 1990s. The results of this analysis are shown in Exhibit 2.
It is interesting to note that Leadership, a factor that relates almost exclusively to personality characteristics or personal attributes, appears consistently in the highest ranking category amongst Project Manager Competence factors, whereas it appeared no higher than the second ranking category for Project Success Factors. Similarly, Team Development appears consistently in the first ranking category for Project Manager Competence factors, but fell as far as fourth ranking in one case for Project Success Factors. Communication and Technical Performance are consistently stronger for Project Manager Competence than for Project Success factors. Planning (Integrative) is clearly a strong factor, appearing consistently in the first ranking for both Project Success Factors and Project Manager Competence factors. The increased ranking of Monitoring and Controlling (Integrative) in the post-1995 studies of Project Success Factors is supported by the post-1995 studies of Project Manager Competence factors, indicating an increased concern for control.
Project Management Standards
Concern for the competence of project managers in successfully delivering projects is evidenced not only through research into project success and various aspects of project management competence, but also through the development of standards that can be used to guide the development and assessment of project personnel.
Exhibit 3. Units in the Australian National Competency Standards for Project Management
NB Level 4 of the standards does not include the unit relating to integrative processes.
Standards relating to aspects of project management competence fall into two main areas—those relating to what project managers are expected to know, represented by project management body of knowledge guides; and those relating to what project managers are expected to be able to do, which primarily take the form of performance-based or occupational competency standards.
There are three widely accepted Project Management Knowledge Standards:
• PMBOK® Guide—Project Management Institute (1996)
• ICB: IPMA Competence Baseline—International Project Management Association (IPMA, 1999)
• APM BoK / CRMP BoK—Association for Project Management (U.K.) (APM, 1996; Morris, 2000)
Of these, the PMBOK® Guide is the most widely recognized and accepted, with nearly 300,000 copies distributed worldwide. It was approved as an American National Standard (ANSI/PMI 99-001-1999) on 21st September 1999. It defines nine knowledge areas within project management, and claims to identify and describe “that subset of the PMBOK® that is generally accepted” (PMI, 1996, p. 3), providing a “consistent structure” for the professional development programs of the Project Management Institute including (p. 4):
• Certification of Project Management Professionals (PMPs)
• Accreditation of degree-granting educational programs in project management.
The PMBOK® Guide, which in its current form was published in 1996, was developed through a process of consultation, “written and reviewed by a global network of project management practitioners, working as volunteers” (PMI, 1999).
Performance-based competency standards describe what people can be expected to do in their working roles, as well as the knowledge and understanding of their occupation needed to underpin these roles at a specific level of competence.
The first generic performance-based competency standards for project management were the Australian National Competency Standards for Project Management, which were developed through the efforts of the Australian Institute of Project Management and endorsed by the Australian Government on 1st July 1996. In the United Kingdom, the Occupational Standards Council for Engineering produced standards for Project Controls (OSCEng, 1996) which were endorsed in December, 1996 and for Project Management (OSCEng, 1997) which were endorsed in early 1997. The Construction Industry Standing Conference (CISC), the Management Charter Initiative (MCI) and what was then called the Engineering Services Standing Conference (ESSC), now the Occupational Standards Council for Engineering (OSCEng), developed Level 5 NVQ/SVQ competency standards for Construction Project Management. A section of the Management Charter Initiative Management Standards, titled Manage Projects (MCI, 1997) provides a further set of competency standards for project management but in this case, within the general management framework.
Of these standards, the Australian National Competency Standards for Project Management and the OSCEng standards for both Project Management and Project Controls have attracted the most interest. The Australian standards, which follow the same structure as the PMBOK® Guide and use the Guide as a knowledge base, have attracted the most interest internationally.
The Australian Standards were developed over a three-year period, commencing in 1993, culminating in the endorsement of the standards by the Australian Government in 1996. Development was carried out by a consultant working under the guidance of a Steering Committee and Reference Group representing over 50 Australian organizations. The standards development process is well documented (Gonczi, Hager et al., 1990; Heywood, Gonczi et al., 1992) and requires the examination of existing information about the occupation and analysis of the purpose and functions of the profession and the roles and activities of its members (Heywood, Gonczi et al., 1992, p. 46) in order to derive the Units and Elements of Competency that provide the structure for the standards. There are nine Units in the standards, described at Levels 4, 5 and 6 as shown in Exhibit 3.
As outlined above, both the research based literature on project management competence, and standards that have been developed to define aspects of project management competence, have been developed primarily from the collective opinion of project management practitioners and others as to what project personnel need to know and what they need to be able to do in order to be considered competent.
The assumption behind the development and use of project management standards is that the standards describe the requirement for effective performance of project management in the workplace and that those who meet the standards will therefore perform, or be perceived to perform, more effectively than those whose performance does not satisfy the standards. To date, no research has been conducted to validate this assumed positive relationship between project management competence as described in the literature and assessed against standards, and perceptions of effective performance in the workplace.
Using two recognized project management standards:
• PMBOK® Guide (PMI, 1996) and
• The Australian National Competency Standards for Project Management (AIPM, 1996)
data were collected to explore the relationship between performance against project management standards and perceived performance in the workplace.
Five instruments were used in data collection. One instrument was used to gather general demographic information about respondents and their project management role. Two instruments were used to collect information on project management knowledge and practices of participants:
1. A knowledge test, using multiple-choice questions drawn from sample PMP® exams, with five questions for each of the nine PMBOK® Guide knowledge areas
2. A self-assessment against the nine units of the Australian National Competency Standards for Project Management, with responses on a five-point scale from 1 (I have never done or participated in doing this) to 5 (I have done or managed this across multiple projects or subprojects).
Two instruments were used to gather information on perceived effectiveness of project management performance:
1. A self-rating questionnaire
2. A supervisor-rating questionnaire
Based on review of the project success criteria literature, the self and supervisor rating questionnaires had two sections. The first was intended to address the issue of perceived success according to differing stakeholder perspectives, seeking ratings of the participating project personnel on a 1 to 5 scale of perceived value to clients, value to their organization, effectiveness of relationships in achieving project goals, and ability to inspire and encourage the performance of others. The second section comprised five questions on project completion, requiring ratings on a 1 (Always) to 5 (Never) scale according to perceptions of completion of projects on time, on budget, achieving goals, satisfying end users and using recognized project management methodologies.
A deficiency of the perceived performance rating instruments was that information was only sought from the participant and their supervisor (or suitable equivalent). Ideally, feedback would also have been sought from other stakeholders, such as clients, but this was beyond the achievable scope of the study. Another problem encountered was in securing completed rating forms from supervisors. Consequently, although the supervisor rating appears to be a more reliable indicator than the self-rating form, responses are not available for all participants. Difficulties in obtaining supervisor ratings highlighted an interesting dimension of the project manager role: in a number of cases, organizations claimed that there was no one in a position to rate the performance of the participating project personnel.
Exhibit 4. Demographics of Sample—Age vs. Gender
Exhibit 5. Industry Sector of Organization by Region
It must also be noted that the standards do not directly address two important factors identified in the literature review, namely Leadership, drawing primarily on personality characteristics, and Technical Performance, which tends to be application area specific and is therefore not directly addressed in generic standards.
The sample for the study was obtained by asking organizations in Australia, the United Kingdom and the United States to identify between five and 20 of their project personnel to participate in the study. Data collection was conducted on the premises of participating organizations.
The demographics of the study group are summarized in the table of age vs. gender (see Exhibit 4). Reluctance on the part of respondents to provide either age or gender or both has reduced the usefulness of these variables for analysis.
The industry sector of organization and regional location of the study group is shown in Exhibit 5. As the research was funded by an Australian Research Council grant, the Australian sample is the largest and best distributed. Engineering and Construction organizations in the United States were reluctant to participate in the study, largely due to the time commitment required.
Self and Supervisor Ratings—Perceived Performance
Competence is a socially constructed concept (Burgoyne, 1993) and studies that have endeavored to identify high-performance competencies (Boyatzis, 1982; Schroder, 1989; Gadeken & Cullen, 1990; Cockerill, 1989) have encountered difficulties in identifying the best or most effective performers.
As a measure of job performance, Boyatzis (1982), in his study of general managers, suggests three types of performance or criterion measure:
• Supervisory nominations or ratings
• Peer nominations or ratings
• Work-output measures.
Supervisory ratings are, of course, subjective, and it is recognized that subjective measures are excessively prone to contamination, especially by supervisor bias (Campbell, 1990). However, Nathan and Alexander (1988) conclude that objective measures are not more predictive than subjective measures. The most important issues appear to be awareness of the potential construct validity threats of any measure used (Bommer, Johnson et al., 1995) and recognition that performance is not a single construct (Campbell, 1990).
In the field of project management competence and effectiveness, some studies have used a supervisor's subjective rating of the degree of effectiveness of the project participant under examination. Thamhain and Wilemon (1977) asked superiors to rate project managers relative to their peers on overall project performance on a 0-100 percent scale.
Gadeken (1991), having experienced the problem on a previous occasion in his studies of Defense Systems procurement program managers (Gadeken & Cullen, 1990), explained the dilemma of identifying project management effectiveness:
The first and most difficult step in the job competency assessment process is to identify truly outstanding performers to study. For project managers, this is problematic because there are no clearly objective performance measures that can be applied. Overall assessment of project success and hence project manager success is difficult because of the complexity and extended time duration of most projects. Also many projects are significantly affected by external funding and political factors. Since projects usually involve several project managers over their duration, the current project manager may benefit or suffer from the efforts of his predecessors. Consequently, the only reasonable and acceptable approach for this study was to ask for nominations from senior officials (Gadeken, 1991, p. 7).
For the purposes of this study, using the results of the self (n=346) and supervisor (n=206) rating instruments, scores were constructed that distinguished individuals who consistently rated themselves above the middle of the 1 to 5 rating scale and those whose supervisors consistently rated them in the upper range. Using this scoring, and relating self and supervisor ratings (n=206): 5.83% of the sample were rated as having low perceived performance by both themselves and their supervisors; the majority, 54.85%, rated their own performance as high, supported by supervisor ratings; 29.13% rated themselves high but were considered low performers by their supervisors; and 10.19% were rated more highly by their supervisors than they rated themselves.
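The cross-tabulation reported above can be checked arithmetically. The cell counts below are back-calculated from the published percentages (n = 206) and are assumptions consistent with them, not figures taken from the study data set.

```python
# Self/supervisor rating cross-tabulation; cell counts are
# back-calculated from the published percentages (n = 206).
n = 206
counts = {
    ("low", "low"): 12,    # low by both self and supervisor
    ("high", "high"): 113, # high by both
    ("high", "low"): 60,   # self-rated high, supervisor-rated low
    ("low", "high"): 21,   # supervisor rated higher than self
}

assert sum(counts.values()) == n  # the four cells partition the sample

percentages = {cell: round(100 * c / n, 2) for cell, c in counts.items()}
# → 5.83, 54.85, 29.13 and 10.19, matching the reported figures
```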
Project Management Knowledge as a Predictor of Perceived Effective Performance
Investigation of the results of the multiple-choice knowledge test, using the nine knowledge areas identified in the PMBOK® Guide and related to perceived performance as outlined above, provided no real evidence that greater knowledge in these areas leads to greater perceived effectiveness of performance. The study team plans further investigation of this relationship. Discussion in this paper will focus on practices.
Project Management Practices as a Predictor of Perceived Effective Performance
Three levels of analysis were applied to the results collected in the self-assessment against the Australian National Competency Standards for Project Management.
• Summary scores were calculated for each of the nine Units in the Australian National Competency Standards, identifying patterns of use of the practices and then exploring their predictive value relative to perceived effective performance.
• Summary scores were calculated for literature-derived constructs.
• Factor analysis was conducted using the 94 items at Performance Criteria level.
Analysis at Unit Level
Each of the nine Units in the Australian National Competency Standards for Project Management is made up of a number of elements and performance criteria—Integration (11), Scope (8), Time (9), Cost (9), Quality (11), Human Resource Management (12), Communications (11), Risk (9) and Procurement (14). Data were collected by asking each respondent to indicate, on a 5-point scale, their level of experience in each of the 94 Performance Criteria. The nine Units have good reliability, with Cronbach's alpha ranging from 0.8384 for Integration to 0.9494 for Procurement. Investigation of patterns of use shows that Integration, Scope, Time, Cost and Communications practices are fairly evenly distributed across the sample.
Quality, Human Resource Management and Risk practices are not evenly distributed; fewer respondents report using practices in these Units than not. Procurement is particularly interesting: some use procurement practices extensively, some do not appear to become involved in procurement at all, and others fall in the middle range.
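Cronbach's alpha, the reliability measure quoted for each Unit, can be computed directly from item-level responses. The sketch below uses made-up toy responses rather than the study data; in practice a statistics package would normally be used.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of equal-length lists, one per item (e.g. one per
    performance criterion), each holding respondents' 1-5 ratings.
    """
    def var(xs):
        # Population variance, used consistently for items and totals.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # each respondent's total
    item_var = sum(var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / var(totals))

# Toy data: three items answered by four respondents.
responses = [
    [1, 2, 4, 5],
    [1, 3, 4, 5],
    [2, 2, 5, 5],
]
alpha = cronbach_alpha(responses)
# Highly consistent toy items give a high alpha (between 0 and 1).
```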
Exploration indicated that use of practices grouped in these nine Units does not provide a good predictor for distinguishing between those who will be rated as performing effectively or less effectively. A logistic regression model was fitted to investigate further but there was no significant prediction found from any variable.
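A logistic regression of the kind fitted here, predicting a high/low performance rating from practice scores, can be sketched without a statistics library. The data below are placeholders; a package such as statsmodels or R would also report the significance tests referred to above, and this minimal gradient-descent version only illustrates the model form.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression y ~ X by plain gradient descent.

    X: list of feature rows (e.g. the nine Unit summary scores);
    y: 0/1 labels (low/high perceived performance)."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder data: one feature, clearly separable, for illustration only.
X = [[0.0], [0.2], [0.4], [1.6], [1.8], [2.0]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
```

In the study itself no predictor reached significance, which in a model of this form would correspond to coefficient estimates indistinguishable from zero.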
Quality is the only one of the nine Units for which there is an apparent distinction between the perceived high and low performers. Those perceived as less effective performers have slightly more experience in the use of Quality practices. The Quality results also provide an opportunity to demonstrate an overall tendency for supervisor scores to differ between countries. The distinction is particularly marked between the U.K. and the rest, interpretable as a cultural difference in which being understated and ‘objective’ is more highly valued in the U.K. The boxplot below (see Exhibit 6) displays this difference, which is apparent in two ways. Firstly, the U.K. supervisors split the group almost equally into good and poor performers, whereas in the other countries there are many fewer assessed poor performers. Secondly, the association of higher Quality scores with assessment as a poor performer is almost entirely a result of the U.K. data.
Exhibit 6. Quality Scores by Region
Analysis Based on Literature-Derived Constructs
Each of the 94 performance criteria against which data were collected was grouped according to the literature-derived constructs presented in Exhibit 2. A reliability analysis was conducted, identifying those constructs with a reasonably high Cronbach's alpha (above 0.80). Only a few of the more interesting constructs are discussed here.
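The grouping step itself is simple: each construct names a set of performance-criteria items, and a respondent's construct score is the mean of those items. A sketch with hypothetical item indices and synthetic ratings (the real mapping is that of Exhibit 2):

```python
import numpy as np

# Hypothetical mapping from construct names to performance-criteria columns;
# the indices are illustrative, not the standards' actual numbering.
constructs = {
    "Planning (Integrative)": [0, 3, 5],
    "Team Development": [1, 2, 4],
}

rng = np.random.default_rng(3)
responses = rng.integers(1, 6, size=(40, 6)).astype(float)  # 5-point scale

# Construct summary score = mean of the member items for each respondent
summary = {name: responses[:, cols].mean(axis=1)
           for name, cols in constructs.items()}
```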
Planning (Integrative) (alpha=0.8319)
Use of this group of practices, considered to be in the first rank of importance according to the literature analysis (Exhibit 2), shows a tendency for self-assessed high performers to have a higher rating on the Planning (Integrative) scale. However, there is a reverse tendency for U.K. supervisors to rate individuals worse if they have a high rating on this scale.
Monitoring and Controlling (Integrative) (alpha=0.9005)
This variable represents a consistent construct, based on 11 items. While most people engage in these practices, the variable does not appear to make a consistent distinction between those perceived as good or poor performers.
Monitoring and Controlling (Risk) (alpha=0.9024)
This is a very reliable construct, even though it is based on only four items. However, the sample is divided between those who use risk management practices and those who don't. There is a tendency for higher scores on this item to be associated with higher self-rating of success. On the other hand, it does not appear to be associated with higher supervisor ratings.
Team Development (alpha=0.8974)
There is a very small group scoring highly on this scale, and quite a large group scoring well below 2.5, indicating that many people are not using these practices. This is interesting given the high ranking this construct received in the literature. Again, cultural differences emerge, with a tendency for those in the United States who self-assess poorly to be heavier users of Team Development practices. This invites speculation that those who actively engage in Team Development practices are more willing to assess their own performance critically. In the United States, higher supervisor scores on this item are associated with higher use of practices, but this is not the case in Australia and the United Kingdom.
Lessons Learned (alpha=0.9351)
Despite considerable rhetoric concerning the importance of capturing, sharing and utilizing lessons learned, this construct does not appear in the results of the literature review. This may be a result of the focus, in research and literature, on management of single projects. More recently, interest has widened to organizational project management and multi-project contexts (Engwall & Kallqvist, 1999; Turner & Keegan, 2000), and further research in this area may justify inclusion of lessons learned as a required area of project management competence. As it stands, a number of performance criteria in the Australian National Competency Standards for Project Management relate to lessons learned and could not be grouped with variables identified from the research-based literature. It is interesting, however, that although this is a very reliable construct, based on 14 items, most people score low on this scale. In other words, most project management personnel in the sample do not use lessons learned practices. Furthermore, this construct is not consistently associated with higher self-rating, although there is some evidence that it is more highly valued in Australia and the United Kingdom than in the United States.
Literature-Derived Constructs as Predictors of Perceived Effective Performance
Logistic regression models were fitted using combinations of the variables derived from the literature review. The variables involved in the best prediction were Monitoring and Controlling (Integrative) and Stakeholder Management (Parent Organization) (alpha=0.7258). A higher score on Monitoring and Controlling (Integrative) increases the odds of self-assessment as an effective performer, while a higher score on Stakeholder Management (Parent Organization) decreases them.
Using supervisor ratings, the literature-derived ratings involved in the best predictive model were Communication and Quality. The effect of a higher score on Communication is to slightly increase the odds in favor of perceived effective performance, while the effect of a higher score on Quality is to reduce the odds of being perceived as an effective performer.
The difference between regions was also highly significant in the model using supervisor ratings. Respondents from the United States were more than twice as likely to be assessed as good by their supervisors as those from the United Kingdom. There was much less difference between Australia and the United Kingdom.
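The "more than twice as likely" statement is an odds-ratio reading of the Region coefficient. The paper does not report the coefficient itself, so the figures below are assumptions chosen for illustration: a coefficient of 0.85 for a US indicator yields an odds ratio of about 2.3, and the baseline U.K. probability of 0.40 is likewise invented.

```python
import math

beta_region_us = 0.85                  # assumed coefficient for the US indicator
odds_ratio = math.exp(beta_region_us)  # ≈ 2.34: "over twice as likely"

p_uk = 0.40                            # assumed UK probability of a "good" rating
odds_uk = p_uk / (1 - p_uk)
odds_us = odds_uk * odds_ratio         # odds scale multiplicatively
p_us = odds_us / (1 + odds_us)         # back to a probability (≈ 0.61)
```

Note that "twice the odds" is not "twice the probability": under these assumed numbers, the US probability rises from 0.40 to about 0.61, not 0.80.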
Factor Analysis Using the 94 Items at Performance Criteria Level
Factor analysis extracted 15 factors (with an eigenvalue greater than 1) from the 94 Performance Criteria in the Australian National Competency Standards for Project Management. Of the 10 that were easily interpretable, eight essentially replicated the basic structure of the competency standards; the remaining two differed, one being equivalent to Lessons Learned. In summary:
Factor 1: All Performance Criteria relating to Cost, grouped with those relating to Procurement.
Factor 2: All items in Human Resource Management, plus Item 9.1.1, which relates to identification of procurement requirements in association with stakeholders and higher project authorities.
Factor 3: All items relating to Lessons Learned.
Factor 4: A variety of Performance Criteria, strongly suggestive of the Planning (Integrative) construct.
Factor 5: All Performance Criteria in the Unit: Risk.
Factor 6: All items in the Unit: Quality.
Factor 7: All items in the Unit: Time.
Factor 8: All items in the Unit: Cost.
Factor 9: Most items in the Unit: Communications, except 7.3.3, which deals with customer relationships.
Factor 10: Those items of the Unit: Scope not included under Factor 4. This factor leans toward Monitoring and Controlling.
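The extraction method is not detailed in the paper; the eigenvalue-greater-than-1 rule quoted above is the Kaiser criterion applied to the item correlation matrix. It can be sketched on synthetic data (two built-in latent factors behind eight hypothetical items) as:

```python
import numpy as np

# Synthetic responses: 50 respondents x 8 items, where items 0-3 and 4-7
# each load on one of two underlying factors, plus noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=(50, 2))
loadings = np.zeros((2, 8))
loadings[0, :4] = 1.0
loadings[1, 4:] = 1.0
data = latent @ loadings + 0.5 * rng.normal(size=(50, 8))

corr = np.corrcoef(data, rowvar=False)    # 8 x 8 item correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, largest first
n_factors = int((eigvals > 1.0).sum())    # Kaiser criterion retains 2 here
```

Interpreting each retained factor then comes down to inspecting which items load on it—the step reported above as Factors 1 through 10.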
The competence of project managers is clearly a vital factor in the success of projects, yet it remains a quality that is difficult to quantify. The majority of research and standards development conducted to date relating to project management competence is based on the opinions of project management practitioners and others. The research reported here has attempted to approach the profiling of the competent project manager from a potentially more objective viewpoint, by gathering data on project management knowledge and practices, using established project management standards, and then relating this to separately derived ratings of perceived workplace performance. Analysis suggests, however, that there is little direct relationship between perceived workplace performance and performance against either project management standards or previous research findings. An interesting area for further research is the effect of cultural differences and other contextual issues on perceptions of performance. Personality characteristics and application area specific technical issues are not addressed by the standards, suggesting a need for further investigation in these areas.
APM (U.K.). (1996). Body of Knowledge. Version 3.0 Association of Project Managers.
Ashley, David B., Lurie, Clive S., & Jaselskis, Edward J. (1987). Determinants of construction project success. Project Management Journal XVIII (2), 69–79.
Baccarini, David. (1999). The logical framework method for defining project success. Project Management Journal 30 (4), 25–32.
Baker, Bruce N., Murphy, David C., & Fisher, Dalmar. (1988). Factors affecting project success. In Cleland, David I. & King, William R. (Eds.), Project management handbook, second edition (pp. 902–919). New York: Van Nostrand Reinhold.
Bommer, William H., Johnson, Jonathan L., Rich, Gregory A., Podsakoff, Philip M., & MacKenzie, Scott B. (1995). On the interchangeability of objective and subjective measures of employee performance: A meta-analysis. Personnel Psychology 48 (3), 587–606.
Boyatzis, Richard. E. (1982). The Competent Manager: A model for effective performance. New York: John Wiley & Sons.
Burgoyne, John G. (1993). The competence movement: Issues, stakeholders and prospects. Personnel Review, 22 (6), 6–13.
Campbell, J.P. (1990). Modeling the performance prediction problem in industrial and organizational psychology, 2nd ed. (p. 1). Palo Alto, CA: Consulting Psychologists Press.
Clarke, Angela. (1995). The key success factors in project management. Proceedings of a Teaching Company Seminar. London: The Teaching Company.
Clarke, Angela. (1999). A practical use of key success factors to improve the effectiveness of project management. International Journal of Project Management 17 (3), 139–145.
Cleland, David I., & King, William R. (1988). Project management handbook, second ed. New York: Van Nostrand Reinhold.
Cockerill, Tony. (1989). Managerial Competencies as a Determinant of Organizational Performance. London University.
Dewhirst, Dudley. (1996, Nov.). Needed: A new model for training project managers. PM Network, 25–27.
Dinsmore, Paul C. (1993). The AMA handbook of project management. New York: AMACOM.
Einsiedel, Albert A. (1987). Profile of Effective Project Managers. Project Management Journal XVII (5), 51–56.
Engwall, Mats, & Kallqvist, Anna S. (1999). The Multiproject Matrix: A Neglected Phenomenon. In Crawford, Lynn & Clarke, Cecelia F. (Eds.), IRNOPIV Conference—Paradoxes of Project Collaboration in the Global Economy: Interdependence, Complexity and Ambiguity, Sydney, Australia: University of Technology, Sydney.
Ford, Robert C., & McLaughlin, Frank S. (1992). 10 questions and answers on managing MIS projects. Project Management Journal, XXIII (3), 21–28.
Freeman, Mark, & Beale, Peter. (1992). Measuring project success. Project Management Journal, XXIII (1), 8–17.
Gaddis, P.O. (1959). The project manager. Harvard Business Review 37 (3), 89–97.
Gadeken, Owen. C., & Cullen, B.J. (1990). A Competency Model of Project Managers in the DOD Acquisition Process. Defense Systems Management College.
Gadeken, Owen. C. (1991). Competencies of Project Managers in the MOD Procurement Executive. Royal Military College of Science.
Geddes, M. (1990). Project leadership and the involvement of users in IT projects. International Journal of Project Management 8 (4), 214–216.
Gemmill, Gary R. (1974). The effectiveness of different power styles of project managers in gaining project support. Project Management Quarterly 5 (1).
Gonczi, A, Hager, P., & Oliver, L. (Eds.) (1990). Establishing competency standards in the professions. Canberra: Australian Government Publishing Service.
Heywood, L., Gonczi, A., & Hager, P. (1992). A Guide to Development of Competency Standards for Professions. Canberra: Australian Government Publishing Service.
IPMA. (1999). ICB: IPMA Competence Baseline. Caupin, G, Knopfel, H, Morris, P, Motzel, E. & Pannenbacker, O., (Eds.), Germany: International Project Management Association.
Jiang, James J., Klein, Gary, & Balloun, Joseph. (1996). Ranking of system implementation success factors. Project Management Journal XXVII (4), 49–53.
Kerzner, Harold. (1998). Project management: a systems approach to planning, scheduling and controlling, Second ed. USA: Van Nostrand Reinhold.
Lechler, Thomas. (1998). When it comes to project management, it's the people that matter: an empirical analysis of project management in Germany. In Hartman, Francis, Jergeas, George & Thomas, Janice (Eds.), IRNOP III—The nature and role of projects in the next 20 years: Research issues and problems (pp. 205–215). Calgary: Project Management Specialization, University of Calgary.
MCI. (1997). Manage Projects: Management Standards—Key Role G, London: Management Charter Initiative.
Meredith, Jack R., & Mantel, Samuel J. Jr. (1995). Project management: a managerial approach, Third ed. New York: John Wiley & Sons, Inc.
Morris, Peter W.G. (2000). Benchmarking project management bodies of knowledge. In Crawford, Lynn & Clarke, Cecelia F. (Eds.), IRNOP IV Conference—Paradoxes of Project Collaboration in the Global Economy: Interdependence, Complexity and Ambiguity, Sydney, Australia: University of Technology, Sydney.
Murphy, David C., Baker, Bruce N., & Fisher, Dalmar. (1974). Determinants of project success, Boston: Boston College, National Aeronautics and Space Administration.
Nathan, B.R., & Alexander, R.A. (1988). A comparison of criteria for test validation: A meta-analytic investigation. Personnel Psychology 41, 517–535.
OSCEng. (1996). OSCEng Level 4. NVQ/SVQ in Project Controls. England: Occupational Standards Council for Engineering.
OSCEng. (1997). OSCEng Levels 4 and 5: NVQ/SVQ in (generic) project management. Occupational Standards Council for Engineering.
Pettersen, Normand. (1991). Selecting project managers: An integrated list of predictors. Project Management Journal XXII (2), 21–25.
Pinto, Jeffrey K., & Slevin, Denis P. (1987). Critical factors in successful project implementation. IEEE Transactions on Engineering Management EM-34 (1), 22–27.
Pinto, Jeffrey K., & Slevin, Denis P. (1988). Project Success: Definitions and Measurement Techniques. Project Management Journal XIX (1), 67–72.
Pinto, Jeffrey K. (1998). The Project Management Institute Project Management Handbook. San Francisco: Jossey-Bass Publishers.
Posner, Barry Z. (1987, March). What it takes to be a good project manager. Project Management Journal, 51–54.
PMI. (1996). A guide to the project management body of knowledge. Upper Darby, PA: Project Management Institute. Approved by ANSI as an American National Standard [Web Page]. Available at http://www.pmi.org/news/releases/pmbok.htm (Accessed 13th April 2000).
Schroder, Harold M. (1989). Managerial competence: The key to excellence. Iowa: Kendall Hunt.
Shenhar, Aaron J., Levy, Ofer, & Dvir, Dov. (1997). Mapping the dimensions of project success. Project Management Journal 28 (2), 5–13.
Slevin, Denis P., & Pinto, Jeffrey K. (1986). The project implementation profile: New tool for project managers. Project Management Journal XVII (4), 57–70.
Thamhain, Hans J., & Wilemon, David. (1977). Leadership effectiveness in program management. Project Management Quarterly (June), 25–31.
Thamhain, Hans J. (1991). Developing project management skills. Project Management Journal XXII (3), 39–44.
Thamhain, Hans J., & Gemmill, Gary R. (1974). Influence styles of project managers: Some project performance correlates. Academy of Management Journal 17 (2), 216–224.
Thamhain, Hans J. & Wilemon, David L. (1978). Skill requirements of engineering program managers. Convention Record, 26th Joint Engineering Management Conference.
The Standish Group. (2000). Chaos [Web Page]. Available at http://www.standishgroup.com/chaos.html. (Accessed 19th February 2000).
Turner, J.R. (1993). The Handbook of Project-Based Management. Maidenhead: McGraw-Hill.
Turner, J.R., & Keegan, Anne. (2000). The management of operations in the project-based organization. Journal of Change Management.
Wateridge, J.F. (1996). Delivering successful IS/IT projects: Eight key elements from success criteria to review via appropriate management, methodologies and teams. Henley Management College, Brunel University. Henley Management College Library.
Whittaker, Brenda. (1999). What went wrong? Unsuccessful information technology projects. Information Management & Computer Security 7(1), 23–29.
Zimmerer, Thomas W., & Yasin, Mahmoud M. (1998). A leadership profile of American project managers. Project Management Journal 29(1), 31–38.
Proceedings of PMI Research Conference 2000