Project Management Institute

Success

getting traction in a turbulent business climate

Connie L. Delisle, Ph.D., M.Sc., B.A., B.A. (Ed.), Centre for Innovative Management, Athabasca University
Janice L. Thomas, Ph.D., MBA, B.Sc., Centre for Innovative Management, Athabasca University; Project
Management Specialization, University of Calgary

Studies on business value focus on the question “What is success?” Common arguments hold that organizations succeed if they make profits, or simply “satisfy the customer” when considering project success in terms of the product. Shenhar and Wideman (2000) conclude that there does not appear to be any agreed-upon understanding of the concept of success in either the business or project management literatures. Likewise, Cooke-Davies (2002) notes that decades of individual and collective effort by project management researchers since the 1960s have not led to the discovery of a definitive set of factors leading to project success. The management literature aimed at information systems professionals also shows considerable variation in explaining success from an outcome versus a process point of view (Claudie and DeLuryea 1995–6; DeCotiis and Dyer 1979). Overall, as thinking about the concept of success evolves, so must the research approaches used to study success. In turn, researchers’ understanding of the value of project delivery has to evolve to include both a wider continuum of efficiency criteria and criteria measuring innovative delivery of products/services and the creation of new markets.

This success issue is further complicated by the development of new organizational forms and ways of doing business. Success looks different in a turbulent business climate marked by the emergence of “virtual” organizations: a consequence of industry disintegration that creates the ability to compete on a much broader value scale through projectized product and service delivery. In a constantly changing business climate, the conceptual understanding of success seems more an evolving, ongoing process than an input-output procedure. Organizations, virtual or not, that do not see the broader picture of success as a dynamic process linked to value creation often fall into what Bass and Christensen (2000) call the “sufficiency trap”: rather than assessing and responding to competitive pressures to innovate on other market trajectories, these projects and sometimes their organizations stay the course and fail.

This paper presents a summary of a research study designed to investigate the nature of success within the context of virtual project teams. The study aimed to identify what influences success in virtual projects; what criteria allow for the best judgment of the success of projects; and what relationships exist among success, value, communication, and project outcomes. The first section of this paper describes the evolution of the success concept. The following section presents a summary of the themes that guide success research and shows their relationship to the overall progression of project management research. An understanding of these dominant themes makes it easier to understand the research approaches that have led us to the current understanding of success. The third section describes the research study. Some of the key findings and conclusions from the research study conclude this paper.

Evolution of Success Research

This section provides insights into the themes that have guided success research and the research approaches utilized in the past. It concludes with a new conceptual understanding of success that synthesizes existing approaches and acts as the theoretical framework for this study.

Themes that Guide Success Research

A historical look at the project management body of knowledge offers some insight into the epistemology (ways of knowing) of success research. As well, although not generally made explicit, conceptual or theoretical foundations influence research approaches. By looking at success over time and from a theoretical perspective, we can see that the study of success in the project management literature falls roughly into three themes: “planning and control,” “contingency,” and “sensemaking” (Exhibit 1).

Exhibit 1. Themes and Theory Relationship Over Time


Planning and control research focuses on correcting ineffective application of planning and control methods and reducing irrational decision-making (Thomas 2000; Morris, Jones, and Wearne 1998; Urli and Urli 2000). Studies of this nature appear grounded in cybernetic theory, which necessitates a view of success as a function of gaining more information and using tighter management controls (Thamhain 1996). Thus, the delivery of successful projects depends on measuring the right things. Planning and control research roughly fits within what Morris, Jones, and Wearne (1998) call the “formation period” (1955–1970), emphasizing the areas of scheduling, earned value, risk management, life cycle, responsibility charting, and partnering.

Contingency-based success research aims to identify overlooked variables, whether technical, social, or managerial in nature, that explain or even predict success (McKeen, Guimaraes, and Wetherbe 1994). From a theoretical point of view, contingency research appears most connected to behavioral (i.e., Skinner, Watson, Bandura) and cognitive (rational) theories (e.g., Piaget, Luria, and Rogers). Contingency studies seem most characteristic of what Morris, Jones, and Wearne (1998) call the “expansion period” (1970s to mid-1980s) and the first part of the “holistic revolution” (mid-1980s to present). Contingency studies in the mid-to-late 1990s began to examine the causes of cancelled or failed projects and cost overruns.

Thomas (2000) acknowledges the overall contribution of contingency research, but notes that this type of research seeks to answer questions about how to improve project management within traditional boundaries. Sensemaking research, as presented by Thomas (2000) and Tjäder (1999), emphasizes the dynamic emergence of “grounded theory” building that answers “why” questions as research unfolds and a relationship is established between researcher and study participants. Sensemaking attends to the role of people's cognitive and emotive processes in shaping experiences.

Sensemaking research in project management emerged within the last five years of the “holistic revolution.” This period marks a time when businesses began to question and identify the underlying reasons why projects fail in relation to the overall business objectives. In keeping with the growth of the professional bodies of knowledge by organizations like APM and the Project Management Institute (PMI®), and the “recognition of the discipline as a genuine cognate area” (Morris, Jones, and Wearne 1998, 5), success research has reason to move beyond finding contingent solutions toward identifying core processes underlying the delivery of successful projects.

Research Approaches to the Study of Success

Knowing that success has been studied from different angles over time, what kind of methods do researchers employ in studying success? Baccarini (1999) concludes that an accepted methodology does not exist for measuring success. Said (1983) suggests that all we have is “scraps” and “fragments” that practitioners and researchers tend to treat as whole frameworks or methodologies. In business practice, methodologies refer to process frameworks for standardizing the act of managing a project. Specifically, these methodologies provide knowledge about what to do, using what process, to produce what product or service (Wateridge 1995). In a research capacity, methodology refers to the processes of sampling, design, collection, and analysis of the data (Cooper and Emory 1995).

Inconsistencies in research approaches and a steady stream of new definitions of success make it difficult to compare terminology and create a shared meaning of success (Baccarini 1999; Wideman 2002). Researchers have tended to use planning and control and contingency-based approaches to study success, with the underlying goal of finding the “one best” list or framework. This has resulted in a proliferation of different lists (some ranked), the development of contingency models and typologies, and most recently, category frameworks and behavioral questioning. The following section briefly describes each of these approaches.

Shopping Lists of Success Factors

Most of the planning and control based studies focus on examining independent variables that influence the successful outcome of projects. Study respondents may be asked to pick success “factors” from a list and rank them, to answer open-ended questions about which success factors are most important, or to rate their level of agreement on the importance of success factors using a ranking scale. The difficulty in this “reflexive” type of study lies in knowing whether predetermined lists are complete. To this end, in developing Survey 2, a review of the literature revealed twelve success factors that appeared most frequently in foundational conference papers, research studies, descriptive case studies, and opinion-based conceptual papers about success.

The most commonly cited success factors include “clear goals, client/user participation, communication, management commitment/support, planning and control, capable resources/team, task/technical orientation, trouble shooting/solving, monitoring/feedback, team leader characteristics, life cycle, funding/cost, understanding the business” (Delisle 2001, 66). The list closely reflects Pinto and Slevin's well-known list of success factors; however, it was not generated from their work alone. Most empirical project success research includes at least half of these twelve factors on a consistent basis. Communication and clear business goals appear as the most frequently included factors, whereas top management commitment, user participation, and capable team/resources appear as the second most often included (Delisle 2001).

Finally, in relation to project outcomes, the concept of success has been examined without due consideration of value or of important contextual factors like internal and stakeholder politics. Traditional “iron triangle” or efficiency-related dimensions such as being on time, meeting the budget, and quality considerations also serve as outcome success measures without consideration of what success means to the project team, stakeholders, and the organization, what each believes success should look like, or who has the ultimate power to declare the project a success or failure. As well, very few studies speak to future opportunity, new market development, or what EIU and IBM Global Services (1999) refer to as “innovation” types of value related to opening new product/service lines or creating new markets.

Contingency Models and Typologies

Contingency and typology-based research helps to ground success research in the project context, for example, by industry (Hartman, Ashrafi, and Jergeas 1998), by organizational structure (Guss and Hartman 1998), and by type of project (Belassi and Tukel 1996; Shenhar, Renier, and Wideman 1996; Shenhar et al. 1997, 2000; Shenhar and Wideman 2000). Many contingency studies test hypothesized models of fewer than ten independent variables against one or two dependent variables (the project outcome). The difficulty in using contingency frameworks lies in justifying what to include and what to exclude, as discussed in the previous section. Thus, the resulting 2x2 or grid-type post hoc models of success are not widely adopted. More often, contingency models appear in information systems (IS) research, as shown by a review of 180 studies by DeLone and McLean (1992). The IS body of knowledge mostly agrees that client or user satisfaction defines project success as an outcome (Claudie and DeLuryea 1995–6; DeCotiis and Dyer 1979).

As well, classification efforts to date have not justified the use of differing project characteristics in a way that has gained a critical mass of acceptance or “buy in” by those in the profession. The appropriate selection of axis dimensions has been somewhat overshadowed by the treatment of success as something people try to achieve, not something projects do per se as entities. Since people do project management, the notion that “project success varies by project type” (Shenhar et al. 1997) draws the focus to picking the right typology or classification axes when attention needs to be focused on success as a dynamic, human-initiated system that is driven foremost by behavior and shaped by perceptions. Thus, the underlying premise of project success typologies, that “for a project to be successful, different types of project work, associated with different types of product, need to be managed differently” (Shenhar and Wideman 2000, 2), needs rethinking. In particular, typologies need to express what people are paying attention to, what they agree to judge the success of a project by, which stakeholders will determine the overall outcome success of the project (processes/product-service), and how stakeholder expectations will be aligned at the front end of the project and managed throughout the project life cycle to achieve success.

Category Frameworks

Category approaches share some of the problems of planning and control and contingency-based research. However, they overcome the mutual exclusivity of ranked lists by allowing nonexclusive membership. Categorization methods move the profession one step closer to understanding the influence of context on success, but this method should not be an end in itself (Delisle 2001). Belassi and Tukel (1996) and Beale and Freeman (1991) provide the most prevalent examples of success research using categorization. Belassi and Tukel (1996) suggest that five categories (project organization, team, environment, leadership, and the project itself) adequately represent the overall context of the internal and external project environment. In this approach, participants are asked to determine which success indicators or success criteria (or both) are most critical to each of the five categories. For example, “trust” may be a success indicator at both the project and the organization level or category.

Exhibit 2. Research Questions and Hypotheses


The value of a categorization model lies in its ability to clearly show the complexity of success research by considering the internal and external context of the project. The major concerns appear to be such models' inability to capture the variability people exhibit in how they judge the typicality of category members. As well, categorization does not allow for analysis, over time, of the strength of relationships between variables within each category.

Behavioral-Cognitive Framework

Historical and theoretical influences, approaches to success research, and consideration of significant findings from key studies helped first to disentangle success concepts and then to build a research framework for this study. Most notably, Cooke-Davies (2002) makes a partial distinction between success components, using a question-based approach to investigate the factors important in project management success, individual project success, and the factors leading to consistently successful projects. However, connections between indicators, criteria, and outcomes as explained below cannot be easily drawn. This research subdivides success into three interrelated components. Critical Success Indicators (CSIs) refer to the processes and markers teams agree to heed as a way to increase their chances of delivering successful projects. Critical Success Criteria (CSCs) refer to the dimensions against which a project will be judged as “successful.” Finally, Outcome Success (OS) refers to the outcomes (i.e., products/services, and project management or project team processes), typically determined in relation to meeting the business goals/objectives.

Success metrics refer to the actual measurements (i.e., number of lines of code to measure the success criterion of “meeting technical specifications”) used to measure an aspect of success. Metrics may be designed and used to measure success indicators, criteria, and/or outcomes. Typically, organizations state that they pay attention to CSIs such as “top management support,” but they do not generate metrics to measure and track them over the project life cycle. As well, it is difficult to establish if a CSI such as “communication” has any predictive power in helping a project team “meet the budget” (a CSC) and contribute to overall outcome success.

The resulting framework consists of a set of three questions that identify the behavioral actions (intention, behavior, and outcome) related to each of the three success components. To understand the intentions of the project team and its stakeholders, we ask, “what CSIs do project teams/stakeholders pay attention to in efforts to enhance chances of success?” Next, to understand behaviors, we ask, “what CSC dimensions do they use to judge success?” And finally, to understand success outcomes, we ask, “what outcomes (product/service or process related) do those who have voting power use to determine the overall success of the project?” A clear separation of concepts helps to minimize what Anderson, Smyth, Knott, Bergan, Bergan, and Alty (1994) call “conceptual baggage,” or the misunderstanding of terminology and definitions created by confusing interrelated concepts. Thus, this question-driven research framework takes differences in the perceptions of respondents into account in an effort to minimize conceptual baggage while disentangling the success concepts (CSIs, CSCs, and OS).
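As a purely illustrative sketch (ours, not the study's), the three success components and their metrics might be recorded as follows; the component names and metrics are hypothetical examples drawn from the discussion above:

```python
from dataclasses import dataclass, field

@dataclass
class SuccessComponent:
    """One success concept together with the metrics used to measure it."""
    name: str
    metrics: list[str] = field(default_factory=list)

# Hypothetical example: the three interrelated components for one project.
csis = [SuccessComponent("top management support", ["sponsor meetings per month"]),
        SuccessComponent("communication", ["weekly status reports issued"])]
cscs = [SuccessComponent("meeting technical specifications", ["lines of code"]),
        SuccessComponent("meeting the budget", ["cost variance"])]
outcome = SuccessComponent("overall product success", ["business goals met (yes/no)"])

for component in csis + cscs + [outcome]:
    print(component.name, "->", component.metrics)
```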

Research Methodology

This research study was designed to examine the concept of project success in general and the nature of project success on virtual teams in particular. Data was collected using a mixed-method approach (paper and electronic data collection). The study involved three surveys issued during 1997–2000 (Surveys 1 and 2: N = 90; Survey 3: N = 50). The study, completed in July 2001, explored three areas of interest, examining a total of nine research questions. This section of the paper provides background on the overall research scope, questions, and hypotheses, and the methods used to collect and analyze the data.

Research Scope, Research Questions, and Hypotheses

Little data have been generated about virtual project teams; thus, claims about success in this environment are relatively unsubstantiated. Because no previous comparative empirical studies exist, we do not have a consensual basis for understanding what constitutes “traditional” and “virtual” project teams. Thus, this study considers findings in the general project management and management literature as a baseline for comparison with traditional project teams.

This study employed three surveys to examine three major themes (success, communication, and virtual project teams). However, this paper reports on selected data from these three surveys that help differentiate between perceptions of success in virtual and traditional projects. The two questions of interest to this paper and their corresponding hypotheses appear in Exhibit 2.

The research addresses whether the understanding of success and its relationship to value is different in virtual project teams, which the literature claims employ more innovative strategies to take advantage of opening or creating new products and markets. The empirical evidence from testing the study hypotheses serves as a starting place for examining whether the virtual context has changed or broadened our understanding of success.

Data Collection and Analysis

Project management research does not tend to address the full methodological complexity inherent in studying teams. Because teams nest within companies (main effect), and members nest within teams, this research uses a nonrandom cluster sample method suggested by Simon (1999). Thus, data was collected on a team-member basis, but one data point exists per company to represent the average across all teams and all members. The research used a standard confidence level of 95 percent and a worst-case proportion (50 percent) when estimating the sample size needed for a given level of accuracy in questions that use proportions.
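To illustrate the arithmetic behind these two choices, here is a minimal sketch (ours, not the study's) of the standard sample-size formula for proportions; the 5 percent margin of error is an assumed parameter:

```python
import math

def sample_size(confidence_z: float = 1.96, p: float = 0.5, margin: float = 0.05) -> int:
    """Minimum sample size for estimating a proportion.

    confidence_z: z-score for the confidence level (1.96 ~ 95 percent).
    p: expected proportion; 0.5 is the worst case (maximum variance).
    margin: acceptable margin of error (0.05 = +/- 5 percentage points).
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

print(sample_size())  # 385 responses needed at 95 percent confidence, +/- 5 percent
```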

Survey 1 replicated the core questions from the Project Implementation Profile (PIP), a reliable and valid diagnostic instrument for examining success, developed by Pinto and Slevin (1992). Participants used a seven-point Likert scale (strongly agree to strongly disagree) to record how strongly they agreed or disagreed with each of five statements for ten individual Critical Success Factors.

Survey 2 required pretesting a combination of new and existing survey questions from published sources because no single instrument met the needs of this research study. Participants viewed a list of the twenty-nine success indicators appearing most commonly in the previous research literature and grouped them into five nonmutually exclusive categories (project, environment, project team, project leadership, and organization). For example, “fun” as a success indicator could be grouped in one or all of the five categories identified by Belassi and Tukel (1996). Respondents categorized success indicators according to the model introduced by Belassi and Tukel (1996, 47) because collecting data in this manner “provides more complete and reliable information.”

Survey 3 examined success and communication in greater depth, building on the findings from the first two surveys. Rogers' (1989) Communication Openness (COM) instrument, comprising twelve questions, formed part of the survey; the COM is a reliable and valid instrument that has been used to measure communication openness and relate it to perceptions about project success. The next part of the survey asked participants to rate their level of agreement on a 5-point Likert scale (1 = not successful, 5 = very successful) on three questions: the importance of success indicators (the same list from Survey 2 plus three more CSIs); how successful the project was in meeting a list of twelve success criteria (these dimensions, shown in Exhibit 7, appear in part in foundational research by Shenhar, Levy, and Dvir (1997); Cooper and Kleinschmidt (1987); Dvir and Shenhar (1992); and Pinto and Slevin (1987)); and how successful the project “experience” was overall (outcome success). Finally, participants responded to two short-answer questions asking 1) whether they identified what to pay attention to (success indicators) in the planning phase of the project, and 2) whether the team identified what they would judge the success of the project by, either at the front end, during, or after the last phase of the project.

A total of ninety-one surveys were returned by mixed method (email, fax, mail) from the initial sample frame of 724; fifty participants responded to Survey 1, while Survey 2 yielded forty-one responses. A total of fifty surveys were returned from the Survey 3 sample frame of approximately 4,000 eligible participants, who could only access the survey through the Gantthead.com website. A simple response rate calculation was used because of the Internet format, resulting in an estimated 4.2 percent response rate among eligible participants (allowing for 30 percent attrition or bounced emails).

Data from each survey was imported from Microsoft® Excel® to SPSS statistical software for analysis. The data analysis included basic descriptive statistics as well as inferential tests. Correlations show the relationships between success variables. Factor analysis shows patterns among the variables, revealing underlying combinations of variables that account for the variance in explaining a concept (Cooper and Emory 1995). Principal Component Analysis (PCA) was used to transform a set of variables into composites (or models) whose components do not vary together; that is, one component does not increase or decrease with another. The first linear combination of variables from the PCA builds a factor model that accounts for the greatest percentage of the variance. The remaining components explain the variance not accounted for by the first component (typically, the top three models account for the majority of the variance, and the remaining variance appears as residuals [error] or unexplained variance).
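As a rough sketch of this type of analysis (the data below are randomly generated stand-ins, not the study's responses; the 0.60 loading cut-off is taken from the discussion of Exhibit 5 later in the paper):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical 7-point Likert responses: rows = respondents, columns = variables.
rng = np.random.default_rng(42)
responses = rng.integers(1, 8, size=(50, 12)).astype(float)

# Standardize so each variable contributes equally to the components.
standardized = (responses - responses.mean(axis=0)) / responses.std(axis=0)

pca = PCA(n_components=3)
pca.fit(standardized)

# Loadings: correlation-style weights of each variable on each component.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

print("Variance explained per component:", pca.explained_variance_ratio_.round(3))
# Report only loadings meeting a stringent 0.60 cut-off, zeroing the rest.
print(np.where(np.abs(loadings) >= 0.60, loadings.round(2), 0.0))
```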

Probability and significance criteria met the standard social science research significance level of 0.05. Standard statistical tables in Cooper and Emory (1995) were used to select the value that correlations needed to exceed for the researcher to reject the null hypothesis. All survey instrument scales that could be tested met a Cronbach's alpha of no less than 0.70. The most important consideration in exploratory research may be to treat validity as cumulative, inferred from an accumulation of empirical and conceptual evidence (Cooper and Emory 1995).
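For reference, a minimal sketch of computing Cronbach's alpha for a scale (hypothetical data, not the study's; the 0.70 threshold comes from the text above):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a scale: rows = respondents, columns = items.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical correlated 7-point Likert responses for a 5-item scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(50, 1))
items = np.clip(base + rng.integers(-1, 2, size=(50, 5)), 1, 7).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")  # a reliable scale should exceed 0.70
```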

Exhibit 3. Survey 1 CSF Rank Comparison


Key Findings and Discussion

This section presents the study findings related to the hypotheses explored in this paper. Key findings from the factor analysis are grouped into three sections: Critical Success Indicators, Critical Success Criteria, and Outcome Success determinants. The source of the data in each section (from Surveys 1–3) is also noted. Overall, breaking the survey results into three sections helps to present the findings clearly and to tie the patterns of interrelationship into the discussion of success and value.

Critical Success Indicators

The results from Surveys 2 and 3 support the claim that “the common notion of measuring project success by evaluating the implementation process alone is no longer valid” (p. 10). The study findings show some support for the second part of hypothesis “b” (virtual project teams perceive success criteria and success indicators differently from traditional project teams). Exhibit 3 presents the net agreement in ranking of the importance of Pinto and Slevin's (1992) ten CSFs (called Critical Success Indicators in this study).

The left side of the exhibit shows Pinto and Slevin's (1992) “original” factor ranking (1 being most important and 10 least), and the right-hand side displays the rank from the data collected in Survey 1. The most dramatic changes are that “top management support” dropped significantly in rank from second to ninth position, “personnel” dropped from fifth to tenth, and “monitoring and feedback” rose from eighth to third. These results suggest that virtual project teams either secure the support of top management much earlier in the project cycle or rely more on informal and ongoing monitoring and feedback processes as people shift in and out of the core team, rather than following formal policy or procedures from the top down.

Next, the category approach in Survey 2 enabled the collection of data by differing contexts, so the changes in perception of the importance of success indicators can be clearly seen. Participants grouped CSIs into five predetermined levels; thus, frequency counts rather than the strength of relationships were measured. Exhibit 4 presents the top five success indicators for each of the five levels.

The data from Survey 2 show five of the ten CSIs identified by Belassi and Tukel (1996) and Beale and Freeman (1991) at the level of the “project team” to be statistically significant, as shown in Exhibit 4. Belassi and Tukel and Beale and Freeman (1991) do not include “top management commitment” as an indicator in their models at any level except the “organization.” Thus, there is no comparative data collected by a categorization method showing whether “top management commitment” differs between virtual and traditional project teams at levels other than the organizational level.

However, Engkavanish (1999) finds in his dissertation that control or administrative efforts by the virtual organization have a negative influence on project success, suggesting that virtual organizations work on a trust basis, building relationships without the traditional directing mechanisms required for information sharing. Furthermore, virtual project teams may distance themselves from the bureaucracy of the organization or organizations that act as project owners/sponsors. Thus, the potential for fragmentation in a virtual environment may make it more difficult for top management to keep track of a project team, and it may appear pointless from the perspective of top management to attempt to gain “control” of the team in a fragmented work environment. Overall, CSIs related to the project level appear internally focused, whereas alignment and business issues appear more of an organizational and project concern. The CSIs identified at the project and environment levels appear less frequently agreed on than those at the other levels, again suggesting that contextual variability makes each project a unique undertaking.

Exhibit 4. Survey 2 CSI Comparison by Level


Exhibit 5. Survey 2 CSI Models


Survey 3 used a similar approach, although the intent was to conduct a factor analysis of the CSIs at the level of the project team only (the other four levels were not tested). Factor analysis revealed three models accounting for 75.89 percent of the total variance, or the ability to explain what success indicators are critical at the level of the project team. Factor loadings (shown in brackets) exceed the stringent cut-off point of 0.60 recommended by Cooper and Emory (1995). The three components presented in Exhibit 5 show high factor loadings, accounting for a large part of the variance in explaining what virtual project teams need to pay attention to in increasing their chances of success.

Exhibit 6. Critical Success Criteria


Exhibit 7. Success Dimension Correlations


The success indicators in Model 1 pertain more to the “internal” functioning of the project team members that allows them to focus on external issues such as planning. Model 2 suggests that “trust” and “fun,” rather than rules-based policy and procedures, provide the basis for the team to quickly bond as a “tribe” so that the team may confidently shift roles and attend to the business aspects of the project (Delisle 2001). Similarly, Hartman (2000) notes that the successful “traditional” project teams he studied operate more on a trust basis, such that the overall business direction is a shared responsibility. Finally, Model 3, comprising a single indicator, suggests that responses were based on an understanding of commitment from a trust perspective, where a high level of personal commitment follows from decisions made internally, not imposed externally.

To summarize, the findings suggest that virtual project teams place similar importance on some CSIs and less importance on others when considered against historic data. Consistency is found in the top five CSIs, which relate mostly to internal rather than external issues of the project team, as shown in Survey 2 and validated in Survey 3. Further research to test the remaining four levels needs to be done.

Critical Success Criteria

Survey 2 asked participants to define project success in accordance with what their organization deemed important in judging the success of a project. However, the short-answer question format produced a total of only seven different CSCs, reflecting the efficiency theme prevalent in the literature (Exhibit 6).

Survey 2 respondents listed success criteria related to efficiency or “iron triangle” dimensions (time, cost, quality), as well as an effectiveness or client satisfaction dimension. Research by Toney and Powers (1997) shows similar results for nonvirtual project teams where the most common success outcome measurements include “on budget” (58 percent), “on time” (81 percent), and “quality” (69 percent). Only 19 percent of respondents listed “customer satisfaction,” although 15 percent do mention “market success.”

To help anchor responses in a specific context and avoid answers based on preconceived notions, Survey 3 respondents were asked to rate how successful they are (or would be) in achieving success on twelve different dimensions, as well as to report their level of success on meeting four success outcomes (Exhibit 7). A contingency approach allowed for the examination of relationships between success dimensions and outcome success rather than simply producing a ranked list of individual dimensions. Exhibit 7 presents the correlational data; the boxed numbers are those that met an acceptable statistical standard of 0.500 (out of 1.00) with p values of < 0.01, as per advice from Cooper and Emory (1995).
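A brief sketch of this reporting standard (hypothetical ratings, not the study's data; with real survey responses, only pairs meeting the thresholds from the text would print):

```python
import numpy as np
from scipy import stats

# Hypothetical ratings: rows = respondents, columns = 12 success dimensions.
rng = np.random.default_rng(1)
ratings = rng.normal(3.0, 1.0, size=(50, 12))

# Report only pairwise correlations meeting the study's standard:
# |r| >= 0.500 with p < 0.01.
for i in range(ratings.shape[1]):
    for j in range(i + 1, ratings.shape[1]):
        r, p = stats.pearsonr(ratings[:, i], ratings[:, j])
        if abs(r) >= 0.500 and p < 0.01:
            print(f"dimension {i} vs dimension {j}: r = {r:.3f}, p = {p:.4f}")
```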

The majority of the highest correlations are between the same “types” of dimensions. For example, the effectiveness dimensions “product or service being used by the customer” and “satisfying the customer” are highly correlated at 0.712, and the efficiency dimensions “meeting technical specifications” and “meeting the budget” show a high correlation (0.601). High correlations also occur between the innovation dimensions “obtaining commercial success” and “generating a large market share” (0.674). The most notable cross-correlation appears between solving the business problem (effectiveness dimension) and obtaining commercial success (innovation dimension) at 0.628. Overall, these findings suggest that success dimensions are considered either at the project or operational level or at the strategic level, such that no strong relationship connects them. This may reflect difficulty in making convincing arguments about how iron triangle dimensions relate to the business case at the strategic level.

Next, PCA allows for the assessment of the strength of correlations by partitioning the variance into groupings of dimensions that together account for the greatest percentage of the variance. The results in Exhibit 8 present the four distinct components (all loadings above 0.500) resulting from this analysis. These components break out in much the same way as those of Dvir and Shenhar (1992) and Shenhar, Levy, and Dvir (1997). However, the labels assigned to their four components are “customer satisfaction,” “budget and schedule,” “business success,” and “future potential.” In this study, the PCA revealed one component, “market potential,” which essentially combines Shenhar, Levy, and Dvir's components “business success” and “future potential.” As well, their “customer satisfaction” component is a combination of this study's “efficiency” and “effectiveness” components.

Exhibit 8. Factor Loadings for 12 Success Dimensions


Interestingly, 8.38 percent of the variance relates to the “budget and schedule” component, a figure very similar to that of Shenhar, Levy, and Dvir (1997), who find 10.8 percent for the same component. Although the literature generally discusses budget or schedule considerations in terms of efficiency, the presence of a combined component may point to an evolution of these concepts. Fleischer (2001) and Singer (2001) suggest that the concept of schedule has changed for virtual project teams. Fleischer (2001, 34) reports that for his project, “the schedule was basically a task list without dates—anything else would drive you crazy. We used milestones for tracking because of the need for clear project markers.” Singer (2001, 40) also advises that scheduling for virtual project teams often consists of tracking work products (outputs of any task or activity) as opposed to deliverables, which refer “to a work product seen by the customer.”

These findings also raise another interesting question: why do organizations continue to focus on efficiency-related dimensions in judging the success of projects when effectiveness or customer-related dimensions account for a much higher percentage of the overall variance? For example, Survey 3 data show that 28 percent of respondents report being mostly to very successful in “meeting technical specifications,” 27 percent in “meeting operational specifications,” 21 percent in “meeting the budget,” and 23 percent in “meeting the schedule.” Furthermore, Thomas, Delisle, and Jugdev (in press) find that 26 percent of respondents mostly to strongly agree with the statement “projects are consistently completed on schedule,” and 23 percent mostly to strongly agree with the statement “projects are consistently completed on budget” (N = 1,867). In contrast, this research finds that almost 30 percent are mostly to very successful in meeting efficiency success criteria, while approximately 15 percent met market potential criteria (listed in Exhibit 5). Most striking, over a third actually regard innovation criteria as “not applicable.” The major difference appears to be that virtual project teams more heavily emphasize customer-related criteria that fall under the effectiveness dimension. Whether virtual or not, Wateridge (1996, 155) suggests that project management is focusing on the “wrong factors or simply applying the right factors badly” in dealing with success.

Overall, the results support hypothesis “c” (virtual project teams do not consider innovation success dimensions as critical as efficiency dimensions). It may be that virtual teams are used on projects expected to deliver on innovative value components while the teams' skills and abilities remain focused on efficiency goals. Generally, regardless of the type of team, the findings concur with the literature in that efficiency rather than innovation dimensions are most prominent on the organization's radar screen.

Success Outcomes

Survey 3 also collected data about overall outcomes related to success. Respondents rated their level (or expected level) of success on four measures. The net percentage of success (very to mostly successful) is shown in brackets as follows:

• Overall success of the project (i.e., product) (75 percent)

Exhibit 9. CSCs That Predict Success Outcomes


• Overall success of the project management methods (61 percent)

• Overall success of the project team communications (core or main) (72 percent)

• Overall success of the project team communications (entire team) (62 percent).

There appears to be little difference between the importance participants place on any one of the four outcomes. When comparing the twelve success dimensions and the four outcome measures, only one statistically significant strong correlation appears: between the “overall success of the project management methods” and “solving the business problem” (0.503, p < .05). This finding has high face validity in that the literature constantly discusses project management as a way to solve business problems.

Finally, how do the twelve dimensions of success (as independent variables) predict the four success outcomes (as dependent variables)? When regressed, only three of the twelve success dimensions have statistically significant predictive power (p < 0.001, df = 12) for two of the outcome measures (Exhibit 9).

Overall, three of the twelve success dimensions, “solving the business problem,” “meeting the schedule,” and “opening a new market,” account for 51.68 percent of the variance in explaining the overall success of the project management methods. Also, “meeting the schedule” has predictive power for the outcome success measure “overall communication success of the core team”; when regressed, it shows the strongest relationship (B = 0.428), accounting for 48.70 percent of the variance.
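For readers who want to see the shape of such a regression, here is a minimal sketch (simulated data; the coefficients and R-squared it prints are not the study's results):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical ratings: 12 success dimensions (predictors), one outcome measure.
rng = np.random.default_rng(7)
dimensions = rng.integers(1, 6, size=(50, 12)).astype(float)
# Simulated outcome loosely driven by three of the dimensions plus noise.
outcome = (0.4 * dimensions[:, 0] + 0.3 * dimensions[:, 5]
           + 0.2 * dimensions[:, 9] + rng.normal(0, 0.5, size=50))

model = LinearRegression().fit(dimensions, outcome)
r_squared = model.score(dimensions, outcome)  # share of variance explained
print(f"R^2 = {r_squared:.4f}")
print("Coefficients:", model.coef_.round(3))  # analogous to the reported B values
```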

In summary, these findings underscore the inability of any single measure of outcome success to determine the success of a project. At best, using the CSCs presented in Exhibit 6 to judge the success of the project accounts for about half of the variance in explaining the successful outcome of the project. The current focus on efficiency dimensions does not adequately explain the success of project management methods. The results indicate that, regardless of the type of team, efficiency appears more central in forming perceptions of success. Innovation dimensions, although recognized, appear to play more of a supporting or secondary role in explaining success as it is currently viewed (supporting hypothesis “c”).

Linking Success Indicators, Criteria, and Value

Considering the relatively large body of knowledge about success, few studies have found more than a tentative relationship between success indicators and criteria (Wateridge 1995; Erling and Svien 2000). The data from the open-ended questions in Survey 3 show that only 45 percent of respondents identify CSIs or CSCs, and only 20 percent report that their teams define and measure both. Thus, how tightly are these concepts actually linked? Wateridge (1995) presents what seems to be the only empirical research that attempts to expressly link CSIs and CSCs, although he combines selected CSIs from the literature with the results of his dissertation survey to produce a nonstatistically derived matrix of CSIs and CSCs thought to be of primary, secondary, or tertiary importance.

Second, the data collected in Survey 3 (the top seven CSIs at the level of the project team and twelve CSCs) were regressed to test for any linkages. Although these findings do not appear statistically significant, the effect sizes suggest that a relationship does exist, although one more indirect than assumed. On this basis, the results of the regression analysis provide an overall picture of the potential amount of variance (total possible variance equals 100 percent), or the ability of each CSI to explain each CSC (Exhibit 10). The highest-level heading of Exhibit 10 shows the breakout of the components from the factor analysis (efficiency, budget and schedule, effectiveness, and market potential) in relation to the success dimensions. The success criteria appear in the far left column. The key at the bottom of the exhibit shows the results by percent of variance accounted for, in ranges: Significant (SI) = 10 percent or more | Critical (C) = 9.99–5 percent | Primary (P) = 4.99–3 percent | Secondary (S) = 2.99–1 percent | Tertiary (T) = less than 1 percent.
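A short sketch of how the exhibit's variance bands could be assigned programmatically (the band boundaries follow the key above; the CSI names and percentages are hypothetical):

```python
def variance_band(pct: float) -> str:
    """Classify the percent of variance a CSI explains for a CSC,
    following the bands in the Exhibit 10 key."""
    if pct >= 10.0:
        return "SI"  # Significant
    if pct >= 5.0:
        return "C"   # Critical
    if pct >= 3.0:
        return "P"   # Primary
    if pct >= 1.0:
        return "S"   # Secondary
    return "T"       # Tertiary

# Hypothetical variance contributions (percent) for one CSC.
for csi, pct in {"trust": 7.5, "fun": 10.2, "commitment": 2.4}.items():
    print(csi, variance_band(pct))
```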

The CSI “fun” appears to have the highest predictive power (SI = 10 percent or more of the variance) in explaining each of two success dimensions, “meeting the budget” and “satisfying the customer.” The CSI “trust” has the widest predictive power (C = 9.99–5 percent of the variance), spanning five success dimensions (meeting technical specifications, being used by the customer, generating a large market share, opening a new market, and opening a new line of products). These results imply that trust and fun may be important mediating forces in achieving open communication (the respondents' most frequently mentioned CSI, Exhibit 3), enabling the team to effectively address CSCs (Hartman 2000).

Exhibit 10. Success Matrix


Exhibit 10 may also be examined with respect to the value outcomes a project generates by paying attention to CSIs and judging its success by certain CSCs. Conceptually, the efficiency criteria on the far left represent the generation of foundational value, and the market potential criteria on the far right represent innovation value. EIU and IBM Global Services (1999) provide a full explanation of value as referred to in this context. The major premise stems from successful organizations knowing when to apply the most applicable type of value, not that movement from left to right through a “continuum” is more correct. Considering the variance in the “C” and “SI” categories (bolded in Exhibit 10), the data indicate that the CSI “fun” might have more influence on selling the strategic or innovation type of value of a project, whereas “trust” appears critical to both foundation and innovation value. Furthermore, the planning and communication skills of the project team appear more critical in achieving innovation value.

Conclusions

The concept of success continues to change over time. Success as a multidimensional construct appears sensitive to contextual factors and research approaches, whether examining success indicators, criteria, or outcomes. For example, different importance is placed on CSIs when they are examined by categories as opposed to ranked lists. Overall, virtual project teams focus on internal CSIs (i.e., trust, open communication, commitment, fun, and communication skills), while previous results from planning and control studies indicate that project teams pay attention to more external indicators such as top management support and client consultation.

As well, when asked to list success criteria, participants typically refer to efficiency-based “iron triangle” criteria. When asked to rate their success on achieving various success dimensions, participants consider a wider value spectrum. Virtual project teams respond very similarly to their traditional counterparts, with only slightly more focus on customer or effectiveness related success criteria. Regardless of the type of team, innovation-related success criteria do not appear as important.

With respect to outcomes, respondents report net success levels (very to mostly successful) of between 61 and 75 percent on the four outcomes (i.e., success of the project, of the project management methods, and of project team communications for the core or entire team). Thus, consideration must be given to understanding the relationship between CSIs, CSCs, and outcomes. Overall, a much more tentative link appears to exist between CSIs and CSCs than conventionally believed.

The behavioral-cognitive based framework appears useful in examining success from a people rather than a technical perspective. That is, at the front end of the project, we need to understand the intentions (what CSIs do project teams/stakeholders pay attention to in efforts to enhance chances of success?), the behaviors (what CSC dimensions will they use to judge success?), and the outcomes (what product/service or process related results will those who have voting power consider in determining the “overall” success of the project?). Once established at the front end, perceptions of these different aspects of success may change as contextual factors such as politics and the market shift in the process of jointly realizing the project and business goals. By shifting business trajectories (i.e., considering efficiency as well as innovative dimensions of success) in anticipation of competitive pressures, organizations can avoid the “sufficiency trap” of staying with a course of action simply because it is familiar.

Overall, this research is valuable at an academic, practical, and professional level. It makes a theoretical contribution by disentangling success constructs, questioning the conceptual and theoretical foundation on which our thinking about success is anchored, and adding to the empirical basis for understanding success in virtual projects. It makes a practical contribution by testing assumptions about relationships between success concepts, resulting in two practical models (not reported in this paper). On a professional level, this research points to the absence of basic empirical research about project success in the virtual business context. Finally, critical research into the relationship between success, value, and virtual projects and their teams will help establish credible evidence to tie success at the project level to success at the business (or strategic) level.

References

Anderson, B., M. Smyth, R. P. Knott, M. Bergan, J. Bergan, and J. L. Alty. (1994). Minimising conceptual baggage: making choices about metaphor. In G. Cockton, S. Draper, and G. Weir (Eds.), Conference proceedings of People and Computers IX, HCI’94 (pp. 179–194). Cambridge University Press.

Baccarini, D. (1999, December). The logical framework method for defining project success. Project Management Journal 30: 25–32.

Bass, Michael J., and Clayton M. Christensen. (2000). Seeing beyond Moore's Law: Value beyond performance and cost/performance. Working Paper 01-046. Harvard Business School Division of Research.

Beale, P., and M. Freeman. (1991). Successful project execution: A model. Project Management Journal.

Belassi, W., and O. I. Tukel. (1996). A new framework for determining critical success/failure factors in projects. International Journal of Project Management 14 (3): 141–151.

Claudie, F., E. Thomas, and Robert E. DeLuryea. (1995–6). Client/Server journey: Critical success factors. Accessed at http://www.csc.ibm.com/journey/docs/csjourney/docs/csjourn2.html.

Cooke-Davies, Terry. (2002). The “real” success factors on projects. International Journal of Project Management 20: 185–190.

Cooper, D. R., and W. C. Emory. (1995). Business Research Methods. Boston: Irwin.

Cooper, R. G., and E. J. Kleinschmidt. (1995, November). Benchmarking Firms' Critical Success Factors in New Product Development. School of Business, McMaster University, Hamilton, Ontario.

DeCotiis, T. A., and L. Dyer. (1979). Defining and measuring project performance. Research Management: 17–22.

DeLone, W. H., and E. R. McLean. (1992, March). Information systems success: The quest for the dependent variable. Information Systems Research 3 (1): 61–93.

Delisle, Connie L., and David Olson. (In Press). Would the real project management language please stand up? International Journal of Project Management.

Delisle, Connie. (2001, July). Success and communication in virtual project teams. Unpublished Doctoral Dissertation, Dept. of Civil Engineering, Project Management Specialization, The University of Calgary, Calgary, Alberta.

Dvir, D., and A. Shenhar. (1992). Measuring the success of technology-based strategic business units. Engineering Management Journal 4 (4): 33–38.

EIU and IBM Global Services. (1999). Assessing the Strategic Value of Information Technology. IBM Global Services: 1–98.

Engkavanish, S. (1999, May). Analysis of the effectiveness of communication and information sharing in virtual project organizations. Unpublished Doctoral Dissertation, School of Engineering and Applied Science, George Washington University, Washington, DC.

Erling, S. Anderson, and Arne Svien. (2000). Project evaluation scheme: A tool for evaluating project status and predicting project results. Project Management 6 (1): 61–69.

Fleischer, Kay. (2001). Internet project Kosovo: How do you manage (yourself and your project) in an upside-down environment? PM Network 15 (4): 32–34.

Guss, C. L., and F. Hartman. (1998). Project success 2020—Looking onward not into the sand. Alternate Paper. In Project Management Institute (Ed.), Proceedings of Project Management Institute Symposium in Long Beach, California.

Hartman, F. T. (2000). Don't Park Your Brain Outside: A Practical Guide to Improving Shareholder Value with SMART Management. Newtown Square, PA: Project Management Institute.

Hartman, F. T., Rafi Ashrafi, and George Jergeas. (1998). Project management in the live entertainment industry: What is different? International Journal of Project Management 16 (5): 269–281.

Lechler, T. (1998). When it comes to project management, it's the people that matter: An empirical analysis of project management in Germany. In F. Hartman, G. Jergeas, and J. Thomas (Eds.), Proceedings from the International Research Network on Organizing by Projects (IRNOP III) in Calgary, Alberta, Canada.

McKeen, James D., Tor Guimaraes, and James C. Wetherbe. (1994, December). The relationship between user participation and user satisfaction: An investigation of four contingency factors. MIS Quarterly: 427–451.

Morris, P. W., Ian Jones, and S. H. Wearne. (1998). Current research directions in the management of projects at UMIST. In F. Hartman, G. Jergeas, and J. Thomas (Eds.), Proceedings from the International Research Network on Organizing by Projects (IRNOP III) in Calgary, Alberta, Canada.

Pinto, J. K., and D. P. Slevin. (1987, February). Critical factors in successful project implementation. IEEE Transactions on Engineering Management EM 34 (1): 22–27.

______. (1992). The project implementation profile: New tool for project managers. Project Management Journal 17 (4): 57–70.

Said, E. W. (1983). Opponents, audiences, constituencies, and community. In W. J. T. Mitchell (Ed.), The Politics of Interpretation. Chicago: University of Chicago Press.

Shaw, M., and B. R. Gaines. (1989). Comparing conceptual structure: Consensus, conflict, correspondence and contrast. The Knowledge Science Institute, The University of Calgary, Calgary, Alberta. Accessed at http://ksi.cpsc.ucalgary.ca/articles/KBS/COCO.

Shaw, M. L. G., and B. R. Gaines. (1992). Kelly's Geometry of Psychological Space and its Significance for Cognitive Modeling. Knowledge Science Institute, University of Calgary.

Shenhar, A. J., Ofer Levy, and Dov Dvir. (1997). Mapping the dimensions of project success. Project Management Journal 28 (2): 5–13.

Shenhar, A. J., and M. Wideman. (2000). Optimizing Project Success by Matching PM Style with Project Type. Accessed at http://www.pmforum.org/pmwt/papers00-11.htm.

Shenhar, Aaron J., James J. Renier, and Max Wideman. (2001, May). Project Management: From Genesis to Classification. Accessed at http://www.maxwideman.com/papers/genesis/genesis.pdf. The original version of this paper was presented to the Classification INFORMS Conference in Washington, DC, in May 1996.

Simon, S. (1999, November 29). Email Note on Sample and Design. Accessed at http://www.cmh.edu/.

Singer, Carl. (2001). Leveraging a worldwide project team. PM Network 15 (4): 36–40.

Thamhain, H. J. (1996, December). Best practices for controlling technology-based projects. Project Management Journal: 37–47.

Thomas, Janice L. (2000). Making sense of project management. Unpublished Doctoral Dissertation, Faculty of Management, The University of Alberta. Edmonton, Alberta.

Thomas, Janice, Connie Delisle, and Kam Jugdev. (In Press). Selling Project Management to Senior Executives: Important Planning Actions. Newtown Square, PA: Project Management Institute.

Tjäder, J. (1998). Making sense of project management. In F. Hartman, G. Jergeas, and J. Thomas (Eds.), Proceedings from the International Research Network on Organizing by Projects (IRNOP III) in Calgary, Alberta, Canada.

Toney, F., and Ray Powers. (1997). Best practices of project management groups in large functional organizations: Results of the Fortune 500 project management benchmarking forum: 1–167.

Tukel, O. I., and W. O. Rom. (1995, revised 1997). Analysis of the characteristics of projects in diverse industries. Working Paper, Cleveland State University, Cleveland, OH.

Urli, Bruno, and Didier Urli. (2000). Project management in North America: Stability of the concepts. Project Management Journal 3: 33–43.

Wateridge, J. (1995). Delivering successful IT projects: Eight key elements from success criteria to review via appropriate management, methodologies and teams. Unpublished Doctoral Dissertation, Dept. of Management, Cranfield School of Business, Milton Keynes.

Wideman, R. Max. (2002). Comparative glossary of common project management terms. Accessed at http://www.maxwideman.com.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.
