Why information systems development projects are always late
Yetton, Philip, Australian School of Business, University of New South Wales, NSW 2032, Australia. [email protected]
*Liu, Li, University of Sydney, Building J05, Sydney 2006, Australia. [email protected]
Information systems development (ISD) projects are characterized by high rates of schedule overrun. To address this challenge, a novel explanation is developed that explores the effect of task variance and task dependence on performance. The findings suggest that project overruns are a function of the interaction of these two variables, which drives both the level of, and the variance in, project performance against schedule. Building on this explanation, the paper provides a number of suggestions for improving performance and also offers an explanation for why learning to improve performance in this domain has been slow.
Keywords: project schedule, task variance, task dependency, project delay, information technology project, project performance.
The capacity to execute information systems development (ISD) projects on time, on budget, to quality, and to scope is critical in today's economy, in which a significant portion of new investment is driven by information technology and communications. Unfortunately, performance has remained low, despite a tradition of research into ISD failure that dates back to the 1970s and has been in continuous development ever since (Brooks, 1975; Lucas, 1975; Lyytinen and Hirschheim, 1987; Avison and Fitzgerald, 1995; Sauer, 1999; Sambamurthy and Kirsch, 2000; Johnson, Boucher, Connors, & Robinson, 2001; Standish Group, 2009).
In a review of the empirical literature, Sauer (1999) found that, from the 1970s through the 1990s, failure rates varied between 30% and 70%. In a subsequent survey, Sauer and Cuthbertson (2003) reported that 12% of projects in the United Kingdom were successful, 75% were challenged, and 9% failed. In its global survey, KPMG (2005) reported that 49% of the organizations surveyed had experienced at least one failed information systems (IS) project in the previous year. Finally, in a series of surveys beginning in 1994, the Standish Group reported that success rates were higher and failure rates were lower in 2004 than in 1994; however, success rates were only 29% in 2004, and 18% of projects still failed, with an average schedule overrun of 84% (Rubinstein, 2007). The success/failure rates also fluctuate over the years: for example, the CHAOS Summary 2009 reported a project success rate of 32%, lower than the 34% reported for 2004 and the 35% reported for 2006; 44% of projects were challenged, compared with 52.7% in 1994, 51% in 2004, and 46% in 2006; and 24% failed, compared with 31% in 1994, 15% in 2004, and 19% in 2006 (Anonymous, 2004; Rubinstein, 2007; Standish Group, 2009).
These surveys identify three performance characteristics. First, performance on IS projects is poor, with low success rates; these findings are well recognized in the literature. Second, the Standish surveys show that improvements in success rates were slow between 1994 and 2009, yet this has not been a central focus of the literature. Third, the surveys report that the variance in performance across projects is very high. The findings from Johnson (2006) illustrate this: success rates are 29%, many overruns are less than 20%, and yet the average schedule overrun is 84%.
Most ISD research has focused on reducing failure rates and/or the magnitude of schedule overruns; little prior research has focused on schedule variance. Elsewhere, in total quality management (TQM) (Deming, 1986; Juran and Gryna, 1988), a focus on task cycle variance led to significant improvements in production efficiency. Drawing on the TQM literature, the premise here is that a reduction in task variance would lead to a significant reduction in information systems schedule overruns.
In addressing this issue, the paper's contribution is threefold. First, it develops a framework to explain the joint influence of task variance and task dependence on the project schedule and to do this, both the factors that influence performance and the processes by which those factors affect the schedule are modeled. Second, the study accounts for an unexplored component of the continued poor performance of IS projects, namely, the variance in schedule overruns. Prior studies focused on the expected rates of overrun, with limited interest in the range of those overruns; here, schedule variance is considered to be a critical dependent variable, rather than being treated simply as a measure of residual error. Third, the model developed here explains why learning and improvements in success rates have been slow, and identifies the interventions used to improve performance.
The balance of this paper is broken down into four sections. First, the background literature is reviewed and the theoretical framework presented; second, the methodology is described; third, the framework is validated using literal replication (Yin, 1984) across four case studies; and, finally, the findings are discussed and the implications for research and practice identified.
Traditionally, research in ISD has been categorized as either factor or process based (Markus and Robey, 1988; Newman and Robey, 1992; Sambamurthy and Kirsch, 2000). A factor approach identifies potential predictor variables of successful performance (Newman and Robey, 1992). Examples include user involvement, management support, and project planning (Yetton, Martin, Sharma, & Johnston, 2000). In contrast, a process approach defines a project as a sequence of tasks over time (Newman and Robey, 1992). Performance is explained in terms of the relationships among those tasks; examples include the learning, conflict, political, and garbage-can models (Newman and Noble, 1990).
Newman and Robey (1992) argue that insights from factor and process models can be complementary, with factor models establishing the relationship between predictors and performance, whereas process models examine the activities that generate those associations. Sabherwal and Robey (1995) show that, although the ontological assumptions of factor and process models are different, their epistemological assumptions are the same; therefore, they conclude that, although the models consider different types of data, they have similar research goals.
Following this guidance, the framework developed in this paper considers both process and factor approaches. Specifying the process by which the factors influence performance significantly increases the explanatory power of the framework. To do this, a project is treated as an ordered sequence of tasks. The process description in this paper, which is a function of the dependencies across tasks within a project, is explained by two propositions (P1 and P2) and an assumption (A1). These propositions model how task variance and task dependence influence both the level of, and variance in, schedules. Before describing this process, the definition of a project adopted in this paper is outlined.
A project is a temporary undertaking to create a unique product (PMI, 2008). Project scheduling is a technique used to allocate and monitor tasks in a sequence (Schwalbe, 2002). Gantt charts provide a standard format for displaying project schedule information as a bar chart that plots tasks on the vertical axis against time on the horizontal axis (PMI, 2008). Figure 1 illustrates a Gantt chart with eight tasks.
In developing a project schedule, a project manager considers the tasks to be completed, their schedules, and their dependencies. Project network techniques estimate the total project schedule subject to those constraints (Turner, 1993). The techniques include the project evaluation and review technique (PERT), the critical path method (CPM), the graphic evaluation and review technique (GERT), and critical chain project management (CCPM), among others. PERT and CPM, developed as separate approaches in the late 1950s (Turner, 1993), help project managers to specify the critical paths for projects (Pich, Loch, & De Meyer, 2002). These were the first techniques to move scheduling beyond a simple bar chart (Lockyer, 1969). With the introduction of GERT in the late 1960s, both researchers and managers could account for probabilistic project outcomes and network loops (Clayton and Moore, 1972).
Central to the above techniques is the so-called “critical path.” The “critical path” on a Gantt chart is defined as the series of tasks that determine the earliest possible completion date for a project (Schwalbe, 2002). For example, in Figure 1 the critical path is 12 months long, and the sequence is A-B-C-D-E-G-H. Task F is not on the critical path. The expected total project time is the sum of the time required for each task on the critical path. When tasks on the critical path overlap, they are reclassified to remove overlaps.
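As a minimal illustration (not the paper's own computation), the earliest possible completion date can be computed with a forward pass over the task dependency graph. The durations below are assumed for illustration only, chosen so that the A-B-C-D-E-G-H path totals 12 months, as in Figure 1:

```python
# Illustrative schedule: task -> (duration in months, predecessor tasks).
# Durations are assumed, not taken from Figure 1's underlying data.
tasks = {
    "A": (1, []), "B": (2, ["A"]), "C": (2, ["B"]), "D": (2, ["C"]),
    "E": (2, ["D"]), "F": (1, ["D"]), "G": (2, ["E", "F"]), "H": (1, ["G"]),
}

def earliest_finish(schedule):
    """Forward pass: earliest finish time of each task."""
    finish = {}
    for name in schedule:  # insertion order is a valid topological order here
        duration, preds = schedule[name]
        start = max((finish[p] for p in preds), default=0)
        finish[name] = start + duration
    return finish

finish = earliest_finish(tasks)
print(max(finish.values()))  # -> 12, the critical path length in months
```

With these assumed durations, task F finishes at month 8, before task E's month 9, so F is off the critical path, consistent with the example in the text.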
In practice, the time for each task varies within a range: the task schedule variance. So, in Figure 1, the possible completion dates for tasks on the critical path fall within certain ranges. These variations in completion dates for tasks lead to variations in completion dates for projects (project schedule variance). Although task schedule variance has a direct effect on project schedule variance, its effect on the level of ISD project schedule overruns is more complex and is explored in the next section.
Task Variance and Project Performance
Once a project schedule is developed, it is constrained by its dependencies. Task dependency is defined here as the extent to which the completion of a project task on the critical path depends on the status of other tasks. Dependency takes various forms. Task dependencies limit the degree to which negative task variances (a task completed ahead of its scheduled delivery) offset positive task variances (a task completed after its scheduled delivery). In Figure 1, consider the hypothetical case in which task E starts one and a half months early. The flow-on effect is that tasks G and H could also start early; however, task G cannot start until both tasks E and F are complete, and because the completion date for task F has not changed, there is little flexibility in the start time for task G. In this case, the dependencies across tasks decrease the likelihood that tasks on the critical path can start early.
This effect is similar to the merging effect, which arises when multiple project paths merge with the critical path. Leach (2003) shows that when the number of merged paths reaches 10, the path length increases by 60% more than expected. The early completion of a precedent path has little impact on whether a subsequent task can start early, because that task may need to wait until a number of precedent paths have been completed. Therefore, the variance introduced by merging is almost always positive, with little likelihood of an early start, and adding a 50% feeding buffer can significantly constrain the magnitude of the positive bias introduced (Leach, 2003).
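The merging effect can be illustrated with a small Monte Carlo sketch (all parameters are assumed, not drawn from Leach's data): a merge task starts at the maximum of its predecessor paths' finish times, so even symmetric variance on each individual path biases the expected start late, and the bias grows with the number of merging paths:

```python
import random

random.seed(1)

def mean_merge_start(n_paths, trials=20000):
    """Average start time of a task waiting on n_paths parallel paths,
    each estimated at 10 weeks but finishing anywhere in [8, 12] weeks."""
    total = 0.0
    for _ in range(trials):
        # the merge task starts only when the slowest path has finished
        total += max(random.uniform(8.0, 12.0) for _ in range(n_paths))
    return total / trials

for n in (1, 2, 5, 10):
    print(n, round(mean_merge_start(n), 2))  # bias above 10 weeks grows with n
```

With one predecessor path, the mean start stays near the 10-week estimate; with ten merging paths it climbs well past 11 weeks, even though every path is on time on average.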
Studies have shown that other types of dependencies also contribute to the positive schedule bias. Leach (2003) finds that queuing—defined as “the build up of a line of work waiting to be performed by resources”—could lead to a very long wait time and thus delay the task significantly. The problem could be further exacerbated by organizational policies used to maintain high “billability.” Because there is no such thing as a negative queue, a positive schedule bias is introduced due to the sharing of common resources.
Multitasking is also known to introduce positive schedule bias because of the wait time for resources, task switching efficiency loss, and the network delay of multitasking (Leach, 2003).
The first proposition is therefore:
(P1) When task dependencies are high, the distribution of task schedule variances for tasks on the critical path and, as a result, the project schedule variance are skewed toward overrun.
One way of overcoming dependencies has been to introduce float/slack (Schwalbe, 2002). Although project teams are experienced at managing small task overruns using tools such as float/slack, these strategies have only limited applicability in environments in which task schedule variances are high, for three reasons. First, in high-dependency situations, only small amounts of float/slack can be included before they have an impact on the critical path. For example, for task F in Figure 1, the maximum float/slack that can be included is one week, assuming task F cannot start early. Second, where both dependency and task variance are high, large buffer sizes are likely to be needed to cushion the effects (Leach, 2003); however, politically, it is extremely difficult to convince management and the client that a 100% or even 200% contingency is justified. Third, although actual task times frequently do not match the planned schedule, the issue is not whether this happens but whether the experienced positive variances were identified ex ante and allowed for in the schedule. In practice, this is not the case; otherwise, the schedule would have been set up to accommodate those variances and there would be no disruption, because the task schedules would match the task actuals. Consequently, large positive schedule variances are, by definition, unplanned. Formally, it is assumed that:
(A1) Large positive task variances are stochastic.
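The limit that dependencies place on float can be made concrete with a standard CPM forward/backward pass. This is a generic sketch with assumed durations (one time unit here standing in for the relevant unit), arranged so that, as in the Figure 1 example, every task except F sits on the critical path and F has only one unit of float:

```python
# Illustrative schedule: task -> (duration, predecessors). Values are assumed.
tasks = {
    "A": (1, []), "B": (2, ["A"]), "C": (2, ["B"]), "D": (2, ["C"]),
    "E": (2, ["D"]), "F": (1, ["D"]), "G": (2, ["E", "F"]), "H": (1, ["G"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for t, (dur, preds) in tasks.items():
    ES[t] = max((EF[p] for p in preds), default=0)
    EF[t] = ES[t] + dur

# Backward pass: latest finish (LF) and latest start (LS).
project_end = max(EF.values())
succs = {t: [s for s, (_, ps) in tasks.items() if t in ps] for t in tasks}
LS, LF = {}, {}
for t in reversed(list(tasks)):  # reverse insertion order is reverse-topological
    LF[t] = min((LS[s] for s in succs[t]), default=project_end)
    LS[t] = LF[t] - tasks[t][0]

for t in tasks:
    print(t, LS[t] - ES[t])  # total float; zero float => on the critical path
```

In this sketch only task F has nonzero float (one unit); inserting any more slack there would push out the critical path, which is the first of the three limits on float/slack discussed above.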
Combining P1 and A1, when task schedule variances are large and there are multiple task dependencies, the likelihood of a project team being able to adjust the schedule to accommodate a large positive task schedule variance is low. This is because the overruns are unplanned (A1), and task dependencies limit a project team's ability to adjust the schedule when required (P1). Therefore, it follows that:
(P2) When both task dependence and task schedule variances are high, project schedule performance is both poor (relative to the sum of the estimated scheduled times for the tasks on the critical path) and unreliable (high variance in schedule overrun).
Proposition 2 is similar in form to the core proposition in TQM, in which performance is variance driven (Deming, 1986; Juran and Gryna, 1988). Essentially, the project critical path is treated here as if it were a manufacturing production line. TQM shows that, when expected task cycle times are held constant, increases (or decreases) in task cycle variances increase (or reduce) production run times for the line. Therefore, estimating a project's critical path as the sum of the scheduled task times is subject to the same threat of variance-driven performance identified by TQM; of course, here there is only one run down the production line.
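This variance-driven mechanism can be sketched with a hedged Monte Carlo simulation (all parameters assumed) of a serial critical path: each task is estimated at 4 weeks, cannot start before its scheduled start (early predecessor finishes are wasted, per P1), and its actual duration varies symmetrically around the estimate. Holding expected durations constant, widening the task variance widens the expected project overrun:

```python
import random

random.seed(7)
N_TASKS, EST = 8, 4.0  # eight 4-week tasks; planned schedule = 32 weeks

def expected_overrun(spread, trials=20000):
    """Mean overrun (weeks) when each task takes EST +/- spread weeks,
    uniformly, and no task may start before its scheduled start."""
    planned = N_TASKS * EST
    total = 0.0
    for _ in range(trials):
        finish = 0.0
        for i in range(N_TASKS):
            scheduled_start = i * EST
            start = max(scheduled_start, finish)  # dependency + no early start
            finish = start + random.uniform(EST - spread, EST + spread)
        total += finish - planned
    return total / trials

for spread in (0.5, 1.0, 2.0):
    print(spread, round(expected_overrun(spread), 2))  # overrun grows with spread
```

Mean task durations never change in this sketch, yet the expected overrun is strictly positive and grows with the spread: negative variances are absorbed by the schedule while positive ones propagate down the chain.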
TQM delivered significant reductions in production run times by reducing the variances in task cycle times. The opportunity exists here to deliver the same benefit for ISD projects. Instead, current ISD methodologies focus on reducing the expected task cycle times and fast tracking projects rather than reducing task schedule variances to improve performance. Indeed, as discussed later, goal setting theory predicts that setting high challenging goals (reducing the expected cycle times) increases performance variance (task cycle variances), thereby increasing reported schedule overruns.
Although their effects are difficult to quantify, Leach (2003) argues that errors and omissions introduce positive variances into a project's schedule. Similarly, studies have shown that optimism bias or overconfidence, and deception intended to get a project funded, can introduce significant positive schedule variances (Lovallo and Kahneman, 2003; Flyvbjerg, Garbuio, & Lovallo, 2009; Leach, 2003). Student syndrome (waiting to start a task until the due date is close) is also believed to contribute to positive schedule variance (Leach, 2003). Further, failure to report rework requirements and company policies that encourage the pursuit of low-risk estimates also contribute to positive schedule variance (Leach, 2003).
The influence of task schedule variances and task dependence on performance variance and performance level for ISD projects is presented in Figure 2. The two factors, task schedule variances and task dependence, interact with each other to influence both the performance level within a project and project schedule performance variances across projects. The methodology section below describes how A1, P1, and P2 are validated.
Four case studies are used to validate the framework. The value of case study research is that it allows researchers to study IS in a natural setting in a previously little-studied area (Benbasat, Goldstein, & Mead, 1987). The cases are analyzed using a literal replication logic, a methodology designed to investigate whether similar results are repeated in different cases (Yin, 1984). The inclusion of each additional case strengthens the findings that are replicated.
The unit of analysis for this study is the project, holding the industry and organizational contexts constant. If the projects were from different organizations, there would be a validity threat over whether the failure to replicate was a function of project, organizational, or industry differences, so the projects were selected from one organization.
The focal organization is a corporatized utility, WaterCo,1 which is responsible for the delivery of local water and wastewater services to 1.4 million people. Maintenance and operations are outsourced. Although a local geographical monopoly, the organization experiences considerable pressure from its customers, via the state government, to control cost and improve value for the money spent.
The PRINCE2 methodology was adopted by the information systems (IS) division. This project governance structure is presented in Figure 3.
Four projects were selected. To satisfy the literal replication logic, the projects had to be relatively recent or current, either completed or underway for some time, and have had a significant influence on the operation of the business (Yin, 1984). The four projects selected satisfied those criteria and, in addition, had different goals, were located in different parts of the organization, and had separate management teams. The project characteristics are summarized in Table 1. Due to space constraints here, further details will be supplied upon request.
Table 1. Project characteristics

| | SampProj | OpProj | LabProj | EquipProj |
|---|---|---|---|---|
| Project description | Development of a new system for gathering water samples | Development of a new work management system for pipe maintenance | Major upgrade of the laboratory information management system (LIMS) | Rollout of a new work management system for equipment maintenance |
| Client | Water operations and laboratory | Water operations and outsourced water operations partner | | |
| Project cost | $US1 million | $US2.4 million (b) | $US0.35 million | $US0.3 million |
| Planned duration (a) | 15 months | 24 months (b) | 5 months | 6 months |
| Actual duration (a) | 21 months | 27 months (b) | 10 months | 8 months |
| Project methodology | PRINCE2 | Consultant proprietary system | PRINCE2 | PRINCE2 |

(a) Project duration is defined as the elapsed time from the commencement of preliminary design through to closeout.
(b) Estimate to closeout.
Two key variables are examined in this study: task schedule variance and task dependence. Task schedule variance is a measure of the scatter or dispersion of the actual against the scheduled task elapsed times, here measured as the differences between the scheduled task completion times and the actual task completion times. In addition, this study is only concerned with large task variances, which are interpreted here to mean tasks that vary from their schedule by more than one week or 25% of the scheduled task cycle time, whichever is smaller. This is consistent with the control metrics used in WaterCo, namely, that differences from schedules of more than one week were reported at the weekly project meetings. It is assumed that the project team managed smaller variances as part of their routine activities.
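The threshold for a large task variance can be encoded directly. This hypothetical helper (the function name is assumed, not from the paper) flags a slip that exceeds one week or 25% of the scheduled task cycle time, whichever is smaller:

```python
def is_large_variance(scheduled_weeks: float, actual_weeks: float) -> bool:
    """True if a task's schedule slip exceeds the study's threshold:
    one week or 25% of the scheduled cycle time, whichever is smaller."""
    slip = abs(actual_weeks - scheduled_weeks)
    return slip > min(1.0, 0.25 * scheduled_weeks)

print(is_large_variance(8.0, 8.5))   # half-week slip on an 8-week task -> False
print(is_large_variance(2.0, 2.75))  # 0.75-week slip on a 2-week task -> True
```

For long tasks the one-week cap binds; for short tasks the 25% rule binds, mirroring WaterCo's weekly reporting practice described above.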
Task dependence refers to the degree to which the work performed on one task influences other tasks. In this study, the focus is on the critical path for each project. By definition, tasks on the critical path are highly dependent; each task is reliant on the previous task to be completed before it can begin.
Data collection includes a review of documentation, interviews, and observations. The primary source of information is documentation. Multiple sources of documentation were reviewed, including manuals, procedures, plans, guidelines, project status reports, schedules, meeting agendas, meeting minutes, memos, and lessons learned lists. This information covers the duration of each project and is well documented in accordance with the specified methodology. Some of the interviewees provided additional documentation.
The documentation is complemented by interview transcripts. The objective of the interviews was to establish leads to be followed and confirm the account of tasks as described in the documentation. The interviewees were chosen to represent the principal stakeholders on each project, including senior management, sponsors, users, IS staff, vendors, consultants, and project managers. Twenty-eight interviews were conducted, with three people interviewed twice. The typical length of an interview was one to two hours.
The final source of data is observations. The objective of the observation data is to appreciate how the individuals interacted and, in particular, to confirm that the documentation produced at meetings was a true account of the events as observed. Meetings attended included the IS executive committee (ISEC), the project governance office (PGO), project boards, and project team meetings.
The use of multiple sources of data triangulates the findings and thus enhances their validity (Miles and Huberman, 1994). The other procedure recommended by Yin (1984), establishing a chain of evidence, is also used; in this study, the chain is the sequence of tasks making up the project schedule. Following Yin's (1984) recommendations, reliability is achieved by making the way observations are made and recorded as explicit as possible.
The analysis is designed to validate the two propositions (P1 and P2) and the assumption (A1) by first examining the projects individually and then comparing projects for similarities. There are five steps to the analysis. First, a timeline is constructed for each case, identifying the scope, schedule, and any changes over time, with a specific focus on the critical path. This step identifies the number of tasks on the critical path that started before their scheduled start times (Proposition 1). Each project has changes to its original scope. Because of the difficulties in quantifying the changes' impacts on the schedule, we have not made adjustments to the actual schedules for any of the four case projects.
Second, a catalogue of tasks is developed for those tasks in which the start or completion time diverged by more than one week, or more than 25%, from the planned schedule. Third, each task variance is assigned to a category on the checklist of 14 groups2 developed by Schmidt, Lyytinen, Keil, & Cule (2001). This process reduces the errors of omission caused by overlooking a particular risk (Schmidt, Lyytinen, Keil, & Cule, 2001).
Fourth, comparisons across projects are made. Steps 2 and 3 identify and classify the positive task schedule variances. These variances are inspected to identify whether they are the outcomes of similar or different (stochastic) events across the four projects (Assumption 1). The total number of such variances, and both the maximum number of potential replications and the actual number of replications across projects, are reported.
Finally, Proposition 2 states that project schedule performance is both poor (relative to the sum of the estimated scheduled times for the tasks on the critical path) and unreliable (high variance in schedule overruns across projects). Poor performance is interpreted here as an overrun greater than 20% of the total project schedule. Unreliable is defined as the difference between the largest and smallest percentage project overruns exceeding 50%.
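These two operational definitions can be stated as a short sketch (function names assumed); the percentages in the usage example are the four case overruns reported in the findings:

```python
def pct_overrun(planned_weeks: float, actual_weeks: float) -> float:
    """Schedule overrun as a percentage of the planned schedule."""
    return 100.0 * (actual_weeks - planned_weeks) / planned_weeks

def is_poor(overrun_pct: float) -> bool:
    """Poor performance: overrun exceeds 20% of the total project schedule."""
    return overrun_pct > 20.0

def is_unreliable(overrun_pcts) -> bool:
    """Unreliable: largest minus smallest percentage overrun exceeds 50 points."""
    return max(overrun_pcts) - min(overrun_pcts) > 50.0

case_overruns = [76.0, 26.0, 105.0, 33.0]  # SampProj, OpProj, LabProj, EquipProj
print(all(is_poor(o) for o in case_overruns), is_unreliable(case_overruns))  # True True
```

On these values, all four projects are classified poor, and the 79-point range between the largest and smallest overruns classifies performance as unreliable.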
Tables 2(a) and 2(b) summarize the case findings; details for the four cases can be obtained from the first author upon request. The individual overruns for each task are listed in these tables. The total of these overruns will not necessarily equal the total project overrun because some tasks overlap. In this section, the two propositions and the assumption are validated by cross-referencing characteristics from the tabulated results, comparing the tables directly for commonalities and differences.
Proposition 1: Task schedule variances and the project schedule variance are skewed toward overrun.
There are two conditions to be satisfied to validate P1. One is that task dependencies are high. In this study, the critical path is being examined and, therefore, by definition, task dependencies are high. Take SampProj, for example (see Table 2[a]): all the tasks on the critical path are highly dependent on timely approval for the project, minimal change requests, system stability, smooth data migration, sufficient resourcing for key tasks, smooth systems integration, and a number of external dependencies. Any delays in the above are likely to result in delays to tasks on the critical path (reported in Table 2[a]) and to the overall project schedule; for SampProj, the resulting overrun totaled 25 weeks.
The other condition is that tasks do not start before their scheduled start times and, as a result, are likely to be completed late. This is the case for all four projects and there are no exceptions to this finding: In no case did a task start more than one week earlier than it was initially scheduled. (In some cases, tasks on the critical path started a day or two early but this did not affect the schedule.) Thus, consistent with P1: when task dependence is high, tasks on the critical path do not start before their scheduled start times. In contrast, there are 33 instances in which tasks finished late, more than one week over the scheduled completion date (see Tables 2[a] and 2[b]).
Table 2(a). Task schedule overruns by risk characteristic (Schmidt et al.): SampProj and OpProj

| No. | Characteristic from Schmidt et al. | SampProj: reason for overrun | SampProj: weeks | OpProj: reason for overrun | OpProj: weeks |
|---|---|---|---|---|---|
| 1 | Corporate environment | | 9 | | |
| 2 | Sponsorship/ownership | | 16 | | 5 |
| 5 | Scope | | 14 | | 6 |
| 6 | Requirements | | | | |
| 7 | Funding | | 20 | | 4 |
| 11 | Staffing | | | | |
| 12 | Technology | | | | |
| 13 | External dependencies | | | | |
Table 2(b). Task schedule overruns by risk characteristic (Schmidt et al.): LabProj and EquipProj

| No. | Characteristic from Schmidt et al. | LabProj: reason for overrun | LabProj: weeks | EquipProj: reason for overrun | EquipProj: weeks |
|---|---|---|---|---|---|
| 3 | Relationship management | | 4 | | |
| 5 | Scope | | | | |
| 6 | Requirements | | | | |
| 11 | Staffing | | | | |
| 13 | External dependencies | | 6 | | |
Assumption 1: Large positive task variances are unplanned.
The issue here is whether the task overruns in Tables 2(a) and 2(b) could have been planned for when the initial project schedules were developed. Ex post, it is easy to recognize potential problems; the question is whether they should, or could, have been predicted ex ante. For example, on SampProj there was a software integration problem (Table 2[a]: 12.0 Technology). This issue was the result of problems integrating three of the vendor software products that make up the core system: the geographical IS, the reporting software, and Microsoft .NET. The interview data corroborate that, at the time the project plan was developed, the three vendors expected little difficulty in integration, and the SampProj team proceeded on that advice.
Inspecting the task variances reported for the four cases in Tables 2(a) and 2(b) yields two critical observations. The first is that major overruns are not replicated across projects. Even where there are similarities across the projects, the critical causal drivers of the overruns do not replicate. For example, consider system stability at testing, which caused overruns on three of the projects: SampProj, OpProj, and LabProj (Tables 2[a] and 2[b]: 6.0 Requirements). Although initially this may appear to be a similar issue for all the projects, detailed investigation reveals that the reasons for the overruns are different. For SampProj, the problem was the compatibility of new software; for OpProj, it was the compatibility of new hardware with new software; and for LabProj, it was the compatibility of legacy systems with new systems. In addition, EquipProj did not experience a schedule overrun at its testing stage.
In aggregate, there are 33 significant positive task schedule variances. Across the four projects, there are a maximum of 41 potential pair-wise replications, yet no replications (common positive task schedule variances across two projects) were observed. The cause and content of the overruns were unique to each project and, hence, not predictable ex ante across projects.
The second observation is that, even when a threat of a task schedule variance has been identified, its magnitude is difficult to estimate. For example, on LabProj, there was a failure of the initial database conversion (Table 2[b]: 6.0 Requirements). When the system was switched over from the old to the new, a number of unexpected problems emerged. Although the final overrun was only two weeks, the interview data show that, when the problem was first recognized, the time it would take to rectify the problem was unclear to the project team.
Another example is the resolution of the WaterCo to WaterPartnerCo interface on OpProj (Table 2[a]: 2.0 Sponsorship/Ownership). The two organizations disagreed on the extent of transparency that should be allowed at the interface. Although the project team appreciated that this would take time to resolve, again the interviewees revealed that they did not realize that it would take five weeks. The magnitude of the variance depended on the outcomes of future negotiations.
Proposition 2: Poor and unreliable performance
The data in Tables 2(a) and 2(b) support the proposition. The overruns reported on the four projects are 25 weeks (SampProj), 12 weeks (OpProj), 23 weeks (LabProj), and 7 weeks (EquipProj). All projects were late, and schedule overruns exceeded the planned schedule by more than 20% on all projects. In addition, the magnitudes of the overruns vary widely. As a percentage of the total planned schedule, the overruns are 76% (SampProj), 26% (OpProj), 105% (LabProj), and 33% (EquipProj).3
Discussion and Implications
Managing ISD Projects
The objective of this paper is to explain the continued poor performance of ISD projects and, in particular, the influence of task variance. One explanation for the poor performance, commonly offered in the press and elsewhere in the literature, is poor planning. The framework presented in this paper offers an alternative and very different explanation, in which poor and unreliable ISD project schedule performance is a result of high task dependence and high task schedule variances.
It is the combined influence of high task dependence and high task schedule variance that drives performance. When a task's completion time varies from its schedule, there are repercussions only when the variance affects the critical path through the dependencies; otherwise, it has little influence on schedule performance. For example, on SampProj, data conversion was separated from the core project at an early stage, so variance in those task outcomes could not affect the rest of the project. In contrast, there were a number of software development dependencies on SampProj, which caused significant overruns.
Similarly, dependencies are not a concern unless tasks do not meet their planned schedule times. Tasks that are on time or are completed early satisfy the schedule requirements and so do not impact the schedule performance. Furthermore, although many tasks do not fulfill their initial requirements, problems arise only when there is a large overrun. In general, small overruns are accommodated by variations in the project schedule.
It is easy to suggest that the problems that occurred on the projects should have been predicted and that better-performing projects are simply better managed; however, all the projects had strong management teams and followed rigorous methodologies. Also, it is reasonable to assume that project teams want to achieve their objectives, and so positive task and schedule variances are unintended.
The analysis above focuses on positive task schedule variances and their consequences. Tables 2(a) and 2(b) report no task schedule variances on four of the risk characteristics. There are three potential explanations for this: The first is that risks in these four categories are not major causes of schedule overruns; the second is that the project teams know how to manage those risks; and, the third is that their absence was a chance event. Task schedule variances are stochastic (Assumption 1).
Given the comprehensive nature of the study by Schmidt, Lyytinen, Keil, & Cule (2001), it is not reasonable to suggest that the first option (that they are not major risks) is appropriate, because those risks had significant performance impacts elsewhere. The second explanation, that the project teams knew how to manage these risks better than other project teams, is similarly unlikely. The teams, although competent, did not demonstrate any particular joint characteristics that would suggest this to be the case. The third option, that they were chance events, is possible. This is consistent with Assumption 1, in that the risks that occur are a random outcome from a set and there are differences between projects. So, of the three options, the third, chance events, is accepted here.
Thus, overall, consistent with the framework presented in Figure 2, the combined influence of task variance and task dependence has a significant influence on project schedule performance. Task dependence restricts the start time of tasks, and large task variances are random events that drive schedule overruns. Also, the average overrun reported for the four projects in this study is 60%, which is the same order of magnitude as the 63% (Johnson, Boucher, Connors, & Robinson, 2001), and 84% (Rubinstein, 2007) overruns.
Improving Poor Project Performance
The motivation for this paper is to explain the poor performance of ISD projects. With the framework in place (see Figure 2), the reasons for that poor performance can be investigated further. In particular, one of the common initiatives that project teams use to accelerate a project is fast-tracking; that is, the project schedule is compressed by eliminating the discretionary dependencies, on the assumption that the project will then be completed earlier.
From an organizational perspective, there are two reasons why projects are fast-tracked. One is the perceived strategic importance of IS to an organization (Keen, 1991; Venkatraman, 1994). IS is seen as a key driver of competitive advantage, as highlighted by examples such as Dell Computer Corporation (Magretta, 1998) and Baxter Healthcare (Short and Venkatraman, 1992). The other reason is the increasing focus on time-based competition (Stalk and Hout, 1990; Brown and Eisenhardt, 1997; Barkema, Baum, & Mannix, 2002) and first-mover advantage (Lieberman and Montgomery, 1998; Porter, 2001). Thus, not surprisingly, with IS as a key driver of strategy and time so important, many ISD projects are fast-tracked. As described by Perlow (1999), the work environment for software engineers is characterized as “fast-paced, high-pressure, and crisis-filled.”
Yourdon (1995) refers to fast-tracking as instigating a death march; that is, it creates a situation that is destined to fail because of the pressure it places on the project team. The framework developed in this paper explains why this is the case. There are two components to the explanation. First, there is an increase in task dependence. Fast-tracking increases task dependencies because it compresses the schedule, removing some of the float/slack that would otherwise be present. Further, merging and queuing for resources, which are common causes of performance bias, are likely to occur because of the temptation to multitask. Perlow (1999) found that time pressure could lead to project team dysfunction and loss of productivity. Second, there is an increase in task schedule variance. Compressing a schedule places more pressure on a project team to meet deadlines; hence, goal difficulty increases. As goal difficulty increases, the variance in performance increases (Locke, 1982; Erez and Zidon, 1984; Locke, Chah, Harrison, & Lustgarten, 1989). Combining these two components produces a cumulative effect, resulting in a decrease in project performance.
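The two components can be sketched numerically. This is an illustrative simulation, not data from the cases: the float and variance parameters are assumed. Fast-tracking is modeled as removing the float between dependent tasks and, following the goal-difficulty argument, increasing each task's schedule variance:

```python
import random

random.seed(0)

def expected_overrun(n_tasks, float_per_task, sigma, trials=10_000):
    """Mean project overrun for a serial chain of n_tasks.

    Each task slips by a non-negative random amount; per-task float
    absorbs small slips, and any residual slip propagates downstream.
    """
    total = 0.0
    for _ in range(trials):
        late = 0.0
        for _ in range(n_tasks):
            slip = max(0.0, random.gauss(0, sigma)) - float_per_task
            late += max(0.0, slip)  # float absorbs small slips only
        total += late
    return total / trials

# Buffered schedule: some float per task, moderate task variance.
buffered = expected_overrun(5, float_per_task=2.0, sigma=3.0)
# Fast-tracked: float removed AND variance raised by goal difficulty.
fast_tracked = expected_overrun(5, float_per_task=0.0, sigma=4.5)

print(f"mean overrun, buffered schedule:     {buffered:.1f} weeks")
print(f"mean overrun, fast-tracked schedule: {fast_tracked:.1f} weeks")
```

Under these assumptions the two effects compound, so the fast-tracked schedule's expected overrun is several times larger, which is the cumulative effect described above.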
This insight is consistent with recommendations in the accounting literature. In particular, Merchant and Manzoni (1989), in their study of budget targets in profit centers, found that organizations set targets that are achieved eight or nine years out of every ten. They explain that the corporate head office reduced risk by trading off the performance gain from setting challenging goals in favor of reduced performance variance and, therefore, setting targets that were achievable rather than challenging. The implicit corporate intent is to reduce performance variance. Typically, for ISD, very challenging goals are set; implicitly, but not explicitly, high project risks and high failure rates are to be expected, if not accepted.
Performance on a Portfolio of Projects
In addition to explaining the performance of individual projects, part of the motivation for this study was to ask why poor performance has persisted over many years (from the 1970s and through the 1990s). The expectation was that, over time, organizations would learn from their experiences and so improve performance. Based on the reported statistics (Sauer, 1999; Keil, Mann, & Rai, 2000; Johnson, Boucher, Connors, & Robinson, 2001), that has only occurred to a limited extent. The Standish Group's Chaos report series has shown that the IS project success rate doubled between 1994 and 2006, while the failure rate also dropped (Rubinstein, 2007); however, the failure rate still hovers at about 24%, and 44% of projects are challenged (Standish Group, 2009). Applying the logic from this study helps explain how the improvement has been made and why the failure rate is still stubbornly high.
First, consider how the improvements have been made. The Chaos reports found that the period of performance improvement coincided with a significant reduction in average project size (total project cost) (Anonymous, 2004). Between 1994 and 2006, while the project success rate increased, the average project size was almost halved. Our findings explain the effect of project size reduction in terms of two mediating variables: task schedule variance and task dependence.
Tasks or work packages can be made smaller to reduce task variance; however, if project size is unchanged, reducing task size increases the total number of tasks and thus the likelihood of additional task dependencies. The gain in project schedule performance from reduced task variance may therefore be more than cancelled by the loss from increased task dependencies. In contrast, when the project size itself is reduced, smaller tasks and lower task variance are not necessarily accompanied by an increase in task dependencies; therefore, a reduction in project size should be expected to improve project performance.
There is a limit on how small a project can be, which depends on the business objectives. Hence, fundamental improvement in IS project performance needs to come from initiatives, other than reducing project size, that reduce task variances and dependencies. Reducing task schedule variance and task dependence without resorting to task size reduction requires learning from similar tasks and projects over time, which is difficult for typical IS projects. Consider the following explanation:
A fast-tracked project, which includes a number of tasks with high performance variance, incurs a major overrun. This outcome is in the expected range; that is, it is only one possible result from a large random set. Although it should be expected, the overrun would typically be interpreted as the result of inadequate project management. Actions would be put in place in the next project to improve performance on tasks similar to those with high overruns in the last project. Of course, the action would be “successful” on the next project, because a similar type of mistake would not be repeated. However, if task schedule variance remains high, other tasks would exhibit significant overruns. An example from WaterCo's projects is that three of the projects had problems during testing, but the problems were different; generally, the risk had been identified, but the actual risks experienced were project specific.
Thus, under conditions of high task schedule variances and high task dependence, learning would be slow because attempts to learn would often be in response to errors or “noise” in project performance. As a result, a fundamental cause of poor performance—the process loss due to high task schedule variance combined with high task dependence—has not been identified and addressed. Improving performance over time requires that these issues be addressed.
This interpretation is supported by the feedback literature. In a meta-analysis of the use of feedback interventions, Kluger and De Nisi (1996) found that not all feedback is valuable and that at least one third of feedback interventions actually decrease performance. Tetlock and Boettger (1989) go even further. In their study, which examined the performance of undergraduate students in examinations, they concluded that, “subjects incorrectly assumed that the information that they had been given must be useful and made a valiant effort to figure out how it was useful” (p. 397). Similarly, after each project, the project audit identifies the specific learning to be included in the next project. The strong assumption is that the specific feedback from poor performance on the last project is critical to success on the next; however, this assumption is not valid for the model developed here.
Initiatives Used to Improve Project Performance
This study suggests that, to improve performance, there are two factors that should be examined: task schedule variance and task dependence. Reducing either one can facilitate improved performance. Table 3 identifies examples from the cases to illustrate interventions based on the model developed here. For example, staging projects reduces task dependencies, and adopting packaged software reduces task schedule variances. Furthermore, interventions from total quality management (TQM) to reduce task cycle time variations can be explored to identify mechanisms for reducing task schedule variance, thereby improving project performance.
Not all ideas presented in Table 3 are new. For example, managing expectations was outlined in the ISD literature by Lyytinen (1988), and staging of projects has been recommended on numerous occasions (Kapur, 1999; Radosevich, 1999; Keen, 2000). The reduction in project size has been associated with the increase in IS project success rate and reduction in project failure rate (Anonymous, 2004). Applying the logic from this paper can explain why this is the case. Table 3 shows how the framework developed in this paper consolidates the ideas that have been proposed in various areas and integrates them within a single model.
Implications for Research
The key implications for research are based on three critical aspects of the reframing embedded in the factor-process framework. First, project performance has two components, the level of performance (the schedule overrun in this case) and the variance in that performance. Second, task variance and task dependence jointly influence both forms of performance. Third, the process sub-model in Figure 2, by explaining the influence of the factors on performance, specifies the form of that relationship.
First, the two dimensions of project performance (the level of and variance in) are not independent dimensions explained by different variables; rather, they are jointly determined, with schedule overruns and high project schedule variance produced together. Typically, research on project performance has examined the level of performance, while treating the variance as a measure of residual error. For example, the relationship between user involvement and the level of success has been established by a number of authors, with little consideration of the variability in outcomes (see, for example, Baroudi, Olson, & Ives, 1986; Franz and Robey, 1986). Not surprisingly, this research fails to account for the variance in outcomes, a shortcoming this study addresses.
Table 3. Initiatives to improve project performance

| Variable | Initiative | Explanation | Example from the case studies |
| --- | --- | --- | --- |
| Task variance | Increase task transparency | Allows for better planning | Organization: PRINCE2 adopted as a standard reporting structure |
| | Increase user participation | Ensures that the product delivered meets the user needs | OpProj: Conducted numerous workshops |
| | Reduce project size | Ensures that estimates for tasks are more accurate | SampProj: Project de-scoped to accommodate late delivery of LabProj |
| | Manage expectations, e.g., set realistic goals by drawing from “outside views” | Mitigates optimism bias and misrepresentation | SampProj: The personal digital assistant requirements were scaled back to include mandatory requirements only |
| | Use packaged software | Provides a standard within which to develop the system | OpProj: Choice of vendor package that would need little customization |
| Task dependence | De-scope | Reduces the number of dependencies | SampProj: Project de-scoped to accommodate late delivery of LabProj |
| | Improve requirements definition | Ensures that there is no confusion over what is to be developed and when | LabProj: A full specification of what was thought to be in use was sent to the key users for confirmation before the start of development |
| | Reduce task coupling (incremental development or iterative development) | If task links are reduced, then dependencies exert less influence | EquipProj: Each workshop implementation was undertaken as a sub-project |
| | Reduce delay bias by minimizing multitasking, merging, and queuing (i.e., reduce the dependencies) | | LabProj: Emergent schedule with three stages |
Second, to explain project performance, the framework developed in this paper identifies two factors, task variance and task dependence, on which there is limited research. These variables are embedded in the project schedule, and making them explicit allows their influence on outcomes to be understood. For example, in this paper, these variables are used to explain why fast-tracked projects are usually late and why learning to improve performance has been slow. It is the interaction of the two variables that is important, not their independent effects.
Finally, the paper not only identifies the factors driving project performance but also specifies the form of that relationship. Doing so allows for the development of initiatives that address the inputs and processes to achieve an improved project schedule outcome. The result would be increased confidence in those interventions.
This study is subject to several limitations and three of them are reviewed here. First, one organization was selected, which limits generalizability. Although this is unusual for multiple case study research projects, it is integral to the research design. The unit of analysis is the project, controlling for organizational and industry effects. In addition, the organization has a typical centralized IS division structure, serving multiple business units, and the findings should be generalized only to similarly structured organizations. Future research should analyze other organizational forms and industries.
Second, much of the study data were based on retrospective documented accounts and these may have been “edited.” However, this study also triangulated data across several sources, including interviews, observations, and documents and this increases confidence in the findings. Future research might consider a longitudinal study to validate the findings.
Finally, in analyzing the data, the task schedule variances were categorized into groups. This classification could reflect interviewer bias. Although all precautions were taken to ensure the categories reflected the actual tasks, some concerns remain. Future research could consider other classification schema to validate the findings.
The objective of this paper was to determine how task schedule variances influence the level of, and variance in, ISD project schedule performance. Factor and process approaches were combined to model project performance. In an investigation of four case studies, the two propositions and one assumption were supported. The findings show that the interaction between the two variables, task schedule variance and task dependence, has a significant influence on project schedules. Such a framework provides an alternative explanation for project performance and, therefore, how it might be improved. Research to date has focused on the level of performance, with little regard to the variance in performance across projects. Companies should focus on activities that reduce both task schedule variance and task dependencies in order to significantly improve the performance of their projects and maintain their schedules.
Anonymous. (2004). Standish: Project success rates improved over 10 years. Software Magazine, January 15. Retrieved December 14, 2009 from Software Magazine: http://www.softwaremag.com/L.cfm?doc=newsletter/2004-01-15/Standish.
Avison, D.E., & Fitzgerald, G. (1995). Information systems development: Methodologies, techniques and tools (2nd ed.). London, UK: McGraw-Hill.
Barkema, H.G., Baum, J.A.C., & Mannix, E.A. (2002). Management challenges in a new time. Academy of Management Journal, 45(5), 916-930.
Baroudi, J.J., Olson, M.H., & Ives, B. (1986). An empirical study of the impact of user involvement on systems usage and information satisfaction. Communications of the ACM, 29(3), 232-238.
Benbasat, I., Goldstein, D.K., & Mead, M. (1987). The case research strategy in studies of information systems. MIS Quarterly, 11(3), 369-386.
Brooks, F.P. (1975). The mythical man-month: Essays on software engineering. Reading, MA: Addison-Wesley.
Brown, S.L., & Eisenhardt, K.M. (1997). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42(1), 1-34.
Clayton, E.R., & Moore, L.J. (1972). PERT vs GERT. Journal of Systems Management, February, 11-19.
Deming, W.E. (1986). Out of the crisis. Cambridge, MA: MIT Center for Advanced Engineering Study.
Erez, M., & Zidon, I. (1984). Effect of goal acceptance on the relationship of goal difficulty to performance. Journal of Applied Psychology, 69(1), 69-78.
Flyvbjerg, B., Garbuio, M., & Lovallo, D. (2009). Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review, 51(2), 170-193.
Franz, C.R., & Robey, D. (1986). Organizational context, user involvement and the usefulness of information systems. Decision Sciences, 17(3), 329-356.
Johnson, J., Boucher, K.D., Connors, K., & Robinson, J. (2001). Project management: The criteria for success. Software Magazine, 21(1), S3-S11.
Juran, J.M., & Gryna, F.M. (1988). Juran's quality control handbook (4th ed.). New York, USA: McGraw-Hill.
Kapur, G.K. (1999). Why IT project management is so hard to grasp. Computerworld, 33(18), 32.
Keen, P.G.W. (1991). Shaping the future: Business design through information technology. Boston, MA: Harvard Business School Press.
Keen, P.G.W. (2000). Six months or else. Computerworld, 34, 48.
Keil, M., Mann, J., & Rai, A. (2000). Why software projects escalate: An empirical analysis and test of four theoretical models. MIS Quarterly, 24(4), 631-664.
Kluger, A.N., & De Nisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.
KPMG. (2005). Global IT project management survey. Retrieved December 14, 2009 from PMI Australian chapter website: http://www.pmichapters-australia.org.au/canberra/documents/irmprm-global-it-pm-survey2005.pdf
Leach, L. (2003). Schedule and cost buffer sizing: How to account for the bias between project performance and your model. Project Management Journal, 34(2), 34-47.
Lieberman, M.B., & Montgomery, D.B. (1998). First-mover (dis)advantages: Retrospective and link with the resource-based view. Strategic Management Journal, 19(12), 1111-1125.
Locke, E.A. (1982). Relation of goal level to performance with a short work period and multiple goal levels. Journal of Applied Psychology, 67(4), 512-514.
Locke, E.A., Chah, D.-O., Harrison, S., & Lustgarten, N. (1989). Separating the effect of goal specificity from goal level. Organizational Behavior and Human Decision Processes, 43(2), 270-287.
Lockyer, K.G. (1969). An Introduction to Critical Path Analysis. London, UK: Pitman and Sons.
Lovallo, D., & Kahneman, D. (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, 81(7), 56-63, 117.
Lucas, H.C. (1975). Why information systems fail. New York, USA: Columbia University Press.
Lyytinen, K. (1988). Expectation failure concept and systems analysts view of information system failures: Results of an exploratory study. Information & Management, 14(1), 45-56.
Lyytinen, K., & Hirschheim, R. (1987). Information systems failures: A survey and classification of the empirical literature. Oxford Surveys in Information Technology 4, 257-309.
Magretta, J. (1998). The power of virtual integration: An interview with Dell Computer's Michael Dell. Harvard Business Review, 76(2), 73-84.
Markus, M.L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34(5), 583-598.
Merchant, K.A., & Manzoni, J.F. (1989). The achievability of budget targets in profit centers: A field study. The Accounting Review, 64(3), 539-558.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publishing.
Newman, M., & Noble, F. (1990). User involvement as an interaction process: A case study. Information Systems Research, 1(1), 89-113.
Newman, M., & Robey, D.A. (1992). A social process model of user-analyst relationships. MIS Quarterly, 16(2), 249-266.
Perlow, L. A. (1999). The time famine: Toward a sociology of work time. Administrative Science Quarterly, 44(1), 57-81.
Pich, M.T., Loch, C.H., & De Meyer, A. (2002). On uncertainty, ambiguity and complexity in project management. Management Science, 48(8), 1008-1023.
PMI. (2008). A guide to the project management body of knowledge. Newtown Square, PA, USA: Project Management Institute.
Porter, M.E. (2001). Strategy and the Internet. Harvard Business Review, 79(3), 63-78.
Radosevich, L. (1999). A lean, mean IT machine. InfoWorld, 21, 64-65.
Rubinstein, D. (2007). Standish Group Report: There's less development chaos today. In SD Times, March 1, 2007. Retrieved December 12, 2009 from SD Times: http://www.sdtimes.com/content/article.aspx?ArticleID=30247
Sabherwal, R., & Robey, D. (1995). Reconciling variance and process strategies for studying information systems development. Information Systems Research, 6(4), 303-327.
Sambamurthy, V., & Kirsch, L.J. (2000). An integrative framework of the information systems development process. Decision Sciences, 31(2), 391-411.
Sauer, C. (1999). Deciding the future for IS failure: Not the choice you might think. In W.L. Currie & R. Galliers (Eds.), Rethinking management information systems: An interdisciplinary perspective (pp.279-309). Oxford, UK: Oxford University Press.
Schmidt, R., Lyytinen, K., Keil, M., & Cule, P. (2001). Identifying software project risks: An international Delphi study. Journal of Management Information Systems, 17(4), 5-36.
Schwalbe, K. (2002). Information technology project management. Cambridge, MA: Thomson Learning.
Short, J., & Venkatraman, N. (1992). Beyond business process redesign: Redefining Baxter's business network. Sloan Management Review, 34(1), 7-21.
Standish Group. (2009). New Standish Group report shows more project failing and less successful projects. Retrieved December 12, 2009, from Standish Group website: http://www.standishgroup.com/newsroom/chaos_2009.php
Stalk, G., & Hout, T.M. (1990). Competing against time: How time-based competition is reshaping global markets. New York: The Free Press.
Tetlock, P.E., & Boettger, R. (1989). Accountability: A social magnifier of the dilution effect. Journal of Personality and Social Psychology, 57(3), 388-398.
Turner, J.R. (2003). The handbook of project-based management. London, UK: McGraw-Hill.
Venkatraman, N. (1994). IT-enabled business transformation: From automation to business scope redefinition. Sloan Management Review, 35(2), 73-87.
Yetton, P., Martin, A., Sharma, R., & Johnston, K. (2000). A model of information systems development project performance. Information Systems Journal, 10(4), 263-289.
Yin, R.K. (1984). Case study research: Design and methods. Beverly Hills, CA: Sage Publishing.
Yourdon, E. (1995). Death march: The complete software developers’ guide to surviving mission impossible projects. Upper Saddle River, NJ: Prentice Hall.
1 The organization, projects, and individuals have been disguised for confidentiality.
2 This checklist was adopted because it is the most complete list available. The 14 groups include 53 items.
3 These overrun percentages are calculated as a portion over the time from start of development through planned completion.
© 2010 Project Management Institute