Abstract
This paper describes the development of a fundamentally novel project representation. It explains the rationale behind this new approach and discusses the problems faced by organizations undertaking the development of large, complex projects. Parallels are drawn between computational analysis tools, such as Computational Fluid Dynamics (CFD), and project management networks.
The paper discusses the use of this new representation implemented as a software tool and reports briefly on the experiences of using it on major industry projects.
Introduction
The great Italian mathematician Fibonacci (Gillespie, 1971) is credited with accelerating the spread of the Arabic numbering system throughout Europe. Fibonacci was born and spent most of his life in Pisa which, during the early 13th century, was a great center for trade and commerce.
At the time, businessmen and traders used the Roman numbering system to conduct and record transactions. This numbering system uses a representation that is not well suited to simple arithmetical operations (try multiplying LXXXVII by LXXIII in your head without converting to decimal!).
Fibonacci realized that the Arabic system was far superior and, in his book Liber Abaci (Grimm, 1973), promoted the use of this new system. It soon became the standard within the commercial world because of its superior representation. Despite strong initial resistance, the overwhelming advantages of the new system led to a widespread transformation in the handling of business transactions.
This paper argues that a similar fundamental shift in representation is long overdue in the field of project management.
Product Knowledge and Process Knowledge
A typical engineering organization can be thought of as two intersecting communities who are custodians of the two types of intellectual capital owned by the organization; namely product knowledge and process knowledge.
The first community is primarily concerned with capturing, generating, storing and deploying product knowledge. This community is typically made up of scientists, engineers and specialists who are expected to manage and maintain knowledge in their own particular domain. This knowledge propagates in what Hatchuel (Hatchuel, 2004) calls a “dendritic” manner, always seeking to maintain “world-class” depth. This knowledge is characterized by precision, mathematical formulation, and determinism.
The second community is concerned with process knowledge and is largely made up of managers and team leaders whose domain is dominated by the time dimension, task sequences, resources and connections. This knowledge is characterized by uncertainty, complexity, and ambiguity.
Successful engineering organizations exploit the knowledge held in both these two communities by stimulating a well-orchestrated interchange of knowledge between these two domains to achieve the efficient and predictable development of new products (Masson et al, 2004).
Arguably, the quality of planning (and therefore the likelihood of success) for the introduction of a new product is related to the degree of overlap between the Product and Process domains. Schindler cites the notion of “Project Amnesia” and gives clear reasons for an organization’s inability to learn from project mistakes (Schindler, 2003). He suggests that a systematic knowledge management procedure is required to prevent project amnesia, and that central to this is the need to share knowledge between the technical and project domains.
The Planning Paradox
The authors have spent many years observing a range of organizations through consultancy and research projects. Most of this work has been within Aerospace and Defense organizations, where the track record for new product introduction has been particularly poor (UK House of Commons, 2003). Although it is difficult to provide a succinct definition of project failure, most researchers agree that a cost or timescale overrun in excess of 100% would clearly fall into this category. Defense and aerospace organizations frequently fail by this criterion. Furthermore, projects in this sector also regularly fail to meet their performance (or capability) targets.
Exhibit 2 is an attempt to classify projects in terms of overall complexity using the two dimensions of resources and novelty. A resource in this context refers generally to the time, manpower, facilities or materials available to the project. Novelty measures how new, and how unproven, the underlying technologies are, and accords broadly with the US DoD Technology Readiness Level (TRL) classification.
Exhibit 2 suggests that a project can be complex to achieve either because it is highly resource constrained or because the product itself exhibits a high degree of novelty.
A fundamental research question that has not, in our opinion, been satisfactorily addressed is:
“Why do large, mature, sophisticated organizations routinely fail to deliver complex new products successfully?”
The authors have found that there is a surprising lack of fundamental academic work in this area and the few good studies in this field have been dominated by consultancy organizations such as Rand (Galway, 2004) and Standish (Standish, 1995). Several leading academics have also expressed serious concerns about the quality of many recent project management publications, highlighting for example weaknesses both in their relevance to practice and in their general understanding of research methodology (Meredith, 2000). Most current research suggests that the dominant cause of poor performance is a lack of competence in core project management skills (UK Office of Government Commerce, 2005).
This paper suggests that there is another contributory factor which is more fundamental and concerns the very models, tools and representations used to capture the process knowledge associated with project management. There is an increasing realization that existing tools are inadequate (Williams, 2002). A recent EPSRC funded review of Project Management (Winter & Smith, 2002) suggests that:
“project management theory remains stuck in a 1960s time warp and that the underlying theory of project management is obsolete”
Not only are the current tools weak but the authors have also identified a paradox which reinforces the planning problem.
This planning paradox can be summed up as follows:
- Process Knowledge in the form of planning data, schedules and dependencies is the responsibility of, and is generated within, the Management domain
- The Engineering domain frequently has an aversion to planning and currently treats contribution to Process Knowledge as a low-priority, non-core activity
- Planning data therefore has inadequate input derived from and associated with Product Knowledge
- Plans are consequently too abstract and unrealistic, and are more likely to lead to project failure
- A poor track record in meeting and adhering to plans further erodes the Engineering domain’s interest and confidence in planning activities.
Symptoms of the planning paradox include comments from engineers who claim that “planning is a waste of time”. In extreme cases a parallel universe becomes established whereby the management domain struggles to maintain project data which becomes increasingly divergent from the reality inhabited by owners of the product domain.
This is a very real and pervasive problem in many organizations and is symptomatic of a poor interaction between the two knowledge communities.
Contributory factors
The authors believe that a subtle but important cause of the planning paradox concerns the models and representations used to formalize process knowledge.
The model that underpins virtually all commercial project management software was developed over 50 years ago. The current standard project model uses a very simplistic representation whereby a network of activities and dependencies is constructed (the “classical” dependency network). This model was formulated in an era when computational resources were scarce and the need for the model to be computed using hand calculations was a requirement.
As with any model, it was designed to provide a simplified, but useful, representation of a real system. For certain applications this simplification does not detract too seriously from the usefulness of the model. For projects such as simple building construction, for example, where activities take place in a well defined and fixed sequence with clear dependencies, the model is adequate, and application of the critical path method to this model is a valuable management tool. It can be used for predicting the project duration, managing resources and monitoring project progress.
For applications where such characteristics do not prevail (such as large, complex projects), significant differences between the fidelity of the model and the activity being represented become evident. In the absence of any credible alternative, in most organizations process knowledge is captured using project management tools to generate a simplified model of the project. When the critical path method is applied to the resulting models, the results are often meaningless.
The dependency network makes two important assumptions:
- The dependency logic is binary. This means that a downstream task is unable to start until all the upstream parent tasks are 100% complete.
- No feedback loops or iterations exist.
These seemingly reasonable assumptions have far reaching implications. In reality anyone who has been involved in a major project (especially where significant design activity is prevalent) knows that typical behavior within a project violates these assumptions on a routine basis.
Given the separation between process and product level dependencies, detailed product information dependencies are often hidden in large process level tasks in the project model. This, coupled with the binary logic limitation, is a great source of project model inaccuracy and complication. So serious is this difficulty that many of the latest project management tools provide a “fudge” whereby a network may be constructed with various combinations of intricate leads and lags between activities. This convoluted logic further adds to the complexity and lack of transparency of planning data.
Worse still, project management software fails to permit feedback loops to be defined: they are treated as a logical inconsistency. Again, this is a very serious constraint on model accuracy. In reality, the design of most complex products involves design iterations.
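Both limitations follow directly from the forward-pass calculation at the heart of the critical path method. The sketch below is our own illustrative Python, with invented names, not code from any commercial tool; it shows how binary dependency logic forces a successor to wait for its latest parent, and why any feedback loop must be rejected as a logical inconsistency:

```python
from collections import defaultdict, deque

def earliest_start_times(durations, deps):
    """Classical forward pass (illustrative sketch): a task may start only
    when ALL of its predecessor tasks are 100% complete (binary logic).

    durations: task -> duration; deps: task -> list of predecessor tasks.
    """
    indegree = {t: len(deps.get(t, [])) for t in durations}
    successors = defaultdict(list)
    for task, preds in deps.items():
        for p in preds:
            successors[p].append(task)

    start = {t: 0.0 for t in durations}
    ready = deque(t for t in durations if indegree[t] == 0)
    processed = 0
    while ready:
        t = ready.popleft()
        processed += 1
        finish = start[t] + durations[t]
        for s in successors[t]:
            start[s] = max(start[s], finish)  # wait for the latest parent
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if processed < len(durations):
        # A feedback loop makes the topological pass impossible: the
        # classical model can only reject the network as inconsistent.
        raise ValueError("cycle detected: network contains a feedback loop")
    return start
```

Running this on a loop-free network yields the familiar earliest-start schedule; adding a single feedback edge makes the topological pass impossible and the function can only raise an error, which is exactly the behavior engineers encounter when they try to declare an iteration in conventional tools.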
Sathi, Morton and Roth (1986) have shown that classical approaches to project management do not provide sufficient functionality to manage large engineering projects. Additionally, the lack of a project management method tailored to the requirements of new product development has been identified as one reason for unexpected cost and/or timescale overruns that have typified such projects in the past (Eppinger, Whitney, Smith & Bagala 1989). Vora and Helander (1992) have furthermore shown that the existing models of engineering design that underpin current project management practice are not adequate, and they have limited power and applicability.
New product design is an example of an activity where the yawning gap between the typical project representation and the dynamics of the actual project presents very serious problems. The project manager for a design project is forced to construct an artificially simplified dependency network which cannot show task iterations and partial dependencies. The project plan consequently becomes little more than a reporting tool that shows key milestones. It is of no use in managing, sequencing and controlling detailed activities simply because the detail is absent in such an abstract representation.
Solution
The authors have developed an entirely novel representation which seeks to remove restrictions associated with the classical dependency method. This representation has now been commercialized as a product called Plexus™ (Acsian, 2007).
A fundamental difference between the Plexus™ approach and the typical project modeling method concerns the way activity relationships are defined. Plexus™ defines a network of relationships which also specifies the strength or degree of dependency of each relationship. This network has been christened an influence network, to distinguish it from the more limited classical precedence network associated with typical critical path analysis. The influence network allows complex interactions to be modeled authentically, including iterations and loops.
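To make the idea concrete, the sketch below shows one way an influence network with dependency strengths might be represented. The data structure, the threshold rule, and the task names are all our own illustrative assumptions, not the Plexus™ internals:

```python
from dataclasses import dataclass, field

@dataclass
class Influence:
    """A graded relationship: strength 1.0 is a hard classical precedence,
    lower values are partial dependencies (our illustrative convention)."""
    source: str
    target: str
    strength: float

@dataclass
class InfluenceNetwork:
    edges: list = field(default_factory=list)

    def add(self, source, target, strength):
        assert 0.0 <= strength <= 1.0
        self.edges.append(Influence(source, target, strength))

    def can_start(self, task, progress):
        """A task may start once each incoming influence is satisfied in
        proportion to its strength: a 0.3 influence needs only 30% of the
        parent complete, a 1.0 influence needs the parent finished."""
        return all(progress.get(e.source, 0.0) >= e.strength
                   for e in self.edges if e.target == task)
```

Under this convention, an edge of strength 0.6 from a (hypothetical) aero_loads task to a wing_stress task would mean that stress analysis can begin once 60% of the loads data is available; and because no topological ordering is required, a reverse edge modeling an iteration loop is perfectly legal.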
Plexus™ is a highly sophisticated process mapping and analysis tool, capable of representing complex information flows at the right level of granularity, whilst capturing an appropriate quantity of data about each activity that makes up a network. Plexus™ is not restricted by the complexity of a dependency network: networks can be constructed and viewed at any level, using many different hierarchical views that present activities in a variety of filtered contexts.
Plexus™ is capable of representing multiple, interlinking iterative loops within a process and, as a consequence, avoids the need to abstract reality. Projects are represented as they really are, rather than fitted to the inadequacies of a limited toolset. Plexus™ enables rapid and simple data entry into a network, with provision for considerable metadata to be attached to each activity. The Plexus™ modeling process is networked and collaborative. Simple diagnostics give multiple simultaneous users immediate notice of, for example, activities which are unconnected or for which there are “dangling” requirements.
A key feature of the philosophy behind Plexus™ is a lean-thinking methodology which encourages project managers to develop activities backwards from deliverables. Such information-pull methodologies have been demonstrated to deliver a far more representative, focused and realistic project model. The workflow which emerges from such models captures the thinking patterns of the organization when addressing a project. When used in conjunction with the optimization techniques that Plexus™ employs, projects and processes can be subjected to extensive what-if analysis and risk assessment. Activities are optimized at levels of sophistication and realism beyond those available in other tools. Iterative loops are evaluated to determine not only when they should be invoked, but how often, in order to achieve the degree of project robustness required.
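The kind of loop evaluation described above can be approximated with a simple Monte Carlo experiment. The geometric rework model below is purely our own illustrative assumption, not the algorithm used by Plexus™:

```python
import random

def simulate_iteration_loop(base_duration, rework_prob, rework_fraction,
                            trials=10000, seed=0):
    """Estimate the expected duration of a task embedded in an iteration
    loop. After each pass the loop repeats with probability `rework_prob`,
    and each rework pass costs `rework_fraction` of the base duration.
    (A simple geometric rework model, assumed here for illustration.)"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        duration = base_duration
        while rng.random() < rework_prob:
            duration += rework_fraction * base_duration
        total += duration
    return total / trials
```

Under this model, a nominal 10-day task with a 30% chance of rework and half-cost rework passes carries an expected duration of roughly 12 days; experiments of this kind let a planner trade loop frequency against the required degree of schedule robustness.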
The proven advantages of this approach include:
- Very fast, collaborative construction of complex project networks
- Realistic and sufficiently detailed representation of complex networks
- Reusable data that is easily exported to other applications
- Genuine organizational buy-in across both management and engineering domains
- Detailed understanding of dependencies
Industry Experience
The research team has worked with many international aerospace organizations at the early stage of project planning. Although such large organizations have access to expensive, current-generation project management tools, they universally resort to the use of manual network sketches and “post-it” note planning, typified by Exhibit 3. Such planning sessions require labor intensive follow-up activities, where the hand-written data is transcribed into an electronic format suitable for eventual entry into an inevitably inaccurate project model in a limited software tool.
This situation is analogous to the status of Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) tools some years ago. Until recently, such analysis tools required very labor-intensive pre-processing prior to any analysis, whereby geometry was manually discretized, or meshed, to provide an efficient representation of a two- or three-dimensional field (Shimada, 2006).
As with a project management network model, CFD and FEA models involve a trade-off between mesh size and detail on the one hand, and accuracy of prediction on the other. Exhibit 4 gives an example of a typical mesh, showing how the granularity is increased only where required to resolve fine detail.
Specialized software programs have now been developed for the purpose of mesh and grid generation which
- Remove labor intensive preprocessing activities
- Provide detail commensurate with required output accuracy
- Automatically identify “hotspots” where finer detail is required
- Make the analysis stage more accurate, efficient and reliable
Plexus™ provides a directly analogous capability within the field of project management, by automatically identifying planning “hotspots” and allowing unrestricted detail to be provided in these critical areas.
Exhibit 5 is a photograph taken at a recent planning session using the Plexus tool in a major aerospace company. This shows a group of engineers and managers concurrently generating a product development network.
A typical early planning session involves the establishment of a project “war-room” equipped with a data projector and a number of networked laptop computers. This allows teams to switch between a collective review of the planning data (using the data-projector) and intensive sessions where sub-groups concurrently develop and refine the network using individual laptops. The Plexus software allows fully concurrent manipulation and development of the network, through web-technologies and a server hosted data-base.
This Plexus™ workshop process has proven to be extremely popular with engineers and project managers alike, because:
- It allows engineers to declare a detailed representation of the relevant product-level dependencies, including iterations.
- It provides sophisticated viewing and navigation tools, that allow users to gain a deep appreciation of context.
- It includes automated health checking, which immediately reveals missing network data, inconsistencies or unconnected activities.
- The software allows very fast generation of realistic, detailed networks
- Its diagnostic routines automatically classify network nodes, highlighting critical nodes, deliverables and inputs, in real-time.
- It employs powerful automatic layout algorithms to arrange the network in a logical, compact and visually appealing format (Exhibit 6).
Following the data entry phase, a leading-edge optimization technique is used to optimally sequence activities using discrete-event simulation and analysis techniques. Extensive reporting tools allow participants to view the outputs of this analysis, “sanity check” the network, and further refine it, prior to committing to a particular plan.
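A toy discrete-event simulation illustrates the sequencing step. This is again our own sketch under simplifying assumptions (a single homogeneous resource pool and fixed task durations), not the proprietary Plexus™ optimizer:

```python
import heapq

def simulate_schedule(durations, deps, n_resources):
    """Tiny discrete-event simulation: a task becomes eligible when its
    predecessors finish, and at most `n_resources` tasks run at once.
    Returns each task's simulated finish time (makespan = max value)."""
    pending = {t: set(deps.get(t, [])) for t in durations}
    events = []          # min-heap of (finish_time, task)
    finish = {}
    clock = 0.0
    running = 0

    def launch():
        # Start every eligible task that a free resource can serve.
        nonlocal running
        for t in sorted(durations):
            if (t not in finish and t not in (e[1] for e in events)
                    and not pending[t] and running < n_resources):
                heapq.heappush(events, (clock + durations[t], t))
                running += 1

    launch()
    while events:
        clock, t = heapq.heappop(events)   # advance to next completion
        finish[t] = clock
        running -= 1
        for other in pending:
            pending[other].discard(t)      # release waiting successors
        launch()
    return finish
```

With two parallel tasks feeding a third, a single resource gives a makespan of 5 time units while two resources give 3; a real optimizer explores trade-offs of this kind across thousands of activities and iteration loops.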
Conclusions
Experience of using this new representation provided by Plexus™ demonstrates that mixed teams of engineers and managers can successfully share knowledge which covers both the product and process domains. A shared representation results which is sufficiently detailed and realistic to ensure there is strong buy-in from all participants.
By decentralizing the development of the project network, a large number of participants are empowered to contribute to the plan. This results in very large, realistic, and detailed networks being developed in a fraction of the time it takes for centralized or “post-it” planning.
Experience in the use of this tool has shown that it is essential to put together an initial framework or project skeleton prior to the planning workshops. This framework typically includes defining key deliverables, resources, organizational breakdown structures, business targets and initial work-breakdown structures. Again, this is analogous, in CFD or FE analysis, to ensuring that the boundary conditions are well defined and understood before network construction starts. This prevents participants from wasting time building this contextual data in the workshop.
It is equally important to have good workshop facilitators (typically two). The first of these focuses on good project management and modeling skills, to ensure that the resulting network data is clear, logical and complete. Drawing on the parallels with CFD or FE analysis, this facilitator should help to identify where greater detail is required to provide a robust, accurate and efficient network. The second facilitator’s role is to focus on the software tool itself, providing guidance on how best to enter data, navigate and construct hierarchies, and attach resources, constraints and dependencies.
We believe that this approach is a major step towards removing the reliance on restrictive and clumsy, classical precedence network logic as a project representation. This can help to resolve the planning paradox, and lead to a significantly improved probability of project success.