Knowledge production and the success of innovation projects

Serghei Floricel, University of Quebec at Montreal, Canada
John Michela, University of Waterloo, Canada

*The authors would like to acknowledge the financial support for the research presented here from the Project Management Institute, via a grant for the research project titled “Refining the knowledge production plan,” as well as the support from the Social Sciences and Humanities Research Council of Canada, via the grants awarded to the projects “Knowledge Production for Innovation” and “Managing Innovation in the New Economy.”

Abstract

What is the role of knowledge production in innovation projects? What kind of knowledge increases the chances of success of such projects? In this paper we build on the assumption that the success of innovation projects depends to a large extent on producing technological knowledge as well as other types of knowledge. In particular, we argue that the most valuable knowledge depends on the nature of the project, namely on the kind of complexity that is most problematic for it. To test this hypothesis, we first rely on innovation literature to develop a framework that classifies knowledge according to its representational form. We then build on the literature about the relation between science and technology to identify three types of complexity most commonly encountered in innovation. Further, we hypothesize what kind of knowledge is most likely to be produced in projects facing each type of complexity, as well as what types of knowledge are related to their success. Exploratory findings based on a large-scale survey give substantial support for this contingent theory, and provide insights for its development. We conclude by outlining the theoretical and practical implications of these findings.

1. Introduction

Knowledge is a socially constructed system of representations about the world. Innovation projects rely on, and in turn produce, significant amounts of knowledge. For example, technology is essentially a type of knowledge that represents actions capable of producing useful outcomes (Rosenberg, 1982). Other important knowledge used in innovation projects refers to markets and user needs (von Hippel, 1986; Griffin and Hauser, 1993). Knowledge used in innovation is also represented in various forms: factual information and domain maps; scientific laws and explanations; manufacturing recipes; as well as models and drawings of artifacts and of their functioning (Vincenti, 1990; Mitcham, 1994). Each of these forms is produced differently and serves different purposes (Bunge, 1967; Bohn, 1994; Garud, 1997). Researchers also note that some domains of activity seem to favor some of these forms of representation.

However, theories about the role of knowledge in innovation projects would benefit from explaining these preferences and the impact that each of them has on the success of innovation projects. We therefore attempt to make two theoretical contributions in this paper. First, we draw on insights from the innovation literature to identify two key dimensions that set apart the representations of knowledge used in innovation projects. Using the extremes of these dimensions and their common midpoint, we identify five types of emphases in knowledge production and provide concrete examples for each of them. Second, relying on the historical, sociological, and philosophical literature about technology, particularly about its relation with science and industrial practice, we argue that most innovation projects confront one of three types of complexity. By theorizing the challenges that each type of complexity poses for innovation, and by discussing how different knowledge representations help address these challenges, we are able to hypothesize which knowledge emphases are most likely to be observed in, and which are most likely to increase the chances of success of, the projects facing each type of complexity. Figure 1 summarizes our basic theoretical argument. The dotted arrow suggests that the nature of complexity influences the knowledge representation emphases in innovation projects via a historical learning process, which is reflected in the standard skills and approaches that practitioners schooled in the same domain bring to innovation projects. But the nature of complexity (the problem to be solved by an innovation project) also moderates the relation between project activities (mainly the kinds of knowledge representations that are built) and the performance of the project. This influence is more subtle: performance may depend on representations that are less common in the given sector but can make a difference in a competitive context, because such contexts limit time and resources, reward flexibility, impose constraints related to the protection of intellectual property, and so forth.


Figure 1. Outline of the theoretical argument.

The paper proceeds as follows. In the next section, section 2, we present our categorization of knowledge representations. In section 3, we discuss the three types of complexity and present hypotheses about the representations that are most often used, as well as more often related to performance, in projects dealing with each type of complexity. In section 4, we outline the methods used for an exploratory test of these hypotheses, based on a large-scale survey of knowledge production practices in innovation projects. Section 5 presents the results of the test. A discussion section summarizes the theoretical insights and presents some tentative guidelines for project managers regarding the direction and intensity of knowledge production efforts.

2. Knowledge representations in innovation projects

The most important properties of knowledge are related to the way it represents a relevant reality. The main argument of this paper is that the nature of these representations, irrespective of their degree of completion, be they a fleeting verbal communication, a hesitant sketch scribbled on a whiteboard, or a definitive “inscription” (Latour, 1987) carefully recorded in digital form, exerts a crucial influence on project activities (Callon, 1986). For example, the form affects the use of knowledge as a cognitive inspiration or as a guide in problem solving and decision making (Fleming and Sorenson, 2004). Variations in the form of representation also qualify the use of knowledge as a means of social coordination and political influence (Carlile, 2002; Ewenstein and Whyte, 2009). Two properties stand out in our review of the innovation literature with respect to their impact on activities in innovation projects.

The first property, the degree of abstraction, captures the difference between, say, representing an object via a photo as opposed to a mathematical symbol. The term abstraction comes from the Latin word abstrahere (to draw away), and refers to extracting some essential property from a phenomenon of interest. The operation takes objects and phenomena, with their idiosyncratic forms and specific imperfections in shape, texture, and operation, and produces a more general idea, whose name or symbol can usually be associated with many objects and is not dependent on any concrete object. Abstraction has been linked to several aspects of innovation. For example, users are unable to clarify their needs if they lack concrete experience with a product, a problem that is more acute for highly innovative products (Leifer et al., 2000). Marketing scholars also suggest that clients tend to group products into categories with an intermediate level of abstraction, such as “chairs,” as opposed to the more abstract “furniture,” or the less abstract “adjustable office chair” (Clark, 1985). This cognitive grouping is relevant for predicting the diffusion of new product categories, or when segmenting markets for positioning a new product (Gutman, 1982; Rogers, 1995). Scholars have also found that technical knowledge used in innovation has varying degrees of abstraction. For example, differences were noted between the representations used in different sciences: theoretical physics seems to favor, or categorize as more legitimate, representations that are more abstract than those used in biology (Knorr Cetina, 1999). Also, science overall seems to favor more abstract representations than technology, which tends toward an intermediate level of abstraction, because designing real objects requires representations that capture some of their irregularities (Kline, 1987; Vincenti, 1990).

The second important property of representations is complexity, which refers to the number of elements included in a representation, as well as to the number and nature, especially nonlinearity, of the interactions between them. For example, comparisons of predictions with real outcomes reveal that nonlinear models capture new product diffusion processes better than linear representations (Bass, 1969; Arthur, 1989). Also, the extent to which knowledge matches the complexity of natural phenomena influences, for example, the ability to design a modular architecture (Chesbrough and Kusunoki, 2001). Likewise, representations that match the degree of complexity of artifacts are a condition for their safe operation (Perrow, 1984). However, decision processes that use simple real-time data rather than complex models reduce procrastination and lead to better innovation outcomes in high-velocity sectors, such as computers and semiconductors (Eisenhardt, 1989). But success also depends on involving more experienced individuals, which means that integrating the simple data still relies on complex representations, in the form of these individuals' sophisticated mental frameworks. In this case, the role of representations seems to complement the skills and abilities of their users. Figure 2 depicts these two dimensions and the knowledge emphases that correspond to their extremes and to their common midpoint.
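To illustrate the kind of nonlinearity at stake, the Bass (1969) diffusion model cited above, in one common formulation, expresses the adoption rate as the product of two terms that both depend on cumulative adoption, something a purely linear model cannot capture:

$$ \frac{dN(t)}{dt} = \left[\, p + \frac{q}{m}\,N(t) \right]\left[\, m - N(t) \right], $$

where $N(t)$ is the cumulative number of adopters, $m$ the market potential, $p$ the coefficient of innovation (external influence), and $q$ the coefficient of imitation (internal influence).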

A first emphasis in knowledge representation, “deep dissection,” focuses on the inner causal and operational workings of an artifact or a natural system of interest. Examples include technical drawings of artifacts as well as “causal dissections” of biological, chemical, and other processes. These representations are highly complex because they go beyond surface perceptions and strive to include as many factors, and relations between them, as possible. But they do so at the cost of shedding some of the process, shape, and texture details of real objects, by resorting to representational conventions (Henderson, 1999). The main benefit of this emphasis is increasing the grasp or “mastery” (Bohn, 1994) over the represented reality, for the reliable reproduction and precise control of relevant processes and operations, and for the optimization of new object designs.


Figure 2. The two dimensions of representation and the resulting knowledge emphases

On the opposite side of the complex-simple distinction we find the emphasis called “structured database.” Each object of interest is seen as a unitary entity with few characteristics, represented by its name, a pictogram, or an association to a category, in isolation from its context. The focus is instead on producing classifications or simple mappings of objects. Examples include “objects” or “data-sinks” in object-oriented software projects, tables of components or of types of product failures in mechanical projects, and client classifications. The benefit of this emphasis is organizing a mass of information for quick orientation: facts encoded in ways that speak to common sense, and mapped onto moderately abstract categories with self-evident meaning, enable fast identification and association by users. This type of emphasis is close to the ideals of the knowledge management movement.

Innovation projects can also favor extreme concreteness, an emphasis we call “rich illustration.” Representations are holistic renderings that encode a maximum of perceptual detail, ideally a mirror recording or “pixelization” of reality. Examples include three-dimensional (isometric) illustrations, mockups, photos, sound recordings, and videos, as well as narratives, such as detailed incident reports. The latter use many words in their everyday, rich, and fuzzy sense rather than as precise symbols of abstract notions. At the limit, artifacts or prototypes can represent themselves, if they remain available for observation, manipulation, or testing (Hargadon and Sutton, 1997). The benefit of these representations is conveying as fully as possible the sensation of being in the presence of an object, touching and manipulating it, and witnessing its functioning in a real context. This can enable the user of the representation to scrutinize, make associations, extract meaning, and imagine new forms and scenarios (Nonaka, 1994; Dahl et al., 1999).

On the other end of the abstract-concrete divide lies an emphasis that we call “generic formula,” favoring the abstraction of essential properties from a class of objects, establishing a relation between these properties over the range of their variation, and expressing it as a mathematical function or a topological chart. A good example of such emphasis is the effort made in the electrical and aviation industries to develop mathematical formulas, derived from basic science or generalized from numerous trials, to enable the design of a variety of equipment in a given class without extensive experimentation (Hughes, 1983; Kline, 1987; Vincenti, 1990), as well as their effort to chart the properties of materials, such as those of steam under different pressures and temperatures (Robinson, 1937). The benefit of such representations is to inform the development of a broad range of artifacts and processes based on calculations rather than trial and error.

The last emphasis is called “modular configuration,” and it includes representations that partition objects into self-contained chunks, and establish clear and parsimonious links between chunks, as well as with other objects. Examples include architectural diagrams and schematic figures. Such representations are moderately abstract: chunks and links are represented as archetypes that retain some of the concreteness of real objects but extract only properties essential for interactions, such as functions, inputs, and outputs. They are also moderately complex: by seeing chunks as unitary entities, their number is reduced, while, from all possible interactions, just a few influences and flows are included. The main benefit of such representation emphasis is easing the conceptual manipulation and rearrangement of objects and their combination with other objects.

In the next section, we argue that specific emphases are favored in certain innovation projects because their properties help these projects solve the challenges associated with the type of complexity they face, but also that the selected emphases moderate the relation between such complexity and the success of innovation projects.

3. Complexity and problem solving in innovation projects

Innovation projects have been frequently characterized as a problem solving process (Brown and Eisenhardt, 1995). But the nature of the problem itself has often remained in the background, assimilated with a generic quest for meeting certain goals or specifications. However, a number of recent studies suggest that, in different domains of activity, the problem-solving processes that are used more frequently, as well as those that are more likely to lead to success, are different (Floricel and Miller, 2003). Theorists have identified various contingent factors that influence the problem-solving process, such as the novelty of the innovation (Leifer et al., 2000), the size of the project (Shenhar, 2001), the dynamism of environmental change (Eisenhardt, 1989; MacCormack, Verganti and Iansiti, 2001), etc.

In this paper we suggest a complementary factor, which lies at the core of the problem-solving idea and is intimately related to knowledge production, namely the common nature of the complexity that innovation projects face in a given domain. Studies of innovation suggest that, in a given domain, the most difficult problems have much in common across projects (Hughes, 1983; Stokes, 1997). By reviewing the literature on technology, and on its relation to science (Bunge, 1967), we suggest that the complexities afflicting most innovation projects can be parsimoniously categorized into three types, namely control, functional, and causal. In the following paragraphs, we first explain the specific nature of each type of complexity, then identify the problem-solving process most likely to be used in the innovation projects dealing with it, and, finally, deduce the forms of representation that are most apt to support this process.

The first type of complexity, control complexity, can be defined as the number of factors and interactions that innovators must take into account to design and produce an artifact that reliably performs a useful function. This complexity is relevant in domains, such as automotive, electricity generation, petrochemicals, and aviation, in which innovation traces its roots to artisans’ commonsensical experience of making things by shaping familiar materials for the purpose of achieving some simple useful function (Bush, 1954). In the resulting artifacts, the shape of parts is subordinated to this function according to a hierarchical logic (Clark, 1985). Everyday experience suggests that objects can be shaped, and will behave in accordance with their creators' solution, if the latter take into account a number of macro-properties, given either by everyday experience with objects or by having made similar objects in the past. In its “engineering” version, the ideal problem-solving strategy relies on a base of generic formulas to calculate the shapes and select the materials of artifacts (see for example Ulrich and Eppinger, 2000). This ideal of rational ex ante design has long been attained for many artifacts, such as electrical motors (Hughes, 1983), which implement only a few simple functions with ordinary performance requirements.

However, control complexity resurfaces every time innovators take on designing objects that are “new to the world,” or compete to increase the scale and performance of existing artifacts. Concretely, novelty makes some old rules and formulas irrelevant, while bolder goals push innovators beyond the boundaries in which current knowledge reliably represents the behavior of objects. In both situations, trying to make artifacts reveals some new factors that play an essential role in the functioning of the artifacts. A typical occurrence, as conditions become more severe in the quest for higher performance, is that parts start having secondary, dysfunctional interactions (Simondon, 1989). For example, as power generator capacities increased, the heat produced by poorly understood currents, induced by useful magnetic fields, interfered with the operation of all parts and became the key capacity-limiting factor (Hughes, 1983). These additional factors and interactions for which little knowledge is available will push the problem-solving process away from the ex ante “theoretical” design and calculation, toward an “empirical” approach based on an iterative trial and error (Shenhar, 2001; Leifer, 2000) and practical experience with artifacts.

Of course, new factors and interactions could be understood via the scientific study of natural phenomena, but this not only takes more time, it also leads to knowledge represented in overly abstract and symmetrical forms, such as differential equations, which cannot be applied directly to the design of objects with complex forms and which require additional knowledge, say, about boundary conditions (Nightingale, 1998). Even if such knowledge can be transformed into engineering formulas, it would not alleviate the task of calculating the ideal form, as most formulas are applicable only to a limited number of factors and typical interactions. Therefore innovators are forced to resort to their intuition, to trial and error, and so forth, until they find an adequate artifact form and production process. The result is a move away from simple part forms and artifacts with a clear (“theoretical”) repartition of functions to parts, toward a “concretization” of artifact shapes (Simondon, 1989) and an integration of architectures (Ulrich, 1995) that prevents dysfunctional interactions. But this makes ex ante calculation even more difficult, given that formulas are best suited for simple, symmetrical forms. The possibility of avoiding these difficulties by using discrete modeling and numerical simulation is not always open: even with increasing computing power, numerical techniques, such as finite element analysis, frequently reach computational limits. Moreover, even an extremely fine discretization can prove insufficient, for example when modeling a stress concentration area of a part. The remaining possibility of adding sub-functions, and the respective subsystems, to control new factors or interactions will, in turn, lead to an even larger increase in control complexity. For example, a solution to the generator overheating problem was adding an artificial cooling function, implemented by a system of pipes that removed heat from its parts. But this solution added, in turn, new potential sources of secondary interactions, such as leaks and even explosions, when hydrogen was tried as a cooling agent (Hirsch, 1989).

This suggests that projects confronting control complexity will produce “rich illustrations” more frequently than other projects. These representations offer a holistic perspective, which supports the hierarchical (top-down) development of “concretized” forms, while also enabling a detailed inspection and discussion of defects, failures, or malfunctions in prototype trials and product use, which supports the bottom-up development of an intuitive understanding of their origin, to be used in the next design iteration. Other frequently used representations would be “deep dissections,” which capture specific forms and relations between parts in current products, helping grasp their functioning but also enabling a reproduction of these forms in case a trial works for reasons that are not fully known. However, even if these customary representational forms support problem solving in domains facing control complexity, they do not ensure project performance with respect to competitors. Instead, the performance differential depends on the ability to quickly acquire, adapt, or develop generic formulas, in spite of the difficulties described above. Because such projects deal with only moderate interactions in a possibility space of macro-properties, generic formulas enable a broader and more effective evolutionary search for solutions than exclusive reliance on trial and error (Gavetti and Levinthal, 2000; Ahuja and Katila, 2004; Fleming and Sorenson, 2004). This conclusion holds irrespective of whether the formulas are created inductively or grounded in science, and even if their application is not perfect or precise. These conclusions can be summarized in the following hypotheses:

Hypothesis 1: Projects confronting control complexity emphasize “rich illustration” representations to a larger extent than other projects (H1a), and “deep dissections” to no lesser extent than other projects (H1b). However, within this group of projects, performance is more strongly correlated with the use of “generic formulas” than with other types of representations (H1c).

A second type of complexity, functional complexity, refers to the number of functions realized by an innovation, and to the interoperability requirements between these functions. It characterizes domains that trace their roots to idealized inquiries into the world made by mathematics, logic, and theoretical physics. Such domains include software products, digital telecommunications, computers, semiconductors, etc. Functional complexity makes top-down rational engineering problem solving difficult for such innovations because the number of interactions between parts implementing various functions quickly creates an overwhelming computational load. One of the problem-solving approaches used instead is to first develop a modular architecture, which assigns functions to subsystems in a way that minimizes the interactions between subsystems (Simon, 1981; Ulrich, 1995). While the execution of functions still depends on the physical substrate of the artifact, the material underpinning of each function is contained within a separable subsystem (module) whose interactions with other subsystems occur through well-defined interfaces. Artifact modularity can be achieved by studying the interactions between parts and learning how to separate and control them via interfaces (Chesbrough, 2003).

A problem-solving strategy that can address an even more drastic functional complexity is to separate functions outright from their physical substrate and to focus solely on their efficient and consistent interoperation rather than on secondary physical interactions. This approach was enabled by semiconductor technologies that not only reduced the workings of the material substrate to a simple and clearly defined electrical signal, representing one bit of information, but also provided massive capabilities for storing and processing such signals, enabling the execution of various functions. The problems posed by functional complexity could then be solved by what Simon (1981) calls the “sciences of the artificial,” and Bunge (1967) termed “operative technological theories,” whose focus is on optimizing such functional operations. Thus, operations research and other branches of mathematics focus on creating algorithms that optimize such operations within a given set of constraints. In turn, computer science and information theory focus on processing digital signals. At levels further removed from the physical substrate, software engineering and systems engineering focus on how groups of functions can be organized in layers and modules that operate parsimoniously and are separated from other functions by minimal interfaces.

Both these strategies shift problem solving into the abstract representational realm. Hence, projects facing functional complexity are likely to produce all of the abstract but moderately complex types of representations. Among these, modular configurations are likely to be produced more frequently than in other types of projects, because functional complexity projects usually involve iterative heterarchical restructurings, a process driven with equal force by the overall system and by its parts. For example, functional complexity usually grows with the addition of new functions on top of those already produced by existing systems. This often occurs through the convergence of previously separate products, such as cameras and mobile phones, and systems, such as those for voice, video, and data transmission. Thus, a key problem is accommodating new or significantly improved functional subsystems in systems that were optimized for other functions. Even in highly modular systems, this calls for restructuring the overall architecture, which reverberates through most other functional modules. “Modular configuration” representations include many elements but highlight only essential functions and connections, enabling an intuitive grasp and a relatively easy rearrangement. Other frequently used forms of representation are likely to be “generic formulas,” used at the level of detailed design as algorithms and equations involved in the implementation of functions, and “structured databases,” used to organize the information about needs, functions, technologies, interface standards, etc. However, the practice of using modular configurations and structured databases can sometimes hold innovation projects back from the use of more complex representations, such as “deep dissections.” Trained to deal with schematic figures and to focus on their internal coherence, practitioners would be less inclined to focus on complex underlying processes, such as physical interactions between subsystems or customer cognitions, and will attempt to hide them under stylized requirements or layers of technical architecture. The absence of deep dissections could result in failure to solve customer problems, to adequately carve out modules, or to specify interactions with other systems.

Hypothesis 2: Projects confronting functional complexity emphasize “modular configuration” representations to a larger extent than other projects (H2a), and “structured databases” (H2b) and “generic formulas” (H2c) to no lesser extent than other projects. However, within this group of projects, performance is more strongly correlated with the use of “deep dissections” than with other types of representations (H2d).

The third type of complexity, causal complexity, refers to the number of causal paths that converge to produce a useful function, and particularly to the number of interactions, feedback loops, and distinct levels of organization that are activated by these pathways. It characterizes innovation projects that, rather than making things, attempt to harness natural processes to achieve a desired effect. Historically, this included domains close to the mysteries of life and creation, from materials making, pharmacy, and agriculture to food industry subsectors using processes such as fermentation. Currently, with the development of knowledge, many of these domains deal mainly with control complexity, while medicine, biotechnology, and the biopharmaceutical field still face severe causal complexity. For example, trying to block the growth of a tumor faces the possibility that the same agent also destroys cells essential to normal living functions and triggers defensive reactions of the organism. In addition, cancer cells and tumors have exceptional heterogeneity and adaptability, meaning that patients need treatments with particular causal “keys”; yet, even if such a key is found, the disease can evolve and circumvent the treatment (Kamb et al., 2006). All these mechanisms cut across levels of organization, from molecules and genes to cells and organs. As a result, the success rate for innovations dealing with causal complexity remains low (Nightingale and Martin, 2004); effective products only emerge after several decades of effort (Gibbs, 2000).

Of course, problem-solving strategies in such fields depend on the accumulating understanding of relevant natural phenomena across a broad front, in what amounts to a continuous interaction, indeed a co-evolution, of science and technology (Stokes, 1997; Murray, 2002). But, regardless of these advances, causal complexity precludes innovators from designing products in a hierarchical or even heterarchical manner. The main obstacles are the failure to develop generic knowledge (Mayr, 2000) and the inability to integrate various knowledge strands (Dunne and Dougherty, 2009). The relevance of the first obstacle is exemplified by complaints about the “data” focus of biological sciences and the rarity of integrative models; the second, by calls for moving away from molecular reductionism toward considering clusters and systems of molecules in cells and tissues. Given these conditions, innovators are forced to adopt a bottom-up strategy, which uses serendipitous findings in scientific experiments or previous projects to identify mechanisms or agents that could lead to a new product. Leads are then envisioned in the context of other factors and processes that affect the relevant natural system and, if they pass this conceptual test, they are checked in systems that increasingly resemble the real systems in which they will be incorporated. The lack of explanatory knowledge is made up for, especially in the initial stages, by massive, increasingly automated, experimentation with leads (Thomke, 1998). The lack of knowledge integration is compensated for, in the later stages, by the iterative checking of promising leads against concrete systems, at progressively higher levels of organization and in conditions more closely resembling the real beneficiaries and contexts (Gibbs, 2000). This systemic feedback is essential, as initial positive results are regularly reversed during subsequent stages of effectiveness demonstration and, often, even if leads seem to work, innovators realize that they may do so for reasons different from those initially thought.

The large number of factors, interactions, levels, and loops considered relevant, the constantly shifting knowledge about them, and the fact that complex integrative representations, such as deep dissections, remain out of reach suggest that other types of representations will be most frequently encountered in innovation projects dealing with causal complexity. The first are generic formulas that capture at least some partial relations between relevant factors. The contention that these formulas will be produced more frequently in projects dealing with causal complexity than in other categories of projects seems to be supported, for example, by the popularity of biostatistical and bioinformatics tools, which produce such relations “empirically.” Second, the branching out of causal explanations into factors and mechanisms at different levels, and the need to keep track of a vast array of evolving technologies and measurement methods, as well as of the data and findings flowing from different sources, highlight the importance of structured databases. Because this kind of representation supports such a difficult search, we anticipate that it will be produced no less frequently than in other types of projects. However, the relative performance of projects will depend on the ability to integrate all this knowledge, in spite of the difficulties already discussed, in order to determine the direction of the iterative problem-solving process. Given the limitations of external representations for this purpose, one of the few available alternatives is relying on intuitive integration by innovators, taking advantage of their scientific sense and their ability to imagine invisible processes (Knorr Cetina, 1997; Dunne and Dougherty, 2009). But such abilities are cultivated not only by theoretical study and numeric data, but also by perceptual exposure to a large number of exemplars of similar processes. In this respect, rich illustrations enable innovators to grasp processes as they happen in the realm of concrete natural objects, with all their causal complexity. The importance of these representations is underscored by the renewed interest in “in vivo empiricism” (Booth and Zemmel, 2004), as well as in visualization tools showing processes as they happen in cells or even at the molecular level.

Hypothesis 3: Projects facing causal complexity emphasize “generic formula” representations to a larger extent than other projects (H3a), and “structured databases” (H3b) to no lesser extent than other projects. However, within this group of projects, performance is more strongly correlated with the use of “rich illustrations” than with other types of representations (H3c).

4. Methods and data

We took advantage of data obtained by a large-scale survey conducted for other purposes to attempt an exploratory test of these hypotheses. The survey used an extensive questionnaire that investigated several aspects of the context and practices of innovation in firms. It was sent to vice-presidents of research and development (R&D) and chief technology officers of companies in a variety of sectors from North America, Europe, Asia, and Latin America. Among the executives who could be reached by the survey team, the response rate was approximately 30%. The resulting database contains answers from 792 firms. Of these, only data from 673 firms were usable for all our analyses, as some firms did not answer items located near the end of a long questionnaire. To measure knowledge representation emphases, we relied on questionnaire sections referring to innovation project practices. Table 1 lists the items corresponding to each of the five knowledge emphases. The items use 7-point Likert-type scales. Alpha coefficients shown in parentheses indicate that the resulting measures (all items listed for the corresponding emphasis in Table 1) have satisfactory reliability. All alphas approach or surpass the .70 target set by Nunnally (1978), and in all instances exceed the acceptable value of .50 suggested by Hair et al. (1999). The measures used in the analyses were computed by averaging the values of the corresponding items.

For project performance, we used four 7-point Likert-type items measuring managers' subjective perceptions of the innovative performance of the firm relative to competitors. These items referred to the innovation-driven growth rate of the firm, its creation of customer value through innovation, the frequency of major new product releases, and the proportion of revenues generated by new products. Answers to these items were averaged to form a “new product growth” score; the alpha reliability of this composite was .82.
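As a methodological illustration, the following minimal sketch shows how such composite scores and their alpha reliabilities can be computed; the column and file names are hypothetical, not those of the actual survey data.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical item columns for the "new product growth" composite (7-point scales)
perf_items = ["innovation_growth", "customer_value", "release_frequency", "new_product_revenue"]
df = pd.read_csv("survey_responses.csv")  # hypothetical file name

alpha = cronbach_alpha(df[perf_items])                   # the paper reports .82 for this composite
df["new_product_growth"] = df[perf_items].mean(axis=1)   # composite score = mean of the four items
print(f"alpha = {alpha:.2f}")
```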

Table 1: Items used as indicators of knowledge emphasis

Representation emphasis (alpha) — items (abbreviated), spanning the idea exploration, concept development, and product design phases:

Generic formula (.67): 6.1.7 Leading external experts, scientists and gurus; 6.1.8 Interact with university spin-offs; 6.3.5 Reuse data, methods, exemplars, and models; 6.3.7 Reuse knowledge produced for other projects; 6.3.4 Produce lots of new knowledge

Structured database (.67): 6.1.6 Industry associations and standard bodies; 6.1.4 Distributed resources for new idea development; 6.3.2 Map technical and market environment; 6.3.3 Extensive classification of user needs; 6.3.13 Partial experimentation to obtain data

Modular configuration (.62): 6.3.6 Assemble latest modules and parts on market; 6.3.17 Iterations redefining concept and architecture; 6.3.8 Reuse platforms and modules produced inside; 6.3.9 Reuse modules discarded in other projects

Deep dissection (.67): 6.1.1 Long-term internal discovery programs; 6.1.2 In-house market research capabilities; 6.3.12 Develop and test several concepts in parallel; 6.3.14 Extensive simulation of product behavior; 6.3.15 Detailed causal modeling of product behavior

Rich illustration (.70): 6.1.3 Move staff from unit to unit; 6.1.5 Interaction with key suppliers and customers; 6.3.10 Early integration of key customers; 6.3.11 Benefit from suppliers’ experience; 6.3.16 Rich and concrete functioning experience; 6.3.18 Ask key customers to test product

Note: Items are presented in abbreviated form. Values in parentheses are alpha coefficients, indicating the reliability of the measure formed by averaging all items listed for that emphasis.

To capture the kind of complexity that these firms typically faced in their innovation projects, we first allocated these firms to 4-digit sectors in the North American Industry Classification System (NAICS). Then we allocated these sectors to categories corresponding to the three types of complexity, based on the considerations described in the theoretical section, as well as on Floricel and Dougherty's (2007) discussion of the nature of innovation in different sectors. The sector allocation was performed by one doctoral student and then was checked by a second doctoral student, who also performed the allocation of sectors to the three categories. The allocation of firms to sectors relied on (a) open-ended descriptions of their sector of activity that firms provided in an allotted space on the questionnaire and (b) information gathered from firms' websites and other sources of secondary data. Disagreements between the students with respect to sectors concerned about 10% of the sample; they were resolved tentatively by the second student and approved by the first author of this paper, who also checked the allocation of sectors to the three categories. This ex ante allocation procedure did not use any measures from the questionnaire, reducing the potential for bias in subsequent analyses. Moreover, the two-step allocation to categories made any possible bias even more remote.
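The following sketch illustrates this two-step allocation in code form. The sector-to-category mapping shown is a small, hypothetical excerpt consistent with the sector examples given in Section 3, not the mapping actually used in the study.

```python
# Illustrative two-step allocation: firm -> 4-digit NAICS sector -> complexity category.
# The mapping below is a hypothetical excerpt, consistent with the examples in Section 3.
SECTOR_TO_COMPLEXITY = {
    "3361": "control",     # motor vehicle manufacturing
    "3364": "control",     # aerospace products and parts
    "2211": "control",     # electric power generation, transmission and distribution
    "3341": "functional",  # computer and peripheral equipment
    "3344": "functional",  # semiconductors and other electronic components
    "5112": "functional",  # software publishers
    "3254": "causal",      # pharmaceutical and medicine manufacturing
}


def complexity_category(naics_code: str) -> str:
    """Return the complexity category for a firm's 4-digit NAICS sector."""
    return SECTOR_TO_COMPLEXITY.get(naics_code[:4], "unclassified")


print(complexity_category("325412"))  # -> "causal"
```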

5. Analyses and results

The hypotheses regarding the knowledge representation emphases that are more likely to be observed in projects dealing with the three types of complexity were tested by calculating the means in Table 2 and comparing them across categories, one variable at a time, using one-way analysis of variance (ANOVA). Four of the five knowledge variables yielded statistically significant or near-significant (p < .10) differences. For two of these four variables, the highest mean was the one predicted to be especially high for that type of complexity. Specifically, firms facing causal complexity put the greatest emphasis on generic formulas, as anticipated in Hypothesis H3a. Also, as indicated by boldfaced italics in Table 2, Hypothesis H2a was supported: the highest mean for modular configurations was observed in projects facing functional complexity. However, the higher mean on rich illustrations for the control complexity group, anticipated by Hypothesis H1a, was not confirmed. In fact, projects facing functional complexity had the highest mean for this category of representations.

Table 2: Mean differences in the use of knowledge representations among complexity groups.

Representation emphasis     Causal (N = 40)    Functional (N = 153)   Control (N = 480)   Mean of means   F-ratio
Generic formulas**          5.03a (0.92)       4.66b (0.99)           4.45c (0.98)        4.71            7.54
Structured databases†       5.11 (1.00)        4.97 (0.86)            4.82 (0.93)         4.97            2.70
Modular configuration*      4.46a,b (1.06)     4.82a (0.86)           4.56b (1.06)        4.61            3.80
Deep dissections            4.67 (1.22)        4.64 (0.95)            4.56 (0.96)         4.62            0.45
Rich illustrations*         4.58a,b (1.12)     4.90a (0.90)           4.69b (0.92)        4.72            3.22

†p < .10 *p < .05 **p < .01 for the omnibus test (F-ratio) for differences among group means for that representation emphasis

Note: Standard deviations appear in parentheses. Given occasional missing data, the F-ratios' denominator degrees of freedom in the five ANOVAs were, respectively, 622, 623, 617, 622, and 621; mean squared error terms were 0.963, 0.838, 1.044, 0.954, and 0.860. Means shown in boldfaced italics match the main predictions stated in Section 3 (not those predicting that the use will be not less than others). Means not sharing a subscript (a, b, or c) are significantly different by pairwise contrast at p < .05.
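A minimal sketch of the group comparisons reported in Table 2, again with hypothetical column and file names:

```python
import pandas as pd
from scipy import stats

# Hypothetical column names: five emphasis composites plus the complexity category.
emphases = ["generic", "structured", "modular", "deep", "rich"]
df = pd.read_csv("survey_responses.csv")  # hypothetical file name

for var in emphases:
    # One list of scores per complexity group (causal / functional / control)
    groups = [g[var].dropna() for _, g in df.groupby("complexity")]
    f_ratio, p_value = stats.f_oneway(*groups)            # omnibus test, as in Table 2
    means = df.groupby("complexity")[var].mean()
    print(var, means.round(2).to_dict(), f"F = {f_ratio:.2f}, p = {p_value:.3f}")
```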

Regarding the second part of each hypothesis, which refers to a use of certain representations that is not less than that of other projects, the results are also encouraging. Projects facing causal complexity have the highest mean on the use of structured databases, although the difference is only marginally significant. This is consistent with Hypothesis H3b, which stated that this use should be not less than that of other projects. Projects facing functional complexity had the second highest mean on the use of structured databases and of generic formulas, in line with Hypotheses H2b and H2c, respectively. Regarding structured databases, the average for the functional complexity group is identical to that for the overall sample. With respect to generic formulas, it is slightly below the overall average but not significantly different from it, while being significantly higher than the average for the control complexity group, which forms the bulk of the sample. Finally, projects facing control complexity have the smallest mean for the use of deep dissections, although it is only slightly different from the mean of means, which tends to contradict Hypothesis H1b. However, the difference is not statistically significant, which means that the means of all three groups should be treated as statistically indistinguishable.

Regarding the sources of relative performance within the groups, we used, for each of the three groups, a multiple regression of the performance variable on the five knowledge emphasis variables. Results are presented in Table 3. They show that the representation emphasis variables seem to explain a good proportion of the variance in innovation project performance in all three categories. The pertinent hypothesis that received the most support is H1c, which suggested that generic formulas would be related to performance in the control complexity group (p = 0.061). For the functional complexity group, the coefficient for the deep dissection variable (H2d) is positive but not statistically significant. For the causal complexity group, the coefficient for rich illustrations (H3c) is also positive but not statistically significant.
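A minimal sketch of these within-group regressions (results as in Table 3 below), using hypothetical variable names:

```python
import pandas as pd
import statsmodels.api as sm

emphases = ["generic", "structured", "modular", "deep", "rich"]
df = pd.read_csv("survey_responses.csv")  # hypothetical file name

for category, group in df.groupby("complexity"):            # causal, functional, control
    X = sm.add_constant(group[emphases])                     # five emphasis variables plus intercept
    model = sm.OLS(group["new_product_growth"], X, missing="drop").fit()
    print(category, f"R Square = {model.rsquared:.3f}")
    print(model.params.round(3), model.bse.round(3))         # B and standard errors, as in Table 3
```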

Table 3: Results of regression analyses with respect to performance

Representation emphasis    Causal (N = 40)             Functional (N = 153)        Control (N = 480)
                           R Square = 0.256†           R Square = 0.281***         R Square = 0.185***
                           B          Std. Error       B          Std. Error       B          Std. Error
Constant                   2.523      1.142            1.939      0.541            2.197      0.267
Generic formulas           -0.119     0.273            0.114      0.120            0.112      0.059
Structured databases       0.347      0.298            0.148      0.147            0.134      0.074
Modular configuration      0.197      0.216            -0.219     0.113            0.052      0.052
Deep dissections           -0.180     0.195            0.130      0.115            0.110†     0.066
Rich illustrations         0.235      0.310            0.444**    0.134            0.153*     0.069

†p < .10 *p < .05 **p < .01 *** p < .001

Beyond these nonsignificant coefficients, however, the pattern of results supports the main idea behind this group of performance hypotheses, namely that the customary representation emphases do not explain the performance differential as well. In fact, for the causal and functional groups, the representations that are emphasized relative to other groups, generic formulas and modular configurations respectively, even have negative coefficients. This suggests that producing these representations, although useful for solving problems, is not a source of advantage, perhaps because any competitor in the given sector can do the same.

In a further set of analyses we sought insights into how the five knowledge emphases vary among the three complexity groups. We used discriminant analysis to reduce the five dimensions implied by the five knowledge emphases to two dimensions. In this application, discriminant analysis first identified a set of coefficients to use in forming a weighted sum that would differentiate the three groups to the greatest possible overall extent. It then identified the next best weighting for differentiating the groups, in a manner independent of (orthogonal to) the first dimension. All knowledge emphasis variables were entered into the analysis simultaneously (not stepwise), and prior probabilities for group classification were computed from group sizes. The middle portion of Table 4 shows the coefficients thus obtained. When these coefficients are used to produce the two corresponding weighted sums, the three groups take the positions shown in Figure 3 in the two-dimensional space derived in the analysis. Specifically, these positions are based on the following (X, Y) pairs of values obtained from the two discriminant functions: Causal (.780, -.039), Functional (-.021, .234), Control (-.055, -.073).

Table 4. Knowledge emphasis variables' discriminant function coefficients and their correlations with discriminant function scores

Knowledge emphasis    Canonical discriminant function coefficients    Correlations with discriminant functions (structure matrix)
                      Function 1      Function 2                      Function 1      Function 2
Generic               0.998           0.390                           0.63            0.69
Structured            0.545           0.063                           0.37            0.53
Modular               -0.368          0.593                           -0.16           0.83
Deep                  -0.042          -0.539                          0.15            0.27
Rich                  -0.887          0.509                           -0.18           0.74

With three groups, the maximum possible number of dimensions (discriminant functions) that can be identified is two, and both functions were statistically significant. Specifically, the second function yielded Chi-square (4) = 10.07 (p = .039), with a corresponding canonical correlation of .13. The first function's canonical correlation was .19, and its significance is conventionally tested jointly with the remaining functions, yielding Chi-square (10) = 32.27 (p < .001). Thus there were two dimensions to interpret.
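A minimal sketch of such a discriminant analysis using scikit-learn, with hypothetical variable and file names; note that scikit-learn's scalings are not standardized in exactly the same way as SPSS-style canonical coefficients, but the group centroids and the structure-matrix correlations are directly comparable:

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

emphases = ["generic", "structured", "modular", "deep", "rich"]
df = pd.read_csv("survey_responses.csv").dropna(subset=emphases + ["complexity"])  # hypothetical names

lda = LinearDiscriminantAnalysis(n_components=2)           # at most (3 groups - 1) = 2 functions
scores = lda.fit_transform(df[emphases], df["complexity"])  # discriminant scores, shape (n, 2)

# Group centroids in the two-dimensional discriminant space (cf. Figure 3)
centroids = pd.DataFrame(scores, columns=["f1", "f2"]).groupby(df["complexity"].values).mean()
print(centroids.round(3))

# Structure matrix: correlations of the original variables with the discriminant scores (cf. Table 4)
structure = pd.DataFrame(
    {f"f{i + 1}": [np.corrcoef(df[v], scores[:, i])[0, 1] for v in emphases] for i in range(2)},
    index=emphases,
)
print(structure.round(2))
```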

Figure 3. Average positions (group centroids) of firms in each of the three categories of complexity within the two-dimensional space derived by discriminant analysis

The right-hand portion of Table 4 is key to interpreting the bases for the groups' positions in the two-dimensional space. It shows the correlations of the original variables with the two discriminant functions. As an aid to interpretation, the knowledge variables themselves have been plotted in Figure 4 in accordance with the correlations in Table 4.

Figure 4. Correlations of knowledge emphasis variables with discriminant functions (structure matrix) represented in a two-dimensional plot

Looking first at the vertical dimension in Figure 4 (corresponding to function 2), modular and rich representations are identified as primarily distinguishing the three groups vertically. Looking back at Figure 3, the vertical dimension can be seen to distinguish the functional complexity group from the other two groups. In addition, generic and, to a lesser extent, structured representations contribute to the vertical differentiation that contrasts the functional complexity group with the others. Figure 4 and its source (the right-hand portion of Table 4) indicate that the horizontal distinction (function 1) derives primarily from differences among groups in generic knowledge representations, although there is some contribution from structured knowledge. It is apparent in Figure 3 that the horizontal dimension distinguishes the causal category from the others. These additional analyses suggest an alternative dimensionality of knowledge representations, different from the abstraction and complexity dimensions. These new dimensions may reflect aspects that our framework did not emphasize but that are crucial in projects, such as the time and the level of skill required to produce these representations. Such dimensions may better explain both the use and the performance impact of knowledge representations, and they warrant further investigation.

6. Discussion and conclusions

The results provided substantial support for our hypotheses, particularly for the idea that projects facing different types of complexity have different knowledge production emphases. The second series of hypotheses, namely that particular knowledge emphases differentiate the performance of projects facing each type of complexity, also received a degree of support, especially with respect to the overall role of knowledge in explaining performance and to the differences between the pattern of emphases and the pattern of performance correlations. This support, in turn, bolsters the original theoretical analysis of the nature and roles of knowledge in solving complexity-generated problems in innovation projects.

Some results in Table 3 dovetailed with resource-based theory (Wernerfelt, 1984; Barney, 1991), which implies that relatively rare practices within a domain may be especially beneficial because they are not as easy for competitors to imitate or acquire. One practical consequence is that project managers should not focus solely on developing the kinds of knowledge that appear customary in their field. Instead, they should also painstakingly cultivate the capability to develop and protect other forms of knowledge, especially those that competitors would find difficult to produce. For example, in projects facing control complexity, they should encourage the development of generic formulas, perhaps by sharing information and resources with other projects or by cultivating links with universities and public labs, which can assist in the development of such formulas.

This first attempt at empirical validation suggests that the basic framework and the concept of emphasis in knowledge representation hold promising explanatory power for the actions and performance of innovation projects. Further research, with measures tailored more precisely and fully to the revised constructs of this theory, may lead to new insights into innovation processes in the firm, giving a more important place to sophisticated conceptualizations of knowledge. In practical terms, showing that different emphases in knowledge production are related to performance may garner support for knowledge production activities in innovation projects by highlighting the value that these activities may help create.

Bios

Serghei Floricel, PhD, MBA, BEng, is a professor in the Department of Management and Technology at the University of Quebec at Montreal. His research focuses on the management of innovation and large projects. His research results have been published, among others, in the International Journal of Project Management, R&D Management, Research-Technology Management, International Journal of Technology Management, International Journal of Innovation Management, and International Journal of Entrepreneurship and Innovation Management, as well as presented at many conferences, including the Academy of Management Conference. He is a co-author of the book The Strategic Management of Large Engineering Projects (MIT Press, 2001). He was also the research director of the Managing Innovation in the New Economy (MINE) research program, which received a $3 million grant from the Social Sciences and Humanities Research Council of Canada (SSHRC), and the principal investigator for two projects financed by the Project Management Institute (PMI) and three projects financed by the SSHRC. In 2001–2002 he was the president of the Technology and Innovation Management division of the Administrative Sciences Association of Canada. He also developed a website that aims to provide information and a discussion forum for the practitioners of innovation projects (www.gpi.uqam.ca). In addition, working with the Research Institute for Small and Medium-size Enterprises of Trois-Rivières, he developed a risk assessment questionnaire for the innovation projects of small and medium-size firms (www2.uqtr.ca/erisc/). He holds a doctorate in administration, a Master of Business Administration, and a bachelor's degree in engineering.

John L. Michela, PhD, MA, BSc, is an associate professor with the Personnel and Organizational Psychology graduate program at the University of Waterloo in Ontario, Canada. His research applies theories from social psychology, often along with advanced statistical methods, to address management issues that involve leadership and teams seeking product innovation or broader organizational or community change. The topics of vision in leadership and emotional intelligence in teamwork are his current focus. His contributions to various books or book series include chapters in the Handbook of Organizational Climate and Culture (Sage, 2000) and the Annual Review of Psychology (Annual Reviews, 1980); his journal articles have appeared in the Journal of Product Innovation Management, Organizational Research Methods, and the Journal of Personality and Social Psychology (JPSP), among others. He previously received research grant support from the U.S. National Institutes of Health and the Spencer Foundation, and he is currently funded by the Social Sciences and Humanities Research Council of Canada under its program for knowledge transfer from academia to the broader society. He founded and now serves as director of the Waterloo Organizational Research and Consulting Group (WORC Group), which provides consulting services regionally and internationally. He contributed recently to administering the Psychologically Healthy Workplace Awards Program of the Ontario Psychological Association and, previously, to organizing conference programs with the American and Canadian Psychological Associations. He has been on the editorial boards of two of the three sections of JPSP (Attitudes/Social Cognition and Personality/Individual Differences) and was an associate editor of the International Journal of Organizational Analysis. After obtaining his PhD from the University of California, Los Angeles, he joined the faculty group in Social and Organizational Psychology at Teachers College, Columbia University, prior to relocating to Waterloo nine years later.

References

Ahuja, G., & Katila, R. (2004). Where do resources come from? The role of idiosyncratic situations. Strategic Management Journal, 25(8–9): 887–907.

Arthur, W. B. (1989). Competing technologies, increasing returns, and lock-in by historical events. Economic Journal, 99: 116–131.

Bush, V. (1945). Science: The Endless Frontier. Transactions of the Kansas Academy of Science, 48(3): 231–264.

Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1): 99–120.

Bass, F. M. (1969). A new product growth model for consumer durables. Management Science, 15(5, Theory Series): 215–227.

Bohn, R. E. (1994). Measuring and managing technological knowledge. Sloan Management Review, Fall: 61–73.

Booth, B., & Zemmel, R. (2004). Opinion: Prospects for productivity. Nature Reviews Drug Discovery, 3: 451–456.

Brown, S.L., & Eisenhardt, K. M. (1995). Product development: Past research, present findings, and future directions. Academy of Management Review, 20: 343–378.

Bunge, M. (1967). Technology as applied science. Technology and Culture, 7(3): 329–347.

Carlile, P. R. (2002). A pragmatic view of knowledge and boundaries: Boundary objects in new product development. Organization Science, 13(4): 442–455.

Callon, M. (1986). Éléments pour une sociologie de la traduction. La domestication des coquilles Saint-Jacques et des marins pêcheurs en baie de Saint-Brieuc [Elements for a sociology of translation: The domestication of the scallops and the fishermen of St. Brieuc Bay]. L'Année Sociologique, 36: 169–208.

Chesbrough, H. W. (2003). Towards a dynamics of modularity: A cyclical model of technical advance. In A. Prencipe, A. Davies, & M. Hobday (Eds.), The Business of Systems Integration (pp. 174–198). Oxford: Oxford University Press.

Chesbrough, H. W., & Kusunoki, K. (2001). The modularity trap: Innovation, technology phase shifts and the resulting limits of virtual organizations. In I. Nonaka & D. Teece (Eds.), Managing Industrial Knowledge (pp. 202–230). London: Sage.

Clark, K. B. (1985). The interaction of design hierarchies and market concepts in technological evolution. Research Policy, 14: 235–251.

Dahl, D. W., Chattopadhyay, A., & Gorn, G. J. (1999). The use of visual mental imagery in new product design. Journal of Marketing Research, 36(1): 18–28.

Dunne, D. D., & Dougherty, D. (2009). The collective sense of the scientist. Academy of Management Proceedings, pp. 1–6.

Eisenhardt, K. M. (1989). Making fast strategic decisions in high-velocity environments. Academy of Management Journal, 32(3): 543–576.

Ewenstein, B., & Whyte, J. (2009). Knowledge practices in design: The role of visual representations as 'epistemic objects'. Organization Studies, 30(1): 7–30.

Fleming, L., & Sorenson, O. (2004). Science as a map in technological search. Strategic Management Journal, 25(8–9): 909–925.

Floricel, S., & Dougherty, D. (2007). Where do games of innovation come from? Explaining the persistence of dynamic innovation patterns. International Journal of Innovation Management, 11(1): 65–92.

Floricel, S., & Miller, R. (2003). An exploratory comparison of the management of innovation in the New and Old Economy. R&D Management, 35(5): 501–525.

Garud, R. (1997). On the distinction between know-why, know-how, and know-what in technological systems. In J. Walsh & A. Huff (Eds.), Advances in Strategic Management (pp. 81–101). Greenwich, CT: JAI Press.

Gavetti, G., & Levinthal, D. (2000). Looking forward and looking backward: Cognitive and experiential search. Administrative Science Quarterly, 45(1): 113–137.

Gibbs, J. B. (2000). Mechanism-based target identification and drug discovery in cancer research. Science, 287(5460): 1969–1973.

Griffin, A., & Hauser, J. R. (1993). The voice of the customer. Marketing Science, 12(1): 123–142.

Gutman, J. (1982). A means-end chain model based on consumer categorization processes. Journal of Marketing, 46(2): 60–72.

Hair, J. F., Anderson, R. E., Tatham, R. L. & Black, W. C. (1999). Multivariate Data Analysis (5th ed.). New York: Prentice Hall.

Hargadon, A., & Sutton, R. I. (1997). Technology brokering and innovation in a product development firm. Administrative Science Quarterly, 42(4): 716–749.

Henderson, K. (1999). On Line and On Paper: Visual Representations, Visual Culture, and Computer Graphics in Design Engineering. Cambridge, MA: MIT Press.

Hirsh, R. F. (1989). Technology and Transformation in the American Electric Utility Industry. Cambridge, UK: Cambridge University Press.

Hughes, T. P. (1983). Networks of Power. Baltimore: Johns Hopkins University Press.

Kamb, A., Wee, S., & Lengauer, C. (2006). Why is cancer drug discovery so difficult? Nature Reviews Drug Discovery, 6: 115–120.

Kline, R. (1987). Science and engineering theory in the invention and development of the induction motor, 1880–1900. Technology and Culture, 28(2): 283–313.

Knorr Cetina, K. (1999). Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.

Knorr Cetina, K. (1997). Sociality with objects: Social relations in postsocial knowledge societies. Theory, Culture & Society, 14(4): 1–30.

Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press.

Leifer, R., McDermott, C. M., Colarelli O'Connor, G., Peters, L. S., Rice, M., & Veryzer, R. W. (2000). Radical Innovation: How Mature Companies Can Outsmart Upstarts. Boston: Harvard Business School Press.

MacCormack, A., Verganti, R., & Iansiti, M. (2001). Developing products on "Internet time": The anatomy of a flexible development process. Management Science, 47(1): 133–150.

Mayr, E. (2000). Biology in the twenty-first century. BioScience, 50(10), 895–897.

Mitcham, C. (1994). Thinking Through Technology. Chicago: University of Chicago Press.

Murray, F. (2002). Innovation as co-evolution of scientific and technological networks: Exploring tissue engineering. Research Policy, 31(8–9): 1389–1403.

Nightingale, P. (1998). A cognitive model of innovation. Research Policy, 27(7): 689–709.

Nightingale, P., & Martin, P. (2004). The myth of the biotech revolution. Trends in Biotechnology, 22(11): 564–569.

Nonaka, I. (1994). A dynamic theory of organizational knowledge creation. Organization Science, 5(1): 14–37.

Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.

Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.

Robinson, E. L. (1937). The steam turbine in the United States, III: Developments by the General Electric Company. Mechanical Engineering, 59: 239–256.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.

Rosenberg, N. (1982). Inside the Black Box: Technology and Economics. Cambridge, U.K.: Cambridge University Press.

Shenhar, A. J. (2001). One size does not fit all projects: Exploring classical contingency domains. Management Science, 47(3): 395–414.

Simon, H. A. (1981). The Sciences of the Artificial. Cambridge, MA: MIT Press.

Simondon, G. (1989). Du mode d'existence des objets techniques [On the mode of existence of technical objects]. Paris: Aubier.

Stokes, D. E. (1997). Pasteur's Quadrant: Basic Science and Technological Innovation. Washington, DC: Brookings Institution.

Thomke, S. H. (1998). Managing experimentation in the design of new products. Management Science, 44(6): 743–762.

Ulrich, K. (1995). The role of product architecture in the manufacturing firm. Research Policy, 24: 419–440.

Ulrich, K.T., & Eppinger, S.D. (2000). Product Design and Development (2nd ed.). New York: McGraw-Hill.

Vincenti, W. G. (1990). What Engineers Know and How They Know It. Baltimore: Johns Hopkins University Press.

von Hippel, E. S. (1986). Lead users: A source of novel product concepts. Management Science, 32(7): 791–805.

Wernerfelt, B. (1984). A resource-based view of the firm. Strategic Management Journal, 5(2): 171–180.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2010 Project Management Institute
