Abstract
This paper focuses on the measurement of success and sustainability during the transfer of processes, procedures, and IT application knowledge. Project and program managers will learn about the importance of setting the measurement baseline for knowledge transfer related to ERP/IT rollouts and how this measurement process needs to be tracked and managed during the whole project life cycle.
The paper first outlines the root definition of a knowledge transfer process and describes environmental constraints within typical knowledge transfer scenarios. It presents models to measure success in both an efficient and effective way and highlights the evaluation models from DeLone and McLean and Halonen and Thomander, putting both into perspective with concrete examples of knowledge transfer measurement tools for IT rollouts.
As a summary, net benefits success measurements are of significant importance; however, they cannot be interpreted and analyzed without measurements of system and information quality. The processes, metrics, and tools being used should be incorporated into the related project management plans. Both the meaning of each of the measurement results and the related objectives of knowledge transfer measurement should be transparently communicated to all involved stakeholders.
Introduction
The measurement of project success is always a challenge, as it needs to happen on a continuous basis throughout the whole project life cycle and it needs to be based on a resilient and measurable baseline of project objectives from the very beginning.
This presentation turns the spotlight on the measurement of success and sustainability during knowledge transfer of processes, procedures, and IT application knowledge. Although this is one of the key areas of both project and program management, related integration into project management plans is often insufficient. Many project risks arise out of an inefficient or insufficient knowledge transfer setup. Parameter setup and continuous measurement help to keep the project on schedule and within budget and scope, and help to meet stakeholder expectations. In addition, sustainable success with knowledge transfer sessions keeps project constraints under control, limiting the risks of information loss or the need for reiteration.
While rolling out an IT project into several regions or countries, program and project managers are often additionally challenged by different cultural expectations around communication and integration of decisions.
This paper presents and evaluates applicable models based on analysis of the reasons why knowledge transfers are often inefficient and/or insufficient. Project and program managers will learn about the importance of setting the measurement baseline and how this measurement process needs to be tracked and managed during the whole project life cycle. As a result, project risks related to lack or loss of information are both mitigated and recognized very early in cases where they occur.
Definitions
Forms of Knowledge
There is no universally accepted definition of knowledge, as the usage of the term depends on the context. However, for the purpose of this paper, it is helpful to define and separate major forms in which knowledge can be owned and transferred. According to Metrics for Knowledge Transfer from Public Research Organizations in Europe (European Commission's Expert Group on Knowledge Transfer Metrics, 2009), major knowledge forms can be recognized as:
- Codified knowledge: expressed through language (e.g., patents, process documentation, or literature)
- Internalized knowledge: acquired and held by people who have learned codified knowledge (e.g., through instructions, study, and/or experience)
- Embedded knowledge: built into tools or artifacts; often “ready to use” (as in software or machinery)
Knowledge can be understood as “a state of mind, object, process or prerequisite of accessing information and in our paper, especially skills” (Alavi & Leidner, 2001). We also distinguish between tacit and explicit knowledge. According to Alavi and Leidner (2001), its cultural, functional, embedded, individual, social, and pragmatic nature needs to be taken into consideration when looking at knowledge management and related system environments.
Knowledge can be found and shared with the help of interaction between actors. Propp (1999) narrows this down to individual interaction, which can influence organizational or team-based knowledge either positively or negatively. According to Propp, supportive communication reveals available knowledge, and it helps the actors in a group to more easily accept what is relevant, pertinent, or useful in a given scenario or situation. Understanding this, one can deduce that disruptive communication may discourage actors in a group who try to find additional information or feel the need to modify the provided information into a more appropriate or convenient format.
This behavior already indicates one of the major risks: As knowledge transfer, especially related to ERP rollouts, is connected to verbal and written communication, any kind of disturbance during the flow of the information leads to at least inefficient or even incomplete or incorrect transfer of knowledge. In A Guide to the Project Management Body of Knowledge (PMBOK® Guide) - Fifth Edition, this is referred to as noise during transmission of information in the basic communication model (PMI, 2013, fig. 10-4, p. 294).
Root Definition of a Knowledge Transfer Process
A root definition for potential knowledge transfer processes in IT rollouts can be found using the CATWOE analysis (Checkland & Scholes, 1999). CATWOE stands for Customers, Actors, Transformation process, Worldview, Owners, and Environmental constraints. With this checklist approach, the surrounding environmental parameters in the system, rather than the issue itself, are investigated in an effort to isolate the root definition of a given process.
Related to the findings of Horecica (2010), the CATWOE outcome for knowledge transfer processes in IT rollouts is as follows:
- Customers: A group of stakeholders receives the knowledge transfer through selected communication channels. This stakeholder group is usually a target group of (key) users and, more generally defined, the organization that is implementing the ERP/IT system (e.g., with support of an external implementation partner).
- Actors: These can be defined as the team members of the project, which can be divided into two groups: those who are responsible for conducting the knowledge transfer and those who are expected to receive it. In IT rollout projects, roles like consultants, sponsors, key users, and, most critically, end users are usually defined. Depending on the organizational (project) structure (PMI, 2013, p. 2–26), the management team is responsible for offering the required level of support, proper resource capacities, and time for the knowledge transfer processes. Details around related roles, responsibilities, and competencies are defined in the human resources management plan (PMI, 2013, p. 264).
- Transformation process: This describes the knowledge transfer processes during the IT rollout. Knowledge can be understood as provided information that is transferred to the recipients during this process. Different mechanisms and tools are applied, which are presented in more detail in the following sections.
- Worldview: In this parameter, both internal and external areas of influence are consolidated, which can be interpreted as enterprise environmental factors (EEF) and organizational process assets (OPA) (PMI, 2013, pp. 23–24 and p. 29). However, from the perspective of the organization being affected by the knowledge transfer processes, value-adding business processes and the economic environment need to be considered as well. Horecica (2010) stresses that the organization needs to “have a successful knowledge transfer process in order to optimally operate and be able to self-support and achieve the targeted operational objectives.” With the support of an IT or ERP system, knowledge transfer processes are key factors in sustainably implementing any business process improvements, which have been designed upfront or in parallel with the IT rollout. Having an upfront, defined scope and well-documented expectations around outcomes (e.g., improvement of KPIs) is mandatory.
- Owners: Multiple sources in the literature cite lack of business ownership as one of the top reasons why IT rollouts fail. Accordingly, knowledge transfer processes should be assigned to their business work stream and a dedicated business process owner. These owners should have the authority from the business to make go/no-go decisions. Within both the implementer and client organizations, these roles need to be defined upfront. A multi-criteria decision analysis (PMI, 2013, p. 271) can be one of the tools and techniques used to identify them.
- Environmental constraints: This parameter can be covered to a large extent by the enterprise environmental factors, which “refer to conditions, not under the control of the project team, that influence, constrain or direct” (PMI, 2013, p. 29) the implementation or knowledge transfer processes. Examples related to an IT rollout, according to Horecica (2010), include involved third-party product development parties, the local and/or legal environment, and, on a soft-skills level, the overall skill sets and experience around business or IT process knowledge. In addition, the level of interest of the actors assigned to the project, in combination with their motivation and ability to influence changes, might be an environmental constraint, which ideally will be identified upfront as a risk.
Focus on Environmental Constraints
During knowledge transfer activities, several factors can influence the choice of communication technology:
- Urgency of the need for information
- Availability of technology
- Ease of use
- Project environment
- Sensitivity and confidentiality of information (PMI, 2013, p. 292–293)
All these factors influence the models to be applied when measuring the success of knowledge transfer. Influencing parameters can be found among both EEFs and OPAs. On the level of EEFs, mainly the organizational culture and structure influence success and sustainability. Government or industry standards or regulations, as well as the project management information system, can limit results here, too. EEFs have a strong influence on the outcome.
On the level of OPA, mainly the quality and level of details around historical information, documented lessons learned, and provided templates from previous IT/ERP implementations set a true baseline. Lack of quality and/or quantity here could be seen as one root cause for an inefficient knowledge transfer.
Policy, procedure, or process limitations, or frameworks around guidelines for communication management, might also hinder or at least influence knowledge transfer. As a recommendation, the approach for applying knowledge transfer models should be outlined during the development of the project management plan and defined in detail as part of the communications management plan.
Additionally, in the PMBOK® Guide, communication skills around knowledge transfer refer to the educational aspect, in order to “increase team's knowledge so that they can be more effective” (PMI, 2013, p. 288). At each point, deliverables and knowledge are transferred between the project and operations for implementation of the delivered work (PMI, 2013, p. 13).
Additional Areas to Be Considered
As also emphasized in Metrics for Knowledge Transfer from Public Research Organizations in Europe (European Commission's Expert Group on Knowledge Transfer Metrics, 2009), the technology is not the only area of knowledge in which transfer processes should be identified as important. Economical and commercial impacts are complemented by personal benefits and social and cultural aspects. In order to create a structured overview of the area(s) of the required knowledge transfer, Metrics for Knowledge Transfer from Public Research Organizations in Europe (European Commission's Expert Group on Knowledge Transfer Metrics, 2009) suggests providing input based on the following questions:
- In what forms can knowledge be transferred, carried, and submitted?
- Through which kind of mechanisms or channels can knowledge transfer take place?
- How and by whom is knowledge being transferred turned into (measurable) benefits for the organization?
- What approaches and strategies match the channels identified, and how can involved stakeholders organize required knowledge transfer activities per project phase?
This additional, broader approach will support identifying valuable metrics criteria for applying measurements around success and sustainability of knowledge transfer processes.
Motivations for Measurement
As highlighted by Gardner, Fong, and Huang (2010), and adjusted to the scenario of IT rollouts, there are several key reasons to measure effectiveness of knowledge transfer activities:
- To demonstrate the benefit of resulting improvement in knowledge to stakeholders
- To ensure sufficient returns on investment
- To provide benchmarks for comparison across the industry
- To promote competition in a global marketplace
- To educate on society's need for innovation
- To support future requests for funding
The vast amount of published literature on this subject indicates the importance of knowledge transfer measurement for all stakeholders. As an extension of this paper, one can continue comparing valuable and sustainable metrics for the following additional reasons:
- To examine additional metrics used to quantify and qualify the effectiveness of knowledge transfer activities
- To explore and further develop innovative metrics
- To uncover regional differences in the evaluation of knowledge transfer processes and outcomes
Following Horecica (2010), the role of information during the knowledge transfer processes is critical. If information related to the IT rollout is not made available to the responsible persons, or if it is not sufficiently accurate or complete, then it also cannot be made sufficiently available to the recipients during the process. Conversely, if baseline information related to transferrable business or IT processes is not accurately and completely transferred to the consultants, there is a high risk that processes designed in the final solution will not match the business needs. As a result, any resulting gaps will need to be addressed and will probably trigger change requests to adjust the information system, to modify the timeline or involved resources, or even to re-design a part of the required solution.
According to Horecica (2010), an ideal measurement control system for knowledge transfer relies especially on the information that the system is able to provide: at the right time, with the right relevance and quality, and through proper channels.
One key motivation for knowledge transfer according to Holsapple and Lee-Post (2006) is achieving beneficial results. Research done by Holsapple and Lee-Post (2006) indicates evidence that a system which does not provide user satisfaction is less likely to produce beneficial results for an organization or user community. In the same paper, the authors conclude that a well-fitting IT solution can increase related project task relevance for users and improve the adoption rate of the new system. This is closely related to the overall success of an IT implementation.
Taking the previously discussed findings into account, organizations considering an IT rollout should focus on building a dedicated project team, consisting not only of IT personnel but mainly of operational staff. In addition, organizations need to handle change management during implementations. It is recommended that executive management be involved and supportive during all phases of the implementation. As part of their responsibilities, they should assign sufficient capacities to the project and, especially, free up time from daily work for all involved project team members, so they can implement potential system and business process changes and focus on learning new tasks for both the new IT system and the business processes.
Knowledge Transfer Scenarios
Transfer Mechanisms
As described in Knowledge Transfer in Service Business Development (Konttinen, Smedlund, Rilla, Kallio, & van der Have, 2011), a successful knowledge transfer mechanism focuses on the type of knowledge to be transferred, takes the expectation of the recipient into account, and is individualized. A trustworthy and open knowledge transfer environment supports successful knowledge migration. Due to “the diverse nature of knowledge, different phases of service development process, the level of end-user engagement and contextualization of knowledge as well as related costs of knowledge transfer should be considered when designing knowledge transfer activities and policies targeted to service development” (Konttinen et al., 2011). In the same work, six knowledge transfer mechanisms were selected for closer analysis to gain more detailed information on their functioning and effectiveness. As part of these mechanisms, support activities should be considered to ensure the internalization of knowledge.
Categories
Knowledge transfer mechanisms differ based on how many parties are involved and the roles of intermediaries within the network where knowledge transfer processes occur. Konttinen et al. (2011) defined six categories based on the analysis of identified mechanisms. Five of these, presented below, should be examined as being relevant for scenarios around IT rollouts.
Media
Different types of media are used as a platform for sharing knowledge between parties. Usually, an intranet portal or Microsoft SharePoint provides this platform. Owners can be assigned to certain knowledge areas and/or content that needs to be developed. The PMBOK® Guide refers to this method as pull communication (PMI, 2013, p. 295).
Training
System/solution capabilities and process definitions are transferred with education events or professional training courses. In addition, an internal virtual platform is provided for mutual learning and sharing of knowledge among participants.
Project Cooperation
Process definitions and IT capabilities are transferred between cooperation projects. If these projects each include multiple (and partially the same) participants, the structure will resemble the previous training scenario. However, the difference is that each project usually has distinct, project-specific objectives and some of these may allow for an efficient sharing of knowledge, while training usually “concentrates on developing the overall competencies and general skills of the trainees” (Konttinen et al., 2011).
Communities
Knowledge transfer is provided through networking of larger communities. According to Konttinen et al. (2011), this could be business benchmarking workshops, cross-department events, or service innovation roundtables. The objective is to create a knowledge exchange community or network. Experts and learners can move flexibly in and out of such a community. Events may be organized periodically or on demand around certain topics. Business process owners may operate as organizers or facilitators of such events.
Partnerships
Long-term cooperation between implementation consultants and knowledge recipients can help transfer capabilities or processes to other stakeholders. Konttinen et al. (2011) refer to strategic matchmaking sessions, company circles, joint marketing/sales approaches, mutual development of intellectual property, etc.
Models to Measure Success
Efficacy, Efficiency, and Effectiveness
Horecica (2010) categorizes success and sustainability of knowledge transfer as efficacy, efficiency, and effectiveness. Around each of these three categories, models should be identified and applied during the project life cycle.
The efficacy of the knowledge transfer process should be quantified before the actual rollout/go-live through tests conducted after user training, using user test scores. Horecica (2010) recommends an analysis of the overall average, distribution, and extreme scores. Most rollout projects miss or fail to measure knowledge transfer efficacy and act accordingly in a timely manner. As a result, users are not completely aware of what is expected of them or how they need to use the new system; they might use it incorrectly or even feel unwilling or afraid to use it.
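The three checks Horecica recommends on post-training test scores (average, distribution, extremes) can be sketched in a few lines of Python. This is a minimal illustration: the score data and the pass threshold of 70 are assumptions, and a real project would pull scores from its training or testing tooling.

```python
from statistics import mean, stdev

def efficacy_summary(scores, pass_threshold=70):
    """Summarize user test scores collected after a training session."""
    return {
        "average": mean(scores),
        "std_dev": stdev(scores),          # spread of the distribution
        "minimum": min(scores),            # extreme (low) score
        "maximum": max(scores),            # extreme (high) score
        "below_threshold": [s for s in scores if s < pass_threshold],
    }

# Illustrative scores from one user group after key user training
scores = [82, 91, 64, 77, 58, 88, 73]
summary = efficacy_summary(scores)
# A low average or a cluster of below-threshold scores signals that a
# follow-up knowledge transfer session is needed before go-live.
```

A time-series of these summaries (one per training session) makes the improvement, or lack of it, visible before go-live.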
Knowledge transfer efficiency should be measured throughout the project implementation life cycle. Horecica (2010) suggests measuring the number of issues opened by the knowledge recipient team versus the number of issues solved by the implementation team. Another relevant parameter of efficiency of knowledge transfer is the average response period from the time an issue is registered until the time it is solved and closed. The challenge with this measurement is that registered issues are often only counted, independent of their level of complexity, impact, or risk to the project. But even if these parameters are tracked, Horecica (2010) states that they are often subjective and dependent on the user who registered the issue.
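The two efficiency parameters above, the share of opened issues that get solved and the average response period, can be derived directly from an issue log. The following sketch is illustrative; the issue records, field names, and dates are assumptions.

```python
from datetime import datetime

# Assumed issue log: each record notes when an issue was opened by the
# knowledge recipient team and when (if ever) the implementation team closed it.
issues = [
    {"opened": datetime(2024, 3, 1), "closed": datetime(2024, 3, 3)},
    {"opened": datetime(2024, 3, 2), "closed": datetime(2024, 3, 8)},
    {"opened": datetime(2024, 3, 5), "closed": None},  # still open
]

solved = [i for i in issues if i["closed"] is not None]
solve_ratio = len(solved) / len(issues)                 # solved vs. opened

# Average response period in days, over the solved issues only
avg_days = sum((i["closed"] - i["opened"]).days for i in solved) / len(solved)
```

As the text cautions, such counts ignore complexity, impact, and risk per issue, so they should be read alongside a qualitative classification of the issues.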
In order to measure the effectiveness of knowledge transfer, several approaches can be used. Horecica (2010) suggests conducting several questionnaire campaigns throughout the project life cycle, both during the implementation and after go-live. The purpose is to assess how knowledge recipients evaluate the knowledge transfer process, and especially how confident they feel asking for support from both internal and external sources. Also, effectiveness is indicated by how many registered issues were directly solved by the internal team and how many required external support. The quantity of external requests and the quantity of change requests that don't refer to operational changes, according to Horecica (2010), are both good indicators of how effective any knowledge transfer and/or training sessions were.
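As an illustration of the internal-versus-external indicator described above, the following sketch computes the share of solved issues that were resolved by the internal team without external support; the field names and sample records are assumptions.

```python
def internal_resolution_rate(issues):
    """Fraction of solved issues resolved without external support."""
    solved = [i for i in issues if i["solved"]]
    if not solved:
        return 0.0
    internal = [i for i in solved if not i["external_support"]]
    return len(internal) / len(solved)

# Assumed issue records from the post-go-live support log
issues = [
    {"solved": True, "external_support": False},
    {"solved": True, "external_support": True},
    {"solved": True, "external_support": False},
    {"solved": False, "external_support": False},  # still open, not counted
]
rate = internal_resolution_rate(issues)  # 2 of 3 solved issues were internal
```

A rising rate across questionnaire campaigns would support the claim that knowledge transfer sessions were effective.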
Both the meaning of each of these results and the objectives related to each of these measurements should be transparently communicated to all involved parties. Horecica (2010) indicates that in most implementation/rollout projects, only a light communication plan is included, but this rarely includes metrics related to knowledge transfer. Often, metrics are only related to classic project constraints (PMI, 2013, p. 124) around schedule, resources, and/or budget.
Recommendations
Gardner, Fong, and Huang (2010) recommend differentiating between the following three parameters when measuring the effectiveness of knowledge transfer activities.
Input versus Output
According to Gardner, Fong, and Huang (2010), inputs are easier to measure than outputs and, as a result, inputs are referred to more often. However, inputs show very little indication of actual benefits. Input measures and resulting values prove that certain activities have occurred, but only output measures can support evaluation of the results of those efforts. What sounds logical at first glance is still often missed when it comes to statements around knowledge transfer effectiveness in real-life scenarios. The main reason is that output results are more difficult to measure (consistently).
Quality versus Quantity
As previously stated, and underlined by Horecica (2010), the challenge around these two parameters is that often only quantities are counted (e.g., registered issues, support tickets, etc.), and it's difficult to distinguish between different quality parameters. For example, receiving workshop feedback sheets from two different groups—one large and one small, each with a different business relation to the workshop topic—might result in misleading interpretations. Each of the feedback sheet results from the small group might be considered of higher rating in comparison to the large group, just because of the different weighting/number of participants.
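The group-size pitfall described above can be made concrete with a small example: comparing per-group averages against a participant-weighted average. The ratings and group sizes below are illustrative assumptions.

```python
# Assumed workshop feedback ratings (1 = poor, 5 = excellent) from two groups
groups = {
    "small_group": {"ratings": [5, 5, 4]},                        # 3 people
    "large_group": {"ratings": [4, 3, 5, 4, 3, 4, 4, 3, 4, 4]},  # 10 people
}

# Naive view: one average per group, regardless of group size
per_group_avg = {name: sum(g["ratings"]) / len(g["ratings"])
                 for name, g in groups.items()}

# Participant-weighted view: every individual rating counts equally
all_ratings = [r for g in groups.values() for r in g["ratings"]]
weighted_avg = sum(all_ratings) / len(all_ratings)
# per_group_avg makes the small group look stronger; weighted_avg is less
# misleading when comparing feedback across groups of different sizes.
```

Weighting is no cure-all, as the text notes the groups may also differ in their business relation to the topic, but it removes the most obvious distortion.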
Objectivity versus Subjectivity
According to Gardner, Fong, and Huang (2010), subjective measurement results are biased by nature and therefore often not useful for evaluations. Simply stated, personal opinions are not useful for benchmarking or comparison. Objective measurements therefore should be based on actually measurable figures. The results should be comparable and should fulfill the requirement of being observable, consistently repeatable, and impartial.
To capture and compare results of knowledge transfer activities, Gardner, Fong, and Huang (2010) refer to the time-series versus cross-sectional analysis, which is only briefly mentioned in this paper for reasons of completeness. The time-series analysis basically compares results/improvements over time, as well as analyzes the underlying reasons for change. The cross-sectional analysis is more like a snapshot of a specific time, comparing separate knowledge transfer scenarios.
Applied to IT/ERP rollouts, a cross-sectional analysis could indicate the system readiness of each department after the analysis phase (having conducted key user training and workshops), while results from a time-series analysis could describe the system readiness of the logistics work stream user group after initial trainings, workshops, test-script support week, and end-user trainings.
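The contrast between the two analyses can be sketched on a toy readiness data set: a time-series view follows one work stream across milestones, while a cross-sectional view compares all work streams at a single milestone. Work stream names, milestone names, and scores below are assumptions.

```python
# Assumed readiness scores (0-100) per work stream and milestone,
# e.g., gathered from checklists or practice tests after each session
readiness = {
    "logistics": {"kickoff": 35, "key_user_training": 60, "test_week": 80},
    "finance":   {"kickoff": 40, "key_user_training": 55, "test_week": 70},
}

# Time-series analysis: one work stream tracked over successive milestones
logistics_trend = list(readiness["logistics"].values())

# Cross-sectional analysis: all work streams compared at one milestone
test_week_snapshot = {ws: m["test_week"] for ws, m in readiness.items()}
```

The trend answers "is the logistics user group improving?", while the snapshot answers "which department is behind after the test week?".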
As an overall recommendation, in order to decide which knowledge transfer metrics are to be used, underlying motivations and potential hidden agendas must be identified and examined. In general, the focus should be on measuring outputs rather than inputs, and on assuring the results are comparable, measurable, and consistent. Gardner, Fong, and Huang (2010) state that even today, providing practical evaluations of knowledge transfer effectiveness still challenges IT transfer professionals. A variety of different metrics are applied by each organization in order to measure related performance. Often data is consolidated, compiled, and analyzed based on surveys, feedback sheets, or checklists; however, more and more organizations are also moving toward more abstract, subjective measures like case studies or reference reports. There is no global evaluation standard in the IT transfer industry.
DeLone and McLean Success Model
The main objective of DeLone and McLean's well-known success model for information systems is to present influential factors and their relationships. The measuring classes in DeLone and McLean's model include several known measures; however, only relevant measures should be selected in each research case. Since its introduction, the model has been applied and modified in multiple studies that indicate that a general approach is required to measure success. DeLone and McLean later updated their model, and instead of five factors (“information quality,” “system quality,” “use,” “user satisfaction,” and “individual impact”), which influence “organizational impact,” the improved model includes six factors, which also influence “net benefits” (see Exhibit 1, based on DeLone & McLean, 2003; Horecica, 2010). The improved model includes “service quality” as a new factor; “individual impact” was removed, and “intention to use” was added in relation to “use.” In addition, “net benefits” replaced “organizational impact” as an output of the measures.
- Information quality represents the quality of provided information in the form of documentation, deliverables, and training manuals (e.g., shared online training instructions). The provided information should be easy to understand, personalized, and complete.
- System quality measures the desired characteristics of IT systems to be rolled out. Parameters like reliability, usability, availability, and response time are potential parameters that can be measured. However, expected results should be defined as acceptance criteria in the project scope statement (PMI, 2013, p. 124).
- Service quality measures the overall services provided by the implementation partner. According to DeLone and McLean (2003), this is independent of whether these support services are delivered by an external implementation partner or an internal new organizational unit (e.g., a competence center). This parameter is of even higher importance in cases where the recipients are also customers, as poor quality will result in credit notes and/or complaints.
- Intention to use and use both measure all activities around the system, including navigation, information retrieval, and execution of transactions.
- User satisfaction should be understood as an important parameter for measuring the user's opinion about the rolled-out IT/ERP system. This should cover the whole experience cycle across all departments and solution processes.
- Net benefits, as emphasized by DeLone and McLean (2003), are the most important success measures, as they capture the balances of positive and negative effects (or impacts) of the implemented IT system on the users as well as other stakeholders like suppliers, third parties, markets, etc. Some sample questions to evaluate net benefits include:
- By what percent has the new sequence planning functionality increased the capacity for manufacturing?
- How much increase of revenue resulted from the implementation of the new supply chain management system? Did it lead to access to larger markets, increased supply chain efficiencies, and positive customer responsiveness?
- By how many days did the forecasting and material requirements planning (MRP) processes improve response times and create better visibility around manufacturing and shipment dates?
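Answering the first of these questions comes down to simple baseline arithmetic: relative change against the pre-rollout figure. The before/after capacity figures below are assumptions for demonstration.

```python
def percent_increase(before, after):
    """Relative change expressed as a percentage of the baseline value."""
    return (after - before) / before * 100

# Assumed weekly manufacturing capacity before and after the rollout of
# the new sequence planning functionality
units_before = 1200
units_after = 1380
capacity_gain = percent_increase(units_before, units_after)  # 15.0 percent
```

The same pattern applies to the revenue and lead-time questions; what matters is that the baseline figure is captured before the rollout, so the net benefit is measurable at all.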
Measurements around net benefits must be determined by context and objectives for each project. With this, there will always be a broad variety of net benefits measures, but especially in rollout projects, some of them will be the same per implementation.
According to DeLone and McLean (2003), net benefits success measurements are of significant importance; however, they cannot be interpreted and analyzed without measurement data around system quality and information quality. For example, within an IT/ERP rollout environment, the impact or benefit of a harmonized process design around handling of external raw materials cannot be fully understood without both an evaluation of the usability of the functionality within the solution and the relevance and completeness of related data, which are provided by the involved third party.
The next section demonstrates how the six dimensions of the updated DeLone and McLean model (2003) can be used as a framework to organize various success metrics around IT rollout scenarios.
Evaluation Model
Depending on the project phase and activity involved, there are different sets of information and efficient toolsets available. In the following sections, only those tools delivering sustainable, objective, and consistent data are discussed. Different platforms or media can be used for some of these tools; for example, an evaluation sheet can be filled out on paper or online, and a checklist can be filled out via interview or as a shared file on an online portal. However, the baseline of the tool (an evaluation sheet, a checklist) remains the same.
The following exhibit represents the evaluation model according to Halonen and Thomander (2008), derived from the DeLone and McLean Model of Information Systems Success. The measurement parameters have been limited according to frameworks described by Horecica (2010), and also refer to Holsapple and Lee-Post (2006, p. 7).
Measurement Tools
It's important to use the most efficient evaluation tool at the right point in time during a project life cycle, especially if it is being used multiple times. Referring back to Gardner, Fong, and Huang (2010), both time-series analysis and cross-sectional analysis can be interesting depending on the scenario. Often, it is relevant to measure progress and level of improvement of knowledge after certain knowledge transfer sessions (e.g., after the kickoff session, user training, and test script exercises). A time-series analysis should be applied, repeating a certain measurement method after each of those sessions, to receive comparable information and indicate improvements.
To measure readiness across different units or groups, a cross-sectional analysis should be applied at important milestones (e.g., after integrative test sessions). This yields measurement information on readiness and on successful and sufficient knowledge transfer across teams.
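The two analysis styles can be sketched in a few lines of Python. This is a minimal illustration, assuming participant scores on a 1–5 scale; all data values and session/unit names are invented.

```python
from statistics import mean

# Time-series view: the same group is measured after each knowledge
# transfer session, so averages can be compared over time.
sessions = {
    "kickoff": [2.1, 2.4, 1.9],
    "user_training": [3.2, 3.5, 3.0],
    "test_scripts": [4.1, 4.3, 3.9],
}
trend = {name: round(mean(scores), 2) for name, scores in sessions.items()}

# Cross-sectional view: different units are measured at the same
# milestone (e.g., after integrative testing) to compare readiness.
units_at_milestone = {
    "finance": [4.0, 4.2],
    "logistics": [3.1, 2.8],
}
readiness = {unit: round(mean(s), 2) for unit, s in units_at_milestone.items()}
```

The time-series view answers "is this group improving across sessions?" while the cross-sectional view answers "which units are ready at this milestone?" — the same raw scores, sliced along different axes.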
An overview of the most important measurement tools during IT rollouts includes mostly lists and exercise-related evaluation tools, such as:
- Feedback evaluation sheets
- Practice tests
- Checklists
- Exercise sheets
- Test case scenarios
- Case studies
The project manager needs to make sure that an overall measurement process is applied and that related results are tracked and managed throughout the project life cycle. The following exhibit indicates the most important activities and recommends tools and processes to be applied. It also indicates risks project managers should be aware of in case the related knowledge transfer is insufficient or even fails.
All collected information should be consolidated in a shared project workspace, such as a project portal. By providing this workspace, at least a light time-series analysis is applied automatically, and the information is available for evaluation and analysis. The project manager can initiate additional knowledge transfer sessions if certain knowledge areas prove significantly insufficient. Also, if the collected information is not anonymized, targeted action can be taken by addressing only those individuals who need knowledge realignment or repetition. How this information is tracked and managed, and how action items for risk mitigation are proactively identified, should be described in the project management plan, especially in the risk and communications management plans. Again, transparency about the purpose of and reasons for this measurement assures upfront acceptance and support by all stakeholders.
References
Alavi, M. & Leidner, D. E. (2001). Knowledge management and knowledge management systems: Conceptual foundations and research issues. MIS Quarterly, 25(1), 107–136.
Checkland, P. & Scholes, J. (1999). Soft systems methodology in action. New York, NY: John Wiley & Sons.
DeLone, W. & McLean, E. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, Spring 2003, 19(4), 9–30. Retrieved March 3, 2014, from http://www.mesharpe.com/MISVirtual/07Delone.pdf
European Commission's Expert Group on Knowledge Transfer Metrics. (2009). Metrics for knowledge transfer from public research organisations in Europe. Office for Official Publications of the European Communities.
Gardner, P. L., Fong, A. Y., & Huang, R. L. (2010). Measuring the impact of knowledge transfer from public research organizations: A comparison of metrics used around the world. International Journal of Learning and Intellectual Capital, 7(3/4), 318–327.
Halonen, R. & Thomander, H. (2008). Measuring knowledge transfer success by D&M (pp. 6–7). Retrieved February 26, 2014, from http://sprouts.aisnet.org/537/4/Halonen-KnowledgeTransfer-fp.pdf
Holsapple, C. W. & Lee-Post, A. (2006). Defining, assessing, and promoting e-learning success: An information systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67–85.
Horecica, M. (2010). Knowledge transfer during ERP implementations. Term paper, Sheffield University – City College.
Konttinen, J., Smedlund, A., Rilla, N., Kallio, K., & van der Have, R. (2011). Knowledge transfer in service business development: Transfer mechanisms and intermediaries in Finland. Original title: [Osaamisen ja tiedon siirtäminen palveluliiketoiminnan kehittämisessä.], Espoo, Finland: VTT Publications 776.
Project Management Institute (PMI). (2013). A guide to the project management body of knowledge (PMBOK® Guide) - Fifth Edition. Newtown Square, PA: Author.
Propp, K. M. (1999). Collective information processing in groups. In Frey, L. R., Gouran, D. S., & Poole, M. S. (Eds.), The handbook of group communication theory & research (pp. 225–250). Thousand Oaks, CA: SAGE Publications.
This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.
© 2014, Michael Bresa, PMP
Originally published as a part of the 2014 PMI Global Congress Proceedings – Dubai, UAE