I can't get no--satisfaction

moving on from the dominant approaches to managing quality in complex programs

Lead author: Dr Harvey Maylor

Co-authors: Dr Joana G Geraldi, Dr Mark Johnson

Abstract

Achieving customer satisfaction through the active management of quality is a perennial challenge for program and project managers in many contexts. Indeed, we begin with a practitioner problem where, despite extensive efforts and apparent success in managing quality, customers were still not satisfied. Exploration of this problem in the current project and program management (PPM) literature showed a prevalence of models that were not proving beneficial in practice. Reconceptualizing the program as a service provided some insight, but with limited confidence that the existing models were appropriate to complex programs. A two-phase study was therefore carried out to develop a more appropriate model. The resulting enhanced framework for assessment is shown to be a better interpretation of quality in this context. However, it does challenge some long-held beliefs about service quality, namely the reliance on the perceptions-minus-expectations gap for assessment.

Keywords: satisfaction, quality, service, SERVQUAL, gap model.

Introduction

The point of departure for this study was a practitioner problem concerning the management of quality in IT-enabled change. The research team was involved in another study and noted that a number of managers complained that, "despite all of our key performance indicators showing green, the client still isn't happy." "Green" in this case indicated that performance against the defined indicators was at a pre-agreed (acceptable) level, but this was not sufficient to imbue a state of satisfaction in the client. Any deviation from the required level of progress in a program was noted as a change in the color of a key performance indicator (KPI) from green to amber or red. When investigated further, it was noted that program and project managers were "managing the gap" between expectations and perceptions, as proposed by Maister (1993): attention would be focused on any indicator that was not green. Preliminary investigation of the indicators showed that they used a very limited conception of quality. Specifically, they resembled product or outcome rather than service-based measures, and usually formed part of the contract between the client organization and the outsourcing company. This paper reports on the subsequent investigation into the conceptualization of quality in the context of complex programs, specifically outsourced IT-enabled change.

Outsourcing of IT is extensively employed by large organizations from all sectors; it is usually long-term and relational and involves the service provider being embedded within the client organization. Outsourced IT provision is a complex business-to-business (B2B) service offering, often comprising both repetitive operations (referred to in the industry as "managing the mess for less") and non-repetitive operations (programs and projects) to develop new or enhanced capabilities. This second group of activities is often referred to as "IT-enabled change" (Benjamin & Levinson, 1993), and it is the concern of this discussion.

The prevailing approaches to managing quality in a wide range of organizations involved in outsourced IT provision are seen to be much less mature than in other service operations, for example, in repetitive business-to-consumer (B2C) contexts (Chase & Apte, 2007). In the example used previously, they focused on product-based control and conformance to contract. Such manufacturing-centric definitions were noted to be inconsistent with the nature of the service offering (Haywood-Farmer, 1988; Parasuraman, Zeithaml, & Berry, 1985). In order to progress the development of approaches to managing quality in complex programs, a more appropriate conceptualization is required. The high-level question that this work has sought to answer is: What are the attributes of quality in the delivery of complex programs?

Relevant Literature 1—The Origins of Quality Management

Both scholars and practitioners have found that producing a definition of quality is far from straightforward. For instance, Garvin (1992, p. 126) stated that "Quality is an unusually slippery concept, easy to visualize, and yet exasperatingly difficult to define." The field of quality can be split into two major schools of thought. First, the "product quality" school developed in manufacturing and formed a key part of operations management (OM) theory and practice. This has been supplemented by the second, the "service quality" school, which was developed in both marketing and OM and focuses on the B2C market. Developments in the school of product quality, following Shewhart (1931), focused on controlling manufacturing processes, minimizing variation, and developing tools and techniques to support these goals. Key figures and their major contributions are summarized in Table 1.

Table 1. Key Figures/Contributors to the School of Product Quality (Maylor 2000, p. 254)

Instigator: Major influence
Deming: Initially developed statistical methods for the control of repetitive processes, and their usage. Took the tools to Japan post-World War 2 and was seen as part of the Japanese quality revolution (though largely ignored in his native United States until the last years of his life). Recommended 14 points for management and the use of the Plan, Do, Check, Act cycle (also known as the Deming Cycle).
Crosby: Championed the notion of "zero defects" and "quality is free."
Feigenbaum: The holistic approach to quality—company-wide or total quality control.
Taguchi: The proponents of Taguchi methods claim great results for the design of experiments, though good examples are few and far between.
Ishikawa: Quality circles and brainstorming tools including the fishbone or Ishikawa diagram.
Ohno: Architect of the Toyota Production System that took quality to new levels in manufacturing, through teamwork, training and education, ongoing continuous improvement, and a focus on the absolute elimination of waste.
Juran: The engineer's quality guru—established and compiled the requirements of systems and procedures for sampling and control with tangible products. Many of the routines are equally applicable to the tangible elements of service products.

The strategic nature of quality was highlighted by Garvin (1984), who showed how product quality could be considered a competitive tool. This has led organizations to adopt practices such as total quality management (TQM) (Boaden, 1997; Dotchin & Oakland, 1992; Spencer, 1994), techniques such as Six Sigma, and quality systems such as ISO 9000. These can improve operational performance (Ahire, Waller, & Golhar, 1996; Sousa & Voss, 2002), although the "softer" aspects such as leadership, HR, and customer focus have been shown to be important in implementation (Powell, 1995; Samson & Terziovski, 1999).

Relevant Literature 2—Managing Quality in PPM

The concept of quality in the program and project context has received little attention in both the research and practitioner literature. Much of what has been written on quality in the project context refers to the management of quality based on the ideas of quality management introduced in manufacturing firms, such as TQM (e.g., Association for Project Management [APM], 2006; Office of Government Commerce [OGC], 2005; Maylor, 2005). Publications on project quality management are dominated by management books and standards rather than academic publications.

Table 2 shows the different definitions and processes proposed by key bodies of knowledge and standards in project management. Both APM and the Project Management Institute (PMI) have proposed definitions of quality close to meeting, or conformance to, agreed requirements or commitments. The concepts of "right first time" and "zero defects" are also applied to quality in projects. Companies that achieve high-quality projects demonstrate a continual, systematic approach to improvement, which is related to the exact definition of requirements and to meeting them efficiently, without wasting time or resources (APM, 2006). Quality at the program level is treated differently. Changes in requirements and objectives are expected, and therefore meeting requirements is not an adequate proxy for quality (OGC, 2005). According to Managing Successful Programmes (OGC, 2005), the quality of a program involves eight aspects:

  1. Delivering to the needs and expectations of stakeholders;
  2. Optimizing the use of resources from partners and suppliers;
  3. Adhering to strategy, policy, and standards;
  4. Applying the systematic use of process, tools, and techniques;
  5. Supporting informed decision making with required data and information;
  6. Managing assets and resources via configuration management;
  7. Having program leadership ensure the high quality of decisions taken; and
  8. Optimizing skills and experience of people.

However, this list refers to the overall quality of program management rather than a means to assure levels of customer satisfaction.

Table 2. Project Quality from the Perspective of Bodies of Knowledge

Source: A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—Fourth Edition (PMI, 2008)
  Quality: "Quality is the degree to which a set of inherent characteristics fulfill requirements" (p. 180).
  Quality management: "Project quality management processes include all the activities of the performing organization that determine quality policies, objectives, and responsibilities so that the project will satisfy the needs for which it was undertaken."
  Quality management processes: Planning, assurance, and control (however, the guide acknowledges the importance of modern quality management as complementary to project management, addressing customer satisfaction, prevention over inspection, management responsibility, and continuous improvement).

Source: APM Body of Knowledge (APM, 2006)
  Quality: "Quality is broadly defined as fitness for purpose or more narrowly as the degree of conformance of the outputs and processes" (p. 28).
  Quality management: "Project quality management is the discipline that is applied to ensure that both the outputs of the project and the processes by which the outputs are delivered meet the required needs of stakeholders" (p. 28).
  Quality management processes: Planning, assurance, control, and continuous improvement.

The concept of quality has received more attention in some project management textbooks. For example, Turner (2000) proposed a model for quality management based on a combination of process and product quality, which has to be assured and controlled, and which demands an overall organizational attitude towards quality. This distinction is important: it signals that projects are a combination of services and products, and that they therefore demand a combination of product and service approaches to quality.

As an indicator of the popularity of the area in project and program management (PPM), a search for the term "quality" across all articles published in the International Journal of Project Management from its inception to date returned 1,051 articles. Until 1995, no more than 23 articles mentioning quality were published per year; from 1995 to 2005, the number increased to around 50; and in the last four years, between 70 and 106 articles mentioning the word quality were published in the journal each year. While the number of papers in the journal has increased over this time, the numbers indicate that this is at least a concept of consistent concern to researchers.

When taking a closer look at the publications and restricting the search for the word quality to abstracts, titles, or keywords, the 1,051 articles decreased to 142. Of these, 101 used the word quality but treated different topics: 13 used quality as a performance measure in other studies or criticized the iron triangle as a measurement of project success, and 7 were book reviews of quality management books, while only 1 focused on the project context. Thus, only 20 articles focused on quality per se. The majority, 13, applied concepts of quality to project management; four looked at the influence of project managers and project owners on quality; one proposed a model for project quality management. Only two articles were related to concepts of quality and looked at how these changed, but did not examine in detail what quality was.

Outside of project management journals, we identified three studies with relevance to our topic. First, Zwikael and Globerson (2007) developed a model for quality in projects in the service industry (project management planning quality). Their survey of 275 project managers in the service industry identified aspects related to quality that have a high impact on success:

  • qualification of project managers;
  • use of quality planning processes, especially activity definition, activity duration estimation, communication planning, quality planning, and schedule development; and
  • high organizational support (systems, culture and style, structure, and project office).

However, the model relates success factors to planning activities and organizational support; consequently, the paper does not explore the factors in planning and organizational support that lead to higher quality, but rather those that lead to higher success. The lack of study in this area may derive from the very close relationship between the way success factors and quality factors are defined. Therefore, although we are looking at defining project management quality, it is also insightful to address the literature associated with project success, and this is done later in this document to support the conceptual model proposed.

Cicmil (2000) adopted a multiple-perspective approach to define quality in projects and proposed a model based on "project completeness," which involved the project communication system, organizational behavior, project context, project content, project congruence, and project management processes.

Finally, Winch, Usmani, and Edkins (1998) analyzed the gaps between perceived and expected quality in the client and customer domains, based on the gaps proposed in the service quality management literature. In their paper, they proposed four ways to define the quality of a construction project:

  1. Quality of conception: in terms such as spatial articulation and elegance of form;
  2. Quality of specification: technical standards, fitness for purpose;
  3. Quality of realization: client review of the process, total quality management; and
  4. Quality of conformance: the manner used to meet objectives; this is related to assurance and control.

These are valuable contributions but, in seeking to comprehend and assess project quality, we now attempt to synthesize a high-level model with key inputs from the product, service, and project schools. Still, the definition of quality is far from widely agreed in the project management community. "Quality is a term that has so many different meanings for different people that it must be subject to some further definition before we can in any sense manage it" (Maylor, 2005, p. 166).

However, it is notable that none of the models presented so far embraces service quality as an important aspect. A brief summary of the service quality literature is therefore included here, to see how it might enlighten the management problem and the corresponding research question identified.

Relevant Literature 3—Managing Service Quality

There are three generalized and well-documented differences between the production of goods and services. These are intangibility (services are often performances rather than objects), heterogeneity (performance varies and is reliant upon consistent behavior from personnel), and inseparability (production and consumption are often simultaneous) (Parasuraman et al., 1985). Consequently, the service literature takes a markedly different approach to the concept of quality. While products can generally be measured in terms of attributes and evaluated by objective criteria, service quality is conceived as the difference between what consumers perceive and what they expect from the service. It is therefore a far more subjective approach, although measurable nonetheless. There are two schools of thought on service quality: the "Nordic/European" school (characterized by Grönroos, 1984) and the "American" school, after Parasuraman et al. (1985). Both are based on evaluations by consumers, and this needs to be considered when applying the models to complex service offerings such as major programs. However, the frameworks are beneficial in offering methods of analysis.

Grönroos' (1984) study of service quality differentiates between technical quality (the “what” aspect, or “instrumental performance”) and functional quality (the “how” aspect, or “expressive performance”) in a service delivery environment. The former is an evaluation based on what the consumer receives; the latter is an evaluation of how the service is delivered. Adequate instrumental performance (i.e., the output) is a prerequisite for customer satisfaction, but this is not enough. If the expressive performance (the service delivery aspect) of the product is not considered satisfactory, then the consumer will still feel unsatisfied, irrespective of the degree of satisfaction caused by the instrumental performance. Harvey (1998, p. 98) noted that when delivery is a key part of the service, then “perception is reality.” The perceived service is the consumers' view of a “bundle” of service dimensions. When this perceived service is compared to the expected service, we get the perceived service quality. However, Grönroos also considered a third quality dimension—the corporate image, and asserted that the expectations of consumers are influenced by their view of the company. Kang and James (2005) identified that this acts as a form of filter in terms of the consumer's perception of quality. Other similar models are also relevant in the context of service quality (Table 3).

Table 3. Alternative Service Quality Models

Authors: Dimensions of service quality model
Lehtinen and Lehtinen (1991): Physical quality, interactive quality, corporate quality, process quality, and output quality.
Rust and Oliver (1994): Service product (technical quality of the outcome), service delivery (functional or process quality), and service environment.
Philip and Hazlett (1997): Pivotal, core, and peripheral attributes.
Haywood-Farmer (1988): Professional judgment, physical facilities and processes, and people's behavior.

When applying these approaches to the scenario outlined at the start of the paper, it is hardly surprising that there is a lack of satisfaction on the part of the client. The instrumental or product performance is, indeed, only part of the requirement. It is a pre-requisite but it is not enough, no matter how many indicators may be showing that everything is positive. Good expressive or service performance is also required, and potentially Grönroos' (1984) third dimension of corporate image.

For now, it is this second set of expressive or service performance elements that is of interest. However, the diverse and generalized requirements described in Table 3 appeared to be either inappropriate or incomplete for the context we are considering; no clear method was apparent that could adequately be used to determine the nature of quality in the PPM context. In the past, researchers have used SERVQUAL (Parasuraman, Zeithaml, & Berry, 1988) to achieve an appropriate (cf. manufacturing) conceptualization of quality. Given its ubiquity and apparent flexibility, we chose SERVQUAL as the starting point for this context. Due to the unique, business-to-business, relational, and temporal aspects of IT-enabled change programs, it was expected that changes to the original structure would be required.

Relevant Literature 4—SERVQUAL

The original model included the following five categories and, within these, 22 dimensions or attributes:

  • Tangibles: Physical facilities, equipment, and appearance of personnel;
  • Reliability: Ability to perform the promised service dependably and accurately;
  • Responsiveness: Willingness to help customers and provide prompt service;
  • Assurance: Knowledge and courtesy of employees and their ability to inspire trust and confidence; and
  • Empathy: Caring, individualized attention the firm provides its customers.

The list is not self-evidently comprehensive for the current context. Pitt, Watson, and Kavan (1997) noted that “We cannot discern any unique features of IS that make the standard SERVQUAL dimensions inappropriate,” and “We cannot discern any unique features from the IS domain that have been excluded from SERVQUAL” (p. 212). This appears to justify SERVQUAL as an appropriate provisional framework and that it is useful for an IS context, but it does not provide any further reassurance about its applicability to a program environment.

There is another important aspect of the SERVQUAL work that is of relevance here. Maister's Law (Maister, 1993) stated that quality is the difference between perceptions and expectations. Parasuraman et al. (1985) identified other gaps that contribute to this. These are as follows:

  • The difference between customer expectations and managerial perceptions of customer expectations;
  • The difference between managerial perceptions of customer expectations and service quality specifications;
  • The difference between service quality specifications and the service actually delivered; and
  • The difference between service delivery and what is communicated about the service to the customer.

For the purpose of this study, the prime concern is with the gap between customer expectations and perceptions. The process that contributes to this and the role of the other identified gaps will be areas for further research.

The concept of service quality expressed as the difference between perception and expectation has been questioned on many occasions since its introduction. Notably, Cronin and Taylor (1992, p. 125) stated that "performance-minus-expectations is an inappropriate basis for use in the measurement of service quality," and proposed an adapted model, SERVPERF, that looks only at the perception of quality. This led to a series of papers discussing and testing which of the models is more informative in understanding customer satisfaction and how to improve it (e.g., Teas, 1993; Cronin & Taylor, 1994; Elliott, 1994; Parasuraman et al., 1994). Elliott (1994) compared the practical usefulness of SERVQUAL and SERVPERF to marketing managers through a survey of 461 frequent flyers of airline carriers. The results showed that "SERVPERF performs better in explaining variance in customer satisfaction and overall service quality; while SERVQUAL appears to be better at pinpointing areas of service deficiency (performance-consumer expectations). However, service managers would probably be more interested in identifying areas of deficiencies than explaining variance" (Elliott, 1994, p. 59). This is useful for our context: as already discussed, the managers would be more concerned with understanding and managing areas of deficiency. De Ruyter, Bloemer, and Peeters (1997) tested an integrated SERVQUAL and SERVPERF measure; however, this did not appear to give a superior understanding of the requirements for achieving customer satisfaction. SERVQUAL therefore still appears to be the most relevant preliminary framework.
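
To make the distinction concrete, the following sketch (in Python, with invented item labels and ratings rather than the study's instrument) contrasts the two scoring approaches: SERVQUAL averages the perception-minus-expectation gap per dimension, while SERVPERF averages the perception ratings alone. The dimension names follow Parasuraman et al. (1988); everything else is hypothetical.

    import pandas as pd

    # Hypothetical responses from one assessor: each item is rated twice on a
    # 7-point scale, once for expectation (E) and once for perception (P).
    items = pd.DataFrame(
        {
            "dimension": ["reliability", "reliability", "responsiveness", "empathy"],
            "expectation": [7, 6, 6, 5],
            "perception": [6, 6, 4, 5],
        },
        index=["keeps_promises", "error_free", "prompt_service", "individual_attention"],
    )

    # SERVQUAL: quality is the perception-minus-expectation gap, averaged per dimension.
    items["gap"] = items["perception"] - items["expectation"]
    servqual = items.groupby("dimension")["gap"].mean()

    # SERVPERF (Cronin & Taylor, 1992): use the perception ratings alone.
    servperf = items.groupby("dimension")["perception"].mean()

    print(pd.DataFrame({"SERVQUAL gap": servqual, "SERVPERF": servperf}))

A negative gap flags a dimension where delivery falls short of what was expected, which is the "area of deficiency" view that managers in this context are most interested in.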

Lastly, Van Dyke, Kappelman, and Prybutok (1997) questioned whether SERVQUAL's expectation construct is sufficiently well defined, arguing that the measure could be of what customers realistically expect will occur, or of what they think should occur. This distinction is useful for our purposes. We have chosen the should view for this study, though we note that the alternative could be examined in future work.

The review of the four relevant literature areas provides some insight into the original management problem. First, quality management is a well-developed concept. Second, the program and project management literature focuses on a product-based view of quality. We contend that this approach is inappropriate here, as the complex programs we are considering are predominantly services. The service literature is well developed and has provided an alternative basic assessment for quality in this context, in the form of the SERVQUAL framework. This will be used as the basis for moving forward in this study.

Finally, we are aware that we have conflated the concepts of service quality and customer satisfaction. These are explicitly separated in some of the literature (e.g., Cronin & Taylor, 1994). However, there is more recent empirical evidence that such a conflation is justified (Grapentine, 1998); they are not equivalent, just very closely related. We do note that this conflation should be tested in further work to explore what the situation is in this context.

Research Design

The first stage of this study was designed to answer the question: What are the attributes of quality in complex programs? Our intention was to uncover perceptions of quality in project and program contexts. It was important to collect data without influencing the interviewees with a pre-defined set of questions based on models previously used to define quality in contexts other than projects, as this could create a misleading impression of convergence between explanations from other contexts and those from projects. We therefore contextualized ideas of quality based on 18 exploratory interviews with project and program managers, quality managers, and senior executives within a large IT provider. This sample was purposive and was limited to this one sector.

The interviews were conducted in English, took 30 to 90 minutes, and were taped and transcribed. The interviews were structured around two areas:

  • Context explored the background, job, and responsibilities of the interviewee in general as well as the current practices in management and measurement of quality.
  • Defining quality delved into the interviewees' understanding of quality and its key attributes by comparing projects they considered to be of high quality and of low quality. This was the main part of the interview. We did not mention SERVQUAL or its constructs and attributes in the interviews. As portrayed in the literature review, the conception of quality in the project and program context is less developed than in other areas, so the comparison was intended to help interviewees make their impressions of quality more explicit.

We analyzed the data by a process of manual coding, and then contrasted the concepts extracted with the attributes in SERVQUAL. Five additional concepts were found over and above those in the standard SERVQUAL framework, and these fitted well within the existing categories. Table 4 shows the additional elements and the categories into which they fitted.

Table 4. Additional SERVQUAL Items

Additional SERVQUAL Item Category
Adapt styles Empathy
Zero defects Reliability
Smooth execution Reliability
Change management Responsiveness
Back on track Responsiveness

“Adapt styles” concerned the ability of the delivery team to match their style to the culture, state, and level of people that they were working with at that time. This would include the ability to handle change in an appropriate manner, recognizing that this can be politically sensitive. Zero defects might appear to be a product characteristic, but in this context, the reliability of the service was described in this manner. The zero defects applied to the level of errors that the client perceived had been made in the process. “Smooth execution” relates to the general experience of the client in the delivery of the work. “Change management” is a standard feature of most programs or projects, and so it is not surprising to see this concept emerge strongly here. Specifically, this concerned how responsive the supplier would be to both formal and informal change requests. “Back on track” reflected the reality that in such work there would be disturbances, for instance, caused by a change in the requirements or an issue that has arisen from the work carried out. For the client, the ability to move from this disturbance to a restored sense of purpose with the work being carried out was important.

Following well-described scale development procedures (Parasuraman et al., 1988), we then produced an instrument comprising 44 questions using 7-point Likert scales: 22 that assess expectations and 22 that assess perceptions of quality. Before deploying the instrument to the sample, the revised framework was piloted with two academics and three practitioners not involved in the research project to check for clarity, understanding, and readability, and subsequently went through minor amendments.

The instrument was then used to test the attributes of quality in a cluster sample of 40 business units, based in several European countries, within a global IT outsourcing service provider. Each business unit operated as an independent company, but within the same business and with certain similarities. Through these means, we reduced the number of other possible contextual variables (e.g., corporate culture, industry-specific factors) that might have influenced the data, while still guaranteeing a heterogeneous set. The instrument was deployed via a Web-based survey: a link was sent via e-mail to five managers in each business unit (n = 200), at different levels and with different responsibilities. Forty-one responses were received in the first week, and 44 in the second week after a reminder. From the survey, 85 usable assessments were obtained, a return rate of 42.5%. Table 5 details the geography, role, and experience of the respondents.

Table 5. Sample Frame

Country: United Kingdom, 47; Italy, 21; other European countries, 17
Experience in current role (years): 1 to 5, 42; 6 to 10, 26; 11 to 20, 14; over 21, 2; no answer, 1
Current role: Project manager, 25; Account manager, 17; Program manager, 12; Technical staff, 10; PMO and support staff, 8; Line manager and staff, 7; Team manager, 5; no answer, 1

Analysis, Results, and Discussion

Our objective was to develop a model for the project and program context to define and measure quality based on SERVQUAL. It was not enough to apply SERVQUAL and test model fit. The data were analyzed in three steps. In the first step, we evaluated the validity and reliability of the original and contextualized SERVQUAL constructs to test whether the models passed the threshold criteria. We then conducted a cluster analysis to determine whether different configurations of the scales and constructs were possible within the data. Finally, we constructed and tested a measurement model using confirmatory factor analysis (CFA) to determine whether the new constructs fit the data better than the existing SERVQUAL constructs. Table 6 shows the results for the reliability and validity tests and the measurement model. With this analysis, we could establish which of the models best explained the data.

Table 6. Cronbach's Alpha, AVE, Path Loading, for Original and Contextualized SERVQUAL

We analyzed the internal consistency of the original constructs through Cronbach's alpha using SPSS 17.0. The majority of the values were between 0.7 and 0.8, which is considered adequate for exploratory purposes (Nunnally, 1978), although some values fell below this threshold, indicating a lack of internal consistency within those constructs. The contextualized SERVQUAL constructs had better results than the original SERVQUAL constructs, with only one value under 0.7 and an average of 0.795.
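
As an illustration of this reliability test, the following sketch computes Cronbach's alpha for a single construct from a respondents-by-items matrix. The data are simulated, not the survey responses, and the function follows the standard alpha formula rather than reproducing the SPSS procedure.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a respondents x items matrix of one construct."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Simulate 85 respondents answering five correlated 7-point items.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(85, 1))              # shared "true" signal per respondent
    noise = rng.normal(scale=0.8, size=(85, 5))    # item-specific noise
    construct = np.clip(np.rint(4 + latent + noise), 1, 7)
    print(round(cronbach_alpha(construct), 3))     # higher values indicate stronger internal consistency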

To assess the convergent validity of the constructs (the extent to which indicators are related to their respective construct), we first assessed the discriminant validity (the degree of overlap between indicators within the same construct) through the average variance extracted (AVE), as proposed by Fornell and Larcker (1981). The values were greater than 0.50, and thus justified the use of the constructs (Hair, Black, Babin, & Anderson, 2009). We then conducted a CFA in AMOS 16 of the original SERVQUAL and the extended SERVQUAL models with full-information maximum likelihood estimation. One indicator of each construct was fixed to a value of 1.0 to set the scale of the construct (Jöreskog & Sörbom, 1984). The overall fit of both models was not adequate (fit indices < 0.9), especially if expectations and perceptions are considered together. This clearly shows that the model can be improved. The results also indicated that perception and expectation had different reliabilities and, if tested in two different models, would generate better overall fit. Due to the inadequate fit indices, we ran a hierarchical cluster analysis using Ward's method with centroid weighting and standardized z-scores (e.g., Hair et al., 2009) in SPSS 17.0 to determine whether the observed variables clustered differently from the SERVQUAL scheme.
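
The clustering step can be approximated outside SPSS. The sketch below (simulated ratings and hypothetical item names, not the study data) standardizes each observed variable to z-scores and applies Ward's hierarchical clustering to the variables, then cuts the tree into a chosen number of groups; it is an approximation of the procedure, not a reproduction of the reported solution.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.stats import zscore

    rng = np.random.default_rng(1)
    item_names = [f"item_{i:02d}" for i in range(1, 23)]             # 22 hypothetical items
    responses = rng.integers(1, 8, size=(85, 22)).astype(float)      # 85 respondents, 7-point scale

    z = zscore(responses, axis=0, ddof=1)   # standardize each item across respondents
    # Cluster the items, not the respondents: each row passed to linkage() is one variable.
    tree = linkage(z.T, method="ward")
    groups = fcluster(tree, t=4, criterion="maxclust")               # cut into four clusters

    for label in sorted(set(groups)):
        print(label, [name for name, g in zip(item_names, groups) if g == label])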

Table 7 shows the resulting clusters for perception and expectation, the rationale explaining their grouping, reliabilities, path loadings, and model fit indices. Only three observed variables were not included in the clusters: two related to visual appeal (updated equipment and professional appearance of materials) and one related to commitment. The observed variables clustered around different constructs, and there was a significant difference between the constructs explaining perceptions and those explaining expectations. The constructs related to expectations are similar to the contextualized SERVQUAL: assurance did not appear as a group, and empathy and visual appeal were akin to what we termed, respectively, relationship and aesthetics. Responsiveness was refocused on the ability to deal with change, and reliability was refocused into a professional relationship with the client; in other words, the expectation clusters emphasized the ability to deal with change and the strength of the relationship with the client. The constructs related to perceptions, however, were very different. The original set of SERVQUAL constructs was reclassified into three clusters: reliability, client centricity, and understanding. The clusters suggest an increase in the levels of tangibility of the measures. We suggest that this is due to greater clarity around what and how the program or project is being delivered.

We carried out the same set of tests as for the contextualized and original SERVQUAL, and the results improved significantly. All AVE values were greater than 0.5, and Cronbach's alpha values were above 0.8, with the exception of aesthetics and empathy. This indicated acceptable convergent validity. The new clusters also showed a significant improvement in overall fit. Although the NFI values are still below 0.9, they are close to it, especially in comparison with the previous models (Table 7).

Table 7. New Clusters

Construct Rationale Reliabilities / Items Path loading
Expectations
Professionalism “Providing the solution the client wants” α= 0.903 / AVE= 0.770  
Keep promises 0.87
Interest solve client's problems 0.86
Understand client 0.86
Inform progress 0.82
Resilience “The ability to deal with change” α= 0.825 / AVE= 0.742  
Back on track 0.81
Change management 0.83
Smooth execution 0.73
Aesthetics “Looking professional” α= 0.773 / AVE= 0.642  
Professional appearance of people 0.61
Professional appearance of physical facilities 0.90
Relationship “Going the extra mile in the relationship with the client” α= 0.781 / AVE= 0.674  
Accessible 0.84
Adapt styles 0.67
Courteous 0.76
Perception
Reliability “Deliver what you are expected to deliver even with changes” α= 0.863 / AVE= 0.646  
Smooth execution 0.77
Back on track 0.66
Zero defect 0.75
Confidence 0.79
Safety 0.78
Client Centricity “Delivery of solution on time and with minimum error throughout the process” α= 0.851 / AVE= 0.669  
Keep promises 0.82
Interest solve client's problems 0.86
Error free record 0.66
Prompt service 0.75
Understanding “Know and understand what the client wants” α= 0.801 / AVE= 0.693  
Adapt styles 0.83
Focus client 0.53
Have knowledge 0.73
Model fit indices
    Expectation Perception
  NFI= 0.880 0.879
  CFI= 0.940 0.954
  X2/df= 1.794 1.494
  p= 0.001 0.013

Finally, we ran a bootstrap comparison of the six models (Linhart & Zucchini, 1986; Arbuckle, 2007) to determine whether the new, clustered multi-item scales provide a better fit to the data. Table 8 shows the model fit measurements. The saturated models were tested against the new clusters, meaning that the observed variables in each model were the same.

Table 8. Model Fit Measurements

  Failures Mean Discrepancy AIC BCC CAIC RMSEA
Original expectations 14 429.86(2.68) 396.50 429.77 513.97 0.134
Original perceptions 14 323.13(1.41) 267.13 291.13 418.61 0.088
Cont expectations 10 749.76(3.65) 568.60 609.32 754.51 0.123
Cont perceptions 27 629.01(2.33) 465.06 505.78 650.96 0.096
New expectations 0 142.63(.95) 146.07 157.06 249.35 0.096
New perceptions 0 136.93(.81) 154.173 168.455 223.13 0.077
New expectations saturated - - 156.000 184.563 424.527 -
New perceptions saturated - - 180.000 212.958 - -
New expectations Independence - - 740.970 745.365 782.282 0.343
New perceptions Independence - - 678.454 687.243 - 0.319

As indicated in Table 8, the new models fit the data better than the existing and contextualized SERVQUAL scales and the saturated and independence models. This is because the mean discrepancy, Akaike information criterion (AIC), Browne-Cudeck criterion (BCC), and consistent AIC (CAIC) are lower for the new models. The root mean square error of approximation (RMSEA) values of the new clusters are between 0.05 and 0.10, indicating an adequate model fit (Browne & Cudeck, 1993). This improvement in AIC, BCC, CAIC, and RMSEA indicates that our new constructs may be a more appropriate set of scales to use when measuring the expectations and perceptions of service quality within program and project delivery.
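
For readers reproducing this comparison, two of the indices follow simple, widely used formulas: taking the minimum discrepancy (the maximum-likelihood chi-square) as C, one common convention is AIC = C + 2q for q free parameters, and RMSEA = sqrt(max(C − df, 0) / (df(N − 1))) for a sample of size N (Browne & Cudeck, 1993). The sketch below applies these formulas to illustrative numbers, not to the study's fitted models; the BCC and CAIC calculations reported in Table 8 are not reproduced here.

    from math import sqrt

    def aic(discrepancy: float, n_free_params: int) -> float:
        """AIC as minimum discrepancy plus twice the number of free parameters."""
        return discrepancy + 2 * n_free_params

    def rmsea(discrepancy: float, df: int, n: int) -> float:
        """Root mean square error of approximation for a single-group ML solution."""
        return sqrt(max(discrepancy - df, 0.0) / (df * (n - 1)))

    # Two hypothetical models fitted to the same 85 responses (values are illustrative).
    for name, chi2, df, q in [("model A", 180.0, 100, 40), ("model B", 130.0, 95, 45)]:
        print(name, "AIC =", round(aic(chi2, q), 1), "RMSEA =", round(rmsea(chi2, df, 85), 3))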

Conclusions

This study began with a practitioner problem: how to measure quality in complex programs. This was not well understood, and attempts using product-based measures yielded high-performing measures but low customer satisfaction. The service management literature provided some indication of why this was the case, but did not give managers guidance on a set of quality characteristics that would be appropriate. The study was therefore carried out to determine the attributes of quality in the delivery of IT-enabled change. We used SERVQUAL as a point of departure and introduced new observed variables, based on a qualitative study, to contextualize SERVQUAL. The reliability and model fit measurements of both the original and contextualized models indicated possible improvements. We carried out a cluster analysis with all observed variables, and the resulting constructs had better values for reliability and model fit. Constructs for expectations and perceptions clustered differently. Expectations approximated the contextualized SERVQUAL, while the constructs for perceptions were different and emphasized three aspects:

  1. Understanding of requirements,
  2. Engaging delivery, and
  3. Reliable outcomes.

This is a development of the current approach, noted in both the standards and in examples of current practice, which focuses on outcomes only. This focus on outcomes may be appropriate for some projects, but it is causing the lack of customer satisfaction noted in the management problem at the outset of this work.

The findings further suggest that some of what is expected is not relevant to perceptions of quality in business reality. At the beginning of a program or project, uncertainty is higher, and therefore expectations emphasize the soft aspects of the relationship, such as accessibility and even courteousness. As projects and programs become a daily reality, tangible aspects such as zero defects, error-free records, and having knowledge become more relevant. Thus, given that IT-enabled change is often delivered over an extended period, one explanation for the difference is the time delay between the setting of expectations and the forming of perceptions concerning the services. This fundamentally questions the basic gap-model approach to managing service quality in this context. Due to the evolving nature of the service and its requirements, a more dynamic model is required. For the practice of managing projects, this means that project managers should make a habit of revisiting key performance indicators with the client to understand whether these are still aligned with the envisaged benefits of the project or program. Governance has an important role to play in this process, ensuring that the key performance indicators are indeed revisited. Ideally, project leaders would explore changes in key performance indicators and strive for new business opportunities. This broadens the scope of, for instance, training on stakeholder management, which should also involve the capture and active management of the client's perception of quality. The model proposed here could be used as a baseline for project leaders' reflection and for training in managing the quality of projects beyond mainstream quality management.

Areas for Further Research

For project and program management researchers, there is considerable potential for researching the management of quality and the achievement of customer satisfaction. Further studies should continue to explore the attributes required of complex programs within different sectors, with a view to constructing a dynamic model of quality. In this study, we explored the view of project management and of people involved on the supplier side of the project. This was because clients are usually also involved in repetitive operations, which might color the development of a definition of quality specifically for projects and programs. However, we acknowledge that there is still a gap in exploring how the client articulates quality in project and program contexts. In addition, the assumed link between quality performance and customer satisfaction does need testing.

Finally, because this is a service, the customer is embedded in the delivery process and therefore is not an objective arbiter of quality. The revised conceptualization of quality, based in particular on understanding needs and engaged delivery, requires a different role for clients in programs.

References

Ahire, S., Waller, M., & Golhar, D. (1996). Quality management in TQM versus non-TQM firms: An empirical investigation. International Journal of Quality and Reliability Management, 13(8), 8–27.

Arbuckle, J. L. (2007). Amos 16.0 user's guide. Chicago: SPSS Inc.

Association for Project Management. (2006). APM body of knowledge (5th ed.). High Wycombe, UK: Association for Project Management.

Benjamin, R. I., & Levinson, E. (1993). A framework for managing IT-enabled change. Sloan Management Review, 34(4), 23–33.

Boaden, R. (1997). What is total quality management…and does it matter? Total Quality Management, 8(4), 153–171.

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long, (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage Publications.

Chase, R., & Apte, U. (2007). A history of research in service operations: What's the big idea? Journal of Operations Management, 25(2), 375–386.

Cicmil, S. (2000). Quality in project environments: A non-conventional agenda. International Journal of Quality and Reliability Management, 17(4–5), 554–570.

Cronin, J. J., Jr., & Taylor, S. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56(3), 55–68.

Cronin, J. J., Jr., & Taylor, S. A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58(1), 125–131.

de Ruyter, K., Bloemer, J., & Peeters, P. (1997). Merging service quality and service satisfaction: An empirical test of an integrative model. Journal of Economic Psychology, 18(4), 387–406.

Dotchin, J., & Oakland, J. (1992). Theories and concepts in total quality management. Total Quality Management, 3(2), 133–145.

Elliott, K. M. (1994). SERVPERF versus SERVQUAL: A marketing management dilemma when assessing service quality. Journal of Marketing Management, 4(2), 56–61.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.

Garvin, D. (1984). What does product quality really mean? Sloan Management Review, 26, 25–43.

Garvin, D. (1992). Operations strategy, text and cases. Englewood Cliffs, NJ: Prentice Hall.

Grapentine, T. (1998). The history and future of service quality assessment. Marketing Research, 10(4), 4–20.

Grönroos, C. (1984). A service quality model and its marketing implications. European Journal of Marketing, 18(4), 36–44.

Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2009). Multivariate data analysis: A global perspective (7th ed.). Upper Saddle River, NJ: Pearson.

Harvey, J. (1998). Service quality: A tutorial. Journal of Operations Management, 16(5), 583–597.

Haywood-Farmer, J. (1988). A conceptual model of service quality. International Journal of Operations and Production Management, 8(6), 19–29.

Jöreskog, K. G., & Sörbom, D. (1984). LISREL-VI user's guide (3rd ed.). Mooresville, IN: Scientific Software.

Kang, G., & James, J. (2005). Service quality dimensions: An examination of Grönroos's service quality model. Managing Service Quality, 14(4), 266–277.

Lehtinen, U., & Lehtinen, J. (1991). Two approaches to service quality dimensions. The Service Industries Journal, 11(3), 287–303.

Linhart, H., & Zucchini, W. (1986). Model selection. New York: John Wiley and Sons.

Maister, D. H. (1993). Managing the professional service firm. New York: The Free Press.

Maylor, H. (2000). Strategic quality management. In L. Moutinho (Ed.), Strategic management in tourism. Wallingford, UK: CABI Press.

Maylor, H. (2005). Project management (3rd ed.). London: FT Prentice Hall.

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Office of Government Commerce. (2005). Managing successful projects with PRINCE2. Norwich, UK: The Stationery Office.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41–50.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1994). Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research. Journal of Marketing, 58(1), 111–124.

Philip, G., & Hazlett, S. (1997). The measurement of service quality: A new P-CP attributes model. International Journal of Quality and Reliability Management, 14(3), 260–286.

Pitt, L. F., Watson, R. T., & Kavan, C. B. (1997). Measuring information systems service quality: Concerns for a complete canvas. MIS Quarterly, 21(2), 209–221.

Powell, T. (1995). Total quality management as competitive advantage: A review and empirical study. Strategic Management Journal, 16(1), 15–37.

Project Management Institute. (2008). A guide to the project management body of knowledge (PMBOK® Guide) (4th ed.). Newtown Square, PA: Project Management Institute.

Rust, R., & Oliver, R. (1994). Service quality: Insights and managerial implications from the frontier. In R. T. Rust & R. L. Oliver (Eds.), Service quality: New directions in theory and practice (pp. 1–19). Thousand Oaks, CA: Sage Publications.

Samson, D., & Terziovski, M. (1999). The relationship between total quality management practices and operational performance. Journal of Operations Management, 17(4), 393–409.

Shewhart, W. (1931). Economic control of quality of manufactured product. New York: Van Nostrand Reinhold.

Sousa, R., & Voss, C. (2002). Quality management revisited: A reflective review and agenda for future research. Journal of Operations Management, 20(1), 91–109.

Spencer, B. (1994). Models of organisation and total quality management: A comparison and critical evaluation. Academy of Management Review, 19(3), 446–471.

Teas, R. K. (1993). Expectations, performance evaluation, and consumers' perceptions of quality. Journal of Marketing, 57(4), 18–34.

Turner, J. R. (2000). Managing quality. In J. R. Turner & S. J. Simister (Eds.), Gower handbook of project management (3rd ed.). Aldershot, UK: Gower.

Van Dyke, T. P., Kappelman, L. A., & Prybutok, V. R. (1997). Measuring information systems service quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quarterly, 21(2), 195–208.

Winch, G., Usmani, A., & Edkins, A. (1998). Towards total project quality: A gap analysis approach. Construction Management and Economics, 16(2), 193–207.

Zwikael, O., & Globerson, S. (2007). Quality management: A key process in the service industries. The Service Industries Journal, 27(8), 1007–1020.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2010 Project Management Institute
