A simple organizational project quality assessment tool

Lawrence K. Saunders, Quality Manager, Project Management Office, COMPAQ Federal LLC, Professional Services

How does your organization know whether project quality is good? Which projects are on track and which are on a path to failure? Are project quality processes being used effectively throughout the organization? Are there organizational factors that may be negatively affecting project success? Is your organization learning from its successes and failures? How easy is it to get answers to these questions?

In this paper, I describe a process for making simple numeric project quality measurements that can help predict success and allow early corrections to be made, both to troubled projects and to organizational support systems, before failure becomes inevitable. I discuss the factors in our business that led us to recognize the need to quantify project quality and the requirements we identified for the measurement process. I also provide some preliminary results related to improved understanding of organizational success factors and to improved organizational learning.

Background

In 1996, a major COMPAQ (then Digital) customer began to require all system integration suppliers to demonstrate that their software development processes were at SEI-CMM Level Two as a condition of bidding on new projects.

COMPAQ Federal LLC Professional Services provides system integration services to U.S. Federal Government customers. We had always prided ourselves on our quality software development and project management processes. Our worldwide Project Management Office (PMO), responsible for defining processes and techniques for project management and for validating their application to customer projects, had invested heavily in the development of our Quality Project Methodology (QPM). QPM is a collection of standards, methodologies, tools and techniques aimed at promoting best-in-class practices across our project business. Project managers and integration technologists were trained in QPM. Routine monthly project reviews, from initiation through closeout, were a way of life. Project managers and technical leaders had typically been with the company for many years and were well schooled in the guiding principles of QPM. These principles contributed to the high probability of delivering successful projects and led to engagements that delighted our customers. We depended on well-trained people executing best-in-class processes for our excellent results.

We were now faced with the need to prove the effectiveness of our processes to people outside our organization. We embarked on an assessment of our use of QPM against the SEI-CMM Level Two criteria with positive results. However, the exercise provided some unexpected insights.

Project success tended to depend on the experience of the project manager: not so much total years of experience as years of work within our organization. Recently hired project managers with extensive previous experience were not as successful as we expected them to be. We trained new project managers to employ the same QPM processes, subjected their project activities to the same reviews and measured their results by the same standards. We also began to notice that our seasoned project managers, with long years of experience in our organization, were beginning to express frustration that following process seemed to get in the way of project success. What was going on here?

The Problem

It appeared that project factors we didn't understand were eluding our careful reviews. Of course, 100% inspection of a complex human process like project execution is not possible. Whatever else they might be, projects are certainly non-deterministic. Constant invention of solutions to unexpected problems attracts people like us to project management. Our review strategy focused on these opportunities for invention, the out-of-the-ordinary events that were worthy of mention. Project managers routinely reported that things were (or were not) as expected and identified changes in risk probabilities. However, seldom were factors like compliance to process, successful customer quality reviews and similar expected conditions reported, even when they were somewhat off. Because we believed everyone knew the system, we had little reason to verify that people were doing the right thing. The reasons why people were experiencing more problems were not easily discovered in reviews. Violations of process went undetected as long as nothing bad happened.

Around this time, our project business began to grow dramatically, accompanied by an influx of new people, including project managers. We have five times as many project managers today as we did three years ago. Most have excellent credentials in project management, but many are unfamiliar with QPM and other aspects of our organizational culture and infrastructure. In general, they understand the universal language of project management as expressed in A Guide to the Project Management Body of Knowledge (PMBOK® Guide), but are unfamiliar with the specific methods and techniques that have formed the underpinnings of our historical success.

Based on the evidence that newer people were producing poorer results, but that literally everyone was experiencing more problems than in the past, we questioned whether non-compliance to the QPM process might be the primary culprit. There can be many reasons for non-compliance to process.

•   People may not understand a process or why it is important. They may not believe they are held accountable for process compliance. Training may tell us what to do but not why it is important to follow a certain path. We tend to believe that the way we have always done things is still acceptable in a new organization or a new situation. Because we are human, we believe that doing things a different way is harder than doing things the way we already know. We rely on analogy in understanding our world; our first instinct is to assume that something we encounter for the first time behaves like something similar from the past. This is what allows us to get on with our lives in new situations.

•   Processes are specific to an organization. Differences in processes across organizations reflect organizational differences. The environment in which a process develops influences its shape. The role each person is expected to play is shaped by historical as well as practical, contemporary influences. Thus, managing projects in one organization may be quite different from managing them in another. For example, some organizations employ Planners, whose skills focus on the nuts and bolts of embodying a conceptual plan devised by someone else (usually a project manager) into a tool, database or other organizational representation. In other organizations, the project manager has to know how to drive the tool as well as specify the destination and the route to take. In either case, the style of the plan's representation (naming conventions, colors, data columns, etc.) may be important because of the particular way the organization integrates details of the plan with other infrastructure components. The process specifies these interdependencies, often implicitly. The reasons for a particular process activity may not be obvious to someone who is involved in only that process step, but correct execution may be critical somewhere else in the process. A project manager moving from an organization that depends on Planners to one that does not may be quite unprepared for the need to be a hands-on tool person.

•   Organizations change. Processes must adapt. A project manager who has always been able to sidestep some parts of the process and still be successful may no longer have that luxury. This may be especially true when the organizational pace changes or when the personal relationships typical in a smaller organization become less possible as the organization grows. Also, processes that worked at one time may need to be revamped when the situation changes. People tend to abandon processes that no longer contribute to (or may even stand in the way of) their success.

In order to understand whether the process was at fault, whether we needed to do a better job of integrating new people into our processes, or whether organizational changes were placing more stringent process compliance demands on old-timers, we had to have a way of measuring the degree to which the intended process was being followed. This is not a one-part question. There are many aspects of process that are critical to our success. We need to know where deviations occur and to what extent. Then we need to establish correlations between process compliance and success. If we can discover specific areas where compliance is a problem, we can apply remedial action to those key aspects.

Measurement Requirements

Based on our analysis of what we needed to know about the quality of our project management process and on considerations of cost and intrusiveness, we identified some specific requirements for our measurement system:

•   It must be easy to apply with minimum disruption to project activities. One aspect of our QPM methodology is a structured Project Quality Audit. We audit a sampling of projects, sometimes at random and sometimes at predefined milestones for more complex projects. While the audit considers project success factors like staying on schedule and within budget, its primary focus is on compliance to all aspects of the QPM process. Is documentation current and does it meet our standards for organization and completeness? Is there an explicit plan for ensuring quality, and are planned quality checkpoints executed?

However, we couldn't afford to perform these extensive audits on all projects. There is a delicate ecological balance in a project-focused organization between no measurement, which can sometimes produce accidental success but typically leads to quite unpredictable results, and too much measurement, which can predictably produce uniformly zero success because there is no time left for invention after the documents are examined and the reports are written.

•   It must provide a means of characterizing projects relative to each other across the organization. Many organizations (including ours) depend on discovery of trouble events to signal potential project failures. A customer complains that something is not going well. A project manager declares a project “red” because some key milestone was missed. A key contributor threatens to quit because “everything is always screwed up.”

Quality-focused project delivery organizations typically use regular management reviews to attempt to discover these events and thereby identify troubled projects before they risk failure. Building on organizational learning, these reviews employ templates and checklists for status reporting that guide project managers toward identifying problems in key areas known to have caused trouble in the past.

Exhibit 1. Project Quality Factors

These reviews, while important to organizational success, can overlook emerging problems if they are the only means of assessing project status. In a typical review, the status of each project is considered in isolation from other projects. Unless the same reviewer is routinely involved in all project reviews, quality trends across the organization are not easily identified. Organizational factors that may be negatively impacting multiple projects are difficult to recognize.

•   It must examine organizational factors as well as project-specific metrics. We understood early in this effort that project management activities are not the only process components that need to be measured. The performance of infrastructure services like the PMO and the staffing, procurement and finance organizations that support projects is also critical to success.

•   It must allow us to collect data in a uniform, quantitative way that can result in effective organizational learning and lead to measurable process improvement. Most large project-focused organizations have separate quality groups. In addition to performing QA checks, they typically review data accumulated from project reporting and review activities. They attempt to synthesize macro-level learning and disseminate the results in the form of revised templates and checklists, process descriptions and enhancements to training curricula. However, even when the organization can justify this overhead expense, the data are limited to the few events in each project that happen to be reported. Project reviews tend to focus on anomalies: emergencies and special conditions. Usually, there is no mechanism for tracking situations as they develop. A problem is identified and resolved, only to be forgotten in the flood of new emergencies. Lessons that might be applied with positive effect to other projects are hit-or-miss. Only the people directly involved learn from the experience.

An increase in the amount of information available, not just information about abnormalities but quantification of normal conditions with which abnormalities can be compared, is essential to understanding key factors.

The Solution

We have defined a set of quantitative project quality metrics. They are relatively easy to measure and provide meaningful information about the state of individual projects and of the project delivery organization as a whole. They consist of both project-focused and organization-focused measurements, based primarily on key success factors identified by the PMBOK® Guide. The measurements are made in the course of conducting routine monthly project reviews.

Exhibit 2. Organization Quality Factors

In our organization, project review participants typically include the project manager, the business manager and a representative of the PMO. In addition, attendees might include a project technical lead, a financial officer and one or more other participants depending on the size, scope and complexity of the project.

Our project quality measurement system, like our Project Quality Audit, focuses on the degree to which assumed processes are being followed as well as on project success metrics. However, it is simple and non-invasive enough to permit its use frequently throughout the course of a project and across the spectrum of ongoing projects. It has not replaced the audit, but its routine use allows us to spot problems early and to apply remedial action before projects in trouble place our business at risk. It provides quantification of key factors that were traditionally reported only anecdotally in quality assessment reports and project reviews.

The key measurement device is a questionnaire employed at every monthly Project Implementation Review. Over the course of the review, all participants, typically three or four people in addition to the project manager, score the project on the same quality factors employed in Project Quality Audits. Factors are rated on a scale from 1 to 5, where 1 is the best possible score and 5 is the worst. A score of 3 is interpreted as “meets minimum standards in our organization.” Examples of score sheet questions are provided in Appendix A.
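
As a concrete illustration of the rating scheme, here is a minimal sketch, in Python, of how a single score sheet entry might be captured. The record layout and factor names are hypothetical, not our actual form; the real score sheet questions appear in Appendix A.

```python
from dataclasses import dataclass

# Scale per our scheme: 1 = best possible, 5 = worst,
# 3 = "meets minimum standards in our organization."
BEST, MINIMUM_STANDARD, WORST = 1, 3, 5

@dataclass
class FactorScore:
    project: str   # project identifier
    reviewer: str  # recorded, but never published individually
    month: str     # review month, e.g., "2000-03"
    factor: str    # e.g., "scope change management" (hypothetical name)
    rating: int    # 1 (best) through 5 (worst)

    def __post_init__(self):
        if not BEST <= self.rating <= WORST:
            raise ValueError(f"rating must be {BEST}..{WORST}, got {self.rating}")

    def meets_minimum(self) -> bool:
        # Lower is better on this scale.
        return self.rating <= MINIMUM_STANDARD
```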

Data from all participants' score sheets are entered into a database that manages the quality data and provides analysis and reporting capabilities. From this database, trend reports can be generated for each project and for the aggregate of all projects in the organization. Exhibits 1 and 2 provide examples of quality trend reports for project and organizational factors. In order to obtain unbiased assessments of process quality, we are very careful not to publish individual project results or to use the quality ratings as performance indicators to evaluate individuals. Trend reports for individual projects are available to project managers for their own use. The primary use of the quality assessment data is to understand organizational issues that affect project quality and to react to trends before they begin to cause project failure.
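
Assuming score sheets are stored as records like the hypothetical `FactorScore` entries sketched above, the kind of aggregation the tracking database performs might look like the following (the function is illustrative, not our actual tool):

```python
from collections import defaultdict
from statistics import mean

def factor_trend(scores, project=None):
    """Average rating per (month, factor) across all reviewers' sheets.

    Pass project=None for the organization-wide aggregate, or a
    project identifier for a single project's trend report.
    """
    buckets = defaultdict(list)
    for s in scores:
        if project is None or s.project == project:
            buckets[(s.month, s.factor)].append(s.rating)
    # Averaging the three or four reviewers at each review is what
    # damps the subjectivity of any single rating.
    return {key: mean(ratings) for key, ratings in sorted(buckets.items())}
```

Plotting these monthly means per factor yields trend reports like those shown in Exhibits 1 and 2.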

Monthly reviews take no longer now than they did before we began quantifying project quality, typically between 30 minutes and an hour per project. A small additional amount of clerical effort is required to enter data from the score sheets into a project quality tracking database each month, but no additional effort is required of project managers or reviewers.

About two-thirds of the 28 assessment questions on the current survey form relate to project quality factors; one-third relate to organizational factors. There is a single “overall project quality” rating. The concept of correct performance is somewhat subjective: something may appear broken to a project manager who is new to the organization, based on previous experience elsewhere, yet be performing exactly as expected in the eyes of someone who has grown up with it. Summarizing the opinions of a number of people with differing viewpoints and levels of experience tends to minimize the subjectivity of the measurements.

Project quality factors are evaluated against QPM-defined standards and practices. Project factors include such things as:

•   Completeness of the monthly project report, the document that forms the primary agenda for monthly reviews, against our QPM standard template

•   Degree to which variances are documented, explained and managed in accordance with QPM guidelines

•   Management of scope change using our QPM change management methodology

•   Management of internal organizational relationships and dependencies using QPM guidelines

•   Execution of the Quality Plan to date, with activities documented.

Organizational factors focus on the adequacy of the support provided to the project by various internal functional units or agents.

Results

We have been collecting quality assessment data on approximately 15 projects since October 1999. In January 2000, we published score sheet factor guidelines based on feedback from reviewers. Data collected since then shows greater uniformity, although we don't expect to see unanimity; the ratings remain somewhat subjective. Appendix A contains an example of factor guidelines.

We have not yet collected and analyzed enough data to point to specific areas of our project practice that might be contributing to variable results. There have, however, been some positive measurement effects resulting from the structured data collection process. Project managers have become more aware of the process conformance expectations of reviewing managers. This has helped them focus on understanding and executing critical process activities they may have seen as secondary to their responsibility. A number of project managers have asked for assistance in understanding how to do things their managers assumed they already knew how to do. Reviewers representing different interests within the organization are becoming more familiar with the cross-functional nature of project success as they participate in discussion about the ratings.

Results to date have included:

•   No additional time is required to collect the data. Reviewers enter data on their score sheets during the course of the review. Since the data format is uniform, little time is required to record the results in the database.

•   Anonymity of reporting has made it easier to uncover organizational problems that can affect multiple projects. The PMO manages the data analysis and reporting process and we do not publish ratings of individual reviewers or individual project results.

•   Collected data now includes assessment of areas of project performance beyond those related to the current crisis, areas that may have been ignored under previous reporting mechanisms. There is a better opportunity to see the next crisis coming.

•   Although ratings are somewhat subjective (based on the judgment and experience of the reviewers), significant differences over time are becoming easier to identify from run charts as our body of data increases (one common run-chart rule is sketched after this list). It is now possible to focus attention on those factors that have the greatest impact on project success.

•   Project managers have begun to ask to see the results of their review scores. They seem to value the quantified feedback as a tool to help them improve their project practice. While continuing to maintain the anonymity of individual reviewers and to publish only organizational statistics and trends, we make summary trend data available to project managers for their own projects.

•   We are beginning to use the questionnaire as a means of quantifying the results of formal project quality audits. We expect that the data resulting from audits will be more accurate than the data obtained during monthly reviews, since the scope and depth of an audit is more thorough. We plan to test the validity of our review data by comparing review scores for individual projects with their audit scores (a simple correlation check of this kind is sketched below).
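
To make the run-chart reading mentioned above concrete, the sketch below applies one conventional run-chart rule: a run of several consecutive points on the same side of the series median suggests a nonrandom shift. The rule and the seven-point threshold are standard rules of thumb, not something our process prescribes.

```python
from statistics import median

def shift_detected(monthly_means, run_length=7):
    """Flag a sustained shift in a factor's trend: run_length consecutive
    points on the same side of the median (a standard run-chart rule).
    monthly_means is a time-ordered list of average factor ratings.
    """
    center = median(monthly_means)
    run, last_side = 0, 0
    for value in monthly_means:
        side = (value > center) - (value < center)  # +1 above, -1 below, 0 on median
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            return True
    return False
```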
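
And one simple way to carry out the planned review-versus-audit comparison, sketched under the assumption that each project ends up with a mean monthly-review rating and an audit rating on the same 1-to-5 scale (the numbers below are invented purely for illustration):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between paired project scores."""
    n = len(xs)
    assert n == len(ys) and n > 1, "need matched pairs"
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: mean review ratings vs. audit ratings for five projects.
review_means = [2.1, 3.4, 1.8, 2.9, 4.2]
audit_scores = [2.3, 3.1, 2.0, 3.2, 4.5]
print(f"r = {pearson(review_means, audit_scores):.2f}")
# A strong positive r would suggest the lightweight monthly reviews
# track the more thorough audits.
```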

Summary

Managers now have a tool to help them understand what's happening in the face of the increasing complexity of the rapidly growing organization. Before this initiative, they had a collection of apparently unrelated project crisis anecdotes. Now they have a way of differentiating project difficulties that are the result of very specific local events from those that stem from organizational factors. It is these systemic problems that tend to affect the ability of all projects to weather the storms of the inevitable opportunities for invention. As they work to resolve immediate crises, the organizational memory this tool provides will allow managers to also reflect on the big picture. The cost for this capability is negligible.

The concept of adding numerical measurements to routine reviews is quite extensible. In the months we have been using the project quality measurement system, we have identified some additional factors for study and have immediately been able to add them to the survey. People from other parts of our organization have expressed interest in applying this data collection and analysis principle to other aspects of the business where regular anecdotal reviews are already in use.

Notes

SEI-CMM is the Software Engineering Institute's Capability Maturity Model. For more information, visit the SEI web site at Carnegie Mellon University (http://www.sei.cmu.edu/).

Appendix A. Score Sheet Example

Example Score Sheet Questions


Example Score Sheet Question Guidelines


Proceedings of the Project Management Institute Annual Seminars & Symposium
September 7–16, 2000 • Houston, Texas, USA
