Development of a metric program

This article is copyrighted material and has been reproduced with the permission of Project Management Institute, Inc. Unauthorized reproduction of this material is strictly prohibited.

In late 2001, JPMorgan Chase adopted an initiative to move to CMM® (Capability Maturity Model) level 2. Our organization, Chase Cardmember Services, began the initiation process and set a time frame of no more than 24 months to achieve this goal.

A process team was formed and organized around each of the six key process areas (KPAs) defined in the Guidelines text (SEI, 1994), and a process owner was identified for each. Subteams were then selected to define how we would implement CMM.

During this same time, a small group composed of the head of the CMM initiative, the head of the Productivity and Quality group, and the CMM consultant was meeting to identify the metrics that would be used to measure the process and the projects within it. The group used chapter 7 of the Guidelines text (SEI, 1994) as a guide, which not only specifies the goals of the Repeatable level but also suggests metrics that can be used to measure the activities. They also examined the metrics already in place to validate their usefulness.

Existing Processes

Prior to the decision to adopt the CMM level 2 initiative, a few areas of concern had been identified.

The first, and most problematic, concerned requirements.

Project requirements were, and are, a large issue. Many of our “final” approved, signed-off requirements were laced with TBDs. In other cases, the requirements were little more than an expanded version of the business case, no more than one or two paragraphs.

Scope creep was also a major concern. Changes were being made to projects at various stages without being communicated to the project team. We instituted a change control process in an effort to control this, which was also a good practice. However, the change control team consisted only of the business requester and the project manager; if they agreed to a change, it was accepted. There was still no process for reviewing or communicating the change.

Before we began this initiative, there were only two project metrics in place: projects delivered on time and projects delivered on budget. These are both very sound starting metrics; however, the method used to determine “on time/on budget” was a problem. Our process and project repository allowed the project manager to change the scheduled completion date without explanation or recourse. This then became the new schedule, giving us an on-time rate of over 95% (some projects were too visible to reschedule).

 

Begin in the Beginning

The primary emphasis of the initiative was correcting our requirements issues. We spent the greatest amount of time developing and defining the project plan.

The project plan, as we have defined it, provides the four W’s, plus how:

  • Who – Who are the participants and stakeholders in the project
  • What – What documents are to be produced
  • When – What is the timeframe and schedule for the project
  • Why – Why should we work on this project to produce its expected deliverable
  • How – What schedules, deliverables, success criteria, and change control process will be used.

These were all concepts that the project managers would swear they were following and using; however, they were not in place in our organization.

We also introduced the concept of requirements traceability. Again, this was a concept that “was being” used; however, there was no documentation or consistency when it was used.

These two documents caused the greatest amount of discussion among the project management staff, primarily because they saw that the burden of producing them fell on them, with no benefit. At this point the documents are being produced and followed grudgingly, but a few proponents within the ranks have begun to foster acceptance.

Project Life Cycle

We developed our first formal Project Delivery Life Cycle (PDL) in the fall of 2000. This was our first attempt at defining how our organization would develop software projects – a challenge following two company mergers and the need to organize five delivery groups in three states. While this met with limited success, it did lay the groundwork for a more organizationally supportive process, begun in early 2002, that mapped to the new “should be” life cycle in support of CMM.

The revised PDL retained much of the original, added more specific requirements in some phases, and reduced the detail in others.

There were two large changes in this new PDL. The first was the inclusion of the CMM-required phase assessments by an independent SQA (Software Quality Assurance) group. These definitions specified when the assessments would be conducted (at the end of each phase for medium-to-large projects, less frequently for smaller maintenance projects) and what would be reviewed (the documents required at each phase of the process). The other large change was that the PDL also defined the process whereby the assessments and documents would be mutually agreed upon by the project manager and the SQA analyst at the beginning of the project, based upon the parameters of the project; for example, if all of the work was in house, there would be no need for subcontract management reviews.

The PDL also defined the change control process for requirements. As previously stated, our then-existing change process consisted of the customer requesting a change and the project manager either agreeing or disagreeing; no one else was involved in the review.

The new process requires that a change control board be defined at the beginning of the project and that its chair be identified. Newly developed forms were also adopted and are being used to monitor the progress and status of change requests.

Metrics

Once the process improvement teams were identified and running, we began to look at the metrics that would be used to report on the process, not only as required by CMM but also to measure the improvements we were looking for.

We again used chapter 7 of the Guidelines text (SEI, 1994) as the primary source. We reviewed each of the metrics suggested in the text and reduced the list to just over 20 potential metrics that we found meaningful.

These were then specified in greater detail and reviewed for meaningfulness. We are firm believers in “less is better,” so we further reduced this list to a final set of 12 metrics: five project metrics, five process conformance metrics, and two specifically for subcontractor performance. We also set a policy that did not limit projects to these metrics, but allowed other project measures at the discretion of the project manager.

On a regular basis, we report two types of metrics. Project metrics, specific to each project, are produced at the end of each phase. These metrics are to be reviewed with the SQA analyst assigned to the project, the project manager’s manager (a relationship manager in our environment), and the project team. These metrics are also posted to our project repository for history and archival.

The SQA Manager is also responsible for reporting on Process Metrics. These are reported on a monthly basis and include the number of process non-conformance issues that were identified during the month, as well as data on the number of projects and documents that were scheduled and reviewed.

Project Metrics

Five project metrics have been defined for all projects; they are the responsibility of the project manager (PM). A minimal calculation sketch covering all five appears after the list below.

  1. Effort Level by Phase (Exhibit 1)
    This metric allows the project manager to examine where time is being spent on his/her project. Depending upon the group reporting time, there should be observable patterns, and any variation from these patterns should be an indicator for the project manager to examine. These patterns vary by role: a project manager, for example, should spend more time in the initiation phase, while developers spend the bulk of their time in the construction phase.
    Exhibit 1

  2. Project Variances (Exhibit 2)
    Project variances provide a visual indication of the deviation of actual dates and effort from the plan. This metric shows variances for the actual completion date of the project, the overall duration, and the total effort expended versus plan. If there are external resources (subcontractors, consultants) or any hardware/software purchases involved, these are also tracked.
    The variance data is computed not only at the end of the project but at each phase, allowing the PM to be aware of the variances and take appropriate action.
    Exhibit 2

  3. Productivity Rate
    Productivity rate tracking is a new measure that we have identified, but for which we have no history; at this time we are gathering the data for future analysis.
    We calculate productivity by taking the total effort for each phase (or the effort to date for the project-level metric) and dividing it by the size of the project. In our case we have selected function points for the sizing.
  4. Defect Counts (Exhibit 3)
    There are several definitions of “defect” that can be applied to project data. For the purposes of this metric, we have defined defects as code delivered by the developers that does not function as expected. While zero defects would be ideal, we are not assuming this to be the case.
    We measure defects at three points. Systems test is performed by the internal testing group within IT; this is the first group to receive new or changed code, before any of the business areas. We anticipate that most defects will be found in this round of testing.
    The next level of testing is user acceptance testing. The goal established here is that this group should concentrate on usability, since the functionality has already been tested and approved against the original requirements.
    The final category of defects is production. This includes any issue found after all other testing has been completed and the product has been distributed to our internal customers. The expectation here is that there will be no production defects.
    Exhibit 3

  5. Function Point Counts (Exhibit 4)
    In order to properly compare projects, we need a common denominator, usually identified as “size.” We have selected function points as our sizing method.
    We count function points at three times in our process. The first count is taken upon completion of the initial project analysis phase, once the final requirements have been approved and submitted and IT has completed its original analysis of them.

    The second count is taken upon completion of design, as the project is handed to the developers for construction.

    The final count is done at the end of the project, upon the final delivered product.
    These three measures allow us to identify changes in scope during the later phases of the project and to examine the effect of these scope changes on the project schedule and plan.

    This final count (which we call the level 5 count) is used for all other measures that rely on “size.”

    Exhibit 4
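The arithmetic behind these five measures is straightforward. The following is a minimal sketch in Python; every name, phase, and figure is purely illustrative, and nothing here is drawn from our actual workbook or project data.

    # Minimal sketch of the five project metric calculations described above.
    # All names and sample figures are illustrative only.
    from collections import Counter

    # Effort Level by Phase (metric 1): hours reported against each phase.
    effort_by_phase = {"Initiation": 80, "Analysis": 220, "Design": 310,
                       "Construction": 900, "Implementation": 140}
    total_effort = sum(effort_by_phase.values())
    effort_profile = {phase: hours / total_effort for phase, hours in effort_by_phase.items()}

    # Project Variances (metric 2): actual versus plan.
    planned_effort, planned_duration_days = 1_500, 120
    actual_duration_days = 131
    effort_variance = total_effort - planned_effort           # positive means over plan
    duration_variance = actual_duration_days - planned_duration_days

    # Productivity Rate (metric 3): total effort divided by project size in function points.
    function_points_final = 240                               # the final ("level 5") count
    productivity_rate = total_effort / function_points_final  # hours per function point

    # Defect Counts (metric 4), tallied at the three test stages.
    defects = Counter({"Systems Test": 14, "User Acceptance Test": 3, "Production": 0})

    # Function Point Counts (metric 5) at the three counting points; growth between
    # counts signals scope change after the requirements were approved.
    fp_counts = {"Requirements": 210, "Design": 225, "Delivered": function_points_final}
    scope_growth_pct = 100 * (fp_counts["Delivered"] - fp_counts["Requirements"]) / fp_counts["Requirements"]

    print(f"Effort profile: {effort_profile}")
    print(f"Effort variance: {effort_variance} h, duration variance: {duration_variance} days")
    print(f"Productivity: {productivity_rate:.2f} h/FP, scope growth: {scope_growth_pct:.1f}%")
    print(f"Defects: {dict(defects)}")

In our case, the equivalent calculations live in the protected Calculations worksheet of the Excel workbook described below.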

Selling the Metrics

Once we had the metrics defined and agreed to, we had to get the project managers to accept them. Most project managers did not “see” the benefit of metrics and questioned having to do the extra work. Their feeling was that metrics were only used by senior management to show how well the process was working. There were frequent comments from the PMs such as, “I’ve been a project manager for years and never used metrics; why do I need to start now?”

To overcome this, we developed a multi-staged effort.

The first step was to identify internal practitioners who were already using metrics on their projects. These people were identified through analysis of the usage of the metrics reports being collected. We also received recommendations from managers of project managers who used metrics in their status reporting, and we selected those observed using metrics in team meetings or status reports.

Once we had identified the practitioners, we set up a series of lunch-and-learn sessions. In these sessions, one or more of the identified people explained how they used metrics and how metrics helped them with their projects. We also opened up the panel to discussion of each metric and how it can be used, and we provided Q&A sessions. Each question was logged and posted in a Q&A section of the metrics portion of the project website for reference.

The practitioners were also identified as resources that the other project managers could go to for mentoring and advice if needed.

Data Collection

We use Excel as our data collection tool. We selected Excel for a number of reasons.

  • Everyone had it, so no additional cost
  • Easier distribution of the collection mechanism
  • More people were familiar with Excel than with any newer tool, which would ease the training process; Excel would be less intimidating.

An Excel workbook containing five separate worksheets was created and posted to our primary project intranet site. Each page is protected so that data can only be entered into the data areas, preserving the calculations that were built in.
The worksheets are:

  • Instructions – gives detailed instructions on how to use the workbook and where the source for each of the data elements can be found.
  • Schedule – provides a guide on which data/metrics are to be completed at each phase during the project cycle.
  • Data – the primary form for entering data (Exhibit 5). This form is divided into sections for each metric; a rough sketch of this bookkeeping appears after Exhibit 5.

    Estimates are the original project estimates for effort, taken at three stages in the project (time of request, following analysis, and following design). These estimates are further broken down by the group that could be working on the project. This allows us to measure our effectiveness in providing “expert analysis” and supports future estimating based upon size.

    Actual Effort is the amount of time expended by each group on the project. The time entered is the cumulative project-to-date total for each group as of the end of the phase. As the time is entered, a usage comparison is produced showing how much of the estimated time has been used to date.

    Schedule allows the PM to enter the original planned phase dates and the actual dates as they occur. For the purposes of our tracking, we do not consider a phase date committed until after the design phase is complete.

    Expenses provides a place to enter any external budget requirements, such as hardware or consultant expenses. It contains both planned and actual amounts and is used in the variance reporting.

    Function Points is a place to record the size counts developed.

    Number of Defects is a place to record the development defects identified.

    Scope Change is a relatively new measure, used to track the number of requirements change requests submitted and approved during the project. The tracking is done by phase to allow us to determine when the changes were identified.

    Document Impact is a future measure that will allow us to demonstrate the additional effort needed to identify and track change requests after the requirements documents are complete, resulting in rework of those documents.

  • Charts – contains all of the charts that can be used to track and monitor the project, based upon the data collected. These charts are created automatically as the data is entered and are formatted to allow the PM to print them on a single page.
  • Calculations – contains all of the interim calculations needed to produce the charts and statistics for the project. This page is fully protected and cannot be modified by the PM.
Exhibit 5
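As a rough illustration of the bookkeeping performed on the Data worksheet, the sketch below shows the estimate/actual pairing and the derived usage comparison; the group names, figures, and identifiers are hypothetical and do not come from the actual workbook.

    # Rough sketch of the Data worksheet bookkeeping: cumulative actuals per group are
    # compared against the original estimates to produce the usage comparison.
    # Group names and numbers are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class GroupEffort:
        estimate_hours: float              # original estimate for this group
        actual_hours: float = 0.0          # cumulative project-to-date actuals

        @property
        def usage_pct(self) -> float:
            """Share of the estimated time used so far."""
            return 100 * self.actual_hours / self.estimate_hours if self.estimate_hours else 0.0

    @dataclass
    class ProjectData:
        groups: dict[str, GroupEffort] = field(default_factory=dict)

        def record_actual(self, group: str, cumulative_hours: float) -> None:
            # The worksheet stores cumulative totals as of each phase end, not increments.
            self.groups[group].actual_hours = cumulative_hours

        def usage_comparison(self) -> dict[str, float]:
            return {name: g.usage_pct for name, g in self.groups.items()}

    project = ProjectData({"Development": GroupEffort(800), "Systems Test": GroupEffort(200)})
    project.record_actual("Development", 650)   # as of the end of the construction phase
    project.record_actual("Systems Test", 40)
    print(project.usage_comparison())           # {'Development': 81.25, 'Systems Test': 20.0}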

We maintain a data dictionary for every metric that is in use within our organization. The same format is used for each metric for the sake of consistency.

The data dictionary provides a definition of each metric, the calculation or formula used to create it, the source of the data, and who is responsible for maintaining it. A sample graph or chart is also provided (Exhibit 6).

Exhibit 6
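For illustration only, a data dictionary entry for the Productivity Rate metric might be captured as in the sketch below; the field values are paraphrased from the descriptions above, and the structure and names are hypothetical rather than taken from our actual dictionary.

    # Hypothetical shape of one data dictionary entry (values paraphrased, not verbatim).
    productivity_rate_entry = {
        "metric": "Productivity Rate",
        "definition": "Effort expended per unit of project size",
        "formula": "total effort (hours) / project size (function points)",
        "source": "project data workbook (effort and function point counts)",
        "owner": "project manager",
        "sample_chart": "hours per function point, by phase and by project",
    }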

Lessons Learned

So, as with any initiative, what did we learn from this process?

First, we learned that project managers don't like change. Big surprise there! We attempted to overcome this with the approaches described in “Selling the Metrics”: training and the use of internal resources who were already using metrics.

Second, the process used for the initial rollout needed revision. Not all of the IT managers believed that metrics were important and critical to the process; they had read the books and knew metrics were required, but felt they did not need much attention.
Metrics were not integrated into the first rollout of the process. They were not considered key to the process and were handled as a secondary element; they were not included in the primary team meetings and were always covered in one-off sessions.
Metrics were also not part of the original training provided to the project managers or project team members; they were not even mentioned in the curriculum as a separate course. When the metrics training was rolled out (concurrently with the primary training, but separately), there was a great deal of pushback from the project managers: “Why weren't we told about this in the first training?” This was resolved in the second round of project team training, so that practices and metrics are now combined into one session. Separate training on metrics is available for those who desire a more in-depth experience.
Result: we failed our first assessment for level 2 in December. One of the key findings was that metrics were not institutionalized. This “failure” resulted in greater scrutiny of the metrics involvement and rollout.

Following this original assessment, greater attention was given to the need for training and the “selling” of metrics. There is now a greater emphasis on metrics throughout the organization as a tool to help people manage, not simply something the book says is needed.

Software Engineering Institute (SEI). (1994). The Capability Maturity Model: Guidelines for Improving the Software Process. Indianapolis, IN: Addison Wesley Longman, Inc.

Project Management Institute. (2000) A guide to the project management body of knowledge (PMBOK®) (2000 ed.). Newtown Square, PA: Project Management Institute.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

Proceedings of PMI® Global Congress 2003 – North America
Baltimore, Maryland, USA • 20-23 September 2003
