Project portfolio management

what we did, how it worked, and why it was worth it

Introduction

In the fourth quarter of 2001, after considerable industry research, our firm took the first steps in implementing a project portfolio management process within our Information Systems (IS) practice. We recognized the value of having a repository of projects in order to track project health, avoid redundancies, better allocate our resources, and be better able to take proactive action when trends within the portfolio indicated that action was necessary. We also wanted an opportunity to determine whether or not we were doing the “right” projects; i.e., the projects that best met our firm's strategic goals and objectives, as well as supporting the “bottom line.”

Sometimes change has to happen slowly. A number of factors made an immediate implementation of full-blown project portfolio management virtually impossible: inadequate project financial reporting systems, a project management culture that was still in its infancy, the position of our IS group primarily as an “order taker” within the firm (making the selection or rejection of requested projects a non-issue for IS), and a reluctance among senior management to address too many issues at one time. In order to have any chance at success, we would have to implement project portfolio management one step at a time.

We chose project health tracking as our first step, since it required the least culture change and would (we hoped) deliver concrete value quickly. We based our process upon a project portfolio management model that we developed in-house, which in turn was based upon industry best practices. We subsequently customized and fine-tuned the process to accommodate the specific needs of our organization.

The process was initially rolled out within our IS practice and our first project portfolio consisted of high priority/high risk IS projects that were selected by our CIO and the IS leadership team. The process quickly expanded to manage other project portfolios within the IS area—our IS managers recognized the potential benefits in having their projects tracked in a consistent manner. Our Project Office team assumed the responsibility for managing the various project portfolios.

The Model—What Did We Do?

The model we developed was based upon our internal needs and consisted of a multi-step, iterative, consistent process, as follows:

•  Define the metrics to be tracked

•  Weight the metrics (and weight the “bottom line,” i.e., total metrics for an entire project)

•  Create a project prioritization tool to assist in determining which projects qualify for the various project portfolios

•  Communicate the entire process to project managers and project sponsors

•  Create a metrics collecting system

•  Create a metrics reporting system

•  Create a “project health review” process to verify the metrics being reported

•  Devise a periodic review mechanism—of the metrics and the process.

Defining the Metrics

We initially thought this would be the easy part. Having a group of experienced project managers in our Project Office unit, we were able to define a very complex set of approximately thirty metrics, using industry standards, research, and our own experience in managing projects. Fortunately, we then considered our audience. Had we not taken the time to do this, our portfolio management initiative would have been a dismal failure. Quality assurance always beats quality control!

The audience spanned various levels of project management experience—ranging from “none” to “advanced”—and the “advanced” project managers were in the definite minority. Based upon that knowledge, we then took another look at the metrics we would be asking our project managers to report each month. Adopting the tried and true KISS method, we reduced the number of metrics from 30 to 11. This was done through a very painful process of give and take, and included input from upper-level IS management. Our first lesson learned: consider your audience.

Had we gone with the set of 30 complex metrics, most project managers would have been lost, overwhelmed, or both. Our objective was to learn to crawl first and walk later. So what did we end up with?

The 11 metrics we settled upon covered what we felt were the “must have” measures, tied to the minimal “must have” project management artifacts. Most of our project managers are just learning how to really “manage” a project. We always produce quality results, but often at a price; project management reduces that price. So the metrics rolled out on day one were:

•  Has an approved scope document been created?

•  Has a project schedule been created?

•  What is the milestone hit ratio for this reporting period?

•  Are there any deliverables past due?

•  How many open issues exist?

•  How many unresolved open issues older than 30 days exist?

•  How many scope changes during this reporting period?

•  Was there any unplanned turnover during this reporting period?

•  Is the project on schedule?

•  Has the project end date changed during this reporting period?

•  What is the project budget risk?

Obviously, some of these metrics are simple—a yes, a no, a count, or a percentage. We made sure that the three legs of the project management triangle (scope, schedule, cost) were represented. They were designed to get our project managers thinking about some of the most critical project management artifacts. We also designed them to give us, as portfolio administrators, some “checks and balances,” or red flag indicators. For example, if a project manager reports that a project schedule exists but a project scope document has not yet been created, a large red flag is raised. Another example: if the project manager indicates that the project is on schedule yet there are deliverables past due and/or the milestone hit ratio is suspect, another large red flag is raised. How to deal with these issues became another lesson learned, which is discussed later.
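To make these cross-checks concrete, the following is a minimal sketch (in Python) of how red flag indicators like the two examples above could be automated. The field names (has_scope, on_schedule, and so on) are hypothetical stand-ins for the monthly questions, not part of our actual tooling.

```python
# Minimal sketch of the red flag cross-checks described above.
# Field names are hypothetical; they simply mirror the monthly questions.

def red_flags(report: dict) -> list[str]:
    """Return a list of red flag warnings for one monthly project report."""
    flags = []

    # A project schedule without an approved scope document is suspect.
    if report["has_schedule"] and not report["has_scope"]:
        flags.append("Schedule exists but no approved scope document")

    # "On schedule" is hard to believe alongside past-due deliverables
    # or a weak milestone hit ratio.
    if report["on_schedule"] and (
        report["deliverables_past_due"] or report["milestone_hit_ratio"] < 0.80
    ):
        flags.append("Reported on schedule despite past-due deliverables "
                     "or a low milestone hit ratio")

    return flags


example = {
    "has_scope": False,
    "has_schedule": True,
    "on_schedule": True,
    "deliverables_past_due": True,
    "milestone_hit_ratio": 0.72,
}
print(red_flags(example))  # both warnings fire for this report
```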

Please note that there were initially no metrics related to quality, risk, and a few other critical project management knowledge areas. Again we were learning to crawl first.

Once the metrics were defined and whittled down to the critical few, it was decided that not all the metrics carried the same weight. Hence, our next step.

Weighting the Metrics

This was another painful process. When you gather six or seven experienced project management people, each with a passion for a different aspect of project management, it becomes difficult to reach consensus. Some wanted to give project scope the most weight. Some wanted project schedule to carry the most weight. Others, milestone hit ratios or budget risk. After considerable give and take, we settled upon a weighting system that all could support (not necessarily agree with 100%, but support). Our final take was “we have to start with something—let's start here and modify it as we get smarter.” We didn't realize how quickly we would get smarter!

These are the weights (and scoring) that we applied to our initial set of metrics:

•  Has an approved scope document been created? Yes = 0.0, No = 2.0

•  Has a project schedule been created? Yes = 0.0, No = 2.0

•  What is the milestone hit ratio for this reporting period? 90-100% = 0.0, 80-89% = 1.0, <80% = 2.0

•  Are there any deliverables past due? No = 0.0, Yes = 1.0

•  How many open issues exist? <5 = 0.0, 5-15 = 0.5, >15 = 1.0

•  Are there any unresolved open issues older than 30 days? No = 0.0, Yes = 1.0

•  How many scope changes during this reporting period? 0 = 0.0, 1-5 = 0.5, >5 = 1.0

•  Was there any unplanned turnover during this reporting period? No = 0.0, Yes = 0.5

•  Is the project on schedule? Yes = 0.0, No = 0.5

•  Has the project end date changed during this reporting period? No = 0.0, Yes = 0.5

•  What is the project budget risk? <10% = 0.0, 10-20% = 0.25, >20% = 0.5

We also quickly came to the realization that a project might “look bad” when it first starts, because the answer to the scope and schedule questions would be “No.” We opted to collect some generic information for each project as well—including project start date and anticipated end date. Obviously, we also collected project manager and project sponsor names. By looking at the project start date, it could be readily determined that a project scope and/or schedule may be “in progress”—which is perfectly fine. However, if a six-month duration project still had the answer “No” to either or both of those questions four months into the project, another red flag would be raised.
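The elapsed-duration check lends itself to the same treatment. Below is a sketch only, with invented field names and a 50% threshold chosen purely for illustration; the point is that a missing scope or schedule is acceptable early on but becomes a red flag once a large share of the planned duration has passed.

```python
from datetime import date

# Hypothetical sketch of the elapsed-duration check: a missing scope or
# schedule is fine early in a project, but is flagged once more than
# `threshold` of the planned duration has elapsed.

def missing_artifact_flag(start: date, planned_end: date, today: date,
                          has_scope: bool, has_schedule: bool,
                          threshold: float = 0.5) -> bool:
    """Flag the project if scope or schedule is still missing past the threshold."""
    total_days = (planned_end - start).days
    if total_days <= 0:
        return False
    fraction_elapsed = (today - start).days / total_days
    return fraction_elapsed > threshold and not (has_scope and has_schedule)

# Four months into a six-month project with no approved scope: flagged.
print(missing_artifact_flag(date(2002, 1, 1), date(2002, 7, 1), date(2002, 5, 1),
                            has_scope=False, has_schedule=True))  # True
```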

Based upon the weighting system we adopted, the lower the numbers, the better shape a project is in. It may be painfully obvious now (hindsight is 20/20), but some of our weighting was arbitrary and caused some consternation among project managers later. The first example was the number of open issues. For a large, complex, high-risk project, 50 or more open issues may not constitute a red flag. However, 50 open issues in a simple, two-month duration project would be unacceptable. The same issue applies, maybe to a lesser degree, to some of the other metrics, such as number of scope changes. As we progressed into the collection and reporting processes, this reared its ugly head.

A total project score is obtained by adding the weights of the 11 metrics based upon the input provided by the project managers. Our initial scoring system for the total project was as follows:

•  Project is healthy (green) = 0.0 - 3.0

•  Warning—potential problems (yellow) = 3.1 - 5.0

•  Problem project—immediate action required (red) = > 5.0

We designed the scoring/weighting system so that a project that is in trouble can climb out of the hole in the next reporting period, or the one after that.
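For readers who want to see the arithmetic end to end, the sketch below scores a single monthly report against the initial weights and maps the total to the dashboard colors. The weights and thresholds are the ones listed above; the metric keys and the sample report are invented for illustration.

```python
# Illustrative scoring of one monthly report against the initial weights.
# The weight tables mirror the bullet list above; the metric keys are hypothetical.

METRIC_WEIGHTS = {
    "scope_approved":        lambda v: 0.0 if v else 2.0,
    "schedule_created":      lambda v: 0.0 if v else 2.0,
    "milestone_hit_ratio":   lambda v: 0.0 if v >= 0.90 else 1.0 if v >= 0.80 else 2.0,
    "deliverables_past_due": lambda v: 1.0 if v else 0.0,
    "open_issues":           lambda v: 0.0 if v < 5 else 0.5 if v <= 15 else 1.0,
    "issues_over_30_days":   lambda v: 1.0 if v else 0.0,
    "scope_changes":         lambda v: 0.0 if v == 0 else 0.5 if v <= 5 else 1.0,
    "unplanned_turnover":    lambda v: 0.5 if v else 0.0,
    "on_schedule":           lambda v: 0.0 if v else 0.5,
    "end_date_changed":      lambda v: 0.5 if v else 0.0,
    "budget_risk":           lambda v: 0.0 if v < 0.10 else 0.25 if v <= 0.20 else 0.5,
}

def project_health(report: dict) -> tuple[float, str]:
    """Sum the metric weights and map the total to the dashboard color."""
    score = sum(weight(report[name]) for name, weight in METRIC_WEIGHTS.items())
    if score <= 3.0:
        return score, "green"
    if score <= 5.0:
        return score, "yellow"
    return score, "red"

report = {
    "scope_approved": True, "schedule_created": True, "milestone_hit_ratio": 0.85,
    "deliverables_past_due": True, "open_issues": 8, "issues_over_30_days": True,
    "scope_changes": 2, "unplanned_turnover": False, "on_schedule": False,
    "end_date_changed": False, "budget_risk": 0.12,
}
print(project_health(report))  # (4.75, 'yellow')
```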

The Project Prioritization Tool

We developed and utilized a simple project prioritization tool, based upon weighted answers to 13 separate business driver questions, including questions related to ROI, risk, and strategic value. This tool was utilized by some areas to select projects to be included in their portfolios. A sample of the priority tool can be made available upon request.
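As an illustration only (the actual tool, as noted, is available upon request), a prioritization score of this kind reduces to a weighted sum of business driver ratings. The drivers, weights, and rating scale below are invented for the sketch and are not the 13 questions we actually used.

```python
# Hypothetical sketch of a project prioritization score: each business driver
# question is rated on a 0 (low) to 5 (high) scale and multiplied by a weight.
# The drivers and weights shown here are invented for illustration only.

BUSINESS_DRIVER_WEIGHTS = {
    "return_on_investment": 3,
    "risk_reduction":       2,
    "strategic_alignment":  3,
    "customer_impact":      2,
    # ...the remaining drivers would be weighted the same way
}

def priority_score(ratings: dict) -> int:
    """ratings maps each driver to a 0-5 rating supplied by the requester."""
    return sum(weight * ratings.get(driver, 0)
               for driver, weight in BUSINESS_DRIVER_WEIGHTS.items())

print(priority_score({"return_on_investment": 4, "strategic_alignment": 5,
                      "risk_reduction": 2, "customer_impact": 1}))  # 33
```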

Communicating the Whole Process

An integral part of the initiative was communicating the whole process—the hows and whys—to management, sponsors, and project managers. Upper management support was critical during this stage of the initiative.

Collecting the Metrics

While there are a number of comprehensive project portfolio management tools on the market, our research led us to the conclusion that they were geared toward organizations further along the portfolio management path. Therefore, for simplicity's sake, we utilized an Excel spreadsheet to both collect and report the metric data. This worked well for the initial project portfolio, but we will be adopting a more sophisticated collection/reporting system as we move forward.

Exhibit 1. Metric Collection Format

We opted for a reporting frequency of once per month—anything more frequent would have been overkill given the size of the projects in the portfolio, and anything less frequent would probably not have given us the opportunity to intervene. If a project were going south, we would prefer to catch it in Mexico as opposed to Brazil (be aware that this metaphor is being applied from the Upper Midwest of North America!). We provided enough information on the collection sheets so that a project manager could make an informed decision on how to report the data. We also provided a “comments” column, which proved to be extremely useful. A lesson learned here: give the project manager more information about what each metric means, not less.

Exhibit 1 is an example of our collection spreadsheet.

Reporting the Metrics

The reporting format reflected the collection format. Again, we utilized an Excel spreadsheet. However, our management wanted a more visual representation, so we adopted the traditional “red, yellow, green” dashboard color scheme for each metric as well as for the total score of each project. We developed these report spreadsheets in two flavors: an upper management report that listed all the projects for a given reporting period, and a spreadsheet designed for project managers and project sponsors that showed a single project across the time slices (in this case, once a month). We also had to deal with the situation in which a metric, or several metrics, was not reported. We opted to highlight these in a flesh-tone color so that they stood out, and any unreported metric automatically received a weight of 1.0.
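The unreported-metric rule is simple enough to show as a sketch. The metric keys would be the same hypothetical ones used in the earlier scoring example; the rule itself (score the missing metric at 1.0 and flag it for highlighting) is the one described above.

```python
# Sketch of the unreported-metric rule: any metric missing from a monthly
# report is scored at 1.0 and returned so it can be highlighted on the dashboard.
# Metric keys reuse the hypothetical names from the earlier scoring sketch.

UNREPORTED_WEIGHT = 1.0

def score_with_missing(report: dict, metric_weights: dict) -> tuple[float, list[str]]:
    """Return (total score, list of metrics the project manager did not report)."""
    total, missing = 0.0, []
    for name, weight in metric_weights.items():
        if report.get(name) is None:
            total += UNREPORTED_WEIGHT
            missing.append(name)
        else:
            total += weight(report[name])
    return total, missing
```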

Exhibit 2 illustrates the upper management report.

Exhibit 3 illustrates the project manager/project sponsor report.

Project Health Reviews

One of the processes we established was a “project health review.” We were concerned that the monthly metrics were not always capturing the true health of projects, and, since we had some skin in the game for reporting health accurately to senior management, we needed a way to gather more and better information. After several metric reporting periods, we interviewed each project manager, asked to see the project management artifacts, and made judgments as to the quality of those artifacts. For example, a one-line scope document would not constitute a quality project scope, and a project schedule that is not being used to track progress is useless. This was also the time when we discussed any “red flags” that we had observed. The results of these health reviews were shared with the project sponsors, and we solicited feedback on the entire process during these interviews.

Exhibit 2. Upper Management Metric Report

How Did It Work?

And what have we learned so far? The process itself worked very smoothly—due in large part to management support. The Excel spreadsheet collection/reporting formats are being replaced by an Access application and a relational database that will simplify the collection and reporting process. As we grow and administer even more portfolios, this will become an essential tool.

Our metrics also evolved—at the request of several of our managers, we added some “forward-looking” metrics, for example:

•  Risk assessment score (from a template we provided)

•  Discussions with sponsors

•  Upcoming milestones

•  Anticipated risk factors

•  Potential need for unplanned (or unbudgeted) resources

•  Upcoming key handoff points

•  Management and/or sponsor assistance required

•  A “gut feel” overall rating of the project by the project manager.

Each of the new metrics was weighted and a new project total weight tolerance factor was calculated for those portfolios that utilized the additional metrics. These “forward-looking” metrics were designed to be even more proactive than the original set of metrics.

A list of some of our “lessons learned” to date:

•  Keep the initial metrics simple—attempt to match them to the project management expertise of your organization. As the project managers become more knowledgeable, create more demanding metrics.

•  Make the collection mechanism as easy as possible for the project managers to provide the inputs.

•  When collecting the project metric data, provide for a “comments” section that can be used by project managers to provide additional, supporting information. This proved to be very valuable.

•  Revisit the metrics, and the entire process, on a regular basis. Tweak and correct both the metrics and the process as needed. Do not assume that your first set of metrics will be your last set of metrics. For example, after our first metric review, we eliminated the metric “Number of open issues.” Depending on the size and complexity of a project, 50 open issues may not constitute a problem, whereas 50 open issues on a smaller project might. However, we retained the metric “Open issues > 30 days old.”

•  Solicit input from the project managers and the project sponsors. Allow them to help you determine the value (or not) of the metrics and the portfolio process.

•  Communicate the ENTIRE process to everyone who will be involved—project managers, sponsors, upper management, and any other stakeholders you can identify. Do this PRIOR to implementing anything.

Exhibit 3. Project Manager/Sponsor Monthly Report

•  Ensure you have the support of your management. This is critical.

•  Implement a project health review process to verify/validate the metrics being reported and to provide proactive intervention as required. Do NOT call them “project audits.”

•  On all reports, put the bottom line on top; i.e., show the dashboard score for the total project on the first line, with the detail of the metrics following. Many management people are “bottom liners.”

•  Start slowly—project portfolio management in most organizations is an evolutionary vs. a revolutionary process.

•  The portfolio administrators should take some accountability for the success/failure of projects within the portfolio.

Was It All Worth It?

The final vote is not yet in. Minimally, it has made our project managers and our leadership teams more aware of the value of good project management. It has also created a clamor for additional project management training (a need which we are currently addressing). It has gotten our leadership teams, project sponsors, and project managers communicating with each other about the critical projects within the various portfolios—and sometimes asking some tough questions. We are moving forward and plan to add several more of our IS groups into the portfolio process. We have learned much since the implementation, and are constantly reviewing the process and looking for additional improvements.

We (Project Office), as administrators of the various project portfolios, have made a commitment to take some responsibility for the success (or failure) of the projects we are administering. Since the portfolio administrators are measuring and reviewing the progress of the projects within each portfolio, they need to be proactive when they see problems or trends—and not simply sugarcoat what they observe. We are striving to make the process a useful tool not only for our management, but also for our project managers and their project team members.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

Proceedings of the Project Management Institute Annual Seminars & Symposium
October 3–10, 2002 • San Antonio, Texas, USA