a new model for a new era
by John W. Reaves
HOW DO YOU reduce your systems implementation team by 40 percent, increase its workload twenty-fold, and still bring your projects in on budget and on schedule?
You get everyone thinking “out of the box,” develop a new model for managing your systems implementations, and act as your own system integrator. This new model breathes life into an old process and allows companies to keep their implementation process—and their dollars—in house.
As a $25 billion telecommunications company, GTE is no stranger to system implementations. For years, the company managed its business operations with internally developed systems designed specifically for the highly regulated business that existed prior to the Telecommunications Act of 1996. So when the decision was made to replace 90 business systems with a commercial-off-the-shelf (COTS) product—SAP R/3—we put together an implementation team secure in the knowledge that we knew what we were doing and were prepared to pull off one of the largest SAP R/3 implementations in the world.
After the first installation, the team conducted a “lessons learned” session and began a review that we felt would help us ensure the success of future rollouts. This session taught us that, while we had a solid foundation, there was plenty of room for improvement before we would have the processes in place that would sustain a program of this magnitude over a three-year rollout period.
John W. Reaves joined Honeywell Information Systems as a software engineer after retiring from the military. During his time at Honeywell, he was sent to the Supreme Headquarters Allied Powers, Europe (SHAPE) in Belgium as a special advisor in the development of large integrated databases, and to Germany as a special software development advisor to the Army Corps of Engineers. Reaves joined GTE in 1990 as a software engineer and database specialist. Today he is the director for GTE's SAP Program Management Office in Dallas, Texas.
Our original approach for this program included the processes with a proven track record from the days of mainframe implementations. We knew we had to develop a schedule, track progress against it, manage the budget, hold those accountable to their commitments, and let the information technology organization drive and manage the project. These were obvious details that no one questioned. What wasn't so obvious was how to manage and track a program while implementing a COTS product in a client/server environment for a telecommunications giant whose business was changing almost daily. Understanding that problem became the key to changing the way we approached the program.
Our comfort zone had evaporated. We were no longer in the traditional mainframe, in-house software development environment that we all knew so well. Because there was no model for implementing a program the size of ours in this new environment, we had to develop our own. Traditionally the information technology organization drove and managed the projects. However, because business was changing so rapidly within the telecommunications industry, it was felt that the business area should drive and manage the SAP program. A triad made up of Finance, Human Resources, and IT organizations would run the program, with overall responsibility falling to the vice president of infrastructure management within the finance organization. The responsibility for tracking and monitoring the program fell to the Program Control Office (PCO), which has traditionally been located in the IT organization. In our new model, the PCO is assigned to the business area and reports directly to the vice president for infrastructure management. The reason for this was to have the PCO develop a methodology for tracking and monitoring the program that would be directly related and responsive to the needs of the business. In our model, the project leads are also from the business area to help the development effort respond more easily to the rapidly changing needs of the company.
To determine how to report on the overall health of the program, the PCO worked directly with the vice president of infrastructure management, the project leads, and the IT organization. Their input provided the basis for us to manage, track, and report on 20 software development implementation projects simultaneously. Here is what we learned.
Planning. At the onset of the program, our team of finance, human resources, and information technology experts came equipped to participate in the planning process. Each had an action plan that addressed their specific roles in the program. While this worked initially, it had the effect of creating silos. The finance staff knew where they stood with their projects and the human resources staff knew where they stood with theirs. The information technology staff, however, was often put in a position of having to serve both groups simultaneously, without the information they needed to support either group and without knowing what the interdependencies were among the projects the two groups were implementing. This was particularly true when it came time for the projects to move into or out of the integrated test environment or into production.
With the assistance of the team leads and the IT organization, the PCO tackled the job of developing a planning template that could be used for all the teams. The result was a template that took into consideration the rollout of all the projects and at the same time followed the life cycle methodology that we developed for our COTS/client server environment. We revamped the schedule so that each phase of all projects was planned end-to-end in relation to the overall program release schedule for the entire year, rather than looking at just the next phase of any given project. In this way skilled personnel from the functional development teams—ABAP language programmers, for example—are retained from the outset of the requirements gathering phase through the end of the integrated testing phase. Once the integrated testing phase has been completed, the production support team takes over and rolls out the system to the users. The production support team is still supported by the functional development team, which remains actively involved throughout a 30-day customer warranty period, minimizing “handoffs” and providing those personnel most familiar with the product to support it. Each group repeats its efforts for subsequent project rollouts after completing its work on the current project, per the planning template. This provides a smoother transition from one project to another, improving our ability to use our human resources effectively and allowing us to manage staff hours more efficiently. We can share staff among the projects, add specialized consultants if necessary, or otherwise customize our team based on the projected needs. We are now able to look at the big picture, including all interdependencies, and assess the impact of our decisions across the entire program.
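The rolling phase assignments described above behave like a pipeline: each functional group picks up the next project as soon as it has finished the prior one and the preceding phase of the new project is complete. The sketch below is a hypothetical Python illustration of that idea only; the phase names and durations are invented, not GTE's actual planning template.

```python
# Illustrative sketch only: phase names and durations are invented,
# not GTE's actual planning template. One functional group per phase
# rolls onto the next project as soon as (a) it has finished the prior
# project AND (b) the preceding phase of the new project is done.

def pipeline(projects, phases, duration):
    """Return {(project, phase): (start, finish)} in working days."""
    team_free = {ph: 0 for ph in phases}  # day each phase team frees up
    schedule = {}
    for proj in projects:
        prev_phase_done = 0
        for ph in phases:
            start = max(team_free[ph], prev_phase_done)
            finish = start + duration[ph]
            schedule[(proj, ph)] = (start, finish)
            team_free[ph] = finish
            prev_phase_done = finish
    return schedule

# Two releases flowing through the same functional teams:
plan = pipeline(
    projects=["release_1", "release_2"],
    phases=["requirements", "design", "development", "integrated_test"],
    duration={"requirements": 20, "design": 15,
              "development": 30, "integrated_test": 15},
)
```

Because the teams overlap across releases (the requirements group starts release 2 while release 1 is still in design), the same staff carries many projects with few idle gaps, which is the effect the template was after.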
Progress Reporting. One of the most important changes we made was to improve our reporting on the overall health of the program. Our first implementation taught us that the traditional method of tracking “lines of code” or “function point counts” to determine the size and complexity of a given project has no meaning in the implementation of a COTS product. We had to find another metric. In the program vernacular, we called this metric “CRIMES,” to reflect the Conversions, Reports, Interfaces, Legacy Modifications, Extensions, and Security elements that we track to keep our implementation on schedule. The size and complexity of a given COTS project can be measured by the number and complexity of its CRIMES elements. The metrics are segmented by project phase—requirements gathering, design, code development, testing, move to production—and contain baseline and current estimated completion dates. Each week we distribute reports to the entire team in the form of wall charts displayed in a central area of the project team's work location. Color-coding identifies task completion against the plan or benchmark: green is for completed tasks; yellow is a “caution,” for a task whose completion date has been changed from the original schedule and, if not monitored carefully, may slip; red is for missed completion dates.
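The wall-chart color-coding reduces to a small rule over each element's baseline date, current estimate, and completion status. The following is a minimal Python sketch of that rule as we read it, not the tool GTE used; the function name and date handling are assumptions for illustration.

```python
from datetime import date

def status_color(baseline, estimate, completed, as_of):
    """Classify one CRIMES task the way the wall charts do.

    green  -- task is complete
    red    -- baseline completion date has passed and the task is open
    yellow -- caution: the current estimate has moved past the baseline
    none   -- on track, no color on the chart
    (Assumed precedence: a missed date reads red even if rescheduled.)
    """
    if completed:
        return "green"
    if as_of > baseline:
        return "red"
    if estimate > baseline:
        return "yellow"
    return "none"

# A conversion task whose estimate slipped two weeks past baseline:
color = status_color(
    baseline=date(2000, 7, 1),
    estimate=date(2000, 7, 15),
    completed=False,
    as_of=date(2000, 6, 1),
)
# color == "yellow"
```

The point of such a rule is that every task on a 20-project wall chart gets the same mechanical triage, so a reviewer's eye goes straight to red and yellow.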
Risk Management. We were neither insulated from nor unaware of the possibility that a crisis or business requirement would cause us to alter the schedule or modify a planned phase. The requests were rare, but did and continue to come up. We chose to handle these requests with a process that focused on two key factors: managing the expectation of the customer and monitoring the implemented phase until we could guarantee the desired results. For example, a request to quickly move an urgently required type of functionality into production, which would require spending less than the desired time in the testing process, would be honored only after the risk of cutting short the testing was clearly communicated to and accepted by the internal customer. For us, this method of managing the risk has proven effective.
Documentation. From the beginning, we placed high importance on documentation. People change, processes require fixes, and requirements need modification. Without accurate documentation, the entire implementation can become bogged down with rework or attempting to create “workarounds.” This is why it is absolutely imperative that the documentation be created in tandem with the project. Attempting to create documentation after the fact is difficult at best and runs the risk of being inaccurate as team members try to remember why decisions were made or modifications created.
Meetings. Meetings are inevitable and necessary. And though we try to keep them to a minimum, there are three that have proven to be the cornerstone for assessing and ensuring the health of the overall program.
The weekly program review meeting is made up of the program's leadership. At this meeting, they review the overall program to assess progress; identify problems; assign action items that have to be reported to the members; make business decisions and assess the impact of those decisions on cost, schedule, and quality of the product. Risk assessments and mitigation are also reviewed and acted upon. Changes are discussed but schedules are adjusted only in extreme instances.
The second meeting is the change control board meeting, which takes place on an “as required” basis to ensure internal customer expectations are met. This meeting is designed to address proposed changes to the baseline requirements and to ensure that any changes are tracked and handled in a controlled environment.
With so many projects running concurrently, there was and continues to be a need to track dependencies within and across the multiple projects. For that reason, we've established a third meeting open to all team members: the integration meeting. This meeting is mandatory for the program's leadership but everyone on the project is welcome to attend to get information on the “big picture” and give input. The sole purpose of this meeting is to examine the relationships and dependencies among various phases of all the projects, any changes to planned dates, and the impact of these changes on other projects. Network diagrams showing the dependencies among the different projects, due dates, and status are provided for the meeting.
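The network diagrams used in the integration meeting amount to a dependency graph, and the impact of one date change can be propagated downstream with a simple forward pass. A minimal sketch, assuming invented task names, durations, and dependency edges (none of these are GTE's actual diagrams):

```python
# Minimal critical-path-style forward pass over cross-project
# dependencies. Tasks, durations, and edges are invented examples.

def earliest_finish(duration, depends_on):
    """Earliest finish day for each task: a task starts only after
    all of its prerequisites have finished."""
    finish = {}
    def fin(task):
        if task not in finish:
            start = max((fin(p) for p in depends_on.get(task, [])),
                        default=0)
            finish[task] = start + duration[task]
        return finish[task]
    for task in duration:
        fin(task)
    return finish

duration = {"hr_interface": 15, "finance_conversion": 10,
            "integrated_test": 20}
depends_on = {"integrated_test": ["hr_interface", "finance_conversion"]}

before = earliest_finish(duration, depends_on)
duration["hr_interface"] = 25       # the interface slips ten days...
after = earliest_finish(duration, depends_on)
slip = after["integrated_test"] - before["integrated_test"]
# ...and pushes integrated testing out by the same ten days.
```

Recomputing the pass after a proposed date change is exactly the question the integration meeting answers: which other projects move, and by how much.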
TODAY WE OPERATE in a mode of continuous process improvement—as the program has evolved, so have our processes. Our second and third phases of the implementation—moving personnel information on 60,000 of the 89,000 employees to the SAP R/3 human resources/payroll module and introducing SAP R/3 finance functionality into seven additional business units—have gone without a hitch. We use our staff more efficiently because more accurate staffing and time reporting tells us in advance which team resources are tied up on which tasks.
The development of a new program management model and the establishment of repeatable processes have been evolutionary. We are in the midst of implementing more than 20 projects simultaneously as SAP R/3 is phased into the various domestic business units. The success of the program is defined by our ability to achieve far better results while managing 20 times the number of projects with 40 percent fewer team members, since many of those on the original team have gone on to other jobs. A streamlined model, rigorous program management, and a policy of continuous process improvement have been the linchpins of our success. Although we have come a long way and our successes have been many, our entire team continues to “think out of the box” for better ways of doing things. ■
July 2000 PM Network