It seemed as if the project would be a slam dunk. The new replacement system would do the work of the three systems currently in operation and allow us to decommission them. The users developed a rigorous Requirements Document, a vendor was selected from a field of candidates, the contract was executed, and the project team was formed and created its project plan. All seemed to be in place to move ahead. What the team did not see was the string of land mines that had been placed in its path.
The mines included broad issues like a pending corporate-wide merger that would render the proposed configuration less than appropriate in the newly merged environment. Core gaps in functionality that were acceptable before the merger would ultimately undermine the anticipated efficiency gains after it. Perhaps the most serious land mine of all was that the end-user operations group simply did not want the new system.
In the end, the recommendation to terminate the project was made by the project sponsor and accepted. It was not a pretty picture. A lot of money was lost; resources were consumed that could have been applied elsewhere; expectations were raised and then dashed; and reputations were damaged. On one hand, it was better to stop a doomed project before even more money and resources were burned. On the other hand, this was a loss that some felt could have been avoided.
Could any of these issues and their associated risks have been foreseen? Sometimes a project takes on a speed and life of its own, and turning back no longer seems an option. More likely, the team was focused on the project end state and did not consider the view on the ground. Most importantly, the team did not consider the readiness of the end-user community or the organizational feasibility of making the requisite change to the status quo.
How often have we heard that the project is ready to implement? Systems are all checked, user acceptance complete, production environment is ready to roll! But the project somehow fails to implement as expected. The users did not seem to understand the new capabilities. The bandwidth was nowhere near what it was supposed to be. Roles and responsibilities had not been established. Workflows were not thought through. The stories are all too familiar.
So why didn't it work? What are we leaving out? For all their sophisticated project structures and discipline, many project methodologies today lack a critical planning component—a built-in readiness assessment process. Just how ready is the environment for the new system, product, or process the project is about to bring? Preparing for readiness is not something done just prior to turning the key to open the door. This process needs to be activated earlier in the project and it needs to be rigorous. Readiness must be woven into the project fabric and developed even as the project requirements and specifications are being created.
Readying the environment means looking well beyond the system, product, or process (the “entity”) the project will create and looking to the world in which these will function. The original project vision or business case rarely asks for a specific kind of system (unless of course we are discussing a new operating system or systems platform or something of that sort). Typically, the business case that is developed to launch a new project is presented at a more strategic level. It envisions how a new product, a new processing facility or a redesigned workflow will grow market share or revenues, or how it will reduce operating expenses. Systems can be part of the equation, but they are not the only component.
To really be successful with our implementations, we must view the project delivery from a broader perspective. Are the people ready? Are their skills at the level they need to be to operate this new entity? Does the local management team support the changes being brought by the project? Have they prepared their people? Do they want to make this change? Will the facilities support the demands of the new operating environment? Are proper continuity of business and contingency plans in place? What about the cultural aspects? In short, is the environment ready to change?
If all this seems pretty obvious, it apparently is not. If it were so, wouldn't all our projects be implemented more efficiently? Why are they not successful? We have heard about how our projects are failing in the up-front planning process because of poor requirements and user support. They are also failing at the end point—acceptance testing brings too many issues to the surface that should have been resolved in the requirements or design phases. Desired functionality is not in place. The end-to-end testing is system focused and does not consider changes in workflow. End users are not fully informed of the changes about to occur in their world. Roles and responsibilities have not been established or communicated. These components can signal a substantial risk to project success if no corrective actions are taken. This is particularly applicable to projects in financial services institutions, which have a global reach.
Operational readiness significantly enhances the chance for project success by preparing the end-user environment, not as an afterthought, but as an integral part of project management. The concept of readiness embraces five of the nine PMBOK® Guide Knowledge Areas—Scope, Time, Cost, Quality, and Risk.
This paper will explore the key elements of an effective readiness program and then return to another story to examine how proper application of the program can mean success for a project.
Background and Fundamentals
“Operational Readiness” is a state that is moved toward incrementally by performing tasks and creating deliverables throughout the Project Life Cycle. An Operational Readiness Assessment ensures the operating environment is prepared to effectively support and accept the changes resulting from the project. The assessment helps determine the readiness state of the “receive” organization and defines how close this environment is to the desired readiness state.
The project manager and project team must develop the readiness program and perform the assessments. These are necessary deliverables for any project that develops new or enhanced products, processes, or systems.
Operational readiness should be assessed and reassessed throughout the life of a project:
• During project initiation, where strategies for delivery can be related to the status quo. The end-state vision is first introduced to key stakeholders.
• In the Requirements/Definition phase, after requirements and functional specifications are approved. The team knows how the end state will look on paper. Requirements and specifications can be reviewed with the users to test feasibility and reasonability.
• In the Design phase, after the product, process, or system and its infrastructure are designed. The resultant “blueprint” can be tested with the end users to ensure a proper “fit” with the current environment.
• Toward the end of the Build/Test phase, readiness is checked as the project moves through its final testing cycles. The end product is fully formed and can be clearly presented as the end-user community prepares for implementation. In this way, deficiencies in the project deliverables can be quickly addressed and retested prior to actual implementation as part of the overall testing strategy.
• In the Post-Implementation phase, after all operational adjustments are complete. Changes are assessed to determine whether they met expectations.
The readiness assessment illustrates where the operating environment is and is not prepared for pending project implementation. If performed properly, the people who will ultimately own the new system or operating environment become proponents of the change, prepared to support the implementation and, as a result, they will use the new entity more efficiently. Results of the assessment carry greater significance for those projects that require ongoing or iterative implementation phases. Where the end state is created through multiple implementation stages, lessons that are learned from the first assessment are more easily applied to these subsequent stages.
Key Elements of an Assessment
The assessment starts with a baseline condition for each dimension of the operational environment that will be affected by the implementation process. This is applicable regardless of the project phase in which the assessment is performed. The operational processes, structure, culture, and infrastructure are all included as dimensions being assessed. Other dimensions can include:
• Functional relationships
• Staff skills and experience
• Furniture and equipment
• Network connectivity
• Available training
• Continuity of business.
Questions that should be raised by the assessment typically focus on (1) what is in place for each of these dimensions, (2) what is not in place, and (3) what actions are needed to fill the gap. The information produced by examining each of these dimensions generates the action steps necessary for creating readiness. These actions can be “cultural” (creating change advocates throughout the environment), “physical” (acquiring facilities and connecting networks), and “developmental” (writing procedures manuals and coding applications). Once these steps and their respective resources and schedules are developed, they become part of the updated project plan as the project proceeds.
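As an informal illustration of this three-question gap analysis, the logic can be sketched in code: for each dimension, record what the end state requires and what is already in place, and derive the action steps from the difference. The dimension names, action types, and examples below are hypothetical, not drawn from any formal methodology.

```python
# Illustrative sketch (hypothetical names): a per-dimension readiness gap check.
# (1) what is required, (2) what is in place, (3) the actions that fill the gap.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    required: set      # capabilities the end state needs
    in_place: set      # capabilities the environment already has
    action_type: str = "developmental"  # "cultural", "physical", or "developmental"

def gap_actions(dimensions):
    """Return the action steps needed to bring each dimension to readiness."""
    actions = []
    for d in dimensions:
        for missing in sorted(d.required - d.in_place):
            actions.append((d.name, d.action_type, f"provide: {missing}"))
    return actions

dims = [
    Dimension("Network connectivity", {"site link", "failover"}, {"site link"}, "physical"),
    Dimension("Staff skills", {"new workflow training"}, set(), "developmental"),
]
for dim, kind, step in gap_actions(dims):
    print(f"{dim} [{kind}]: {step}")
```

Each resulting action step, with its resources and schedule, would then be folded back into the project plan, as the paragraph above describes.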
We previously noted that readiness should be assessed and the results reviewed continually during the project, each time revealing greater levels of granularity. The project manager and team are responsible for ensuring readiness throughout the project and must determine the right point to conduct each assessment. For example, reviews conducted in the early project phases simply present the end-state concept and the blueprint for awareness and feedback.
As the project moves toward implementation, these reviews can produce a clearer picture of how that end state will impact the users. For this reason, project managers conduct on-the-ground assessments as close to implementation as possible. It should be remembered, however, that sufficient time must be left in the schedule to apply any changes that emerge from the assessment. The key is in the comparison of the desired end state to the current state…can the proposed change really work in the real world?
The primary tools of an assessment include a series of templates and reports:
• Pre-Assessment Success Factors—provides a readiness overview and helps determine to what extent key factors are in place, such as change specification developed, change targets identified, roles and responsibilities defined, end state defined and understood, feedback plan created, alignment established among all stakeholders.
• Implementation Checklist—precisely defines the steps required to successfully implement the end state in this specific environment.
• Risk Assessment Report and Escalation Process—illustrates, clearly and unambiguously, the assumptions, risks, and related issues that can derail the project, and the mitigation steps necessary to address them.
Taken together, the information gathered from these lists and templates highlights the potential impact of the planned implementation.
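The pre-assessment success factors lend themselves to a simple roll-up: score how many factors are in place and surface the open gaps. The sketch below is purely illustrative; the factor wording is paraphrased from the list above, and the scoring scheme is an assumption, not part of any described template.

```python
# Illustrative sketch: rolling the pre-assessment success factors into a
# single readiness indicator plus a list of open gaps. The equal weighting
# is an assumption for demonstration only.
SUCCESS_FACTORS = [
    "change specification developed",
    "change targets identified",
    "roles and responsibilities defined",
    "end state defined and understood",
    "feedback plan created",
    "stakeholder alignment established",
]

def readiness_score(status):
    """status maps each factor to True (in place) or False (gap).
    Returns the fraction of factors in place and the open gaps."""
    gaps = [f for f in SUCCESS_FACTORS if not status.get(f)]
    return 1 - len(gaps) / len(SUCCESS_FACTORS), gaps

score, gaps = readiness_score({
    "change specification developed": True,
    "change targets identified": True,
    "roles and responsibilities defined": False,
    "end state defined and understood": True,
    "feedback plan created": False,
    "stakeholder alignment established": True,
})
print(f"readiness {score:.0%}; open gaps: {gaps}")
```

In practice the open-gap list, not the number, is what drives the corrective actions that feed the updated project plan.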
Implementation Plan Template
A comprehensive Implementation Plan is one of the critical deliverables that results from the readiness assessment. This structured, rigorous, step-by-step approach defines the order in which the new system or process improvements will be implemented for the end-user community. It should be designed to maximize the value available to the end users and minimize the risk associated with the deployment. A typical plan would include the following:
• Tactic Descriptions (implementation tactics, change management tactics, risk escalation)
• Implementation Tasks and Major Milestones (typically presented in the Project Plan)
• Implementation Team Organization Structure
• Resource Requirements
• Conditions for Success (strengths, weaknesses and their mitigation, critical success factors)
• Implementation Risks
• Implementation Checklist.
If this looks like a Project Plan, that is by design. While projects will vary in duration and complexity, they all have an implementation phase. All too often, project planners fail to (1) begin implementation planning soon enough (it should start during project initiation) and (2) apply the planning rigor necessary to ensure the implementation is well thought through. In fact, implementation should be seen by project managers of large projects as a unique, stand-alone effort, often requiring a separate team and plan. The good news…shifts in behavior are already being seen.
Assessing readiness is not an event. It is an essential part of the fabric we weave when we plan and manage a project. There are some key actions to remember in each project phase (see Exhibit 4).
Several of these actions are already part of most project life cycle methodologies. Their incremental value is in the readiness focus they bring to a project. Taken together, these actions are the basis of a readiness program that is formed during project initiation and enables effective reporting and decisive action throughout the project.
So, what happens when the system is more ready than the actual environment? Let's see how assessing readiness can make a difference.
A Different Story
It started as a large, global project—a large, disabled, global project. After four years, a $4 million overrun, the loss of both the business and technology sponsors, and a first-stage implementation judged “disastrous” by the most well-meaning of reviewers, this worldwide, super-scope, burn-all-the-resources project had left a wide trail of bodies and plans. Things had not gone well. A region-by-region deployment had been attempted for the first stage, with no prior training for the service representatives and limited communication to the stakeholders. Because no one knew what they were supposed to be getting from the system, issues of communication and training quickly escalated into apparent operations and systems issues. The results left the IT and Operations managers and the Project Team scrambling to find out what “was wrong with the system.”
In fact, nothing had been really wrong with the system (well, maybe there were a few bugs here and there—well, maybe a lot of bugs—but not enough to cause the resultant uproar). The system had been built to specifications. In reality, the lack of communication, preparation, and training was the biggest issue, since this condition forced the project team to chase down unreal problems. This burned resources unnecessarily and led to serious problems around management, image, and trust.
But this was a new day and the team was ready to begin its second stage deployment.
Well, it wasn't quite ready, since the team was not yet certain how the “target” area—a web of customer service sites, held together by several major processing hubs—would deal with the impact of the second stage. This stage would deliver true global presence for the sponsoring organization. It would deliver a system with high-volume image processing and financial transactions, supported by load sharing for massive amounts of data, across three continents and a dozen time zones.
Many of the issues that plagued the first stage were still present—the project was still being run as a “technology project” (in reality the project involved a full workflow, a reengineered operation, new equipment, new responsibilities, new expectations). Finger pointing and accusations were the primary mode of communicating. There was no risk management program. Project meetings were held twice a month, and these meetings were brutal—three-hour sessions with as many as 40 participants, 20 in the room and 10–20 more on the phone, scattered around the world. For those participants on the other side of the planet, it was midnight. Each meeting was driven by the review of a 20-page, 1,500-step Microsoft Project schedule. You could actually see the participants’ eyes glazing over. One could only imagine how the folks on the phone were feeling. People were exhausted, and they were staring a full second stage deployment in the face.
So now, with the clock ticking, faced with this pending deployment and headed in the same direction, the project team knew they needed to take some drastic steps. They jumped at an opportunity offered by the Project Management Office (PMO), which had just recently gained jurisdiction for these large-scale projects. The PMO offered to perform an Operational Readiness assessment of the entire end-user operating environment. It was understood this effort would review technology and workflow and would also put more focus on communication and training. While the approach was not a radically new concept, the sponsoring organization had not really spent the time before to test the end-user environment with the same rigor it had used to test the systems. It would be a new experience for the team.
So now, for this second stage deployment, much more attention would be paid to the end user—not just by acceptance testing the system, but also by “acceptance testing the environment.”
We designed the Readiness Plan to establish a baseline for measurement, in this case, what must be in place to support key system components—(1) deployment of the image processing workstations and their support workflows and (2) the infrastructure and interrelationships necessary to support the new configuration. The assessment would address hardware, functionality, operational processes, people, organizational structure, local culture, infrastructure, and roles and responsibilities among the various end users.
The scope would include key branches targeted for the new system and the regional processing centers that supported those branches.
We agreed the PMO would manage the assessment with the following scope:
• Attend to key dimensions (facilities, technology, people, processes)
• Conduct the assessment within the target organization as well as in the client environment, suppliers, and other external stakeholders, as required
• Extend the readiness process to all aspects of this system implementation, worldwide
• Address all necessary management levels, including project sponsor and business management, technology teams, end-user management.
We began with a series of meetings among the project manager, regional project manager, regional implementation manager, and the project office manager. A contextual map was created that would inform the local operating area about why the assessment was being done and also define what was to be implemented. An outline was created to build a baseline view of the environment and to help guide the actual assessment time frame.
We defined a five-day period for the assessment. An aggressive agenda was laid out and agreed upon with the local management team. We planned to present key aspects of the system to the end-user group and then break into subgroups according to processing area to conduct the interviews. Our scope was developed within the following objectives:
• Meet the local management team
• Begin the process of communicating change
• Assess management/staff expectations and develop ways to manage those expectations
• Understand barriers to success
• Put feedback mechanisms in place
• Assess project risk and establish the steps to mitigate that risk and ensure readiness
• Create a prototype readiness strategy using this first deployment to set a benchmark and extend that strategy to all subsequent deployments of the system, worldwide.
We then developed an interview “protocol” to guide discussion with the local and regional operating staff.
In one week, we were determined to accomplish the following:
• Present business view and service delivery model
• Introduce the assessment concept
• Define interview objectives and introduce the assessment approach
• Assess the target service and operating areas
• Present findings to the senior management team.
Just before the team departed for the assessment site, a pre-assessment questionnaire was forwarded to the user group. We hoped the answers to these questions would provide us with a basic understanding of the current environment and some context for the assessment by the time we arrived onsite. And they did. The service and operations representatives were very forthcoming with their information, and their responses gave us a good picture of what we had to face.
We met on a Sunday night just before the assessment was to begin. The assessment team, consisting of the project manager, project office manager, and the regional implementation manager, laid out the week's timetable over dinner. It was the first time the three of us sat down together (all prior communication had occurred by phone, fax and email) since the assessment area was in Southeast Asia and the project team operated out of New York. It was a great meeting (and the food wasn't bad either).
We began the assessment the next day and completed our work within the five-day target. And all the planning paid off.
What We Discovered
The first thing we learned from doing this assessment was how much the end-user group was willing to be part of the initiative. By providing them with an overview of what they should expect from the system, we began to build trust. In a project world where end-user readiness was not part of mainstream thinking, this was a breath of fresh air. For the group that had experienced the hobbled first stage implementation, this was heaven-sent! All participants, from the senior management team to the customer service and processing staff, were quite willing to be part of an exchange of information, issues, and ideas.
We had shared our objectives, plans, and the agenda with this group from the beginning, and we received the same kind of openness in return. The user community quickly learned that the end result of this assessment was a better system, more adapted to their needs, and a more efficient implementation.
Our findings, however, gave us cause for concern, given the degree of the system complexity. For instance, we saw that:
• There were support issues that involved difficulties with workstation installation, data center support, and general communication.
• There was no risk escalation process in place.
• Current bandwidth configuration could not support the anticipated image processing transaction volume.
• Image scanning problems were creating significant rework, which added to an already oversized workload.
• While anticipated improvements included significant reduction in some current processing functions, selected changes would necessitate workarounds that would require manual practices to stay in place longer than expected. This would require continued reliance upon older systems that were not well integrated with the new configuration.
Additionally, it was determined that several processing areas had not prepared workflow plans. Roles for these operating areas were not yet set between the branches and the regional processing center.
It was quite a week. Through some long work days and nights, we were able to complete the assessment and provide a preliminary report of findings to the management team before we departed. Several of the elements we discovered were addressed and corrected during the week. Others were placed on a high priority list for resolution prior to the pending implementation.
The good news is that the majority of these issues were found some three months prior to the scheduled implementation date and this provided enough time to make the corrections. The end users guided these changes to ensure their needs were represented.
The end result was a successful second stage implementation that was relatively error-free and a smooth transition for the service and operations staff. Success was measured by the lack of serious errors, issues, or rollbacks during the implementation process. Success was also seen in some very satisfied users and in the response from senior management that is best illustrated in the “thank-you” letter from the senior sponsor:
This has been a long and sometimes, painful journey…many believed this day would never come, but there were others who had the FAITH and waited for this day…this is the fruition of many, many days of labor starting from requirement definitions to rollout…I congratulate each of you on this achievement…would not have happened without you.
Another result of the work done was the decision by the project sponsor and business managers to integrate readiness assessments into every subsequent deployment of this system. Follow-up conversations with the project manager confirmed that there was “no discussion” as to whether the readiness assessments would continue. It simply became the way things would happen, going forward.
Is this the right approach for every project? Readiness is a part of ensuring project success. Perhaps the better question to ask is: Can your project afford not to test the readiness of the environment?
A word to the wise…if you are in the midst of a project and have not planned for readiness, perhaps it is best to take the advice offered by an old Turkish proverb: No matter how far down the wrong road you have traveled, turn back.