Enterprise project management and estimating


Introduction

According to a study by the Standish Group, 52.7% of software projects overrun their budgets by an average of 189%, 31.1% are canceled, and only 16.2% are successful (Standish Group, 1999, p. 2). Enterprise-level software integration projects carry price tags that start at around $1M and can range up to $100M and beyond. While this makes them the most important projects to control and manage from a funding, budget, and cost standpoint, many will overrun by millions of dollars, and many IT managers consequently feel that they are not achieving a return on investment.

This paper examines techniques to control large software integration projects, focusing on proper estimating practices and techniques. Using well-proven estimating techniques that are 15 to 20 years old, you can properly control the scope of these projects and estimate them with considerable accuracy. In addition, these principles are applicable to many other types of projects and industries.

The Nature of the Problem

Imagine that you are the head of the professional services division for a large company that sells its software and services products internationally. You have just had a very difficult conversation with a key client that is headquartered in Atlanta. A few weeks ago, the CIO of this client kicked off an initiative worth several million dollars in sales and services. It seems that the IT Director of a San Diego division of the company called their local sales office and received an estimate that differed by more than 75% for the same job. This can realistically happen when there are sales pressures and no estimating standards in place. This scenario can be very damaging to credibility.

The possibilities for similar scenarios are endless in our current environment where communications are global and instantaneous. It becomes important to have methodologies and procedures in place that instill confidence and show integrity. “We tend to think of Peter Drucker as the originator of the idea ‘You can't manage unless you can measure’, but the idea goes back much further.” (Symons 1991, xi) Creating an estimate is the first step in creating the measurements from which to manage.

The Nature of Enterprise Software

While there are many different types of software packages that support an enterprise, some of the large vendors, such as PeopleSoft, Clarify, and Siebel, can address a broad range of functionality, including back office requirements (e.g., general ledger, financials), front office requirements (e.g., call centers, customer relationship management), asset management, supply chain, and other categories. At a high level, all of these pre-written software modules and packages share a common architecture. This architecture is database driven, with data elements representing screen displays and forms, and ultimately data representations of the business entities and processes that they support. As such, there are fairly limited ways in which the software can be modified or customized. A few of these are outlined in the following paragraphs to give the reader a sense of the structure and effort involved.

The first, and most basic, customization involves modifying or adding a new data field on a screen form (and ultimately in the underlying business data table). Since most vendors of pre-written software modules do not supply entity-relationship diagrams for the software, a small amount of investigative work is usually required to make sure that the field addition does not disturb anything else in the software. Such a modification might take a day or less of effort to implement.

A somewhat more difficult operation is to develop a simple script of under 30 lines to modify data entered, perform a simple calculation, and/or populate a new field. The investigative portion is more substantial, and some time is required for even the most expert practitioner to code and debug a script. Such a modification might take a week or less of effort.
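
To make this tier concrete, here is a minimal sketch of the kind of script described. It is written in Python purely for illustration; actual packages each supply their own scripting language and record API, and the event, field, and form names below are invented.

```python
# Hypothetical sketch only: real packages use their own scripting languages
# and record APIs, and the field names here are invented for illustration.

def on_save(record: dict) -> dict:
    """Fires when the user saves a customized order form."""
    # Modify data entered: normalize the customer code.
    record["customer_code"] = record.get("customer_code", "").strip().upper()

    # Perform a simple calculation from existing fields.
    quantity = float(record.get("quantity", 0))
    unit_price = float(record.get("unit_price", 0.0))

    # Populate the new custom field added to the underlying table.
    record["extended_price"] = round(quantity * unit_price, 2)
    return record


if __name__ == "__main__":
    print(on_save({"customer_code": " ac-100 ", "quantity": "3", "unit_price": "19.95"}))
```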

Finally, a more difficult operation might be a larger script, adding ActiveX controls, or making a change that requires a substantial modification to a database table. Here the impacts of modifications might be broader in scope and might, in the long run, require either an expert practitioner or multiple resources (e.g., a configuration specialist and a database administrator).

A Possible Solution

Of course, the first piece of advice that I provide to clients who want to integrate pre-written software modules into their enterprise is to limit software customizations and integration points. Leveraging the expensive software that you have purchased will provide a better return on investment than business process re-engineering and major customizations. Reality, however, dictates that there must be some level of compromise, and that there is often missing functionality that is absolutely required.

Often the challenge, however, is that it is difficult to build a complete Work Breakdown Structure (WBS) for some projects. Discovering and fleshing out the required customizations, and then assessing each and every one, can be a long and onerous task. Customers often seek a way to shortcut this process and cut down on planning time. This paper suggests the possibility of building an estimating model that is based not on a traditional WBS, but rather on identifying customizations, dividing them into groups and categories that can each carry an aggregate estimate for project execution, and then using this aggregate estimate to develop a full project estimate.

Tying Function Points to Software Customizations

During the 1970s and 1980s, when the focus was on custom-developed software systems and applications, a need was recognized to measure performance, starting with an estimate that could predict development time. Many of the models developed, such as COCOMO, focused on studies of programming productivity. Weinberg, Shneiderman, and others focused on code production, programming languages used, programmer habits, and other sociological factors to attempt to quantify the activity of programming (Weinberg, 1971; Shneiderman, 1980). Indeed, many estimates were based on the thousands of lines of code to be produced, relying on the assumption that one could accurately predict this figure. Many of my earlier software projects used the metric of 10 lines of debugged code per programmer per day, regardless of programming language. While this was successful, it was not without its challenges.
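
As a worked example of that metric (the figures below are invented for illustration, not drawn from any project mentioned in this paper):

```python
# Illustrative arithmetic for the lines-of-code metric described above.
estimated_loc = 25_000          # predicted size of the system (assumed)
loc_per_day = 10                # debugged lines per programmer per day
team_size = 5                   # programmers assigned (assumed)

programmer_days = estimated_loc / loc_per_day    # 2,500 programmer-days
working_days = programmer_days / team_size       # 500 days of calendar effort
print(f"{programmer_days:.0f} programmer-days, "
      f"roughly {working_days:.0f} working days for a team of {team_size}")
```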

It was not always possible to accurately estimate the lines of code, especially in complex systems. It was particularly difficult when one had to develop a novel system from scratch, with few existing paradigms and little historical information to draw on. Another issue of constant concern was exactly how one should count lines of code in block-structured languages with compound statements, such as Pascal, C++, and Java. Non-procedural languages such as APL, LISP, and SNOBOL were even more difficult to quantify, and many fourth-generation languages were moving away from traditional statements altogether. The simple days of assembly language, COBOL, and FORTRAN were long behind us.

In parallel with, and separate from, the studies of Weinberg, Allan Albrecht of IBM looked at alternatives. In particular, he focused on the entities that went into constructing systems: inputs, outputs, internal and external files, queries, and other high-level notions. These became known as “function points” and formed the basis for an estimating method called Function Point Analysis (FPA). In addition, Albrecht proposed increasing the estimate in a linear fashion based on up to 14 types of non-functional requirements. It took more than 15 years to improve on this work; in 1988, improvements began to emerge based on the consulting work of Charles R. Symons, and the term “MkII FPA” was coined to distinguish the updated estimating methodology from Albrecht's original work (Symons, 1991). Based on these works, the International Function Point Users Group currently defines a function point by the following statement:

“Function Points measure software size by quantifying the functionality provided to the user based solely on logical design and functional specifications. With this in mind, the objectives of FP counting are to:

  • measure functionality that the user requests and receives,
  • measure software development and maintenance rates and size independently of the technology used for implementation, and
  • provide a normalizing measure across projects and organizations.” (IFPUG, 2002)

The notion of breaking down projects into discrete, repeatable units has obvious applications beyond custom software development (and indeed the software industry). It became apparent to those of us working on the pre-written software integration estimating problem that it was not a large diversion from the intent of FPA to use the customizations of this software as the basis for the functionality. Focusing on just one software package initially, we drew up a finite list of possible customizations. Next, after extensive consultations with implementation practitioners, we decided to break down the list further into three “complexity categories”: low, medium, and high. Depending on the accuracy of the model, we reasoned that we could later provide a further breakdown to accommodate differences in the customizations. By performing this grouping and both observing and interviewing the actual implementation practitioners, we were able to assign a level of effort in hours to each grouping.

The level of effort was developed to encompass the time to understand the requested customization and its impact on the system, to actually make the customization, and to unit test that customization. Further, we organized the customizations by module. This provided the flexibility to add information on a case-by-case basis as needed. Small scripts that needed to be developed, for example, could rely on the older “lines of code” model, since they typically run under 50-100 lines. Organizing by module also allowed us to add and subtract modules easily as customer requirements expanded or contracted. In this manner, we were able to create an accurate and repeatable estimate for the core implementation project, focused on the normal activities of pre-written software integration project execution.

Adding together the customization times allowed us to develop an estimate of the work involved for project execution. The next and more difficult step would be to develop a method to expand the estimate to include costs and durations, not just for implementation execution, but the other project phases as well.
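
A minimal sketch of this aggregation step appears below. The hours per complexity category and the customization counts are invented for illustration; the actual values in our model came from practitioner observation and interviews and are not reproduced here.

```python
# Sketch of the aggregate work estimate, assuming illustrative values;
# the real per-category hours were derived from practitioner interviews.
HOURS_PER_CATEGORY = {"low": 8, "medium": 40, "high": 120}  # assumed

# Customizations discovered during scoping, grouped by module and category.
customizations = {
    "call_center":    {"low": 12, "medium": 4, "high": 1},
    "general_ledger": {"low": 6,  "medium": 2, "high": 0},
}

execution_hours = sum(
    count * HOURS_PER_CATEGORY[category]
    for module in customizations.values()
    for category, count in module.items()
)
print(f"Estimated execution work: {execution_hours} hours")  # 504 hours
```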

Using Heuristics to Complete the Estimate

A Guide to the Project Management Body of Knowledge describes five project management process groups: initiating, planning, executing, controlling, and closing (PMI, 2000). In actual practice, however, most software development life cycles operate on an expanded waterfall model (the project phases and activities cascade) or a spiral model, in which steps are iterated based on trial implementations presented for further input and refinement. Exhibit 1 shows a hypothetical life cycle that shares the characteristics of the proprietary methodology that we first had to tackle.

Exhibit 1. Hypothetical Implementation Project Life Cycle

A heuristic is nothing more than a “rule of thumb”. I realized that by applying several heuristics from past projects, I could finish the estimating process. In effect, while the work estimates for project execution were derived in a bottom-up, objective fashion, applying heuristics to these results could lead me to a complete estimate that included duration, resources, and budget.

The first step in the process is to create an estimated duration from the work estimates. While at GE Information Services and Bluebird Systems, I commonly used the heuristic that, of a 52-week year, any particular resource is generally available for 44 weeks due to vacations, holidays, training, sick leave, jury duty, and other paid absences. I reasoned that a similar ratio could apply in the short term as well, since it is unrealistic to assume that anyone is applied 100% to any task at all times. Thus, by multiplying work estimates by 52/44, one can obtain a reasonable estimate of the duration. This is not to suggest that this is the only heuristic that might apply, but it represents a good starting point, and estimates can later be refined through actual project experience and historical records.
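
In code, the heuristic is a single multiplier applied to the bottom-up work figures (the work number below carries over from the earlier illustrative sketch):

```python
# The 52/44 availability heuristic: convert work into elapsed duration.
AVAILABILITY_FACTOR = 52 / 44          # about 1.18

work_hours = 504                       # illustrative bottom-up work estimate
duration_hours = work_hours * AVAILABILITY_FACTOR
print(f"{work_hours} work hours -> about {duration_hours:.0f} elapsed hours")
```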

A more difficult task is to determine how to estimate the work and duration for the other project phases. While investigating the possibilities, I came across the following statement from Fred Brooks:

“For some years I have been successfully using the following rule of thumb for scheduling a software task:

1/3 planning

1/6 coding

1/4 component test and early system test

1/4 system test, all components in hand.” (Brooks, 1975, p. 20)

Brooks reasoned that the large portion of the project devoted to planning would ensure that the project scope was well defined and easier to control. Equally important, dedicating a large portion of the schedule to system test ensured that adequate time to correct testing issues was built into the plan. I recalled successfully managing a large software project that produced a relocatable assembler, a new code generator for a Pascal compiler, and a linkage editor to produce the executable code by applying these ratios to the project plan. The project not only completed on time and on budget, but the resulting software remained in use, without subsequent modification, for more than ten years.

With this discovery in hand, I examined the target development life cycle and attempted to distribute the time proportionally in the same manner. Examination of existing project implementation historical records again proved useful in both determining and validating these numbers. Using this and the previous heuristic enabled me to successfully generate work and duration estimates for the entire project, starting from project initiation and running through project closing.
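
The sketch below shows the proportional distribution using Brooks's original ratios; mapping them onto a vendor-specific life cycle, as described above, uses the same arithmetic with different phase names and proportions validated against historical records. The total schedule figure is assumed for illustration.

```python
from fractions import Fraction

# Distributing a total schedule across phases using Brooks's rule of thumb.
RATIOS = {
    "planning": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "component test": Fraction(1, 4),
    "system test": Fraction(1, 4),
}
assert sum(RATIOS.values()) == 1       # the ratios must cover the whole project

total_weeks = 24                       # overall schedule estimate (assumed)
for phase, ratio in RATIOS.items():
    print(f"{phase:15s} {float(ratio * total_weeks):5.1f} weeks")
```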

Next, by applying resources, the costs can be estimated. In fact, this is a little easier than on most projects, since enterprise software implementation projects are normally purely service oriented (for other project types, estimates of fixed and variable costs for materials, plant, and equipment can be added to the final estimate). This means that costs can be derived simply by multiplying the hours by the rates of the various resource types normally applied to the project. (Note: travel and living cost estimates may need to be developed for consultants not based in the project area; however, these are not difficult to derive, and they should be presented as a separate cost, not included in the estimates derived by this methodology.) Exhibit 2 shows Figure 2-1 from A Guide to the Project Management Body of Knowledge (PMI, 2000, p. 13). It reminds us that resources are usually lean at the start of a project (sometimes just the project manager and a systems architect doing initial planning), ramp up throughout project execution, and then taper off as the project is closed.

Exhibit 2. Sample Generic Life Cycle

By developing a Responsibility Assignment Matrix (see Exhibit 3) (PMI, 2000, p. 111) and a staffing matrix (see Exhibit 4), the estimator can gather the types and costs of the resources to be applied to the project. The project work and duration estimates are already in place and can be used to validate and cross-check the resource assignments.

Exhibit 3. Responsibility Assignment Matrix

Exhibit 4. Staffing Matrix

With the resources and their billable rates (or, for internal estimates, their fully burdened rates) known, one can now complete the estimate for work, duration, resources, and budget. Further, one can attach the expected accuracy and quote a range rather than a single point (e.g., -10% to +25% for a budget estimate).
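
A brief sketch of this final roll-up follows, with invented rates and hours and a -10%/+25% accuracy band:

```python
# Final roll-up sketch: cost = hours x billable rate per resource type,
# quoted as a range. All rates and hours below are invented assumptions.
rates = {"project manager": 150.0, "architect": 175.0, "consultant": 125.0}
hours = {"project manager": 400, "architect": 240, "consultant": 1200}

budget = sum(hours[role] * rates[role] for role in hours)   # $252,000
low, high = budget * 0.90, budget * 1.25                    # -10% / +25%
print(f"Budget estimate: ${budget:,.0f} (range ${low:,.0f} to ${high:,.0f})")
# Travel and living costs, where applicable, are estimated and quoted separately.
```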

Critical Success Factors and Challenges

According to Anthony Robbins, “Once you have mastered time, you will understand how true it is that most people overestimate what they can accomplish in a year – and underestimate what they can accomplish in a decade!” (Cyber Nation n.d.) It's quite obvious that estimating has an emotional human component to it, and based on my 30 years of experience, I believe that the following are some of the most critical success factors in estimating:

  • processes followed and the estimate reviewed by experts,
  • input from all relevant sources – the team needs to create the estimate, not the project manager,
  • a high level of technical knowledge – leverage the right subject matter experts,
  • buy-in (and signoff) from management, architects, and technical leads that the work can be done as outlined,
  • don't just accept the sales estimate – re-estimate, and re-estimate again once all the requirements are clear and the work is fully understood,
  • the client's buy-in (and signoff) on scope, assumptions, and constraints,
  • proper resource allocation,
  • and, perhaps most important, when balancing the triple constraints, do not change the team-provided level of effort (alternatives include reduced scope and schedule compression through crashing and fast tracking).

However, achieving an accurate estimate and ensuring that the critical success factors are met is not always an easy task. Some of the challenges that project managers will face along the way include:

  • forming a dedicated team to work through the process: enterprise-level projects take time to properly plan and estimate,
  • adequate time to checkpoint the approach and solution with the client prior to final submission (signoff is not a zero-work task!),
  • adequate time for external reviews – the right subject matter experts need to be involved,
  • time to develop a mid-level plan, re-estimate, and take corrective actions as required,
  • and, perhaps most important, maintaining awareness of other project dependencies in the client environment – enterprise-level projects have a large footprint on an organization, and program-level management is required.

Refining Estimates Going Forward

To make sure that the final model was correct, I identified several past implementations of various sizes that were deemed successful. Data was gathered, including the details of the modules and customizations made and the baselines and actuals of the project budgets and plans. I entered the data into a spreadsheet implementation of our estimating methodology and compared the work, duration, cost, and resource estimates from the spreadsheet with the actual project experience. Where there were significant deviations, interviews with the managers and project managers involved helped determine the reasons for the discrepancies. After a few minor adjustments, I had successfully modeled several past projects and developed confidence in the estimating model that the team produced.

One key problem remains, however: there can be significant performance differences among implementation resources in the field. Not everyone can continuously operate at peak performance, and there are bound to be variations. The final step in the estimating methodology is therefore to collect detailed historical data on new implementations going forward. Eventually a performance norm can be calculated and input as a further refinement to the estimates. I have hypothesized that a database corresponding to the model, recording estimates and actual performance, can be used to take the model to the next level. In particular, the customization categories could be further subdivided, with work estimates assigned to the new categories, to provide a broader range of possibilities while refining the accuracy of estimates.
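
As a sketch of that refinement, under invented data, a performance norm can be expressed as a simple calibration factor derived from recorded estimate/actual pairs:

```python
# Refinement sketch: derive a performance-norm factor from historical
# estimate/actual pairs and apply it to a new estimate. Data is invented.
historical = [           # (estimated hours, actual hours) per past project
    (504, 560),
    (1200, 1290),
    (820, 790),
]

# The mean actual-to-estimate ratio becomes a calibration factor.
factor = sum(actual / est for est, actual in historical) / len(historical)

new_estimate = 650
print(f"Calibration factor: {factor:.2f}; "
      f"calibrated estimate: {new_estimate * factor:.0f} hours")
```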

Conclusion

Estimating is important to large-scale enterprise software integration projects to keep them both manageable and successful. By recognizing that the products involved share a common architecture, we can define function points that allow us to build part of the estimate in a bottom-up, objective fashion. Then, using common project heuristics and planning tools, we can apply a top-down, subjective allocation approach to complete the estimating model with work, duration, costs, and resources. While a WBS is still the recommended approach, this methodology can shorten WBS development time for extremely complex projects while still providing sufficient detail with which to produce an estimate and build a schedule.

Estimating large projects can be difficult and challenging, but by staying the course, following best practices, and applying this model, success is achievable. Because this model uses repeatable and explainable principles, it can be applied to re-estimating as project scope changes are introduced. Further, the deviations in estimates created by the scope changes are readily explainable to a customer, making the estimating model an invaluable tool in scope change management.

Once this model is applied and historical results are captured, analysis of the data can be used to tune the allocation model or develop a factor with which to adjust results. This allows an organization that applies this methodology to continuously improve its estimates and estimating practices. Other industries and projects can adopt this methodology for their large enterprise projects.

References

British Computer Society. (2003). Leading edge: size does matter. Retrieved August 1, 2003 from http://www.bcs.org.uk/publicat/ebull/jul03/leading.htm

Brooks, Frederick P. (1975). The mythical man-month: essays on software engineering. Reading, MA: Addison-Wesley Publishing Company.

Cyber Nation International. (n.d.). Quotes to inspire you. Retrieved August 1, 2003 from http://www.cyber-nation.com/victory/quotations/authors/quotes_robbins_anthony.html.

International Function Point Users Group. (2002). Frequently asked questions. Retrieved August 1, 2003 from http://www.ifpug.org/about/faqs.htm.

Jones, Capers. (1998). Sizing up software. Scientific American, 279(6), 104-109.

Project Management Institute. (2000). A guide to the project management body of knowledge (PMBOK®) (2000 ed.). Newtown Square, PA: Project Management Institute.

Shneiderman, Ben. (1980). Software psychology: human factors in computer and information systems. Cambridge, MA: Winthrop Publishers, Inc.

Standish Group. (1999). CHAOS: A recipe for success. Retrieved August 1, 2003 from http://www.projectsmart.co.uk/docs/chaos_report.pdf.

Symons, Charles R. (1991). Software sizing and estimating: MkII FPA. Chichester, England: John Wiley & Sons.

Weinberg, Gerald M. (1971). The psychology of computer programming. New York, NY: Van Nostrand Reinhold.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

Proceedings of PMI® Global Congress 2003 – North America
Baltimore, Maryland, USA ● 20-23 September 2003
