Estimating as an art: what it takes to make good art

Introduction

For a project manager, one of the most important concerns at project start-up is the reliability of the estimates. Too often, project managers are faced with poor estimates because the estimating process has been used incorrectly.

According to the Standish Group CHAOS report of 2003:

  • 15% of software projects are terminated before they produce anything
  • 66% are considered to have failed

Of those that do complete, the average cost over-run is 43%. The lost dollar value for USA projects in 2002 was estimated at US$38bn, with another US$17bn in cost over-runs.

The IT industry has focussed on improving its processes for managing projects, but we still experience projects that fall critically behind schedule and run far over budget. One of the reasons is poor estimating: the quality of the estimation output and the tracking of the assumptions behind the estimates.

The estimating process is there to produce more than just the final dollar figure or the final effort number: it is there to ensure that we have planned the project with the correct scope, cost, earnings, resources and deadlines.

This paper will focus on cost only, from the perspective of labour cost, which is the most important contributor to cost in most software projects. Estimating will therefore focus on estimates of effort hours.

Estimating as a process

The estimating process is the combination of different techniques and solutions to drive a high-quality result. An estimate based on a single method can never be considered the best estimate, since it cannot be validated against a control provided by a different view.

A Guide to the Project Management Body of Knowledge (PMBOK® Guide) defines an estimate as follows: “A quantitative assessment of the likely amount or outcome. Usually applied to project costs, resources, effort and duration and is usually preceded by a modifier (i.e., preliminary, conceptual, feasibility, order-of-magnitude, definitive). It should always include some indication of accuracy (e.g. ±x percent)” (PMI, 2004, p. 380).

Estimating is mentioned in three KPAs within the PMBOK® Guide: estimating resources, estimating duration and estimating cost.

Exhibit 1 – The Estimating Process and other related processes

Inputs naturally differ from one estimating technique to another, but some techniques can take the same inputs in different forms. It is important to use this overlap to identify the best metrics and information to track, and to retain as historical information for possible reuse.

Inputs to estimates

There can be many inputs to an estimating technique. The main ones (besides scope) are size, delivery units (i.e., defined project deliverables), the WBS (i.e., task based) and historical information.

Size as an input to estimating techniques

Function Point Analysis (FPA) provides a good size measure that depicts the software requirements by functionality. Source Lines of Code (SLOC) is also a frequently used size measure, but has the disadvantage of being heavily technology dependent. Both of these are recognized as industry benchmark size measures and should therefore be used in order to ensure sizing consistency across different projects. Sizing consistency is also needed in order to utilize historical information and parametric estimating techniques.

FP size quality usually depends on the detail and accuracy of the scope document. It is therefore important to assess the accuracy of the size estimate that is obtained.

An experienced Function Point Analyst can provide an accuracy figure for the size metrics delivered.

Exhibit 2 - The FPA Process and the accuracy

The accuracy of the estimates above depends on the method used. Accuracy improves with better documentation and greater levels of detail documented by the analyst. Each approach is appropriate depending on the needs of the client and the PM, but only the detail approach is truly a Function Point count. Four stages are recognised here:

  1. Ratio: the size is calculated using historical information, such as 28 FP per Logical File, or other rule-of-thumb techniques (see the sketch after this list).
  2. Robust: identifies all transactions and translates them into logical transactions.
  3. Limited: identifies all transactions and assesses size and complexity by utilizing assumptions.
  4. Detail: a full count using the approach defined in the IFPUG Counting Practices Manual, all the way down to the complexity rating.
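
As an illustration of the ratio stage, a minimal sketch follows; the 28 FP per Logical File ratio is the rule of thumb mentioned above, and the function name and inputs are hypothetical:

```python
# Sketch of a "ratio" (rule-of-thumb) size estimate.
# The 28 FP per Logical File ratio is the example from the text;
# treat the output as an order-of-magnitude figure only.

def ratio_size_estimate(logical_files: int, fp_per_file: float = 28.0) -> float:
    """Rule-of-thumb functional size from a count of logical files."""
    return logical_files * fp_per_file

size_fp = ratio_size_estimate(logical_files=12)  # 12 files found in the scope document
print(f"Ratio-based size: {size_fp:.0f} FP")     # -> 336 FP
```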

FPA can be carried out at different stages of the project:

  • At the beginning of a project, in the planning phase, to determine the development effort/duration/staff required.
  • As the project progresses, to control the changes included in the project and to take corrective action.
  • At the end of the project, when the application is already developed:
    • To have a baseline for future application enhancements.
    • To record metrics that will be used for future developments, by adding these metrics to the organizational database.
    • To determine the effort that will be required to provide production support to the application.

One of the benefits of FPA – besides size – is that the method provides a good independent peer review of the requirements. The level of accuracy of the FPA is an indication of the quality of the requirements. In addition, the FPA breaks the functionality down into data elements and transactions that can be linked to procedures, design documents, etc., to ensure that what is specified is also delivered.

Delivery Units

Delivery units are usually only valid within a single project or within similar projects. A delivery unit could be, for example, Use Case diagrams tracked from the start of the project to its end. A delivery unit is something that can be defined uniquely for the project, and it can be technology and project dependent. A delivery unit is a project-level definition and needs to be associated with some type of size or complexity. By their nature, delivery units do not provide data with historical value for future projects.

Remember that, in the case of software development, Function Points or another industry-recognized size measure should be the organisationally defined size metric for software project requirements.

A very useful technique is to ensure that the delivery unit is defined and measured within the project, in order to easily identify changes to the scope of the project, but that the size is then calculated in Function Points.

Exhibit 3 - The Delivery Units and the relationship to FP size metrics

Delivery units and size are usually closely related, and one can calibrate the link between the delivery units and the FPA by analysing the connection between the two. An example is to create a spreadsheet for the number of screens developed, in terms of low, medium and high complexity, thus creating an internal unit of size measure which is related to, but not the same as, the functional size as measured by FPA. This approach will always be project specific and is not repeatable across an organisation, but it can be used to track scope variance, and this in turn may trigger the need to revisit the estimate using more formal sizing techniques.
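
A minimal sketch of such a calibration is shown below; the complexity weights, screen counts and FP figure are hypothetical examples:

```python
# Sketch: a project-specific delivery-unit size, calibrated against an FPA count.
# Weights, counts and the baseline FP figure are hypothetical.

SCREEN_WEIGHTS = {"low": 1, "medium": 2, "high": 3}  # internal units per screen

def internal_size(screens: dict) -> int:
    """Internal size (project-defined units) from screen counts by complexity."""
    return sum(SCREEN_WEIGHTS[c] * n for c, n in screens.items())

baseline_screens = {"low": 10, "medium": 6, "high": 4}
baseline_units = internal_size(baseline_screens)  # 34 internal units
baseline_fp = 170                                 # from a formal FPA count

fp_per_unit = baseline_fp / baseline_units        # calibration factor: 5.0

# Later, a scope change adds screens; the calibrated factor flags the impact
# and may trigger a formal re-count.
changed_screens = {"low": 12, "medium": 8, "high": 5}
estimated_fp = internal_size(changed_screens) * fp_per_unit
print(f"Estimated size after change: {estimated_fp:.0f} FP")  # -> 215 FP
```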

The most important aspect of delivery units is that they can be tracked as part of the delivery process. Requirements traceability is about tracking from requirements through to delivery, and requirements tracking is an important aspect of scope control.

WBS

Task-based estimating using the WBS structure is the most common approach to estimating and is well documented. There are a few key factors which should be considered from an estimating perspective.

  • Ensure that a common phase-based WBS structure is used across the organisation, so that projects can be compared with each other and historical information can be utilized for future estimates.
  • A good starting point can be had by defining relative effort by phase (ideally based on historical data or, failing that, on external data), so that an estimate of, say, design effort, product by product, can be used as a multiplier for calculating the total effort for the project.
  • In addition, the phased percentage distribution is a good validation for a project, reconciling that the effort needed is actually depicted in the estimate (see the sketch after this list).
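
The sketch below illustrates that validation; the phase percentages, tolerance and hours are hypothetical examples, not industry benchmarks:

```python
# Sketch: validating a bottom-up estimate's phase split against a
# historical phase distribution. All figures are hypothetical.

PHASE_PCT = {"requirements": 0.15, "design": 0.20, "build": 0.40, "test": 0.25}

bottom_up = {"requirements": 250, "design": 420, "build": 900, "test": 300}
total = sum(bottom_up.values())  # 1870 h

for phase, hours in bottom_up.items():
    actual_pct = hours / total
    if abs(actual_pct - PHASE_PCT[phase]) > 0.05:  # 5-point tolerance
        print(f"Check {phase}: {actual_pct:.0%} vs historical {PHASE_PCT[phase]:.0%}")
# -> flags "build" and "test": the test effort looks under-estimated here.
```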

Historical Information

Information should be collected across the lifecycle to indicate trends, such as scope change, and to determine the accuracy of the estimates. These data support the improvement of future estimates. Historical information should consist of both measurement information – such as size, effort, defects, change requests, staff and duration – and influencer and characteristic information. The creation of an organisational database of historical data is a key resource for future estimates, and estimators should make use of the information in the database as an aid to future work. The better and more consistent the data definitions and the collection process, the more useful the data will be.

Estimating techniques

The more closely we examine estimating techniques, the more we need to be aware of how many things influence the time it takes to produce a software application. During the estimate it is important to identify these influencers, so that they form part of the validation of the estimates. The exhibit below illustrates a selection of the factors we must take into account when validating an estimate.

Exhibit 4 - The influencers to effort/cost

Expert Judgement

Expert judgement is usually used to validate the estimate outputs from the various processes, as well as to reconcile the estimates created. Expert judgement can also be used to estimate the lower-level tasks in a bottom-up estimate; indeed, bottom-up estimates are sometimes called expert estimates.

We always need expert judgement – the issue is sometimes getting hold of the experts at the right time. There are many benefits to the quality of the estimates if they are validated by an estimating expert, even late in the life cycle.

Bottom-up estimating

Be sure to evaluate and document all assumptions and constraints used in the estimating – such as specific resources utilized in the project that might have higher productivity than other staff, duration constraints, and assumptions about peak staff. Any size measures considered during the estimating – such as the number of design documents – should be used as input to the bottom-up estimate.

These steps are usually taken by any PM doing a bottom-up estimate – but unfortunately it seems they often forget to document the assumptions and, more importantly, forget to track them.

It is important to remember that tracking is a very important part of ensuring that good estimates continue to be good estimates.
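
A minimal sketch of one way to record assumptions alongside a bottom-up estimate so they can be tracked; the structure, fields and figures are hypothetical, not a prescribed format:

```python
# Sketch: recording estimate assumptions so they can be tracked later.
# Fields and example values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Assumption:
    description: str      # what was assumed when the estimate was made
    impact_hours: float   # effort at risk if the assumption does not hold
    status: str = "open"  # open / confirmed / broken, updated during tracking

@dataclass
class Task:
    name: str
    effort_hours: float
    assumptions: list = field(default_factory=list)

task = Task(
    name="Build payment module",
    effort_hours=320,
    assumptions=[
        Assumption("Senior developer assigned (higher productivity)", 80),
        Assumption("Peak staffing of 4 available in May", 40),
    ],
)

# A broken assumption is a natural trigger for re-estimating the task.
exposure = sum(a.impact_hours for a in task.assumptions if a.status != "confirmed")
print(f"{task.name}: {task.effort_hours} h estimated, {exposure} h exposed")
```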

Analogous Estimating

Exhibit 5 - Analogous Estimating

The analogous estimating technique uses information from similar projects to establish a cost estimate based on the data available. Analogous estimating needs to include expert judgement in order to establish the reusability of the data. It is used where there is limited information about the project: it assumes that where broad similarities can be established with a previous project, it is possible to make assumptions about the cost, effort, etc. needed to deliver the new work.

The approach can also be used to draw on information from other projects to establish and validate estimates for a new project. For example, if you can find one project that provides the percentage distribution of effort by phase, you can estimate an early phase in detail and then use those percentages to extrapolate an estimate for the rest of the project.

Exhibit 6 - Analogous estimating technique

Be aware that analogous estimating is based on broad similarities and limited data. It is a high-level approach and can be seen as a stepping stone to more detailed approaches such as parametric estimating. It is also worth mentioning that the historical phase-percentage effort is a good validation of a bottom-up estimate based on a WBS; just remember that the highest level of the WBS needs to reflect the phases used to calculate the historical effort. Even if a historical database is not available, a previous project estimate is usually all that is needed in order to do an analogous estimate. A sketch of the phase-based extrapolation follows.
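
The sketch assumes hypothetical phase percentages taken from a single analogue project, not an organisational benchmark:

```python
# Sketch of analogous estimating: one early phase is estimated in detail,
# and the rest is extrapolated from a previous project's phase distribution.
# The percentages and hours are hypothetical.

ANALOGUE_PCT = {"requirements": 0.18, "design": 0.22, "build": 0.38, "test": 0.22}

def analogous_total(detailed_phase: str, detailed_hours: float) -> dict:
    """Estimate every phase from one phase estimated in detail."""
    total = detailed_hours / ANALOGUE_PCT[detailed_phase]
    return {phase: total * pct for phase, pct in ANALOGUE_PCT.items()}

estimate = analogous_total("requirements", 270)  # 270 h estimated in detail
for phase, hours in estimate.items():
    print(f"{phase:>12}: {hours:6.0f} h")        # grand total: 1500 h
```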

Parametric Estimating

Exhibit 7 - Parametric estimating

Duration constraints and defect-removal quality are the factors with the most impact on the effort needed for the successful delivery of a software project. Tools are available to assist with this method of estimating, and it is important that any tool chosen can, besides producing resources and a staffing profile, create a duration assessment with a risk figure for the go-live date, taking into consideration the expected number of defects. The testing effort will increase in proportion to how many defects are introduced and how close to a zero-defect application the goal is.

It is recommended that the parametric estimates produce several scenarios that show the impact on staffing, effort and quality when there are staff or duration constraints. Duration constraints that are unacceptable for quality and good productivity are often demanded by clients, so the scenarios can be used early to negotiate with the client on an optimum duration, in order to achieve better productivity (lower cost) and higher quality (a higher mean time to defect). The sketch below illustrates the shape of such scenarios.
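
The sketch uses a hypothetical power-function model and compression penalty; real parametric tools calibrate such coefficients from historical data and handle defects and staffing profiles explicitly:

```python
# Sketch of a generic parametric model: effort grows non-linearly with size,
# and compressing duration below the natural schedule costs extra effort.
# All coefficients are hypothetical, for illustration only.

def nominal_effort(size_fp: float, a: float = 3.0, b: float = 1.12) -> float:
    """Nominal effort (hours) as a power function of functional size."""
    return a * size_fp ** b

def scenario(size_fp: float, duration_months: float, natural_months: float) -> float:
    """Apply a simple penalty when duration is compressed below natural."""
    effort = nominal_effort(size_fp)
    if duration_months < natural_months:
        compression = natural_months / duration_months
        effort *= compression ** 0.5  # hypothetical penalty exponent
    return effort

for months in (10, 8, 6):  # candidate go-live scenarios for negotiation
    hours = scenario(size_fp=500, duration_months=months, natural_months=10)
    print(f"{months} months: {hours:.0f} h")
```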

Three Point estimating

Exhibit 8 - Three Point estimating example

Three-point estimating is a statistical technique which can be used effectively together with bottom-up estimating. The technique combines the lowest, most likely and highest estimates to calculate an expected result. Another way of using three-point estimating is to take the output from parametric estimating and use the accuracy of the input, such as the function point size, to develop a view of the possible range of costs. If the size used for a parametric estimate has a possible variance of ±25%, scenarios can be created which illustrate the relative costs at the highest and lowest size values, assuming other parameters such as duration remain constant, as sketched below.
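
A minimal sketch using the common PERT (beta-distribution) weighting; the hours are illustrative, with the optimistic and pessimistic values derived from the ±25% size variance mentioned above:

```python
# Three-point (PERT) estimate: weighted combination of optimistic (O),
# most likely (M) and pessimistic (P) values.

def three_point(o: float, m: float, p: float) -> tuple:
    expected = (o + 4 * m + p) / 6  # standard PERT weighting
    std_dev = (p - o) / 6           # rough indicator of spread
    return expected, std_dev

# Parametric output of 2000 h, re-run at -25% and +25% size (illustrative).
e, s = three_point(o=1500, m=2000, p=2500)
print(f"Expected: {e:.0f} h, std dev: {s:.0f} h")  # -> 2000 h, 167 h
```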

Accuracy of the estimates

The accuracy of the estimates depends on many factors. These include:

  • Reliability of the scope definition. Is the scope documented using a consistent approach, such as use cases? Is it documented to the level of detail needed to actually identify and estimate the work, such as use cases including alternative flows and data information?
  • The quality of the documentation, which should be used as an input to the tracking of the project.
  • Effectiveness in tracking. Remember that estimating should be a recurring activity throughout the project; the more detail about the product you produce, the closer the estimates and the actuals should get. Remember too that an acceptable level of scope change, and even some scope creep, is always to be expected.
  • Defined trigger points for re-estimating. Without validating the impact of changes to your project (and there will always be changes) you do not have control of your project. Creating defined trigger points provides a consistent approach to determining the need for a re-estimate with the changed factors (bigger size, more peak staff, changed duration, etc.). This in turn will help you see whether there is reason to do detailed replanning, or whether the project baseline estimate and the re-estimate are within reasonable thresholds (see the sketch after this list).
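
A minimal sketch of defined trigger points follows; the tracked factors and the 10% thresholds are hypothetical examples, to be replaced by organisational tolerances:

```python
# Sketch: defined trigger points for re-estimating.
# Factors and thresholds are hypothetical.

TRIGGERS = {"size_fp": 0.10, "peak_staff": 0.10, "duration_months": 0.10}

def needs_reestimate(baseline: dict, current: dict) -> list:
    """Return the tracked factors that have drifted past their threshold."""
    breached = []
    for factor, threshold in TRIGGERS.items():
        drift = abs(current[factor] - baseline[factor]) / baseline[factor]
        if drift > threshold:
            breached.append(f"{factor}: {drift:.0%} drift")
    return breached

baseline = {"size_fp": 500, "peak_staff": 8, "duration_months": 10}
current = {"size_fp": 580, "peak_staff": 8, "duration_months": 9}
print(needs_reestimate(baseline, current))  # -> ['size_fp: 16% drift']
```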

Usually we would not use all of the estimating techniques, so it is important to use the ones that best suit the characteristics of the project. This is never simple, so it is suggested that, as part of expert-judgement estimating, some staff are always involved in estimating – an estimating team – in order to have a consistent approach and to utilise best practice regarding estimates.

Reconciliation of the estimates

The reconciliation process is where most benefit can be gained from the use of expert knowledge. It is not the purpose of this paper to cover reconciliation in detail, but it is important to remember that the goal is not to reach exactly the same result using different estimating techniques. This is actually an issue for some estimators – they will bend and break the rules for using industry data in order to arrive at the same effort/cost numbers.

The aim of reconciling estimates produced by different methods is to arrive at the answer which is most appropriate in the circumstances of the project. This is the essence of the art of estimating: it uses multiple pieces of information to determine the most likely cost of delivering the project within the constraints defined as the individual estimates were built.

The most important things to consider during reconciliation are:

  • What is the accuracy of the different estimates?
  • Why is there a difference between the estimates? For example:
    • Testing effort higher than the industry average
    • New technology
    • Completeness of the scope definition
    • Skills – staff highly experienced in the business, tools and technology
  • Have all assumptions, constraints and risks been identified during the estimating process?
  • How do we track the estimates to confirm that an estimate remains good, and to determine when re-estimating is needed?

Next Steps

The next step after the estimates is to develop the schedule, including duration and the assignment of staff. This process might itself cause the estimating to be repeated: if it turns out that many of the staff members are not working full time, there is more overhead from meetings etc.; or the duration may put such a constraint on the project that the staffing profile shows more resources than the estimate recommended. Again – use the scenarios from parametric estimating to support the development of a good schedule, and use the accuracy of the estimate to add contingency.

The schedule will provide the final effort and the final staffing profile. This is then used together with the staffing rates to produce the staffing cost.

Tracking can now start on the assumptions and risks identified, the size information, and the effort, staff and duration figures from the estimates – both the raw figures and those including contingency – as well as the cost in the cost model.

Tracking is an integral part of project management and delivery. If this is your first time, start simple: track size, changes, defects, staff and effort, including your earned value. This basic set of measures, together with documented trigger points for re-estimation, is a good starting point for building estimating best practice.
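
A minimal earned-value sketch using the standard formulas (EV = % complete × budget, CPI = EV/AC, SPI = EV/PV); all figures are illustrative:

```python
# Minimal earned-value check. All figures are illustrative.

budget_at_completion = 2000.0  # estimated total effort hours
percent_complete = 0.40        # measured product progress (e.g., delivery units)
actual_hours = 950.0           # effort spent to date
planned_hours = 800.0          # effort planned to date

earned_value = percent_complete * budget_at_completion  # 800 h
cpi = earned_value / actual_hours                       # ~0.84: over on cost
spi = earned_value / planned_hours                      # 1.00: on schedule
print(f"EV={earned_value:.0f} h, CPI={cpi:.2f}, SPI={spi:.2f}")
```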

ROI of estimating

So what happens when you start producing good estimates and introduce the support of an estimating expert?

  • Peer review of the scope document by the Function Point counter, as well as identification of the scope quality.
  • Identification of critical tracking metrics
  • Clear identification of the risk impact of constraints – such as duration constraints
  • More reliable go-live dates (this can often prevent penalties).
  • Less contingency needed in the planning of the project due to better estimates.
  • Collection of historical data
  • Best Practice is moved from project to project by the estimator

Conclusion

  • Utilise more than one estimating technique
  • Remember to assess quality and accuracy of size and estimates and the impact of influencers and complexity
  • Use estimating experts to support the use of historical information, parametric tools and best practice for estimating.
  • Don't forget to document assumptions, constraints and risks
  • Track the estimates, including size – not only the cost.
  • Identify a threshold for re-estimating and track project progress against that threshold
  • Make sure to include a good Change Management process

References

Garmus, D. & Herron, D. (2001). Function Point Analysis: Measurement Practices for Successful Software Projects. Addison-Wesley.

Garmus, D. & Herron, D. (1996). Measuring the Software Process: A Practical Guide to Functional Measurements. Prentice Hall.

IFPUG (2002, April). IT Measurement: Practical Advice from the Experts. Addison-Wesley.

Thompson Houston, K. (2003). Evolving Standards in Function Point/Lines of Code Ratios. Presented to the 18th International Forum on COCOMO and Software Cost Modeling.

IFPUG. Guidelines to Software Measurement, Release 2. www.ifpug.org

IFPUG. Function Point Counting Practices Manual, v. 4.2.1. www.ifpug.org

PMI (2004). A Guide to the Project Management Body of Knowledge (PMBOK® Guide), Third Edition. Newtown Square, PA: Project Management Institute.

The Standish Group (2003). CHAOS Report.

© 2006, Christine Green
Originally published as a part of 2006 PMI Global Congress Proceedings – Madrid, Spain
