Introduction
This paper answers the question “Why should I benchmark IT projects?” Many projects today involve Information Technology (IT) in some way, and IT project delivery is notorious for being late and over budget. The paper outlines a benchmarking framework that the author has used over the past six years to measure IT project delivery against the company's own projects as well as similar projects run by other companies. The method, results and benefits of the benchmarking are described to assist readers in establishing their own benchmarking programs for IT projects, and the benefits derived from the benchmarking program are highlighted to justify the costs involved in implementing and managing such a program.
Why do Benchmarking?
Benchmarking is “the continuous, systematic search for and implementation of best practices, which lead to superior performance” (The Benchmarking Centre). “Benchmarking is the process of comparing the performance of an organisation's business activities, typically a specific subset, against the performance of other organisations. Typically organisations seek to be compared with like organisations. Sometimes the comparisons might be based on best in class, whatever the industry sector. At other times, in order to better understand the capabilities of specific processes and technologies, the comparisons may be based on deeper like-for-like characteristics” (QuantiMetrics).
In today's world of cost control and efficiency, any initiative that is undertaken must be justified by the potential benefits it will bring to the organisation. If the potential benefits do not outweigh the costs and effort involved in achieving them, then the initiative should not be undertaken at all. This general business rule applies equally to a benchmarking program.
Benchmarking of IT projects has the following potential benefits:
- Measuring cost of project delivery against industry benchmarks to assess whether the IT function is cost effective;
- Identifying aspects of project delivery that are problematic and can be improved so as to reduce the cost of project delivery in the future;
- Measuring the effect of improvements over time by charting the trend of the measurements over time;
- Demonstrating the relative value of the IT function to senior management, based on comparative costs of other organisations, as shown by the benchmark results;
- Comparing the measurements of different technologies used in projects, so as to decide on the most cost-effective technologies to use in the future;
- Using historical measurements of the organisation and other organisations to validate estimates, whether internal or supplier sourced.

Benchmarking can therefore help answer questions such as: Do you deliver projects on time? Do you deliver projects within budget? Do you deliver quality products? Is your process predictable? How does your performance compare to that of your competitors? Is your performance world-class? What are the problem areas to be addressed? Are you improving?

The implementation of steps to address the identified shortfalls can then result in high-performance project delivery, meaning high productivity, high-speed delivery, high quality (freedom from error), conformance to requirements, conformance to plans, and low cost.
These potential benefits can easily justify the costs of the benchmarking process and the limited overheads that it imposes on project delivery. Over the six years that we have been using the benchmarking process, the costs of IT project delivery have been reduced by 54% as a result of applying the recommendations flowing from the problem areas highlighted in the annual benchmarking assessments.
Benchmarking Against What?
Benchmarking requires that we have something to compare the actual project delivery measurements against. Some possible benchmarks to compare project delivery against are:
- The organisation's own projects from previous years;
- Projects from other organisations in the same industry (competitors);
- Projects from other organisations in the same country;
- Projects from other organisations in the world.
These comparisons can be extremely useful in gauging the relative efficiency of IT project delivery and can be used over time to see whether delivery is improving or getting worse. Keep in mind that IT projects have specific characteristics which will affect delivery, such as project size, project duration, team size, team skill level, programming language used, degree of change during the project, skill level of the users, and project constraints such as schedule and cost. These characteristics need to be considered when comparing with other projects.
How does the Process Work?
The process works as follows (Exhibit 1):
Exhibit 1 – Benchmarking Process (QuantiMetrics, 2004)
Training is conducted for the people who will be involved in the benchmarking process, such as the project managers of the selected projects. The required project data is collected and reviewed for accuracy and completeness. The data is then compared against the reference database of projects, based on the chosen comparison parameters, such as which organisations to compare against. The resultant comparison data is then analysed and the benchmark metrics produced and interpreted. The results are documented in a comprehensive report, which is workshopped with management to identify action plans to address the problem areas highlighted in the analysis. The resultant recommendations and action plans for improving future projects are presented to executive management for approval.
The value and accuracy of the analysis depend heavily on the accuracy and extent of the reference database of projects. An organisation can build up its own reference database of projects over time, but this will obviously limit the benchmark comparisons to the organisation's own projects. Specialist benchmarking service suppliers can be used to supply a much more comprehensive reference database, typically containing many different types of projects from organisations across many industries and countries. The reference database also needs to be segmented according to the project characteristics, so that projects can be compared with other projects of similar characteristics and the comparisons remain meaningful and objective. The reference database of the benchmarking supplier used contains over 5 000 projects from over 100 organisations worldwide.
The following data needs to be gathered for each project that is included in the benchmark analysis (a sketch of one way to structure such a record follows the list):
- Function Point Count for the total project
- Platforms, languages and tools used
- Planned and actual hours
- Planned and actual costs
- Errors during testing
- Errors during the first month of live operation
- Team and user experience in years
- Staffing peak by phase
- Comments about the project from the team and stakeholders (positive and negative)
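As a minimal sketch, the collected data for one project could be structured as a simple record before submission to the analysis; the field names below are illustrative assumptions, not a supplier template.

```python
# Illustrative structure for one project's benchmark data (field names assumed).
from dataclasses import dataclass, field

@dataclass
class ProjectRecord:
    name: str
    function_points: int                    # FP count for the total project
    platforms_languages_tools: list[str]    # platforms, languages and tools used
    planned_hours: float
    actual_hours: float
    planned_cost: float
    actual_cost: float
    testing_errors: int                     # errors found during testing
    m1_errors: int                          # errors in the first month of live operation
    team_experience_years: float
    user_experience_years: float
    staffing_peak_by_phase: dict[str, int] = field(default_factory=dict)
    comments: list[str] = field(default_factory=list)  # team and stakeholder feedback
```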
What are the Metrics?
What measures and metrics can be used to assess IT project delivery? Whatever we measure must tell us something useful for assessing and improving project delivery. For example, the total cost of a project is not, on its own, a useful measure: the cost may be very high relative to most projects, but what matters is what was delivered for that cost. It therefore becomes apparent that in order to compare projects we need some way of quantifying the relative “size” of projects. It is only when we have a common, standard measure of size that we can make any comparison between projects. How can one go about quantifying the size of an IT project? The most widely accepted measure of the size of an IT development project is Function Point Analysis (FPA).
Function Point Analysis
“First made public by Allan Albrecht of IBM in 1979, the FPA technique quantifies the functions contained within software in terms that are meaningful to the software users. The measure relates directly to the business requirements that the software is intended to address. Other business measures, such as the productivity of the development process and the cost per unit to support the software, can also be readily derived. The function point measure itself is derived in a number of stages. Using a standardized set of basic criteria, each of the business functions is assigned a numeric index according to its type and complexity. These indices are totalled to give an initial measure of size which is then normalized by incorporating a number of factors relating to the software as a whole. The end result is a single number called the Function Point index which measures the size and complexity of the software product. In summary, the function point technique provides an objective, comparative measure that assists in the evaluation, planning, management and control of software production” (www.ifpug.org/about/about.htm).
“Function Point Analysis (FPA) is a widely used method for measuring the size of business-systems software and the projects that deliver it. The size is measured from a logical, or user, point of view. The count is independent of the tools and technologies used (such as programming language, development methodology, technology or capability of the project team)” (QuantiMetrics).
Put simply, FPA breaks the system requirements down into transactions and files. Transactions are categorised into Input (capture), Output and Enquiry types. The complexity (Low, Medium or High) of each transaction determines the number of function points assigned to it. Examples of logical transactions would be Add_Customer and Query_Customer. Files are assigned function points based on the number of fields in the file (Low, Medium or High) and whether the file is read-only or updated. Examples of files are Customer_Master_File and Invoice_File.
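The arithmetic behind a count can be sketched in a few lines of Python. The complexity weights below are the commonly published IFPUG-style unadjusted weights, and the items and values are illustrative only; a real count follows the full IFPUG counting rules.

```python
# Minimal sketch of an unadjusted function point count (illustrative only).
WEIGHTS = {
    "input":         {"low": 3, "medium": 4,  "high": 6},
    "output":        {"low": 4, "medium": 5,  "high": 7},
    "enquiry":       {"low": 3, "medium": 4,  "high": 6},
    "internal_file": {"low": 7, "medium": 10, "high": 15},
    "external_file": {"low": 5, "medium": 7,  "high": 10},
}

def unadjusted_fp(items):
    """items: list of (kind, complexity) tuples identified from the requirements."""
    return sum(WEIGHTS[kind][complexity] for kind, complexity in items)

example = [
    ("input", "medium"),          # Add_Customer
    ("enquiry", "low"),           # Query_Customer
    ("internal_file", "medium"),  # Customer_Master_File
    ("internal_file", "low"),     # Invoice_File
]
print(unadjusted_fp(example))     # 4 + 3 + 10 + 7 = 24
```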
Productivity (fp/SM)
The productivity measure is calculated by taking the total number of delivered function points (fp) for the system and dividing it by the total Staff Months (SM) of effort recorded for the project. The effort includes all effort expended in connection with the project, such as requirements definition, design, programming, unit testing, system testing, user acceptance testing, project management, etc., up to system implementation.
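As a simple illustration, with hypothetical figures and an assumed conversion of 160 hours per staff month (organisations define this differently), the calculation is:

```python
# Productivity: delivered function points per staff month (fp/SM). Illustrative figures.
delivered_fp = 420           # function points delivered by the project
effort_hours = 5_600         # all recorded project effort, in hours
hours_per_staff_month = 160  # assumed conversion factor

staff_months = effort_hours / hours_per_staff_month
productivity = delivered_fp / staff_months
print(f"Productivity: {productivity:.1f} fp/SM")   # -> 12.0 fp/SM
```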
Quality (errors/fp)
Quality is measured by taking the total number of errors divided by the number of function points for the delivered system. Two quality measures are taken: all testing errors and M1 (month 1) errors. M1 errors are the errors found during the first month of live operation of the system after its implementation.
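A corresponding sketch for the two quality measures, again with illustrative numbers:

```python
# Quality: errors per delivered function point, for two measurement windows.
delivered_fp = 420
testing_errors = 84   # all errors found during testing
m1_errors = 6         # errors found in the first month of live operation

testing_quality = testing_errors / delivered_fp
m1_quality = m1_errors / delivered_fp
print(f"Testing: {testing_quality:.2f} errors/fp, M1: {m1_quality:.3f} errors/fp")
```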
Schedule Conformance Index (ScI)
Schedule conformance measures the degree to which the project's total effort hours conformed to the planned total hours.
Budget Conformance Index (BcI)
Budget conformance measures the degree to which the project's total cost conformed to the planned total cost. Project planning and control is assessed by comparing planned schedule and budgeted effort against actual schedule and effort used. Multiple plans can be assessed: those at project launch, after requirements analysis, after design, as well as the latest customer-approved plan. (QuantiMetrics)
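As one plausible formulation (an assumption, since the precise index definitions are not spelled out here), each conformance index can be read as the ratio of planned to actual, so that values below 1.0 indicate an overrun:

```python
# Conformance indices: assumed definition (planned / actual), illustrative figures only.
planned_hours, actual_hours = 5_000, 5_600
planned_cost, actual_cost = 800_000, 905_000   # in the project currency

sci = planned_hours / actual_hours   # Schedule Conformance Index (assumed formula)
bci = planned_cost / actual_cost     # Budget Conformance Index (assumed formula)
print(f"ScI: {sci:.2f}, BcI: {bci:.2f}")   # -> ScI: 0.89, BcI: 0.88
```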
Cost ($/fp)
Costs are converted from the actual currency (at the then current exchange rate) to US Dollars so as to be able to compare projects across different countries. The project costs are stated as a cost per delivered function point, by taking the total project costs in US Dollars, divided by the total delivered function points.
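A sketch of this calculation, using an assumed exchange rate and illustrative amounts:

```python
# Cost per delivered function point, converted to US Dollars for comparability.
total_cost_local = 905_000   # total project cost in the local currency (illustrative)
usd_exchange_rate = 0.12     # assumed local-currency-to-USD rate at the time
delivered_fp = 420

cost_usd = total_cost_local * usd_exchange_rate
cost_per_fp = cost_usd / delivered_fp
print(f"Cost per function point: ${cost_per_fp:.0f}/fp")   # -> $259/fp
```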
Duration (Elapsed Months)
This measure compares the actual project total duration with the benchmark for similar projects.
Staffing Index (SI)
The Staffing Index measures how many staff equivalents worked on the project and how this compares with the norm. The service that we subscribe to uses a normalised measure of staffing, which removes the effect of differing size of project output on the effort and timescale used. Projects with high staffing indices are those that are completed in a relatively short duration (relative to the benchmark) but with high effort (relative to the benchmark), and vice versa.
Staffing levels on software development projects critically affect the cost of development. If a business deadline determines the need to staff a project at a high level in order to be confident of delivering the system in a timely fashion, then the substantially higher cost of a time-pressured style of working may be justified. However, all too often, projects are conducted in this way when there is no such business deadline. (QuantiMetrics white paper, “Management Styles in Application Development”)
Functional Delivery Index (FdI)
This measure, referred to as “normalised efficiency”, allows one to compare projects of differing size and differing time scale. Projects with a high FdI are those that used a combination of low duration and low effort (relative to the benchmark).
Results of the Benchmarks
The first benchmark for the organisation was carried out in 1999 on a sample of 8 projects. The benchmark is not a static comparison, since the database of projects is constantly being updated with new project information, which affects the benchmark metrics from one year to the next.
Example of results (2004):
- Quality has improved due to a greater focus on testing.
- There is some evidence of planning problems.
- Compared to other insurance companies, most performance indices lie within the 1st and 2nd quartiles.
- There was a higher volume of change than the benchmark.
- There has been a shift to more up-front work, i.e. in the analysis and design phases.
Exhibit 2 below shows the summary diagram of the benchmark results for 2004:
Exhibit 2 – Benchmark Results 2004 (QuantiMetrics, 2005)
The darker footprint shows the organisation's results for each of the metrics; the wider the footprint, the better the result. The inner footprint is the benchmark derived from the reference database of similar projects. Where the organisation's footprint lies outside the benchmark footprint, it has exceeded that benchmark measure.
The 2004 results above show that all the measures exceed the benchmark, except for Budget Conformance Index (BcI) and Schedule Conformance Index (ScI).
Improvements Made and Benefits Derived
After each benchmarking cycle, an action plan was drawn up based on the recommendations flowing from the benchmark comparisons. These action items were then implemented on subsequent projects, and the effects of the changes were measured in the subsequent benchmarking cycle. Many improvements were implemented during this timeframe, resulting in benefits to the organisation; only some of the major improvements and benefits are mentioned here.
One of the major focuses of the organisation during this five-year period has been the reduction of operating costs. This is evidenced by the driving down of project costs, as shown by the Cost per Function Point metric (see Exhibit 3 below). This cost reduction has been made possible through the implementation of other recommendations made by the assessments, such as an increased focus on accurate and complete project planning and adherence to a disciplined, standard project life cycle.
Exhibit 3 – Cost per Function Point
The Functional Delivery Index (FdI) is a composite metric that shows the overall “normalised efficiency” of project delivery, derived from the project measurements taken for the full sample of projects each year. A high FdI indicates a high normalised efficiency. Exhibit 4 below shows how the FdI has improved over the past six years. Historically it has been shown that a 2-point improvement in the FdI translates into roughly a 25% reduction in project delivery cost, dependent on time scales and project size; generally this can be achieved over a two-year period (QuantiMetrics). In the six-year period the FdI has improved by 3.6 points, which, scaling that rule of thumb linearly (3.6 / 2 × 25% ≈ 45%), translates to roughly a 45% reduction in cost. This correlates closely with the actual measured average cost reduction of 54% over the period.
Exhibit 4 – Functional Delivery Index
Vast improvements have been made in the quality of the systems during project execution, as evidenced by the improvements in the Month 1 Errors metric shown in Exhibit 5 below. The first assessment showed a definite problem in the quality of the delivered systems. This problem was addressed, resulting in a dramatic improvement in quality in the following assessment. Quality then declined slightly over the ensuing three years, but improved significantly again in the most recent assessment.
Exhibit 5 – Quality: Month 1 Errors
Conclusion
This paper has outlined the method, results and benefits of a benchmarking process that has been used by the particular organisation over the past six years. The results show a steady improvement in the comparative benchmark measures as improvements have been implemented based on the recommendations flowing from the weaknesses identified in the benchmark assessments. The organisation feels that, based on the quantification of these improvements, the cost and effort involved in maintaining the benchmarking process is more than justified.