Effort tracking – is it worth the effort?


Karen Doyle, Software Quality Assurance, Intel Shannon Ltd.


Effort tracking is an integral management tool for project leads and adds significant value if used efficiently. Despite this, many organisations find it difficult to implement, particularly when many regard it as nothing more than additional overhead with little value. This paper introduces a typical process overview and discusses the barriers to implementing a process, in an attempt to give the reader an insight into how to actually realise the tangible benefits of effort tracking. Examples of these benefits are presented from experience working with a highly focused software development team within a Fortune 500 multinational.


What management controls can you put in place to ensure that you will deliver the desired output in the planned timeframe? Some form of measurement of progress against the schedule is absolutely essential. It sounds extremely straightforward and obvious, yet many projects make the classic mistake of not having such a measurement system in place. Before you can keep a project on track, you have to be able to tell whether it's on track (McConnell, 1996). Introducing an effort-tracking system is by far the best way of providing the measurements that will enable the project management team to control the schedule, make data-driven decisions and even gather historical data for future projects.

Understanding the problem domain

What is effort tracking and why is it useful?

Effort tracking is primarily a management tool for schedule control. Specifically, it is the process of measuring the hours expended on both planned and unplanned activity. This is a crucial point as it enables the project manager to compare actual versus estimates for planned items in the Work Breakdown Structure (WBS), thus enabling schedule control. Moreover, it also facilitates the gathering of data for use in planning future projects.

In his article titled “Why is software late?” Van Genuchten holds that the amount of “other work” in the projects he studied was underestimated (Van Genuchten, 1991). This observation can be found in many books and articles on the subject of software development. McConnell cites “Insufficient management controls” and “Omitting necessary tasks from estimates” as classic mistakes (McConnell, 1996). A robust effort-tracking process is invaluable in avoiding these classic mistakes. It can provide measurements for schedule control, data-driven decision-making and the gathering of historical data for continuous improvement in estimation and planning capabilities. Examples of these benefits will be explored in more depth in subsequent sections, but first the process itself should be discussed.


Effort tracking within the project management framework

Effort tracking is primarily a schedule-control mechanism and, as such, it exists in the time knowledge area within the monitoring and controlling process group. There are also elements of the process that are relevant to the planning process group (e.g. gathering data for future planning). Exhibit 1 illustrates where the effort-tracking process fits within the project management framework. The numbers in parentheses identify the A Guide to the Project Management Body of Knowledge (PMBOK® Guide) process groups that the effort-tracking process interacts with (PMI, 2004, p 70).

Effort-tracking process within the project management framework

Exhibit 1 – Effort-tracking process within the project management framework


“Everything should be made as simple as possible, but no simpler.”—Albert Einstein.

This statement should always be to the fore when it comes to process definition in a software development environment, and is particularly relevant to effort tracking given that one of the main reasons for failure is an overcomplicated process (May, 1998). Not paying due attention to the process definition will almost certainly result in failure. The key consideration in process definition is to focus on analysis of the output first, i.e. decide what is required from the process in terms of data points and then determine what tools are available to do the job. For this reason, we begin the overview of the process with outputs first.


Before a single entry is logged, decide exactly what measurements are required. The outputs depicted in Exhibit 2 below illustrate some typical examples; for instance, you may need:

  • Earned value metrics for schedule control.
  • Overhead percentages to facilitate prioritization of activities.
  • Distribution of other non-project activity for planning future projects.

Only when the outputs are defined are you in a position to determine the tools and/or techniques to generate that output. Frequently, breakdowns in the process have occurred because the emphasis has been placed on the tool first, thus dictating the process, when it should be the other way around.


There is a wide variety of tools now available to facilitate the collection of input for effort tracking. Once the desired outputs are understood, there are a number of secondary requirements that determine tool selection such as:

  • Team size – some tools scale for larger development teams better than others.
  • Organisation Structure – some tools cater for multiple projects, as is typical in matrix organisations.
  • Configuration – tools that can import MS Project schedules versus plans in an Excel spreadsheet.
  • Granularity of information – upper layers in a large organisation require higher abstraction of the data, e.g. hours versus quarters.
  • Reporting capabilities – some tools have excellent data presentation features; some use back-end databases that can be queried by SQL scripts and applications.

In selecting a tool, the recommendation is to define weighted selection criteria based on the desired outputs and requirements, and to select the tool that scores highest across the range of categories.
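As a sketch, the weighted selection can be as simple as a scoring matrix; the criteria, weights, scores and tool names below are purely illustrative:

```python
# Hypothetical weighted selection matrix for effort-tracking tools.
# Weights reflect the relative importance of each requirement; each
# candidate is scored 1-5 per criterion. All figures are invented.
weights = {
    "scales_to_team_size": 3,
    "multi_project_support": 2,
    "ms_project_import": 2,
    "reporting": 4,
}

candidates = {
    "Tool A": {"scales_to_team_size": 4, "multi_project_support": 2,
               "ms_project_import": 5, "reporting": 3},
    "Tool B": {"scales_to_team_size": 3, "multi_project_support": 5,
               "ms_project_import": 2, "reporting": 4},
}

def weighted_score(scores: dict) -> int:
    """Sum of criterion score multiplied by criterion weight."""
    return sum(weights[c] * scores[c] for c in weights)

# Pick the tool with the highest weighted total.
best = max(candidates, key=lambda tool: weighted_score(candidates[tool]))
for tool, scores in candidates.items():
    print(tool, weighted_score(scores))
print("Selected:", best)
```

Keeping the weights explicit makes the selection rationale visible to the team, which helps with the buy-in issues discussed later.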


In order to minimise the overhead on users entering data, the process must be lightweight and uncomplicated. Data is logged against planned project activities and other activities at regular pre-defined intervals, typically weekly. One item often overlooked is ensuring a common understanding of the input categories. If users are entering effort for similar activities but in different categories, any analysis based on that data may be inaccurate. The key point is not to overlook training at the outset.

Graphical representation of process flow

Exhibit 2 – Graphical representation of process flow

Exhibit 2 summarises a typical process flow. Users enter data at regular pre-defined intervals; the tool captures the data against the pre-configured schedule and categories; the project manager analyses the output data to check for schedule variance and/or to make decisions based on the output. Problem solved: there is now a closed-loop measurement system for schedule control. In theory, it appears very straightforward, but in practice, there are often quite a few barriers to getting an effective process in place.

Barriers to implementing

If the process flow depicted in Exhibit 2 is so clear-cut, why is it frequently difficult to get an effective process in place? If the benefits are so worthwhile, why would the entire team not want it in place from the outset? In practice, it is viewed as a non-trivial routine task for both the project manager and the development team. Isn't it just another task for the team to do in an already process-heavy environment? This is the most frequent response from project managers and developers alike when the topic is raised. Moreover, the benefits are not immediately obvious, so why would a team want to institute it? This response is entirely understandable; it is based on those individuals' experiences with similar processes that have failed miserably. A combination of factors contributes to this impression:

  • Users don't believe that the data is being used – they frequently log input as the process requires but never see any actions or decisions being taken based on the data. It's like entering data in a “black hole”. It is time-consuming for people to enter data when they don't see any direct benefit in doing so, i.e. it is just another routine task they have to do.
  • Users don't trust how the data is being interpreted and/or used – the process should be task-oriented, not people-oriented; it should be used to track progress against schedule and not as a performance management tool. If people feel that they are being monitored or evaluated, they tend to log what they believe is expected rather than what actually happened. Never has the old adage of “garbage in, garbage out” been more applicable than when the user loses confidence in how the data is being used.
  • Poor tools and an over-complicated process result in users, at best, just sending e-mail summaries of status. The net effect over time is a fragmented or incomplete database.

The best way to overcome these barriers is to actively demonstrate the benefits of the process in the first instance to team members, i.e. show them what is in it for them. The key to achieving an effective effort-tracking process is to secure buy-in from the users. Do this by keeping it as simple as possible and demonstrating how the data is used to the benefit of the project/team. Credibility and trust are paramount to success. The project manager must act and be seen to act on the data.


Schedule Control

It is often said, quite correctly, that the project manager's number one priority is to protect the critical path (Goldratt, 1997). It follows that schedule management must be considered one of the most important functions. It is also the one metric that senior management and other stakeholders most often enquire about. One of the major benefits of effort tracking is that it provides a closed loop feedback system for schedule analysis and control.

Exhibit 1 illustrates where effort tracking fits within the context of schedule control. Incorporating the process in this fashion enables the continuous measurement of planned versus actual effort. The simple addition of a “percentage complete” attribute facilitates the calculation of Earned Value metrics, which can be used to gauge schedule variance and performance (PMI, 2004, p 70).
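As an illustration of the calculation, here is a minimal sketch of deriving Schedule Variance (SV) and the Schedule Performance Index (SPI) from planned hours and percentage complete; all task names and figures are hypothetical, and hours stand in for cost:

```python
# Minimal earned-value sketch from effort-tracking data.
# Each tuple: (task, planned_hours, planned %-complete to date, actual %-complete).
tasks = [
    ("design",  80, 1.00, 1.00),
    ("code",   120, 0.50, 0.40),
    ("test",    60, 0.00, 0.00),
]

# Planned Value: work scheduled to be done by now, in hours.
pv = sum(hours * planned for _, hours, planned, _ in tasks)
# Earned Value: work actually completed, valued at planned hours.
ev = sum(hours * actual for _, hours, _, actual in tasks)

sv = ev - pv   # Schedule Variance: negative means behind plan
spi = ev / pv  # Schedule Performance Index: < 1 means behind plan

print(f"PV={pv}h EV={ev}h SV={sv}h SPI={spi:.2f}")
```

Run weekly against the effort-tracking data, the SPI trend gives early warning of the kind of schedule slip shown in Exhibit 3.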

Having this type of information available on a regular basis is invaluable to the project manager. It gives him/her the one thing so often wished for in striving to manage the critical path: time to take corrective action based on current measurements to meet the schedule baseline. Exhibit 3 shows one such example in practice. Proactive analysis of the data showed that poor scope control had resulted in a drop in schedule performance against plan. Root-causing this problem allowed the project manager to initiate formal change control for the additional work rather than miss the planned delivery date. This type of analysis would not have been possible without the effort-tracking process in place.

Using Earned Value to measure Schedule Variance

Exhibit 3 – Using Earned Value to measure Schedule Variance

Data-Driven Decision Making

“I used to be indecisive, but now I'm not so sure…”

The typical project manager is faced with making thousands of decisions over the course of a project. Some of these decisions can be made based on experience and intuition, but many cannot. Instead, they need to be based on accurate, factual data that all concerned can understand and row in behind. Tracking effort against plan is one means of providing that data.

The best way to illustrate this point is by means of an example. While previously working as part of a development team on a particular project, the project manager published the metrics that would be used to track progress against goals (e.g. percentage overhead). In addition, thresholds were defined for each metric upon which corrective action would be taken should the threshold value be reached. As the team was operating within a matrix organisation, there were a number of competing demands on people's time. Through use of a simple effort-tracking process, the project manager was able to demonstrate that certain thresholds had been reached or exceeded. Corrective action was instantaneous: there was no need for endless meetings debating the rationale for decisions or actions; the team understood that mid-course corrections were being taken based on data and immediately rowed in behind the decisions. The project manager had succeeded in securing buy-in from the team.

Exhibit 4 below is an example of the data used to make the decision that certain activities in which the team was engaged (e.g. training, support activities) were to be put on hold until the non-project overhead indicator returned to an acceptable level. One of the key learnings from that experience was that not only did the project manager control the project through data-driven decisions, but the manner in which the data was used, and was seen to be used, had the fringe benefit of giving the entire team confidence in the process. Use of the data was task-oriented, timely and decisive, resulting in better input, better analysis, improved predictability and ultimately project success.

Using pre-defined thresholds to trigger corrective action

Exhibit 4 – Using pre-defined thresholds to trigger corrective action
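The threshold mechanism described above can be sketched in a few lines; the category names, hours and the 20% threshold are assumptions for illustration only:

```python
# Sketch of a pre-defined threshold check on weekly effort logs.
# Corrective action is triggered when non-project overhead exceeds
# the published threshold. All figures are invented.
OVERHEAD_THRESHOLD = 0.20  # act if overhead exceeds 20% of logged hours

weekly_log = {
    "project tasks": 280,  # hours logged by the team this week
    "training": 40,
    "support": 35,
}

total = sum(weekly_log.values())
overhead = (weekly_log["training"] + weekly_log["support"]) / total

if overhead > OVERHEAD_THRESHOLD:
    print(f"Overhead at {overhead:.0%} - hold non-project activity")
else:
    print(f"Overhead at {overhead:.0%} - within threshold")
```

Because the threshold is published in advance, the resulting action needs no debate: the trigger condition and the data are visible to everyone.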

Another important indicator that this data can provide is the level of multi-tasking or context switching that the team has to contend with. Context switching between tasks takes time, and on average sequential processing gets results faster (Spolsky, 2001). This is particularly relevant during periods of the project where laser focus on certain tasks or activities is required.
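Spolsky's argument can be demonstrated with simple arithmetic; the four-week task size below is purely illustrative:

```python
# Back-of-envelope comparison of sequential versus interleaved work,
# following the argument in Spolsky (2001). Figures are illustrative.
task_weeks = 4  # two tasks, each needing four weeks of focused work

# Sequential: finish task A completely, then task B.
seq_a_done, seq_b_done = task_weeks, 2 * task_weeks          # weeks 4 and 8

# Interleaved week by week (even ignoring switch cost):
# A finishes in week 7, B in week 8.
int_a_done, int_b_done = 2 * task_weeks - 1, 2 * task_weeks  # weeks 7 and 8

print("average completion, sequential :", (seq_a_done + seq_b_done) / 2)
print("average completion, interleaved:", (int_a_done + int_b_done) / 2)
# Even with zero switching overhead, interleaving delays the average
# delivery; any real context-switch cost pushes both dates later still.
```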

Managing and influencing the factors that cause change is crucial to the success of the project. If the project manager is not in a position to make mid-course adjustments quickly based on data, the project is doomed to failure. Effort tracking in isolation does not guarantee the success scenario just described; goal clarity also plays a huge role in decision-making. Without data and clear goals, decisions are based on gut feeling and instinct, which often vary significantly from one project manager to another.

Improved Planning of Future Projects

“My interest is in the future because I am going to spend the rest of my life there…”

One of the most obvious benefits of an efficient effort-tracking process is the collection of data related to past performance and activities. When it comes to planning projects, there is no substitute for historical data; better still, historical data for a similar project developed by the same team. Put another way, ‘Today's weather may not be the same as tomorrow's, but it's more likely to be the best single predictor.' Logging actuals versus estimates yields important information for future effort estimates, while logging effort for unplanned activity identifies tasks that should be included in subsequent project plans. This is particularly true for those operating within an iterative development process. “One of the most common sources of estimation error is forgetting to include necessary tasks in the project estimates” (McConnell, 2006, p 44). Taking steps to collect both of these data points results in what every project manager strives for, i.e. better planning. Careful planning at the beginning of the project is perhaps the single most important factor that distinguishes success from failure (Schach, 1997, p 291).
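One simple way to apply such historical data is to calibrate new estimates with the team's past actual-versus-estimate ratios; the task figures below are hypothetical:

```python
# Sketch: calibrating a future estimate with historical
# actual-versus-estimate ratios captured by effort tracking.
# All task data is invented for illustration.
from statistics import mean

history = [
    # (estimated_hours, actual_hours) from past, similar tasks
    (40, 52),
    (80, 96),
    (20, 28),
]

# Average overrun factor observed on this team's past work.
factor = mean(actual / est for est, actual in history)

new_estimate = 60  # raw estimate for a similar upcoming task, in hours
calibrated = new_estimate * factor
print(f"factor={factor:.2f}, calibrated estimate={calibrated:.0f}h")
```

Using the team's own history rather than industry averages accounts for local organisational influences, one of the benefits of historical data listed below.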

An effective effort-tracking process not only tracks effort consumption against defined project tasks but also measures the consumption of effort against traditional overhead activities, archiving it as historical data. Exhibit 5 illustrates an example of the distribution of other activity that one particular development team measured on a recent project. The same team used this historical data as input to subsequent planning sessions. The availability of this type of data is invaluable in overcoming the “systemic problem of under-estimation” in the software industry (McConnell, 2006, p 27).

Typical distribution of other activity for historical data

Exhibit 5 – Typical distribution of other activity for historical data
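A distribution like the one in Exhibit 5 can be derived directly from the logged effort data; the category names and hours below are invented for illustration:

```python
# Summarising the distribution of non-project activity from logged
# effort, for archiving as historical planning data. Figures invented.
logged = {
    "meetings": 120,  # hours logged against each overhead category
    "training": 80,
    "support": 60,
    "admin": 40,
}

total = sum(logged.values())
# Fraction of total overhead consumed by each category.
distribution = {cat: hours / total for cat, hours in logged.items()}

# Report largest categories first, as percentages.
for cat, share in sorted(distribution.items(), key=lambda kv: -kv[1]):
    print(f"{cat:10s} {share:.0%}")
```

Feeding these percentages into the next planning session turns "forgotten" overhead into explicitly budgeted tasks.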

Some of the benefits of gathering historical data are as follows (Construx, 2006):

  • Avoids Guesswork.
  • Avoids politically charged assumptions such as “team is below average”.
  • Accounts for organisation influences.
  • Is negatively correlated with cost & schedule overruns.
  • Provides the best indication of future productivity.

If people don't keep careful records of previous projects, they forget about the less visible tasks, but those tasks add up. Omitted effort often adds about 20 to 30 percent to a development schedule (Van Genuchten, 1991). An example which illustrates this point combines all the benefits discussed in this paper into one argument. The project was using a schedule-control mechanism incorporating effort tracking as shown in Exhibit 1. The scrum process (Clifton & Dunlap, 2003) was also being used to decompose the high-level project schedule into several shorter sprints. Project effort and overhead were logged weekly through the effort-tracking process. After approximately three or four sprints, one of the most influential members of the development team approached the project manager and requested that his availability to work on project tasks be reduced, based on the data gathered over the previous sprints. It transpired that he had been spending more time mentoring junior members than originally planned. Once it was agreed that this was a valid and necessary task, the appropriate adjustments for subsequent sprints were made. The team member was pleased that he could influence the planning using past data, which left him feeling empowered and motivated. The project manager was even more pleased. He knew at that moment that he had overcome one of the typical barriers to implementing effort tracking, i.e. buy-in from the team. The wheels were in motion for better input, better analysis and better predictability. The importance of predictability should not be underestimated; it's what 80% of executives value most (McConnell, 2006, p 29).


The examples referenced in this paper demonstrate that effort tracking is an integral management tool for project leads and adds significant value if used efficiently. Implementing an effective process requires an understanding of the typical barriers and how to overcome them. The key messages are to focus on process outputs rather than the tools, ensure it is task oriented, secure buy-in from the team and most importantly act and be seen to act on the data when available. Realise the benefits that are possible and actively demonstrate those benefits to the team. Ultimately, a project schedule steered by data-driven decisions should prove that it's worth the effort.


Clifton, M. & Dunlap, J. (2003, August). What is SCRUM? Retrieved on February 1st 2007, from http://www.codeproject.com/gen/design/scrum.asp

Construx Software. (2006, September) Software Estimation in Depth. Bellevue, WA, USA

Goldratt, E.M. (1997) Critical Chain. Great Barrington, MA: The North River Press.

May, L.J. (1998) Major Causes of Software Project Failures. Retrieved on February 1st 2007, from http://www.stsc.hill.af.mil/crosstalk/1998/07/causes.asp

McConnell, S. (2006) Software Estimation: Demystifying the Black Art. Redmond, WA: Microsoft Press.

McConnell, S. (1996). Classic Mistakes Enumerated. Retrieved on February 1st 2007, from http://www.stevemcconnell.com/rdenum.htm

Project Management Institute. (2004) A guide to the project management body of knowledge (PMBOK® Guide) (3rd ed.). Newtown Square, PA: Project Management Institute.

Schach, S.R. (1997) Software Engineering with Java. Singapore: McGraw-Hill.

Spolsky, J. (2001, February 12). Human Task Switches Considered Harmful. Retrieved on February 1st 2007, from http://www.joelonsoftware.com/articles/fog0000000022.html

Van Genuchten, M. (1991, June) ‘Why is software late? An empirical study of reasons for delay in software development', IEEE Transactions on Software Engineering. Retrieved on February 1st 2007, from http://is.tm.tue.nl/staff/mvgenuchten/mvgwhylate.pdf

© 2007, Intel Corporation
Originally published as a part of 2007 PMI Global Congress Proceedings – Budapest


