The Science of Uncertainty


ProjectLink Consulting

Estimation is at the heart of most project disciplines, and project cost and time overruns can often be traced back to inaccurate estimates.

Estimation requires human involvement to create a forecast that considers past projects, personal experience, and industry-specific knowledge and techniques. But the process of estimation is often subject to biases by the estimator.

This paper explores the problem of estimation inaccuracies from a cognitive psychological perspective. It looks at various research studies about the way in which the human brain deals with forecasting, and makes recommendations on how estimates can be improved.

Keywords: estimation, optimism bias, uncertainties, psychology


How often have you left the office on a Friday afternoon with a list of work assignments that you wanted to complete over the weekend? Did you reflect on the fact that although you were never able to finish everything on the list on previous weekends, you remained optimistic that this weekend would be different? Then, driving back to work on Monday morning, did you feel slightly defeated, since you managed to finish only half the items on your list?

Human endeavours, and specifically projects, frequently seem to suffer from a phenomenon called the “planning fallacy.” Kahneman and Tversky (1979) first described this phenomenon, in which the amount of time required to perform a future task is underestimated, resulting in the late completion of the task.

There are numerous examples of inaccurate estimates. In many cases inaccurate estimates are little more than an annoyance, but when large projects overrun their estimates, it can have disastrous consequences. There are many examples of failed projects. In this context, project failure could be late completion, budget overruns, or not achieving business benefits. There is no shortage of well-documented failed projects. Examples include World's Fair Expo 86 (Ross & Staw, 1986), Shoreham Nuclear Power Plant (Ross & Staw, 1993), Chicago Deep Tunnel Project (Staw & Ross, 1987), and large transportation projects (Flyvbjerg, Bruzelius, & Rothengatter, 2003).

Inaccurate estimates are not limited to project durations; they also apply to many other estimates, including resources, cost, performance, personal abilities, and business benefits (Lovallo & Kahneman, 2003; Sharot et al., 2012).


Most instances of overly optimistic estimates are ascribed to optimism bias. Optimism bias is a well-known phenomenon in humans, and was identified as early as 1925 (Lund, 1925). It refers to the inclination of individuals to believe that they are more likely to experience favourable events, and less likely to experience negative events, than other people (Tversky & Kahneman, 1974).

Research also suggests that optimism is associated with an area of the brain called the inferior frontal gyrus, and that a person's level of optimism can be altered through transcranial magnetic stimulation (Sharot et al., 2012).


To guide this discussion, this paper proposes an estimation process with a number of features (Exhibit 1).

The first feature is the estimation object. The object could be a task, an event, a system function, equipment, material, and so forth. The second feature is that each object has one or more attributes, such as duration, work effort, quantity of resources, and so forth. The third feature is that each attribute possesses an uncertain quantity, which can be estimated.

The fourth feature is the estimator. The estimator is the person who must determine the uncertain quantity for an attribute. The fifth feature is the estimation event. The estimation event is the cognitive process that the estimator performs to determine the quantity of an attribute. Two features influence the estimation event. Internal influencing factors are factors inherent in the estimator (e.g., experience, bias, personal interest). External influencing factors are risk events that may have an impact on the estimate.


Exhibit 1: Estimation features.

The last two features are the outcome of the estimation event, which is the estimated quantity, and the associated assumptions to which the estimated quantity is subject.

Applying this model to one of your possible weekend tasks, we have the following:

Object: Monthly financial report
Attributes: Duration, material, work hours
Uncertain quantity: Work hours required to write the report
Estimator: You
Internal influencing factors:
  • The work hours it took to write the report last month
  • Your perceived competence
  • Optimism that you can finish the task in a short time
External influencing factors: Interruptions from your family
Estimated quantity: 4 hours
Assumptions:
  • No interruptions will occur
  • Similar workload to previous reports
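The features above map naturally onto a small data structure. The sketch below is illustrative only; the class and field names are assumptions, not part of the paper's model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Estimate:
    """One estimation event: an estimator quantifies one attribute of an object."""
    obj: str                       # estimation object, e.g., a task or deliverable
    attribute: str                 # e.g., duration, work effort, cost
    estimator: str                 # person performing the estimate
    internal_factors: List[str] = field(default_factory=list)  # experience, bias, etc.
    external_factors: List[str] = field(default_factory=list)  # risk events
    estimated_quantity: Optional[float] = None                 # outcome of the estimation event
    unit: str = ""
    assumptions: List[str] = field(default_factory=list)       # conditions the estimate relies on

# The weekend-report example from the text:
report = Estimate(
    obj="Monthly financial report",
    attribute="work hours",
    estimator="You",
    internal_factors=["last month's hours", "perceived competence", "optimism"],
    external_factors=["interruptions from your family"],
    estimated_quantity=4.0,
    unit="hours",
    assumptions=["no interruptions will occur", "similar workload to previous reports"],
)
```

Recording the assumptions alongside the quantity keeps the conditions under which the estimate holds visible, rather than leaving them implicit.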

The two features that affect the estimate most are the internal and external influencing factors. The effect of these factors is further discussed in the context of existing research.


Kahneman and Lovallo (1993) proposed that two processes are involved in the development of an estimate. The first is the inside view, which involves visualizing how the work will be done; the estimate of the uncertain quantity is determined through scenarios, narratives, or mental simulations of the attribute. The second is the outside view, which sets aside the specifics of the case at hand and instead draws on the distribution of outcomes in a class of similar past undertakings.

A number of research studies have explored the inside versus outside view of estimation.

Kucian, Von Aster, Loenneker, Dietrich & Martin (2008) used functional magnetic resonance imaging (fMRI) to show which parts of the brain are active during the estimation process.


Research findings suggest that people ignore past experiences and focus their attention on planning future events. Prediction, by its nature, elicits a focus on the future; a task still to come seems so different from a task already completed that people cannot compare the two meaningfully, because something done in the past is fully visible while something still to come is not (Buehler, Griffin, & Peetz, 2010).

A second problem with referencing past events is that disruptive events (such as a breakdown or scope change) are excluded from future estimates, since the past events are fully understood from a present-day perspective, and are often classified as unique, one-shot events (Buehler, Griffin, & Ross, 1994).

In an estimation experiment where participants had to verbalise their thoughts while producing estimates, 74% of participants had thoughts directed at the future and did not foresee any problems arising. Three percent considered that problems could possibly arise, 7% referred to their own past experiences, and only 1% considered the experiences of other people who had performed similar tasks (Buehler et al., 1994).

Participants who were asked to deliberately recall previous similar tasks before making an estimate still suffered from the planning fallacy. However, when participants had to write down a plausible plan for performing the project whilst recalling past experiences, the planning fallacy was largely eliminated (Buehler et al., 1994).

Another aspect of estimation is the tendency to believe that our own past performances are the result of unique events, which we can easily explain. These events are then ignored in estimates, since estimators do not believe that the same misfortune will befall them again. The bad performances of others are often attributed to their general lack of competence as opposed to extraordinary events. Lessons from past projects are therefore often ignored or considered with a very low probability. The most common excuse for not using historical information is that the projects are not the same (Buehler et al., 1994).


Estimators who are motivated by incentives have a tendency to be overly optimistic about their estimates.

Buehler, Griffin, and MacDonald (1997) suggest that there are two factors that lead to over-optimism when incentives for early task completion are at stake:

  • Estimators have an exaggerated focus on singular, future plan–based scenarios.
  • Estimators focus on success, and do not consider competing tasks and other obstacles that may affect completion.

This is clearly a strong inside view.

Brunnermeier, Papakonstantinou, and Parker (2008) showed that incentives led to shorter time estimates for tasks but had no effect on the actual completion performance of the team. Underestimating the required duration or work of a task will not motivate the project team to work faster or smarter.

One may be tempted to think that using an independent unbiased expert (observer) to perform an estimate may solve the planning fallacy problem.

Buehler et al. (1997) found that outside observers tend to have a much more conservative view of estimates, but when observers are incentivised for faster completion times, they suffer from the planning fallacy, and underestimate the durations. However, by presenting the observer with the past performance of the actor (the person or team who will be doing the work), most of the incentive optimism effect is removed.

It is argued that incentives shift the attention of the estimator (actor or observer) toward the development of a successful plan (planning for success), and away from the things that can possibly go wrong.


One of the reasons put forward to explain why we do not consider past negative events is the way in which the brain captures memories.

Gruber, Ritchey, Wang, Doss, and Ranganath (2016) suggest that we retain detailed memories for only a small proportion of the events of each day. Their research found that the brain optimises which memories are stored, and tends to store those memories that may be most important for obtaining rewards in the future. In the case of estimation, those would be memories of successes, and the memories of what led to failures would be vague. This exacerbates the inability of estimators to consider the negative events that affected past projects.


When one understands the factors that lead to bad estimates, it is possible to devise methods to improve the process. Even though there are many instances of failed projects, there is also a substantial number of successful projects from which we can learn.


Reference class forecasting involves referencing similar past projects in the development of the project estimate (Lovallo & Kahneman, 2003). It essentially changes the focus of the estimate from an inside to an outside view.

Flyvbjerg (2008) recommends reference class forecasting as a specific way to deal with estimating optimism bias. This method essentially removes the project planner's reliance on his or her own experience when performing estimates.
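To make the idea concrete, one simple way to operationalise reference class forecasting is to take the overrun ratios (actual divided by estimated) of completed projects in the reference class and apply a chosen percentile of that distribution as an uplift to the inside-view estimate. The sketch below follows that idea under stated assumptions; the data, the nearest-rank percentile method, and the 80% certainty level are illustrative, not figures from Flyvbjerg's work.

```python
def reference_class_uplift(inside_view_estimate, past_estimates, past_actuals, percentile=80):
    """Uplift an inside-view estimate using the empirical distribution of
    overrun ratios (actual / estimate) from a reference class of similar
    completed projects. Uses a simple nearest-rank percentile."""
    ratios = sorted(a / e for a, e in zip(past_actuals, past_estimates))
    k = max(0, min(len(ratios) - 1, round(percentile / 100 * len(ratios)) - 1))
    return inside_view_estimate * ratios[k]

# Hypothetical reference class of five similar past projects (days):
past_est = [100, 80, 120, 90, 110]   # original estimates
past_act = [130, 100, 150, 99, 154]  # actual outcomes
uplifted = reference_class_uplift(100, past_est, past_act, percentile=80)
```

With these illustrative figures the 80th-percentile overrun ratio is 1.3, so a 100-day inside-view estimate is uplifted to 130 days. The key point is that the adjustment comes from the outcomes of other projects, not from the estimator's own plan.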

The effect of optimism bias has become so prevalent in some sectors that specific corrective measures have been adopted. Following the Mott MacDonald (2002) report on large procurement projects in the United Kingdom, HM Treasury included optimism bias in The Green Book: Appraisal and Evaluation in Central Government, Treasury Guidance (HM Treasury, 2011) and Supplementary Green Book Guidance: Optimism Bias (HM Treasury, 2002). These guidance documents specifically address optimism bias in four project parameters—capital costs, works duration, operating costs, and under-delivery of benefits (HM Treasury, 2011, p. 85)—and propose actions that should be taken to reduce unwarranted optimism.


Estimate reviews by independent estimators can help remove optimism bias. However, it is important that reviewers are not incentivised to produce a specific project outcome. Reviews should focus on the outside view of the project—the risk events that could occur (Buehler et al., 1997).


Kruger and Evans (2004) suggest that the simple act of unpacking projects into their constituent deliverables and tasks highlights the true size of the work involved and reduces the planning fallacy.

This type of decomposition is common in project management through the development of the work breakdown structure (WBS), and should be used in combination with the other techniques. However, organisations should ensure that the project team follows good WBS development techniques. The WBS itself may be an important part of the documents that are independently reviewed.
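A minimal sketch of this rollup idea, assuming a WBS node is either a leaf carrying an hours figure or a parent carrying a list of children (all names and numbers are hypothetical):

```python
def rollup(node):
    """Sum the leaf estimates under a WBS node.

    A node is a (name, payload) pair: a leaf's payload is an hours figure,
    an interior node's payload is a list of child nodes."""
    name, payload = node
    if isinstance(payload, list):
        return sum(rollup(child) for child in payload)
    return payload

# Unpacking the weekend report task from the earlier example makes its
# true size visible, where a holistic estimate said 4 hours:
report_wbs = ("Monthly financial report", [
    ("Gather figures", 2.0),
    ("Reconcile discrepancies", 1.5),
    ("Write commentary", 2.0),
    ("Review and format", 1.0),
])
unpacked_total = rollup(report_wbs)  # 6.5 hours
```

The unpacked total exceeds the holistic estimate precisely because enumeration forces each constituent task into view, which is the mechanism Kruger and Evans (2004) describe.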


The review of several research sources that deal with estimation makes it clear that there are deep-seated psychological challenges when humans develop estimates. Our brains appear predisposed to optimism about the future, and we have a tendency to forget unpleasant past events.

However, there are several methods that can be used to counter over-optimism and to reduce the planning fallacy effect. From these recommendations the Optimism Bias Grid in Appendix A was developed. Project teams can use it to determine whether their estimates contain unnecessary optimism.



Werner is the founder of ProjectLink Consulting and has been involved in project management for the past 20 years. He works in a wide range of industries, including IT, engineering, mining, and telecommunications. He is a part-time lecturer at the University of Pretoria in South Africa. Werner holds a PhD in Project and Programme Management from SKEMA in France, and his research focused on the psychology of project termination decisions. His current research is in the area of neuroeconomics.



Brunnermeier, M. K., Papakonstantinou, F., & Parker, J. A. (2008). An economic model of the planning fallacy (NBER Working Paper No. 14228). Cambridge, MA: National Bureau of Economic Research.

Buehler, R., Griffin, D., & MacDonald, H. (1997). The role of motivated reasoning in optimistic time predictions. Personality and Social Psychology Bulletin, 23(3), 238–247.

Buehler, R., Griffin, D., & Peetz, J. (2010). The planning fallacy: Cognitive, motivational, and social origins. In M. P. Zanna & J. M. Olson (Eds.), Advances in experimental social psychology (Vol. 43, pp. 1–62). San Diego, CA: Academic Press.

Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the “planning fallacy”: Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366–381.

Flyvbjerg, B. (2008). Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. European Planning Studies, 16(1), 3–21.

Flyvbjerg, B., Bruzelius, N., & Rothengatter, W. (2003). Megaprojects and risk: An anatomy of ambition. Cambridge, UK: Cambridge University Press.

Gruber, M. J., Ritchey, M., Wang, S., Doss, M. K., & Ranganath, C. (2016). Post-learning hippocampal dynamics promote preferential retention of rewarding events. Neuron, 89(5), 1110–1120. doi:10.1016/j.neuron.2016.01.017

HM Treasury. (2002). Supplementary green book guidance: Optimism bias. Norwich, UK: TSO.

HM Treasury. (2011). The green book: Appraisal and evaluation in central government, Treasury guidance. Norwich, UK: TSO.

Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39(1), 17–31.

Kahneman, D., & Tversky, A. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313–327.

Kruger, J., & Evans, M. (2004). If you don't want to be late, enumerate: Unpacking reduces the planning fallacy. Journal of Experimental Social Psychology, 40(5), 586–598.

Kucian, K., Von Aster, M., Loenneker, T., Dietrich, T., & Martin, E. (2008). Development of neural networks for exact and approximate calculation: A fMRI study. Developmental Neuropsychology, 33(4), 447–473.

Lovallo, D., & Kahneman, D. (2003). Delusions of success: How optimism undermines executives’ decisions. Harvard Business Review, 81(7), 56–67.

Lund, F. H. (1925). The psychology of belief. The Journal of Abnormal and Social Psychology, 20(1), 63.

Mott MacDonald. (2002). Review of large public procurement in the UK. Norwich, UK: TSO.

Ross, J., & Staw, B. M. (1986). Expo 86: An escalation prototype. Administrative Science Quarterly, 31, 274–297.

Ross, J., & Staw, B. M. (1993). Organizational escalation and exit: Lessons from the Shoreham nuclear power plant. Academy of Management Journal, 36(4), 701–732.

Sharot, T., Kanai, R., Marston, D., Korn, C. W., Rees, G., & Dolan, R. J. (2012). Selectively altering belief formation in the human brain. Proceedings of the National Academy of Sciences, 109(42), 17058–17062.

Staw, B. M., & Ross, J. (1987). Behavior in escalation situations: Antecedents, prototypes, and solutions. In B. M. Staw & L. L. Cummings (Eds.), Research in organizational behavior (Vol. 9, pp. 39–78). Greenwich, CT: JAI Press.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.


The Optimism Bias Grid indicates the likelihood that optimism bias is present in a project's time and/or cost estimate. Complete the grid using the information in the tables below; the larger the relative area covered, the greater the risk that the estimate is biased.



The level to which the estimated work was broken down. More detailed breakdowns lead to less optimism, because the complexity of the work becomes visible.

The percentage of activities that have durations longer than 10 working days and/or costs that combine 10 or more unique cost elements.

1: < 20%
2: 20%–40%
3: 40%–60%
4: 60%–80%
5: >80%


The focus of the estimate results. Internally focussed estimates lead to more optimism bias.

1: Estimate is primarily based on external data from previous similar projects by other estimators/teams.

3: Estimate based on a mix of the estimator's experience, or data obtained by the estimator, and previous similar projects by other estimators/teams.

5: Estimate primarily based on estimator's experience, or data obtained by the estimator.


1: Entire estimate was externally validated by independent, non-incentivised estimators.

3: External validation of some parts of the project estimate.

5: No external validation of the estimate.


Incentives for meeting targeted costs or schedules lead to more optimism.

1: There are no incentives for the project team or estimators to meet predetermined target dates or costs.

3: The project team and estimators are indirectly incentivised to meet predetermined target dates or costs.

5: The project team and estimators are directly incentivised to meet predetermined target dates or costs.
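The paper does not give a formula for the relative area covered, so the sketch below makes one plausible assumption: the four scores are plotted as spokes of a radar chart, and the area of the resulting polygon, relative to the polygon with all scores at the maximum, indicates the risk that the estimate is biased. The function name and the example scores are hypothetical.

```python
def relative_bias_area(scores, max_score=5):
    """Relative area of the radar polygon spanned by the grid scores.

    With n equally spaced axes, polygon area is proportional to the sum
    of products of adjacent spoke lengths; dividing by the all-maximum
    polygon normalises the result to at most 1.0."""
    n = len(scores)
    adjacent = sum(scores[i] * scores[(i + 1) % n] for i in range(n))
    return adjacent / (n * max_score ** 2)

# Hypothetical grid result: breakdown 4, estimate focus 3,
# external validation 3, incentives 3.
risk = relative_bias_area([4, 3, 3, 3])  # 0.42
```

A project scoring the maximum on all four dimensions yields 1.0, flagging the estimate as very likely to contain unwarranted optimism; lower relative areas suggest the countermeasures discussed above are in place.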

© 2016, WG Meyer, PhD
Originally published as part of the 2016 PMI® Global Congress Proceedings – Barcelona, Spain


