Garbage in, garbage out? Collect better data for your risk assessment
David T. Hulett, Ph.D.
Janice Y. Preston, PMP, CPA
A risk is a condition or event that is uncertain and, if it occurs, will have an impact on a project objective. The purpose of risk assessment is to improve your chances of reducing risk and making the project successful. To do this, you need to identify risks and evaluate their impact on project objectives. Risk identification is described in Chapter 11 of A Guide to the Project Management Body of Knowledge (PMBOK® Guide). Risk assessment involves an iterative process that helps you assess the severity of risk events, prioritize them and identify additional risks.
Collecting better data is the best way to improve your risk assessment. What interview techniques should you use? How do you combat biases? How can you develop an effective interview questionnaire? How can you develop criteria to measure probability and impact? What should you do with the data to convert it to information useful for project risk management? These questions are addressed in this paper.
Key to Good Risk Assessment
The key to good risk assessment is to gather good data. Have you been on a project where “obvious” risks were missed up front? Have you worked with a project team that focused on risks that were not the most critical? Useful data will help you unearth the most important risks and help you determine if more detailed quantification is required. It will help you develop responses to risks and it will guide you in establishing contingency plans and allowances.
Structuring the problem is a crucial first step. Have certain risks been identified during the feasibility or business case studies? Have you met with your key team members to identify major categories of risk? Don’t stop at identifying a “vendor problem.” Do you expect it to be related to delivery or quality? Use all your experience and that of your team while you’re thinking through areas of weakness in the project.
Then, gather data that relates to the important issues that may throw the project off course. This is the most time-consuming part of any risk assessment, and it demands the most resources: expert help and the time and attention of project managers, team leaders and subject matter experts. The data you collect may come from objective or subjective sources. Both can be valuable in an effective risk assessment.
Objective vs. Subjective Data
Objective data is typically based on measured results from past projects. For the data to be useful, the projects should have similarities in terms of project technology and scope, duration, cost, fit within the organization, or types of risks. Projects for comparison should be recent, with a reliable accumulation of data. Sometimes, there may be published benchmarks that would be relevant to your project.
It’s likely that very little objective data will be available and you will have to rely on subjective data and judgments. The main approaches to developing data for a risk analysis involve interviews and research into historical data, whether collected by the company or published in research reports. Unfortunately, many project participants report that their organizations do not maintain data from past projects, that those data are not accessible, or that the data do not highlight risks. For this reason, careful and sometimes extensive interviews are required to develop the data. Gathering information from experienced project team members and managers has benefits and pitfalls.
Benefits include:
•Improved understanding of the project’s problems and potential solutions
•Improved estimates for cost and schedule
•Increased cohesion of the teams
•Improved communication among project participants, contractor and owner
Pitfalls include:
•Not understanding the biases of interviewees—e.g., interviewing subcontractors who may use the interview to pad their estimates
•Interfering with the conduct of the project
•Taking too much time
•Chasing changing data—revisions of schedules or cost estimates can confound the risk analysis
Use the structure you’ve developed to help you focus on areas of weakness. Keep your antenna up for something new and unusual. Recognize that information frequently comes with biases that must be recognized and combated.
Using Interviews to Gather Data
Interview questions concern future events and it is difficult to collect data on events before they occur. There’s no substitute for experience when you’re gathering subjective data. Interview people close to and expert about the project, the technology, and outside influences. Since “there are no facts about the future,” the estimates are viewed as probabilistic statements. Interviewees are asked to use their judgment to develop risk scales of probability and impact, or about optimistic, most likely and pessimistic ranges of possible outcomes. These questions may present a problem since few interviewees have participated in a risk analysis and some are not comfortable using judgment. Several steps and tactics must be used to gather data that are useful, and as accurate and free of bias as possible.
Choose the right people. The people to interview are, generally, those who are expert on the project under review. These are the main project participants, such as chief engineers or team leaders. Other potential interviewees to consider are the customer, regulators if regulation is seen as a potential risk, and outside experts. Sometimes interviewees are biased, often because they have made the estimates that are being examined or because diverging from the estimates could jeopardize their careers. This bias sometimes disqualifies the project manager, because project managers are usually both committed to a particular outcome and subjected to such intense pressure from the customer that pessimism is just not permitted. In some cases this bias can be overcome by interviewing experts from the company who are not participating on the project or by getting an independent reviewer to weigh in with information from other industries. Refer to past projects and industry studies if available and relevant.
Brief the interviewees. The participants should be advised about the way a risk analysis works. They should be advised how their data will be used in combination with data provided by other interviewees, so they know they cannot scuttle the project all by themselves if they admit there is risk in their particular project area. They are usually promised a briefing on the results. They need to know about the common biases, why they arise, and that bias in providing judgmental data is common. In particular, discussing the pessimistic possibilities can be difficult and even a little painful the first time, and the interviewer will have to press them to consider extremes to try to combat this bias.
Set the correct tone. Interviewees need to be informed that “This is an exercise in honesty.” This statement is important when there is institutional bias, which is often in the optimistic direction to hold down estimated cost or shorten the estimated schedule. The risk interview may be the first time the interviewees have been asked for their own opinion, and the first time someone (the interviewer) has encouraged them to think “outside the box” that their institution has adopted as received doctrine. Many interviewees appreciate this request for candor and even use the risk interview as a way to communicate their fears for the project to management. Sometimes, having several people in the room creates an atmosphere of honesty and synergy that gets to the best estimates.
Common Biases in Interviews
Common biases encountered over many risk interviews include the following (these biases are discussed in the classic article by Tversky and Kahneman, 1974):
•Organizational bias—Make the project look better than it really is. This arises in bidding situations, and may also occur when estimates are established too early in the project and the designs are incomplete. More complete planning exposes risks of not meeting scope or extending schedules. Admitting this to customers, financing organizations, political bodies (for public projects), or senior management may be unthinkable. Finding the right people to interview who are willing to buck this system may be possible but difficult.
•Unfamiliarity with risk analysis—People are reluctant to participate. This often shows up in criticisms, such as: “This is number fumbling and I won’t participate,” “Your guess is as good as mine,” or “If I cannot find the data in a database, I just can’t answer the questions.” This bias may occur because interviewees are used to being asked for single-number estimates, not ranges of uncertainty. Experience with risk interviews and skillful questioning help combat this bias.
•Anchoring and adjusting—The first estimate sticks. This is particularly common among people who have made the estimate in the first place. They “anchor” on that estimate and then have trouble admitting or estimating a truly extreme pessimistic outcome. Optimistic possibilities are easier to believe. Risk is underestimated unless the interviewer can challenge these underestimates and show that they are not truly extreme.
•Representative bias—New information may not be representative of the project’s true situation. Do early problems automatically represent a failed project? We all know people who overreact to new data even when it runs counter to well-studied results. Combating this bias requires understanding where the bias comes from and challenging its validity by comparing the new information with prior conclusions.
•Availability bias—Surprising current events receive undue attention. If the event is dramatic or recent, it may achieve unwarranted importance in the estimate and bias the results toward that event. The interviewer should recognize any overemphasis on a dramatic event and suggest that the interviewee examine all data in forming the risk assessment.
•Fear of the disaster—The scenario is too frightful to consider. Here, the interviewee refrains from talking about the “disaster scenario” because it is too difficult, e.g., everyone gets fired and the company goes under. A disaster scenario is a risk to be considered specifically and must be examined.
There are other methods to reduce bias, including analysis of past projects. One practice in risk interviewing is to examine historically bad as well as good results. Often the company believes that an event that happened, even on a very recent project, could not possibly happen again. But history has a way of repeating itself. If history is not available, you might bring to the table results from industry studies to put some realism into the process, indicating that the project would not be alone in experiencing pessimistic results.
The idea is to admit the possibility of optimistic and of pessimistic results. The latter, pessimistic results, are harder to admit and yet more important to recognize if they are to be addressed in risk management.
Risk Interview Strategies for Success
Provide enough time. One problem for interviewer and interviewee alike is that the time available for the interview is limited. If that time is spent trying to examine all risky project elements, it will not be sufficient. Rushed interviews produce hard feelings and poor data.
One strategy is to admit that gathering risk data will take time. Scheduling sufficient time for these interviews often means three hours or more for some of the most valuable project participants, after they have attended a briefing on risk analysis methodology.
Prepare the data. The interview will be more productive if the interviewer has prepared the data forms beforehand. This means separating out the data that each interviewee will be responsible for.
Highlight potential high-risk elements. Another important step is trying to highlight those items that have potentially more risk than others, on the Pareto rule that most of the risk can be found in a surprisingly small number of project elements. This “triage” of project elements may be as simple as sorting the data elements into size categories on the assumption that the larger the element, the greater the potential for risk. While this is not a good rule in all cases, it focuses the interviewee on the main risks and allows the least risky elements to be ignored in the interest of gaining quality data and time. Another way to segregate the key risk elements is to use a two-step process, sorting the elements with the participant before the interview.
Pre-interview work. Questionnaires greatly assist the interview process and produce high-quality results if implemented within a positive atmosphere. The interview proceeds best when the interviewees have reviewed and completed questionnaires beforehand. This is not a substitute for the interview, for these pre-interview data usually underestimate risk, but advance consideration by interviewees facilitates the process.
Developing an Effective Risk Interview Questionnaire
The purpose of a risk interview questionnaire is to identify relevant risk events and prioritize them. They may be prioritized into high risks, which must be dealt with, moderate risks, which might be dealt with now or have a contingency plan, and low risks, which can be managed by the project team if they occur.
The most difficult work in gathering data is specifying the questions in the questionnaire. This includes making sure they are scaled correctly and making sure people with similar knowledge and background would answer them in about the same way. Qualitative risk assessment criteria must be distinct and non-overlapping, so that different knowledgeable people can replicate the results. This takes time and testing, but the results will be worthwhile and may be useful for other projects.
If you choose to develop a questionnaire to be used before or in conjunction with interviews, follow these guidelines.
•Use clear, unambiguous language. If necessary, provide a legend of common words, phrases or acronyms used in the organization.
•Have someone outside the project review your questionnaire for clarity.
•Identify risk event categories and describe what is included in each category.
•Identify specific high-risk events for each category.
•Allow space for customizing the questionnaire or for new items that are not included in the questionnaire.
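The guidelines above can be captured in a simple category/item structure. The sketch below is one possible representation; the category names, scope notes and items are illustrative assumptions, loosely echoing the software-integration example in the text, not part of the paper's method.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RiskItem:
    """One questionnaire entry: a specific high-risk event within a category."""
    description: str
    probability: Optional[float] = None  # filled in during the interview
    impact: Optional[float] = None

@dataclass
class RiskCategory:
    """A questionnaire section: a risk category and what it includes."""
    name: str         # risk event category
    scope_note: str   # describes what is included in the category
    items: List[RiskItem] = field(default_factory=list)

# Illustrative (assumed) categories and high-risk events.
questionnaire = [
    RiskCategory("Vendors and contractors", "delivery dates, product quality",
                 [RiskItem("Sole-source vendor misses a delivery date")]),
    RiskCategory("Testing", "test environments, acceptance criteria"),
]

# Leave room for new items not anticipated in the questionnaire.
questionnaire[1].items.append(RiskItem("Test environment unavailable"))
```

Keeping the scope note with each category serves the "describe what is included" guideline, and the mutable item list leaves space for customization during the interview.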
Usually, a questionnaire would be divided into categories of risks to help both interviewer and interviewee progress in an efficient fashion. Risk categories or sources of risk may be developed in many ways and should be specific to the organization, business unit and project. Risk categories may be defined in terms of (1) internal or external factors, (2) short-term or long-term, (3) external unpredictable, external predictable, internal non-technical, technical, legal, (4) specific sources that are common in your projects. Typical risk categories for a software integration project may include project scope definition, hardware requirements, software requirements, project staffing, resources, vendors and contractors, testing, production impact, facilities, and corporate support.
These are not the only sources of risk. For some projects, there may be external risks including competition, economic factors, and regulatory permitting or country risks. For others, project management risks may be serious, including poor scheduling, biased estimating, or inadequate project management capability. Another category of risk deserving of consideration is organizational risk, including inadequate or poorly trained staff, organizational stability, financial status, inability to make decisions, locational difficulties, poor morale, etc.
After identifying sources of risk, consider using numerical values for scaling. This requires a definition for each value. When establishing scales, use the whole range from >0 to <1. Work to make the scales ratio-valid, e.g., so that .2 carries twice the weight of .1. If this is achieved, the Probability/Impact matrix can be evaluated by multiplying the scores together. If numeric scaling cannot be done, use qualitative measures such as low, medium or high. These may be refined further by adding very low (below low) and very high (above high). These measures must be well specified in words.
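A minimal sketch of such a numeric scale follows. The specific values (.1 through .9) and the level names are illustrative assumptions; each organization must calibrate and word-define its own scale.

```python
# Hypothetical ratio-valid scale mapping qualitative levels to values in (0, 1).
# The values are assumptions for illustration, not from the paper.
SCALE = {
    "very low": 0.1,
    "low": 0.3,
    "medium": 0.5,
    "high": 0.7,
    "very high": 0.9,
}

def score(probability_level, impact_level):
    """Combine probability and impact by multiplying their scale values,
    as in a Probability/Impact matrix."""
    return SCALE[probability_level] * SCALE[impact_level]

print(score("high", "medium"))  # 0.35
```

Because the scale is ratio-valid (e.g., .2 carries twice the weight of .1), multiplying the two scores is meaningful; with a purely ordinal scale it would not be.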
Exhibit 1. Sample Questionnaire (Partial)
The questionnaire is often a living document, with people adjusting the questions, and the scores that go with the answers, as they find out how the interviewees are answering.
Relative Scoring Criteria for Probability and Impact
One well-established way to assess these two dimensions of risk is to establish ground rules early and then apply them to specific project elements. Establishing rules that are independent of any particular element makes it easier to apply the same criteria and scoring mechanisms across project elements and helps combat bias in interviewing. Lists like these have been suggested in methodologies published by military agencies and by some companies.
Risk probability. The probability of a risk occurring can be assessed from various sources. Certain conditions that characterize the project indicate a greater or lesser likelihood of a risk occurring. These conditions can be put into words and applied across the board to various project elements. In Exhibit 2, the probability of a technical risk occurring is assigned a numerical value based on the understanding and development level of the technology.
If scientific or basic research is still required to resolve technical problems, the likelihood of a technical risk occurring is quite high. In the exhibit, that pegs the likelihood at 90%. If a conceptual design has been formulated, the risk is slightly less likely to occur. Demonstration of individual functions, but not the entire system, reduces the likelihood further. A successfully completed pilot-scale test is still riskier than a completed full-scale test, because of risks that occur during integration to full scale. Finally, very little technical risk can be expected if the process is operational.
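A criteria table of this kind can be sketched as a simple lookup. Only the 90% figure for basic research comes from the text; the remaining values below are hypothetical placeholders standing in for Exhibit 2, which an organization would calibrate itself.

```python
# Probability criteria for technical risk, keyed on technology maturity.
# 0.9 is from the text; all other values are assumed for illustration.
TECHNICAL_RISK_PROBABILITY = [
    ("scientific/basic research still required", 0.9),
    ("conceptual design formulated", 0.7),            # assumed
    ("functions demonstrated, not full system", 0.5), # assumed
    ("pilot-scale test completed", 0.3),              # assumed
    ("full-scale test completed", 0.2),               # assumed
    ("process operational", 0.1),                     # assumed
]

def technical_risk_probability(status):
    """Return the probability assigned to a technology maturity level."""
    for description, probability in TECHNICAL_RISK_PROBABILITY:
        if status == description:
            return probability
    raise ValueError(f"unknown maturity level: {status}")
```

Expressing the criteria in words first, then attaching values, keeps the same rules applicable to every project element, which is the point of the ground-rules approach.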
The probability criteria table for a vendor and supplier risk is shown in Exhibit 3. The main dimensions are the number of sources available and the extent and character of experience with the vendors.
Exhibit 2. Probability of Technical Risk
Exhibit 3. Probability of Vendor or Supplier Risk
Exhibit 4. Impact of Risk on Performance Objective
Exhibit 5. Impact of Risk on Total Project Schedule Objective
For each category of risk, a table such as those above can be devised. Selecting the words that are used and setting their implied probability of material risk is a learning process and may be an iterative effort as the interviews proceed.
Impact on project objectives. Risks can impact the ability to achieve project objectives. The objectives may be many, but they can often be summarized as the cost, schedule and performance objectives—the “iron triangle” of project objectives.
As with risk probability, criteria can be defined to quantify the potential impact on the three main project objectives if the risk event occurs. Although risk impacts may be positive or negative, we focus here on adverse impacts. Logically, a risk event may have an impact on scope, cost and time, but the impact may be uneven. For this reason, it is important to assess the impact of a risk event on each project objective separately. Thus, a problem in testing software, which has a significant probability, may impact schedule success more than technical success. Of course, for some projects, schedule is more important than the other objectives and a company may be willing to pay a lot to avoid schedule delays.
Exhibit 6. Impact of Risk on Total Project Cost Objective
Exhibit 7. Probability and Impact Matrix for Technical Risk and Cost
Impact criteria tables are shown in Exhibits 4–6 for three project objectives.
Develop Risk Ranking Scores
There are several ways to combine the likelihood and impact. The simplest one is to multiply the numerical scales (see Exhibit 7).
The organization would develop its own assessment of the severity of risk, for example by designating risk events with scores above, say, .25 as high and those below .09 as low. (The method in the PxI table comes from several places, including Harold Kerzner, 1998, p. 885. Another way to combine the numbers is the equation Risk = P + I – (P*I). This equation has a tradition in private companies and can also be found in Kerzner, 1998, p. 887.)
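The two combination rules can be compared in a short sketch. The .25 and .09 cutoffs follow the example in the text; the sample probability and impact values are assumptions.

```python
def risk_multiplicative(p, i):
    """Risk = P * I (Kerzner, 1998, p. 885)."""
    return p * i

def risk_additive(p, i):
    """Risk = P + I - (P * I) (Kerzner, 1998, p. 887)."""
    return p + i - p * i

def severity(score, high=0.25, low=0.09):
    """Classify a combined score using the cutoffs suggested in the text."""
    if score > high:
        return "high"
    if score < low:
        return "low"
    return "moderate"

p, i = 0.5, 0.7  # assumed example values
print(severity(risk_multiplicative(p, i)))  # high
print(severity(risk_additive(p, i)))        # high
```

Note that the additive form always yields a score at least as large as the multiplicative form for values in (0, 1), so an organization adopting it would set correspondingly higher cutoffs.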
Ordinal vs. cardinal assessment of impact. One school of thought contends that impacts cannot be measured in a way that allows specific numbers to be attached. On this view, impacts can be described only in relative terms (ordinal numbering, in a sequence of relatively increasing impacts). To go further and claim that one consequence is, say, twice (.6) or three times (.9) as bad as another (.3) (cardinal numbering) asserts more than an organization or individual can reliably judge.
One way to address this issue is to keep the impact (and maybe even the probability) scales ordinal such as low, moderate, and high impact. Questions like those in the impact tables above would still have to be developed to distinguish between the categories of relative impact. Categories such as very low, low, low moderate, moderate, high moderate, high and very high would be substituted for the numerical scores that are attached to the words in the example.
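When the scales stay ordinal, the combination cannot be a multiplication; a lookup matrix of word categories can serve instead. The severity assignments in the matrix below are illustrative assumptions, not taken from the paper.

```python
# Ordinal Probability/Impact lookup: (probability level, impact level) -> severity.
# The assignments below are assumed for illustration; an organization would
# define its own matrix in words.
PI_MATRIX = {
    ("low", "low"): "low",
    ("low", "moderate"): "low",
    ("low", "high"): "moderate",
    ("moderate", "low"): "low",
    ("moderate", "moderate"): "moderate",
    ("moderate", "high"): "high",
    ("high", "low"): "moderate",
    ("high", "moderate"): "high",
    ("high", "high"): "high",
}

def ordinal_severity(probability, impact):
    """Combine ordinal probability and impact without assuming cardinal values."""
    return PI_MATRIX[(probability, impact)]

print(ordinal_severity("high", "moderate"))  # high
```

This sidesteps the cardinality objection entirely: no claim is made that "high" is any particular multiple of "low," only that the combinations rank in the stated order.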
Using Risk Category Results
One common way to help project managers conduct risk management is to sort risk elements into groups of increasing risk. Often risks are sorted into those that are high risk, moderate risk and low risk to the project objectives.
The dimensions of risk events, probability and impact, are the same in a continuous distribution that might be prepared for a Monte Carlo simulation. The curve in Exhibit 9 uses the triangular distribution to illustrate this point.
•The vertical axis represents the “relative likelihood” associated with each possible outcome, e.g., cost or duration as shown in the exhibit. This is similar to the probability discussed above; the area under the curve sums to 1.0.
•The horizontal axis represents possible outcomes of a risky event. The measure of impact may be viewed as the spread from optimistic to pessimistic, or the standard deviation of the distribution, or some other measure such as a percentile.
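The triangular distribution described above can be sampled directly to see these two dimensions at work. This is a sketch using assumed optimistic, most likely and pessimistic cost values, not figures from the paper.

```python
import random
import statistics

# Assumed three-point estimate for a cost element:
# optimistic (low), most likely (mode), pessimistic (high).
low, mode, high = 100.0, 120.0, 180.0

random.seed(42)  # fixed seed so the sketch is repeatable
samples = sorted(random.triangular(low, high, mode) for _ in range(10_000))

# Impact can be viewed as the optimistic-to-pessimistic spread, the
# standard deviation, or a percentile of the distribution.
spread = high - low
std_dev = statistics.pstdev(samples)
p80 = samples[int(0.8 * len(samples))]  # 80th-percentile outcome

print(f"spread={spread:.0f} std_dev={std_dev:.1f} 80th percentile={p80:.1f}")
```

The skew of the triangle matters: with the pessimistic tail longer than the optimistic one, the 80th percentile sits well above the most likely value, which is exactly the information a single-point estimate hides.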
Conclusion
The key to good risk assessment is to gather good data. While objective data may be preferable, it is frequently not available. The best technique for collecting subjective data is to interview project participants, stakeholders and subject matter experts, based on a questionnaire developed for the project; this will help you avoid many types of bias. The questionnaire will be more useful if you have refined descriptions of the scoring criteria for probability and impact. Use the results of your risk scoring to rank risks and to identify those requiring quantitative analysis such as the Probability/Impact matrix or Monte Carlo simulation.
Exhibit 8. Risks Sorted by Categories Used in Risk Management
Kerzner, Harold. (1998). Project Management (6th ed.). New York: John Wiley & Sons.
Tversky, Amos, & Kahneman, Daniel. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Proceedings of the Project Management Institute Annual Seminars & Symposium
September 7–16, 2000 • Houston, Texas, USA