A matter of life and death

risk management helps search and rescue


KPB, Project-Benefits, France

Abstract

There is one clear objective of risk management: to increase the probability of success (PMI, 2009; ISO, 2009; Standards Australia/Standards New Zealand, 2004). In some endeavors, the measures of success are somewhat theoretical. For search and rescue, there is a starker reality: success is defined as finding the human target in good condition. This, by definition, is the risk response stage of the overall risk management process—a stage that is often skated over rapidly in the literature (“just do what you've planned to do”). In this paper, however, the application and value of sophisticated risk management tools to plan and then incorporate results from the field are clearly explained in connection with the critical nature of this challenge. However, the approach is fully applicable to managing risk and making decisions under uncertainty in all projects, programs, and portfolios. This paper merely makes the need for it more obvious—a matter of life and death, in fact! The main lessons linked to PMI's risk management approach are highlighted (inside boxes) in the text.

If It's an Issue, Why Is It Uncertain?

Many publications on risk management stress the fact that the difference between risks and issues is that risks are uncertain (PMI, 2013; PMI, 2009; ISO, 2009), yet this paper will analyze the role of uncertainty in addressing an issue situation. Is this inconsistent?

Of course, the answer is no. Although the situation is certain (for example, we know that a person is missing), the outcome of the potential responses is uncertain; addressing it also relies on an understanding of the levels of uncertainty in order to optimize the way in which the response strategy is developed. Put another way: we know the issue, but we need to maximize the possibility (opportunity) of achieving a successful outcome. Resolution of this situation is therefore an exercise in risk management, and carries valuable insights for all professionals involved in uncertain endeavors and who want to take maximum control of the situation.

This also provides an opportunity to discuss the concepts of ontological, epistemic, and aleatoric uncertainty and understand how they can all be put to good use for understanding and addressing uncertain circumstances in most projects most of the time.

Search and Rescue as a Specialization

“Search and rescue” (“SAR”) is a set of services often coordinated and provided by volunteer groups and/or the armed forces or other government agencies. Over time, a set of effective, science-based practices has been added to the in-depth field knowledge and experience exhibited by search and rescue teams. The current paper is largely inspired by the Canadian Ground Search and Rescue Manual (Justice Institute of British Columbia, 1999).

This paper considers the first (search) part of search and rescue—that is to say, the localization of the missing person or group of people. Naturally, the second part—rescue—must also take all of the types of risk into account, in order to maximize the safety of the rescue team while optimizing the chances of effective rescue of the victim without endangering bystanders. The many methods of rescue are very technical and are beyond the scope of the current analysis. The rest of this paper therefore focuses on what is known as “search theory.” The initial analysis of this subject was based on the “Search Theory” section of the document “SAR Management in BC” (2010). The mathematical basis is provided in Ebersole (2011).

What Does Search Theory Deliver?

Although the goal of any search is to find the missing target in the shortest possible time, this statement does not provide a means of prioritizing strategies or of measuring progress toward a successful outcome. A more complete statement of the goal, including a measurable element, could be defined as “Progressively improve the focus of the search so as to reliably discover the subject in as short a time as possible.” Search theory techniques answer this need. As will be explained in this paper, they:

  • Increase the likelihood of finding the subject alive
  • Provide a structured approach for addressing the unknown
  • Provide a provable planning methodology
    • Defining areas to be searched
    • Prioritizing searches
    • Optimizing resource allocations
    • Determining tactical details (e.g. track spacing for searches)
    • Incorporating new information as it becomes available
  • Incorporate process tracking and effectiveness measurements
  • Provide an overall decision-support and audit trail
  • Allow effective handover to alternative coordinating resources as the situation dictates.

… in fact, they provide capabilities needed in a similar way in all projects, programs, and portfolios.

Making the Best Decisions

As in all complex situations, decision-making for SAR requires a number of trade-offs, the result of which may not be intuitively obvious. For example, it may be better to search a fairly low-probability area first in preference to a higher-probability one, because it can be easily and rapidly eliminated if the subject is not found.

Formalizing the SAR Process

Although not explicitly described in the SAR documents, the main steps of the SAR approach have been mapped onto a diagram similar to PMI's cycle of risk management processes, as shown in Exhibit 1, in order to provide a link to the standard risk management processes referenced earlier.

Exhibit 1 – The Steps in the SAR process mapped onto the PMI risk management processes.

A short journey around this diagram will help explain the various steps and compare and contrast them with PMI's Project Risk Management processes:

Define Process Framework [plan risk management] defines the known facts and the administrative framework and constraints within which the search will operate. This can of course be updated as time goes by—for example, as additional resources become available.

Initial Evaluation [identify risks] makes use of the facts of the situation in order to define the search area and how to subdivide it for effective planning. The initial values of likelihood and probability corresponding to this are also evaluated.

Analysis and Prioritization [analyze risks] marks the start of one planning cycle that includes Action Planning, followed by Execution of the planned actions, and completes with a re-evaluation of the situation in the case where the actions were unsuccessful. The Analysis and Prioritization step not only proposes an optimized use of resources, it also determines the revised values for probability of area (POA) to be used in the next planning cycle if the current one is unsuccessful.

Action Planning [plan risk responses] reviews the output of the Analysis and Prioritization step, adjusts it if necessary based on the reality in the field, and translates it into concrete actions, using the established SAR best practices.

Execution, Monitoring, and Control [control risks] relies on the field expertise of the participants and command and control skills of the search controller and his team.

Likelihood Re-evaluation takes the results of the actual search and the revised values of estimates from the previous Analysis and Prioritization step to develop a new set of values to be used in the next Analysis and Prioritization step.

This process does not map directly onto any in PMI's Risk Management Knowledge Area; however, the capabilities and tools it provides should be considered by all risk management practitioners in the pursuit of continuous improvement and progressive elaboration. It could be a useful addition to A Guide to the Project Management Body of Knowledge (PMBOK® Guide) and to other PMI standards.

These steps are applied below.

The Process Framework

The first step, as in any endeavor, is to make sure that the administrative and logistical infrastructure is in place and that the resources are known and under control. Ensure that the core team knows and understands how to work together based on formal tools for behavior analysis of the target, geographical segmentation, likelihood evaluation, and how to drive and evaluate the analysis and prioritization tool. The core team can then be mobilized to carry out the initial evaluation as a basis for analysis and prioritization to be used in action planning.

Where Does Uncertainty Come in? Evaluation

There are several uncertain elements to be addressed (for an explanation of the types of uncertainty, see Hillson, 2012):

  • Where is the subject?
    • Epistemic uncertainty
  • Within the search area, what is the relative likelihood of each region?
    • Ontological uncertainty
  • Given that we are searching the correct region, what are the chances of detecting the subject?
    • Aleatoric uncertainty
  • If a search does not discover the subject, how should the relative likelihoods be adjusted?
    • Ontological uncertainty

Although these concepts are explained below in the context of SAR, they are fully relevant—but all-too-frequently overlooked—in the general context of Project Risk Management: each needs a different tool and a different approach for evaluation, integration, and (highly important, since risk is by definition unpredictable) real-time updating based on ongoing experience.

The first question is addressed by use of special historical studies carried out to analyze and document the behaviors of various categories of people in various critical situations. These studies provide information about possible directions and actions the subject is likely to take and the distance normally covered in a given time (Koester, 2008). This can then be used to determine the total region to be searched.

To address the second question, this “search area” then needs to be subdivided into “regions” over which the probability is fairly uniform; for example, a rambler will do his best to avoid very steep escarpments, so those should be specified as regions separate from smoother areas or mountain ridges.

The answer to the third question depends mainly on two factors: the type of terrain and the density of the search. For example, in a flat area of short grass in daylight, you could adopt a much looser search pattern for a missing person than in bad light in wooded terrain, to have the same probability of detecting the subject. This may require further subdividing regions: “segments” of consistent detectability are defined within regions, and of a size that is compatible with being searched in one shift by a search team.

The fourth question is addressed further on in this paper as part of the re-evaluation process, and is key to risk response management in all projects—but is normally not applied correctly, if at all.

Given these uncertain elements in the search—i.e. where to look, how to look—the next step is to find some way of quantifying them, in order to estimate and optimize the probability of success in any search.

Uncertain Location

The measure of uncertainty in this case can best be based on an initial assessment by experts in the field. Although the SAR literature uses the term “probability of area” (POA), this terminology seems to imply a misleading degree of scientific reliability for the numbers, whereas they are in fact the outcome of informed debate. The term “likelihood of region” would be preferable since (a) “likelihood” is less definite than “probability” and (b) by definition, this number needs to be estimated for each region. One effective way of estimating this likelihood is either to use a mathematical approach for averaging opinions in order to reach a consensus (the “Modified, Simplified Mattson Method”) or to start from the following team-based, more intuitive approach.

 

Both of these methods are of course used in many other project environments and allow consensus to be reached rapidly.

Analog likelihood evaluation approach

The participants initially work individually.

The first step is to rank the regions from most likely to least likely.

Then, on a chart with one row per region, indicate where along a line from “very likely” to “very unlikely” you would situate the corresponding region. This will then give each participant a diagram similar to Exhibit 2.


Exhibit 2 – Ranking regions by likelihood.

You can then work as a team to compare and adapt the separate diagrams and achieve (as far as possible) consensus on a single diagram or a small number of diagrams. Then you need to quantify.

Quantifying the likelihood

Before you start, you need to have one more question answered: what is the probability that the subject is not anywhere within the area—i.e., not in any of the regions? Arrive at a consensus or take a simple average of the answers; call that number ROW (for “rest of the world”).

The remaining percentage (100%-ROW) can then be subdivided between the regions on the diagram as follows:

  • Graduate the range from “very likely” to “very unlikely” from 100 to 0.
  • Evaluate the value of each position.
  • Normalize these values so that their sum comes to (100 - ROW).

If you were not able to agree on a single diagram in the previous step, do this evaluation for each diagram and take the average of the values between the diagrams. This should give you a result such as shown in Exhibit 3.


Exhibit 3 – Estimated “probability of area” (POA).

The closing step is to review the final figures, adjust them if necessary based on consensus, and then use them as your starting values. The fact that they are not “accurate” is not critical, as they will be adjusted based on the outcome of each search, as explained later.
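To illustrate the quantification steps described above, the following short Python sketch (using purely hypothetical region ratings and a hypothetical ROW value) shows how the consensus ratings could be normalized into POA values:

# Hypothetical ratings read off the consensus diagram: 100 = "very likely", 0 = "very unlikely"
ratings = {"A": 90, "B": 60, "C": 35, "D": 15}
ROW = 10  # hypothetical agreed probability (%) that the subject is outside the whole search area

# Normalize the ratings so that, together with ROW, they sum to 100%
scale = (100 - ROW) / sum(ratings.values())
poa = {region: value * scale for region, value in ratings.items()}

for region, value in poa.items():
    print(f"Region {region}: POA = {value:.1f}%")
print(f"ROW = {ROW}%  (total = {sum(poa.values()) + ROW:.0f}%)")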

Uncertain detection

As in many other project situations, uncertainty is modeled using probabilistic techniques (Gaussian distribution, joint probability).

The estimation of probability of detection (POD) is based on a necessary simplifying assumption: that the probability of missing the subject plotted against distance from the searcher is sufficiently well approximated by a normal Gaussian (bell-shaped) curve.

The practical advantage of this is that it is then mathematically straightforward to calculate the probability of detection based on the chosen search width and the standard deviation of the curve. In search and rescue, a distance of two standard deviations is known as the “effective search width” (ESW). The “coverage” (C) is the ratio of the ESW to the actual search width (i.e., C = ESW/SW). The higher the coverage, therefore, the higher the probability of detection. Mathematically, this corresponds to the formula:

POD = 1-exp(-C)

The ESW depends not only on the terrain but also on the speed at which the search is carried out, so these two values need to be recorded together. This information can be obtained from practical experiments in similar terrain or from prior knowledge.
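As a brief illustration of these relationships (the ESW and track spacing values below are hypothetical), the coverage and the resulting POD can be computed directly from the formula above in Python:

import math

esw = 20.0            # hypothetical effective search width, in meters, for this terrain and speed
track_spacing = 30.0  # hypothetical actual spacing between adjacent search tracks, in meters

coverage = esw / track_spacing    # C = ESW / SW
pod = 1 - math.exp(-coverage)     # POD = 1 - exp(-C)

print(f"Coverage C = {coverage:.2f}, POD = {pod:.1%}")
# Note: halving the track spacing doubles the coverage but does not double the POD,
# because of the exponential form of the relationship.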

Uncertain success as a key to planning

Since success for a given area requires both that the subject is in that area and that we detect the subject, once we have a value for each of probability of area and probability of detection, it is easy to calculate the probability of success (POS) as the product of these two quantities, i.e.,

POS = POA x POD

The probability of success of a set of searches is the sum of their individual probabilities.

This set of values and formulae provides the input from which to optimize the use of scarce search resources, as explained next.
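Putting the two quantities together, a minimal sketch (with hypothetical POA and POD values per region) of the calculation for one search period is:

# Hypothetical plan: POA and POD for each region to be searched in the next period
plan = {
    "A": {"poa": 0.40, "pod": 0.60},
    "B": {"poa": 0.25, "pod": 0.45},
    "C": {"poa": 0.15, "pod": 0.30},
}

# POS = POA x POD for each region; the POS of the whole plan is the sum of the regional values
pos_per_region = {region: v["poa"] * v["pod"] for region, v in plan.items()}
total_pos = sum(pos_per_region.values())

for region, pos in pos_per_region.items():
    print(f"Region {region}: POS = {pos:.1%}")
print(f"Total POS for the plan = {total_pos:.1%}")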

Analysis and Prioritization

Because of the number of different options and the non-linearity of the associated metrics, tools such as decision trees (PMI, 2013, Chapter 11) cannot be used. A more sophisticated simulation and optimization is required in such situations. Many professionals tend to avoid simulation—preferring simplicity over reality!

Available data

The starting point for this process is based on the following information:

  • the regions as defined
  • the number of resources available
    • and the length of each search period or shift
      • in some cases this may depend on the region being searched
  • the corresponding values per region of
    • POA
    • Searchable area
    • ESW and corresponding speed of search
    • access and egress times to and from the region
  • the formulae as defined above
    • POD
    • POS

Based on this, the goal is to define how to search one or more regions in the next search period in order to maximize the overall probability of success. This entails the use of computerized support; the details of the input and control data for the optimization are given in the appendix.

The optimization

The output of the optimization will be a list of the areas to be searched and the relevant parameters retained, as shown in Exhibit 4. The constraint values are highlighted in pink.


Exhibit 4 – The output of one optimization step.

The key information for the action planning step is highlighted in green:

  • the areas to be searched
  • the team size for each region
  • the sweep width to be used in the region

This then provides the information for action planning.

Before that, it is important to highlight a major dilemma: it is “intuitively obvious” that you should concentrate on the region with the highest POA, yet the optimization recommends putting just over a quarter of the resources there. Because of the exponential relationship between POD and coverage, the benefit of increasing coverage (which entails increasing the team size) diminishes rapidly beyond a given point. However, if we fail to find the target in this search and the target then turns out to have been in this area (and may be dead by then), we will most probably be blamed—although our strategy does give enough resources for this area to have the highest POS and almost the highest POD.

 

So is doing “the right thing” based on hard logic actually the right thing for the team and most of the other stakeholders? This is a human issue that is often implicit in other project environments but is not often recognized.

So what is the right thing? That decision determines the actual values for the team size allocations, and these must be entered into the table as an audit trail and for future planning, if necessary.

Action Planning

The proposed optimized set of searches needs to be reviewed by the search director and revised and adapted based on the actual situation in the field. As with using any tool, this data should be considered to be advisory rather than mandatory.

Various trade-offs can be considered and evaluated using the same spreadsheet. Once the actual plan has been agreed, the corresponding spreadsheet should be stored, as it will be used for the control and re-evaluation steps.

Execution, Monitoring, and Control

The various teams are dispatched as specified in the agreed plan, and report back to the coordination center. If the information received matches the hypotheses in the plan, no action needs to be taken. However, with an operation of this type, much that happens will not have been correctly forecast—and quite possibly not even foreseen. To take a few examples:

  • the forecast speed cannot be maintained
  • access to the sites takes longer than expected
  • some members of the team are unavailable
  • some clues are discovered
  • other additional information becomes available.

When this happens, the search controller has to decide whether the information warrants an immediate change to the plan, or whether this can wait until the next search cycle.

The response actions that can be considered normally entail reassigning manpower between regions, or redefining regions or segments.

 

From the risk and issue management side, as in all projects, this requires three separate steps:

  • record the situation at the time when the plan is changed
  • integrate additional information into the model based on the current situation
  • re-evaluate the options
    • this may entail redefining regions and segments, as well as effective search width and speed.

The first of these steps closes the execution, monitoring, and control process; the others are the responsibility of re-evaluation.

When the current plan is suspended, it is likely that some of the searches will not have been completed. How should the plan reflect this?

The most precise approach is to remodel the situation by splitting uncompleted regions or segments into two parts—a searched and an unsearched part. The POA for each part should be distributed according to the area of each one; the unsearched part should be entered onto the plan as having no resources assigned. This modified model is then used as the starting point for the next round of searches.
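A minimal sketch of this bookkeeping (the region name, area, and POA below are hypothetical) is shown here; the searched and unsearched parts each receive a share of the original POA pro rata to their areas:

# Hypothetical data: Region A was only partly covered when the search period ended
region = {"name": "A", "area_km2": 12.0, "poa": 0.30}
searched_area = 9.0  # km2 actually covered before the plan was suspended

fraction_searched = searched_area / region["area_km2"]

# Split the region in two; the POA is distributed pro rata to the area of each part
a_ok = {"name": "A ok", "area_km2": searched_area,
        "poa": region["poa"] * fraction_searched}
a_not = {"name": "A not", "area_km2": region["area_km2"] - searched_area,
         "poa": region["poa"] * (1 - fraction_searched)}

print(a_ok)   # the searched part: its POA will be reduced in the re-evaluation step
print(a_not)  # the unsearched part: carries its share of the POA into the next cycle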

In the example (Exhibit 5) shown below, when the search period ended, Region A had not been completed because the actual speed over the ground was about ¾ of what was expected. The region has been split into “A ok” and “A not”; the POA values have been adapted pro rata to the searched and unsearched areas, and the search speed for both parts of A has been updated.


Exhibit 5 – Search plan after update to reflect incomplete search of Region A.

This should then be used in the re-evaluation step—unless, of course, one of the searches was successful!

Likelihood Re-evaluation

The planning for the next search period needs to be based on data that is as valid as possible. For this reason, all of the new information needs to be integrated into the model:

  • The areas unsuccessfully searched
  • Clues discovered
  • Game-changing information
  • Parameters to be modified

Update based on search information

This is the classic Bayes problem (Moore, 2001): how to adapt the “prior” estimate of likelihood based on additional information. In this case, there are two separate questions:

1)   Given that we searched area X without finding the subject, what value should we take for POA of X?

2)   Given that we did not search Y, but did not find the subject in the other searches, how should we adapt POA of Y?

There is one important point to note: this is where the ROW (rest of the world) region becomes useful. As searches complete unsuccessfully, the POA of ROW will automatically be adjusted upward. It therefore provides a good indication of the overall progress of the set of searches.

Adjusting the POA of a searched region

The reasoning behind the resulting formula is developed in the Appendix and is:

new searched POA = old POA x (1- POD) / (1 – POD x POA)

This is not intuitively obvious, and many SAR teams use the faulty formula new searched POA = old POA x (1 - POD), which leads to an underestimate of the remaining probability after an unsuccessful search and possibly a faulty strategy for the next round of searches. The intuition error comes from overlooking the fact that the result has to be renormalized by the overall probability of failure—that is, by excluding the case where the person was in the area (POA) and was found (POD).
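A quick numerical check (with hypothetical POA and POD values) shows how far the two formulas diverge:

# Hypothetical values for a region that was searched unsuccessfully
poa, pod = 0.40, 0.70

correct = poa * (1 - pod) / (1 - pod * poa)  # Bayesian update used in this paper
faulty = poa * (1 - pod)                     # common shortcut that omits the renormalization

print(f"Correct updated POA: {correct:.3f}")  # about 0.167
print(f"Faulty updated POA:  {faulty:.3f}")   # 0.120, underestimating the residual likelihood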

 

Misunderstanding—or rejection—of Bayesian analysis is a common and potentially serious problem in many project environments since, as has been demonstrated, personal, intuitive evaluation is frequently unreliable. Simplicity being preferred to reality once again?

The revised estimate of POA for the searched areas will affect the POA of areas that have not been searched since the total sum of all POAs (including ROW) must remain at 100%. This is explained next.

Adjusting the POA of an unsearched region

As has been mentioned, the reduction in the likelihood of the searched areas must be compensated by a corresponding gain in the unsearched areas. If the cumulative likelihood of the searched areas has decreased, then the unsearched POAs should be increased pro rata to their value in order to share out this amount:

new unsearched POA = (old unsearched POA) x (1+R)

where R = (sum of the decreases in all searched POAs) / (sum of all unsearched POAs)
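The whole re-evaluation after an unsuccessful round can be sketched as follows (the POA and POD values are hypothetical, and ROW is treated as one of the unsearched regions):

# Hypothetical POAs before the round; ROW counts as an unsearched region
poa = {"A": 0.40, "B": 0.25, "C": 0.15, "D": 0.10, "ROW": 0.10}
# PODs achieved in the regions that were searched (unsuccessfully)
pod = {"A": 0.60, "B": 0.45}

new_poa = dict(poa)

# 1. Reduce the POA of each searched region using the Bayesian formula
for region, p in pod.items():
    new_poa[region] = poa[region] * (1 - p) / (1 - p * poa[region])

# 2. Share the released likelihood among the unsearched regions, pro rata to their POAs
released = sum(poa[r] - new_poa[r] for r in pod)
unsearched = [r for r in poa if r not in pod]
R = released / sum(poa[r] for r in unsearched)
for r in unsearched:
    new_poa[r] = poa[r] * (1 + R)

print({r: round(v, 3) for r, v in new_poa.items()})
print("Total:", round(sum(new_poa.values()), 3))  # remains 1.0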

Exhibit 6 below provides an example of this update after the first search shown in Exhibit 4. The new values of POA are given in the final row of the chart.

Exhibit 6 – Search plan showing the updated POAs assuming the searches failed.

Those new POAs can then be fed into the optimization program to provide guidance for the next search as shown in Exhibit 7.


Exhibit 7 – Stage 2: The updated search strategy—compare and contrast this with Stage 1 in Exhibit 6.

Other updates

There are other reasons to update the current likelihood values:

  • finding a clue
  • additional information that affects constraints or assumptions

The effect of these on the likelihood estimates and the resulting optimization for the next search is analyzed in the Appendix.

Parameters to be modified

New information, such as new shift times and total resources available, needs to be inserted into the model for the next cycle, once all of the other changes mentioned above have been incorporated.

Decision time

These re-evaluation calculations provide the basis for the next cycle, starting from analysis and prioritization.

The first decision to be taken is whether, based on the current information on the status of the situation, a new set of searches should be carried out within the specified region, or whether the basic hypotheses need to be changed—for example, the missing subject never went where he said he was going. This is normally not up to the search director, but is a decision to be taken at a higher, strategic level.

One very useful output of the calculations made during the re-evaluation step is the new value assigned to the ROW probability: as the searches fail to discover the subject, the POA of the searched areas decreases, whereas, since it is never searched, this particular “rest of the world” region shows a progressive increase in likelihood. For example, as shown in Exhibit 6, the ROW likelihood is multiplied by a factor of about 3.5 after the first round of searches. Using the same approach to run optimal searches another four times can be shown to produce a ROW POA indicating that the likelihood of the subject's being outside the search area is more than double the likelihood of his being within it. In this way, the search coordination team can decide on a cut-off value at which to revise the initial hypotheses on which the search strategy was based.

If the decision is taken to launch another round of searches, the latest data from the re-evaluation step should be reviewed and potentially modified based on specific recommendations from the coordination team, and the next analysis and prioritization step undertaken.

Conclusion

This study has shown where the various tools for managing uncertainty can add value to experience and intuition to address what can be a life-and-death issue.

 

They are also directly applicable to all projects and provide a mechanism for adjusting and re-evaluating prior assumptions. The approach is based on people, statistics, and assumption-based reasoning:

  • People: the epistemic uncertainty of determining the initial overall search area was addressed using team-related knowledge.
  • Statistics: aleatoric uncertainty made use of probability calculations based on statistical distributions to determine the detectability of a subject under various conditions.
  • Assumptions: ontological uncertainty as to the likelihood of the subject's being in various regions was addressed using Bayesian calculations to update the estimated likelihood, as more information becomes available.

The need for an additional process in the risk management cycle in order to ensure effective progressive elaboration (or “agility”) of the ongoing plan has also been demonstrated.

All of these formal approaches are designed to support but never to replace the experience, knowledge, and skill of the people in the field. On the contrary, by providing rational suggestions, the corresponding tools can allow the experts to devote more time and effort to fruitful discussions and effective decision-making. Linking this with the structured process (Exhibit 1) simplifies coordination, reporting, and tracking for all of the stakeholders.

Although the title of this paper is “Risk Management Helps Search and Rescue,” the converse has also been demonstrated: Search and rescue is a valuable case study providing clear, practical lessons that should be integrated into every project manager's risk management mindset.

References

Ebersole, M., Lovelock, D., O'Connor, D., & Toman, R. J. (2011). A manual for experienced inland search personnel. Win CASIE II, May 2011.

Hillson, D. (2012). Risk is more than uncertain future events. Retrieved on February 14, 2014, from http://www.riskdoctor.com/docs/73Riskismorethanuncertainevents.pdf

ISO (International Organization for Standardization). (2009). Risk management – Principles and guidelines (ISO 31000:2009).

Justice Institute of British Columbia. (1999). Ground search and rescue (GSAR) manual (2nd ed.). Retrieved on August 21, 2012, from http://www.jibc.ca/sites/default/files/emd/pdf/SAR100%20GSAR%20Participant%20Manual.pdf

Koester, R. J. (2008). Lost person behavior. Charlottesville, VA: dbS Productions LLC.

Moore, A. W. (2001). Bayes nets for representing and reasoning about uncertainty. Retrieved from http://www.autonlab.org/tutorials/bayesnet09.pdf

PMI. (2009). Practice standard for project risk management. Newtown Square, PA: Author.

PMI. (2013). A guide to the project management body of knowledge (PMBOK® guide) – Fifth edition. Newtown Square, PA: Author.

SAR Canada. (2010). SAR management in BC (private copy).

Standards Australia/Standards New Zealand (AS/NZS). (2004). Risk management guidelines: Companion to AS/NZS 4360:2004.

 

Appendix: Details of the Calculations

Setting up the optimization

Objective and approach

As explained above, the objective at this step is to determine the optimal search strategy given the current assumptions and level of knowledge. Mathematically, this entails defining:

  • the constants used in the calculation

    ➢ The constants are the parameters that are taken as inputs to the objective function in a given cycle.

  • the variable parameters to be adjusted

    ➢ The variables define the choices open to the search director.

  • an objective function to be optimized

    ➢ This is a function involving the constants and the variables.

  • the constraints to be obeyed

    ➢ These are additional conditions that can exclude some otherwise potential options.

  • The method for performing the optimization

    ➢ This would normally be specified as part of the framework.

The constants

In any cycle, the following are evaluated or specified before invoking the analysis and prioritization process:

  • the defined set of regions or segments
  • the area of each
  • the effective sweep width (W) for each segment, together with the corresponding search speed (V)
  • the duration of a search period (D) for the segment
  • the sum of access and egress times for the segment (E)
  • the agreed or recalculated “probability of area” (POA) of each
  • the agreed or recalculated POA of the ROW

The variable parameters

In order to make the optimization feasible, the approach is to have a single category of variable parameters to be adjusted. This section shows how to use the number of resources (N) assigned to each area as the control variable, with the other variable parameters as dependent variables that can be calculated from the value of N.

The number of resources applied to a given area in a given time determines the track spacing, as follows:

When defining a search in a given area, to be completed in a given time, the following equation provides the link between the number of people and the track spacing: the area covered in a given period (swept area) is:

SA = S x T x V x N

where SA is the swept area, S is the track spacing, T is the time spent searching, V is the search speed, and N is the number of resources assigned to that search. For any search, therefore, setting the swept area equal to the region's area (A) and the time to the effective search period (D - E) gives:

N = A / (S x V x (D - E)), or

S = A / (V x (D - E) x N)

The function to be optimized: total POS

As mentioned above, the goal is to determine the conditions that give the greatest value for the sum of the values of POS for each region or segment. In any cycle, the decision can be taken not to apply any resources to a given region. By definition, the corresponding POS for that region will be zero.

The relevant formula for each POS is as follows:

a)   POS = POA x POD

b)    POD = 1 – exp (-C)

POA is a fixed parameter, whereas POD depends on the coverage C. By definition,

C = W / S

where S is the actual track spacing (a dependent variable) and W is the effective sweep width (a constant for a given area).

Since the variable under the search director's immediate control is the number of resources (N) assigned to each region or segment, this will be used as the variable controlling the optimization function as follows to calculate the POS per area:

  • Calculate the track spacing: S = A / (V x (D - E) x N)
  • Calculate the coverage (C) from this and the effective sweep width (W): C = W / S
  • Work out the probability of detection at this width as POD = 1 – exp (-C)
  • Work out probability of success POS using this value of POD and the current relevant value of the probability of area (POA), as POS = POA x POD

Then sum all of the regional POS to give the total POS for the given deployment plan.
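These four steps can be expressed as a short Python function (a sketch only; the parameter values in the example call are hypothetical, and consistent units—here kilometers and hours—are assumed):

import math

def region_pos(poa, area, esw, speed, duration, access, n_searchers):
    """Probability of success for one region, following the four steps above."""
    effective_time = duration - access                         # D - E
    spacing = area / (speed * effective_time * n_searchers)    # S = A / (V x (D - E) x N)
    coverage = esw / spacing                                    # C = W / S
    pod = 1 - math.exp(-coverage)                               # POD = 1 - exp(-C)
    return poa * pod                                            # POS = POA x POD

# Hypothetical example: 6 searchers, 4 km2 region, 8-hour shift with 1 hour access/egress
print(f"POS = {region_pos(poa=0.30, area=4.0, esw=0.02, speed=1.5, duration=8, access=1, n_searchers=6):.1%}")
# Summing region_pos over all searched regions gives the total POS for a deployment plan.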

The constraints to be obeyed

One ever-present constraint is the total number of resources available. This must therefore be specified to the optimization function as follows:

  • The number of resources for each team must be:
    • an integer
    • non-negative
    • no greater than the size of the resource pool
  • The sum of all resources deployed must be no greater than the total resource pool.

There are a number of other constraints that can be required, such as:

  • maximum number of searches (teams deployed) that can be run in parallel
    • This might be dictated by the capacity of the coordinating team.
  • minimum number of people in any team
    • This may be based on operational or safety considerations.

Each such constraint can make the optimization slower or even impossible given the set of available tools.
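One straightforward way of performing the optimization itself is a brute-force enumeration of the feasible integer allocations, keeping the one with the highest total POS. The sketch below is illustrative only (it is not the tool used by SAR teams, and all the region data are hypothetical), but it shows how the constants, variables, objective function, and constraints defined above fit together:

import math
from itertools import product

# Hypothetical region data: (POA, area km2, ESW km, speed km/h, shift length h, access+egress h)
regions = {
    "A": (0.40, 4.0, 0.02, 1.5, 8, 1),
    "B": (0.25, 6.0, 0.03, 2.0, 8, 2),
    "C": (0.15, 3.0, 0.02, 1.0, 8, 1),
}
TOTAL_RESOURCES = 12
MIN_TEAM_SIZE = 2  # hypothetical safety constraint: a deployed team has at least two people

def pos(poa, area, esw, speed, duration, access, n):
    """Objective contribution of one region for a team of n searchers (0 means not searched)."""
    if n == 0:
        return 0.0
    spacing = area / (speed * (duration - access) * n)   # S = A / (V x (D - E) x N)
    return poa * (1 - math.exp(-esw / spacing))          # POA x (1 - exp(-W/S))

best_alloc, best_pos = None, -1.0
# Enumerate every integer allocation of searchers to the regions
for alloc in product(range(TOTAL_RESOURCES + 1), repeat=len(regions)):
    if sum(alloc) > TOTAL_RESOURCES:
        continue
    if any(0 < n < MIN_TEAM_SIZE for n in alloc):
        continue
    total = sum(pos(*regions[r], n) for r, n in zip(regions, alloc))
    if total > best_pos:
        best_alloc, best_pos = dict(zip(regions, alloc)), total

print("Best allocation:", best_alloc, f"-> total POS = {best_pos:.1%}")

For a larger number of regions or constraints, the same objective function and constraints would be fed to a proper optimizer (for example, a spreadsheet solver) rather than enumerated exhaustively.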

Adjusting the POA of a searched region

Using the standard formula or working from the Bayesian analysis table in Exhibit 8:


Exhibit 8 – Analysis table for a failed search

P(A | fail) = P(A) x ( Prob(fail if in A) / Prob (fail) )

now:

  • P(A) is the initial POA
  • Prob(fail if in A) is 1 - POD since, if the subject is in A, failure is equivalent to non-detection
  • Prob(fail) is made up of two parts
    • being in A and not being found = POA x (1 - POD)
    • not being in A = (1 - POA) x 1
    • add these two and simplify to get (1 - POA x POD)

So:

new searched POA = old POA x (1- POD) / (1 – POD x POA)

The revised estimate of POA will affect the POA of areas that have not been searched since the total sum of all POAs (including ROW) must remain at 100%.

Other reasons to update probabilities: finding a clue

Adjusting POA for the region containing a clue

This should be carried out after the update based on the search has been carried out. It should once again use the Bayesian formula:

P(in A | clue found) = P(in A) x P(Clue | A) / P (Clue)

and P(Clue) is P(in A) x P(Clue | in A) + (1 - P(in A)) x P(Clue | not in A)

That is to say, the probability of area should be modified by a factor that is the ratio of the chance of finding the clue if the subject is in A to the chance of finding the clue in general—i.e., by the amount by which the subject's presence in A would increase the likelihood of finding such a clue. This can be applied to the following example.

Assume that, in searching an area, we discover a recently-discarded box of cigarettes of the type the subject smokes.

First we need to evaluate the probability of finding a cigarette box of this type. Assume that 10 cigarette packets have probably been dropped there since the subject disappeared and we have a 20% chance of detecting one with the type of search we carried out; it is not a popular brand, and is used by 10% of people.

If he is not there, then we expect there to be 10% x 10 = 1 packet, with a chance of finding it being 20%.

If he is in the area, we can assume he will have dropped his packet, so that should double our chances of finding this type of packet, to 40%.

This makes:

P(Clue) = POA x 40% + (1 - POA) x 20% = 20% x (1 + POA), so:

P(in A | Clue found) = 40% x POA / (20% x (1 + POA)) = 2 x POA / (1 + POA).
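A quick check with a hypothetical prior of POA = 25% gives:

# Hypothetical prior probability that the subject is in region A
poa = 0.25
p_clue_if_in_a, p_clue_if_not_in_a = 0.40, 0.20  # from the cigarette-packet reasoning above

p_clue = poa * p_clue_if_in_a + (1 - poa) * p_clue_if_not_in_a
posterior = poa * p_clue_if_in_a / p_clue        # Bayes: P(in A | clue found)

print(f"P(Clue) = {p_clue:.2f}, P(in A | Clue found) = {posterior:.2f}")  # 0.25 and 0.40
# This matches 2 x POA / (1 + POA) = 0.5 / 1.25 = 0.40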

Note, of course, that the fact that he was in the area and dropped a packet does not guarantee that he would still be in the area, so we should then adjust the result to find the likelihood that he is still in the area if he was in the area—and this requires estimating how old the clue is, and other factors. This is probably better adjusted manually from the calculated value based on discussion and experience.

Adjusting the POA for the other regions when a clue was found

The likelihood values of the regions in which the clue was not found are adjusted in a similar way to the unsearched regions described above, except that, since the clue increases the POA of its own region, the other regions must be reduced so that the total remains at 100%:

new POA = (old POA) x (1 - R)

where R = (increase in POA of the region containing the clue) / (sum of POAs of the non-clue regions)

The result is shown in the last row (“new POA”) in Exhibit 9, assuming the clue was found in Region A during the first round of searches.


Exhibit 9 – The result of adjusting for the clue found in region A

Game-changing information

It can happen that major new information comes in that requires reworking a number of the key assumptions, such as the total search area. In this case, it can be necessary to restart the whole process by reworking the basic framework. Existing information from the searches so far may be useful at this stage, but it can carry the risk of perpetuating faulty assumptions made before this new information became available.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2014, Crispin (“Kik”) Piney, B.Sc, PgMP
Originally published as a part of the 2014 PMI Global Congress Proceedings – Dubai, UAE
