Cooke’s procedure for expert judgement

Taken from Burgman, M., 2005. Risks and decisions for conservation and environmental management. Cambridge University Press, pp. 116–117.

Consensus and closure not required

Cooke’s framework does not aim at consensus or closure. He begins from the assumption that expert judgements are inherently diverse and that such diversity is legitimate. The method therefore avoids privileging uniformity. Instead, it rests on three pillars: accountability through transparency, empirical calibration of expert performance, and scoring rules that encourage honesty by maximising scores when experts state their true probability assessments (Burgman, 2005, p. 119).
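The scoring-rule idea can be illustrated with the Brier score, one well-known strictly proper rule, used here as a simple stand-in rather than as Cooke's own calibration score. Because the rule is strictly proper, an expert minimises their expected penalty only by reporting the probability they actually believe; the belief value below is invented for the example:

```python
import numpy as np

def expected_brier(true_p, reported_q):
    """Expected Brier penalty (lower is better) when the event occurs
    with probability true_p but the expert reports reported_q."""
    return true_p * (1 - reported_q) ** 2 + (1 - true_p) * reported_q ** 2

true_p = 0.7                      # the expert's genuine belief (invented value)
reports = np.linspace(0, 1, 101)  # candidate reported probabilities
penalties = expected_brier(true_p, reports)

# A strictly proper rule is optimised exactly at the honest report.
best = reports[np.argmin(penalties)]
```

Any shading of the report away from 0.7, in either direction, raises the expected penalty, which is what removes the incentive to hedge or exaggerate.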

The procedure has 15 steps:

  1. Definition of case structure: this is achieved by creating a document that specifies all the issues to be considered during the expert judgement exercise. It provides information on where the results of the exercise will be used, and outlines the physical phenomena and models for which expert assessment is required.
  2. Identification of target variables: a procedure is used to select variables for expert elicitation, to limit them to a manageable number. Variables are included if they are important and if historical data are insufficient or unavailable.
  3. Identification of query variables: if target variables are not appropriate for direct elicitation, surrogate ‘query’ variables are constructed that ask for observable quantities, using questions formulated in a manner consistent with the experts’ knowledge base.
  4. Identification of performance variables: performance (seed) variables are supported with experimental evidence that is unknown to the experts, but known to the analyst, usually from within or closely associated with the domain of the enquiry at hand.
  5. Identification of experts: as large a list as possible is compiled of people whose past or present field contains the subject in question and who are regarded by others as being knowledgeable about the subject.
  6. Selection of experts: a subset is selected by a committee based on reputation, experimental experience, publication record, familiarity with uncertainty concepts, diversity of background, awards, balance of views, interest in the project and availability.
  7. Definition of elicitation format: a document is created that gives the questions, provides explanations and the format for the assessments.
  8. Dry run exercise: two experienced people review the case structure and elicitation format documents, commenting on ambiguities and completeness.
  9. Expert training: experts are trained to provide judgements of uncertain variables in terms of quantiles for cumulative distributions, anchoring their judgements to familiar landmarks such as the 5th, 50th and 95th quantiles.
  10. Expert elicitation session: each expert is interviewed individually by an analyst experienced in probability together with a substantive expert with relevant experience. Typically, they are asked to provide subjective judgements for the query variables as quantiles of cumulative distributions.
  11. Combination of expert assessments: estimates are combined to give a single probability distribution for each variable. Experts may be weighted equally, or by assigning weights reflecting performance on seed questions.
  12. Robustness and discrepancy analysis: robustness may be calculated for experts by removing their opinions from the data set, one at a time, and recalculating the combined functions. Large information loss suggests that the results may not be replicated if another study were done with different experts. A similar evaluation may be conducted for seed variables. Discrepancy analysis identifies the items on which the experts differ most.
  13. Feedback: each expert is provided with their assessment, an informativeness score derived from the robustness analysis, the weights given to their opinion, and passages from the final report in which their name is used.
  14. Post-processing analysis: aggregated results may be adjusted to give appropriate distributions for the required input parameters.
  15. Documentation: this involves the production of the formal report.
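The combination in step 11 can be sketched numerically. The fragment below is an illustrative Python sketch, not Cooke's software: it builds a piecewise-linear cumulative distribution through each expert's 5th, 50th and 95th percentile judgements and combines them as a weighted linear pool. The expert names, quantile values and weights are all invented; in practice the weights would come from performance on the seed questions:

```python
import numpy as np

# Invented 5th, 50th and 95th percentile judgements for one query variable.
experts = {
    "expert_a": (2.0, 5.0, 12.0),
    "expert_b": (4.0, 6.0, 9.0),
    "expert_c": (1.0, 4.0, 15.0),
}

# Illustrative weights, e.g. derived from seed-question performance (invented).
weights = {"expert_a": 0.5, "expert_b": 0.3, "expert_c": 0.2}

def cdf_from_quantiles(q05, q50, q95, grid):
    """Piecewise-linear CDF passing through the three elicited quantiles."""
    return np.interp(grid, [q05, q50, q95], [0.05, 0.50, 0.95])

grid = np.linspace(0, 20, 401)
combined = sum(weights[name] * cdf_from_quantiles(*qs, grid)
               for name, qs in experts.items())

# Read the median of the pooled distribution off the grid.
median = grid[np.searchsorted(combined, 0.5)]
```

The linear pool is only one aggregation choice; the point of the sketch is that the output is again a single distribution per variable, ready for the robustness checks of step 12.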
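The leave-one-out robustness check in step 12 can likewise be sketched: remove each expert in turn, re-pool the remaining judgements, and see how far the combined distribution moves. All names and numbers below are invented, and an equal-weight pool is assumed purely for simplicity:

```python
import numpy as np

# Invented quantile judgements (5th, 50th, 95th percentiles) for one variable.
experts = {
    "expert_a": (2.0, 5.0, 12.0),
    "expert_b": (4.0, 6.0, 9.0),
    "expert_c": (1.0, 4.0, 15.0),
}

def pooled_cdf(names, grid):
    """Equal-weight linear pool of piecewise-linear CDFs for the named experts."""
    cdfs = [np.interp(grid, list(experts[n]), [0.05, 0.50, 0.95]) for n in names]
    return np.mean(cdfs, axis=0)

grid = np.linspace(0, 20, 401)
full = pooled_cdf(list(experts), grid)

# Remove each expert in turn and measure how far the pooled CDF shifts.
shifts = {}
for name in experts:
    rest = [n for n in experts if n != name]
    shifts[name] = float(np.max(np.abs(pooled_cdf(rest, grid) - full)))
```

A large shift for one expert means the combined result leans heavily on that individual, which is exactly the warning sign the robustness analysis is meant to surface.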

Also see: