Presentation: Expert Elicitation Process

Lee Abramson of the US NRC spoke on the expert elicitation process that will be followed in this exercise. Some of the specific key points from his presentation and subsequent discussion are outlined below:

• Key word is “formal” use of expert judgment. Engineers practice informal expert judgment every day.

• It was emphasized that elicitation is a structured process and that the process requires experienced practitioners to conduct the exercise. This is not a “do it yourself” activity.

Discussion: A question was raised as to whether the results of this elicitation or past elicitations could be used as a baseline for future efforts, in much the same way that Bayesian analysis is performed. Lee Abramson indicated that there is no natural means of updating results from prior elicitations based on recent experience or new data. However, it may be appropriate to use the results of a prior elicitation as a starting point for a future elicitation.

• The need for comprehensive documentation was also stressed to ensure that the process approach, issues, analysis techniques, results and uncertainties are clear. Additionally, follow-on work to refine the results requires comprehensive documentation in order to understand the basis of the initial study.

• The need for an expert panel with a broad range of expertise and experience was expressed. Additionally, all of the stakeholders (both utilities and regulators) must be represented.

• There are two methods of elicitation: group and individual. The problem with group sessions (versus individual sessions) is that group dynamics often lead to domination by one or two individuals' opinions. The results then no longer represent everyone's input.

• The elicitation team for this exercise consists of:

• Normative expert — Lee Abramson

• Substantive experts — Alan Kuritzky, Ken Jaquay, Rob Tregoning, others?

• Recorder — Paul Scott

• Documenter — Paul Scott (could be same as recorder)

• Panel members need to provide rationale for answers so others can see why certain panelists came up with certain answers. In that way other panelists have the option of changing their answers based on feedback from the group. The panel will largely be provided this feedback at the wrap-up meeting. Panel members can revise answers to any question at any time.

Discussion: It was asked whether the responses will be weighted in any way to account for expertise in a given area. Lee Abramson replied that the analysis will use unweighted responses so that everyone's response is judged equally. With a panel of this size, weighting should not substantially affect the final results. The elicitation will also query each panel member's uncertainty for each answer. If inordinate uncertainty exists, then the response may be downgraded. Also, the rationale provided by each panelist will help determine whether responses need to be weighted.
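The unweighted treatment described in the discussion above can be sketched in a few lines. This is purely illustrative: the minutes do not specify an aggregation rule, and the geometric mean used here is only an assumed choice, common when estimates (such as LOCA frequencies) span several orders of magnitude.

```python
import math

def aggregate_unweighted(responses):
    """Combine panelists' frequency estimates with equal weight.

    Assumed rule (not stated in the minutes): a geometric mean, so that
    every panelist's response counts equally on a logarithmic scale.
    """
    if not responses:
        raise ValueError("no responses to aggregate")
    log_sum = sum(math.log(r) for r in responses)
    return math.exp(log_sum / len(responses))

# Hypothetical example: four panelists' estimates of an annual LOCA frequency
panel = [1e-4, 3e-4, 5e-5, 2e-4]
print(aggregate_unweighted(panel))  # a value between the lowest and highest estimates
```

Because the combination is unweighted, reordering the responses or relabeling the panelists has no effect on the result.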

• Types of biases present in elicitation processes:

• Motivational biases (i.e., social pressure or group pressure to make a certain decision). These need to be recognized and avoided at all costs.

• Cognitive biases — biases that can occur when people have developed an initial answer and additional data become available that require the initial answer to be modified. Typically, people underestimate the impact of the new data; this bias is referred to as anchoring. The elicitation structure will be developed in an attempt to minimize these biases. For instance, initial estimates of the total LOCA frequencies will not be asked for.

• Background biases (i.e., what an individual might see as reasonable, or would expect, based on his background). For example, an experimentalist might see a high probability of piping failure based on the number of experiments he has run in which he observed a failure, even though the test conditions were typically such that similar conditions in the field are highly unlikely to ever occur. This bias is natural, but it is important to get each individual to consider all variables that affect the result and to break them down into meaningful pieces.

• People are more likely than not to underestimate the true uncertainty, typically by a factor of 1/2 (i.e., the stated uncertainty range is about half the true range).

• People are more likely to anchor on the median value, not on the extremes.

• Goal is to make the questions as unambiguous as possible (very precise) and to focus questions on the major issues affecting the LOCA analysis.

• The uncertainty range will be queried during the elicitation by asking for the lower-bound (LB) number such that there is a 5% chance that the true response is less than this number. A separate upper-bound (UB) number will be provided such that there is also a 5% chance that the true response is higher than this number. This corresponds to the 90% coverage interval of the variable.

• The purpose of the elicitation is for panel members to come up with individual answers, not a consensus.

Discussion: There was quite a bit of discussion and confusion about the definition of the coverage interval. Lee Abramson said that the uncertainty range (the interval between the lower and higher responses to a given question) should cover the true number for that variable 90% of the time. The true value should fall below the lower response 5% of the time, and the true value should land above the higher response 5% of the time. However, Lee cautioned against making the coverage interval inordinately large just to capture uncertainty. If this occurs, the coverage interval contains little usable information.
