Surely one of the most salutary developments to result from the intense focus on corruption over the past two decades is the growing use of corruption risk assessments by public and private entities alike. Risk assessments were first employed in the 19th century to estimate the likelihood a steam engine would explode and have been refined over the years to address risks as varied as the meltdown of a nuclear reactor and climate change. A corruption risk assessment estimates the chances a government agency or private corporation will experience one or more types of corruption. Just as assessing the risks of an engine explosion or reactor meltdown is an indispensable prerequisite for designing measures to mitigate, if not eliminate, those risks, a corruption risk assessment provides the critical information public and private sector decisionmakers need to design practicable corruption prevention programs.
A plethora of guides explaining how to conduct a corruption risk assessment are posted on the internet (examples for the public sector here and here; for the private sector here, here, and here). All recite the standard method for assessing risks of any kind found in textbooks and government reports. First, all conceivable forms of corruption to which the organization, the activity, the sector, or the project might be exposed are catalogued. Second, an estimate is prepared of how likely each of these forms of corruption is to occur. Third, an estimate is developed of the harm that will result if each does occur. Fourth, the likelihood of occurrence is combined with the severity of impact to produce a prioritized list of risks.
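The four-step method the guides recite amounts to a likelihood-times-impact scoring exercise. A minimal sketch of that arithmetic follows; the risk categories and 1–5 scores are hypothetical illustrations, not data from any actual assessment.

```python
# Sketch of the standard four-step risk-assessment method.
# All risks and their 1-5 scores are hypothetical examples.

# Step 1: catalogue the conceivable forms of corruption.
# Steps 2 and 3: estimate likelihood and impact for each (1-5 scale).
risks = {
    "bribery of procurement officials": {"likelihood": 4, "impact": 5},
    "conflict of interest in contract awards": {"likelihood": 3, "impact": 4},
    "embezzlement of project funds": {"likelihood": 2, "impact": 3},
}

# Step 4: combine likelihood and impact into a priority score (a simple
# product is one common convention) and rank risks from highest to lowest.
ranked = sorted(
    risks.items(),
    key=lambda item: item[1]["likelihood"] * item[1]["impact"],
    reverse=True,
)

for name, scores in ranked:
    print(f"{scores['likelihood'] * scores['impact']:>2}  {name}")
```

The whole exercise stands or falls on the numbers fed into steps 2 and 3, which is precisely the point the next paragraphs take up.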
The critical steps are the second and third. If the estimate of where bribery is likely to occur, or of its impact if it does occur, is wrong, prevention efforts will not be properly targeted. That is what happened to the U.K. insurance firm Aon Limited. Thinking bribery was more likely to occur in its U.K. operations than in those overseas, it put the bulk of its enforcement efforts into preventing bribery in the U.K. Because, as the U.K. financial regulator found, it “failed properly to assess … the higher risks presented by some of the countries in which [its overseas] divisions operated,” it spent little time overseeing its non-U.K. agents. That mistake was costly. When it was revealed that many non-U.K. agents had paid bribes, the U.K. Financial Services Authority, the U.S. Department of Justice, and the U.S. Securities and Exchange Commission all brought enforcement actions.
Given the importance of accurate estimates of bribery risk and impact to developing a corruption risk assessment, one would expect the “how to” guides to explain ways to improve the accuracy of those estimates, especially since guides for risk assessments in other areas do.
But the “how to” guides for corruption risk assessment I have reviewed don’t. A more thorough review by Transparency International’s Andy McDevitt didn’t find any that do either. In his Topic Guide to Corruption Risk Assessment, he concluded that “guidance on how to assess the specific level of risk is often weak or nonexistent” and that the “basis on which judgements are made is not always explicit.”
Corruption is not the only type of risk whose assessment relies on judgments of probabilities. All sorts of risk assessments – of harm to the environment, of the failure of complex engineering systems – rely in the first instance on subjective judgments. What distinguishes these assessments from corruption risk assessments, or at least from the ones McDevitt and I have reviewed, is that in these areas academic learning is built into the methods used to elicit probability judgments.
For example, several “how to” guides for private sector risk assessment suggest convening a workshop or group meeting where participants can discuss each other’s estimates and arrive at a consensus. But forcing estimators to reach a consensus can, as Professor Granger Morgan explains in a paper for the National Academy of Sciences, produce all sorts of errors, such as the opinion of the eldest or most senior group member being adopted regardless of what others believe. A 1991 analysis of the most common method for developing a consensus estimate found the “consensus [was] not based on genuine agreement [but] strong group pressure to conformity.” Professor Morgan cites several works that offer advice on how to avoid groupthink when developing estimates. The only mention of this problem in the “how to” guides I have found, however, is a brief reference in the UN Global Compact Guide; it notes that “occasionally” (my emphasis) the estimate “may reflect a dominant viewpoint or a level of bias” and that where this is so “an objective facilitator” can remedy the problem (p. 30). How to recognize whether an estimate suffers from such a problem, and how to fix it if it does, is not discussed.
The most significant learning meriting attention when estimating probabilities in corruption risk assessments is that spawned by Amos Tversky and Daniel Kahneman’s landmark paper, “Judgment Under Uncertainty: Heuristics and Biases.” Through simple experiments and close observation, the two identified a host of mental shortcuts individuals take when making the kinds of estimates a corruption risk assessment requires. Their groundbreaking contribution was to show how these shortcuts often produce biased or erroneous judgments. For example, the two found that individuals regularly overestimate the likelihood of events with which they are familiar or which they can easily recall, while at the same time underestimating or ignoring those that are remote in time or experience.
The implications for corruption risk assessments are obvious. The far greater attention bribery receives in the press and in discussions of corruption makes it far more likely that estimators will identify bribery, rather than conflicts of interest, as likely to occur and as damaging if it does. Conflict of interest is a more complex concept, and revelations of corrupt conflicts rarely garner sensational, if any, headlines. Yet recent work in the Czech Republic summarized on this blog suggests conflicts of interest are a greater threat to procurement integrity than bribery.
Kahneman and Tversky’s findings may also explain why Aon’s risk assessment was so far off the mark. Aon’s greater familiarity with the opportunities for bribery in the U.K., and its lack of appreciation of bribery risks overseas, may explain why it overestimated the risk in the U.K. and underplayed the risks elsewhere.
Kahneman and Tversky’s work has sparked an entire field devoted to finding ways to improve the accuracy of judgments like those that go into estimating the probabilities that different forms of corruption will occur and, if they do, how harmful they will be. Is there any reason this work should not inform corruption risk assessments? Or am I missing something?