Those who work in the anticorruption field are likely familiar with the frequent citation of quantitative estimates of the amount and impact of global corruption. Indeed, it has become commonplace for speeches and reports about the corruption problem to open with such statistics – including, for example, the claim that approximately US$1 trillion in bribes is paid each year, the claim that corruption costs the global economy US$2.6 trillion (or 5% of global GDP) annually, and the claim that each year 10%–25% of government procurement spending is lost to corruption. How reliable are these quantitative estimates? This is a topic we’ve discussed on the blog before: A few years back I did a couple of posts suggesting some skepticism about the US$1 trillion and US$2.6 trillion numbers (see here, here, here, and here), which were followed by some even sharper criticisms from senior GAB contributor Rick Messick and guest poster Maya Forstater.
This past year, thanks to the U4 Anti-Corruption Resource Centre, I had the opportunity to take a deeper dive into this issue in collaboration with Cecilie Wathne (formerly a U4 Senior Advisor, now a Project Leader at Norway’s Institute for Marine Research). The result of our work is a U4 Issue published last month, entitled “The Credibility of Corruption Statistics: A Critical Review of Ten Global Estimates.” (A direct link to the PDF version of the paper is here.)
In the paper, Cecilie and I identified and reviewed ten widely-cited quantitative estimates concerning corruption (including the three noted above), tried to trace these figures back to their original sources, and assessed their credibility and reliability. While the report provides a detailed discussion of what we found regarding the origins of each estimate, we also classified each of the ten into one of three categories: credible, problematic, or unfounded.
Alas, we could not rate any of these ten widely-cited statistics as credible (and only two came close). Six of the ten are problematic (sometimes seriously so), and the other four are, so far as we can tell, entirely unfounded. Interested readers can refer to the full report for the details, but to give a bit more information about the statistics we investigated and what we found, let me reproduce the summary table from the paper and then summarize our principal suggestions for improving the use of quantitative evidence in discussions of global corruption:
Summary of findings
| Statistic | Our assessment | Conclusion |
| --- | --- | --- |
| US$1 trillion in bribes is paid worldwide every year | Problematic | The most one could legitimately say is that ‘some estimates in the early 2000s found suggestive evidence that the amount paid in bribes each year was probably somewhere between US$600 billion and US$1.76 trillion’. Even that would be pushing the limits of what one can credibly conclude from the data. A recent attempt to update this figure put the estimate at ‘about US$1.5 to US$2 trillion’. However, the methodology behind this calculation has not been published. |
| US$2.6 trillion in public funds is stolen/embezzled every year | Unfounded | No organisation or researcher has even purported to estimate the annual amount corruptly stolen or embezzled at US$2.6 trillion. The recent appearance of this statistic in speeches and reports from leading organisations appears to reflect a misinterpretation or misrepresentation of a statistic on a related but distinct matter in a 2018 speech by the UN secretary-general. |
| Corruption costs the global economy US$2.6 trillion, or 5% of global GDP, each year | Unfounded | This statistic appears to have no basis whatsoever, and may have been based on a misreading of a problematic analysis on a different matter. No organisation or advocate should cite this statistic under any circumstances. |
| Corruption, together with tax evasion and illicit financial flows, costs developing countries US$1.26 trillion each year | Problematic | This statistic is based on a Global Financial Integrity estimate of the outflow of illicit funds (from all sources) from developing countries in 2008. However, the GFI estimates are for illicit financial flows overall, not only flows due to corruption and tax evasion. In addition, illicit outflows in 2008 appear to be substantially above the mean for the period. If an organisation were to use the GFI data for a narrower claim – along the lines of, ‘In the early 2000s, illicit financial flows from developing countries, including but not limited to the proceeds of corruption and other illegal activities, were estimated at roughly US$660 billion per year’ – then we would assess the statistic as credible, notwithstanding legitimate concerns about GFI’s methodology. Those wishing to use GFI’s estimates for illicit flows, though, would be better off using more recent GFI estimates. |
| 10%–25% of government procurement spending is lost to corruption each year | Problematic | While the statistic may be plausible, it appears to be based on subjective perceptions and unexplained extrapolations from unidentified or unrelated data. Still, it might be appropriate to note something along the lines of, ‘International development officers working in the early 2000s conjectured that roughly 10%–15% of public procurement spending was lost to corruption.’ |
| 10%–30% of the value of publicly funded infrastructure is lost to corruption each year | Unfounded | None of the prominent organisations that have cited this statistic provide enough information to trace the claim back to its original source. While we identified a few possible sources, none of them provided a reliable evidentiary foundation for the estimate. |
| 20%–40% of spending in the water sector is lost to corruption each year | Unfounded | While the amount lost to corruption in the water and sanitation sector may be in this range, we could not trace this estimate to anything other than unsubstantiated guesses and assessments of certain projects, assessments that tended to focus on related but different issues. |
| Up to 30% of development aid is lost to fraud and corruption each year | Problematic | We could not locate a reliable source for the estimate as usually framed. However, one leading candidate as the source for this statistic – the audit results for the UN Global Fund – could be cited if presented appropriately. The audit findings should not be presented as indicating that ‘up to 30% of all development aid is lost to corruption’. Instead: ‘An independent audit of projects sponsored by the UN Global Fund to Fight AIDS, Tuberculosis and Malaria found that 30%–67% of the funds were misspent, often due to corruption.’ |
| Customs-related corruption costs World Customs Organization members at least US$2 billion per year | Problematic | While this estimate appears in an academic study, the statistical techniques used to generate the estimate are not presented sufficiently clearly to assess the methodology, and may have serious flaws. Organisations that want to cite this statistic should include appropriate caveats to acknowledge the uncertainty of the estimates and should avoid attributing them to organisations like the OECD or WTO. |
| 1.6% of annual deaths of children under 5 years of age (over 140,000 deaths per year) are due in part to corruption | Problematic | The form in which the claim most often appears suggests far more precision and certainty than is warranted. If the statement were made in more general terms – ‘Researchers have found strong evidence that corruption increases child mortality rates’ – we would rate the claim as credible. |
As for our constructive suggestions, we conclude the paper with five recommendations for improving the use of corruption statistics (or, for that matter, any quantitative statistics) in public-facing reports and statements. I’ll quote this set of recommendations directly from the report:
- Always trace back to (and, in written documents, cite and/or link to) the original source. Before citing a quantitative statistic in a public document or speech, one should always trace the statistic back to its origin. Sometimes a source will be cited in document A, but document A got the statistic from document B, which got it from document C, and so on. Always try to locate the original source for the statistic in question and attribute it to that source, not to some intermediate source that cites the statistic (possibly inaccurately). If it is impossible to locate the original source, the statistic should not be cited. If the original source says something vague like ‘Studies have shown that…’, without actually referencing a specific study, the statistic should not be cited. If the original source says something like ‘According to World Bank estimates…’, but does not reference a specific World Bank document or data set, the statistic should not be cited.
- Read the original source carefully. A non-specialist need not scrutinise the source the way an academic might. Given the inherent difficulty in measuring hidden activities like corruption, all estimation techniques will be open to questions and criticisms. And sometimes the original source will be based on non-public data, making independent assessment impossible. Notwithstanding these important caveats, before an organisation or one of its officials cites a corruption statistic in a public document or speech, someone in the organisation should read the original source carefully to make sure that the approach to estimation is basically understandable and sensible. At the very least, someone must verify that the quantitative statistic is actually based on some sort of quantitative analysis and is not simply a guess expressed in quantitative form. It is also important to confirm that the original source is estimating the same quantity that the statistic purports to measure – to make sure, for example, that an estimate of the total cost of corruption to the global economy is not actually from a source that is estimating the total amount of global money laundering, or that an estimate of the amount of public procurement spending lost to corruption is not actually from a source that estimates the size of the kickbacks that private contractors pay to public officials. Rigorous academic evaluation is not obligatory, but basic due diligence is.
- Do not conflate an author’s institutional affiliation with the institution’s official findings. … [T]he credibility of certain statistics is artificially enhanced when they are presented as if they were the official estimates of a reputable institution, like the World Bank or IMF, when in fact the statistics in question were produced by someone employed by (or consulting for) that organisation. When relying on estimates in, for example, a World Bank working paper, one should say, ‘A World Bank working paper estimated…’ rather than ‘the World Bank estimated…’, unless it is clear that the document in question represents the organisation’s official findings.
- Do not exaggerate certainty, precision, or generality. Simplification is necessary in an advocacy or policy context, but oversimplification is a problem. Often the original source for a corruption statistic will be limited to a certain time, country, region, or sector. Additionally, the original source will often acknowledge considerable uncertainty about the estimate (or the uncertainty will be obvious, even if it goes unacknowledged). As statistics are repeated from source to source, these important caveats tend to drop away, creating a misleading impression of a precise number that can be generalised to a broad (often global) context. This can and should be avoided by briefly noting the limits on the scope of the statistic and acknowledging the uncertainty. Doing so might lead to fewer ‘global’ statistics…. We might not be able to say, with any reasonable degree of confidence, what percentage of infrastructure spending is lost to corruption each year. But we could perhaps find several evocative examples of specific countries or programmes where a rigorous evaluation produced a more reliable estimate of corruption-related loss rates in those programmes. Those individual examples can be just as powerful in making the rhetorical point about corruption’s destructive effects.
- Avoid ‘decorative’ statistics and focus instead on evidence of significant effects or associations. This is perhaps our broadest and potentially most controversial recommendation. We suggest that international organisations, donor agencies, civil society groups, and others reconsider their penchant for ‘decorative’ quantitative statistics. Rather than peppering reports and speeches with large-sounding numbers and percentages, we suggest that these influential organisations focus on empirical evidence of statistically and substantively significant correlations between corruption and other variables of interest, especially when those correlations can be plausibly interpreted as reflecting a causal relationship. Rather than trying to quantify, for example, the amount that corruption costs the global economy each year (in absolute or percentage terms), a report or speech that wants to make the point that corruption has a significant adverse economic impact could cite the extensive research literature finding that corruption is associated with lower per capita incomes, higher inequality, and more frequent macroeconomic crises. We might not be able to say, with any reasonable degree of certainty, how many annual child deaths are due to corruption, but we can cite numerous statistical studies … as support for the proposition that there is a strong correlation between corruption and child deaths, as well as a range of other adverse health outcomes. Shifting the focus from (unreliable) global descriptive statistics to empirical evidence of causal effects would also effect a productive shift in the discourse from general descriptions of the problem to consideration of consequences and causes.