President Biden’s “Fishy” Corruption Statistics Called Out

Thanks to GAB Editor-in-Chief Matthew Stephenson, readers of this blog have known for years not to believe the many numbers thrown around about the global cost of corruption. As he has shown in a series of posts (here, here, here, and here) and in a 2021 paper for the U4 Anti-Corruption Resource Centre with Cecilie Wathne, these estimates are, not to put too fine a point on it, baloney. Or what I have somewhat scatologically termed WAGs (Wild A** Guesses).

Unfortunately, White House staff apparently (and disappointingly) neither read GAB nor follow U4’s work. That is the only explanation for why they would have let President Biden say, at the recent launch of the Indo-Pacific Economic Framework for Prosperity, that “corruption saps between 2 to 5 percent of global GDP.”

Fortunately, the Washington Post’s crack fact-checker Glenn Kessler didn’t let the President’s citation of what his paper termed a “fishy statistic” go unchallenged. Relying on Matthew and Cecilie’s paper, backed up by a chat with Gary Kalman, director of Transparency International’s U.S. office, Kessler termed the 2-5 percent statistic “so discredited” that it should never have been “uttered by the president of the United States.” The White House, he wrote, must in the future do a better job of vetting such “dubious” data.

While I trust White House staff will, I hope the error in no way cools their or the president’s commitment to upping America’s anticorruption game. After all, as the president also said at the Indo-Pacific launch, corruption “steals our public resources . . . exacerbates inequality [and] hollows out a country’s ability to deliver for its citizens.” All unequivocally true. No fishy data required. QED

How Reliable Are Global Quantitative Corruption Statistics? A New U4 Report Suggests the Need for Caution

Those who work in the anticorruption field are likely familiar with the frequent citation of quantitative estimates of the amount and impact of global corruption. Indeed, it has become commonplace for speeches and reports about the corruption problem to open with such statistics—including, for example, the claim that approximately US$1 trillion in bribes are paid each year, the claim that corruption costs the global economy US$2.6 trillion (or 5% of global GDP) annually, and the claim that each year 10-25% of government procurement spending is lost to corruption. How reliable are these quantitative estimates? This is a topic we’ve discussed on the blog before: A few years back I did a few posts suggesting some skepticism about the US$1 trillion and US$2.6 trillion numbers (see here, here, here, and here), which were followed by some even sharper criticisms from senior GAB contributor Rick Messick and guest poster Maya Forstater.

This past year, thanks to the U4 Anti-Corruption Resource Centre, I had the opportunity to take a deeper dive into this issue in collaboration with Cecilie Wathne (formerly a U4 Senior Advisor, now a Project Leader at Norway’s Institute for Marine Research). The result of our work is a U4 Issue published last month, entitled “The Credibility of Corruption Statistics: A Critical Review of Ten Global Estimates.” (A direct link to the PDF version of the paper is here.)

In the paper, Cecilie and I identified and reviewed ten widely-cited quantitative estimates concerning corruption (including the three noted above), traced these figures back to their original sources, and assessed their credibility and reliability. While the report provides a detailed discussion of what we found regarding the origins of each estimate, we also classified each of the ten into one of three categories: credible, problematic, or unfounded.

Alas, we could not rate any of these ten widely-cited statistics as credible (and only two came close). Six of the ten are problematic (sometimes seriously so), and the other four are, so far as we can tell, entirely unfounded. Interested readers can refer to the full report, but just to provide a bit more information about the statistics we investigated and what we found, let me reproduce here the summary table from the paper, and also try to summarize our principal suggestions for improving the use of quantitative evidence in discussions of global corruption: Continue reading

The Persistence of Phony Statistics in Anticorruption Discourse

Early last month, UN Secretary General António Guterres delivered some brief opening remarks to the Security Council at a meeting on the relationship between corruption and conflict. In these remarks, Secretary General Guterres cited a couple of statistics about the economic costs of corruption: an estimate, attributed to the World Economic Forum (WEF), that the global cost of corruption is $2.6 trillion (or 5% of global GDP), as well as another estimate, attributed to the World Bank, that individuals and businesses cumulatively pay over $1 trillion in bribes each year. And last week, in her opening remarks at the International Anti-Corruption Conference, former Transparency International chair Huguette Labelle repeated these same figures.

Those statistics, as I’ve explained in prior posts (see here and here), are bogus. I realize that Secretary General Guterres’ invocation of those numbers shouldn’t bother me so much, since these figures had no substantive importance in his speech, and the speech itself was just the usual collection of platitudes and bromides about how corruption is bad, how the international community needs to do more to fight it, how the UN is a key player in the global effort against corruption, blah blah blah. Ditto for Ms. Labelle: her speech used these numbers as a kind of rhetorical garnish, to underscore the point that corruption is widespread and harmful, a point with which I very much agree. But just on principle, I feel it’s important to set the right tone for evidence-based policymaking by eschewing impressive-sounding numbers that do not stand up to even mild scrutiny. Just to recap: Continue reading

The 2016 CPI and the Value of Corruption Perceptions

Last month, Transparency International released its annual Corruption Perceptions Index (CPI). As usual, the release of the CPI has generated widespread discussion and analysis. Previous GAB posts have discussed many of the benefits and challenges of the CPI, with particular attention to the validity of the measurement and the flagrant misreporting of its results. The release of this year’s CPI, and all the media attention it has received, provides an occasion to revisit important questions about how the CPI should and should not be used by researchers, policymakers, and others.

As past posts have discussed, it’s a mistake to focus on the change in each country’s CPI score from the previous year. Year-to-year movements are often due to changes in the sources used to calculate the score, and most of them are not statistically meaningful. As a quick check, I compared the confidence intervals for the 2015 and 2016 CPIs and found that, for each country included in both years, the confidence intervals overlap. (While this doesn’t rule out the possibility of statistically significant changes for some countries, it suggests that a more rigorous statistical test is required to see if the changes are meaningful.) Moreover, even if a few changes each year pass the conventional thresholds for statistical significance, with 176 countries in the data we should expect a handful to do so by chance alone, even if every change were driven by random error. Nevertheless, international newspapers have already begun publishing analyses that compare annual rankings, with headlines such as “Pakistan’s score improves on Corruption Perception Index 2016” from The News International, and “Demonetisation effect? Corruption index ranking improves but a long way to go” from the Hindustan Times. Alas, Transparency International sometimes seems to encourage this style of reporting, both by presenting the annual CPI results in a table and with language such as “more countries declined than improved in this year’s results.” After all, “no change” is no headline.
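
To make the multiple-comparisons point concrete, here is a minimal sketch in Python. The 176-country figure comes from the discussion above; the baseline score, standard error, and 5% threshold are illustrative assumptions, not TI’s actual numbers. The simulation posits a world in which no country’s true score changes at all, and counts how many countries would nonetheless show a nominally “significant” year-to-year change:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

N_COUNTRIES = 176  # countries in the 2016 CPI (from the text above)
ALPHA = 0.05       # conventional significance threshold (assumed)
SE = 2.0           # assumed standard error of a country's score
N_SIMS = 10_000

false_positives = []
for _ in range(N_SIMS):
    # Null world: every country's true score is 50 in both years;
    # observed scores differ only because of sampling noise.
    scores_2015 = rng.normal(50, SE, N_COUNTRIES)
    scores_2016 = rng.normal(50, SE, N_COUNTRIES)
    # Two-sample z-test for each country's year-to-year change.
    z = (scores_2016 - scores_2015) / np.sqrt(SE**2 + SE**2)
    p = 2 * norm.sf(np.abs(z))
    false_positives.append(int((p < ALPHA).sum()))

print(f"Expected by arithmetic:  {ALPHA * N_COUNTRIES:.1f}")   # ~8.8
print(f"Mean across simulations: {np.mean(false_positives):.1f}")
```

With 176 countries, in other words, roughly nine “statistically significant” changes per year are exactly what random error alone would produce.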

Although certain uses of the CPI are inappropriate, such as comparing each country’s movement from one year to the next, this does not mean that the CPI is not useful. Indeed, some critics have the unfortunate tendency to dismiss the CPI out of hand, often emphasizing that corruption perceptions are not the same as corruption reality. That is certainly true—TI goes out of its way to emphasize this point with each release of a new CPI—but there are at least two reasons why measuring corruption perceptions is valuable: Continue reading

On Theory, Data, and Academic Malpractice in Anticorruption Research

I’m committed (probably self-servingly) to the idea that academic research is vital to both understanding and ameliorating corruption. I sometimes worry, though, that we in the research community don’t always live up to our highest ideals. Case in point: A little while back, I asked my law school’s library to help me track down some research papers on corruption-related topics, including a working paper from a few years ago co-authored by a very well-known and influential corruption/good-governance researcher. I’d seen the paper cited in other articles but couldn’t find it. The library staff couldn’t find it either, and emailed the authors directly to ask if a copy of the paper was available. Here is a verbatim reproduction of this famous professor’s response:

Thanks for your email. Unfortunately, we decided not to finish this paper since we could not get the data to fit our theory[.]

I have to say, I found this response a bit troubling.

Now, to be fair, maybe what this person (whose first language is not English) actually meant was that he and his coauthor were unable to locate the data that would allow a meaningful test of the theory. (In other words, perhaps the statement “We could not get the data to fit our theory” should be understood to mean: “We could not acquire the sort of data that would be necessary to test our theory.”) But boy, much as I want to be charitable, it sure sounds like what this person meant was that he and his coauthor had tried to slice and dice the data in lots of different ways to get a result that fit a predetermined theory (so-called “Procrustean data torturing”), and that when they couldn’t get nature to confess, they spiked the paper rather than publicizing the null findings (contributing to the so-called “file drawer problem”).

Now, again, maybe that latter reading is wrong and unfair. Maybe the more charitable interpretation is actually the correct one. But still, it’s worrying. Even if this case was not, in fact, itself an illustration of data torturing and the file-drawer problem, I’m sure those things go on in anticorruption research, just as they do elsewhere. Lots of scholars (including the author of the above email) have their own pet theories about the best way to promote high-quality governance, and spend quite a bit of time advising governments and NGO reformers on the basis of these (allegedly) evidence-based theories. But for the results of academic research to be credible and useful, we all need to be very careful about how we go about producing our scholarship, and not let our findings — or our decisions about what projects to pursue, publish, and publicize — be unduly determined by our preconceived notions.

Chill Out: Fine-Tuning Anticorruption Initiatives to Decrease Their Chilling Effect

Who is “harmed” by aggressive anticorruption crackdowns? The most obvious answer is corrupt bureaucrats, shady contractors, and those who benefit from illicit flows of money. And while there are concerns about political bias and other forms of discrimination in the selection of targets, in general most of us rightly shed few tears for corrupt public officials and those who benefit from their illicit acts. But aggressive anticorruption crackdowns may have an important indirect cost: they may have a chilling effect on legitimate, socially beneficial behavior, such as public and private investment in economically productive activities. Although chilling effects are often discussed in other areas, such as First Amendment rights in the United States, they have received little attention in the anticorruption context. That should change.

For example, in Indonesia, recent efforts to crack down on corruption appear to have stunted simultaneous measures to grow the economy through fiscal stimulus. As this Reuters article relates, “Indonesian bureaucrats are holding off spending billions of dollars on everything from schools and clinics to garbage trucks and parking meters, fearful that any major expenditure could come under the scanner of fervent anti-corruption fighters.” Nor is Indonesia the only example. In April 2014, Bank of America estimated that China’s corruption crackdown would cost the Chinese economy approximately $100 billion that year. One can challenge that estimate (as Matthew has discussed with respect to other figures used in reports on the cost of China’s anticorruption drive), but the more general notion that aggressive anticorruption enforcement can have a chilling effect on both public and private investment, which in turn can have negative macroeconomic impacts, is harder to rebut.

Taking this chilling effect seriously does not imply the view that corruption is an “efficient grease” or otherwise economically beneficial. The point, rather, is that although corruption is bad, aggressive measures to punish corruption may deter not only corrupt activities (which we want to deter) but also legitimate activities that might entail corruption risks, or be misconstrued as corruption. So, if we think that corruption is bad but that anticorruption enforcement might have an undesirable chilling effect, what should we do? Continue reading

Assessing Corruption: Do We Need a Number?

As GAB readers are aware, I’ve occasionally used this platform to complain about widely-repeated corruption statistics that appear to be, at best, unreliable guesstimates misrepresented as precise calculations—and at worst, completely bogus. (The “$1 trillion in annual bribe payments” figure is an example of the former; the “corruption costs the global economy $2.6 trillion per year” claim is an example of the latter.) I recognize that, in the grand scheme of things, made-up statistics and false precision are not that big a deal. After all, the anticorruption community faces 1,634 problems that are more important than false precision, and in any event 43% of all statistics quoted in public debates are completely made up. Yet my strong instinct is that we in the anticorruption community ought to purge these misleading figures from our discussions, and pursue not only the academic study of corruption, but also our anticorruption advocacy efforts, with a more rigorous and careful approach to evidence.

But perhaps I’m wrong about that, or at least naïve. A few months ago, after participating in a conference panel where some of the other speakers invoked the “corruption costs $2.6 trillion” figure, I was having a post-panel chat with another one of the panelists (an extremely smart guy who runs the anticorruption programs at a major international NGO), and I was criticizing (snarkily) the tendency to throw out these big but not-well-substantiated numbers. Why, I asked, can’t we just say, “Corruption is a really big problem that imposes significant costs?” We’ve got plenty of research on that point, and—a few iconoclastic critics aside—the idea that corruption is a big problem seems to have gained widespread, mainstream acceptance. Who really cares if the aggregate dollar value of annual bribe payments is $1 trillion, $450 billion, $2.3 trillion, or whatever? Why not just say, corruption is bad, here’s a quick summary of the evidence that it does lots of damage, and move on? My companion nodded, smiled, and said something along the lines of, “Yeah, I see what you’re saying. But as an advocate, you need to have a number.”

We didn’t get to continue our conversation, but that casual remark has stuck with me. After all, as I noted above, this person is extremely smart, insightful, and reflective, and he has lots of experience working on anticorruption advocacy at a very high level (a kind of experience that I, as an Ivory Tower academic, do not have). “As an advocate, you need to have a number.” Is that right? Is there a plausible case for continuing to open op-eds, speeches, policy briefs, and so forth with statements like, “Experts estimate that over $1 trillion in bribes are paid each year, costing the global economy over $2.6 trillion,” even if we know that those numbers are at best wildly inaccurate? (This question, by the way, is closely related to an issue I raised in a post last year, which arose out of a debate I had with another advocate about the legal interpretation of the UN Convention Against Corruption.)

I thought I’d use this post as an opportunity to raise that question with our readers, in the hopes of both getting some feedback (especially from our readers with first-hand experience in the advocacy and policymaking communities) and provoking some conversations on this question, even if people don’t end up writing in with their views. And to be clear, I’m not just interested in the narrow question of whether we should keep using the $2.6 trillion or $1 trillion estimates. I’m more generally curious about the role (and relative importance) of seemingly precise “big numbers” in anticorruption advocacy work. Do we really need them? Why? And is what we gain worth the costs?

It’s Time to Abandon the “$2.6 Trillion/5% of Global GDP” Corruption-Cost Estimate

In my post a couple weeks back, I expressed some puzzlement about the source of the widely-quoted estimate that corruption costs the global economy approximately $2.6 trillion, or roughly 5% of global GDP. I was hoping that someone out there in GAB Reader-Land would be able to point me to the source for this figure (as several GAB readers helpfully did when I expressed similar puzzlement last year about the source for the related estimate that there are approximately $1 trillion in annual bribe payments). Alas, although several people made some very insightful comments (some of which are in the public comment thread with the original post), this time it seems that nobody out there has been able to point me to a definitive source.

I’ve done a bit more poking around (with the help of GAB readers and contributors), and here’s my best guess as to where the $2.6 trillion/5% of GDP number comes from: Continue reading

A Quick (Partial) Fix for the CPI

As regular readers of this blog know, I’ve been quite critical of the idea that one can measure changes in corruption (or even the perception of corruption) using within-country year-to-year variation in the Transparency International Corruption Perceptions Index (CPI). To be clear, I’m not one of those people who like to trash the CPI across the board – I actually think it can be quite useful. But given the way the index is calculated, there are big problems with looking at an individual country’s CPI score this year, comparing it to previous years, and drawing conclusions as to whether (perceived) corruption is getting worse or better. Among the many problems with making this sort of year-to-year comparison are the fact that the sources used to calculate any individual country’s CPI score may change from year to year, and the fact that a big, idiosyncratic movement in an individual source can have an outsized influence on the change in the composite score. (For more discussion of these points, see here, here, and here.) Also, while TI does provide 90% confidence intervals for its yearly estimates, the fact that confidence intervals overlap does not necessarily mean that there’s no statistically significant difference between the scores (an important point I’ll confess to sometimes neglecting in my own prior discussions of these issues).
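
To see why overlapping intervals don’t settle the question, here is a minimal sketch of the standard two-sample z-test for a single country’s change. The scores and standard errors are hypothetical (TI publishes both with each CPI release), chosen only to show that two 90% confidence intervals can overlap even when the difference between them is statistically significant:

```python
import math
from scipy.stats import norm

# Hypothetical scores and standard errors for one country in two years
# (illustrative numbers only, not taken from any actual CPI release).
score_a, se_a = 43.0, 1.8   # earlier year
score_b, se_b = 37.5, 1.9   # later year

# 90% confidence intervals, mirroring the intervals TI reports.
z90 = norm.ppf(0.95)  # ~1.645
ci_a = (score_a - z90 * se_a, score_a + z90 * se_a)  # ~(40.0, 46.0)
ci_b = (score_b - z90 * se_b, score_b + z90 * se_b)  # ~(34.4, 40.6)
print(f"Intervals overlap: {ci_a[0] < ci_b[1]}")     # True

# Yet a direct test of the difference rejects "no change" at the 5% level:
z = (score_b - score_a) / math.sqrt(se_a**2 + se_b**2)
p = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ -2.10, p ~ 0.036
```

Here the intervals overlap by about half a point, yet the change is significant even at the 5 percent level. The overlap heuristic is too conservative because non-overlap requires the gap between scores to exceed the sum of the two margins of error, while significance requires only that it exceed a comparable multiple of the root-sum-of-squares of the standard errors, which is always smaller.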

Although there are lots of other problems with the CPI, and in particular with making over-time CPI comparisons, I think there’s a fairly simple procedure that TI (or anybody working with the TI data) could implement to address the problems just discussed. Since TI will be releasing the 2015 CPI within the next month, I thought this might be a good time to lay out what I think one ought to do to evaluate whether there have been statistically significant within-country changes in the CPI from one year to another. (I should say up front that I’m not an expert in statistical analysis, so it’s entirely possible I’ve screwed this up in some way. But I think I’ve got the important parts basically right.) Here goes: Continue reading

More on the “News from Nowhere” Problem in Anticorruption Research

One of my all-time favorite academic papers — which should be required reading not only for those who work on anticorruption, but for anyone working on a topic where people casually throw around statistics — is Marc Galanter’s 1993 article News from Nowhere: The Debased Debate on Civil Justice. Professor Galanter’s paper doesn’t have anything directly to do with international corruption. Rather, he sets out to debunk a series of widely-held but mostly-false beliefs about civil litigation in the United States, and in the process he traces the origins of many of the statistics often cited in debates about that topic. He finds that many of these statistics come from, well, nowhere. Here’s my favorite example: Around the time Professor Galanter was writing, it was common to hear claims that the civil justice system costs $80 billion in direct litigation costs; indeed, that figure appeared in an official report from the President’s Council on Competitiveness. The report’s only source for that estimate, however, was an article in Forbes; Forbes, in turn, had drawn the figure from a 1988 book by Peter Huber. But Huber himself hadn’t done any direct research on the costs of the system. Rather, Huber’s only source for the $80 billion figure was an article in Chief Executive magazine, which reported that at a roundtable discussion, a CEO claimed that “it’s estimated” (he didn’t say by whom) that insurance liability costs industry $80 billion per year. So: a CEO throws out a number at a roundtable discussion, without a source; it gets quoted in a non-scholarly magazine, repeated (and thus “laundered”) in what appears to be a serious book, and then picked up in the popular press and official government reports as an important and troubling truth about the out-of-control costs of the US civil justice system.

I thought about Galanter’s article the other day when I was reading the Poznan Declaration on “Whole-of-University Promotion of Social Capital, Health, and Development.” The Declaration itself is about getting universities to commit to integrating anticorruption and ethics into their programs; I may have something to say about the substance of the declaration itself in a later post. But the following assertion in the Declaration caught my eye: “Despite the relative widespread implementations of anti-corruption reforms and institutional solutions, no more than 21 countries have enjoyed a significant decrease in corruption levels since 1996, while at the same time 27 countries have become worse off.” Wow, I thought, that seems awfully precise, and if it’s true it’s very troubling. Despite the fact that I spend a fair amount of time reading about the comparative study of corruption, that statistic is news to me. It turns out, though, that it’s news from nowhere. Continue reading