The Persistence of Phony Statistics in Anticorruption Discourse

Early last month, UN Secretary General António Guterres delivered some brief opening remarks to the Security Council at a meeting on the relationship between corruption and conflict. In these remarks, Secretary General Guterres cited a couple of statistics about the economic costs of corruption: an estimate, attributed to the World Economic Forum (WEF), that the global cost of corruption is $2.6 trillion (or 5% of global GDP), as well as another estimate, attributed to the World Bank, that individuals and businesses cumulatively pay over $1 trillion in bribes each year. And last week, in her opening remarks at the International Anti-Corruption Conference, former Transparency International chair Huguette Labelle repeated these same figures.

Those statistics, as I’ve explained in prior posts (see here and here), are bogus. I realize that Secretary General Guterres’ invocation of those numbers shouldn’t bother me so much, since the figures had no substantive importance in his speech, and the speech itself was just the usual collection of platitudes and bromides about how corruption is bad, how the international community needs to do more to fight it, how the UN is a key player in the global effort against corruption, blah blah blah. Ditto for Ms. Labelle: her speech used these numbers as a kind of rhetorical garnish, to underscore the point that corruption is widespread and harmful, a point with which I very much agree. But just on principle, I think it’s important to set the right tone for evidence-based policymaking by eschewing impressive-sounding numbers that do not stand up to even mild scrutiny. Just to recap: Continue reading

The 2016 CPI and the Value of Corruption Perceptions

Last month, Transparency International released its annual Corruption Perceptions Index (CPI). As usual, the release of the CPI has generated widespread discussion and analysis. Previous GAB posts have discussed many of the benefits and challenges of the CPI, with particular attention to the validity of the measurement and the flagrant misreporting of its results. The release of this year’s CPI, and all the media attention it has received, provides an occasion to revisit important questions about how the CPI should and should not be used by researchers, policymakers, and others.

As past posts have discussed, it’s a mistake to focus on the change in each country’s CPI score from the previous year. These changes are often driven by changes in the sources used to calculate the score, and most are not statistically meaningful. As a quick check, I compared the confidence intervals for the 2015 and 2016 CPIs and found that, for every country included in both years, the confidence intervals overlap. (While this doesn’t rule out the possibility of statistically significant changes for some countries, it suggests that a more rigorous statistical test is needed before declaring any change meaningful.) Moreover, even when a few changes in a given year do pass conventional thresholds for statistical significance, with 176 countries in the data we should expect a handful to clear those thresholds by chance alone, even if every change were driven by random error. Nevertheless, international newspapers have already begun analyses comparing annual rankings, with headlines such as “Pakistan’s score improves on Corruption Perception Index 2016” from The News International, and “Demonetisation effect? Corruption index ranking improves but a long way to go” from the Hindustan Times. Alas, Transparency International sometimes seems to encourage this style of reporting, both by presenting annual CPI results side by side in a table, and with language such as “more countries declined than improved in this year’s results.” After all, “no change” is no headline.
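For readers who want to replicate this quick check, here’s a minimal sketch in Python. The file name and column names are hypothetical (TI publishes scores and 90% confidence intervals in its downloadable data, but the exact layout varies from year to year), so treat this as an illustration of the logic rather than a ready-made script:

```python
# Sketch (hypothetical file/column names): flag countries whose 90%
# confidence intervals for the 2015 and 2016 CPI scores do not overlap.
import pandas as pd

cpi = pd.read_csv("cpi_2015_2016.csv")  # one row per country, both years merged

no_overlap = cpi[(cpi["lower_2016"] > cpi["upper_2015"]) |
                 (cpi["upper_2016"] < cpi["lower_2015"])]
print(f"Countries with non-overlapping intervals: {len(no_overlap)}")

# The multiple-comparisons point: even if no country's underlying score
# changed at all, a 5%-level test applied to 176 countries would be
# expected to flag about 176 * 0.05 = 8.8 "significant" changes by chance.
print(f"Expected false positives at alpha = 0.05: {176 * 0.05:.1f}")
```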

Although certain uses of the CPI are inappropriate, such as comparing each country’s movement from one year to the next, that does not mean the CPI is useless. Indeed, some critics have the unfortunate tendency to dismiss the CPI out of hand, often emphasizing that corruption perceptions are not the same as corruption reality. That is certainly true (TI goes out of its way to emphasize the point with each release of a new CPI), but there are at least two reasons why measuring corruption perceptions is valuable: Continue reading

On Theory, Data, and Academic Malpractice in Anticorruption Research

I’m committed (probably self-servingly) to the idea that academic research is vital to both understanding and ameliorating corruption. I sometimes worry, though, that we in the research community don’t always live up to our highest ideals. Case in point: a little while back, I asked my law school’s library to help me track down some research papers on corruption-related topics, including a working paper from a few years ago co-authored by a very well-known and influential corruption/good-governance researcher. I’d seen the paper cited in other articles but couldn’t find it. The library staff couldn’t find it either, and emailed the authors directly to ask whether a copy of the paper was available. Here is a verbatim reproduction of this famous professor’s response:

Thanks for your email. Unfortunately, we decided not to finish this paper since we could not get the data to fit our theory[.]

I have to say, I found this response a bit troubling.

Now, to be fair, maybe what this person (whose first language is not English) actually meant was that he and his coauthor were unable to locate the data that would allow a meaningful test of the theory. (In other words, perhaps the statement “We could not get the data to fit our theory” should be understood to mean: “We could not acquire the sort of data that would be necessary to test our theory.”) But boy, much as I want to be charitable, it sure sounds like what this person meant was that he and his coauthor had tried to slice and dice the data in lots of different ways to get a result that fit a predetermined theory (so-called “Procrustean data torturing”), and that when they couldn’t get nature to confess, they spiked the paper rather than publicizing the null findings (thereby contributing to the “file drawer problem”).

Now, again, maybe that latter reading is wrong and unfair. Maybe the more charitable interpretation is actually the correct one. But still, it’s worrying. Even if this case was not, in fact, itself an illustration of data torturing and the file-drawer problem, I’m sure those things go on in anticorruption research, just as they do elsewhere. Lots of scholars (including the author of the above email) have their own pet theories about the best way to promote high-quality governance, and spend quite a bit of time advising governments and NGO reformers on the basis of these (allegedly) evidence-based theories. But for the results of academic research to be credible and useful, we all need to be very careful about how we go about producing our scholarship, and not let our findings, or our decisions about which projects to pursue, publish, and publicize, be unduly determined by our preconceived notions.
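To make the data-torturing worry concrete, here’s a toy simulation (my own illustration, with invented variables, not anything from the paper in question): if you test enough specifications on pure noise, some of them will come out statistically significant, and reporting only those while shelving the rest is exactly the file-drawer dynamic described above.

```python
# Toy simulation: with enough specifications, pure noise yields
# "significant" results; publishing only those misleads readers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_specs, n_obs = 100, 50
significant = 0
for _ in range(n_specs):
    x = rng.normal(size=n_obs)   # a fake "governance" measure: pure noise
    y = rng.normal(size=n_obs)   # a fake "outcome": also pure noise
    r, p = stats.pearsonr(x, y)  # no true relationship exists by construction
    if p < 0.05:
        significant += 1         # the result that would get written up

print(f"{significant} of {n_specs} noise-only tests came out 'significant'")
# Roughly 5 on average -- and if only those see the light of day, the
# published record overstates the evidence.
```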

Chill Out: Fine-Tuning Anticorruption Initiatives to Decrease Their Chilling Effect

Who is “harmed” by aggressive anticorruption crackdowns? The most obvious answer is corrupt bureaucrats, shady contractors, and those who benefit from illicit flows of money. And while there are concerns about political bias and other forms of discrimination in the selection of targets, most of us rightly shed few tears for corrupt public officials and those who benefit from their illicit acts. But aggressive anticorruption crackdowns may have an important indirect cost: they may chill legitimate, socially beneficial behavior, such as public and private investment in economically productive activities. Although chilling effects are often discussed in other areas, such as First Amendment rights in the United States, they have received little attention in the anticorruption context. That should change.

For example, in Indonesia, recent efforts to crack down on corruption appear to have undercut simultaneous efforts to grow the economy through fiscal stimulus. As this Reuters article relates, “Indonesian bureaucrats are holding off spending billions of dollars on everything from schools and clinics to garbage trucks and parking meters, fearful that any major expenditure could come under the scanner of fervent anti-corruption fighters.” Nor is Indonesia the only example. In April 2014, Bank of America estimated that China’s corruption crackdown would cost the Chinese economy approximately $100 billion that year. One can challenge that estimate (as Matthew has discussed with respect to other figures used in reports on the cost of China’s anticorruption drive), but the more general claim, that aggressive anticorruption enforcement can chill both public and private investment and thereby do macroeconomic damage, is harder to rebut.

Taking this chilling effect seriously does not imply the view that corruption is an “efficient grease” or otherwise economically beneficial. The point, rather, is that although corruption is bad, aggressive measures to punish corruption may deter not only corrupt activities (which we want to deter) but also legitimate activities that might entail corruption risks, or be misconstrued as corruption. So, if we think that corruption is bad but that anticorruption enforcement might have an undesirable chilling effect, what should we do? Continue reading

Assessing Corruption: Do We Need a Number?

As GAB readers are aware, I’ve occasionally used this platform to complain about widely repeated corruption statistics that appear to be, at best, unreliable guesstimates misrepresented as precise calculations, and at worst completely bogus. (The “$1 trillion in annual bribe payments” figure would be an example of the former; the “corruption costs the global economy $2.6 trillion per year” figure is an example of the latter.) I recognize that, in the grand scheme of things, made-up statistics and false precision are not that big a deal. After all, the anticorruption community faces 1,634 problems that are more important than false precision, and in any event 43% of all statistics quoted in public debates are completely made up. Yet my strong instinct is that we in the anticorruption community ought to purge these misleading figures from our discussions, and try to pursue not only the academic study of corruption, but also our anticorruption advocacy efforts, using a more rigorous and careful approach to evidence.

But perhaps I’m wrong about that, or at least naïve. A few months ago, after participating in a conference panel where some of the other speakers invoked the “corruption costs $2.6 trillion” figure, I was having a post-panel chat with one of the other panelists (an extremely smart guy who runs the anticorruption programs at a major international NGO), and I was (snarkily) criticizing the tendency to throw out these big but not-well-substantiated numbers. Why, I asked, can’t we just say, “Corruption is a really big problem that imposes significant costs”? We’ve got plenty of research on that point, and, a few iconoclastic critics aside, the idea that corruption is a big problem seems to have gained widespread, mainstream acceptance. Who really cares whether the aggregate dollar value of annual bribe payments is $1 trillion, $450 billion, $2.3 trillion, or whatever? Why not just say corruption is bad, here’s a quick summary of the evidence that it does lots of damage, and move on? My companion nodded, smiled, and said something along the lines of, “Yeah, I see what you’re saying. But as an advocate, you need to have a number.”

We didn’t get to continue our conversation, but that casual remark has stuck with me. After all, as I noted above, this person is extremely smart, insightful, and reflective, and he has lots of experience working on anticorruption advocacy at a very high level (a kind of experience that I, as an Ivory Tower academic, do not have). “As an advocate, you need to have a number.” Is that right? Is there a plausible case for continuing to open op-eds, speeches, policy briefs, and so forth with statements like, “Experts estimate that over $1 trillion in bribes is paid each year, costing the global economy over $2.6 trillion,” even if we know that those numbers are at best wildly inaccurate? (This question, by the way, is closely related to an issue I raised in a post last year, one that arose out of a debate I had with another advocate about the legal interpretation of the UN Convention Against Corruption.)

I thought I’d use this post as an opportunity to raise that question with our readers, in the hopes of both getting some feedback (especially from readers with first-hand experience in the advocacy and policymaking communities) and provoking some conversations on this question, even if people don’t end up writing in with their views. And to be clear, I’m not just interested in the narrow question of whether we should keep using the $2.6 trillion or $1 trillion estimates. I’m more generally curious about the role (and relative importance) of seemingly precise “big numbers” in anticorruption advocacy work. Do we really need them? Why? And is what we gain worth the costs?

It’s Time to Abandon the “$2.6 Trillion/5% of Global GDP” Corruption-Cost Estimate

In my post a couple of weeks back, I expressed some puzzlement about the source of the widely quoted estimate that corruption costs the global economy approximately $2.6 trillion, or roughly 5% of global GDP. I was hoping that someone out there in GAB Reader-Land would be able to point me to the source for this figure (as several GAB readers helpfully did when I expressed similar puzzlement last year about the source for the related estimate that there are approximately $1 trillion in annual bribe payments). Alas, although several people made some very insightful comments (some of which appear in the comment thread on the original post), this time it seems that nobody has been able to point me to a definitive source.

I’ve done a bit more poking around (with the help of GAB readers and contributors), and here’s my best guess as to where the $2.6 trillion/5% of GDP number comes from: Continue reading

A Quick (Partial) Fix for the CPI

As regular readers of this blog know, I’ve been quite critical of the idea that one can measure changes in corruption (or even in the perception of corruption) using within-country year-to-year variation in the Transparency International Corruption Perceptions Index (CPI). To be clear, I’m not one of those people who like to trash the CPI across the board; I actually think it can be quite useful. But given the way the index is calculated, there are big problems with taking an individual country’s CPI score this year, comparing it to previous years, and drawing conclusions as to whether (perceived) corruption is getting worse or better. Among the many problems with making these sorts of year-to-year comparisons are the fact that the sources used to calculate an individual country’s CPI score may change from year to year, and the fact that a big, idiosyncratic movement in an individual source can have an outsized influence on the change in the composite score. (For more discussion of these points, see here, here, and here.) Also, while TI does provide 90% confidence intervals for its yearly estimates, the fact that two confidence intervals overlap does not necessarily mean that there’s no statistically significant difference between the scores (an important point I’ll confess to sometimes neglecting in my own prior discussions of these issues).
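A quick illustration of that last point, with made-up numbers rather than real CPI data: two scores whose 90% confidence intervals overlap can still differ significantly under a standard two-sample z-test, because the test compares the difference to its own standard error rather than checking whether the intervals touch. Here is a sketch of the arithmetic, assuming (as a simplification) independent, normally distributed scores:

```python
# Toy example: overlapping 90% CIs, yet a significant difference at the
# 5% level. Numbers are invented for illustration only.
from math import sqrt
from scipy.stats import norm

z90 = norm.ppf(0.95)             # ~1.645: half-width of a 90% CI, in SE units

score_a, half_a = 40.0, 3.0      # 90% CI: [37.0, 43.0]
score_b, half_b = 45.5, 3.0      # 90% CI: [42.5, 48.5] -- intervals overlap

se_a, se_b = half_a / z90, half_b / z90            # back out the standard errors
z = (score_b - score_a) / sqrt(se_a**2 + se_b**2)  # z-stat for the difference
p = 2 * (1 - norm.cdf(abs(z)))                     # two-sided p-value

print(f"z = {z:.2f}, p = {p:.3f}")   # z ~ 2.13, p ~ 0.033: significant at 5%
```

(The independence assumption is generous: many of the same underlying sources feed both years’ scores, so a proper test would need to account for that, which is part of why a more careful procedure is worth laying out.)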

Although there are lots of other problems with the CPI, and in particular with making over-time CPI comparisons, I think there’s a fairly simple procedure that TI (or anybody working with the TI data) could implement to address the problems just discussed. Since TI will be releasing the 2015 CPI within the next month, I thought this might be a good time to lay out what I think one ought to do to evaluate whether there have been statistically significant within-country changes in the CPI from one year to another. (I should say up front that I’m not an expert in statistical analysis, so it’s entirely possible I’ve screwed this up in some way. But I think I’ve got the important parts basically right.) Here goes: Continue reading