Guest Post: Contesting the Narrative of Anticorruption Failure

Today’s guest post is from Robert Barrington, currently a professor of practice at the University of Sussex’s Centre for the Study of Corruption, who previously served as the executive director of Transparency International UK, where he worked for over a decade.

I have read with great interest the recent exchange of views between Professor Bo Rothstein and Professor Matthew Stephenson on the academic study of corruption and anticorruption. As an anticorruption practitioner who now works within an academic research center, I was particularly struck by how their exchange (Professor Rothstein’s initial post, Professor Stephenson’s critique, and Professor Rothstein’s reply) surfaced some extremely important issues for anticorruption scholarship, its purposes, and its relationship to anticorruption practice.

I find it hard to agree with Professor Rothstein’s analysis, even before getting to his points of difference with Professor Stephenson. My main beef with Professor Rothstein’s analysis is with his starting assumption of widespread failure. Like so many prominent scholars who study corruption, he proceeds from the premise that pretty much all of the anticorruption reform activity over the last generation has failed. He asserts that “[d]espite huge efforts from international development organizations, we have seen precious little success combating corruption,” that anticorruption reform efforts have been a “huge policy failure,” and sets out to explain “[w]hy … so many anti-corruption programs [have] not delivered[.]” Professor Rothstein then offers three main answers, which Professor Stephenson criticizes.

In taking this downbeat view, Professor Rothstein is not alone. The scholarship of failure on this subject lists among its adherents many of the most prominent academic voices in the field. Professor Alina Mungiu-Pippidi has framed as a central question in corruption scholarship, “[W]hy do so many anticorruption reform initiatives fail?” Professor Michael Johnston asserts that “the results of anticorruption reform initiatives, with very few exceptions, have been unimpressive, or even downright counter-productive.” Professor Paul Heywood, notable for the nuance he generally brings to anticorruption analysis, asserts that there has been a “broad failure of anticorruption policies” in developing and developed countries alike. And many scholars proceed to reason backwards from that starting point of failure: If anticorruption reform efforts have been an across-the-board failure, it must be because anticorruption practitioners are doing things in the wrong way, which is because they are proceeding from an entirely wrongheaded set of premises. The principal problems identified by these scholars, perhaps not coincidentally, are those where academics might have a comparative advantage over practitioners: use of the wrong definition of corruption, use of the wrong social science framework to understand corruption, and (as Professor Rothstein puts it) locating corruption in the “wrong social spaces.”

That so many distinguished scholars have advanced something like this assessment makes me wary, as a practitioner, of offering a different view. But I do see things differently. In my view, both the initial assessment (that anticorruption reform efforts have been an across-the-board failure) and the diagnosis (that this failure is due to practitioners not embracing the right definitions and theories) are incorrect; they are more than a little unfair, and potentially harmful. I want to emphasize that this different take should not be considered an attack on eminent scholars, but rather a genuine effort to tease out why, when presented with the same evidence, some academics see failure while many practitioners see success. Here goes: Continue reading

How Reliable Are Global Quantitative Corruption Statistics? A New U4 Report Suggests the Need for Caution

Those who work in the anticorruption field are likely familiar with the frequent citation of quantitative estimates of the amount and impact of global corruption. Indeed, it has become commonplace for speeches and reports about the corruption problem to open with such statistics—including, for example, the claim that approximately US$1 trillion in bribes are paid each year, the claim that corruption costs the global economy US$2.6 trillion (or 5% of global GDP) annually, and the claim that each year 10-25% of government procurement spending is lost to corruption. How reliable are these quantitative estimates? This is a topic we’ve discussed on the blog before: A few years back I did a couple of posts suggesting some skepticism about the US$1 trillion and US$2.6 trillion numbers (see here, here, here, and here), which were followed by some even sharper criticisms from senior GAB contributor Rick Messick and guest poster Maya Forstater.

This past year, thanks to the U4 Anti-Corruption Resource Centre, I had the opportunity to take a deeper dive into this issue in collaboration with Cecilie Wathne (formerly a U4 Senior Advisor, now a Project Leader at Norway’s Institute for Marine Research). The result of our work is a U4 Issue published last month, entitled “The Credibility of Corruption Statistics: A Critical Review of Ten Global Estimates.” (A direct link to the PDF version of the paper is here.)

In the paper, Cecilie and I identified and reviewed ten widely-cited quantitative estimates concerning corruption (including the three noted above), traced these figures back to their original sources, and assessed their credibility and reliability. While the report provides a detailed discussion of what we found regarding the origins of each estimate, we also classified each of the ten into one of three categories: credible, problematic, or unfounded.

Alas, we could not rate any of these ten widely-cited statistics as credible (and only two came close). Six of the ten are problematic (sometimes seriously so), and the other four are, so far as we can tell, entirely unfounded. Interested readers can refer to the full report, but just to provide a bit more information about the statistics we investigated and what we found, let me reproduce here the summary table from the paper, and also try to summarize our principal suggestions for improving the use of quantitative evidence in discussions of global corruption: Continue reading

An Inside View of Corruption in a Tax and Customs Agency

Information derived from the direct observation of corrupt behavior provides insights no other source can match.  From first-hand reports of the number and amount of bribes Indonesian truck drivers paid to traverse different provinces, Barron and Olken reached important conclusions about centralized versus decentralized bribery schemes. Data Sequiera and Djankov gathered from South African and Mozambican clearing agents on bribery at their nations’ ports and border posts allowed the two to show how differences in tariff rates and uncertainties over the expected bribe amount affected firms’ behavior. The resourcefulness these and other researchers displayed in compiling direct evidence of corruption and the thoughtful, sometimes counter-intuitive conclusions their analysis yielded are summarized in this first-rate review essay by Sequiera.

As rich a source of learning on corruption as it is, collecting direct observation data is no mean feat.  Those committing corruption crimes don’t generally invite nosy observers to watch and record their actions. That is why it was especially welcome when a friend and colleague shared the parts of an interview with the head of a Latin American customs and tax agency that touched on corruption. The agency head’s insider view, though informed by training as a professional economist and a background in academia, offers nothing close to what readers can take from Barron and Olken, Sequiera and Djankov, and other full-blown academic studies.  Nonetheless, what he reports raises interesting, provocative issues of use to reformers and to those looking for hypotheses worth testing.

The portion of the interview dealing with corruption, anonymized to protect the source, is below. Would other insiders please come forward? Again, it is doubtful your observations will be anywhere near as valuable as the data the Barrons, Olkens, Sequieras, and Djankovs of the world have so cleverly and painstakingly collected, but in an information-scarce environment, all contributions are welcome. GAB would be more than happy to publish what you have observed about corruption in your organization, with safeguards to protect your identity. Continue reading

Must the IMF Quantify Grand Corruption? A Friendly-But-Skeptical Reply to Global Financial Integrity

The World Bank and IMF held their annual meetings last week, and it appears from the agenda that considerable attention was devoted to corruption—an encouraging sign that these organizations continue to treat this problem as both serious and relevant to their work. But does addressing the corruption problem effectively require that these organizations make more of an effort to quantify the problem? In a provocative post last week on Global Financial Integrity’s blog, Tom Cardamone (GFI’s President) and Maureen Heydt (GFI’s Communications Coordinator) argue that the answer is yes. In particular, they argue that the IMF should “undertake two analyses”: First the IMF “should conduct an annual assessment of grand corruption in all countries and publish the dollar value of that analysis.” Second, the IMF “should conduct an opportunity cost analysis of [] stolen assets”—calculating, for example, how many hospital beds or vaccines the stolen money could have purchased, or how many school teachers could have been hired.

This second analysis is more straightforward, and dependent on the first—once we know the dollar value of stolen assets (or grand corruption more generally), it’s not too hard to do some simple division to show how that money might otherwise have been spent. So it seems to me that the real question is whether it indeed makes sense for the IMF to produce an annual estimate, for each country, of the total amount stolen or otherwise lost to grand corruption.
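To see just how mechanical that second step is, here is a minimal sketch; the loss estimate and unit costs below are hypothetical placeholders invented for illustration, not GFI's or the IMF's figures:

```python
# Hypothetical: once an annual grand-corruption loss estimate exists,
# the "opportunity cost" analysis is simple division by unit costs.
estimated_stolen = 500_000_000  # hypothetical annual loss for one country, USD

unit_costs = {  # hypothetical placeholder unit costs, USD
    "hospital beds": 1_000,
    "vaccine doses": 5,
    "teacher salaries (1 yr)": 10_000,
}

for item, cost in unit_costs.items():
    # Integer division: how many units the stolen sum could have bought
    print(f"{item}: {estimated_stolen // cost:,}")
```

The hard (and contested) part is the numerator, which is why the real question is whether the first analysis is feasible at all.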

I’m skeptical, despite my general enthusiasm for evidence-based policymaking and advocacy, and for more and better quantitative data on corruption. The reasons for my skepticism are as follows: Continue reading

New Podcast Episode, Featuring Oz Dincer

A new episode of KickBack: The Global Anticorruption Podcast is now available. This week’s episode features an interview with Professor Oguzhan “Oz” Dincer, the Director of the Institute for Corruption Studies at Illinois State University. In the interview, Professor Dincer and I discuss a range of topics, including new approaches to the challenges of measuring corruption, the concept of “legal corruption,” the role of cultural factors in influencing corrupt behavior (both internationally and within the United States), and troubling developments related to political corruption in Turkey.

You can find this episode, along with links to previous podcast episodes, at the following locations:

KickBack is a collaborative effort between GAB and the ICRN. If you like it, please subscribe/follow, and tell all your friends! And if you have suggestions for voices you’d like to hear on the podcast, just send me a message and let me know.

The Case of the Missing Exports: What Trade Discrepancies Mean for Anticorruption Efforts

In 2017, the Republic of Georgia sent $272 million in exports to its neighbor, Azerbaijan. The same year, Azerbaijan reported receiving $74 million—that’s not a typo—in imports from Georgia. Goods worth $198 million seemingly disappeared before they reached Azerbaijani customs. The gap is a big deal. Azerbaijan taxes imports just above 5% on average (weighted for trade), which means its treasury missed out on collecting roughly $10 million in tariffs—0.1% of all government spending in that year—from just a single trading partner.
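The back-of-the-envelope arithmetic is easy to check. Here is a minimal sketch using the figures above (the 5% rate is the reported trade-weighted average tariff, treated here as exact):

```python
# Mirror statistics: Georgia's reported exports to Azerbaijan vs.
# Azerbaijan's reported imports from Georgia (2017, USD)
georgia_reported_exports = 272_000_000
azerbaijan_reported_imports = 74_000_000

# Goods that "disappeared" between the two customs records
trade_gap = georgia_reported_exports - azerbaijan_reported_imports

avg_tariff_rate = 0.05  # Azerbaijan's approximate trade-weighted import tariff
lost_tariff_revenue = trade_gap * avg_tariff_rate

print(f"Trade gap: ${trade_gap / 1e6:.0f}M")                        # $198M
print(f"Estimated lost tariffs: ${lost_tariff_revenue / 1e6:.1f}M")  # $9.9M
```

The same subtraction, applied across partners and years, is what turns published trade tables into a rough corruption indicator.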

Many factors could explain the gap (see, for example, here, here, and here). Shippers might have rerouted goods to other destinations, the two countries’ customs offices might value goods differently, or the customs offices could have erred in reporting results or converting them to dollars. But one reason Azerbaijan’s reported imports are so low—not only here, but systemically across trade partners and years—is corruption and associated tariff evasion. Many traders likely undervalue and/or underreport their imports when going through Azerbaijani customs, and the sheer magnitude of the trade gap suggests the complicity or collusion of the authorities. The corruption involved might be petty (e.g., an importer bribing a customs officer to look the other way, or a customs officer pocketing the tax and leaving it off the books) or grand (e.g., a politician with a side business using her influence to shield imports from inspection; see here). A similar dynamic might also be at work in exporting countries: companies may undervalue exports to limit their income tax liability, possibly paying bribes to avoid audits.

Though Azerbaijan may be an extreme case, it is not unique. Economists have examined these export gaps (sometimes called “mirror statistics”) and have found similar discrepancies in, for example, Hong Kong’s exports to China, China’s exports to the United States, and Cambodia’s imports from all trading partners. Most recently, economists Derek Kellenberg and Arik Levinson compared trade data across almost all countries over an eleven-year time period, finding that “corruption plays an important role in the degree of misreports for both importers and exporters.” For lower-income countries, Professors Kellenberg and Levinson showed a positive relationship between a country’s level of perceived corruption, as measured by Transparency International’s Corruption Perceptions Index (CPI), and its underreporting of imports. The authors also showed a strong positive relationship between perceived corruption and the underreporting of exports across all countries.

Mirror statistics are an imperfect measure of customs corruption, to be sure, but they can serve two useful purposes in fighting this sort of corruption, and anticorruption reformers should pay more attention to this type of data. Continue reading

Some Good News and Bad News About Transparency International’s Interpretation of its Latest Corruption Perceptions Index

In my post last week, I fired off a knee-jerk reaction to Transparency International’s latest Corruption Perceptions Index (CPI). The message of that post was simple and straightforward: We shouldn’t attach much (or perhaps any) importance to short-term changes in any individual country’s or region’s CPI score, and the bad habit of journalists—and to some extent TI itself—of focusing on such changes is both misleading and counterproductive.

Since I was trying to get that post out quickly, so as to coincide with the release of the CPI, I published it before I’d had a chance to read carefully all of the material TI published along with the new CPI, and I promised that once I’d had a chance to look at those other materials, I would follow up if I had anything else to say. I’ve now had that chance, and I do have a few additional thoughts. The short version is that the way TI itself chose to present and discuss the implications of the 2018 CPI, in the accompanying materials, is both better and worse than I’d originally thought.

So, first, the bad news: Continue reading

A Reminder: Year-to-Year CPI Comparisons for Individual Countries are Meaningless, Misleading, and Should Be Avoided

Today, Transparency International released its new Corruption Perceptions Index (CPI) for 2018. At some point, hopefully soon, I’ll have time to look closely at the new data and accompanying materials, and if I have something to say about it, I’ll post it here. But that will probably take a while, and since the media coverage of the CPI is usually pretty intense in the first few days after the release, and dissipates in a week or two, I wanted to get out at least one post right now, on the day of the release, with a plea to everyone out there–especially journalists, but also civil society activists and others:

DO NOT COMPARE ANY GIVEN COUNTRY’S CPI SCORE TO LAST YEAR’S SCORE TO MAKE CLAIMS ABOUT WHAT’S HAPPENING IN THE FIGHT AGAINST CORRUPTION.

Just don’t do it. Don’t. I know the temptation can seem overwhelming. Who’s up? Who’s down? Things are getting better! Things are getting worse! Nothing is changing! So many stories can be written based on these changes (or non-changes).

But these sorts of comparisons are virtually all completely useless, and probably counterproductive. Continue reading

The Persistence of Phony Statistics in Anticorruption Discourse

Early last month, UN Secretary General António Guterres delivered some brief opening remarks to the Security Council at a meeting on the relationship between corruption and conflict. In these remarks, Secretary General Guterres cited a couple of statistics about the economic costs of corruption: an estimate, attributed to the World Economic Forum (WEF), that the global cost of corruption is $2.6 trillion (or 5% of global GDP), as well as another estimate, attributed to the World Bank, that individuals and businesses cumulatively pay over $1 trillion in bribes each year. And last week, in her opening remarks at the International Anti-Corruption Conference, former Transparency International chair Huguette Labelle repeated these same figures.

Those statistics, as I’ve explained in prior posts (see here and here), are bogus. I realize that Secretary General Guterres’ invocation of those numbers shouldn’t bother me so much, since these figures had no substantive importance in his speech, and the speech itself was just the usual collection of platitudes and bromides about how corruption is bad, how the international community needs to do more to fight it, that the UN is a key player in the global effort against corruption, blah blah blah. Ditto for Ms. Labelle–her speech used these numbers kind of like a rhetorical garnish, to underscore the point that corruption is widespread and harmful, a point with which I very much agree. But just on principle, I feel like it’s important to set the right tone for evidence-based policymaking by eschewing impressive-sounding numbers that do not stand up to even mild scrutiny. Just to recap: Continue reading

Guest Post–Assessing Corruption with Big Data

Today’s guest post is from Enestor Dos Santos, principal economist at BBVA Research.

Ascertaining the actual level of corruption is not easy, given that it is usually a clandestine activity, and much of the available data is not comparable across countries or across time. Survey data on corruption experience can be helpful, but it is often limited to very specific kinds of corruption (such as petty bribery). Researchers and analysts have therefore, quite reasonably, tended to rely on subjective corruption perception data, such as Transparency International’s well-known Corruption Perceptions Index (CPI). (The CPI aggregates corruption perception data from a variety of other sources, mostly expert assessments.) But conventional corruption perception measures (including those used to construct the CPI) have well-known problems, including limited coverage (with respect to both years and countries) and relatively low frequency (usually annual). And they rely on the perceptions of a handful of experts, which may not necessarily be representative. These limitations mean that while traditional perception measures like the CPI may be useful for some purposes, they are not as helpful for others, such as measuring the impact of individual events or news reports on corruption perceptions, or how changes in corruption perceptions affect government approval ratings.

To address these concerns, a recent study by BBVA Research, entitled Assessing Corruption with Big Data, offered an alternative, complementary type of corruption perceptions measure, based on Google web searches about corruption. To construct this index, we examined all web searches classified by Google Trends in the “Law and Government” category for individual countries, and calculated the proportion of those searches that contain the word “corruption” (in any language and including its misspellings and synonyms). Our index, which begins in 2004, covers more than 190 countries and, unlike traditional corruption indicators, is available in real-time and with high-frequency (monthly). Moreover, it can be reproduced very easily and at very low cost.
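The index construction described above can be sketched in a few lines. This is a toy illustration only: the monthly search counts below are invented, whereas the actual study draws on Google Trends’ “Law and Government” category data:

```python
# Toy sketch of a search-share corruption index: the share of a country's
# "Law and Government" web searches that mention "corruption".
# All counts below are hypothetical, invented for illustration.
monthly_searches = [
    # (month, law-and-government searches, of which mentioning "corruption")
    ("2004-01", 120_000, 3_600),
    ("2004-02", 115_000, 4_025),
    ("2004-03", 130_000, 3_250),
]

index = {
    month: corruption / total * 100  # share of category searches, in percent
    for month, total, corruption in monthly_searches
}

for month, value in index.items():
    print(f"{month}: {value:.2f}%")
```

Because the ratio is computed within a single category, it controls (roughly) for overall search volume, which is one reason a share-based index can be compared across countries and months.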

Here are some of our main findings: Continue reading