Tag Archives: measurement
UNODC Statistical Framework to Measure Corruption: Comments Requested
Within the global anticorruption community, no topic has generated as much discussion as the measurement issue. Start with the most basic of questions: Is there an agreed-upon definition of corruption? Get past the heated objections to the claim that there is none, and next consider: are there ways to measure something that is by its nature clandestine? Take for granted that clever social scientists can, then ask whether these measures are comparable. Across time? Across different nations?
The methodological and epistemological debates over such questions have raged in the academy for decades. But as corruption has gained ever more salience as a policy issue, the debate has ranged far beyond the academy. Just ask any political leader forced to explain to citizens why his or her country scored poorly on some corruption-rating scale.
The United Nations Office on Drugs and Crime has now brought needed clarity to the debate. At the request of the 189 states parties to the U.N. Convention Against Corruption, it has published the first draft of a comprehensive statistical framework to measure corruption (here), along with a form for providing comments (here).
Bearing in mind my bias (I contributed, very slightly, and have promised more comments), I think the draft is a first-class piece of work. Here are two of many reasons why.
Continue reading
Some Reflections on the Meaning of Anticorruption “Success”
Last month, we had a spirited debate in the anticorruption blogosphere about the conceptualization of corruption, academic approaches to the study of the topic, and the relationship between research and practice. (The debate was prompted by a provocative piece by Bo Rothstein, to which I replied; my critical reaction prompted a sur-reply from Professor Rothstein, which was followed by further contributions from Robert Barrington, Paul Heywood, and Michael Johnston.) I’ve been thinking a bit more about one small aspect of that stimulating exchange: How do we, or should we, think about evaluating the success (or lack thereof) of an anticorruption policy or other intervention? I was struck by the very different assessments that several of the participants in last month’s exchange had regarding whether the anticorruption reform movement had been “successful,” and this got me thinking that although part of the divergence of opinion might be due to different interpretations of the evidence, part of what’s going on might be different understandings of what “success” does or should mean in this context.
That observation, in turn, connected to another issue that’s been gnawing at me for a while, one I’ve been having trouble putting into words—but I’m going to take a stab at it in this post. My sense is that when it comes to defining and measuring “success” in the context of anticorruption reform (and probably many other contexts too), there’s a fundamental tension between two conflicting impulses: Continue reading
Guest Post: Succeeding or Failing… at What?
Today’s guest post is from Michael Johnston, Professor of Political Science Emeritus at Colgate University:
A bracing and long overdue debate has surfaced recently on this and other blogs, focusing primarily upon the issue of whether anticorruption efforts have failed but also raising important questions about definitions, theory, analytical methods and—not least—the norms of scholarly discourse. Entries by Bo Rothstein, Matthew Stephenson, Robert Barrington, and Paul Heywood offer searching critiques and a number of cautionary tales that I will certainly take to heart.
The discussions raise many more questions than I can analyze in this short discussion, but as for the issue that launched the exchange—whether many or most anticorruption efforts have failed—my answer is to raise another question: How would we know? To that I add a critical follow-up: If we were to see significant success, what might it look like? The first question, I suggest, has no single clear-cut answer, and never will. As for the second: In my view success would revolve not around levels of corruption, but around the prevalence of justice. Continue reading
Guest Post: Contesting the Narrative of Anticorruption Failure
Today’s guest post is from Robert Barrington, currently a professor of practice at the University of Sussex’s Centre for the Study of Corruption, who previously served as the executive director of Transparency International UK, where he worked for over a decade.
I have read with great interest the recent exchange of views between Professor Bo Rothstein and Professor Matthew Stephenson on the academic study of corruption and anticorruption. As an anticorruption practitioner who now works within an academic research center, I was particularly struck by how their exchange (Professor Rothstein’s initial post, Professor Stephenson’s critique, and Professor Rothstein’s reply) surfaced some extremely important issues for anticorruption scholarship, its purposes, and its relationship to anticorruption practice.
I find it hard to agree with Professor Rothstein’s analysis, but this is before even looking at his points of difference with Professor Stephenson. My main beef with Professor Rothstein’s analysis is with his starting assumption of widespread failure. Like so many prominent scholars who study corruption, he proceeds from the premise that pretty much all of the anticorruption reform activity over the last generation has failed. He asserts that “[d]espite huge efforts from international development organizations, we have seen precious little success combating corruption,” that anticorruption reform efforts have been a “huge policy failure,” and sets out to explain “[w]hy … so many anti-corruption programs [have] not delivered[.]” Professor Rothstein then offers three main answers, which Professor Stephenson criticizes.
In taking this downbeat view, Professor Rothstein is not alone. The scholarship of failure on this subject lists among its adherents many of the most prominent academic voices in the field. Professor Alina Mungiu-Pippidi has framed as a central question in corruption scholarship, “[W]hy do so many anticorruption reform initiatives fail?” Professor Michael Johnston asserts that “the results of anticorruption reform initiatives, with very few exceptions, have been unimpressive, or even downright counter-productive.” Professor Paul Heywood, notable for the nuance he generally brings to anticorruption analysis, asserts that there has been a “broad failure of anticorruption policies” in developing and developed countries alike. And many scholars proceed to reason backwards from that starting point of failure: If anticorruption reform efforts have been an across-the-board failure, it must be because anticorruption practitioners are doing things in the wrong way, which is because they are proceeding from an entirely wrongheaded set of premises. The principal problems identified by these scholars, perhaps not coincidentally, are those where academics might have a comparative advantage over practitioners: use of the wrong definition of corruption, use of the wrong social science framework to understand corruption, and (as Professor Rothstein puts it) locating corruption in the “wrong social spaces.”
That so many distinguished scholars have advanced something like this assessment makes me wary, as a practitioner, of offering a different view. But I do see things differently. In my view, both the initial assessment (that anticorruption reform efforts have been an across-the-board failure) and the diagnosis (that this failure is due to practitioners not embracing the right definitions and theories) are incorrect; they are more than a little unfair, and potentially harmful. I want to emphasize that this different take should not be considered an attack on eminent scholars, but rather a genuine effort to tease out why, when presented with the same evidence, some academics see failure while many practitioners see success. Here goes: Continue reading
How Reliable Are Global Quantitative Corruption Statistics? A New U4 Report Suggests the Need for Caution
Those who work in the anticorruption field are likely familiar with the frequent citation of quantitative estimates of the amount and impact of global corruption. Indeed, it has become commonplace for speeches and reports about the corruption problem to open with such statistics—including, for example, the claim that approximately US$1 trillion in bribes are paid each year, the claim that corruption costs the global economy US$2.6 trillion (or 5% of global GDP) annually, and the claim that each year 10-25% of government procurement spending is lost to corruption. How reliable are these quantitative estimates? This is a topic we’ve discussed on the blog before: A few years back I did a couple of posts suggesting some skepticism about the US$1 trillion and US$2.6 trillion numbers (see here, here, here, and here), which were followed by some even sharper criticisms from senior GAB contributor Rick Messick and guest poster Maya Forstater.
This past year, thanks to the U4 Anti-Corruption Resource Centre, I had the opportunity to take a deeper dive into this issue in collaboration with Cecilie Wathne (formerly a U4 Senior Advisor, now a Project Leader at Norway’s Institute for Marine Research). The result of our work is a U4 Issue published last month, entitled “The Credibility of Corruption Statistics: A Critical Review of Ten Global Estimates.” (A direct link to the PDF version of the paper is here.)
In the paper, Cecilie and I identified and reviewed ten widely-cited quantitative estimates concerning corruption (including the three noted above), tried to trace these figures back to their original sources, and assessed their credibility and reliability. While the report provides a detailed discussion of what we found regarding the origins of each estimate, we also classified each of the ten into one of three categories: credible, problematic, or unfounded.
Alas, we could not rate any of these ten widely-cited statistics as credible (and only two came close). Six of the ten are problematic (sometimes seriously so), and the other four are, so far as we can tell, entirely unfounded. Interested readers can refer to the full report, but just to provide a bit more information about the statistics we investigated and what we found, let me reproduce here the summary table from the paper, and also try to summarize our principal suggestions for improving the use of quantitative evidence in discussions of global corruption: Continue reading
An Inside View of Corruption in a Tax and Customs Agency
Information derived from the direct observation of corrupt behavior provides insights no other source can match. From first-hand reports of the number and amount of bribes Indonesian truck drivers paid to traverse different provinces, Barron and Olken reached important conclusions about centralized versus decentralized bribery schemes. Data Sequeira and Djankov gathered from South African and Mozambican clearing agents on bribery at their nations’ ports and border posts allowed the two to show how differences in tariff rates and uncertainty over the expected bribe amount affected firms’ behavior. The resourcefulness these and other researchers displayed in compiling direct evidence of corruption and the thoughtful, sometimes counter-intuitive conclusions their analysis yielded are summarized in this first-rate review essay by Sequeira.
As rich a source of learning on corruption as it is, collecting direct observation data is no mean feat. Those committing corruption crimes don’t generally invite nosy observers to watch and record their actions. That is why it was especially welcome when a friend and colleague shared the parts of an interview with the head of a Latin American customs and tax agency that touched on corruption. The agency head’s insider view, though informed by training as a professional economist and a background in academia, offers nothing close to what readers can take from Barron and Olken, Sequeira and Djankov, and other full-blown academic studies. Nonetheless, what he reports raises interesting, provocative issues of use to reformers and to those looking for hypotheses worth testing.
The portion of the interview dealing with corruption, anonymized to protect the source, is below. Would other insiders please come forward? Again, it is doubtful your observations will be anywhere near as valuable as the data the Barrons, Olkens, Sequeiras, and Djankovs of the world have so cleverly and painstakingly collected, but in an information-scarce environment, all contributions are welcome. GAB would be more than happy to publish what you have observed about corruption in your organization, with safeguards to protect your identity. Continue reading
Must the IMF Quantify Grand Corruption? A Friendly-But-Skeptical Reply to Global Financial Integrity
The World Bank and IMF held their annual meetings last week, and it appears from the agenda that considerable attention was devoted to corruption—an encouraging sign that these organizations continue to treat this problem as both serious and relevant to their work. But does addressing the corruption problem effectively require that these organizations make more of an effort to quantify the problem? In a provocative post last week on Global Financial Integrity’s blog, Tom Cardamone (GFI’s President) and Maureen Heydt (GFI’s Communications Coordinator) argue that the answer is yes. In particular, they argue that the IMF should “undertake two analyses”: First the IMF “should conduct an annual assessment of grand corruption in all countries and publish the dollar value of that analysis.” Second, the IMF “should conduct an opportunity cost analysis of [] stolen assets”—calculating, for example, how many hospital beds or vaccines the stolen money could have purchased, or how many school teachers could have been hired.
This second analysis is more straightforward, and dependent on the first—once we know the dollar value of stolen assets (or grand corruption more generally), it’s not too hard to do some simple division to show how that money might otherwise have been spent. So it seems to me that the real question is whether it indeed makes sense for the IMF to produce an annual estimate, for each country, of the total amount stolen or otherwise lost to grand corruption.
I’m skeptical, despite my general enthusiasm for evidence-based policymaking and advocacy, and my belief in the need for more and better quantitative data on corruption. The reasons for my skepticism are as follows: Continue reading
New Podcast Episode, Featuring Oz Dincer
A new episode of KickBack: The Global Anticorruption Podcast is now available. This week’s episode features an interview with Professor Oguzhan “Oz” Dincer, the Director of the Institute for Corruption Studies at Illinois State University. In the interview, Professor Dincer and I discuss a range of topics, including new approaches to the challenges of measuring corruption, the concept of “legal corruption,” the role of cultural factors in influencing corrupt behavior (both internationally and within the United States), and troubling developments related to political corruption in Turkey.
You can find this episode, along with links to previous podcast episodes, at the following locations:
- The Interdisciplinary Corruption Research Network (ICRN) website
- iTunes
- Soundcloud
- Stitcher
- Spotify
KickBack is a collaborative effort between GAB and the ICRN. If you like it, please subscribe/follow, and tell all your friends! And if you have suggestions for voices you’d like to hear on the podcast, just send me a message and let me know.
The Case of the Missing Exports: What Trade Discrepancies Mean for Anticorruption Efforts
In 2017, the Republic of Georgia sent $272 million in exports to its neighbor, Azerbaijan. The same year, Azerbaijan reported receiving $74 million—that’s not a typo—in imports from Georgia. Goods worth $198 million seemingly disappeared before they reached Azerbaijani customs. The gap is a big deal. Azerbaijan taxes imports at just above 5% on average (trade-weighted), which means its treasury missed out on collecting roughly $10 million in tariffs—0.1% of all government spending that year—from just a single trading partner.
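For readers who want to check the arithmetic, the back-of-the-envelope estimate above can be reproduced in a few lines (all figures come from the paragraph above; the 5% rate is the approximate trade-weighted average, so the result is an estimate, not an exact figure):

```python
# Reproducing the post's back-of-the-envelope estimate of tariff revenue
# lost to the Georgia-Azerbaijan trade gap in 2017.

georgia_reported_exports = 272_000_000    # USD, exports Georgia reported sending
azerbaijan_reported_imports = 74_000_000  # USD, imports Azerbaijan reported receiving
avg_tariff_rate = 0.05                    # approximate trade-weighted average tariff

trade_gap = georgia_reported_exports - azerbaijan_reported_imports
lost_tariff_revenue = trade_gap * avg_tariff_rate

print(f"Trade gap: ${trade_gap / 1e6:.0f} million")                  # $198 million
print(f"Estimated lost tariffs: ${lost_tariff_revenue / 1e6:.1f} million")  # ~$9.9 million
```

The precision here is illusory, of course: the true loss depends on which goods went missing and their individual tariff rates, which is why the post rounds to "roughly $10 million."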
Many factors could explain the gap (see, for example, here, here, and here). Shippers might have rerouted goods to other destinations, the two countries’ customs offices might value goods differently, or the customs offices could have erred in reporting results or converting them to dollars. But one reason Azerbaijan’s reported imports are so low—not only here, but systemically across trade partners and years—is corruption and associated tariff evasion. Many traders likely undervalue and/or underreport their imports when going through Azerbaijani customs, and the sheer magnitude of the trade gap suggests the complicity or collusion of the authorities. The corruption involved might be petty (e.g., an importer bribing a customs officer to look the other way, or a customs officer pocketing the tax and leaving it off the books) or grand (e.g., a politician with a side business using her influence to shield imports from inspection; see here). A similar dynamic might also be at work in exporting countries: companies may undervalue exports to limit their income tax liability, possibly paying bribes to avoid audits.
Though Azerbaijan may be an extreme case, it is not unique. Economists have examined these trade gaps using “mirror statistics” (comparing the two partners’ reports of the same trade flow) and have found similar discrepancies in, for example, Hong Kong’s exports to China, China’s exports to the United States, and Cambodia’s imports from all trading partners. Most recently, economists Derek Kellenberg and Arik Levinson compared trade data across almost all countries over an eleven-year period, finding that “corruption plays an important role in the degree of misreports for both importers and exporters.” For lower-income countries, Professors Kellenberg and Levinson showed a positive relationship between a country’s level of perceived corruption, as measured by Transparency International’s Corruption Perceptions Index (CPI), and its underreporting of imports. The authors also showed a strong positive relationship between perceived corruption and the underreporting of exports across all countries.
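For readers curious how a mirror-statistics comparison works mechanically, here is a minimal sketch. The figures below are illustrative (only the Azerbaijan row echoes the numbers in this post); real analyses pull from sources such as UN Comtrade and adjust for freight and insurance (CIF/FOB) valuation differences before inferring evasion:

```python
import pandas as pd

# Mirror statistics: compare what an exporter reports sending against what
# each partner reports receiving for the same trade flow. Illustrative data;
# real studies adjust for CIF/FOB valuation before interpreting the gap.

exporter_reports = pd.DataFrame({
    "partner": ["Azerbaijan", "Armenia", "Turkey"],
    "exports_usd": [272e6, 150e6, 400e6],   # exporter's reported outflows
})
importer_reports = pd.DataFrame({
    "partner": ["Azerbaijan", "Armenia", "Turkey"],
    "imports_usd": [74e6, 140e6, 390e6],    # partners' reported inflows
})

merged = exporter_reports.merge(importer_reports, on="partner")
merged["gap_usd"] = merged["exports_usd"] - merged["imports_usd"]
merged["gap_share"] = merged["gap_usd"] / merged["exports_usd"]

# Large positive gaps flag flows worth investigating for under-reporting.
print(merged.sort_values("gap_share", ascending=False))
```

A persistent, large gap across many partners and years (as in Azerbaijan's case) is what distinguishes likely evasion from ordinary measurement noise such as re-routing or currency-conversion errors.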
Mirror statistics are an imperfect measure of customs corruption, to be sure, but they can serve two useful purposes in fighting this sort of corruption, and anticorruption reformers should pay more attention to this type of data. Continue reading