On the Political Subtext of Definition Debates, Part 1: Public vs. Private Sector Corruption

Since I started working in the anticorruption field a few years back, I’ve noticed that a substantial amount of the discussion in this field—at conferences, in journals, on blogs like this one, etc.—is given over to debates about definition and measurement. This is something I’ve discussed, and complained about, before (see here, here, and here)—though I concede that every time I bring this up, I’m contributing to the very problem I’m complaining about.

Now, one of the reasons there’s so much debate about definition and measurement in this field is that corruption is, relative to other concepts, particularly difficult to define and measure. Another reason—in my mind the main one—is that while “corruption” is sometimes used as a purely descriptive term (that is, to describe certain conduct, which we can try to measure empirically), it is also an evaluative/normative term—one that connotes “bad” behavior of a certain sort. So any attempt to define corruption (for purposes of positive analysis or empirical research) will often, perhaps inevitably, suggest a normative position on the sorts of conduct, people, or institutions that ought to be condemned.

That’s not an original point, nor even a terribly interesting one. But the more of these “what is corruption” conversations I’ve been a part of, the more I get the sense that there’s a more specific political/ideological subtext to some of the arguments about how corruption should be defined. Nobody ever articulates these ideas in so many words, and so I may be way off base, but I’m going to offer up some conjectures, in this post and in the next one, about what I sense is the ideological subtext of some of these definitional debates.

Here I’ll focus on a fairly narrow issue: Should those organizations that focus on (and sometimes try to measure) “corruption” emphasize forms of corruption that involve the public sector (government, or entities with a sufficiently close connection with government to be considered essentially public instrumentalities), or should the “anticorruption agenda”—as well as the definition and measurement of corruption—also include purely private sector corruption?

Two Essential Volumes on Corruption

The study of corruption and what to do about it is no longer an academic or policy-studies backwater. Matthew’s bibliography of corruption-related publications now lists over 6,000 books, articles, and reports and, as his regular updates show (thank you Matthew), the list continues to grow at a rate of some 50-plus per month. That is the good news. It is also of course the bad news. Few practitioners, and I suspect few academics, can claim to have absorbed the learning in the 6,000 current documents, let alone keep up with the outpouring of new works.

For those who can’t, I recommend two recent books: Dan Hough’s Analysing Corruption and Alina Mungiu-Pippidi and Michael Johnston’s Transitions to Good Governance: Creating Virtuous Circles of Anti-Corruption. Both do an excellent job of synthesizing and extending recent scholarship on corruption issues, and both do so in a sophisticated but accessible manner. Both have the added virtue of being available in reasonably priced paperback editions.

Corruption Discussion on “The Scholars’ Circle”

Last summer UCLA Professor Miriam Golden and I did a radio interview on political corruption for a program called The Scholars’ Circle, hosted by Maria Armoudian. I just learned that a recording of the program is available online, and I thought it might be of interest to some readers of this blog. The recording can be found here; the discussion about corruption begins at 17:16.

The relatively brief but wide-ranging discussion, skillfully moderated by Ms. Armoudian, touches on five major issues (issues that we’ve also covered on this blog):

  • How should we define corruption, and how can we try to measure it? (at 18:11-26:31 on the recording)
  • Possible factors that might contribute to the level of corruption, including economic development, governance systems (democracy v. autocracy), social norms, and culture (26:32-32:41)
  • Whether and how countries can make the transition from a state of endemic corruption to a state of manageable/limited corruption—as well as the risk of backsliding (32:52-47:32)
  • What will the impact of the Trump Administration be on corruption, and on norms of integrity and the rule of law, in the United States? (47:42-52:02)
  • What are some of the main remedies that can help make a system less corrupt? (52:03-56:34)

There’s obviously a limit to how deep one can go in a format like this, and the program is geared toward a non-specialist audience, but I hope some readers find the conversation useful in stimulating more thinking on the topics we covered. Thanks for listening!

Guest Post: Going Beyond Bribery? Improving the Global Corruption Barometer

Coralie Pring, Research Expert at Transparency International, contributes today’s guest post:

Transparency International has been running the Global Corruption Barometer (GCB) – a general population survey on corruption experience and perception – for a decade and a half now. Before moving ahead with plans for the next round of the survey, we decided to review the survey to see if we can improve it and make it more relevant to the current corruption discourse. In particular, we wanted to know whether it would be worthwhile to add extra questions on topics like grand corruption, nepotism, revolving doors, lobbying, and so forth. To that end, we invited 25 academics and representatives from some of Transparency International’s national chapters to a workshop last October to discuss plans for improving the GCB. We initially planned to focus on what we thought would be a simple question: Should we expand the GCB survey to include questions about grand corruption and political corruption?

In fact, this question was nowhere near simple to answer and it really divided the group. (Perhaps this should have been expected when you get 25 researchers in one room!) Moreover, the discussion ended up focusing less on our initial query about whether or how to expand the GCB, and more on two more basic questions: First, are citizen perceptions of corruption reflective of reality? And second, can information about citizen corruption perceptions still be useful even if they are not accurate?

Because these debates may be of interest to many of this blog’s readers, and because TI is still hoping to get input from a broader set of experts on these and related questions, we would like to share a brief summary of the workshop exchange on these core questions.

In Bribery Experience Surveys, Should You Control for Contact?

Perception-based corruption indicators, though still the most widely-used and widely-discussed measures of corruption at the country level, get a lot of criticism (some of it misguided, but much of it fair). The main alternative measures of corruption include experience surveys, which ask a representative random sample of firms or citizens about their experience with bribery. Corruption experience surveys are neither new nor rare, but they’re getting more attention these days as researchers and advocates look for more “objective” ways of assessing corruption levels and monitoring progress. Indeed, although some early discussions of measurement of progress toward the Sustainable Development Goals (SDGs) anticorruption target (Target 16.5) suggested—much to my chagrin—that changes in Transparency International’s Corruption Perceptions Index (CPI) score would be the main measure of progress, more recent discussions appear to indicate that in fact progress toward Target 16.5 will be assessed using experience surveys (see here and here).

Of course, corruption experience surveys have their own problems. Most obviously, they typically only measure a fairly narrow form of corruption (usually petty bribery). Also, there’s always the risk that respondents won’t answer truthfully. There’s actually been quite a bit of interesting recent research on that latter concern, which Rick discussed a while back and which I might post about more at some point. But for now, I want to put that problem aside to focus on a different challenge for bribery experience surveys: When presenting or interpreting the results of those surveys, should one control for the amount of contact the respondents have with government officials? Or should one focus on overall rates of bribery, without regard for whether or how frequently respondents interacted with the government?

To make this a bit more concrete, imagine two towns, A and B, each with 1,000 inhabitants. Suppose we survey every resident of both towns and we ask them two questions: First, within the past 12 months, have you had any contact with a government official? Second, if the answer to the first question was yes, did the government official demand a bribe? In Town A, 200 of the residents had contact with a government official, and of these 200, 100 of them reported that the government official they encountered solicited a bribe. In Town B, 800 residents had contact with a government official, and of these 800, 200 reported that the official solicited a bribe. If we don’t control for contact, we would say that bribery experience rates are twice as high in Town B (20%) as in Town A (10%). If we do control for contact, we would say that bribery experience rates were twice as high in Town A (50%) as in Town B (25%). In which town is bribery a bigger problem? In which one are the public officials more corrupt?
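The arithmetic behind the two towns can be sketched in a few lines of Python (the tallies are simply the hypothetical survey numbers from the example above):

```python
# Hypothetical survey tallies for the two towns described above.
towns = {
    "A": {"population": 1000, "contact": 200, "bribed": 100},
    "B": {"population": 1000, "contact": 800, "bribed": 200},
}

for name, t in towns.items():
    overall_rate = t["bribed"] / t["population"]    # NOT controlling for contact
    conditional_rate = t["bribed"] / t["contact"]   # controlling for contact
    print(f"Town {name}: overall {overall_rate:.0%}, "
          f"conditional on contact {conditional_rate:.0%}")

# Town A: overall 10%, conditional on contact 50%
# Town B: overall 20%, conditional on contact 25%
```

Note how the two denominators flip the ranking: Town B looks twice as corrupt on the overall rate, while Town A looks twice as corrupt once the rate is conditioned on contact.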

The answer is not at all obvious; both controlling for contact and not controlling for contact have potentially significant problems.

The 2016 CPI and the Value of Corruption Perceptions

Last month, Transparency International released its annual Corruption Perceptions Index (CPI). As usual, the release of the CPI has generated widespread discussion and analysis. Previous GAB posts have discussed many of the benefits and challenges of the CPI, with particular attention to the validity of the measurement and the flagrant misreporting of its results. The release of this year’s CPI, and all the media attention it has received, provides an occasion to revisit important questions about how the CPI should and should not be used by researchers, policymakers, and others.

As past posts have discussed, it’s a mistake to focus on the change in each country’s CPI score from the previous year. These changes are often due to changes in the sources used to calculate the score, and most of these changes are not statistically meaningful. As a quick check, I compared the confidence intervals for the 2015 and 2016 CPIs and found that, for each country included in both years, the confidence intervals overlap. (While this doesn’t rule out the possibility of statistically significant changes for some countries, it suggests that a more rigorous statistical test is required to see if the changes are meaningful.) Moreover, even though a few changes each year usually pass the conventional thresholds for statistical significance, with 176 countries in the data, we should expect some of them to exhibit statistical significance, even if in fact all changes are driven by random error. Nevertheless, international newspapers have already begun analyses that compare annual rankings, with headlines such as “Pakistan’s score improves on Corruption Perception Index 2016” from The News International, and “Demonetisation effect? Corruption index ranking improves but a long way to go” from the Hindustan Times. Alas, Transparency International sometimes seems to encourage this style of reporting, both by showing the CPI annual results in a table, and with language such as “more countries declined than improved in this year’s results.” After all, “no change” is no headline.
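The multiple-comparisons point can be made concrete with a back-of-the-envelope calculation (a sketch of my own, not TI’s actual significance testing): if the year-on-year score change for each of the 176 countries were tested independently at the conventional 5% level, noise alone would be expected to produce several “significant” changes.

```python
# Back-of-the-envelope multiple-comparisons illustration (my own sketch,
# not TI's methodology): 176 independent tests at a 5% significance level.
n_countries = 176
alpha = 0.05

# Expected number of spurious "significant" changes if every true change is zero.
expected_false_positives = n_countries * alpha

# Probability that at least one country shows a spurious "significant" change.
p_at_least_one = 1 - (1 - alpha) ** n_countries

print(f"Expected spurious 'significant' changes: {expected_false_positives:.1f}")
print(f"Probability of at least one: {p_at_least_one:.4f}")
```

Under these assumptions, pure noise yields roughly nine “statistically significant” movers per year, and the chance of at least one is a near-certainty, which is why a handful of significant year-on-year changes tells us very little on its own.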

Although certain uses of the CPI are inappropriate, such as comparing each country’s movement from one year to the next, this does not mean that the CPI is not useful. Indeed, some critics have the unfortunate tendency to dismiss the CPI out of hand, often emphasizing that corruption perceptions are not the same as corruption reality. That is certainly true—TI goes out of its way to emphasize this point with each release of a new CPI—but there are at least two reasons why measuring corruption perceptions is valuable.

Guest Post: The Metaphysics of “Corruption” (or, The Fundamental Challenge to Comparative Corruption Measurement)

GAB is pleased to welcome back Jacob Eisler, Lecturer at Cambridge University, who contributes the following guest post:

A couple months back, Matthew Stephenson and Michael Johnston engaged in a lively debate on the question of whether aggregate-level corruption data are useful, focusing on the appropriate level of methodological skepticism that should be directed towards large-scale efforts to quantify corruption (see here, here, here, and here). While this debate touched on a number of fascinating questions regarding how best to treat data regarding corruption, it drifted away from why Michael had a concern with overly aggressive quantification in the first place: Actually addressing corruption requires a “standard of goodness,” and the difficulty in coming up with such a standard explains why the social sciences have faced a “longstanding inability to come to a working consensus over how to define corruption.” In other words, when we talk about corruption, we are inevitably talking about something bad that suggests the vitiation or distortion of something good. It is difficult to conceptualize corruption except as a distortion of a non-objectionable political process—that is, political practice undertaken with integrity. This need not mean that there must be some shared first-order property of good governance; but it does suggest that there is a shared property to distorted or corrupted governance that must derive from some shared property of all politics.

If this idea of a “shared feature” is taken seriously, it would suggest those who argue for the value of comparative corruption metrics are making a very strong claim: that if you are comparing corruption within a country, or across countries, all the relevant polities and types of practice must have some shared feature, deviation from which counts as corruption. This shared feature in turn would be an aspect of governance. It could be any number of constants in human society – a constant feature of morality in governance, or tendencies of human anthropology. But in any case, this is a very distinctive and powerful claim, and one that requires strong assumptions or assertions regarding the nature of governance. To weave this back to the original dispute, our willingness to rely on quantitative metrics should depend on our level of commitment to our faith in this constant feature of politics that makes corruption a transferable, or, more aggressively put, “universal” thing. Our use of these homogenizing empirical metrics implies that we are committed to the robustness of the constant feature. Yet it doesn’t seem like this conceptual work has been done. Continue reading