New Podcast Episode, Featuring Elizabeth Dávid-Barrett and Roxana Bratu

A new episode of KickBack: The Global Anticorruption Podcast is now available. In the latest episode, host Dan Hough interviews Liz Dávid-Barrett (Professor at the University of Sussex) and Roxana Bratu (Senior Lecturer at King’s College London) about debates over corruption measurement. The conversation touches on a range of questions, including: How has the measurement of corruption changed over the past three decades? What are the best tools currently available for measuring corruption, and what are the strengths and weaknesses of these tools? And what do users actually want from corruption measurement tools?

You can also find both this episode and an archive of prior episodes at the following locations:

KickBack was originally founded as a collaborative effort between GAB and the Interdisciplinary Corruption Research Network (ICRN). It is now hosted and managed by the University of Sussex’s Centre for the Study of Corruption. If you like it, please subscribe/follow, and tell all your friends!

Guest Post: Against the “More-Is-Better” Principle in Corruption Survey Design

Frederic Lesne, a researcher at CERDI/Clermont Auvergne University (France), contributes today’s guest post:

A series of recent posts on this blog has addressed a persistent difficulty with corruption experience surveys: the reticence problem, that is, the reluctance of respondents to give honest answers to questions about sensitive behaviors, which may be caused by fear of retaliation or by “social desirability” bias (the fear of “looking bad” to an interviewer; see here, here, and here). Various techniques have been developed to try to mitigate the reticence problem, leading to a range of different survey designs.

How can we tell if a corruption survey is well-designed? Some researchers, attuned to concerns about social desirability bias, implicitly or explicitly apply what some have dubbed the more-is-better principle. According to this criterion, the best wording for a sensitive question is the one that produces the highest estimates of the sensitive behavior (and the lowest non-response rates).

Yet there are reasons to question the more-is-better principle. Changing the wording of a sensitive question may alter not only its sensitivity but also respondents’ understanding of the question and their ability to answer it. This may introduce a measurement bias that causes the modified wording to produce higher estimates of the behavior, not because it mitigates social desirability bias more effectively, but because it exacerbates other forms of bias or inaccuracy. Consider a few examples: Continue reading

Guest Post: Going Beyond Bribery? Improving the Global Corruption Barometer

Coralie Pring, Research Expert at Transparency International, contributes today’s guest post:

Transparency International has been running the Global Corruption Barometer (GCB) – a general population survey on corruption experience and perception – for a decade and a half now. Before moving ahead with plans for the next round of the survey, we decided to review it to see whether we could improve it and make it more relevant to the current corruption discourse. In particular, we wanted to know whether it would be worthwhile to add extra questions on topics like grand corruption, nepotism, revolving doors, lobbying, and so forth. To that end, we invited 25 academics and representatives from some of Transparency International’s national chapters to a workshop last October to discuss plans for improving the GCB. We initially planned to focus on what we thought would be a simple question: Should we expand the GCB survey to include questions about grand corruption and political corruption?

In fact, this question was nowhere near simple to answer and it really divided the group. (Perhaps this should have been expected when you get 25 researchers in one room!) Moreover, the discussion ended up focusing less on our initial query about whether or how to expand the GCB, and more on two more basic questions: First, are citizen perceptions of corruption reflective of reality? And second, can information about citizen corruption perceptions still be useful even if they are not accurate?

Because these debates may be of interest to many of this blog’s readers, and because TI is still hoping to get input from a broader set of experts on these and related questions, we would like to share a brief summary of the workshop exchange on these core questions. Continue reading

Guest Post: Refining Corruption Surveys To Identify New Opportunities for Social Change

GAB is delighted to welcome back Dieter Zinnbauer, Programme Manager at Transparency International, who contributes the following guest post:

Household corruption surveys, such as Transparency International’s Global Corruption Barometer (GCB) are primarily, and very importantly, focused on tracking the scale and scope of citizens’ personal bribery experience and their general perceptions about corruption levels in different institutions. More recently, the GCB has branched out into questions about what kind of action against corruption people do or do not take, and why. The hope is that better understanding what motivates people to take action against corruption will help groups like TI develop more effective advocacy and mobilization strategies.

In addition to these direct questions about why people say they do or don’t take action against corruption, household surveys have the potential to help advocacy groups in their efforts to mobilize citizens in another way as well: by identifying inconsistencies or discrepancies between people’s experience of corruption and their perceptions of corruption. The existence of these gaps is not in itself surprising, but learning more about them might help advocates craft strategies for changing both behavior and beliefs. Consider the following examples: Continue reading

Another Way To Improve the Accuracy of Corruption Surveys: The Crosswise Model

Today’s post is yet another entry in what I guess has become a mini-series on corruption experience surveys. In the first post, from a few weeks back, I discussed the question whether, when trying to assess and compare bribery prevalence across jurisdictions using such surveys, the correct denominator should be all respondents, or only those who had contact with government officials. That post bracketed questions about whether respondents would honestly admit bribery in light of the “social desirability bias” problem (the reluctance to admit, even on an anonymous survey, that one has engaged in socially undesirable activities). My two more recent posts have focused on that problem, first criticizing one of the most common strategies for mitigating the social desirability bias problem (indirect questioning), and then, in last week’s post, trying to be a bit more constructive by calling attention to one potentially more promising solution, the so-called unmatched count technique (UCT), also known as the item count technique or list method. Today I want to continue in that latter vein by calling attention to yet another strategy for ameliorating social desirability bias in corruption surveys: the “crosswise model.”

As with the UCT, the crosswise model was developed outside the corruption field (see here and here) and has been deployed in other areas, but it has only recently been introduced into survey work on corruption. The scholars responsible for pioneering the use of the crosswise model in the study of corruption are Daniel Gingerich, Virginia Oliveros, Ana Corbacho, and Mauricio Ruiz-Vega, in (so far) two important papers, the first of which focuses primarily on the methodology, and the second of which applies the method to address the extent to which individual attitudes about corruption are influenced by beliefs about the extent of corruption in the society. (Both papers focus on Costa Rica, where the survey was fielded.) Those who are interested should check out the original papers by following the links above. Here I’ll just try to give a brief, non-technical flavor of the technique, and say a bit about why I think it might be useful not only for academics conducting their particular projects, but also for organizations that regularly field more comprehensive surveys on corruption, such as Transparency International’s Global Corruption Barometer.

The basic intuition behind the crosswise model is actually fairly straightforward, though it might not be immediately intuitive to everyone. Here’s the basic idea: Continue reading
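In the meantime, for readers who want a concrete sense of how the estimator works, here is a minimal sketch, with hypothetical numbers of my own invention (this is an illustration of the standard crosswise estimator, not code from the Gingerich et al. papers). Each respondent sees two statements, one sensitive and one innocuous whose population prevalence p is known in advance, and reports only whether the answers to the two statements are the same or different. Because the probability of answering “same” is pi*p + (1-pi)*(1-p), the prevalence pi of the sensitive behavior can be backed out whenever p differs from 0.5:

```python
# Minimal sketch of the crosswise-model estimator (hypothetical numbers;
# an illustration, not code from the papers discussed above).

import math

def crosswise_estimate(n_same: int, n_total: int, p: float):
    """Estimate the prevalence pi of the sensitive trait.

    P(same) = pi*p + (1 - pi)*(1 - p), so
    pi = (lam + p - 1) / (2*p - 1), identified only when p != 0.5.
    """
    if math.isclose(p, 0.5):
        raise ValueError("p must differ from 0.5 for identification")
    lam = n_same / n_total                    # observed share answering "same"
    pi_hat = (lam + p - 1) / (2 * p - 1)      # method-of-moments estimate
    # Delta-method standard error, treating lam as a binomial proportion.
    se = math.sqrt(lam * (1 - lam) / n_total) / abs(2 * p - 1)
    return pi_hat, se

# Suppose the innocuous statement is "my mother was born in January or
# February" (p roughly 2/12), and 660 of 1,000 respondents answer "same".
pi_hat, se = crosswise_estimate(n_same=660, n_total=1000, p=2/12)
print(f"estimated prevalence: {pi_hat:.2f} (SE {se:.3f})")  # ~0.26 (0.022)
```

The privacy protection comes from the fact that a “same” answer is consistent with both having and not having the sensitive trait, so no individual response is revealing.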

Using the Unmatched Count Technique (UCT) to Elicit More Accurate Answers on Corruption Experience Surveys

With apologies to those readers who couldn’t care less about methodological issues associated with corruption experience surveys, I’m going to continue the train of thought I began in my last two posts (here and here) with further musings on that theme—in particular what survey researchers refer to as the “social desirability bias” problem (the reluctance of survey respondents to truthfully answer questions about sensitive behaviors like corruption). Last week’s post emphasized the seriousness of this concern and voiced some skepticism about whether one of the most common techniques for addressing it (so-called “indirect questioning,” in which respondents are asked not about their own behavior but about the behavior of people “like them” or “in their line of business”) actually works as well as is commonly assumed.

We professors, especially those of us who like to write blog posts, often get a bad rap for criticizing everything in sight but never offering any constructive solutions. The point is well taken, and while I can’t promise to lay off the criticism, in today’s post I want to try to be at least a little bit constructive by calling attention to a promising alternative approach to mitigating the social desirability bias problem in corruption experience surveys: the unmatched count technique (UCT), sometimes alternatively called the “item count” or “list” method. This approach has been deployed occasionally by a few academic researchers working on corruption, but it doesn’t seem to have been picked up by the major organizations that field large-scale corruption experience surveys, such as Transparency International’s Global Corruption Barometer (GCB), the World Bank’s Enterprise Surveys (WBES), or the various regional surveys (like AmericasBarometer or Afrobarometer). So it seemed worthwhile to try to draw more attention to the UCT. It’s by no means a perfect solution, and I’ll say a little bit more about costs and drawbacks near the end of the post. But the UCT is nonetheless worth serious consideration, both by other researchers designing their own surveys for individual research projects, and by more established organizations that regularly field surveys on corruption experience.

The way a UCT question works is roughly as follows: Continue reading
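In brief, and as a hedged sketch of my own rather than a substitute for the fuller explanation: a control group reports how many items on a list of innocuous behaviors apply to them, a treatment group gets the same list plus the sensitive item, and the difference in mean counts estimates the sensitive item’s prevalence, since no respondent ever reveals which particular items apply:

```python
# Minimal sketch of the unmatched count technique (UCT) estimator, with
# made-up toy data. Control respondents count how many of 4 innocuous
# items apply to them; treatment respondents get the same list plus a
# sensitive item (e.g., "I paid a bribe in the past year"). Only the
# total count is reported, never which items apply.

import statistics

def uct_estimate(control_counts, treatment_counts):
    """Difference-in-means estimate of the sensitive item's prevalence."""
    diff = statistics.mean(treatment_counts) - statistics.mean(control_counts)
    # Standard error for a difference of two independent means.
    se = (statistics.variance(treatment_counts) / len(treatment_counts)
          + statistics.variance(control_counts) / len(control_counts)) ** 0.5
    return diff, se

control = [1, 2, 0, 3, 2, 1, 2, 1, 0, 2]     # counts out of 4 items
treatment = [2, 2, 1, 3, 3, 1, 2, 2, 1, 3]   # counts out of 5 items
pi_hat, se = uct_estimate(control, treatment)
print(f"estimated prevalence: {pi_hat:.2f} (SE {se:.2f})")  # 0.60 (0.40) here
```

Note how noisy the toy estimate is: because the estimate comes from a difference of means, UCT requires fairly large samples, which is one of the costs alluded to above.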

Can Indirect Questioning Induce Honest Responses on Bribery Experience Surveys?

As I noted in my last post, bribery experience surveys – of both firms and citizens – are increasingly popular as a tool not only for testing hypotheses about corruption’s causes and effects, but for measuring the effectiveness of anticorruption policies, for example in the context of assessing progress toward the Sustainable Development Goals’ anticorruption targets. Bribery experience surveys are thought to have a number of advantages over perception-based indicators, greater objectivity chief among them.

I certainly agree that bribery experience surveys are extremely useful and have contributed a great deal to our understanding of corruption’s causes and effects. They’re not perfect, but no indicator is; different measures have different strengths and weaknesses, and we just need to use caution when interpreting any given set of empirical results. In that spirit, though, I do think the anticorruption community should subject these experience surveys to a bit more critical scrutiny, comparable to the extensive exploration in the literature of the myriad shortcomings of corruption perception indicators. In last week’s post, I focused on the question of the correct denominator to use when calculating bribery victimization rates – all citizens, or all citizens who have had (a certain level of) contact with the bureaucracy? Today I want to focus on a different issue: What can we do about the fact that survey respondents might be reluctant to answer corruption questions truthfully?

The observation that survey respondents might be reluctant to truthfully answer questions about their personal bribery experience is neither new nor surprising. Survey respondents confronted with sensitive questions often have a tendency to give the answer that they think they “should” give (or that they think the interviewer wants to hear); social scientists call this tendency “social desirability bias.” There’s quite robust evidence that social desirability bias affects surveys on sensitive topics, including corruption; even more troubling, the available evidence suggests that social desirability bias on corruption surveys is neither constant nor randomly distributed, but rather varies across countries and regions. This means that apparent variations in bribery experience rates might actually reflect variations in willingness to truthfully answer questions about bribery experience, rather than (or in addition to) variations in actual bribery experience (see here, here, and here).

So, what can we do about this problem? Existing surveys use a range of techniques. Here I want to focus on one of the most popular: the “indirect questioning” approach. The idea is that instead of asking a respondent, “How much money does your firm have to spend each year on informal payments to government officials?”, you instead ask, “How much money does a typical firm in your line of business have to spend each year on informal payments to government officials?” (It’s perhaps worth noting that the indirect questioning method seems more ubiquitous in firm/manager surveys; many of the most prominent household surveys, such as the International Crime Victims Survey and the Global Corruption Barometer, ask directly about the household’s experience rather than asking about “households like yours.”) The hope is that asking the question indirectly will reduce social desirability bias, because the respondent doesn’t have to admit that he or she (or his or her firm) engaged in illegal activity.

Is that hope justified? I don’t doubt that indirect questioning helps to some extent. But I confess that I’m skeptical, on both theoretical and empirical grounds, that indirect questioning is the silver bullet solution for social desirability bias that some researchers seem to suggest that it is. Continue reading

In Bribery Experience Surveys, Should You Control for Contact?

Perception-based corruption indicators, though still the most widely used and widely discussed measures of corruption at the country level, get a lot of criticism (some of it misguided, but much of it fair). The main alternative measures of corruption include experience surveys, which ask a representative random sample of firms or citizens about their experience with bribery. Corruption experience surveys are neither new nor rare, but they’re getting more attention these days as researchers and advocates look for more “objective” ways of assessing corruption levels and monitoring progress. Indeed, although some early discussions of how to measure progress toward the Sustainable Development Goals’ (SDGs) anticorruption target (Target 16.5) suggested—much to my chagrin—that changes in Transparency International’s Corruption Perceptions Index (CPI) score would be the main measure of progress, more recent discussions appear to indicate that progress toward Target 16.5 will in fact be assessed using experience surveys (see here and here).

Of course, corruption experience surveys have their own problems. Most obviously, they typically measure only a fairly narrow form of corruption (usually petty bribery). Also, there’s always the risk that respondents won’t answer truthfully. There’s actually been quite a bit of interesting recent research on that latter concern, which Rick discussed a while back and which I might post about more at some point. But for now, I want to put that problem aside to focus on a different challenge for bribery experience surveys: When presenting or interpreting the results of those surveys, should one control for the amount of contact the respondents have with government officials? Or should one focus on overall rates of bribery, without regard for whether or how frequently respondents interacted with the government?

To make this a bit more concrete, imagine two towns, A and B, each with 1,000 inhabitants. Suppose we survey every resident of both towns and we ask them two questions: First, within the past 12 months, have you had any contact with a government official? Second, if the answer to the first question was yes, did the government official demand a bribe? In Town A, 200 of the residents had contact with a government official, and of these 200, 100 of them reported that the government official they encountered solicited a bribe. In Town B, 800 residents had contact with a government official, and of these 800, 200 reported that the official solicited a bribe. If we don’t control for contact, we would say that bribery experience rates are twice as high in Town B (20%) as in Town A (10%). If we do control for contact, we would say that bribery experience rates are twice as high in Town A (50%) as in Town B (25%). In which town is bribery a bigger problem? In which one are the public officials more corrupt?
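For concreteness, here is the same arithmetic in a few lines of code (a trivial sketch using only the numbers from the example above):

```python
# The Town A / Town B example: the ranking of the two towns flips
# depending on which denominator you use.

def bribery_rates(population: int, contact: int, bribed: int):
    """Return (overall rate, rate among those who had contact)."""
    return bribed / population, bribed / contact

towns = {"A": (1000, 200, 100), "B": (1000, 800, 200)}
for name, (population, contact, bribed) in towns.items():
    overall, conditional = bribery_rates(population, contact, bribed)
    print(f"Town {name}: overall {overall:.0%}, given contact {conditional:.0%}")
# Town A: overall 10%, given contact 50%
# Town B: overall 20%, given contact 25%
```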

The answer is not at all obvious; both controlling for contact and not controlling for contact have potentially significant problems: Continue reading

Guest Post: When It Comes To Attitudes Toward Corruption, Russians Are More Like Americans Than You Think

Today’s guest post is from Marina Zaloznaya, Assistant Professor of Sociology at the University of Iowa and author of The Politics of Bureaucratic Corruption in Post-Transitional Eastern Europe:

Russia and corruption have been dominating the news recently – with the reporting from Washington and Moscow converging in an unusual way. Ongoing accusations against Trump Administration officials resonate even more strongly when linked to Russia, a country most Americans view as rife with corruption. Indeed, many Americans think that Russian citizens are perfectly comfortable with the systematic corruption of political and business elites.

This is a myth. Yes, it is true beyond doubt that corruption is common in Russia – much more so than in the United States – affecting hundreds of thousands of people. But this is not because Russians are systematically more tolerant of corruption than are Americans. Continue reading

Guest Post: Please, Criticize Me! (Why Anticorruption Practitioners Should Scrutinize and Challenge Research Methodology)

GAB is pleased to welcome back Roger Henke, Chairman of the Board of the Southeast Asia Development Program (SADP), who contributes the following guest post:

In a previous post, I described a survey used to estimate the incidence of fraud and associated problems within the Cambodian NGO sector. The response to the results of that survey has so far been somewhat disheartening—not so much because the research has had little influence on action (the fate of most such research), but rather because those who have been told about the study’s results have all taken those results for granted, questioning neither their meaningfulness nor how they were generated. Such at-face-value uptake is, paradoxically, a huge risk to the longer-term public acceptance of the evidence produced by social-scientific research. I am relieved that methodological considerations (issues of publication bias, replicability, p-hacking, and others) are finally getting some traction within the social science community, but it is evident that the decades-long neglect of these problems dovetails with a public opinion climate that doubts and disparages social science expertise.

Lack of attention to the methodological underpinnings of “interesting” conclusions is hardly a remarkable fate for corruption research results, nor is it specific to corruption research. But the anticorruption community has a lot to lose from distrust in research, and thus a lot to gain by ensuring that the findings it builds its cases upon pass basic quality checks. For the remainder of this post I’ll examine some basic questions that the Cambodia NGO corruption survey’s results should have triggered before being accepted as credible and meaningful: Continue reading