Guest Post: Against the “More-Is-Better” Principle in Corruption Survey Design

Frederic Lesne, a researcher at CERDI/Clermont Auvergne University (France), contributes today’s guest post:

A series of recent posts on this blog has addressed a persistent difficulty with corruption experience surveys: the reticence problem, that is, the reluctance of respondents to give honest answers to questions about sensitive behaviors, which may stem from fear of retaliation or from “social desirability” bias (the fear of “looking bad” to an interviewer; see here, here, and here). Various techniques have been developed to mitigate the reticence problem, leading to a range of different survey designs.

How can we tell if a corruption survey is well-designed? Some researchers, attuned to concerns about social desirability bias, implicitly or explicitly apply what some have dubbed the more-is-better principle. According to this criterion, the best wording for a sensitive question is the one that produces the highest estimates of the sensitive behavior (and the lowest non-response rates).

Yet there are reasons to question the more-is-better principle. Changing the wording of a sensitive question may alter not only its sensitivity but also respondents’ understanding of the question and their ability to answer it. This can introduce a measurement bias that leads the modified wording to produce higher estimates of the behavior not because it mitigates social desirability bias more effectively, but because it exacerbates other forms of bias or inaccuracy. Consider a few examples: Continue reading

Guest Post: Going Beyond Bribery? Improving the Global Corruption Barometer

Coralie Pring, Research Expert at Transparency International, contributes today’s guest post:

Transparency International has been running the Global Corruption Barometer (GCB) – a general-population survey on corruption experience and perception – for a decade and a half now. Before moving ahead with plans for the next round of the survey, we decided to review it to see whether we could improve it and make it more relevant to the current corruption discourse. In particular, we wanted to know whether it would be worthwhile to add extra questions on topics like grand corruption, nepotism, revolving doors, lobbying, and so forth. To that end, we invited 25 academics and representatives from some of Transparency International’s national chapters to a workshop last October to discuss plans for improving the GCB. We initially planned to focus on what we thought would be a simple question: Should we expand the GCB survey to include questions about grand corruption and political corruption?

In fact, this question was nowhere near simple to answer, and it sharply divided the group. (Perhaps that should have been expected when you put 25 researchers in one room!) Moreover, the discussion ended up focusing less on our initial query about whether or how to expand the GCB, and more on two more basic questions: First, are citizen perceptions of corruption reflective of reality? And second, can information about citizen perceptions of corruption still be useful even if those perceptions are not accurate?

Because these debates may be of interest to many of this blog’s readers, and because TI is still hoping to get input from a broader set of experts on these and related questions, we would like to share a brief summary of the workshop exchange on these core questions. Continue reading

Another Way To Improve the Accuracy of Corruption Surveys: The Crosswise Model

Today’s post is yet another entry in what I guess has become a mini-series on corruption experience surveys. In the first post, from a few weeks back, I discussed the question whether, when trying to assess and compare bribery prevalence across jurisdictions using such surveys, the correct denominator should be all respondents, or only those who had contact with government officials. That post bracketed questions about whether respondents would honestly admit bribery in light of the “social desirability bias” problem (the reluctance to admit, even on an anonymous survey, that one has engaged in socially undesirable activities). My two more recent posts have focused on that problem, first criticizing one of the most common strategies for mitigating the social desirability bias problem (indirect questioning), and then, in last week’s post, trying to be a bit more constructive by calling attention to one potentially more promising solution, the so-called unmatched count technique (UCT), also known as the item count technique or list method. Today I want to continue in that latter vein by calling attention to yet another strategy for ameliorating social desirability bias in corruption surveys: the “crosswise model.”

As with the UCT, the crosswise model was developed outside the corruption field (see here and here) and has been deployed in other areas, but it has only recently been introduced into survey work on corruption. The scholars responsible for pioneering the use of the crosswise model in the study of corruption are Daniel Gingerich, Virginia Oliveros, Ana Corbacho, and Mauricio Ruiz-Vega, in (so far) two important papers, the first of which focuses primarily on the methodology, and the second of which applies the method to address the extent to which individual attitudes about corruption are influenced by beliefs about the extent of corruption in the society. (Both papers focus on Costa Rica, where the survey was fielded.) Those who are interested should check out the original papers by following the links above. Here I’ll just try to give a brief, non-technical flavor of the technique, and say a bit about why I think it might be useful not only for academics conducting their particular projects, but also for organizations that regularly field more comprehensive surveys on corruption, such as Transparency International’s Global Corruption Barometer.

The intuition behind the crosswise model is actually fairly simple, though it may not be immediately obvious to everyone. Here’s the basic idea: Continue reading
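To give a rough sense of the mechanics: in a standard crosswise design, each respondent sees two statements, a sensitive one (say, “I paid a bribe to a public official in the past year”) and an innocuous one whose population prevalence is known in advance (say, “My mother was born in January, February, or March,” roughly 25%). The respondent reports only whether both statements are true or neither is, versus exactly one being true, so no individual answer reveals the sensitive behavior; the known prevalence of the innocuous statement then lets the researcher back out an estimate of the sensitive behavior’s prevalence. The sketch below is my own simplified illustration of that back-calculation, with hypothetical numbers; it is not the exact design or data from the Gingerich, Oliveros, Corbacho, and Ruiz-Vega papers.

    # Simplified crosswise-model estimator (illustrative only; hypothetical numbers).
    # Respondents report whether the sensitive statement and a non-sensitive
    # statement with known prevalence p are BOTH true or NEITHER true ("same"),
    # versus exactly one of them being true ("different").

    def crosswise_estimate(n_same, n_total, p_nonsensitive):
        """Estimate the prevalence of the sensitive behavior.

        n_same         -- respondents answering "both true or neither true"
        n_total        -- total number of respondents
        p_nonsensitive -- known prevalence of the non-sensitive statement
                          (must not equal 0.5, or the model is not identified)
        """
        lam = n_same / n_total  # observed share of "same" answers
        # lam = pi*p + (1 - pi)*(1 - p)  =>  pi = (lam + p - 1) / (2*p - 1)
        return (lam + p_nonsensitive - 1) / (2 * p_nonsensitive - 1)

    # Hypothetical example: 700 of 1,000 respondents answer "same" and the
    # non-sensitive statement has a known prevalence of 0.25.
    print(crosswise_estimate(700, 1000, 0.25))  # 0.10, i.e., an estimated 10% bribery rate

The obvious caveats apply: this arithmetic assumes respondents follow the instructions and that the two statements are statistically independent, assumptions the papers themselves treat much more carefully.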

Using the Unmatched Count Technique (UCT) to Elicit More Accurate Answers on Corruption Experience Surveys

With apologies to those readers who couldn’t care less about methodological issues associated with corruption experience surveys, I’m going to continue the train of thought I began in my last two posts (here and here) with further musings on that theme—in particular what survey researchers refer to as the “social desirability bias” problem (the reluctance of survey respondents to truthfully answer questions about sensitive behaviors like corruption). Last week’s post emphasized the seriousness of this concern and voiced some skepticism about whether one of the most common techniques for addressing it (so-called “indirect questioning,” in which respondents are asked not about their own behavior but about the behavior of people “like them” or “in their line of business”) actually works as well as is commonly assumed.

We professors, especially those of us who like to write blog posts, often get a bad rap for criticizing everything in sight but never offering any constructive solutions. The point is well-taken, and while I can’t promise to lay off the criticism, in today’s post I want to try to be at least a little bit constructive by calling attention to a promising alternative approach to mitigating the social desirability bias problem in corruption experience surveys: the unmatched count technique (UCT), sometimes called the “item count” or “list” method. This approach has been deployed occasionally by a few academic researchers working on corruption, but it does not seem to have been picked up by the major organizations that field large-scale corruption experience surveys, such as Transparency International’s Global Corruption Barometer (GCB), the World Bank’s Enterprise Surveys (WBES), or the various regional surveys (like AmericasBarometer or Afrobarometer). So it seemed worthwhile to try to draw more attention to the UCT. It’s by no means a perfect solution, and I’ll say a little bit more about costs and drawbacks near the end of the post. But the UCT is nonetheless worth serious consideration, both by other researchers designing their own surveys for individual research projects, and by more established organizations that regularly field surveys on corruption experience.

The way a UCT question works is roughly as follows: Continue reading
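As a rough illustration of the arithmetic: one randomly chosen half of the sample is shown a short list of innocuous items and asked how many apply to them, without saying which; the other half sees the same list plus the sensitive item (say, “I paid a bribe to a government official this year”) and answers the same question. Because the only difference between the two groups is the presence of the sensitive item, the difference in average counts estimates the share of respondents to whom that item applies. The sketch below is my own toy illustration with made-up numbers, not data from any actual survey.

    # Toy illustration of the unmatched count technique (UCT) estimator.
    # Hypothetical data; not drawn from any real survey.

    def uct_estimate(treatment_counts, control_counts):
        """Estimate sensitive-item prevalence as a difference in mean item counts.

        treatment_counts -- counts from respondents who saw the list INCLUDING
                            the sensitive item (e.g., paying a bribe)
        control_counts   -- counts from respondents who saw the same list
                            WITHOUT the sensitive item
        """
        mean_treatment = sum(treatment_counts) / len(treatment_counts)
        mean_control = sum(control_counts) / len(control_counts)
        return mean_treatment - mean_control

    # Made-up counts for a handful of respondents in each group:
    treatment = [3, 2, 3, 2, 3]  # saw 5 items (4 innocuous + the sensitive one)
    control = [2, 3, 2, 3, 2]    # saw only the 4 innocuous items
    print(uct_estimate(treatment, control))  # 0.2, i.e., an estimated 20% prevalence

One well-known cost of this design is statistical noise: because no one answers the sensitive question directly, the estimate is far less precise than a direct question would yield from the same sample size.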

Can Indirect Questioning Induce Honest Responses on Bribery Experience Surveys?

As I noted in my last post, bribery experience surveys – of both firms and citizens – are increasingly popular as a tool not only for testing hypotheses about corruption’s causes and effects, but also for measuring the effectiveness of anticorruption policies, for example in the context of assessing progress toward the Sustainable Development Goals’ anticorruption targets. Bribery experience surveys are thought to have a number of advantages over perception-based indicators, greater objectivity chief among them.

I certainly agree that bribery experience surveys are extremely useful and have contributed a great deal to our understanding of corruption’s causes and effects. They’re not perfect, but no indicator is; different measures have different strengths and weaknesses, and we just need to use caution when interpreting any given set of empirical results. In that spirit, though, I do think the anticorruption community should subject these experience surveys to a bit more critical scrutiny, comparable to the extensive exploration in the literature of the myriad shortcomings of corruption perception indicators. In last week’s post, I focused on the question of the correct denominator to use when calculating bribery victimization rates – all citizens, or all citizens who have had (a certain level of) contact with the bureaucracy? Today I want to focus on a different issue: What can we do about the fact that survey respondents might be reluctant to answer corruption questions truthfully?

The observation that survey respondents might be reluctant to truthfully answer questions about their personal bribery experience is neither new nor surprising. Survey respondents confronted with sensitive questions tend to give the answer they think they “should” give (or the answer they think the interviewer wants to hear); social scientists call this tendency “social desirability bias.” There’s quite robust evidence that social desirability bias affects surveys on sensitive topics, including corruption; even more troubling, the available evidence suggests that social desirability bias on corruption surveys is neither constant nor randomly distributed, but rather varies across countries and regions. This means that apparent variations in bribery experience rates might actually reflect variations in willingness to truthfully answer questions about bribery experience, rather than (or in addition to) variations in actual bribery experience (see here, here, and here).

So, what can we do about this problem? Existing surveys use a range of techniques. Here I want to focus on one of the most popular: the “indirect questioning” approach. The idea is that instead of asking a respondent, “How much money does your firm have to spend each year on informal payments to government officials?”, you ask, “How much money does a typical firm in your line of business have to spend each year on informal payments to government officials?” (It’s perhaps worth noting that the indirect questioning method seems especially common in firm/manager surveys; many of the most prominent household surveys, such as the International Crime Victims Survey and the Global Corruption Barometer, ask directly about the household’s own experience rather than about “households like yours.”) The hope is that asking the question indirectly will reduce social desirability bias, because the respondent doesn’t have to admit that he or she (or his or her firm) engaged in illegal activity.

Is that hope justified? I don’t doubt that indirect questioning helps to some extent. But I confess that I’m skeptical, on both theoretical and empirical grounds, that indirect questioning is the silver-bullet solution for social desirability bias that some researchers seem to suggest it is. Continue reading