Guest Post: Against the “More-Is-Better” Principle in Corruption Survey Design

Frederic Lesne, a researcher at CERDI/Clermont Auvergne University (France), contributes today’s guest post:

A series of recent posts on this blog has addressed a persistent difficulty with corruption experience surveys: the reticence problem, that is, the reluctance of respondents to give honest answers to questions about sensitive behaviors. That reluctance may stem from fear of retaliation or from “social desirability” bias (the fear of “looking bad” to an interviewer; see here, here, and here). Various techniques have been developed to try to mitigate the reticence problem, leading to a range of different survey designs.

How can we tell if a corruption survey is well-designed? Some researchers, attuned to concerns about social desirability bias, implicitly or explicitly apply what some have dubbed the more-is-better principle. According to this criterion, the best wording for a sensitive question is the one that produces the highest estimates of the sensitive behavior (and the lowest non-response rates).

Yet there are reasons to question the more-is-better principle. Changing the wording of a sensitive question may alter not only its sensitivity but also respondents’ understanding of the question and their ability to answer it. This can introduce measurement bias that causes the modified wording to produce higher estimates of the behavior, not because it mitigates social desirability bias more effectively, but because it exacerbates other forms of bias or inaccuracy. Consider a few examples:
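As a first, purely hypothetical illustration (all rates below are invented for the sketch; none come from a real survey), consider two wordings of the same bribery question: wording A is sensitive but well understood, while wording B leaves reticence untouched yet is misread by some respondents as covering legitimate payments.

```python
import random

random.seed(42)
N = 100_000
TRUE_RATE = 0.20   # hypothetical: true share of respondents who paid a bribe

def estimate(conceal, false_pos):
    """Simulate one survey: bribe-payers conceal with prob. `conceal`;
    non-payers wrongly answer yes with prob. `false_pos`."""
    yes = 0
    for _ in range(N):
        paid = random.random() < TRUE_RATE
        if paid:
            yes += random.random() >= conceal   # admits unless reticent
        else:
            yes += random.random() < false_pos  # comprehension error
    return yes / N

# Wording A: sensitive but well understood (40% of payers conceal).
est_a = estimate(conceal=0.40, false_pos=0.00)
# Wording B: same 40% concealment (no reticence gain at all), but 25% of
# non-payers misread the question as covering legitimate fees.
est_b = estimate(conceal=0.40, false_pos=0.25)

print(f"true rate: {TRUE_RATE:.3f}")
print(f"wording A: {est_a:.3f}")   # ~0.120
print(f"wording B: {est_b:.3f}")   # ~0.320
# The more-is-better rule would prefer wording B, yet its higher estimate
# reflects added measurement error, not reduced reticence, and it lands
# further from the 20% truth than wording A does.
```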

Can Indirect Questioning Induce Honest Responses on Bribery Experience Surveys?

As I noted in my last post, bribery experience surveys – of both firms and citizens – are increasingly popular as a tool not only for testing hypotheses about corruption’s causes and effects, but also for measuring the effectiveness of anticorruption policies, for example in the context of assessing progress toward the Sustainable Development Goals’ anticorruption targets. Bribery experience surveys are thought to have a number of advantages over perception-based indicators, greater objectivity chief among them.

I certainly agree that bribery experience surveys are extremely useful and have contributed a great deal to our understanding of corruption’s causes and effects. They’re not perfect, but no indicator is; different measures have different strengths and weaknesses, and we just need to use caution when interpreting any given set of empirical results. In that spirit, though, I do think the anticorruption community should subject these experience surveys to a bit more critical scrutiny, comparable to the extensive exploration in the literature of the myriad shortcomings of corruption perception indicators. In last week’s post, I focused on the question of the correct denominator to use when calculating bribery victimization rates – all citizens, or all citizens who have had (a certain level of) contact with the bureaucracy? Today I want to focus on a different issue: What can we do about the fact that survey respondents might be reluctant to answer corruption questions truthfully?
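The denominator question is easy to see with a bit of arithmetic. Here is a minimal sketch with invented counts (the variable names are mine, not taken from any actual survey instrument):

```python
# Invented survey counts, purely for illustration.
respondents = 1_000   # all survey respondents
had_contact = 400     # respondents with bureaucratic contact this year
paid_bribe  = 80      # respondents reporting at least one bribe

rate_all     = paid_bribe / respondents   # share of all citizens victimized
rate_contact = paid_bribe / had_contact   # bribery risk per citizen with contact

print(f"over all citizens:          {rate_all:.1%}")      # 8.0%
print(f"over citizens with contact: {rate_contact:.1%}")  # 20.0%
# The two rates answer different questions: a country can cut the first
# simply by reducing citizen-bureaucracy contact (e.g., via e-government),
# with no change in how often contact actually triggers a bribe demand.
```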

The observation that survey respondents might be reluctant to truthfully answer questions about their personal bribery experience is neither new nor surprising. Survey respondents confronted with sensitive questions often have a tendency to give the answer that they think they “should” give (or that they think the interviewer wants to hear); social scientists call this tendency “social desirability bias.” There’s quite robust evidence that social desirability bias affects surveys on sensitive topics, including corruption; even more troubling, the available evidence suggests that social desirability bias on corruption surveys is neither constant nor randomly distributed, but rather varies across countries and regions. This means that apparent variations in bribery experience rates might actually reflect variations in willingness to truthfully answer questions about bribery experience, rather than (or in addition to) variations in actual bribery experience (see here, here, and here).
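That last point can be made concrete with a toy decomposition: treat the observed “yes” rate as the true rate discounted by country-specific reticence. The numbers below are invented; the point is only how the confound works.

```python
# Assumed model: observed_rate = true_rate * (1 - reticence),
# where `reticence` is the share of actual bribe-payers who deny it.
countries = {
    #             true_rate  reticence
    "Country X": (0.30,      0.10),
    "Country Y": (0.30,      0.50),
}

for name, (true_rate, reticence) in countries.items():
    observed = true_rate * (1 - reticence)
    print(f"{name}: true {true_rate:.0%} -> observed {observed:.0%}")
# Country X: true 30% -> observed 27%
# Country Y: true 30% -> observed 15%
# Identical true rates, yet ranking by observed rates would label Y far
# "cleaner" than X, mistaking differences in candor for differences in
# corruption.
```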

So, what can we do about this problem? Existing surveys use a range of techniques. Here I want to focus on one of the most popular: the “indirect questioning” approach. The idea is that instead of asking a respondent, “How much money does your firm have to spend each year on informal payments to government officials?”, you ask, “How much money does a typical firm in your line of business have to spend each year on informal payments to government officials?” (It’s perhaps worth noting that the indirect questioning method seems more prevalent in firm/manager surveys; many of the most prominent household surveys, such as the International Crime Victims Survey and the Global Corruption Barometer, ask directly about the household’s experience rather than asking about “households like yours.”) The hope is that asking the question indirectly will reduce social desirability bias, because the respondent doesn’t have to admit that he or she (or his or her firm) engaged in illegal activity.
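To make the mechanism (and its limits) concrete, here is a hedged sketch of how indirect answers might be generated. The key assumption, which is mine rather than anything established in the survey literature, is that an answer about the “typical firm” blends the respondent’s own experience with a general, possibly exaggerated, perception of other firms; all rates are invented.

```python
import random

random.seed(0)
N = 50_000
TRUE_RATE = 0.25        # hypothetical: share of firms actually paying bribes
CONCEAL_DIRECT = 0.40   # hypothetical: payers who deny under direct wording
PERCEIVED_RATE = 0.45   # hypothetical: managers' belief about "typical" firms

def direct_estimate():
    yes = 0
    for _ in range(N):
        pays = random.random() < TRUE_RATE
        if pays and random.random() >= CONCEAL_DIRECT:
            yes += 1
    return yes / N

def indirect_estimate(weight_own=0.5):
    # Assumed model: the indirect answer mixes own experience with the
    # perceived rate among other firms.
    yes = 0
    for _ in range(N):
        pays = random.random() < TRUE_RATE
        p_yes = weight_own * pays + (1 - weight_own) * PERCEIVED_RATE
        if random.random() < p_yes:
            yes += 1
    return yes / N

print(f"true rate:         {TRUE_RATE:.3f}")
print(f"direct question:   {direct_estimate():.3f}")   # ~0.150, reticence-depressed
print(f"indirect question: {indirect_estimate():.3f}") # ~0.350, perception-inflated
# The indirect wording yields a higher number, but part of the gap reflects
# perception rather than experience, so "higher" is not automatically "truer".
```

Under this (admittedly stylized) model, indirect questioning does raise the estimate, but for reasons that owe as much to perception as to any reduction in reticence.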

Is that hope justified? I don’t doubt that indirect questioning helps to some extent. But I confess that I’m skeptical, on both theoretical and empirical grounds, that indirect questioning is the silver-bullet solution for social desirability bias that some researchers seem to suggest it is.