Guest Post: Against the “More-Is-Better” Principle in Corruption Survey Design

Frederic Lesne, a researcher at CERDI/Clermont Auvergne University (France), contributes today’s guest post:

A series of recent posts on this blog has addressed a persistent difficulty with corruption experience surveys: the reticence problem, that is, the reluctance of respondents to give honest answers to questions about sensitive behaviors. This reluctance may be caused by fear of retaliation or by "social desirability" bias (the fear of "looking bad" to an interviewer; see here, here, and here). Various techniques have been developed to mitigate the reticence problem, leading to a range of different survey designs.

How can we tell if a corruption survey is well-designed? Some researchers, attuned to concerns about social desirability bias, implicitly or explicitly apply what some have dubbed the more-is-better principle. According to this criterion, the best wording for a sensitive question is the one that produces the highest estimates of the sensitive behavior (and the lowest non-response rates).
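To make the criterion concrete, here is a minimal sketch in Python of how the selection rule operates on pilot data; the wording variants and figures are hypothetical, invented purely for illustration:

```python
# Minimal sketch of the "more-is-better" selection rule across question
# wordings. The variant names and pilot figures are hypothetical.

pilot_results = {
    # wording variant: (estimated share reporting the behavior, non-response rate)
    "direct":   (0.08, 0.12),
    "indirect": (0.15, 0.07),
    "softened": (0.11, 0.09),
}

# The principle: prefer the wording with the highest estimate, breaking
# ties in favor of lower non-response.
best = max(pilot_results, key=lambda w: (pilot_results[w][0], -pilot_results[w][1]))
print(f"More-is-better pick: {best}")  # prints: More-is-better pick: indirect
```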

Yet there are reasons to question the more-is-better principle. Changing the wording of a sensitive question may alter not only its sensitivity but also respondents' understanding of the question and their ability to answer it. This can produce a measurement bias under which the modified wording yields higher estimates of the behavior not because it mitigates social desirability bias more effectively, but because it exacerbates other forms of bias or inaccuracy. Consider an example of how this can happen:
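As a stylized illustration, here is a toy simulation in Python; all prevalence, concealment, and misclassification rates are invented assumptions, not empirical findings. In it, a reworded question earns a higher estimate purely by sweeping in behavior that is not actually bribery, while concealment due to social desirability is identical under both wordings:

```python
# Toy simulation: a reworded question can raise the estimate without
# mitigating social desirability bias at all. All rates below are
# illustrative assumptions, not empirical findings.
import random

random.seed(0)
N = 100_000
TRUE_PREVALENCE = 0.20  # assumed share who actually paid a bribe
CONCEALMENT = 0.40      # assumed share of bribe-payers who deny it (same under both wordings)
OVERREPORT_NEW = 0.25   # assumed share of non-payers who answer "yes" to the broader new wording

def respond(paid_bribe: bool, wording: str) -> bool:
    """Simulate one respondent's answer under a given question wording."""
    if paid_bribe:
        # Social desirability bias: identical concealment under both wordings.
        return random.random() > CONCEALMENT
    # The broader new wording sweeps in conduct that is not actually bribery.
    return wording == "new" and random.random() < OVERREPORT_NEW

population = [random.random() < TRUE_PREVALENCE for _ in range(N)]
for wording in ("old", "new"):
    estimate = sum(respond(p, wording) for p in population) / N
    print(f"{wording} wording estimate: {estimate:.3f} (true prevalence: {TRUE_PREVALENCE})")
# Expected output: roughly 0.12 for "old" and roughly 0.32 for "new".
# The new wording "wins" under more-is-better, but its extra yeses are
# false positives, not recovered true positives.
```

Under the more-is-better criterion the new wording looks superior, yet every one of its additional "yes" answers comes from misclassification rather than from reduced concealment.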

Using the Unmatched Count Technique (UCT) to Elicit More Accurate Answers on Corruption Experience Surveys

With apologies to those readers who couldn't care less about methodological issues associated with corruption experience surveys, I'm going to continue the train of thought I began in my last two posts (here and here) with further musings on that theme, in particular on what survey researchers refer to as the "social desirability bias" problem (the reluctance of survey respondents to answer questions about sensitive behaviors like corruption truthfully). Last week's post emphasized the seriousness of this concern and voiced some skepticism about whether one of the most common techniques for addressing it (so-called "indirect questioning," in which respondents are asked not about their own behavior but about the behavior of people "like them" or "in their line of business") actually works as well as is commonly assumed.

We professors, especially those of us who like to write blog posts, often get a bad rap for criticizing everything in sight but never offering any constructive solutions. The point is well-taken, and while I can't promise to lay off the criticism, in today's post I want to try to be at least a little bit constructive by calling attention to a promising alternative approach to mitigating the social desirability bias problem in corruption experience surveys: the unmatched count technique (UCT), sometimes called the "item count" or "list" method. This approach has been deployed occasionally by a few academic researchers working on corruption, but it does not seem to have been picked up by the major organizations that field large-scale corruption experience surveys, such as Transparency International's Global Corruption Barometer (GCB), the World Bank's Enterprise Surveys (WBES), or the various regional surveys (like AmericasBarometer or Afrobarometer). So it seemed worthwhile to try to draw more attention to the UCT. It's by no means a perfect solution, and I'll say a little bit more about costs and drawbacks near the end of the post. But the UCT is nonetheless worth serious consideration, both by other researchers designing their own surveys for individual research projects and by more established organizations that regularly field surveys on corruption experience.

The way a UCT question works is roughly as follows: respondents are randomly divided into two groups. The control group sees a short list of innocuous items (say, four statements) and is asked only how many apply to them, not which ones. The treatment group sees the same list plus one additional, sensitive item (for example, "I paid a bribe to a public official in the past year") and is likewise asked only for a count. Because no respondent ever reports which items apply, individual answers do not reveal the sensitive behavior; yet the difference between the two groups' average counts estimates the proportion of respondents for whom the sensitive item is true.
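To make the estimator concrete, here is a minimal sketch in Python. The list items, group sizes, and counts are invented for illustration; a real survey would use much larger, randomly assigned samples and report standard errors:

```python
# Minimal sketch of a UCT (list experiment) estimator, assuming a simple
# random split into control and treatment groups. Data below are invented.
from statistics import mean

# Control group: count of applicable items from a list of 4 innocuous statements.
control_counts   = [1, 2, 0, 3, 2, 1, 2, 1, 0, 2]
# Treatment group: same 4 statements plus the sensitive item
# ("I paid a bribe to a public official in the past year").
treatment_counts = [2, 2, 0, 3, 2, 1, 2, 2, 0, 2]

# Because respondents report only how many items apply (never which ones),
# no individual answer reveals the sensitive behavior. The difference in
# mean counts estimates the share who would affirm the sensitive item.
estimated_prevalence = mean(treatment_counts) - mean(control_counts)
print(f"Estimated bribery prevalence: {estimated_prevalence:.2f}")  # 0.20
```

The difference-in-means also makes one known drawback visible: the innocuous items add noise to the estimate, so UCT questions are statistically inefficient and need larger samples than direct questions to achieve comparable precision.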