With apologies to those readers who couldn’t care less about methodological issues associated with corruption experience surveys, I’m going to continue the train of thought I began in my last two posts (here and here) with further musings on that theme—in particular what survey researchers refer to as the “social desirability bias” problem (the reluctance of survey respondents to truthfully answer questions about sensitive behaviors like corruption). Last week’s post emphasized the seriousness of this concern and voiced some skepticism about whether one of the most common techniques for addressing it (so-called “indirect questioning,” in which respondents are asked not about their own behavior but about the behavior of people “like them” or “in their line of business”) actually works as well as is commonly assumed.
We professors, especially those of us who like to write blog posts, often get a bad rap for criticizing everything in sight but never offering any constructive solutions. The point is well-taken, and while I can’t promise to lay off the criticism, in today’s post I want to try to be at least a little bit constructive by calling attention to a promising alternative approach to mitigating the social desirability bias problem in corruption experience surveys: the unmatched count technique (UCT), sometimes also called the “item count” or “list” method. This approach has been deployed occasionally by academic researchers working on corruption, but it does not seem to have been picked up by the major organizations that field large-scale corruption experience surveys, such as Transparency International’s Global Corruption Barometer (GCB), the World Bank’s Enterprise Surveys (WBES), or the various regional surveys (like AmericasBarometer or Afrobarometer). So it seemed worthwhile to try to draw more attention to the UCT. It’s by no means a perfect solution, and I’ll say a bit more about its costs and drawbacks near the end of the post. But the UCT is nonetheless worth serious consideration, both by researchers designing their own surveys for individual research projects, and by the more established organizations that regularly field surveys on corruption experience.
The way a UCT question works is roughly as follows: Survey respondents are randomly divided into two groups. The control group is shown a short list of innocuous items (say, four of them) and asked to report how many of the items (but not which ones) apply to them. The treatment group is shown the same list plus one additional, sensitive item (for example, paying a bribe to a government official in the past year) and is likewise asked only for a total count. Because each respondent reports only a number, no individual answer reveals whether that particular person engaged in the sensitive behavior; yet the difference in the average count between the treatment and control groups provides an estimate of the proportion of respondents for whom the sensitive item is true.
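To make the arithmetic concrete, here is a minimal simulation sketch (in Python) of the difference-in-means estimator that the UCT relies on. All of the numbers—four innocuous items, their assumed prevalences, a 15% true bribery rate, 2,000 respondents per group—are made up purely for illustration and do not come from any actual survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a simulated list experiment (illustrative only)
n_per_group = 2000                    # respondents in each of the two groups
p_innocuous = [0.5, 0.3, 0.7, 0.2]    # assumed prevalence of 4 non-sensitive items
p_sensitive = 0.15                    # assumed true prevalence of the sensitive item

# Control group: each respondent reports only the COUNT of innocuous items that apply
control_counts = rng.binomial(1, p_innocuous, size=(n_per_group, 4)).sum(axis=1)

# Treatment group: same list plus the sensitive item, again reporting only a count
treat_counts = (
    rng.binomial(1, p_innocuous, size=(n_per_group, 4)).sum(axis=1)
    + rng.binomial(1, p_sensitive, size=n_per_group)
)

# Difference in mean counts between the groups estimates the sensitive item's prevalence
estimate = treat_counts.mean() - control_counts.mean()
se = np.sqrt(
    treat_counts.var(ddof=1) / n_per_group
    + control_counts.var(ddof=1) / n_per_group
)

print(f"Estimated prevalence: {estimate:.3f} (SE {se:.3f}); true value: {p_sensitive}")
```

The point of the sketch is simply that the group-level difference in means recovers the sensitive behavior’s prevalence even though no individual respondent ever answers the sensitive question directly.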