Frederic Lesne, a researcher at CERDI/Clermont Auvergne University (France), contributes today’s guest post:
A series of recent posts on this blog has addressed a persistent difficulty with corruption experience surveys: the reticence problem, that is, the reluctance of respondents to give honest answers to questions about sensitive behaviors, which may be caused by fear of retaliation or by “social desirability” bias (fear of “looking bad” to an interviewer; see here, here, and here). Various techniques have been developed to try to mitigate the reticence problem, leading to a range of different survey designs.
How can we tell if a corruption survey is well-designed? Some researchers, attuned to concerns about social desirability bias, implicitly or explicitly apply what some have dubbed the more-is-better principle. According to this criterion, the best wording for a sensitive question is the one that produces the highest estimates of the sensitive behavior (and the lowest non-response rates).
Yet there are reasons to question the more-is-better principle. Changing the wording of a sensitive question may alter not only its sensitivity but also respondents’ understanding of the question and their ability to answer it. The result can be a measurement bias that causes the modified wording to produce higher estimates of the behavior, not because it mitigates social desirability bias more effectively, but because it exacerbates other forms of bias or inaccuracy. Consider a few examples:
- When asking firm managers to estimate total annual bribe payments to government officials, should we ask them to state the answers in absolute amounts (in the local currency) or as a percentage of annual sales? In his book The Power of Survey Design, World Bank researcher Giuseppe Iarossi argues that asking respondents to state the answer to a sensitive question like this in percentage terms may make them less reluctant to answer. Perhaps for this reason, the World Bank’s Enterprise Surveys (WBES) allow firm owners and managers to estimate the average amount of bribes establishments like theirs have to pay annually to “get things done” either in monetary value or as a percentage of total annual sales. And the way the answer is expressed does indeed lead to enormous differences in estimated bribe payments. For example, Professor George Clarke, using WBES data gathered from 15 surveys carried out in African countries in 2006 and 2007, found that average estimates of the magnitude of bribery are significantly higher (4 to 15 times higher) for respondents who answered the question as a percentage of sales than for those who answered in absolute amounts. I recently confirmed those findings with a randomized experiment carried out in Madagascar. Under the more-is-better criterion, we might conclude that asking about bribery magnitudes in percentage terms is better, because (we might suppose) the higher estimates reflect the greater candor this format promotes. But Professor Clarke persuasively argues that the reason for this huge gap is something else entirely: a systematic overestimation of amounts expressed as percentages by some respondents. And he convincingly undermines the claim that asking for bribes as a percentage of sales is less sensitive by noting that a similar gap appears when respondents estimate non-sensitive quantities, such as firm losses due to power outages.
- Another common application of the more-is-better criterion is the idea that question wordings that produce lower non-response rates are superior. Indeed, some researchers even measure question sensitivity by the non-response rate. But the non-response rate is not necessarily a good indicator of question sensitivity, let alone of the accuracy of the answers given by those respondents who do choose to answer. A respondent who declines to answer a sensitive question may be motivated by a lack of knowledge of the subject rather than by a desire to withhold compromising information. One cannot therefore simply assume that higher non-response rates indicate greater question sensitivity rather than greater respondent uncertainty. (After all, a respondent who does not wish sensitive behavior to be inferred from his or her answers has a more effective strategy available: lying, rather than declining to answer.) Moreover, changing the format of a question in order to increase the response rate may also affect response accuracy. Indeed, recent research suggests a trade-off between a high response rate and the relevance of the data collected, because the respondents most inclined to refrain from answering are also those whose answers are the least informative. Favoring question wordings with comparatively high response rates may therefore be counterproductive.
- Finally, another example of this issue, already discussed by Matthew in a recent post on this blog, is the use of indirect questioning. (The WBES uses an indirect questioning approach, asking firm managers not about how much they bribed government officials in the past year, but rather about how much “a typical firm” in the respondent’s industry pays in bribes each year.) It does seem to be the case that bribery estimates are higher when the questions are phrased indirectly rather than directly. But this might not be due to greater candor about respondents’ own behavior: As Matthew points out, respondents may interpret the indirect question literally, and attempt to answer based on their estimates (perhaps overestimates) of what their competitors are up to. If so, then this technique may lead to higher estimates of corruption for the wrong reasons.
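The logic of the overestimation explanation in the first example can be sketched with a toy simulation. To be clear, every number below is invented for illustration; this is not an analysis of the WBES data. The point is only that if percentage-format respondents systematically overstate the bribes-to-sales ratio, that format yields higher bribery estimates even when true behavior and candor are identical across the two groups:

```python
import random

random.seed(0)

# Hypothetical setup (all numbers invented): every firm truly pays
# bribes equal to 2% of annual sales. Half the respondents report an
# absolute amount accurately; the other half report a percentage but,
# per the overestimation hypothesis, inflate the ratio.
n = 10_000
sales = [random.uniform(50_000, 500_000) for _ in range(n)]
true_rate = 0.02  # true bribes-to-sales ratio for every firm

absolute_reports = [s * true_rate for s in sales[: n // 2]]
# Percentage respondents overstate the ratio, reporting 2-10% instead of 2%.
percent_reports = [true_rate + random.uniform(0.0, 0.08) for _ in sales[n // 2 :]]

# Convert both formats to an implied bribes-to-sales ratio for comparison.
implied_abs = sum(r / s for r, s in zip(absolute_reports, sales[: n // 2])) / (n // 2)
implied_pct = sum(percent_reports) / (n // 2)

print(f"mean implied ratio (absolute format):   {implied_abs:.3f}")
print(f"mean implied ratio (percentage format): {implied_pct:.3f}")
# The percentage format produces a much higher estimate even though true
# behavior and candor are identical across the two groups.
```

Under the more-is-better criterion the percentage format would look superior here, yet by construction the gap reflects nothing but overestimation.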
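The response-rate trade-off in the second example can be illustrated the same way. Again, all numbers are hypothetical: suppose a reworded question nudges poorly informed respondents, who would otherwise have skipped the question, into guessing.

```python
import random
import statistics

random.seed(1)

# Hypothetical setup (all numbers invented): the true bribes-to-sales
# ratio is 3% for every firm. "Informed" respondents report it with
# little noise; "uncertain" respondents would normally decline, but a
# reworded question prompts them to guess almost at random.
informed = [0.03 + random.gauss(0, 0.002) for _ in range(700)]
uncertain_guesses = [random.uniform(0.0, 0.20) for _ in range(300)]

# Original wording: uncertain respondents decline (70% response rate).
low_rr_sample = informed
# Reworded question: everyone answers (100% response rate).
high_rr_sample = informed + uncertain_guesses

for label, sample in [("70% response rate ", low_rr_sample),
                      ("100% response rate", high_rr_sample)]:
    print(f"{label}: mean={statistics.mean(sample):.3f}, "
          f"stdev={statistics.stdev(sample):.3f}")
# Raising the response rate pulls in the least informative answers,
# making the estimate noisier (and, here, biased upward) despite the
# apparently "better" response rate.
```

A higher response rate, in this toy world, buys a worse estimate, which is the sense in which favoring low-non-response wordings can be counterproductive.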
Assessing the quality of questions that aim to estimate underreported behavior like corruption by their ability to produce high estimates may therefore be misleading. Research on corruption measurement should promote systematic testing of the underlying, and often overlooked, assumptions that shape the design of corruption survey questions. The more-is-better criterion that researchers have long relied on (sometimes explicitly, sometimes implicitly) should no longer serve as the standard for deciding how best to ask questions about corruption.