Can Indirect Questioning Induce Honest Responses on Bribery Experience Surveys?

As I noted in my last post, bribery experience surveys – of both firms and citizens – are increasingly popular as a tool not only for testing hypotheses about corruption’s causes and effects, but for measuring the effectiveness of anticorruption policies, for example in the context of assessing progress toward the Sustainable Development Goals’ anticorruption targets. Bribery experience surveys are thought to have a number of advantages over perception-based indicators, greater objectivity chief among them.

I certainly agree that bribery experience surveys are extremely useful and have contributed a great deal to our understanding of corruption’s causes and effects. They’re not perfect, but no indicator is; different measures have different strengths and weaknesses, and we just need to use caution when interpreting any given set of empirical results. In that spirit, though, I do think the anticorruption community should subject these experience surveys to a bit more critical scrutiny, comparable to the extensive exploration in the literature of the myriad shortcomings of corruption perception indicators. In last week’s post, I focused on the question of the correct denominator to use when calculating bribery victimization rates – all citizens, or all citizens who have had (a certain level of) contact with the bureaucracy? Today I want to focus on a different issue: What can we do about the fact that survey respondents might be reluctant to answer corruption questions truthfully?
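To recap that denominator point before moving on: the minimal sketch below (Python, with invented figures and field names) shows how much the choice of denominator can move the headline victimization rate.

```python
# Toy illustration of the denominator question; all numbers are invented.
respondents = [
    # (had_contact_with_bureaucracy, paid_bribe)
    (True, True),
    (True, False),
    (True, False),
    (False, False),
    (False, False),
]

paid = sum(1 for _, bribe in respondents if bribe)
had_contact = sum(1 for contact, _ in respondents if contact)

rate_all = paid / len(respondents)   # denominator: all citizens
rate_contact = paid / had_contact    # denominator: citizens with bureaucratic contact

print(f"All citizens: {rate_all:.0%}")       # 20%
print(f"Contact only: {rate_contact:.0%}")   # 33%
```

The same underlying experience yields a 20% or a 33% “bribery rate” depending on which denominator one reports.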

The observation that survey respondents might be reluctant to truthfully answer questions about their personal bribery experience is neither new nor surprising. Survey respondents confronted with sensitive questions often have a tendency to give the answer that they think they “should” give (or that they think the interviewer wants to hear); social scientists call this tendency “social desirability bias.” There’s quite robust evidence that social desirability bias affects surveys on sensitive topics, including corruption; even more troubling, the available evidence suggests that social desirability bias on corruption surveys is neither constant nor randomly distributed, but rather varies across countries and regions. This means that apparent variations in bribery experience rates might actually reflect variations in willingness to truthfully answer questions about bribery experience, rather than (or in addition to) variations in actual bribery experience (see here, here, and here).

So, what can we do about this problem? Existing surveys use a range of techniques. Here I want to focus on one of the most popular: the “indirect questioning” approach. The idea is that instead of asking a respondent, “How much money does your firm have to spend each year on informal payments to government officials?”, you instead ask, “How much money does a typical firm in your line of business have to spend each year on informal payments to government officials?” (It’s perhaps worth noting that the indirect questioning method seems more common in firm/manager surveys; many of the most prominent household surveys, such as the International Crime Victims Survey and the Global Corruption Barometer, ask directly about the household’s experience rather than asking about “households like yours.”) The hope is that asking the question indirectly will reduce social desirability bias, because the respondent doesn’t have to admit that he or she (or his or her firm) engaged in illegal activity.

Is that hope justified? I don’t doubt that indirect questioning helps to some extent. But I confess that I’m skeptical, on both theoretical and empirical grounds, that indirect questioning is the silver-bullet solution to social desirability bias that some researchers seem to suggest it is.

My theoretical qualms about the efficacy of indirect questioning run as follows:

If respondents interpret the question literally – as one that asks them to estimate how often “firms like theirs” pay bribes – and try to answer it honestly, then the answers these respondents give will reflect their perceptions of how widespread bribery is in their sectors, rather than (only or primarily) their own experience with bribery. True, respondents might think that their own experience is typical—but then again, they might not. After all, one of the reasons for using corruption experience surveys rather than perception surveys is because of the belief that perceptions may be highly inaccurate. Many managers may believe (rightly or wrongly) that bribery in their sector is widespread, even if they have not had much first-hand experience with bribery. (Admittedly, it’s less likely that a manager whose firm pays lots of bribes will think that corruption in her sector is rare, though that’s also possible.)

Suppose instead that respondents understand perfectly well that when the questioner asks, “How often do firms like yours pay bribes?”, what the questioner really means is, “How often does your firm pay bribes?”, and the respondent knows that the questioner will interpret the answer accordingly. In this case, why wouldn’t we expect that the social desirability bias would kick in again, causing respondents to under-report bribery experience? After all, if the respondent knows full well that her answer will be interpreted as reflecting her own experience and behavior, and she doesn’t want the interviewer to think of her as a “bad” person, she has an incentive to dissemble.

This, then, is the basic conundrum: If respondents are savvy enough to understand what the interviewer is up to in asking an indirect corruption question, social desirability bias is likely to remain a problem (and may lead respondents to understate the extent of bribery, relative to their actual experience). If, on the other hand, respondents naively think that the interviewer really is asking about “typical firms” in the industry, then respondents may give an answer that’s based more on perceptions than actual experience (which in many cases might lead to an overestimate of the extent of bribery, relative to the respondent’s actual first-hand experience). To be clear, it’s certainly possible that indirect questioning could achieve its desired results. Maybe, for example, respondents instinctively think in terms of “plausible deniability,” even though they should know that they will never have to answer for their survey responses to any third party. But it’s important to be clear about the assumptions that we’re making regarding human psychology when we assert that indirect questioning will significantly reduce the social desirability bias problem.

What does the empirical evidence suggest on the question whether indirect questioning mitigates social desirability bias? For obvious reasons, it’s difficult to get good empirical data on this issue. But the limited evidence that we do have is not encouraging. For example, Professor George Clarke analyzed data from a 2009 survey of construction firm managers in Kabul, Afghanistan, which asked respondents how often “establishments like yours” make informal payments to government officials to get things done. Some of the construction firms in the sample had experience bidding on Afghan government contracts, but others bid only on contracts with international organizations. Clarke found that those firms that bid only on international organization contracts were significantly more likely to say that “establishments like [theirs]” paid bribes. Yet most people would assume that bribery is more common in contracts with Afghanistan’s government than in contracts with international organizations. (Not that corruption is absent in the latter! Just less common.) If that’s right, then the most natural interpretation of these results is that the firms that didn’t deal with the Afghan government interpreted the question as asking about construction firms generally, and assumed that most of those firms (their competitors) frequently paid bribes. It might also be the case that those firms that engaged in bribery more frequently (namely, those that did lots of business with the Afghan government) were more reluctant to admit that bribery was common.
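For readers who want a concrete sense of what such a group comparison involves, here is a rough sketch of a two-proportion z-test in Python. The design and every number below are my own invention for illustration; this is not Clarke’s data, code, or necessarily his exact method.

```python
# Illustrative two-sample z-test for a difference in proportions.
# All figures are made up for the sake of the example.
from statistics import NormalDist

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Compare the share of 'yes' answers to the indirect bribery question across two groups."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)        # pooled proportion under the null
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

# Group A: firms bidding only on international-organization contracts (hypothetical)
# Group B: firms with Afghan-government bidding experience (hypothetical)
p_a, p_b, z, p = two_proportion_z(yes_a=45, n_a=60, yes_b=30, n_b=60)
print(f"A: {p_a:.0%} say yes; B: {p_b:.0%} say yes (z = {z:.2f}, p = {p:.3f})")
```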

A couple of other research papers use a so-called “randomized response technique” to identify those respondents who are most likely to be affected by social desirability bias (so-called “reticent respondents”), and investigate whether reticent and non-reticent respondents react differently to indirect questions, as compared to direct questions. The results here are not clear or conclusive, but don’t do much to allay my concerns about the efficacy of indirect questioning as a means of eliciting truthful responses about actual bribery experience. One study, using data from an enterprise survey in Nigeria, found that although reticent respondents were less likely to acknowledge paying bribes when asked directly, there were no statistically significant differences between reticent and non-reticent respondents when asked how often “firms in your line of business” pay bribes. Another study in this vein, using data from enterprise surveys in Bangladesh and Sri Lanka, reached a broadly similar result: more reticent respondents were less likely to acknowledge involvement in bribery when asked directly, but did not appear to answer indirect questions about bribery by “establishments like yours” differently from other respondents. These findings could be interpreted as evidence that the indirect phrasing made reticent respondents more willing to answer truthfully. But they could just as easily be construed as evidence that both reticent and non-reticent respondents interpreted the question about “firms in your line of business” as a question about “firms in your line of business,” not about “your firm.” So although these studies show quite convincingly that social desirability bias affects answers to direct questions, they don’t really help us figure out whether indirect questioning solves the problem.
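For context, here is a minimal sketch of the forced-response variant of randomized response, the family of techniques these papers draw on. The design probabilities below are illustrative, not the ones used in the cited studies.

```python
# Forced-response randomized response, sketched with illustrative probabilities.
# A private randomizing device (say, a die roll the interviewer never sees)
# tells each respondent which rule to follow.
P_TRUTH = 0.5        # answer the sensitive question truthfully
P_FORCED_YES = 0.25  # say "yes" regardless of the truth
P_FORCED_NO = 0.25   # say "no" regardless of the truth

def estimate_prevalence(observed_yes_share: float) -> float:
    """Back out the true prevalence from the noisy aggregate of answers.

    P(observed "yes") = P_TRUTH * true_share + P_FORCED_YES,
    so true_share = (observed_yes_share - P_FORCED_YES) / P_TRUTH.
    """
    return (observed_yes_share - P_FORCED_YES) / P_TRUTH

# If 40% of all recorded answers are "yes", the implied true prevalence is 30%.
print(f"{estimate_prevalence(0.40):.0%}")
```

Because no single “yes” is incriminating under this design, a respondent who nonetheless says “no” more often than the design’s probabilities plausibly allow can be flagged as likely reticent; that, roughly, is the logic these studies use to identify reticent respondents.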

Ultimately, then, while I still believe that corruption experience surveys can provide us with a great deal of useful information, I do think we need to adopt a bit more of a critical or skeptical posture towards these surveys. Social desirability bias is a major problem, and at least one of the most popular techniques for mitigating such bias is at best unproven.

2 thoughts on “Can Indirect Questioning Induce Honest Responses on Bribery Experience Surveys?”

  1. This is an interesting article. Let me share my experience of how indirect questioning may also be subject to bias.
    If you are a company implementing strict anti-bribery policies and your competitors are more flexible in that respect (not an unusual situation), you may, out of frustration, overstate the extent of corruption in your sector.
    Another bias may kick in if a compliant company loses a bid to a competitor. Some in the company may blame this on the company’s policy, arguing that the bid was lost to a competitor who paid a bribe even if that is not true. They may also use that argument to advocate more flexibility with respect to bribes.


  2. To my great despair, researchers have long interpreted answers to indirect questions in surveys about bribery as if they reflected respondents’ own experience – rather than their estimation of what firms similar to theirs usually pay in bribes. In light of Clarke’s results, this still commonly accepted practice needs to be seriously questioned, as you suggest in your post.
