Today’s post is yet another entry in what I guess has become a mini-series on corruption experience surveys. In the first post, from a few weeks back, I discussed whether, when trying to assess and compare bribery prevalence across jurisdictions using such surveys, the correct denominator should be all respondents or only those who had contact with government officials. That post bracketed questions about whether respondents would honestly admit bribery in light of the “social desirability bias” problem (the reluctance to admit, even on an anonymous survey, that one has engaged in socially undesirable activities). My two more recent posts have focused on that problem, first criticizing one of the most common strategies for mitigating social desirability bias (indirect questioning), and then, in last week’s post, trying to be a bit more constructive by calling attention to one potentially more promising solution, the so-called unmatched count technique (UCT), also known as the item count technique or list method. Today I want to continue in that vein by calling attention to yet another strategy for ameliorating social desirability bias in corruption surveys: the “crosswise model.”
As with the UCT, the crosswise model was developed outside the corruption field (see here and here) and has been deployed in other areas, but it has only recently been introduced into survey work on corruption. The scholars responsible for pioneering the use of the crosswise model in the study of corruption are Daniel Gingerich, Virginia Oliveros, Ana Corbacho, and Mauricio Ruiz-Vega, in (so far) two important papers, the first of which focuses primarily on the methodology, and the second of which applies the method to address the extent to which individual attitudes about corruption are influenced by beliefs about the extent of corruption in the society. (Both papers focus on Costa Rica, where the survey was fielded.) Those who are interested should check out the original papers by following the links above. Here I’ll just try to give a brief, non-technical flavor of the technique, and say a bit about why I think it might be useful not only for academics conducting their particular projects, but also for organizations that regularly field more comprehensive surveys on corruption, such as Transparency International’s Global Corruption Barometer.
The intuition behind the crosswise model is fairly straightforward, though it may not click for everyone right away. Here’s the basic idea:
Suppose we want to find out how many people would be willing to bribe a police officer to get out of paying a traffic ticket. If we just ask them, on a survey, “Please answer true or false: In order to avoid paying a traffic ticket, I would be willing to pay a bribe to a police officer,” it’s quite possible that a substantial number of people will answer “false,” even if they know the statement is actually true for them, despite having been assured that the survey is anonymous. (This is especially so if they are interacting with the survey administrator live, rather than filling it out in writing.) We could try to avoid that problem by phrasing the question indirectly (“True or false: In order to avoid paying a traffic ticket, most people like you would be willing to pay a bribe to a police officer”), but as I discussed in my previous post, it is quite possible that many respondents will either take the question literally (and try to guess what other people would do) or will pick up on the interviewer’s nudge-nudge, wink-wink intent and still be reluctant to answer “true.”
So, what we want is a way to reassure respondents that their individual answers really will be anonymous, while still being able to extract, from all the survey responses aggregated together, meaningful information about attitudes toward paying a bribe to avoid a traffic ticket. The crosswise technique tries to do this by presenting respondents with two statements—one of which is the sensitive one, the other of which (1) is non-sensitive, (2) is one for which the survey administrators could not possibly know the answer with respect to any individual respondent, (3) has no relationship to the sensitive question, such that the answers to the two questions could not possibly be correlated, and (4) has a known distribution in the respondent population. Respondents are then asked to give one of two responses: They are told to give response A if either both statements are true or both statements are false (without revealing whether it’s “both true” or “both false”), and they are told to give response B if exactly one of the two statements is true (without revealing which one). Here’s the actual question that Gingerich et al. included in their survey in Costa Rica:
How many of the following statements are true?
- My mother was born in OCTOBER, NOVEMBER, OR DECEMBER
- In order to avoid paying a traffic ticket, I would be willing to pay a bribe to a police officer
Please indicate your answer below
A. Both statements are true OR neither statement is true
B. One of the two statements is true
Reminder: Your mother’s birthdate is unknown to anyone involved in the collection, administration, or analysis of this survey. As such, your confidentiality is guaranteed.
The trick is that the researchers know the distribution of October-December birthdays in the population (and can verify it). Unsurprisingly, it’s close to 25%. (It’s actually a bit higher, around 26.5%, but to keep things simple I’ll just treat it as 25% for now.) So, if everyone in the survey responded honestly and nobody was willing to bribe a police officer, 75% of respondents should give answer A and 25% should give answer B. If all respondents answer honestly and all of them are willing to bribe a police officer, then 25% should give answer A and 75% should give answer B. So, if we’re willing to assume that the technique works and respondents gave honest answers, the researchers can use the aggregate numbers to back out the percentage of respondents who think it’s OK to bribe a police officer. For instance, suppose 55% of respondents give answer A and 45% give answer B. If we assume honest responses, and a baseline rate of 25% of respondents whose mother has an October-December birthday, then we can infer that 40% of respondents think it’s OK to bribe a police officer. (Both statements are true for 10% of respondents and neither statement is true for 45% of respondents, and they all give response A. For 45%, one but not the other statement is true: 30% would pay a bribe but had a mother born in January-September, and 15% wouldn’t pay a bribe and had a mother born in October-December. Those two groups together give response B.) I won’t bother going into the mathematical details here, not least because that’s not where my competence lies. (Indeed, I fear a careful reader may find and point out a calculation error in the previous example!) But the intuition, I hope, should be clear.
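For the curious, the back-of-the-envelope inference above can be written down in a few lines of code. This is only an illustrative sketch of the simple point estimate implied by the example (the function name is mine, and the actual Gingerich et al. papers use a more sophisticated estimation framework than this):

```python
def crosswise_estimate(p_b, q):
    # Under honest responding, the share of respondents giving answer B
    # ("exactly one statement is true") is
    #     P(B) = pi * (1 - q) + (1 - pi) * q,
    # where pi is the prevalence of the sensitive trait and q is the known
    # population rate of the non-sensitive statement (q must differ from 0.5,
    # otherwise the aggregate answers carry no information about pi).
    # Solving for pi:
    return (p_b - q) / (1 - 2 * q)

# The worked example from the post: 45% give answer B, q = 25%.
print(round(crosswise_estimate(0.45, 0.25), 4))  # 0.4, i.e. 40% would pay the bribe
```

Note that if q were exactly 50% (say, a coin flip rather than a birth quarter), the denominator would be zero: answers A and B would each be given by half the sample no matter what, which is why the non-sensitive statement needs a known rate away from one-half.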
As some readers may have noticed, the crosswise technique described above bears a strong family resemblance to another approach, the so-called “randomized response technique,” where respondents are asked a sensitive question, and then told to privately flip a coin (or use some other randomizing device), and to give a truthful answer if, say, the coin comes up heads, but to say “yes” no matter what if the coin comes up tails. And indeed, the crosswise model is really just a variant on the more traditional randomized response technique—the question about the mother’s birthday (or whatever) is, from the researcher’s perspective, the statistical equivalent of a randomizing device. The advantage of the crosswise technique is thought to be twofold: First, some respondents may find the randomization component (the coin flip) confusing. Second, some respondents may feel reluctant to say “yes” when asked if they would or did pay a bribe, especially when asked by a live interviewer, even if the coin flip gives them a form of plausible deniability; the crosswise model eliminates the need to give an affirmative answer to a question about bribery at any point.
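To see the statistical kinship concretely, here is a sketch of the corresponding estimator for the forced-response coin-flip design just described. This parameterization is a standard textbook version of the randomized response technique, not something drawn from the papers discussed in this post:

```python
def forced_response_estimate(p_yes, p_heads=0.5):
    # Forced-response design: with probability p_heads (heads) the respondent
    # answers truthfully; otherwise (tails) they say "yes" regardless. So
    #     P(yes) = p_heads * pi + (1 - p_heads),
    # where pi is the prevalence of the sensitive trait. Inverting for pi:
    return (p_yes - (1 - p_heads)) / p_heads

# If 40% of respondents would in fact pay the bribe, we expect
# 0.5 * 0.4 + 0.5 = 70% "yes" answers; feeding 0.70 in recovers the 40%.
print(round(forced_response_estimate(0.70), 4))  # 0.4
```

The structural parallel to the crosswise estimator should be visible: in both cases a known randomizing distribution (coin flip, or birth quarter) is peeled away from the aggregate response rate to recover the sensitive prevalence. The difference is that here some respondents must literally utter “yes” to the bribery question, which is exactly what the crosswise design avoids.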
Of course, the crosswise model has its problems too. For starters, it may not be all that intuitive for some respondents. (Indeed, when I first encountered the discussion in the first Gingerich et al. paper that I read, I actually had to read the description of the technique twice before I got it. I’m probably much dumber than the average survey researcher, but only slightly dumber than the average survey respondent.) Second, the effectiveness of the technique depends on respondents being willing and able to give honest answers to questions about corruption so long as they have sufficient guarantees of anonymity. It won’t help if respondents suffer from a kind of cognitive dissonance, convincing themselves, for example, that they’re more honest than they really are. That, of course, is a concern for just about any research on individual corruption attitudes or behavior, so it’s not so much a criticism of the crosswise technique as a generic problem with all research in this vein. On top of these problems, any technique that relies on indirect inferences from aggregate statistics can introduce significant inefficiency and imprecision into the estimates.
On that last point, it’s worth noting that the Gingerich et al. papers are innovative not only in their use of the crosswise model in a corruption survey, but also in how they integrate the crosswise model with more traditional direct questioning on the same survey—employing what they refer to as a “joint response model.” I don’t really have time or space to go into that now, but if anything I’ve said in this post has piqued your interest (especially if you work for an organization that fields corruption surveys), I strongly suggest you read through the original papers.
The basic takeaway from this post, together with my last post, is that there are some promising emerging techniques for improving the quality of surveys on sensitive questions, which scholars are already employing in corruption research but which haven’t yet gone mainstream. My objective in these two posts is to try to nudge the mainstream to pay a bit more attention to these emerging techniques.
Thank you Matthew for this very interesting post. I find the crosswise model less convincing than the list method for eliciting truthful answers to corruption questions in surveys. My feeling is that the rules of the crosswise model may feel odd to a significant number of respondents. In the example you provide in your post, many respondents may feel – rightly – that their mother’s birth month is none of the interviewer’s business…. Those respondents will not see what this question has to do with the rest of the questionnaire and will feel suspicious about the motivation and seriousness of the survey. It would be interesting to evaluate how the supposedly non-sensitive statement that is paired with the corruption question influences answers. My intuition is that the choice of the non-sensitive statement matters, and for reasons that are difficult for survey designers to grasp. The coin-flipping version of the technique you also mentioned has similar issues. In a World Bank Enterprise Survey carried out in Nigeria in 2008 and 2009, surveyors who administered the questionnaire had to determine after the interview whether, in their opinion, respondents understood the coin-flipping instructions. They reported that as many as 14 percent of respondents did not understand how to answer randomized response questions (Bianca Clausen, Aart Kraay and Peter Murrell, WPS5415). This figure, quite striking already, is likely an underestimate: interviewers have little incentive to report that respondents did not understand the questions, as it is their responsibility to explain them to the respondents. Because of this issue, I have my doubts about the potential of the crosswise model to reduce social desirability bias.
This is a thought-provoking new method – and, as you’ve discussed, that may be both its strength and its weakness. I also had to re-read the description to understand how it works. If I were a respondent highly concerned about hiding my own willingness to offer a bribe, I would assume that the only safe answer is “neither,” since there are no consequences for misstating my mother’s birth month. I would be more likely to do this if I didn’t understand how the survey functioned. Nonetheless, as you’ve described, this type of concern is not specific to the model but rather a concern in all surveys of sensitive/embarrassing behavior.