Last month, the European Commission released a comprehensive report on corruption in the EU, based on two perception surveys (one of the general population and one of businesspeople) as well as existing public data. One of the report’s most striking findings was the prevalence of perceived corruption among the general public: over 75% of Europeans surveyed thought corruption was “widespread” in their country – even in countries where very few respondents had personally experienced or witnessed corruption.
The EU Report is not the first study to find a sizeable gap between people’s perception of corruption’s prevalence and their reported personal experience with corruption. What explains this gap? The two most common explanations are: (1) perceptions of corruption overstate true corruption (as perceptions may be swayed by sensationalistic media reports, and perhaps skewed by factors like ethnic heterogeneity and low social engagement, or because of different understandings of what “corruption” means); (2) self-reported experiences with corruption understate true corruption, because people do not respond truthfully to questions about their personal experience even when anonymity is guaranteed.
But there is another possibility, which highlights a limitation of studies that compare only general perceptions of corruption with direct, personal experience with corruption: These surveys typically fail to account for “tells” – observable indications of potential corruption.
Tells are the day-to-day visual images or interactions that drive people’s perceptions of corruption: a luxury car driven by a public official, absenteeism, bottles of high-end cognac on store shelves, a corporate social responsibility project in the neighboring town, or even an oil company’s promotional calendar sitting on a politician’s desk. They’re little signs of graft that lead to inferences of corruption. Today, most corruption indices get at perception by asking questions about (1) personal experiences with actual corrupt acts and/or (2) the impact of mass media. But they don’t ask about observable environmental cues.
I don’t mean to imply that corruption “tells” completely explain the gap between experience and perception – just that they may be playing some role, which most existing corruption surveys don’t get at. It might therefore be helpful to expand corruption surveys to include follow-up questions that encourage participants to articulate what “tells”, if any, are driving their perception. For example, the EU Report asked respondents how widespread they think corruption is in their country – very widespread, fairly widespread, fairly rare, very rare, no corruption, or don’t know. For those respondents who reply “very widespread” – as many in the EU survey did – the survey could and should have asked the follow-up question: “What observations or interactions led you to perceive corruption as very widespread?”
To be sure, it’s not entirely clear what re-structured surveys would show. Knowing tells could make survey results more accurate by reflecting overlooked metrics of corruption and teasing out some cultural aspects of perception. On the other hand, tells are anecdotal and centered solely on the perceiver’s experience, so accounting for them could make results less accurate. After all, tells can mislead: the politician with the promotional calendar on his desk is not necessarily more corrupt than the politician with an unadorned desk. That said, a better understanding of corruption’s tells would add important nuance to corruption survey results, and make country-to-country comparisons like the EU report’s more meaningful.