In an early post Matthew predicted that the measurement of corruption was likely to be a major topic of discussion on this blog. So far his prediction has proved correct. Ten of the sixty-plus posts that have appeared since this blog was launched in mid-February have been devoted in whole or part to measurement issues: Are perception measures accurate? Useful whether accurate or not? What’s the source of the $1 trillion bribe estimate? Shouldn’t someone develop sub-national corruption perception measures? And so forth.
This eleventh post steps back from the policy issues examined in earlier ones to address a much more straightforward question: What are the different ways corruption can be measured?
Corruption measures divide into those based on a survey and those derived from empirical data. Transparency International’s Corruption Perceptions Index is surely the best-known survey measure, but it is only one type of survey-based measure: one based on perceptions, attitudes, or opinions. The second type of survey measure draws on respondents’ actual experience, in the form of answers to such questions as: Have you been solicited for a bribe in the last year? Have you had to pay a bribe to get a sewer or water connection? How much did you have to pay? Two examples of these “experiential” surveys are the World Bank’s Enterprise Surveys and the United Nations Office on Drugs and Crime’s Crime Victims Surveys. The former asks firms whether they have paid bribes and, if so, how much; the latter asks individuals about their experience with different forms of corruption.
Both experiential and perception surveys draw respondents either from a random sample of all citizens or from a narrower, pre-determined pool. Perception surveys often target investors, business executives, or other groups likely to have more developed views than the average citizen. While the answers to such “expert” surveys will thus be more informed, the disadvantage is that the experts surveyed may be subject to “group think.” Expert surveys also provide no information on the prevalence of corruption; random-sample surveys do. If, say, 20 percent of firms randomly sampled in a World Bank survey, or 20 percent of the individuals randomly surveyed in a UNODC survey, say they have had to pay a bribe, that is solid evidence of the incidence of bribery in the country or territory surveyed.
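The prevalence arithmetic behind that 20 percent figure can be sketched in a few lines. The sample numbers and the `bribery_prevalence` helper below are hypothetical, and the normal-approximation confidence interval is only one of several standard ways to quantify the sampling error in a random-sample survey:

```python
import math

def bribery_prevalence(num_reporting_bribe, sample_size, z=1.96):
    """Point estimate and approximate 95% confidence interval for bribery
    prevalence from a simple random sample (normal approximation)."""
    p = num_reporting_bribe / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)
    return p, (p - z * se, p + z * se)

# Hypothetical survey: 200 of 1,000 randomly sampled firms report paying a bribe.
estimate, (low, high) = bribery_prevalence(200, 1000)
print(f"Estimated prevalence: {estimate:.1%} (95% CI: {low:.1%} to {high:.1%})")
```

An expert panel of a few dozen investors offers no analogous calculation, which is the sense in which it cannot speak to prevalence.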
Measures based on empirical data are derived either from direct observation or from a comparison of different sources of information. One striking example of data based on direct observation is the diary Vladimiro Montesinos, an adviser to the now disgraced ex-President of Peru Alberto Fujimori, kept recording how much he paid legislators, other government officials, and reporters during Fujimori’s administration. A second is the reports of bribes clearing agents paid at the ports of Maputo, Mozambique, and Durban, South Africa, in the late 2000s. While direct observation data is not open to question on accuracy and provides unrivaled insight into corruption, for obvious reasons it is rarely available. Montesinos was apprehended before he could destroy his diary, and it took enormous time and effort, backed by the resources of the International Finance Corporation, to compile the Maputo and Durban data.
Far more common are empirical measures of corruption derived from a comparison of two different data sources. Some examples of the comparisons possible: 1) two official data sources: exports to China recorded by the Hong Kong government versus imports to China from Hong Kong recorded by the Chinese government; 2) official data versus independent calculation: the amount of money the central government of Uganda reported sending to local schools versus the amount of money each school actually received; 3) actual results versus expected results: the pricing pattern in the bids submitted on a public tender versus the pattern predicted by an economic model; 4) before and after: prices paid for hospital supplies before and after a crackdown on corruption in Buenos Aires.
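The first comparison above, matching one government’s recorded exports against the other’s recorded imports (so-called mirror trade statistics), can be sketched as follows. The product categories and dollar figures are invented for illustration; in real studies the unexplained gap serves only as a rough proxy for misreporting, since legitimate factors such as shipping costs and timing also create discrepancies:

```python
# Mirror trade statistics: compare the two official records category by category.
# All figures are hypothetical, in millions of USD.
hk_recorded_exports = {"electronics": 120.0, "textiles": 45.0}
cn_recorded_imports = {"electronics": 95.0, "textiles": 44.0}

for category, exported in hk_recorded_exports.items():
    imported = cn_recorded_imports[category]
    gap = exported - imported  # positive gap suggests under-reported imports
    print(f"{category}: gap of {gap:.1f}M ({gap / exported:.0%} of recorded exports)")
```

A large gap in one category relative to the others is the kind of red flag researchers then probe for evasion of tariffs or capital controls.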
Survey measures are almost always cheaper than those based on empirical data. In addition, the same survey questions can be asked in different nations or at different times, making it easy to compare results across countries or within one nation over time. Cost and ease of comparison account for their widespread use. On the other hand, survey measures are haunted by questions of reliability (will two surveys taken at the same time produce the same result?) and validity (is the survey actually measuring the level of corruption?) that do not trouble empirical measures.
This post draws on three recent, excellent discussions of corruption measurement: Ben Olken and Rohini Pande’s Corruption in Developing Countries, Sandra Sequeira’s Advances in Measuring Corruption in the Field, and Eric Zitzewitz’s Forensic Economics. Readers seeking more information are well-advised to consult them.
Some useful sources of information for measuring corruption include:
(1) Governance Indicators: A Users’ Guide by UNDP and EU
(2) A Users’ Guide to Measuring Corruption by UNDP and Global Integrity
(3) Transparency International’s Gateway: Mapping the Corruption Assessment Landscape
Thanks! They are good additions to my list.
Rick,
This is a very helpful overview – thanks for providing the bird’s-eye view of these different measurement techniques.
A small quibble: You distinguish measures based on “surveys” from those based on “empirical data” — but the results of surveys _are_ “empirical data”. They may be incorrect (perceptions may be wrong, people may report their own experiences inaccurately, etc.), but other data sources may be wrong as well (customs officers may mis-record quantities or values of goods moving through a port, Montesinos may not always have accurately recorded his unlawful payments, etc.). I think this is mostly a terminological gripe (all these kinds of data are “empirical”), but not entirely — with any source of data, we need to inquire as to its validity and reliability.
Second, there’s one other source of data that some research uses, which you didn’t mention: law enforcement data (number of convictions, settlements, etc.). Using that data is usually deeply problematic, because it depends not only on “true” underlying corruption, but also on law enforcement effort and strategy, which is endogenous. But some researchers have argued that in some cases these problems are mitigated — for example, several papers look at US federal convictions of state & local officials, and compare results across states. And one of the papers I discussed in an earlier post looked at US FCPA settlements involving various countries, attempting to control for US exports to those countries. To be clear, I think these approaches have important problems and limitations, but nonetheless, for completeness, it might be worthwhile to include them in your typology of corruption measures.