“. . . [S]ound policies require good information – about the existence, nature, and causes of a problem, about the costs and benefits to the affected public of various possible solutions to the problem, and about the effectiveness of current policies.” Peter H. Schuck, Why Government Fails So Often: And How It Can Do Better. Princeton: Princeton University Press, 2014, p. 162.
Few axioms of policymaking would seem as self-evident as the one above, and few are so often honored in the breach. Developing the knowledge required for good policymaking can be expensive, time-consuming, and intellectually challenging. At the same time, policymakers are often under pressure to act: the problem is urgent, the public demands a solution, and they want to address the nation’s ills, or at least appear to address them, quickly. So policy is made on the basis of incomplete data, hunches, intuition, and plain guesswork. The unfortunate result, as the title of Schuck’s book advertises, is almost always policy failure.
Anticorruption is an area that seems particularly prone to policymaking on the fly. In 2007 the U4 Anticorruption Resource Centre examined different countries’ experiences developing and implementing a national anticorruption strategy. A major finding: “information, knowledge, and understanding of corruption continue to be a great weakness for the formulation and prioritization of anticorruption initiatives . . . .” A more recent review of national anticorruption strategies that Matthew and I have underway for the UNODC suggests matters have changed little in the intervening years. Countries as different as India, Bosnia-Herzegovina, and Thailand have constructed detailed, complex strategies for combating corruption on a thin to non-existent knowledge base.
Given the challenges of building a sound knowledge base for anticorruption policymaking, it is easy to understand why this critical step in the process is so often ignored.
Start with the most basic information required for anticorruption policymaking: how bad is the problem? How much corruption is there really? Answering just this question can be a daunting, and discouraging, task.
Direct evidence of corruption, though occasionally revealed in court cases, is rare, and even when information from a case surfaces, there is no way to tell whether the conduct revealed is widespread or rare. Does the case represent a deviation from the norm? Or is it an example of the prevailing norm?
The most readily available information on the extent of corruption in a country — or a province, city or other region for that matter — remains data drawn from surveys. A representative sample of the public, businesses, public servants, or those with special expertise is asked about their perceptions of corruption or their actual experience with it or some combination of perceptions and experiences.
Perception surveys solicit opinions about the extent and nature of corruption, questioning respondents about how serious they think the corruption problem is and whether they believe it has improved or worsened in the past year. Besides the general public, perception surveys may target investors or business executives or other groups likely to have more informed views than the average citizen. This can help when information about corruption in the purchase of arms, the procurement of public works, and other types of “grand corruption” is sought, areas where citizens likely have little knowledge on which to base their perceptions.
The question that looms over perception surveys is how closely perceptions of corruption track the actual level of corruption. The evidence is growing that the answer is “not very closely.” One reason, as the Gallup organization explains, is that citizens in some countries may be reluctant to answer truthfully for fear of being seen to criticize their government; a second, as Donchev and Ujhelyi found, is that corruption perceptions are skewed by respondents’ income and education; and a third, as Sequeira observes, is that respondents may have different ideas about what “corruption” means. Doubts about accuracy extend to all perception surveys, even the best known, Transparency International’s Corruption Perceptions Index. (A particular issue with the TI index, important when selecting measures to evaluate an anticorruption effort, is that its results cannot be compared from year to year (as Matthew again emphasized yesterday), and it thus cannot be used as a baseline to assess a policy’s effectiveness in controlling corruption.)
Experience surveys query citizens or firms about how often they have paid a bribe in the past year; many also seek information about the agency to which the bribe was paid and the purpose of the payment. Transparency International’s 2013 Global Corruption Barometer is an example. Some 114,000 citizens in 107 countries were asked whether anyone in their household had had to pay a bribe to the police, health care workers, or other public service providers in the last two years. The World Bank Enterprise Surveys ask firms similar questions: have they had to pay a bribe to obtain power, water, or phone service or a construction or import license? Have they ever had to bribe a tax auditor? Assuming respondents are willing to admit to a stranger, either in person or over the phone, that they have paid a bribe, and that those surveyed are a representative sample of all citizens or firms, experience surveys are a reliable, valid measure of bribery. On the other hand, experience surveys provide no information on forms of corruption outside citizens’ daily experience: bid-rigging cartels and kickback schemes in public procurement; conflicts of interest; influence peddling; profiting from confidential information.
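To make the arithmetic behind such surveys concrete, here is a minimal sketch of how a bribery rate and its margin of error might be computed from experience-survey responses. The function name and the figures are invented for illustration; nothing here is drawn from the Barometer itself.

```python
import math

def bribery_rate_estimate(responses):
    """Estimate the share of respondents reporting a bribe, with a 95%
    confidence interval (normal approximation to the binomial)."""
    n = len(responses)
    p = sum(responses) / n           # sample proportion reporting a bribe
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, (p - 1.96 * se, p + 1.96 * se)

# Invented data: 270 of 1,000 households surveyed report paying a bribe.
rate, (lo, hi) = bribery_rate_estimate([True] * 270 + [False] * 730)
```

The interval narrows as the sample grows, which is one reason large samples like the Barometer’s matter: estimates built on a few hundred responses carry wide margins of error, and the representativeness assumption noted above does the rest of the work.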
Surveys are not the only source of information on corruption that can be derived from a large number of responses or observations. Other forms of quantitative data useful in assessing the dimensions of the corruption problem are generated in the course of providing public services or managing benefit programs. The Ugandan Inspectorate of Government’s 2014 corruption tracking report is an instructive example. It compiles administrative data from several sources to measure corruption. The sources include: i) the number of corruption complaints citizens file about different departments; ii) the percent of government contracts completed on time and on budget; iii) the number of central government agencies and local government units not receiving a clean audit from the supreme audit agency; and iv) the percentage of corruption cases successfully prosecuted. Unlike survey data, none of these indicators measures corruption directly. All must be interpreted in light of other information. Government agencies may fail an audit not because of corruption but for lack of trained staff to comply with audit procedures; public contracts may run over time and budget because of inadequate preparation or unforeseen events. This data has three advantages: it is cheap; it is often already being collected for other purposes; and, unlike perception survey data, it is not open to question.
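The compilation step itself is simple enough to sketch. The code below is a hypothetical illustration of gathering the four indicators from raw administrative records; all field names and data are made up, not taken from the Ugandan report.

```python
from collections import Counter

def indicator_summary(complaints, contracts, audits, cases):
    """Compile the four administrative indicators.
      complaints: department name, one entry per complaint filed
      contracts:  (finished_on_time, finished_on_budget) per contract
      audits:     (agency_name, received_clean_audit) per audit
      cases:      True per corruption case successfully prosecuted
    """
    return {
        "complaints_by_department": Counter(complaints),
        "pct_contracts_on_time_and_budget":
            100 * sum(1 for t, b in contracts if t and b) / len(contracts),
        "agencies_without_clean_audit":
            sum(1 for _, clean in audits if not clean),
        "pct_cases_successfully_prosecuted": 100 * sum(cases) / len(cases),
    }

summary = indicator_summary(
    complaints=["Police", "Health", "Police"],
    contracts=[(True, True), (True, False), (False, False), (True, True)],
    audits=[("Agency A", True), ("Agency B", False), ("Agency C", False)],
    cases=[True, False, True, True],
)
```

The point of the sketch is how little the computation demands: counts and percentages, no survey design or sampling frame. The hard part, as noted above, is interpretation, since none of the four numbers measures corruption directly.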
A third technique for measuring corruption that has gained ground in recent years is the comparison of two different sources of quantitative data. Four examples of the comparisons possible are discussed in an earlier post. This technique provides the most valid and reliable assessment of corruption, but it measures only a particular form of it. It demands, moreover, not only accurate data from the different sources but also the technical skill to conduct the comparisons – two challenges for poorer, less developed states.
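As a stylized illustration of the technique (the particular comparison and the figures are hypothetical, not taken from the earlier post), consider comparing the funds a central ministry records as disbursed with the funds front-line units report receiving, the comparison that underlies public expenditure tracking surveys:

```python
def leakage_rate(disbursed, received):
    """Percentage of disbursed funds that never reached the front line.
    A gap between the two records is a red flag, not proof of theft:
    like the audit and contract indicators discussed above, it must be
    interpreted in light of other information."""
    return 100 * (disbursed - received) / disbursed

# Invented figures: the ministry records 100m disbursed to schools,
# but a survey of the schools finds only 76m arriving.
pct_lost = leakage_rate(disbursed=100_000_000, received=76_000_000)
```

The arithmetic is trivial; the expense lies in assembling two independent, accurate records of the same flow, which is exactly the data-quality challenge flagged above.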
The principal advantage of quantitative studies is their objectivity. Assuming survey responses are accurately transcribed and data on complaints, procurement, and case outcomes are correctly recorded, if one-third of survey respondents reported corruption was a serious problem, or 65 percent of local governments failed to secure a clean audit, who compiles and reports the data makes no difference. He or she may support or oppose the government, think corruption is an overstated or understated problem, or be biased in some other way. These views will not matter; the results will not differ.
On the other hand, quantitative studies have their limitations. They can be expensive and demand a high degree of technical skill to prepare. Many require data that less developed countries may not have. Surveys, for example, depend upon the existence of current census data to ensure the sample drawn is representative. Nor do quantitative studies provide a complete picture of the incidence of corruption. To date, none are available that directly measure conflict of interest, nepotism, abuse of office, and other severe, and perhaps pervasive, corruption crimes.
Even with these limitations, if policymakers have a sufficient number and variety of quantitative reports, a rough picture of the corruption landscape will emerge, one that will sometimes be sufficient for policymaking purposes. A combined perception and experience survey the Government of Zambia conducted before drafting a national anticorruption strategy showed seven government departments to be particularly corrupt. As a result, the strategy provided for piloting enhanced corruption prevention programs in those seven.
The biggest disadvantage of quantitative studies is that they do not answer many questions critical for devising an effective anticorruption policy. Are there gaps in a country’s anticorruption laws that leave some offenses unpunished? How “well” is the anticorruption law being enforced? What is the cause of corruption – in the road sector, health care, the nation as a whole? What are the costs and benefits of an income and asset declaration law? Of introducing another layer of review into the procurement process?
If garnering enough information about the level of corruption to make sound policy weren’t challenge enough, what about divining answers to these questions, or at least enough of an answer to avoid policy failure? Surely grist for future posts.