Anticorruption Policymaking: The Critical Role of Information

“. . . [S]ound policies require good information – about the existence, nature, and causes of a problem, about the costs and benefits to the affected public of various possible solutions to the problem, and about the effectiveness of current policies.” Peter H. Schuck, Why Government Fails So Often: And How it Can Do Better. Princeton: Princeton University Press, 2014, p. 162.

Few axioms of policymaking would seem as self-evident as the one above, and few are so often observed in the breach.  Developing the knowledge required for good policymaking can be expensive, time-consuming, and intellectually challenging.  At the same time, policymakers are often under pressure to act: the problem is urgent, the public demands a solution, and officials want to address the nation’s ills, or at least appear to address them, quickly.  So policy is made on the basis of incomplete data, hunches, intuition, and plain guesswork.  The unfortunate result, as the title of Schuck’s book advertises, is almost always policy failure.

Anticorruption is an area that seems particularly prone to policymaking on the fly.  In 2007 the U4 Anti-Corruption Resource Centre examined different countries’ experiences developing and implementing national anticorruption strategies.  A major finding: “information, knowledge, and understanding of corruption continue to be a great weakness for the formulation and prioritization of anticorruption initiatives . . . .”  A more recent review of national anticorruption strategies Matthew and I have underway for the UNODC suggests matters have changed little in the intervening years.  Countries as different as India, Bosnia-Herzegovina, and Thailand have constructed detailed, complex strategies for combating corruption on a thin to non-existent knowledge base.

Given the challenges of building a sound knowledge base for anticorruption policymaking, it is easy to understand why this critical step in the process is so often ignored.

Start with the most basic information required for anticorruption policymaking: how bad is the problem?  How much corruption is there really?  Answering  just this question can be a daunting, and discouraging, task.

Direct evidence of corruption, though occasionally revealed in court cases, is rare, and even when information from a case surfaces, there is no way to tell whether the conduct revealed is widespread or exceptional.  Does the case represent a deviation from the norm?  Or is it an example of the prevailing norm?

The most readily available information on the extent of corruption in a country — or a province, city or other region for that matter — remains data drawn from surveys.  A representative sample of the public, businesses, public servants, or those with special expertise is asked about their perceptions of corruption or their actual experience with it or some combination of perceptions and experiences.

Perception surveys solicit opinions about the extent and nature of corruption, questioning respondents about how serious they think the corruption problem is and whether they believe it has improved or worsened in the past year.   Besides the general public, perception surveys may target investors or business executives or other groups likely to have more informed views than the average citizen.  This can help when information about corruption in the purchase of arms, the procurement of public works, and other types of “grand corruption” is sought, areas where citizens likely have little knowledge on which to base their perceptions.

The question that looms over perception surveys is how close the relationship is between perceptions of corruption and the actual level of corruption.  The growing evidence suggests the answer is “not very close.”  One reason, as the Gallup organization explains, is that citizens in some countries may be reluctant to answer truthfully for fear of being seen to criticize their government; a second, as Donchev and Ujhelyi found, is that corruption perceptions are skewed by respondents’ income and education; and a third, as Sequeira observes, is that respondents may have different ideas about what “corruption” means.  Doubts about accuracy extend to all perception surveys, even the best-known one, Transparency International’s Corruption Perceptions Index.  (A particular issue with the TI index, important when selecting measures to evaluate an anticorruption effort, is that its results cannot be compared from year to year (as Matthew again emphasized yesterday), and it thus cannot serve as a baseline for assessing a policy’s effectiveness in controlling corruption.)

Experience surveys query citizens or firms about how often they have paid a bribe in the past year; many also seek information about the agency to which the bribe was paid and the purpose of the payment.  Transparency International’s 2013 Global Corruption Barometer is an example: some 114,000 citizens in 107 countries were asked whether anyone in their household had had to pay a bribe to the police, health care workers, or other public service providers in the last two years.  The World Bank Enterprise Surveys ask firms similar questions: Have they had to pay a bribe to obtain power, water, or phone service, or a construction or import license?  Have they ever had to bribe a tax auditor?  Assuming respondents are willing to admit to a stranger, either in person or over the phone, that they have paid a bribe, and that those surveyed are a representative sample of all citizens or firms, experience surveys are a reliable, valid measure of bribery.  On the other hand, experience surveys provide no information on forms of corruption outside citizens’ daily experience: bid-rigging cartels and kickback schemes in public procurement, conflicts of interest, influence peddling, profiting from confidential information.
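Because experience surveys yield only sample estimates, any reported bribery rate carries sampling error.  A minimal sketch, using entirely hypothetical figures (the function name and numbers are illustrative, not drawn from the Barometer or Enterprise Surveys), of turning raw responses into a rate with a normal-approximation margin of error:

```python
import math

def bribery_rate_estimate(paid_bribe: int, sampled: int, z: float = 1.96):
    """Share of respondents reporting a bribe, with a 95%
    normal-approximation margin of error (hypothetical figures)."""
    p = paid_bribe / sampled
    moe = z * math.sqrt(p * (1 - p) / sampled)
    return p, moe

# Illustrative only: 270 of 1,000 respondents report paying a bribe.
rate, moe = bribery_rate_estimate(270, 1000)
print(f"Estimated bribery rate: {rate:.1%} ± {moe:.1%}")
```

The margin of error shrinks with sample size, which is one reason surveys like the Barometer rely on samples of a thousand or more respondents per country.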

Surveys are not the only source of information on corruption that can be derived from a large number of responses or observations.  Other forms of quantitative data useful in assessing the dimensions of the corruption problem are generated in the course of providing public services or managing benefit programs.  The Ugandan Inspectorate of Government’s 2014 corruption tracking report is an instructive example.  It compiles administrative data from several sources to measure corruption.  The sources include: i) the number of corruption complaints citizens file about different departments; ii) the percent of government contracts completed on time and on budget; iii) the number of central government agencies and local government units not receiving a clean audit from the supreme audit agency; and iv) the percentage of corruption cases successfully prosecuted.  Unlike survey data, none of these indicators measures corruption directly.  All must be interpreted in light of other information.  Government agencies may fail an audit not because of corruption but for lack of trained staff to comply with audit procedures; public contracts may run over time and budget because of inadequate preparation or unforeseen events.  The advantages of these data are three: they are cheap, they are often already collected, and, unlike perception survey data, their accuracy is not open to question.
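As a rough illustration of how administrative records of the kind the Ugandan report compiles might be turned into an indicator, here is a sketch with invented records and field names; as the paragraph above notes, the result is a proxy that still requires interpretation, not a direct measure of corruption:

```python
# Hypothetical audit records; unit names and fields are illustrative.
audits = [
    {"unit": "District A", "clean_audit": True},
    {"unit": "District B", "clean_audit": False},
    {"unit": "District C", "clean_audit": False},
    {"unit": "District D", "clean_audit": True},
]

def failed_audit_share(records):
    """Share of government units that did NOT receive a clean audit.
    A proxy indicator: failures may reflect weak capacity or poor
    record-keeping rather than corruption."""
    failed = sum(1 for r in records if not r["clean_audit"])
    return failed / len(records)

print(f"{failed_audit_share(audits):.0%} of units failed their audit")
```

Tracked over several years, a falling failure rate is at least suggestive of improvement, even though any single year's figure is ambiguous.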

A third technique for measuring corruption that has gained ground in recent years is the comparison of two different sources of quantitative data.  Four examples of the comparisons possible are discussed in an earlier post.  This technique provides the most valid and reliable assessment of corruption, but it measures only a particular form of it.  It also demands not only accurate data from the different sources but the technical skills to conduct the comparisons, two challenges for poorer, less developed states.
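One well-known instance of the two-source technique compares central-government disbursement records with the amounts front-line facilities report receiving, as in public expenditure tracking surveys; the gap between the two sources is read as “leakage.”  A minimal sketch with purely illustrative numbers (the function name and figures are assumptions, not data from any actual survey):

```python
def leakage_rate(disbursed: float, received: float) -> float:
    """Fraction of funds lost between the disbursing ministry and the
    front-line provider, inferred by comparing two independent data
    sources (ministry records vs. a facility survey)."""
    return (disbursed - received) / disbursed

# Hypothetical: ministry records show 100m disbursed; a facility
# survey finds only 36m arriving at schools.
print(f"Estimated leakage: {leakage_rate(100.0, 36.0):.0%}")
```

The arithmetic is trivial; the hard, expensive part is what the paragraph above flags: collecting accurate figures from both sources so the gap reflects diversion rather than measurement error.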

The principal advantage of quantitative studies is their objectivity.  Assuming survey responses are accurately transcribed and data on complaints, procurement, and case outcomes are correctly recorded, if one-third of survey respondents reported corruption was a serious problem, or 65 percent of local governments failed to secure a clean audit, who compiles and reports the data makes no difference.  He or she may support or oppose the government, think corruption is an overstated or understated problem, or be biased in some other way.  These views will not matter; the results will not differ.

On the other hand, quantitative studies have their limitations.  They can be expensive and demand a high degree of technical skill to prepare.  Many require data that less developed countries may not have.  Surveys, for example, depend upon the existence of current census data to ensure the sample drawn is representative.  Nor do quantitative studies provide a complete picture of the incidence of corruption.  To date, none are available that directly measure conflict of interest, nepotism, abuse of office, and other severe, and perhaps pervasive, corruption crimes.

Even with these limitations, if policymakers have a sufficient number and variety of quantitative reports, a rough picture of the corruption landscape will emerge, and it will sometimes be sufficient for policymaking purposes.  A combined perception and experience survey the Government of Zambia took before drafting a national anticorruption strategy showed seven government departments to be particularly corrupt.  As a result, the strategy provided for piloting enhanced corruption prevention programs in the seven.

The biggest disadvantage of quantitative studies is that they do not answer many questions critical for devising an effective anticorruption policy.  Are there gaps in a country’s anticorruption laws that leave some offenses unpunished?  How “well” is the anticorruption law being enforced?  What is the cause of corruption – in the road sector, health care, the nation as a whole?  What are the costs and benefits of an income and asset declaration law?  Of introducing another layer of review into the procurement process?

If garnering enough information about the level of corruption to make sound policy weren’t enough of a challenge, what about divining answers to these questions, or at least enough of an answer to avoid policy failure?  Surely grist for future posts.

9 thoughts on “Anticorruption Policymaking: The Critical Role of Information”

  1. Maybe a decade back, Daniel Kaufmann jotted down this formula for anticorruption:
    AC = L + KI + CA
    where L stands for leadership, that is, political commitment;
    KI stands for knowledge and information; in fact, information is knowledge and knowledge is power (this is what Rick is talking about); and
    CA stands for collective action, that is, engagement of common people and civil society members in the anticorruption drive.
    Therefore, an anticorruption strategy must have all three elements built in.

  2. While policies are great, perhaps principles might be effective as well? We could begin from structured anticorruption principles rather than detailed, data-hungry policies, and then emphasize enforcement. If we are successful, we could make the principle the norm.
    And to overstretch the analogy: the Old Testament’s ten commandments have survived thousands of years, why can’t anticorruption?

  3. I am not very familiar with the research in this area, but given just how predominant data analysis is becoming in approaches to–well, I would say development, etc., but I suppose “everything” is accurate as well–I’m surprised there haven’t been more quantitative studies, even if only on the micro level. Yes, it seems like setting up an experiment deliberately might not be possible, but statisticians always seem to find such surprising nooks and crannies of events in the world that make for unexpected case studies. All the interest from, for example, many of the people involved in aid efforts now seems to be in “measurable returns,” so, even if there’s some reason to be skeptical of some of those efforts and even if the questions here often seem to be so big-picture that they’re particularly difficult to assess, you’d think someone out there would be trying to do this sort of analysis.

  4. Rick, your points are well taken and, like Katie, I’m eager to see more analysis of the nature and extent of corruption. But, in reviewing national anticorruption strategies, I’m often left feeling a bit overwhelmed. There are so many overlapping, moving parts to endemic corruption – corruption in procurement runs into corruption in oversight bodies runs into corruption in the judiciary, etc. More specific information would help to clarify the picture, but I’m still left wondering how you would prioritize among areas in need of intervention when those areas do not exist in a vacuum. I think the pivotal role of leadership, which Narayan points out, exacerbates the challenges of measuring corruption. How do you account for the impact of particular heads of anticorruption agencies, prosecutors, judges, etc.?

    I suppose I share Funmi’s sentiment that principles matter, likely because they bear on the collective action that Narayan mentioned. I certainly think more information would be useful, particularly in measuring the impact of given policies. But how can we best use data in crafting preventative policies? I suppose that’s a question we can save for when we have the information.

    • After some reflection, I’m starting to think I might be even more concerned about all those overlapping parts preventing people from investing in these sorts of issues. It still seems like many of the statistics we successfully measure would be similarly difficult to isolate, so this goal should be similarly achievable. However, if measuring corruption really is one of the hardest things to do (or is perceived to be), maybe the stats people, philanthropists, NGOs, etc. will largely exclude addressing corruption from the sorts of work they take on, in favor of more easily measurable returns (it sounds like they largely already are doing this, from what all of you are reporting, but I suppose I’m now just extra worried that that trend is unlikely to change). Still, maybe it’s overly optimistic, but the progress we’ve made with quantitative studies on other issues makes me think there’s still room for hope.

  5. I’d like to echo Liz and Funmi’s mention of principles, if only because it seems like when it comes to anti-corruption information, some of the hardest hit countries will always be playing catch-up. While more data, and better data, is unquestionably important, I also don’t think it’s feasible for a large part of the developing world. Focusing research and data compilation resources on countries/sectors for which the data could be most appropriately applied to the widest set of countries would be worthwhile, but would also be side-stepping one of the very points you bring up, Rick: that we don’t have specific data.

    So what do we do in situations where we do not have good info, and where we’re not going to have it (at least in the near term)? Focusing on principles is appealing because we can draw from disciplines further afield. A constant in corruption is that it involves human beings weighing interests; perhaps principles drawn from psychology or game theory (fields with considerable quantitative research behind them) could be incorporated into education or corporate/public service training programs? We’d still be going out on a limb in terms of how effective these principles would be at reducing corruption, but we’d at least have a better foundation (in terms of research) for anticipating what effect they would have on human behavior.

    • I agree with Mel and several of the other commenters that, especially in anti-corruption, principles are important. I do not see the danger in enforcing anti-corruption rules for departments or agencies that are ‘corrupt, but not the most corrupt.’ If, due to lack of data, the government fails to target some of the most corrupt groups that is bad, but not worse than if they did nothing.

      Where I see issues with moving forward without sufficient data is, as Bea mentions, in monitoring and enforcement. At some point in the chain, some entity, like the police, prosecutors, and/or the judiciary, needs to be at least somewhat trustworthy to go after corruption. If they, too, are corrupt (which in theory might not be knowable without the data), then the whole scheme will fall apart, and it might in fact be counterproductive if corrupt actors or the public take it as a sign that corruption will always triumph and never be punished.

      Of course, having the data is always better. The question is whether it is worth moving ahead anyway if it is impractical or unaffordable to collect the data. I agree with Mel that, at least when it comes to anticorruption, it is still possible to move forward.

  6. The point raised in the comments about principles over policies is interesting, but I worry a great deal about the challenges a principle-centric approach would pose to monitoring and enforcement. In my review of national anticorruption strategies, those strategies that build off a limited knowledge base (and thus, as Rick notes in his post, fail to diagnose the problem with any clarity) tend to issue broad, sweeping claims that efforts should be bolstered in just about every sector. It leaves one to wonder where priorities will lie and how the government and civil society can be expected to measure progress.

    And I do think there is much to be said for the ability to measure progress. There is an obvious accountability issue, but detailed monitoring could also provide helpful information about what strategies do and do not work. If, for example, salary increases or technological innovation have led to improved perception responses, then similar measures could be implemented in other problematic areas of government. And, of course, less effective measures could be used more sparingly in the future.

    Melanie’s point about the difficulties of obtaining knowledge in developing countries is well taken, but I also believe that there may be some important benefits to having such knowledge, particularly when it comes to resource allocation. I wonder if the costs of carrying out such studies could be offset in the long run when the questions generate better answers.

  7. Thanks for all the thoughtful replies. A couple of observations —

    On the principles versus policies difference. In the U.S. “policy” can mean one of several things. A statement of broad goals: “Our policy is to improve relations between the police and minority communities.” It can be a statement of what the speaker actually does: “Our policy is to hire minorities.” Or it can be a set of actions that we believe will produce a given result: “We are hiring more anticorruption investigators and prosecutors in order to increase the level of deterrence and therefore bring corruption levels down.” In the post I use “policy” in the latter sense, and the need for a solid information base arises directly from that usage. What evidence do we have that hiring more anticorruption investigators and prosecutors will increase deterrence? It may turn out that there is a threshold effect at work. If there are very few enforcers to begin with and only a few more are hired, perhaps we won’t reach the level where more enforcement produces less corruption.

    “Principles” has several meanings as well. One principle is that an increase in deterrence leads to less crime. If that is how it is being used, then there is little difference between the principle of stepping up enforcement to reduce corruption and the policy of hiring more enforcers to deter it. One just provides more detail about the intended action.

    The problem with some principles is that they are so indeterminate that they offer no guide to action. Take the commandment “thou shalt not kill.” Stated this way, it is far too vague to be of use. All statutes that make intentional killing a crime (at least all of which I am aware) provide a defense of justification. See Aeschylus’ The Eumenides for an early example (Orestes’ killing of his mother justified because she murdered his father and her husband Agamemnon).

    On the corruption problem being so complex that trying to find ways to address it is likely to induce paralysis, Albert Hirschman provides a nice reply in Development Projects Observed (Brookings Institution, 1967). He explains that many projects in developing countries face innumerable hurdles to completion and that if planners had been cognizant of all of them in advance, they would never have begun the project. But many do get completed, because as unanticipated problems arise managers find creative ways to solve them, calling forth ex post creativity that would never have appeared ex ante. Hirschman goes so far as to give the phenomenon a name: the “Hiding Hand.” Were it not for the Hiding Hand, there might have been no development in many countries. (In a preface to the 2015 re-issue of the volume, HLS Professor Cass Sunstein nicely glosses Hirschman’s argument, noting among other things how it anticipates several insights of behavioral economics.)

    Although not perfect, there is an analogy here to anticorruption strategies. If we thought through all the complexities, we might never do anything. Or we might be stuck in an “analysis paralysis” trap where we keep asking ever more questions about the phenomenon, its causes, and possible solutions. In the case of anticorruption strategies, it is the feedback from monitoring and evaluation, a byproduct of the sound knowledge base required to launch the strategy, that prompts creative responses to unanticipated problems.

