We’ve had a series of posts this week (from Michael, Rick, and Addar) about the vexed question of how to measure corruption–including controversies over whether the popular perception-based measures, like Transparency International’s Corruption Perceptions Index (CPI), do so accurately enough to be useful proxies.
But in addition to that discussion — and picking up on something Rick touched on in the latter half of his post — I think it may be worth raising two questions: (1) whether something like the CPI might have desirable effects even if the CPI score is not a particularly good indicator of true corruption, and (2) whether the CPI might have bad effects even if it’s actually quite an accurate measure. To be clear, my very strong predisposition is that we should try to produce and publicize accurate measures of corruption. But it’s at least worth thinking about the possibility that the social value of an indicator may not depend entirely on the accuracy of that indicator–and about what implications might follow from that observation.
So, first, let’s suppose that, as many critics have argued, the CPI rankings are not particularly accurate indicators of corruption levels. Maybe we’re pretty confident of the extreme cases—corruption is likely a bigger problem in Nigeria than in Finland, for example. But we knew that without the CPI; the CPI gets a lot of attention because it purports to evaluate, say, the amount of corruption in Ethiopia relative to Indonesia, or to compare Argentina with Brazil, and so forth. Let’s suppose that for most of the countries in the big mushy middle, relative CPI rankings are not much better than random noise, with very little connection to “true” corruption levels.
I think you could still make a case that the impact of the CPI on the world has been positive, for two reasons. First, and most straightforwardly, perceptions of corruption also matter, even if perceptions are not that closely correlated with reality. But even putting that point aside, it’s possible that the publication of the CPI has raised the profile of anticorruption generally, to good effect. People, and media outlets, love lists and ranks and numbers; publishing a list gets people talking and arguing and generally paying attention to an issue they might otherwise ignore. On top of that, since virtually all developing countries get a less-than-ideal CPI score, the publication of the CPI may give domestic political constituencies a rallying point–and some leverage–in putting pressure on the governments in some of those countries to do something about the problem, which might be a good thing even if it turns out that the relative CPI rankings were not well-grounded in reality.
Now, to flip this around, let’s suppose that the CPI is in fact a reasonably good proxy for actual corruption levels, so that the CPI rankings are not too far from how countries would rank if we could observe “true” corruption. Might the publication of the CPI nonetheless have some undesirable effects on the struggle against corruption? There’s an argument that it might: Perceptions of corruption can become a self-fulfilling prophecy: the belief that corruption is widespread causes corruption to remain widespread, or to become more so. After all, there’s evidence that people often behave the way they expect others to behave, or in ways they think are socially acceptable. So showing people that their country is more corrupt than they thought might further erode norms against corruption, or engender despair or apathy. If that were the case, then we might elicit more honest behavior if we could convince people (inaccurately, at least at first) that honest behavior is in fact the norm.
Again, I want to be clear that I’m not endorsing either the suppression of accurate information or the dissemination of inaccurate information. My view is that the social value of corruption indicators probably does depend, more than anything else, on their accuracy, not least because these indicators are used in subsequent research to figure out what anticorruption tools work and don’t work. But it’s important to recognize that indicators like the CPI are not just data sources for other research; their publication has independent effects on anticorruption politics and policy, and at least some of those effects, both positive and negative, may not depend entirely on the accuracy of the indicators.
I tend to agree with your analysis, Matthew, but suspect that the second possibility you discuss (that misleadingly positive CPI rankings will promote a norm of honesty) is extremely unlikely. For the CPI norm to have an effect, we would have to believe that individuals faced with corrupt opportunities are naive as to how their peers are actually behaving, or would behave, under the same circumstances. Yet I think most would agree that an individual decision to engage in corrupt behavior is based far more on the known actions and attitudes of peers and of professional or social superiors than on what any international poll might say about the matter. Consequently, if a bureau or country scores well on a corruption index despite habitual corruption, the attitude within the corrupt clique would most likely be ‘Great, we’re getting away with it!’ — or at least reinforce the thinking that there is nothing especially wrong with the behavior.
Thus, to produce positive change, the CPI needs to influence the actual behavior or expressed attitudes of this clique — e.g. by increasing the likelihood of reporting, punishment and social condemnation, and the attendant need for greater caution and secrecy. In agreement with your first hypothesis, I suspect this kind of institutional and attitudinal change is more likely produced by bad rankings — or more importantly the negative perceptions that the rankings reflect — whose reputational and economic costs, whether deserved or not, might spur elites to ‘clean up their act’ (at least superficially). As a result, the first hypothesis seems more realistic than the second hypothesis in relation to what we think drives the actions of corrupt insiders in practice. But to know for sure, I guess we’d need to poll them!
Daniel,
Your analysis makes a great deal of sense to me — it does seem that the first scenario is more likely than the second.
But perhaps a variant might have somewhat more plausibility. Anticorruption reforms may take time to bear fruit, and short-run improvements might not show up in annual data like the CPI (assuming, for the sake of argument, that the CPI is accurate). The inaccurate belief that anticorruption reforms are having meaningful short-run effects might help sustain the momentum for those reforms in the longer run.
But maybe I’m grasping at straws here. I tend to agree that the likelihood that accurate corruption measures would have bad effects is considerably more remote than the possibility that inaccurate corruption measures might nonetheless have good effects.
The perception of corruption captured by the CPI has both a factual basis and an anecdotal basis, but it’s impossible to know accurately what the percentage of each is in any country study. Perhaps a companion study that could easily be done along with the CPI is an anti-corruption efforts perception index (an ACPI?). This might provide a counterbalance to one of Matt’s statements: “Perceptions of corruption can become a self-fulfilling prophecy: the belief that corruption is widespread causes corruption to remain widespread, or to become more so.”
In a unified index, a high-corruption / low-anti-corruption marker might supply a rebuttal to the self-fulfilling prophecy, by showing that anti-corruption efforts are possible and that more can be done to fight corruption.
This is quite an interesting idea. To some extent this is already happening – in addition to the CPI, Transparency International has already started publishing evaluations of countries’ “national integrity systems,” which gets at the kind of “anticorruption index” that you’re talking about. But I’m not sure anyone has combined them in quite the way you suggest.
A couple things to think about here, though:
First, what about the countries that score high on both corruption and anticorruption efforts? What effect might that have on policy?
Second, and related to that: the vigor of anticorruption efforts – and what we should expect with respect to those efforts – depends in part on the extent and nature of the corruption problem. If we want to make comparisons across countries, we’d need to decide whether or how to adjust for that.
One quick thought to add onto the analysis of whether the CPI and similar rankings are having a positive effect – taking as a given that the rankings are mostly empty noise with respect to countries in the “mushy middle”:
I think it’s a fair point that indexes like CPI do a public service by making corruption a more salient political issue. But to the extent that policymakers rely on corruption indexes and rankings to formulate foreign assistance policy, glitches in the accuracy of those indexes could be having a harmful effect. For example, the Millennium Challenge Corporation (MCC) – a U.S. foreign aid agency – uses the Control of Corruption Indicator published by the World Bank and Brookings as one of the criteria through which it determines whether countries are eligible for aid. (I have no idea whether this particular indicator is well-regarded; my point is only to show that indexes are sometimes used by development agencies). And even if the CPI isn’t directly tied to the choices of any particular aid organization, it seems undeniable that these indices affect perceptions of corruption in the international development and business communities (after all, that is their intended effect), and they are thus probably having some indirect effect on the amount of foreign aid and investment received by countries.
So, ultimately, if you think the indexes are inaccurate, I would argue you have to balance the good they do by increasing awareness of corruption as a problem against the distorting effects they might have on flows of outbound investment and development assistance. Not sure how you would go about that empirical analysis, but that’s how I would structure the inquiry.
I think that’s completely right. And it may highlight a way that these indexes could be harmful even if they’re accurate, which is considerably more plausible than the one I spun out in my original post (which Daniel effectively critiqued in his earlier comment).
There’s a big debate about whether giving foreign assistance to corrupt countries is helpful or harmful. The MCC policy is premised on the notion that it’s harmful. But what if that’s wrong? What if the best solution to corruption, superior to any specific policy intervention, is economic development? And what if foreign aid, even when provided to countries with endemic corruption, does tend to produce some meaningful degree of economic growth? Then even if the WGI/CPI/etc. are accurate in identifying the countries with the worst corruption problems, this could be counterproductive if it leads the MCC and other agencies to cut off development assistance.
Of course, that scenario relies on a number of assumptions that may not be right. I haven’t worked through the relevant research carefully enough to have an informed opinion on whether development aid has a positive or negative effect on growth in highly corrupt countries. But it’s at least a possibility. Here, though, the problem is not so much with the indicators, but with the policy decisions some agencies have made regarding how to use those indicators.
Sam, that’s a fascinating point about the MCC metric — an example of a not-infrequent problem that occurs when various entities searching for ‘objective’ standards latch on to easily available data, even if the original metric was not intended for these kinds of uses.
A thought on the question Matthew raises as to how donors might respond to this: One might argue that the MCC’s policy is premised not on the belief that aid to corrupt countries is harmful per se, but merely that it’s wasteful, which seems a bit less controversial than the question of whether the aid still helps. With this in mind, we might usefully distinguish the likelihood of corruption-related waste with respect to different types and methods of aid, and donors could then respond to corruption concerns by shifting aid budgets rather than cutting them. The shift could be thematic, moving from economic development to funding rule of law or targeted anti-corruption efforts, or institutional, diverting direct-to-government assistance to multilateral or NGO implementers deemed more responsible. A dollar-for-dollar tradeoff is probably not possible, since ROL training costs a lot less than bridges and factories, and prioritization of this kind already occurs at least implicitly when donors select country focus areas and implementing partners. But whether using a bright-line cutoff or a more holistic analysis, setting explicit benchmarks could create some interesting incentives for governments that would like to retain control over aid budgets to show progress in these areas. Admittedly, this also creates incentives to game rating systems that we agree are inexact and susceptible to window-dressing — so there’s always a trade-off.
Sam, I’ll join the chorus of people who think this is a great point, and one I didn’t know about. But I have one question for you that applies equally to the other arguments made in Matthew’s original post and in some of the comments: What is our alternative?
I don’t think many would argue with Matthew’s point that if we publish corruption indicators, we should strive to publish accurate ones. And certainly I think that, insofar as nobody seems satisfied with the leading indicators out there, there’s almost definitely a mix of positive and negative effects to having inaccurate indicators. But is this debate about whether we should have them at all? Imperfect as they are, it’s hard to imagine a world without them. If we didn’t have any, we’d immediately make some. And if we didn’t use them in policymaking for things such as prioritization, what would we do?
So in the context of your comment, let’s say that we accept that indicators are highly imperfect and result in sub-optimal allocation of resources when used in the way you describe by the MCC. What should the MCC do instead?