As GAB readers are aware, I’ve occasionally used this platform to complain about widely-repeated corruption statistics that appear to be, at best, unreliable guesstimates misrepresented as precise calculations—and at worst, completely bogus. (The “$1 trillion in annual bribe payments” figure would be an example of the former; the “corruption costs the global economy $2.6 trillion per year” figure is an example of the latter.) I recognize that, in the grand scheme of things, made-up statistics and false precision are not that big a deal. After all, the anticorruption community faces 1,634 problems that are more important than false precision, and in any event 43% of all statistics quoted in public debates are completely made up. Yet my strong instinct is that we in the anticorruption community ought to purge these misleading figures from our discussions, and try to pursue not only the academic study of corruption, but also our anticorruption advocacy efforts, using a more rigorous and careful approach to evidence.
But perhaps I’m wrong about that, or at least naïve. A few months ago, after participating in a conference panel where some of the other speakers invoked the “corruption costs $2.6 trillion” figure, I was having a post-panel chat with another one of the panelists (an extremely smart guy who runs the anticorruption programs at a major international NGO), and I was criticizing (snarkily) the tendency to throw out these big but not-well-substantiated numbers. Why, I asked, can’t we just say, “Corruption is a really big problem that imposes significant costs?” We’ve got plenty of research on that point, and—a few iconoclastic critics aside—the idea that corruption is a big problem seems to have gained widespread, mainstream acceptance. Who really cares if the aggregate dollar value of annual bribe payments is $1 trillion, $450 billion, $2.3 trillion, or whatever? Why not just say, corruption is bad, here’s a quick summary of the evidence that it does lots of damage, and move on? My companion nodded, smiled, and said something along the lines of, “Yeah, I see what you’re saying. But as an advocate, you need to have a number.”
We didn’t get to continue our conversation, but that casual remark has stuck with me. After all, as I noted above, this person is extremely smart, insightful, and reflective, and he has lots of experience working on anticorruption advocacy at a very high level (a kind of experience that I, as an Ivory Tower academic, do not have). “As an advocate, you need to have a number.” Is that right? Is there a plausible case for continuing to open op-eds, speeches, policy briefs, and so forth with statements like, “Experts estimate that over $1 trillion in bribes is paid each year, costing the global economy over $2.6 trillion,” even if we know that those numbers are at best wildly inaccurate? (This question, by the way, is closely related to an issue I raised in a post last year, which arose out of a debate I had with another advocate about the legal interpretation of the UN Convention Against Corruption.)
I thought I’d use this post as an opportunity to raise that question with our readers, in the hopes of both getting some feedback (especially from our readers with first-hand experience in the advocacy and policymaking communities) and provoking some conversations on this question, even if people don’t end up writing in with their views. And to be clear, I’m not just interested in the narrow question of whether we should keep using the $2.6 trillion or $1 trillion estimates. I’m more generally curious about the role (and relative importance) of seemingly precise “big numbers” in anticorruption advocacy work. Do we really need them? Why? And is what we gain worth the costs?
As someone who works on corruption for an international advocacy NGO, I’ve struggled with the same question. I’ve concluded that the answer is yes.
It’s a reality of the increasingly information-saturated environment in which we live that cutting through the noise and capturing people’s attention requires presenting information in a compelling, convincing, and relatable way. Just saying “corruption is bad” doesn’t cut it. People will want to know “how bad,” and will want some context against which to compare that (e.g., the $xx lost to a certain type of corrupt activity could have paid for xx% of a country’s health budget).
Some quantification of the size of a problem (even if not perfectly precise) is important for policymakers, who must weigh the relative priority of various problems and solutions. If a policymaker doesn’t know the scope or scale of a corrupt activity, choosing to allocate finite political and financial capital to combating it (rather than investing those resources in something else) becomes a matter of faith. And not having some sense of the size of the original problem makes it very difficult to measure the success (or failure) of various policy prescriptions. We’d be implementing policies in the dark, with no measuring stick.
So yes, numbers matter, both for getting people to care and engage, and for crafting good public policy. But that doesn’t necessarily mean that we need to start and end by calculating a sum of ALL corruption. Breaking that down into its separate components (e.g. tax evasion, trade misinvoicing, customs fraud, etc.) and quantifying each of those separately (starting at the national level) would be (somewhat) more achievable and precise and lend itself to crafting relevant, targeted policy measures.
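The contextualization move described in the comment above (“the $xx lost could have paid for xx% of a country’s health budget”) is simple arithmetic; here is a minimal sketch, with both figures invented purely for illustration (they are not real estimates):

```python
# Hypothetical illustration of the contextualization described above.
# Both figures below are invented for the arithmetic, not real estimates.
estimated_loss_usd = 1.2e9   # hypothetical loss to a corrupt activity: $1.2 billion
health_budget_usd = 4.0e9    # hypothetical annual national health budget: $4.0 billion

share = estimated_loss_usd / health_budget_usd
print(f"The estimated loss equals {share:.0%} of the annual health budget.")
# -> The estimated loss equals 30% of the annual health budget.
```

The point is rhetorical rather than computational: a ratio against a familiar benchmark is far more relatable than a raw dollar figure.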
Impressive yet inaccurate statistics and figures are endemic to a wide range of equally important policy debates, from public health, to the environment, to immigration. It seems that once an issue becomes political, respect for academic rigour goes out the window, which is indicative of lazy or irresponsible journalism, identity politics, and populism, just to name a few potential shortcomings of liberal democracy. So for me, the question that academics and advocates in almost all fields must ask themselves is: do we play the game? I would say that, in the case of corruption, an inherently political issue, to a certain extent we have to; to refuse would simply be too “ivory tower.” However, this doesn’t have to mean giving up on academic rigour. Rather, just as advocates should base their arguments on sound academic research (as opposed to throwing around wild figures), academics should keep in mind the needs and ends of advocates when producing that research. This does entail sacrificing a certain degree of detachment and objectivity, yet (at least in politically charged fields such as corruption) isn’t that just a myth anyway? That’s my two cents.
A lawyer working in anti-bribery and corruption compliance, with prior government experience, asked me to post the following comments anonymously on his behalf. (The anonymity is due to his firm’s restrictions on what he can say in a public forum):
You write: “ ‘As an advocate, you need to have a number.’ Is that right? Is there a plausible case for continuing to open op-eds, speeches, policy briefs, and so forth with statements like, ‘Experts estimate that over $1 trillion in bribes is paid each year, costing the global economy over $2.6 trillion,’ even if we know that those numbers are at best wildly inaccurate?”
o As you frame the question, my answer is of course no. It’s irresponsible to fob off meretricious stats on the public, especially on credulous populations like politicians and the media who will likely further disseminate and therefore magnify the harm of such stats. On the other hand, if you were to ask, “Is there a meaningful case for trying to develop a methodology on which statisticians and anti-corruption experts could agree that would generate at least the right order of magnitude for the total annual volume of bribery?”, I’d say yes.
o In my prior work in the government, one of my self-imposed challenges was trying to come up with an approximate number for a particular form of fraud. For a long time, law enforcement and NGOs alike kept using a particular figure that proved to have no empirical basis. Through an international working group, we were eventually able to get law enforcement agencies in multiple countries to pool data and to come up with a methodology that got us to an “on the order of tens of billions of dollars per year” statement on which everyone could agree.
o I think a similar exercise, but with a much larger cast of characters (law enforcement, NGOs, academia, think tanks), that would try to pool information from multiple sources to answer the question, “Can we come up with a methodology that gets us to the right order of magnitude for annual global bribery?”, would be worthwhile.
You also asked about the role of seemingly precise “big numbers” in anticorruption advocacy work. In my view, seemingly precise numbers are worthless, except to the credulous, as I argued above. But even approximate “big numbers,” if a reliable and transparent methodology can be devised to generate order-of-magnitude stats, could be highly useful to a number of different constituencies:
o Law enforcement authorities: Even in the U.S., where Justice and the SEC get decent chunks of money for FCPA enforcement, getting an approximate estimate of annual bribery that’s reliable (the estimate, not the bribery) strengthens their hands in annual budget negotiations on the Hill and in intradepartmental advocacy for maintenance or expansion of anticorruption enforcement. And if a reliable multiparty/multilateral methodology can be devised, the data it generates can be useful to those countries and multilaterals that are more vigorous about anticorruption enforcement in urging less vigorous countries to up their game.
o NGO Advocacy Organizations: If TI and TRACE can espouse the development and use of methodologies for country risk, they should also benefit from supporting the exploration of a methodology to generate a more reliable estimate of global bribery. More credible stats make them more credible as advocates.
o Compliant Entities: On one level, I shouldn’t be that interested in megastats as a corporate representative; what I care about is the bribery and corruption risks pertinent to my company. But even if my office were on, say, Mount Harvard, I would still want to know roughly how fast global warming will melt the polar ice caps and raise sea levels. And having an approximate but reliable estimate of global bribery should be meaningful in persuading C-level officers that a vigorous and sustained ABC compliance program makes sense. As you’ll recall, in the not-so-distant past, lots of companies, when the economy looked rocky, looked to cut costs by cutting compliance resources. That didn’t work out so well for various companies, but corporate memories can be short, and compliance professionals always need to be prepared to justify current staffing and financial support therefor.
So, in my view, order-of-magnitude stats have their limitations, but I’d personally feel better if in public settings I could throw around a stat for which I had confidence in the methodology that produced it.
You may be interested in Peter Andreas and Kelly Greenhill’s volume, “Sex, Drugs, and Body Counts: The Politics of Numbers in Global Crime and Conflict”. It’s a surprisingly good read, and evaluates many of the same arguments you have raised (but for crime, not corruption per se).
Matthew, you ask: ‘Does an advocate need a number?’, an at-best rough guesstimate of what corruption costs? Yes. We need that mind-boggling number so we can get past it and talk to business and government about the policy change we seek, the institution we want to see better resourced or fixed, or the corruption case we care about being given the attention it deserves by police, prosecutors, and judges.
As Joe Kraus points out, both our supporters and detractors devour our figures, any figures, we can offer on corruption. Yet controversy and shooting the messenger are common, especially on the big-ticket stats. Just look at the debate over what constitutes, and how to measure, a related phenomenon: illicit financial flows (http://www.cgdev.org/blog/how-much-do-we-really-know-about-multinational-tax-avoidance-and-how-much-it-really-worth). For all the debate, offering the number (which certainly involves sharing the methodology you are using to derive it) helps build a discussion around the magnitude and seriousness of the problem, and this is important for making attention to that issue urgent.
There are a couple of important lessons and challenges here. First, there is no silver bullet out there. As your unnamed lawyer colleague indicates, we in the advocacy world should be happy if the best and the brightest could help us come up with a (better evidenced) magic number on corruption. And we would indeed be happy, but it is not for lack of trying. We have brought together some of the best economists, statisticians, and policy researchers over the past decade and there is simply no easy way to measure corruption. For that reason, our CPI, with all its imperfections (watch this space for our CPI 2015 launch on 27 January) is still the best we’ve got for taking a snapshot of public sector corruption across a multitude of countries each year.
Second, the data deficit. You can end every good report or academic article with a plea for more data, and we often do. But if you consider the act of trying to measure a crime that is purposefully being hidden, you can see you’ve got your work cut out for you. Indeed, while the data deficit is real, some of us are working to remedy it (www.governancedata.org), and it may matter even more in the future, when we start to try to measure corruption reduction as a commitment under the Sustainable Development Goals.
This brings me to a third point: new kinds of data and data complexity. On the heels of developing the CPI, we at Transparency International – both our Secretariat and our national chapters – began measuring corruption in all kinds of ways about two decades ago. We have measured it in terms of public opinion, likelihood to bribe abroad, undue influence on politics, corporate transparency, public service delivery – you name it. But these numbers, which are critical to our evidence-based advocacy approach, necessarily make the story a noisier one. You have to digest, analyse, and interpret data that goes deeper, and data across institutions or countries may lead to different conclusions. So while the diversification of information and numbers on corruption is welcome, and we believe it is essential to enable the difficult work of guiding policy change, numbers at this level have never been able to compete with the Really Big Stats that represent the global costs of corruption.
Fourth and finally, I’d argue we need a number, but a new kind of number (or numbers). While the Really Big Stats on the size of corruption help our case, another number would help even more: a number (or numbers) that reliably shows the savings, the benefits to well-being, livelihoods, and the economy, and the economic and financial gains for business made from efforts to stop corruption. In sum, we need much bolder, tested numbers that reflect just how good the medicine is. If we stop corruption, we gain. Big. That would be a number that could drive a new era of anti-corruption advocacy and potentially win over a whole new range of stakeholders, in addition to raising the interest of academics, who might intensify study of (anti-)corruption phenomena.
These are all great points. I’d like to register my especially strong agreement with your distinction between Really Big Stats (RBS?) that are used mainly for headline-grabbing, and more careful estimates of the impact of (different forms of) corruption and anticorruption measures. It’s great that TI (and other organizations) are doing so much good work in the latter vein.
Your points about the data deficit and the impossibility of solving all the methodological problems are true and well-taken. We’ve got to do the best we can with imperfect information. (Though I do think there’s quite a big difference between coming up with the best estimate you can, with a transparent methodology that acknowledges limitations, and throwing around numbers that are not much better than wild guesses.)
Again, I admire all the work you’re doing on these data and measurement issues at TI, and I look forward to seeing the new CPI in a couple weeks… (I can’t promise I won’t criticize it when it does come out, but of course that’s all in the spirit of constructive adversarialism!)
Did you receive a response from the WB and WEF on the $2.6 trillion question?
But a number is needed to calculate the benefits of a reduction in corruption. The IMF has a statement showing the effect of a reduction in corruption on growth and employment. R Hodess, above, touches on this in his fourth point.
A wider point is that the campaign against corruption has to be waged by both legal experts and economists. Each has a sphere of expertise relevant to the corruption problem, and they should assist each other in the campaign against corruption.
The short answer to your first question is no, nothing yet from the WB or WEF on where the $2.6 trillion number comes from or whether those organizations still stand by it. But I didn’t really expect an immediate response. Hopefully the post will be read by at least a few people in those and other places, and get some conversations going.
On your second point, I completely agree! We should do our best to get (accurate, even if imperfect) numbers to try to assess the impact of anticorruption interventions. The question I meant to raise in my post was _not_ whether we should ever try to quantify, but rather whether sometimes, for advocacy purposes, it’s OK to invoke big numbers (mainly for attention-grabbing purposes) even if we have no reason to believe that the numbers are much better than fabrications, or at best wild guesstimates.
I tend to think that more narrowly focused studies evaluating partial correlations are more helpful. I believe the IMF statement you cite is along those lines. But even there I’ve noticed quite a bit of sloppiness — for example, treating the simple bivariate correlation between corruption and per capita income as if it were an accurate estimate of the causal impact of the former on the latter.
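The sloppiness described above can be made concrete with a minimal simulation (all variable names and coefficients below are invented for illustration, not real data): if some third factor, here called “institutions,” drives both corruption and income, the simple bivariate correlation between corruption and income looks large even though corruption has zero causal effect in the simulated data.

```python
import numpy as np

# Purely illustrative simulation (invented data, not real estimates):
# "institutions" drives BOTH corruption and income, while corruption itself
# has zero causal effect on income by construction.
rng = np.random.default_rng(0)
n = 10_000
institutions = rng.normal(size=n)
corruption = -institutions + rng.normal(size=n)   # worse institutions -> more corruption
income = 2.0 * institutions + rng.normal(size=n)  # income depends only on institutions

# The simple bivariate correlation suggests a strong "effect" of corruption...
r = np.corrcoef(corruption, income)[0, 1]

# ...but a regression that controls for institutions recovers a near-zero
# coefficient on corruption, its true causal effect in this simulation.
X = np.column_stack([np.ones(n), corruption, institutions])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

print(f"bivariate correlation: {r:.2f}")                           # strongly negative
print(f"coefficient controlling for institutions: {beta[1]:.2f}")  # approximately zero
```

This is the textbook omitted-variable point: treating the raw correlation as a causal estimate would badly overstate the damage attributable to corruption itself in this toy world.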
I wonder if a more formal approach to the WB and WEF would be more likely to get a formal response. Time will tell.
Attention-grabbing based on fabrications and guesstimates is bad advocacy. It would not stand up to criticism and would weaken the advocacy. Credibility would decline. A cautious qualification is wise in the use of figures.
Ms. Lagarde is not the only lawyer who knows economics. My point there was that the legal side could send the file to the economists to work out the costs of the corruption beyond a bare number. A recently reported case in Nigeria involved sums so large that they could be compared to the cost of hundreds of miles of new roads. Simplistic, I know, but it gives some real-world feel for what is being lost to corruption, rather than just an amount of money. Such an approach is used in estimating the costs of tax evasion in the developing world, although I do accept that there is valid criticism of this method.