Last week, GAB Editor-in-Chief Matthew Stephenson published a post sharply criticizing Transparency International UK’s new “Pledge Tracker,” which evaluates how well countries are living up to the pledges they made at the May 2016 London Anti-Corruption Summit. GAB is delighted to have the opportunity to publish the following reply from Robert Barrington, the Executive Director of Transparency International UK:
“A slapdash, amateurish collection of arbitrary, often inconsistent judgements, unsupported by anything that resembles serious research.” Not since I was taken to task over an undergraduate essay by an eminent professor at Oxford have I had work for which I was responsible receive quite such a stinging critique. On that occasion, I could not escape a sense that my world view differed from that of the professor, and that—irrespective of the detail—was the root of our misunderstanding.
So is Professor Stephenson’s assessment of TI-UK’s Pledge Tracker merited? Here is my overall assessment: he is right on some but not all of the detail; he is wrong on most but not all of the big picture. At the root of the difference is the question of whether this is an index in which countries are compared with each other according to a consistent global standard, or whether it is the presentation of individual country assessments by local civil society organizations of their own country’s progress against their own country’s commitments.
Let me start with a straightforward declaration that I do not think the methodology is perfect. It is the first attempt I am aware of in which civil society representatives have tried a comprehensive tracking of how dozens of countries have lived up to hundreds of commitments made at an international summit. If we get it right, it should be an extremely valuable approach in many other contexts, not just our own. Of course, working in this field, one cannot help being aware that writers of the Global Anticorruption Blog are often eager to pounce on a new methodology and subject it to a detailed academic critique. That is precisely why, when we published our initial results, we had already opened the methodology to public consultation.
Inevitably, the design of a methodology will depend on what one is trying to achieve. We should be clear up front: This is not an index. It is not designed to be an index, so should not be assessed on its merits as an index. It is a series of country-level assessments, done by civil society organizations within the countries concerned, in which those organizations give their opinions about the progress that their own countries have made in relation to the commitments made at a Summit. It is absolutely not a measure of how well a country is doing in tackling corruption, or a league table of how well countries are doing so. (That said, I acknowledge that it may well be that the Pledge Tracker website needs to be clearer about this. After all, when you publish lots of information about lots of countries it can start to look like an index.)
It is also the case that the methodology is constrained by the available resources. On this point, it is illuminating to reflect on the extensive efforts TI-UK made to get government entities to take ownership of tracking their Summit commitments both before and after the Summit. We found no government or international organization willing to do so. So we decided to do it ourselves, and having looked at various possible approaches, we decided to focus on information gathered and verified at the local level. This has obvious weaknesses, particularly if the point of comparison were to be a data-rich index compiled with full academic oversight and peer review. But it also has some great strengths in terms of simply doing what it says on the box: tracking at country level whether a government has delivered on the commitments it has made.
Inherent in the Pledge Tracker’s methodology is clearly an element of subjectivity or “arbitrary judgement.” Is this a bad thing? The Global Anticorruption Blog has a distinguished track record in commenting on the flaws in corruption indices based on subjective perceptions, and usually concludes that, in the absence of anything better, perceptions can be used, provided the data collection and analysis are approached with standard rigour. From my perspective, there is merit in having a local civil society organization assess how it thinks its own country is doing in delivering on its commitments. There are three reasons why I think this: First, progress measures that are assessed externally may fail to register subjective elements like intent and prospects for completion. Second, local ownership of the analysis is likely to have more credibility and traction at the local level, where we are trying to effect change. Third, there is a chance that people on the ground actually do know what they are talking about.
To my mind, the problems Professor Stephenson is really—and usefully—highlighting are the flaws in cross-country comparisons given the methodology we have selected, of which the UN Convention Against Corruption (UNCAC) commitments are a good example. Comparing between countries is obviously useful: Being able to say that Country X has done better than Country Y can help to open a discussion with Country Y about how to improve. But what if Country X had a small number of weak commitments and made good progress against those (like Russia), and Country Y had a large number of challenging commitments and did not make progress on some of them (like Afghanistan)? Compare Russia with Afghanistan, and the former appears to be doing better. That was a problem we identified early, and we thought we had been careful to include adequate descriptions illustrating what should really be measured and compared. It was also clear that while there are some outliers (notably Russia), most countries are not—and so, for example, we take care not to make claims about Russia’s progress that might be interpreted as unmerited praise for its approach to tackling corruption. As a spin-off from the website and the country assessments, we also produced a short report that looks at some of the trends and who seems to be doing well and badly. Quite a sensible approach, I think. But after Professor Stephenson’s blog, we have taken this report down from our website so we can scrutinize whether we have over-done the cross-country comparisons. There would be no point in telling everyone else it is not an index while using it ourselves as a de facto index. We will review it, and make any necessary amendments.
Let me turn to the detail:
- First, our assessment of whether the US is living up to its commitment on the Foreign Corrupt Practices Act (FCPA). Look carefully at the wording of this commitment: The US pledged to “continue to prosecute violation of cases of the FCPA.” Professor Stephenson seems particularly cross about our assessment of this one, calling it “one of the most misinformed things I’ve read in any publication from a reputable organization working on anti-corruption issues.” Really? Perhaps we are indeed divided by a common language. You could argue, as Professor Stephenson does, that the US seems to be enforcing the FCPA as actively as before. He notes TI’s “facially absurd claim that the US has been inactive in enforcing the FCPA.” Yes, but… that is not what we are saying. We are trying to assess countries against their Summit commitments, which in the case of the US meant to “prosecute,” not to “enforce.” I don’t know why the US government chose to commit to “prosecute,” and we will certainly examine the language issue. As readers of the Global Anticorruption Blog will know, there has been a long-running debate in the UK about the use of settlements over prosecutions. We would generally interpret “prosecute” as implying a trial that leads to acquittal or conviction, not an investigation that leads to a settlement. The US government may have meant something else by its commitment, and if our interpretation is a mistake, we will correct it. At this point, I should mention that we are already in dialogue with the US government (which, as Professor Stephenson will know from the tip-off he received before writing his blog, is not happy about the Pledge Tracker) about how we have rated its commitments, and if there are grounds for updating the assessments, we will do so.
- Second, the evidence links. There is no getting away from it: some of these are misleading, and that is the thing we most need to improve about the site. The benefit of a live website, unlike a static paper report, is that it can be updated as required. And that is indeed the process on which we have already embarked with the evidence links. Notwithstanding the imperfections, I am inclined to wonder whether Professor Stephenson has stopped to ask himself why specific links were included, what story they are intended to tell, and whether the problem is governments not releasing adequate data rather than TI not finding it. (I certainly would not suggest—without evidence—that Professor Stephenson “delegated the task to some 22-year old unpaid intern, who then did a quick Google search.”) One example he singles out, and seems most offended by, is a link to the FCPA Blog which seems to argue against our own assessment. Perhaps he felt this was a good example of “slapdash, amateurish” work in which the evidence does not support the argument. But here is an alternative view: There are a variety of opinions about how active the enforcement is, and the US is in a fast-changing political environment in which its approach to FCPA enforcement has been unclear. A link to an article which highlights that any assessment of this issue is full of uncertainty might, by contrast, be seen as a well-chosen link. There is no shortage of FCPA commentary, not least on the Global Anticorruption Blog. So perhaps this link might have been selected for a purpose, and not entirely “randomly.”
- Third, there is the question of whether the assessment of the US is adequate, and if not, whether that means all the other country assessments are likely to be inadequate as well. While we are confident in the abilities of our local partners in the US, we are indeed reviewing the information for the US, and if it proves to be inaccurate, we will recalculate the degree of progress. We will do the same for other countries that feel aggrieved. The US government, with which information was not initially cross-checked, may provide and publish information to improve its score, though its other commitments may turn out to be like the FCPA prosecution point analysed above. So don’t hold your breath. A key principle of the methodology is precisely that the local civil society organization does the assessment, cross-checked with the government for factual accuracy. We will not be bullied into producing a better rating by unhappy governments.
- Finally, on the detail, I cannot bring myself to agree with the contention that evidence links to documents that are not in English are either flawed or non-transparent.
Carefully hidden between the insults and invective, there are some extremely useful points in Professor Stephenson’s blog. We will learn from it and hope to improve – not simply updating the details, but also looking at the governance of our projects and publication processes. Is it all as bad as Professor Stephenson implies? I don’t think so. If this were a piece of academic research trying to compile an index, his critique might be justified. But the Pledge Tracker should be judged for what it is, not for what it is not. You can judge for yourself by visiting the entries for the thirty or so countries concerned and making your own assessment. Or you can make the assumption that there will be local experts who are reasonably well-equipped to assess their own countries, and look at the Pledge Tracker—a Pledge Tracker that is being routinely updated and improved, whose methodology is open to public consultation, and that is not too proud to admit its flaws and seek to improve.