Guest Post: The Problem With Anticorruption Diagnostic Tools Is Not (Primarily) Too Much Standardization

José-Miguel Bello y Villarino, an official with the Spanish Ministry of Foreign Affairs and doctoral candidate at the University of Sydney, contributes today’s guest post:

There is a wide-ranging debate about how to produce and use data to assess and compare countries’ performance, particularly in domains that are, by nature, global, such as human rights. In the corruption domain there are some well-known international indexes that purport to express a country’s perceived level of corruption in a single number, such as the Corruption Perceptions Index (CPI) published annually by Transparency International (TI). Other diagnostic tools have been developed to assess individual countries’ anticorruption frameworks and policies against some global standard or benchmark. Among the latter, TI produces the National Integrity System (NIS) Country Assessments.

These assessments do not try to determine how much corruption there is in a country, but rather “how well a country tackles the problem.” NIS assessments do not aim to give each country a final “score” that can be compared to the scores of other countries. Instead, their declared objective is to examine the effectiveness of each country’s anticorruption institutions by focusing on a standard set of “pillars” (things like democratic institutions, the judiciary, the media, and civil society). Consequently, NIS assessments are not meant to provide definitive conclusions, but rather observations within a common framework that supply a starting point for analysis and identify risks and possible areas for improvement. Their conclusions are designed to help stakeholders develop more concrete, country-specific responses.

The NIS Country Assessments, and similar tools (TI has identified roughly 500 diagnostic tools used in the anticorruption area), have come in for their fair share of criticism. Much of this criticism centers on their allegedly formalistic, formulaic, standardized approach to assessing anticorruption institutions. Some of those criticisms have appeared on this blog. A few months ago, Richard Messick posted a commentary on a piece by Paul Heywood and Elizabeth Johnson that challenged the relevance and value of NIS reports for developing democracies (using Cambodia as an illustrative example), principally due to insufficient appreciation of cultural distinctiveness and an overemphasis on compliance-based approaches. Last month, Alan Doig’s post continued this conversation. Mr. Doig defended the value of the NIS Country Assessments as they were originally conceived, but argued that TI’s current approach to NIS assessments has become overly formalistic, which limits the utility of NIS country studies as an effective starting point for analysis or platform for progression. Though coming from a different perspective, Mr. Doig’s criticism is very similar to the core argument of Professors Heywood and Johnson: both share a skepticism that broad global standards or categories can usefully be applied to individual countries, given each country’s unique and idiosyncratic circumstances.

Respectfully, I think these criticisms go too far. Of course, taking individual country circumstances into consideration has value. But standardization of assessment methodologies, the somewhat “formulaic” approach, can have benefits that may well outweigh the costs. Continue reading

Guest Post: When and How Will We Learn How To Curb Corruption?

GAB is pleased to welcome Finn Heinrich, Research Director at Transparency International, who contributes the following guest post:

Listening to conversations about corruption among global policy-makers, corruption researchers, and anticorruption activists alike, I can’t help but notice that the focus of anticorruption research and policy is changing. The 1990s focused mainly on demonstrating that corruption exists and finding ways to measure it (largely through perception-based indicators), and the early 2000s were about assessing corruption risks in specific countries, sectors, or communities, and assessing the performance of anticorruption institutions. More recently, researchers (and their funders and clients) are shifting from the “Where is corruption?” question toward the “How can we fight corruption?” question. They ask: Do we know what works, when, where, and under which circumstances, to curb a specific type of corrupt behavior?

Answering such questions is extremely challenging. Corruption’s clandestine nature makes it difficult to measure, and data are often of low quality or simply unavailable for time-series or cross-sectional analysis beyond aggregate country-level indicators. Furthermore, anticorruption interventions often lack an underlying theory of change, which would be needed to design robust evaluations that could tell us whether they worked and, if so, how (and if not, why not). We also lack causal models that are realistic yet parsimonious and that can account for contextual factors; such factors are essential for understanding and tackling corruption, since corruption is an integral part of broader social and political power structures and relationships that differ across contexts. Finally, there is too little exchange between micro-level approaches focusing on specific, usually local, anticorruption interventions, on the one hand, and the macro-level literature on anti-corruption strategies and theories, on the other.

While we at Transparency International certainly do not have any ready-made solutions for these extremely tricky methodological and conceptual issues, we are committed to joining others in making headway on them. We have therefore put the “what works” question at the heart of our organizational learning agenda, both by reviewing the existing evidence and by ramping up impact reviews of some of our own key interventions. For example, we have just released a first rapid evidence review on how to curb political corruption, written by David Jackson and Daniel Salgado Moreno, which showcases some fascinating evidence from the vibrant field of political anticorruption research. We are also working with colleagues from Global Integrity on a more thorough evidence review of corruption grievance as a motivator for anti-corruption engagement, and we are planning further evidence reviews and impact evaluations.

As we get our feet wet and figure out how best to generate and make sense of the existing evidence on what works in anti-corruption, we are keen to engage with the broader anticorruption research community. Perhaps others out there have ideas about how to learn what works in fighting corruption? If so, please use the comment box on this blog or get in touch directly at acevidence@transparency.org.

Guest Post: Please, Criticize Me! (Why Anticorruption Practitioners Should Scrutinize and Challenge Research Methodology)

GAB is pleased to welcome back Roger Henke, Chairman of the Board of the Southeast Asia Development Program (SADP), who contributes the following guest post:

In a previous post, I described a survey used to estimate the incidence of fraud and associated problems within the Cambodian NGO sector. The response to the results of that survey has so far been somewhat disheartening—not so much because the research has had little influence on action (the fate of most such research), but rather because those who have been told about the study’s results have all taken them for granted, questioning neither their meaningfulness nor how they were generated. Such at-face-value uptake is, paradoxically, a huge risk to the longer-term public acceptance of the evidence produced by social-scientific research. I am relieved that methodological considerations (issues of publication bias, replicability, p-hacking, and others) are finally getting some traction within the social science community, but it is evident that the decades-long neglect of these problems dovetails with a public opinion climate that doubts and disparages social science expertise.

Lack of attention to the methodological underpinnings of “interesting” conclusions is hardly a remarkable fate for research results, nor is it specific to corruption research. But the anticorruption community has a lot to lose from distrust in research, and thus a lot to gain by ensuring that the findings on which it builds its case pass basic quality checks. For the remainder of this post, I’ll examine some basic questions that the Cambodia NGO corruption survey’s results should have triggered before being accepted as credible and meaningful: Continue reading

The Case for Including Sextortion Measures in TI’s CPI

In a recent post, I called for the creation of an international index of sexual corruption. While I believe that such an index would have value standing alone, I also believe that such an index, once created, should be included as one of the sources used to construct composite indexes such as Transparency International’s Corruption Perceptions Index (CPI). As most GAB readers are likely aware, the CPI does not reflect TI’s own independent assessment of corruption perceptions, but rather aggregates corruption perception measures from a range of other sources. These other sources, however, all measure perceptions of monetary corruption, such as bribery and embezzlement. But, as TI itself acknowledges, sexual corruption may not correlate well with other forms of corruption, meaning that an index like the CPI may give us an incomplete and misleading picture.
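To make the aggregation point concrete, here is a minimal, purely illustrative sketch (in Python) of how a composite perceptions index might fold in an additional source such as a sextortion perceptions index. The source names, scales, and numbers are invented for illustration, and the simple rescale-and-average step is a deliberate simplification, not TI’s actual CPI methodology.

```python
# Illustrative sketch only: combining several hypothetical perception sources
# into a single composite score per country. Each source is rescaled to a
# common 0-100 scale before averaging, so a new source (e.g., a hypothetical
# sextortion perceptions index) simply becomes one more input to the average.

from statistics import mean

# Hypothetical raw scores from three perception sources (made-up data).
sources = {
    "bribery_survey":    {"A": 4.1, "B": 7.8, "C": 5.5},  # reported on a 0-10 scale
    "expert_assessment": {"A": 38,  "B": 81,  "C": 60},   # reported on a 0-100 scale
    "sextortion_index":  {"A": 52,  "B": 70,  "C": 41},   # hypothetical, 0-100 scale
}
scale_max = {"bribery_survey": 10, "expert_assessment": 100, "sextortion_index": 100}

def composite_scores(sources, scale_max):
    """Rescale every source to 0-100, then average the available sources per country."""
    countries = set().union(*(s.keys() for s in sources.values()))
    composite = {}
    for country in sorted(countries):
        rescaled = [
            100 * scores[country] / scale_max[name]
            for name, scores in sources.items()
            if country in scores
        ]
        composite[country] = round(mean(rescaled), 1)
    return composite

print(composite_scores(sources, scale_max))
# -> {'A': 43.7, 'B': 76.3, 'C': 52.0}
```

The point of the sketch is simply that once a comparable sextortion measure exists, adding it to a composite index is mechanically trivial; the hard part is producing a credible, comparable source in the first place.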

The exclusion of sexual corruption is not TI’s fault; there are currently no global comparative measures of perceptions of sexual corruption for TI to incorporate. Indeed, this gap is precisely why I advocate the creation of an international sexual corruption perceptions index. Of course, even if such an index is created, whether its results ought to be included in the CPI is a separate question. I believe they should be.

Continue reading