Guest Post: When It Comes To Attitudes Toward Corruption, Russians Are More Like Americans Than You Think

Today’s guest post is from Marina Zaloznaya, Assistant Professor of Sociology at the University of Iowa and author of The Politics of Bureaucratic Corruption in Post-Transitional Eastern Europe:

Russia and corruption have been dominating the news recently – with the reporting from Washington and Moscow converging in an unusual way. Ongoing accusations against Trump Administration officials resonate even more strongly when linked to Russia, a country most Americans view as rife with corruption. Indeed, many Americans think that Russian citizens are perfectly comfortable with the systematic corruption of political and business elites.

This is a myth. Yes, it is true beyond doubt that corruption is common in Russia – much more so than in the United States – affecting hundreds of thousands of people. But this is not because Russians are systematically more tolerant of corruption than are Americans. Continue reading

Guest Post: Please, Criticize Me! (Why Anticorruption Practitioners Should Scrutinize and Challenge Research Methodology)

GAB is pleased to welcome back Roger Henke, Chairman of the Board of the Southeast Asia Development Program (SADP), who contributes the following guest post:

In a previous post, I described a survey used to estimate the incidence of fraud and associated problems within the Cambodian NGO sector. The response to the results of that survey has so far been somewhat disheartening—not so much because the research has had little influence on action (the fate of most such research), but rather because those who have been told about the study have taken its results at face value, questioning neither their meaningfulness nor how they were generated. Such uncritical uptake is, paradoxically, a huge risk to the longer-term public acceptance of the evidence produced by social-scientific research. I am relieved that methodological considerations (publication bias, replicability, p-hacking, and the like) are finally getting some traction within the social science community, but it is evident that the decades-long neglect of these problems dovetails with a public opinion climate that doubts and disparages social science expertise.

Lack of attention to the methodological underpinnings of “interesting” conclusions is a common fate for research findings, and is hardly specific to corruption research. But the anticorruption community has a lot to lose from distrust in research, and thus a lot to gain by ensuring that the findings on which it builds its case pass basic quality checks. For the remainder of this post, I’ll examine some basic questions that the Cambodia NGO corruption survey’s results should have triggered before being accepted as credible and meaningful: Continue reading

How I Learned To Stop Worrying and Love SDG 16

A few weeks back, I posted a skeptical commentary about the integration of anticorruption into the new Sustainable Development Goals and associated targets, in particular Target 16.5 (“substantially reduce corruption and bribery in all their forms”). Rick was even harsher. The premise of most of my criticism (and Rick’s) was that progress on Target 16.5 was likely to be measured using changes in countries’ scores on Transparency International’s Corruption Perceptions Index (CPI). It turns out that this premise was (probably) incorrect.

I had based my assumption on the lengthy report released last June by the Sustainable Development Solutions Network (SDSN)—a report which had been commissioned by the UN’s Inter-Agency Expert Group on SDG Indicators (IAEG-SDG). But as Transparency International Senior Policy Coordinator Craig Fagan helpfully pointed out in his comment on Rick’s post, the more recent official information released by IAEG-SDG in September 2015 does not indicate that the CPI will be used as the principal measure for Target 16.5. Rather, the IAEG-SDG document lists as the proposed indicator the “percentage of persons who had at least one contact with a public official, who paid a bribe to a public official, or were asked for a bribe by these public officials, during the last 12 months.” (The relevant material is on page 225.) This still isn’t finalized, but it certainly appears that the IAEG is poised to endorse an experience/survey-based measure for Target 16.5, rather than the CPI-style perception index.
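To make concrete how an experience-based indicator of this kind differs from a perception index, here is a minimal sketch of how such a statistic might be computed from survey microdata. The data, field names, and the choice of denominator (respondents who had contact with a public official) are illustrative assumptions on my part, not specifications taken from the IAEG-SDG document.

```python
# Minimal sketch of an experience-based bribery indicator computed from survey
# microdata. All data and field names below are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class Respondent:
    contacted_official: bool   # had at least one contact with a public official (last 12 months)
    paid_bribe: bool           # paid a bribe to a public official
    asked_for_bribe: bool      # was asked for a bribe by a public official


def bribery_prevalence(sample: List[Respondent]) -> float:
    """Share of respondents with official contact who paid, or were asked for, a bribe."""
    with_contact = [r for r in sample if r.contacted_official]
    if not with_contact:
        return float("nan")
    affected = sum(r.paid_bribe or r.asked_for_bribe for r in with_contact)
    return affected / len(with_contact)


# Hypothetical sample: three respondents had contact with an official, one of whom paid a bribe.
sample = [
    Respondent(contacted_official=True, paid_bribe=True, asked_for_bribe=False),
    Respondent(contacted_official=True, paid_bribe=False, asked_for_bribe=False),
    Respondent(contacted_official=True, paid_bribe=False, asked_for_bribe=False),
    Respondent(contacted_official=False, paid_bribe=False, asked_for_bribe=False),
]
print(f"Bribery prevalence among those with contact: {bribery_prevalence(sample):.0%}")  # 33%
```

The design point the sketch illustrates is that the measure aggregates respondents’ reported experiences over a fixed recall period, rather than their impressions of how widespread corruption is.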

Is this perfect? No, certainly not. But it’s a lot better than what I’d feared. A few further thoughts on this: Continue reading

The UK Aid Impact Commission’s Review of DFID Anticorruption Programs Is Dreadful

Last week, the United Kingdom’s Independent Commission for Aid Impact (ICAI) released its report on the UK Department for International Development (DFID)’s efforts to fight corruption in poor countries. The report, which got a fair amount of press attention (see here, here, here, and here), was harshly critical of DFID. But the report itself has already been criticized in return, by a wide range of anticorruption experts. Heather Marquette, the director of the Developmental Leadership Program at the University of Birmingham, described the ICAI report as “simplistic,” “a mess,” and a “wasted opportunity” that “fails to understand the nature of corruption.” Mick Moore, head of the International Centre for Tax and Development at the Institute for Development Studies, said that the report was “disingenuous[]” and “oversimplif[ied],” and that it “threatens to push British aid policy in the wrong direction.” Charles Kenny, a senior fellow at the Center for Global Development, called the report a “wasted opportunity” that “has failed to significantly add to our evidence base,” largely because “ICAI’s attitude to what counts as evidence is so inconsistent between what it asks of DFID and what it accepts for itself.”

Harsh words. Are they justified? After reading the ICAI report myself, I regret to say the answer is yes. Though there are some useful observations scattered throughout the ICAI report, taken as a whole the report is just dreadful. Despite a few helpful suggestions on relatively minor points, neither the report’s condemnatory tone nor its primary recommendations are backed up with adequate evidence or cogent reasoning. It is, in most respects, a cautionary example of how incompetent execution can undermine a worthwhile project. Continue reading

Corruption “Tells” — An Overlooked Factor in Determining Corruption Perceptions

Last month, the European Commission released a comprehensive report on corruption in the EU, based on two perception surveys (one of the general population and one of businesspeople) as well as existing public data. One of the report’s most striking findings was the prevalence of perceived corruption among the general public: over 75% of Europeans surveyed thought corruption was “widespread” in their country – even in countries where very few respondents had personally experienced or witnessed corruption.
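To show the kind of gap at issue, here is a toy sketch that computes a perception rate and an experience rate from the same survey sample. The numbers are invented for illustration and are not drawn from the Commission’s report.

```python
# Toy illustration of a perception-experience gap in corruption surveys.
# All figures below are hypothetical, not taken from the EU report.

respondents = [
    # (thinks corruption is "widespread", personally experienced/witnessed corruption, last 12 months)
    (True, False),
    (True, False),
    (True, True),
    (True, False),
    (False, False),
]

perception_rate = sum(p for p, _ in respondents) / len(respondents)
experience_rate = sum(e for _, e in respondents) / len(respondents)

print(f"Perceive corruption as widespread: {perception_rate:.0%}")  # 80%
print(f"Report direct experience:          {experience_rate:.0%}")  # 20%
print(f"Gap:                               {perception_rate - experience_rate:.0%}")  # 60%
```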

The EU Report is not the first study to find a sizeable gap between people’s perception of corruption’s prevalence and their reported personal experience with corruption. What explains this gap? The two most common explanations are: (1) perceptions overstate true corruption, because they may be swayed by sensationalistic media reports, skewed by factors like ethnic heterogeneity and low social engagement, or shaped by differing understandings of what “corruption” means; and (2) self-reported experiences understate true corruption, because people do not answer questions about their personal experience truthfully even when anonymity is guaranteed.

But there is another possibility, which highlights a limitation of studies that compare only general perceptions of corruption with direct, personal experience with corruption: These surveys typically fail to account for “tells” – observable indications of potential corruption. Continue reading