Guest Post: Please, Criticize Me! (Why Anticorruption Practitioners Should Scrutinize and Challenge Research Methodology)

GAB is pleased to welcome back Roger Henke, Chairman of the Board of the Southeast Asia Development Program (SADP), who contributes the following guest post:

In a previous post, I described a survey used to estimate the incidence of fraud and associated problems within the Cambodian NGO sector. The response to the results of that survey has so far been somewhat disheartening—not so much because the research has had little influence on action (the fate of most such research), but rather because those who have been told about the study’s results have all taken them for granted, questioning neither their meaningfulness nor how they were generated. Such at-face-value uptake is, paradoxically, a huge risk to the longer-term public acceptance of the evidence produced by social-scientific research. I am relieved that methodological considerations (publication bias, replicability, p-hacking, and so on) are finally getting some traction within the social science community, but it is evident that the decades-long neglect of these problems dovetails with a public opinion climate that doubts and disparages social science expertise.

Lack of attention to the methodological underpinnings of “interesting” conclusions is hardly a remarkable fate for research results, nor is it specific to corruption research. But the anticorruption community has a lot to lose from distrust in research, and thus a lot to gain by ensuring that the findings on which it builds its case pass basic quality checks. For the remainder of this post I’ll examine some basic questions that the Cambodia NGO corruption survey’s results should have triggered before being accepted as credible and meaningful:

  1. The point of using a survey (it should probably go without saying) is to make a credible estimate of some social fact; the survey data are input for an estimation process, not the estimate itself. Survey data are vulnerable to manifold well-known biases. While some of these can be ameliorated if the number of respondents is large enough, in most cases the survey data must be combined with other information and interpreted through an explicit framework (partially evidence-based, partially theoretical) to derive estimates from the data (the first sketch after this list illustrates such a framework in miniature). Without some kind of triangulation and explicit, challengeable reasoning, data don’t become information. Executive summaries have little choice other than to concentrate on the results, rather than on all this “back-office” work. But the distinguishing characteristic of research-based estimates is this back-office work, and I wasn’t prepared for the total lack of interest my policy and practice audiences had in these methodological issues. I fear that policy-makers and practitioners (and the public) too often look to research not for evidence (that might change their minds), but mainly for confirmation of their pre-existing opinions and agendas (and that they will ignore research that challenges those opinions and agendas).
  2. Even when a survey produces a reasonably credible estimate of the incidence of some phenomenon, this still doesn’t mean very much by itself: it requires comparison to some other setting, or to some benchmark. In the Cambodia survey, my variable of interest was the fraction of local development NGOs that had been affected by some form of fraud during the preceding two years. Let’s say we find, in a survey like this, that the rate of fraud in development NGOs in country X is 30%. The meaning of that percentage is very different if fraud incidence in country X’s private sector is 20% than if it is 50%. On top of that, to make these comparisons meaningful, we have to take into account other possible differences between the settings (for example, differences in the effectiveness of organizational and institutional checks and balances). If there are significant differences on other dimensions, the comparisons become more complicated (the second sketch below works through a simple benchmark comparison). The Cambodia survey project did not establish clear baselines or plausible comparison groups. This is not an unusual problem; indeed, the same problem applies to much of the corruption-related survey research out there. Again, what bothered me most was not the problem itself, but the apparent total lack of interest from my audience in the issue.
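
To make point 1 concrete, here is a minimal sketch of one such explicit framework, in Python and with purely illustrative numbers (they are not from the actual survey): a standard correction for under-reporting, in which the assumed willingness of affected NGOs to admit fraud (`sensitivity`) and of unaffected NGOs to report none (`specificity`) are exactly the kind of challengeable assumptions the back-office work consists of.

```python
# A minimal sketch of point 1: turning a raw survey percentage into an
# estimate by combining it with explicit (and challengeable) assumptions
# about misreporting. All numbers are illustrative, not from the survey.

def corrected_incidence(observed_rate, sensitivity, specificity):
    """Rogan-Gladen-style correction. If affected NGOs admit fraud with
    probability `sensitivity`, and unaffected NGOs correctly report none
    with probability `specificity`, then
        observed = p * sensitivity + (1 - p) * (1 - specificity).
    Solving for the true incidence p gives:"""
    return (observed_rate + specificity - 1) / (sensitivity + specificity - 1)

observed = 0.30      # 30% of respondents reported fraud in the last two years
sensitivity = 0.70   # assumption: 70% of affected NGOs are willing to say so
specificity = 0.98   # assumption: 2% of unaffected NGOs report fraud anyway

print(f"Corrected estimate: "
      f"{corrected_incidence(observed, sensitivity, specificity):.1%}")
# -> roughly 41%: unlike the raw rate, the estimate is explicit about,
#    and sensitive to, the assumed under-reporting.
```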

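A companion sketch for point 2, again with hypothetical rates and sample sizes: before the 30% figure means anything it needs a benchmark, and the comparison itself needs an uncertainty estimate.

```python
# A sketch of point 2: the same 30% incidence reads very differently
# against different benchmarks. Rates and sample sizes are hypothetical.

import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """95% confidence interval for the difference between two
    independent proportions (normal approximation)."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

ngo_rate, ngo_n = 0.30, 150          # hypothetical NGO survey
for label, bench_rate, bench_n in [("private sector A", 0.20, 200),
                                   ("private sector B", 0.50, 200)]:
    lo, hi = diff_ci(ngo_rate, ngo_n, bench_rate, bench_n)
    print(f"NGO vs {label}: difference CI = ({lo:+.1%}, {hi:+.1%})")
# Whether 30% looks alarming or reassuring depends entirely on the
# benchmark, and even then the comparison says nothing about other
# differences between the settings (e.g., checks and balances).
```
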
The (accurate) perception that these issues are “methodological” probably explains the lack of attention from research users, who are more interested in substance than in method. Unfortunately, unless research users play their part as critical customers, the substantive results they are being served are far less trustworthy than they could and should be. Most corruption research is not of the purely academic variety: it explicitly aims for policy and practice relevance, is funded by policy and practice interests, and is therefore shaped by their agendas. In a market like this, unless research consumers also look at “production standards,” agenda fit is bound to trump quality. Research consumers’ attention to basic methodological quality is a sine qua non for research to be the best it can be.

So where to from here? What kind of interest would I hope for from corruption research users? What “demand” would make a difference to both methodological rigor and substantive output? By way of example: though there is plenty of survey data out there on corruption and fraud, we lack transparent translations of systematic reviews of that evidence into probability estimates that could help us work out which factors are likely to cause which other factors. Because we always use some kind of model—usually an implicit one—to translate data into conclusions, why not make those models as explicit as we can and then judge their usefulness against whatever new evidence comes our way? Models are tools for making debates more productive: they invite the translation of different understandings into a common language, and they force us to be specific enough to be practically useful.
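
A minimal sketch of what such an explicit, updatable model might look like, in Python and with entirely hypothetical numbers: a Beta-Binomial update in which a prior belief about fraud incidence (say, distilled from a systematic review) is confronted with new survey evidence, so that every ingredient of the conclusion is on the table and open to challenge.

```python
# An explicit, updatable model in miniature: Beta-Binomial updating.
# Prior and survey numbers are hypothetical.

from scipy import stats

prior = stats.beta(a=8, b=32)   # prior: incidence around 20%, fairly uncertain
affected, surveyed = 45, 150    # new survey: 45 of 150 NGOs report fraud

# Conjugate update: add successes to a, failures to b.
posterior = stats.beta(a=8 + affected, b=32 + (surveyed - affected))

print(f"prior mean:     {prior.mean():.1%}")
print(f"posterior mean: {posterior.mean():.1%}")
print(f"posterior 95% interval: "
      f"({posterior.ppf(0.025):.1%}, {posterior.ppf(0.975):.1%})")
# Because prior, likelihood, and update rule are all on the table, a
# critic can dispute any of them, which is exactly the point.
```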
