On Theory, Data, and Academic Malpractice in Anticorruption Research

I’m committed (probably self-servingly) to the idea that academic research is vital to both understanding and ameliorating corruption. I sometimes worry, though, that we in the research community don’t always live up to our highest ideals. Case in point: A little while back, I asked my law school’s library to help me track down some research papers on corruption-related topics, including a working paper from a few years ago, co-authored by a very well-known and influential corruption/good-governance researcher. I’d seen the paper cited in other articles but couldn’t find it. The library staff couldn’t find it either, and emailed the authors directly to ask if a copy of the paper was available. Here is a verbatim reproduction of this famous professor’s response:

Thanks for your email. Unfortunately, we decided not to finish this paper since we could not get the data to fit our theory[.]

I have to say, I found this response a bit troubling.

Now, to be fair, maybe what this person (whose first language is not English) actually meant was that he and his coauthor were unable to locate the data that would allow a meaningful test of the theory. (In other words, perhaps the statement “We could not get the data to fit our theory” should be understood to mean: “We could not acquire the sort of data that would be necessary to test our theory.”) But boy, much as I want to be charitable, it sure sounds like what this person meant was that he and his coauthor had tried to slice and dice the data in lots of different ways to get a result that fit a predetermined theory (so-called “Procrustean data torturing”), and that when they couldn’t get nature to confess, they spiked the paper rather than publicizing the null findings (contributing to the so-called “file drawer problem”).

Now, again, maybe that latter reading is wrong and unfair. Maybe the more charitable interpretation is actually the correct one. But still, it’s worrying. Even if this case was not, in fact, itself an illustration of the data torturing and the file-drawer problem, I’m sure those things go on in anticorruption research, just as they do elsewhere. Lots of scholars (including the author of the above email) have their own pet theories about the best way to promote high-quality governance, and spend quite a bit of time advising governments and NGO reformers on the basis of these (allegedly) evidence-based theories. But for the results of academic research to be credible and useful, we all need to be very careful about how we go about producing our scholarship, and to be careful not to let our findings — or our decisions about what projects to pursue, publish, and publicize — be unduly determined by our preconceived notions.

3 thoughts on “On Theory, Data, and Academic Malpractice in Anticorruption Research”

  1. I think that the first interpretation sounds like a pretty charitable reading of the email as well! Another problem with the industry, of course, is this: even if they had tried to publish these null findings, would those have been acceptable to reviewers/editors either? I think many things are left as working papers eternally if they have null findings, even if the findings tell us something interesting about the world.

  2. A very timely post! Anyone sharing your conviction that (academic) research is vital to understanding and ameliorating corruption should start waking up to the implications of the increasing public attention to everything that is amiss with quantitative social science. So many kudos for flagging this on your blog. My definition of the social sciences includes psychology, my own discipline, and I am proud that (together with medical science) it is at the forefront of acknowledging its (many!) problems and frantically debating them, piloting all kinds of ways to change research, data curation, and publication practices, and pushing for the institutional incentives required to support and sustain better practices. But being at the forefront also means that all the dirty linen is out there for anyone following the debate to see. I find it difficult to conclude anything other than that current practices – in the aggregate – seriously, I mean seriously, compromise the credibility of quantitative research. That may be my personal interpretation of the ongoing debate, and some/many might label it ‘alarmist,’ but in populist times, in which research expertise and the kind of evidence-based argument it (should) stand(s) for is already under blatant attack, we cannot afford complacency.
