In a previous post, I described a survey used to estimate the incidence of fraud and associated problems within the Cambodian NGO sector. The response to the results of that survey has so far been somewhat disheartening—not so much because the research has had little influence on action (the fate of most such research), but rather because those who have been told about the study’s results have all taken those results for granted, questioning neither their meaningfulness nor how they were generated. Such at-face-value uptake is, paradoxically, a huge risk to the longer-term public acceptance of the evidence produced by social-scientific research. I am relieved that methodological considerations (issues of publication bias, replicability, p-hacking, and others) are finally getting some traction within the social science community, but it is evident that the decades-long neglect of these problems dovetails with a public opinion climate that doubts and disparages social science expertise.
Lack of attention to the methodological underpinnings of “interesting” conclusions is hardly a remarkable fate for corruption research results, nor is it specific to corruption research. But the anticorruption community has a lot to lose from distrust in research, and thus a lot to gain by ensuring that the findings on which it builds its case pass basic quality checks. For the remainder of this post, I’ll examine some basic questions that the Cambodia NGO corruption survey’s results should have triggered before being accepted as credible and meaningful: