I recently attended two unrelated anticorruption conferences that both raised, in very different contexts, questions about how best to design education programs intended to inculcate ethical norms and prevent corruption. At one conference, focused on corporate anti-bribery compliance, the issue was the design of compliance training programs (and associated measures) that teach employees about their legal and ethical obligations, as well as the steps they should take to address any potential problems they encounter in the course of their work. At the other conference, focused more broadly on university education in the developing world, participants spent considerable time discussing (and sometimes advocating) the integration of anticorruption components into university courses, including courses not specifically on corruption, in order to produce a generation of students more likely to resist corrupt norms and to promote more ethical conduct.
Notably, although there seemed to be wide consensus in both discussions that education is important, there was much less of a clear sense of what sorts of education or training programs are most likely to be effective, and how much of an impact one can expect such programs to have. That’s understandable, of course: In this context, as in many others, the messiness of reality makes it very difficult to figure out what works, and to isolate the impact of any one intervention. But perhaps some forms of anticorruption education (in both the corporate training and academic contexts) may be suitable for randomized controlled trials (RCTs). Let me use this post to make a tentative case for expanded use of RCTs in this context.
First, a quick clarification for those unfamiliar with the terminology: In an RCT, a researcher assesses the impact of a particular intervention (the “treatment”) by applying that intervention to a randomly selected sample of the relevant population; that random sample (the “treatment group”) is then compared, with respect to some outcome variable (or variables) of interest, to another randomly selected sample that did not receive the intervention (the “control group”). The advantage of an RCT is that, if the sample sizes are large enough, the assignment of the treatment is genuinely random, and there are no confounding spillover effects or similar problems, the average difference in outcomes between the treatment and control groups is a very reliable indicator of the average effect of the intervention. There may be many other differences between the individual units in the treatment and control groups, but with random assignment and a large sample, these will tend to wash out. The use of RCTs in social science has been steadily increasing over the last decade; in development economics, the Jameel Poverty Action Lab (J-PAL) at MIT has been a particularly prominent pioneer in the use of RCTs, though it is hardly alone. In the field of anticorruption specifically, a number of scholars have conducted exemplary RCT-based research (see here, here, here, and here), and a recent survey paper by the U4 Center described RCTs as the “gold standard” for anticorruption policy evaluation (echoing a term that has been applied to RCT research more generally).
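For readers who find a toy example helpful, the logic of random assignment can be sketched in a few lines of code. In this minimal simulation (all numbers invented purely for illustration), a hidden trait varies across individuals and drives the measured outcome, yet the simple difference in means between treatment and control still recovers the assumed average treatment effect, because the hidden variation washes out across the two randomly assigned groups:

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: each individual has a hidden "baseline" trait
# (unobserved by the researcher) that drives the measured outcome.
N = 10_000
TRUE_EFFECT = 0.5  # invented average effect of the intervention

baselines = [random.gauss(0, 1) for _ in range(N)]

# Randomly assign half the population to treatment, half to control.
indices = list(range(N))
random.shuffle(indices)
treated = set(indices[: N // 2])

# Observed outcome = hidden baseline, plus the effect if treated.
outcomes = [
    base + (TRUE_EFFECT if i in treated else 0.0)
    for i, base in enumerate(baselines)
]

treat_mean = statistics.mean(outcomes[i] for i in range(N) if i in treated)
control_mean = statistics.mean(outcomes[i] for i in range(N) if i not in treated)
estimate = treat_mean - control_mean

print(f"estimated effect: {estimate:.3f} (true effect: {TRUE_EFFECT})")
```

With 10,000 simulated individuals, the estimate lands close to the assumed effect even though no individual's hidden baseline is ever observed; with a small sample, the same code would show much noisier estimates, which is why sample size matters.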
That’s not to say RCTs are always necessary, or even ideal. That same U4 report identifies a number of practical challenges in applying the RCT model to many anticorruption issues, and other scholars have raised deeper questions about possible over-reliance on RCTs despite their limitations. But anticorruption education and training seems especially well-suited to RCT-based evaluation of different approaches, with one very important limitation (aside from practical or political constraints) that I will get to at the end.
Here’s how it could work:
- For corporate training, a sufficiently large corporation could conduct its own internal trial, assigning different training programs to randomly selected business units or offices. For example, the relevant business units could be divided into three groups, with one receiving live in-person training in a lecture format, one receiving online training, and one receiving live training with participation and role-playing. (It would likely not be feasible to include a control group that received no training at all.) Or, to test another dimension of training, one could divide units into two groups (again at random), with one receiving a single long training program all at once, and the other receiving several shorter training programs spaced out over a longer period. Any number of other dimensions could be tested, depending on the questions of interest. The point is that, instead of trying to select one training protocol for the whole firm, a company could experiment by randomly assigning different protocols to different units, and then compare the results.
- For academic education (whether at the primary, secondary, or university level), one could design an experimental curriculum emphasizing anticorruption (or ethics more generally), and then randomly assign that curriculum to a subset of classes or schools. Or, if there were multiple proposed designs for such an anticorruption curriculum, the administration could randomly assign different designs to different schools or classes. One could also compare the relative effectiveness of stand-alone anticorruption classes with the integration of anticorruption materials into existing classes. And one could administer the assessments sufficiently long after the educational intervention (say, a couple of years) to test whether the effects last.
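The random-assignment step in the corporate training example above is mechanically simple. Here is a minimal sketch (the unit names, number of units, and protocol labels are all invented for illustration): shuffle the units, then deal them out round-robin so that each protocol gets a balanced group.

```python
import random

random.seed(0)

# Hypothetical roster of 24 business units; names invented for illustration.
units = [f"unit_{i:02d}" for i in range(1, 25)]

protocols = ["in-person lecture", "online module", "interactive role-play"]

# Shuffle, then deal units round-robin so group sizes stay balanced.
random.shuffle(units)
assignment = {unit: protocols[i % len(protocols)] for i, unit in enumerate(units)}

for protocol in protocols:
    group = sorted(u for u, p in assignment.items() if p == protocol)
    print(f"{protocol}: {len(group)} units")
```

In practice a firm might stratify the randomization (by region or unit size, say) rather than shuffle naively, but the core idea is the same: the protocol a unit receives is determined by chance, not by any characteristic of the unit.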
The biggest difficulty with this, again temporarily putting practical and political constraints to one side, is measuring effectiveness. After all, the ultimate outcome we care about is long-term behavioral change, and that is likely to be very difficult to measure, at least in the short term. But there may be reasonable substitutes. Psychologists, as well as some economists and other researchers, have designed clever experiments, presented as games, in which the researcher can observe or infer whether participants have cheated. (I blogged about one such set of experiments, conducted by the Harvard economist Rema Hanna, in a previous post; there are many others.) And there may be at least some relevant observational data, such as documented instances of cheating (in an academic setting) or malfeasance uncovered by an audit (in a corporate setting).
In sum, although there is inevitably a fair amount of guesswork and blind faith involved in developing anticorruption programs, there may be more opportunities than we have appreciated to gather useful evidence through careful experimentation. Education and training seem like natural candidates for such experimentation, and an opportunity for productive collaboration between the educators and policy reformers who can identify the questions and design the interventions, and the social scientists who can help design the experiments and evaluation metrics.