The study of corruption these days is often heavily empirical, involving the close analysis of case studies or quantitative data. But sometimes it’s helpful to take a step back and think about the nature of the corruption phenomenon in more abstract, theoretical terms—not because this sort of abstract thinking translates neatly and directly into specific policy recommendations (it usually doesn’t), but rather because it helps us organize the otherwise overwhelming mass of particular information in a way that facilitates thinking, in broad strategic terms, about the kind of problem we’re dealing with and what kinds of interventions might be most promising.
It’s in that spirit that a range of contributions have suggested that our conventional ways of thinking about and responding to corruption are flawed, or at least incomplete, because they fail to recognize the extent to which the problem of corruption is a manifestation of the bad equilibrium in what game theorists would call an “assurance game.” The basic idea behind an assurance game is often traced back to Rousseau’s parable of the “Stag Hunt,” in which two hunters are chasing a stag when a hare runs by; if both hunters continue to pursue the stag, they’ll catch it and both will be better off (half a stag is better than a whole hare), but if one hunter chases after the hare, that hunter will get something while the other ends up with nothing. The key feature of this game is that it captures a setting where there are two stable outcomes (“equilibria”)—either both hunters hunt the stag or both chase the hare—and one of those (the stag) is clearly better for both of them. If both hunters go after the stag, and expect the other to do so as well, neither has an incentive to get distracted chasing the hare. But if both hunters expect the other to go after the hare, then both hunters will go after the hare themselves, because hunting the stag alone (in this parable) guarantees one will go hungry, while chasing the hare at least yields something. In that sense, the assurance game differs from the more famous “Prisoners’ Dilemma” game (and from other so-called “free rider” problems), because in the latter class of games each player has an incentive to take the “anti-social” action regardless of what everyone else is expected to do, even though everyone would be better off if they all cooperated.
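For readers who find the distinction easier to see in numbers, here is a minimal sketch. The payoff values are my own illustrative choices, picked only to satisfy each game’s ordinal structure; the small equilibrium-finder simply checks, for each outcome, whether either player could gain by unilaterally switching strategies.

```python
# Illustrative payoff matrices (numbers are my own, chosen only to
# satisfy the ordinal structure of each game).
# Each entry is (row player's payoff, column player's payoff).

STAG_HUNT = {
    ("stag", "stag"): (4, 4),   # both hunt the stag: best for both
    ("stag", "hare"): (0, 3),   # the lone stag hunter goes hungry
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),   # both settle for hares
}

PRISONERS_DILEMMA = {
    ("coop", "coop"): (3, 3),
    ("coop", "defect"): (0, 4),   # the "sucker" payoff
    ("defect", "coop"): (4, 0),
    ("defect", "defect"): (1, 1),
}

def pure_nash_equilibria(game):
    """Return the profiles where neither player gains by deviating."""
    actions = sorted({a for a, _ in game})
    equilibria = []
    for row, col in game:
        row_best = all(game[(row, col)][0] >= game[(r, col)][0] for r in actions)
        col_best = all(game[(row, col)][1] >= game[(row, c)][1] for c in actions)
        if row_best and col_best:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(STAG_HUNT))          # two stable outcomes
print(pure_nash_equilibria(PRISONERS_DILEMMA))  # only mutual defection
```

Running this prints two equilibria for the Stag Hunt (both-stag and both-hare) but only mutual defection for the Prisoners’ Dilemma, which is exactly the structural difference described above.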
What does this all have to do with corruption? Well, a number of scholars have advanced quite explicit arguments that corruption is basically the equivalent of the hare-chasing equilibrium in the Stag Hunt: Everyone does it because everyone expects everyone else to do it, but if everyone could be assured that everyone else would act honestly, nobody would have an incentive to behave corruptly. The earliest scholarly paper of which I’m aware that argued that corruption is more like an assurance game than a prisoners’ dilemma is Professor Philip Nichols’ 2004 article, but the idea has been developed further by other scholars. For example, Professors Persson, Rothstein, and Teorell interpret the results of interviews in Kenya and Uganda as suggesting that corruption in those societies is more like an assurance game than a principal-agent problem, and in a 2019 follow-up paper these scholars argue more generally that systematic corruption “resemble[s] an assurance game…. Within this collective-action framework, unlike the single-equilibrium ‘prisoners dilemma,’ … what action is taken by any individual depends on expectations regarding how others will act.” And Professor Avinash Dixit, though more agnostic as to whether systemic corruption more closely resembles a prisoners’ dilemma or an assurance game, suggests that the latter is an important possibility. And for these and like-minded scholars, seeing corruption in these terms has important implications for how we might fight it. Professors Nichols and Dixit, for example, each independently argue for (somewhat different forms of) certification systems, which, in the assurance game context, can induce a shift from the “bad” (corrupt) equilibrium to the “good” (honest) equilibrium even without material sanctions.
Professors Persson, Rothstein, and Teorell are somewhat less specific in the policy proposals that flow from seeing corruption as primarily an assurance problem, but they argue that understanding the problem in this way implies that “rather than ‘fixing the incentives,’ the important thing will be to change actors’ beliefs about what ‘all’ other actors are likely to do,” and that this in turn requires “a more revolutionary type of change,” though they acknowledge that we still don’t have a clear sense of what can induce successful “equilibrium shifts” of this type.
I want to push back (gently but firmly) against the notion that it’s helpful to think of corruption as (primarily) an assurance problem. But before I pursue my critique of this idea, let me start out by acknowledging that the scholars who have framed corruption as an assurance problem are almost certainly correct in highlighting that corruption is one of those social phenomena for which pervasiveness correlates with attractiveness. In other words, the more people who (are expected to) engage in corruption, the more people who (have an incentive to) engage in corruption. That insight is hardly unique to corruption, but it is certainly important in the corruption context, and may have a range of significant implications for anticorruption policy. My beef with the “corruption is an assurance problem” framing is not with that key insight, but with what seems to me to be a substantial exaggeration of the importance of that factor relative to other factors.
While I take no issue with the claim that the incentive to engage in corruption can get stronger when corruption is more widespread—in part because each actor worries that if everyone else cheats, playing by the rules amounts to being a “sucker”—the assertion that corruption is an assurance game takes that view to an untenable extreme. The corruption-is-an-assurance-problem view implies that if only each agent could be confident that everyone else would behave honestly, there would be no corruption.
Just to play out this scenario and do a preliminary gut-check on its plausibility, imagine that 100 companies are competing for public tenders in Country X, whose procurement officers are known to request bribes in return for favorable treatment in the bidding process. Suppose that every company pays bribes, and even though some of them would prefer not to do so, the managers of that latter set of firms reason, “If nobody else paid bribes I wouldn’t either, but because everyone else is doing this, I have to do so too.” OK, so far so good for the corruption-as-assurance-game proponents. This sounds more or less like, “I’d rather hunt the stag, but since everyone else is chasing the hare, I have to do so too.” But let’s push this a little bit further. Suppose the managers of all 100 firms got in a room and one of them stood up and said, “This is ridiculous. We’re all paying bribes to these officials, and in the end we’re still competing with each other—dividing up the market in the same proportion—and just transferring some of our profits to these corrupt officials. Let’s all agree right now that none of us will pay bribes in the future. And since we all know that nobody else is paying bribes, the market shares will still look basically the same, but all of our profits will be higher.” In other words, someone says, “Let’s stop chasing these hares, and start chasing the stag!” And everyone claps and cheers and agrees to be honest in the future. Now, if the game is really an assurance game, this should be enough.
But that doesn’t seem terribly plausible to me. Maybe some firm managers would be perfectly happy competing honestly as long as everyone else is doing the same thing. But is it reasonable to suppose that all of them (or even most of them) feel that way? Wouldn’t it be more likely that, after the meeting in my imaginary scenario, a few managers would say to themselves, “Hmm, well, I know what we all said about everyone being better off if we’re all honest. But actually, if everyone else is honest but I’m still willing to pay some bribes—secretly, of course—then I’ll reap enormous profits. So why not?” In other words, the game, for at least some of the competitors, is more like a Prisoners’ Dilemma than a Stag Hunt. And once some managers reason this way and start cheating, other firms will presumably notice, and start to again feel like they’re “suckers” for sticking with the no-bribe pledge. Even if there were some credible way to certify that certain firms were adhering to the pledge and others weren’t, it’s not clear why this would alter the incentives, unless there’s some sort of peer pressure or similar sanction (which, just to be clear, is not necessary in a pure assurance game). This is not to deny that at least some firms might be able to reduce corruption through some sort of mutual reassurance. But the idea that corruption generally could be mitigated or eliminated simply through reassurance strikes me as fanciful.
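To make that gut-check concrete, here is a back-of-the-envelope sketch with entirely hypothetical numbers: the size of the profit pool, the bribe cost, and the market share a lone secret briber could capture are all my own assumptions, chosen only to illustrate the incentive structure.

```python
# Back-of-the-envelope check of the meeting scenario, with entirely
# hypothetical numbers. All 100 firms pledge honesty; one firm then
# considers secretly continuing to pay bribes.

N_FIRMS = 100
TOTAL_PROFIT = 1_000.0   # profit pool divided among the bidders (assumed)
BRIBE_COST = 2.0         # what the lone secret briber pays officials (assumed)
CHEATER_SHARE = 0.10     # market share a secret briber captures (assumed)

# If everyone honors the pledge, each firm gets an equal slice.
honest_payoff = TOTAL_PROFIT / N_FIRMS

# If one firm secretly bribes while the other 99 stay honest, it wins
# a disproportionate share of tenders, net of the bribes it pays.
cheat_payoff = CHEATER_SHARE * TOTAL_PROFIT - BRIBE_COST

print(f"honest: {honest_payoff}, secret briber: {cheat_payoff}")
# If cheating pays more even when everyone else is honest, the game has
# the structure of a Prisoners' Dilemma, and mutual reassurance alone
# cannot sustain the honest outcome.
```

With these (made-up) numbers, honesty yields 10 per firm while secret bribery yields 98, so the “good” outcome is not an equilibrium at all; that is the sense in which the scenario is a Prisoners’ Dilemma rather than a Stag Hunt.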
Moreover, it’s important to underscore that even if individuals in corrupt societies explain (and justify) their own corrupt behavior by saying, “everybody else does it, so I have to,” this is not sufficient evidence that corruption is genuinely an assurance game in the relevant sense. First of all, if these individuals are actually in something more closely resembling a Prisoners’ Dilemma, they would say the same thing. (In the equilibrium of the Prisoners’ Dilemma game, everyone engages in the anti-social behavior, and they could all say—plausibly—that because everyone else is behaving anti-socially, the only sensible move is to act similarly.) Second, we shouldn’t overlook the human capacity for rationalization and self-justification. Without denying that others’ conduct (or perceived conduct) likely does have an effect on one’s own behavior, it’s also generally the case that we tend to exaggerate those social influences, or peer pressure, when we do something we know we aren’t supposed to do (while when we act virtuously, we’re more likely to attribute our conduct to our own good character than to peer effects or social expectations). And even in societies where corruption is widespread, it’s usually not true that everyone engages in corruption (except perhaps in the most extreme cases). So while we shouldn’t discount the importance of social expectations on individual behavior, we should also be careful to treat assertions along the lines of, “I’m only corrupt because everyone else is” with a grain of salt.
Now, at least some of the scholars who have advocated the idea that corruption is an assurance game (such as Professors Persson, Rothstein, and Teorell) don’t seem to really think that we could eliminate systemic corruption solely by coordinating a shift in expectations, without material incentives of some kind. But if that’s right, then when they say that corruption is actually an assurance game (and not a Prisoners’ Dilemma or a principal-agent problem), they don’t really mean what they’re saying. The key feature of an assurance game is that the good (honest) equilibrium can be maintained solely by mutual expectations of honest behavior, and that this honest behavior is self-sustaining even without any external enforcement device (except perhaps some way of credibly signaling that one is behaving honestly). If everyone is hunting the stag, nobody has an incentive to chase the hare, and so we don’t need to mandate punishments for those who desert the stag hunt, because nobody would ever do it. In the corruption context—just to beat Rousseau’s metaphor into the ground—some of the hares are so big and tasty-looking, and some of the hunters so partial to hare, that stricter enforcement is likely necessary. (An aside: Some of the folks writing in this vein seem to conflate assurance games with collective action problems, and differentiate collective action problems from principal-agent problems. These are both errors. An assurance game is a form of collective action problem, but there are lots of other collective action problems that are not assurance games, including the Prisoners’ Dilemma. And collective action problems and principal-agent problems are not competing alternatives, but rather different sorts of incentive problems that may be present simultaneously.)
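The distinction can be reduced to one line of arithmetic: in an assurance game, honesty is already a best response to others’ honesty, whereas in a Prisoners’ Dilemma it becomes one only after an expected sanction is added to the payoffs. The payoff numbers, detection probability, and fine in this sketch are purely illustrative assumptions of mine.

```python
# In a pure assurance game, honesty sustains itself once it is expected;
# in a Prisoners' Dilemma it does not, unless an external sanction
# changes the payoffs. All numbers below are illustrative only.

def best_response_is_honest(honest_payoff, corrupt_payoff,
                            detection_prob=0.0, fine=0.0):
    """Is honesty a best response when everyone else behaves honestly?"""
    expected_corrupt = corrupt_payoff - detection_prob * fine
    return honest_payoff >= expected_corrupt

# Assurance-game payoffs: honesty already beats corruption when
# everyone else is honest, so no enforcement device is needed.
print(best_response_is_honest(honest_payoff=4, corrupt_payoff=3))  # True

# PD-like payoffs: corruption beats honesty without enforcement...
print(best_response_is_honest(honest_payoff=3, corrupt_payoff=4))  # False

# ...but a 50% detection chance and a fine of 4 restores honesty
# as a best response (expected corrupt payoff falls to 4 - 2 = 2).
print(best_response_is_honest(3, 4, detection_prob=0.5, fine=4))   # True
```

The third call is the point of the aside above: once the hares are big and tasty enough, the honest equilibrium has to be propped up by material sanctions, which is precisely what a pure assurance game does not require.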
Again, I agree with the proposition that corruption, or at least some forms of corruption, has a strongly self-reinforcing property, in part because each individual’s incentive to behave corruptly is stronger when she expects others in the community also to behave corruptly. This insight might indeed support interventions that facilitate coordination on more honest behavior, and that provide opportunities for mutual reassurance. This insight may also suggest that effective enforcement strategies have a kind of multiplier effect, as cracking down on some of the bad actors may not only deter, but may convince other actors that behaving honestly doesn’t inevitably mean getting out-competed by cheaters. So the corruption-as-assurance-problem perspective is founded on a kernel of truth. But the sweeping claim that corruption is an assurance problem that can be solved (solely or primarily) by effecting a coordinated shift in expectations regarding others’ behavior seems like the wrong approach, one that’s more likely to mislead than to enlighten.