A little while back I attended a very interesting talk by California Supreme Court Justice Mariano-Florentino Cuellar about a paper of his, co-authored with the political scientists Margaret Levi and Barry Weingast, entitled “Conflict, Institutions, and Public Law: Reflections on Twentieth-Century America as a Developing Country.” It’s a short, provocative paper, well worth reading for a number of reasons, but what I really want to focus on here is less the substance of the paper itself than the broader theme, captured by the paper’s subtitle, that it may be valuable to think about the pre-World War II United States as not so different from modern developing countries. Most relevant for readers of this blog, it may be worth looking to U.S. history (and the history of other developed countries) to better understand the process by which endemic public corruption may be brought under control.
The Cuellar-Levi-Weingast paper itself touches on, but doesn’t really delve into, this issue. Nonetheless, it got me thinking about three features of the historical U.S. struggle against systemic corruption—a struggle that, while certainly not complete, does appear to have successfully transformed the United States from a system where corruption was the norm (with some happy exceptions) to one where integrity is the norm (with some unhappy exceptions). Importantly, each of these three observations casts doubt on prominent claims in the modern debate about fighting corruption in the developing world:
- First, the U.S. historical experience shows that the transition from corruption to integrity is possible even when a culture of corruption is entrenched. This is an important lesson, because it’s so easy to become discouraged, even fatalistic, about corruption. As numerous scholars and commentators have pointed out, corruption can be self-reinforcing, creating “vicious cycles” or “corruption traps” that are difficult to break. And if one looks at the most popular perception-based corruption measures, such as Transparency International’s Corruption Perceptions Index (CPI), one sees very little movement in the scores over the 20-plus years that the index has been in existence. (True, there are a few countries that have seen notable increases or decreases in their scores, but these are exceptions, and may well be statistical anomalies.) For this reason, many people say that those societies afflicted with a deep-rooted “culture of corruption” are basically stuck, and nothing can be done. But the history of the United States between the mid-19th and mid-20th century offers a powerful counterexample. From everything I’ve read, government in the United States in the 1870s and beforehand—especially at the state and local level but at the national level as well—was afflicted by pervasive cultures of corruption not so different from what we see in many parts of the developing world today. Patronage politics and outright clientelism in the civil service were the norm. Bribery and embezzlement were widespread. Vote buying was common. If there had been a CPI in the 19th century, the U.S. would have scored quite poorly—probably not much differently than many developing countries do today (at least if the same standards were applied). True, many U.S. citizens in the 19th century were disgusted by this corruption, but many also tolerated it, or viewed it as inevitable.
And there were also some honest politicians and civil servants, much as there are in modern developing countries afflicted by corruption, but not enough to alter the general corrupt norms. But something happened in the U.S. between about 1870 and about 1940, such that by the end of this roughly seven-decade stretch, the situation looked quite different. Of course, there was still a great deal of corruption, especially though not exclusively at the state and local level. But, at least according to what I’ve read and heard from knowledgeable scholars, the U.S. federal government, and many state governments, were no longer characterized by an entrenched “culture of corruption.” And if one takes the story forward a few more decades—say, through the 1970s—we see even more substantial changes, including major clean-ups at the state and local level. I don’t yet feel competent to offer anything like a coherent narrative of the changes that took place, nor am I well-equipped to document changes in the level or type of corruption with rigorous evidence. But I think most historians would agree at least with the general contours of the story I just told. (If any readers out there think I’m wrong, I would of course welcome corrections.) If so, then the U.S. example offers an important corrective to the fatalism that sometimes infects discussions of corruption in modern developing countries—including, perhaps especially, by citizens of those countries. The next time somebody asks, “Are there any examples of countries—not just autocratic city-states like Singapore or Hong Kong, but large democratic countries—that have managed to get corruption under control?”, you can point to the United States as one example.
And there are more, including the United Kingdom, Sweden, Finland, Norway—indeed, I’m willing to bet that virtually every country that today gets high marks on the CPI was at one point in its history afflicted with the same sort of entrenched corruption that one sees today in places like India, Nigeria, Brazil, Indonesia, and elsewhere. So while it remains to be seen whether the U.S. experience, or that of these other now-clean(ish) countries, offers specific practical lessons to modern developing countries struggling with corruption, at the very least the U.S. experience shows that progress is possible.
- Second, the U.S. historical experience shows that the transition from corruption to integrity can be gradual and incremental. While some are drawn to the strong version of the fatalistic view that countries with entrenched cultures of corruption are just stuck, others—including a number of very sophisticated scholars—make a related but different argument. That argument goes like this: Because corruption can be self-reinforcing—a “vicious circle”—a gradual, incrementalist approach to anticorruption reform is doomed to failure; trying to fix one part of the system without fixing others, and without drastically changing shared public expectations about the future prevalence of corruption, can’t work. On this view, although (contrary to the extreme fatalist view) shifting from a high-corruption equilibrium to a high-integrity equilibrium is possible, it is only possible with a comprehensive “big bang” reform program. This may well be true in some cases. But the U.S. experience over the 70-year period between 1870 and 1940—or, perhaps more appropriately, the 110-year period between 1870 and 1980—suggests that, even when a society is stuck in a “high-corruption trap,” that society can see significant improvements through a gradual but sustained reform program, and the associated accumulation of new laws and rules, generational changes in social norms and expectations, and other shifts. Here again, I confess that my knowledge of U.S. history is not yet sufficient to offer a full account, and the history itself is sufficiently rich and complex that doing so would probably require a book, not a blog post. But based on what I do know, I don’t think one can point to a single period of a few years where the corruption situation (or even relevant policies) changed all at once. Federal civil service reform got underway in the decades following the Civil War, culminating most notably in the Pendleton Act of 1883.
The Progressive Era reforms, which exposed and targeted corruption not only in the federal government but also in state and local government, especially the urban political machines, are usually dated as starting in the 1890s and running through the 1940s, but Progressive Era reforms took place at different times in different states over that period. Some of the most important anticorruption efforts occurred after World War II—the Knapp Commission Report on police corruption in New York City, for example, is a product of the early 1970s. Maybe there’s something to the “big bang” approach to anticorruption reform, but the strong assertion that a big bang approach is necessary if corruption is widespread—when it is, as some researchers put it, a “collective action problem”—seems inconsistent with the U.S. experience. This is not a trivial point, because the belief that only a big bang anticorruption drive can work implies the need to place our hopes in a strong leader or administration, with relatively few constraints on its power, that can push through a series of dramatic reforms all at once. But that’s not the story of the U.S. Progressives, or their progenitors and successors; the U.S. story, as I understand it, is the story of a sustained social movement, one that was maintained over several generations, that engaged in political struggle at multiple levels of government, with some victories and some defeats, but eventually achieved a dramatic transformation of the norms and practices of U.S. government. Whether this model of reform is appropriate for modern developing countries is of course another question, one that I don’t attempt to answer here. But it does suggest that extravagant claims about the impossibility of solving the corruption collective action problem through gradual reforms over fairly long periods of time are at the very least overstated.
- Third, the U.S. historical experience shows that the transition from corruption to integrity can go hand in hand with a dramatic expansion of the power and discretion of the government. Consider a third claim about the fight against entrenched corruption that shows up in both the academic and policy debates with some frequency: the claim that effective action against corruption requires, and tends to go hand-in-hand with, efforts to shrink the state and reduce the scope for bureaucratic discretion. This position may find some support in the recent experience of some of the post-socialist countries of East and Central Europe, and more generally in the evidence that excessive red tape and unconstrained bureaucratic arbitrariness may create the conditions in which corruption thrives. I certainly do not mean to deny that in some cases, shrinking the state and reining in the bureaucracy may be important elements of an anticorruption strategy. But the strong version of the claim—the view that dramatic reductions in corruption necessarily go hand-in-hand with dramatic reductions in state size and power—is hard to square with the U.S. experience, especially the period between roughly the 1890s and 1940s, when perhaps the greatest successes in reducing corruption and patronage (especially at the federal level) corresponded with one of the most dramatic expansions of government, the rise of the so-called “administrative state.” This expansion included not just growth in total government size as measured by government revenue or spending as a percentage of GDP, but also a greater concentration of power in government agencies, bureaus, and departments. To be sure, the causality here is hard to nail down. It could be that because the size and power of the government was growing, there were much greater social and political pressures to crack down on corruption.
It could be that anticorruption and other good-government measures sufficiently increased citizen confidence in public institutions, making the dramatic expansion of the administrative state more politically palatable. It could be that both developments—a larger government and a greater effort to clean up government—were spurred by a common underlying cause, such as industrialization, technological change, and a rising middle class—or that both were spurred on by the crisis of the Great Depression. It could be some combination of these factors, or other factors, or it could just be a coincidence. But it’s at least notable that the oft-repeated claim that shrinking and/or constraining the state is a prerequisite to effective anticorruption efforts appears, at the very least, to be inconsistent with U.S. historical experience. (And for what it’s worth, as I noted in a prior post, the correlation between government size and perceived corruption appears to be negative, not positive, when one examines modern cross-country data.)
Again, I want to be clear that I don’t mean to make any strong claims here about what the U.S. experience teaches modern developing countries about what they should do to address their systemic corruption problems. I don’t know the U.S. history nearly well enough, and it’s always hazardous to try to draw clear, straightforward lessons from one country for other, quite different countries. But, as explained above, I do think the U.S. experience provides a reason to be skeptical of three oft-repeated assertions about fighting corruption in the modern developing world: the fatalist assertion that societies afflicted by entrenched corruption are stuck; the slightly-more-hopeful assertion that only a big bang approach can succeed in uprooting entrenched corruption; and the libertarian assertion that substantial reductions in entrenched corruption are linked to substantial reductions in the size and power of government. While each of these may be true for particular countries at particular times, none seems consistent with the U.S. historical experience. More generally, it seems to me that we all might benefit from delving a bit more deeply into the history of the countries of the developed world, not so much for simple/simplistic “lessons,” but to better understand the dynamics of governance change over time.