About Matthew Stephenson

Professor of Law, Harvard Law School

How Transparent Should Prosecutors Be About Investigations Into High-Level Corruption?

Today’s post is going to be one of those posts where I raise a question that I’ve been puzzling over, without having much to offer in the way of good answers.

Here’s the question: How open and transparent with the public should the officials investigating serious allegations of high-level corruption be about the progress of their investigations?

To be sure, no competent investigator or prosecutor would or should be completely transparent, as doing so might well tip off the targets of the investigation to what the investigators know, their investigative and legal strategies, and so forth. But even with that constraint, there’s a fairly broad range of options. Investigators could be absolutely tight-lipped about everything. Or they could hold regular press conferences covering significant developments in the case (and perhaps even going further to comment on the larger issues that the investigation implicates). Or something in between.

I was prompted to think more about this question in part by an exchange I had with Jose Ugaz at last month’s Harvard conference on Populist Plutocrats. I was asking Mr. Ugaz about his experience serving as Peru’s Ad Hoc State Attorney investigating and prosecuting high-level corruption in the Fujimori regime, and in particular how he dealt with concerns that his investigation might be perceived as politicized. Those who are interested can watch the video of our exchange (which starts around 7:15:55), but the key part of Mr. Ugaz’s response (slightly edited for clarity) ran as follows: Continue reading

Guest Post: Transparency International UK’s Pledge Tracker–Amateur Research or Different Objectives?

Last week, GAB Editor-in-Chief Matthew Stephenson published a post sharply criticizing Transparency International UK’s new “Pledge Tracker,” which evaluates how well countries are living up to the pledges they made at the May 2016 London Anti-Corruption Summit. GAB is delighted to have the opportunity to publish the following reply from Robert Barrington, the Executive Director of Transparency International UK:

“A slapdash, amateurish collection of arbitrary, often inconsistent judgments, unsupported by anything that resembles serious research.” Not since I was taken to task over an undergraduate essay by an eminent professor at Oxford have I had work for which I was responsible receive quite such a stinging critique. On that occasion, I could not escape a sense that my world view differed from that of the professor, and that—irrespective of the detail—was the root of our misunderstanding.

So is Professor Stephenson’s assessment of TI-UK’s Pledge Tracker merited? Here is my overall assessment: he is right on some but not all of the detail; he is wrong on most but not all of the big picture. At the root of the difference is the question of whether this is an index in which countries are compared with each other according to a consistent global standard, or whether it is the presentation of individual country assessments by local civil society organizations of their own country’s progress against their own country’s commitments. Continue reading

Anticorruption Bibliography–October 2017 Update

An updated version of my anticorruption bibliography is available from my faculty webpage. A direct link to the pdf of the full bibliography is here, and a list of the new sources added in this update is here. As always, I welcome suggestions for other sources that are not yet included, including any papers GAB readers have written.

Transparency International’s Anti-Corruption Pledge Tracker Is Badly Flawed. It Needs To Be Redone from Scratch.

In May 2016, at the London Anticorruption Summit sponsored by then-Prime Minister David Cameron, participating countries issued declarations announcing a variety of commitments—some new, some continuations of existing policies—to further the fight against international corruption. Of course, all too often governments fail to follow through on their grandiose promises, so I was heartened by Transparency International’s announcement, in September 2016, that it had gone through all the country declarations, compiled a spreadsheet identifying each country’s specific promises, and would be monitoring how well each country was following through on its commitments.

Last month, a year after TI published the spreadsheet documenting the list of summit commitments, TI released a report and an interactive website that purport to track whether countries have followed through on those commitments. So what do we learn from this tracking exercise?

Alas, the answer is “almost nothing.” TI’s “Anti-Corruption Pledge Tracker,” in its current form, is a catastrophic failure—a slapdash, amateurish collection of arbitrary, often inconsistent judgments, unsupported by anything that resembles serious research, and (ironically) non-transparent. This is all the more surprising—and disappointing—given the fact that TI has done so much better in producing similar assessment tools in other contexts. Indeed, at least one such recent tool—TI’s Government Defense Anti-Corruption Index—provides a model for what the Pledge Tracker could and should have looked like. Given the importance of tracking countries’ fulfillment of their summit pledges, and TI’s natural position as a leader on that effort, I dearly hope that TI will scrap the Pledge Tracker in its current form, go back to the drawing board, and do a new version.

I know that sounds harsh, and perhaps it seems excessive. But let me explain why I don’t find the Pledge Tracker, in its current form, worthy of credence. Continue reading

Guest Post: Refining Corruption Surveys To Identify New Opportunities for Social Change

GAB is delighted to welcome back Dieter Zinnbauer, Programme Manager at Transparency International, who contributes the following guest post:

Household corruption surveys, such as Transparency International’s Global Corruption Barometer (GCB), are primarily, and very importantly, focused on tracking the scale and scope of citizens’ personal bribery experience and their general perceptions about corruption levels in different institutions. More recently, the GCB has branched out into questions about what kind of action against corruption people do or do not take, and why. The hope is that better understanding what motivates people to take action against corruption will help groups like TI develop more effective advocacy and mobilization strategies.

In addition to these direct questions about why people say they do or don’t take action against corruption, household surveys have the potential to help advocacy groups in their efforts to mobilize citizens in another way as well: by identifying inconsistencies or discrepancies between people’s experience of corruption and their perceptions of corruption. The existence of these gaps is not in itself surprising, but learning more about them might help advocates craft strategies for changing both behavior and beliefs. Consider the following examples: Continue reading

Another Way To Improve the Accuracy of Corruption Surveys: The Crosswise Model

Today’s post is yet another entry in what I guess has become a mini-series on corruption experience surveys. In the first post, from a few weeks back, I discussed the question whether, when trying to assess and compare bribery prevalence across jurisdictions using such surveys, the correct denominator should be all respondents, or only those who had contact with government officials. That post bracketed questions about whether respondents would honestly admit bribery in light of the “social desirability bias” problem (the reluctance to admit, even on an anonymous survey, that one has engaged in socially undesirable activities). My two more recent posts have focused on that problem, first criticizing one of the most common strategies for mitigating the social desirability bias problem (indirect questioning), and then, in last week’s post, trying to be a bit more constructive by calling attention to one potentially more promising solution, the so-called unmatched count technique (UCT), also known as the item count technique or list method. Today I want to continue in that latter vein by calling attention to yet another strategy for ameliorating social desirability bias in corruption surveys: the “crosswise model.”

As with the UCT, the crosswise model was developed outside the corruption field (see here and here) and has been deployed in other areas, but it has only recently been introduced into survey work on corruption. The scholars responsible for pioneering the use of the crosswise model in the study of corruption are Daniel Gingerich, Virginia Oliveros, Ana Corbacho, and Mauricio Ruiz-Vega, in (so far) two important papers, the first of which focuses primarily on the methodology, and the second of which applies the method to address the extent to which individual attitudes about corruption are influenced by beliefs about the extent of corruption in the society. (Both papers focus on Costa Rica, where the survey was fielded.) Those who are interested should check out the original papers by following the links above. Here I’ll just try to give a brief, non-technical flavor of the technique, and say a bit about why I think it might be useful not only for academics conducting their particular projects, but also for organizations that regularly field more comprehensive surveys on corruption, such as Transparency International’s Global Corruption Barometer.

The basic intuition behind the crosswise model is actually fairly straightforward, though it might not be immediately intuitive to everyone. Here’s the basic idea: Continue reading
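Since the post’s full explanation sits behind the “Continue reading” link, here is a minimal simulation sketch (my own illustration, not drawn from the Gingerich et al. papers) of how the crosswise estimator works. The respondent is given two statements, one sensitive (“I have paid a bribe in the past year”) and one innocuous with a known population prevalence (say, “my mother was born in the first quarter of the year,” roughly 25%), and reports only whether both statements are true or neither is true, versus exactly one being true. No one, including the interviewer, learns which. The bribery rate, sample size, and pairing below are all hypothetical:

```python
import random

random.seed(1)

TRUE_BRIBERY_RATE = 0.20  # hypothetical hidden prevalence we hope to recover
P_INNOCUOUS = 0.25        # known prevalence of the nonsensitive statement
N = 100_000               # simulated respondents

both_or_neither = 0
for _ in range(N):
    bribed = random.random() < TRUE_BRIBERY_RATE
    innocuous = random.random() < P_INNOCUOUS
    # The respondent reveals ONLY whether the two answers match
    # ("both true or neither true"), which protects either answer alone.
    if bribed == innocuous:
        both_or_neither += 1

lam = both_or_neither / N  # observed match rate
# Since lam = pi*p + (1 - pi)*(1 - p), solve for the hidden prevalence pi:
estimate = (lam + P_INNOCUOUS - 1) / (2 * P_INNOCUOUS - 1)
print(f"estimated bribery rate: {estimate:.3f}")
```

The key design constraint visible here is that the innocuous statement’s prevalence must be known and must not equal 50%, or the denominator vanishes and the sensitive rate becomes unidentifiable.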

Tracking Corruption and Conflicts of Interest in the Trump Administration–October 2017 Update

Last May, we launched our project to track credible allegations that President Trump, as well as his family members and close associates, are seeking to use the presidency to advance their personal financial interests. Just as President Trump’s son Eric will be providing President Trump with “quarterly” updates on the Trump Organization’s business affairs, we will do our best to provide readers with regular updates on credible allegations of presidential profiteering. Our October update is now available here.

There were relatively few new developments this month, though the list of existing conflicts and related concerns is still plenty long. We will continue to monitor and report on allegations that Trump, or his family and close associates, are seeking to profit from the presidency.

As we are always careful to note, while we try to sift through the media reports to include only those allegations that appear credible, we acknowledge that many of the allegations discussed are speculative and/or contested. We also do not attempt a full analysis of the laws and regulations that may or may not have been broken if the allegations are true. For an overview of some of the relevant federal laws and regulations that might apply to some of the alleged problematic conduct, see here.

Using the Unmatched Count Technique (UCT) to Elicit More Accurate Answers on Corruption Experience Surveys

With apologies to those readers who couldn’t care less about methodological issues associated with corruption experience surveys, I’m going to continue the train of thought I began in my last two posts (here and here) with further musings on that theme—in particular what survey researchers refer to as the “social desirability bias” problem (the reluctance of survey respondents to truthfully answer questions about sensitive behaviors like corruption). Last week’s post emphasized the seriousness of this concern and voiced some skepticism about whether one of the most common techniques for addressing it (so-called “indirect questioning,” in which respondents are asked not about their own behavior but about the behavior of people “like them” or “in their line of business”) actually works as well as is commonly assumed.

We professors, especially those of us who like to write blog posts, often get a bad rap for criticizing everything in sight but never offering any constructive solutions. The point is well-taken, and while I can’t promise to lay off the criticism, in today’s post I want to try to be at least a little bit constructive by calling attention to a promising alternative approach to mitigating the social desirability bias problem in corruption experience surveys: the unmatched count technique (UCT), sometimes alternatively called the “item count” or “list” method. This approach has been deployed occasionally by a few academic researchers working on corruption, but it doesn’t seem to have been picked up by the major organizations that field large-scale corruption experience surveys, such as Transparency International’s Global Corruption Barometer (GCB), the World Bank’s Enterprise Surveys (WBES), or the various regional surveys (like AmericasBarometer or Afrobarometer). So it seemed worthwhile to try to draw more attention to the UCT. It’s by no means a perfect solution, and I’ll say a little bit more about costs and drawbacks near the end of the post. But the UCT is nonetheless worth serious consideration, both by other researchers designing their own surveys for individual research projects, and by more established organizations that regularly field surveys on corruption experience.

The way a UCT question works is roughly as follows: Continue reading
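The full walkthrough is behind the link, but the core mechanics of a UCT question can be sketched in a short simulation (my own illustration, not taken from the post). Respondents are randomly split into two groups: a control group sees a list of innocuous items and reports only how many apply to them, while a treatment group sees the same list plus the sensitive item (e.g., “I paid a bribe to a public official this year”). Because no one reports which items apply, individual answers stay protected, yet the difference in average counts between the groups estimates the sensitive item’s prevalence. The item rates and sample size here are hypothetical:

```python
import random

random.seed(42)

TRUE_BRIBERY_RATE = 0.20                 # hypothetical hidden prevalence
N_PER_GROUP = 50_000                     # simulated respondents per group
INNOCUOUS_RATES = [0.5, 0.3, 0.7, 0.4]   # hypothetical rates for 4 neutral items

def count_true(rates):
    """How many list items apply to one simulated respondent."""
    return sum(random.random() < r for r in rates)

# Control group: reports a count over the 4 innocuous items only.
control = [count_true(INNOCUOUS_RATES) for _ in range(N_PER_GROUP)]

# Treatment group: same list plus the sensitive bribery item.
treatment = [count_true(INNOCUOUS_RATES + [TRUE_BRIBERY_RATE])
             for _ in range(N_PER_GROUP)]

# UCT estimator: the difference in mean counts between the two groups.
estimate = sum(treatment) / N_PER_GROUP - sum(control) / N_PER_GROUP
print(f"estimated bribery rate: {estimate:.3f}")
```

The sketch also hints at the technique’s main cost: the estimate rides on a difference of noisy means, so UCT questions need substantially larger samples than direct questions to achieve the same precision.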

Populist Plutocrats Conference–Video Available

Last Saturday, on September 23, Harvard Law School organized (in collaboration with the Stigler Center at the University of Chicago) a conference on “Populist Plutocrats: Lessons from Around the World,” which I previously advertised on this blog (see here and here). The event was video-recorded for those who are interested but were not able to attend in person. At the moment, the available video is a full, unedited recording, which you can find here (on the Stigler Center’s YouTube channel). We’re hoping to get the video edited and uploaded in a more convenient format soon, but for those who are interested, I’ll provide in this post the time locations for different sessions of the event:

I hope and expect that we’ll have some more posts in the coming weeks that reflect and engage substantively with some of the discussions at the conference, and in particular how they relate to issues of corruption and related topics, but for now I hope some of you will check out some of the video recording.


Anticorruption Bibliography–September 2017 Update

An updated version of my anticorruption bibliography is available from my faculty webpage. A direct link to the pdf of the full bibliography is here, and a list of the new sources added in this update is here. As always, I welcome suggestions for other sources that are not yet included, including any papers GAB readers have written.