The New Frontier: Using Artificial Intelligence To Help Fight Corruption

In January 2018, scientists from Valladolid, Spain brought a piece of inspiring news to anticorruption advocates: they had created an artificial intelligence (AI) system that can predict which Spanish provinces are at higher risk for corruption, and that also identifies the variables associated with greater corruption (including the real estate tax, inflated housing prices, the opening of bank branches, and the establishment of new companies, among others). This is hardly the first example of computer technology being used in the fight against corruption. Governments, international organizations, and civil society organizations have already been mining “big data” (see, for example, here and here) and using mobile apps to encourage reporting (see, for example, here and here). What makes the recent Spanish innovation notable is its use of AI.

AI is a cluster of technologies distinguished by their ability to “learn,” rather than relying solely on instructions specified in advance by human programmers. AI systems come in several types, including “machine learning” (in which a computer analyzes large quantities of data to identify patterns, which in turn enables the machine to perform tasks and make predictions when confronted with new information) and more advanced “deep learning” systems that can find patterns in unstructured data – across hundreds of thousands of dimensions – attaining something resembling human cognitive capabilities and sometimes making predictions beyond normal human capacity.
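To make the “learning” idea concrete, here is a deliberately tiny sketch in Python: the program is never given a rule, only labeled examples, and it infers from them how to classify new cases. The features, labels, and numbers are all invented for illustration; real systems use far richer models than this nearest-centroid toy.

```python
# Minimal illustration of "machine learning": infer a decision rule
# from labeled examples, then predict on data the program never saw.
# All features, labels, and numbers are invented for illustration.

def fit_centroids(examples):
    """Compute the mean feature vector for each label (nearest-centroid learning)."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (housing-price growth, new-company rate) -> risk label
training = [
    ((0.9, 0.8), "high-risk"), ((0.8, 0.9), "high-risk"),
    ((0.1, 0.2), "low-risk"),  ((0.2, 0.1), "low-risk"),
]
model = fit_centroids(training)
print(predict(model, (0.85, 0.7)))  # -> high-risk
```

The point is only that the decision boundary comes from the data, not the programmer; everything else about real AI systems is more sophisticated.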

AI is a potentially transformative technology in many fields, including anticorruption. Consider three examples of the anticorruption potential of AI systems:

  • First, corporations could use AI to design more effective internal compliance programs. Although there is widespread agreement that effective compliance programs should be “risk-based,” it is very hard for corporate compliance officers and other decision-makers to make nuanced, accurate decisions regarding corruption risk levels for different activities. An AI system could help, through analysis of both the relevant laws and regulations (through natural language processing) and past cases of compliance or non-compliance. An AI system could “understand” the content of regulations and learn to recognize patterns associated with compliance and non-compliance, and in so doing help identify risk areas in a way that allows the corporation to build an individualized compliance program. An additional benefit is that any subsequent revisions to the laws and regulations (or other relevant changes) could be directly incorporated by the AI system without any human intervention.
  • Second, following the lead of the Spanish researchers, governments can use AI systems to identify vulnerabilities (both geographic and sectoral). This would help governments target their efforts, concentrating stricter controls on those particularly risky areas. Perhaps even more importantly, AI systems can help governments spot loopholes in the national or regional regulatory framework. (Without going into too much technical detail: when one variable identified by the AI system fails to respond as expected to changes in another variable, the system can send out an alert.)
  • Third, AI can be especially helpful in the anti-money laundering (AML) context, increasing (potentially by orders of magnitude) the efficiency and accuracy of detection and due diligence. Ravn, a machine-learning AI platform based in London, has already demonstrated the power of such systems, helping the seven human investigators of the Serious Fraud Office (SFO) sift through and index 30 million documents—processing and summarizing up to 600,000 documents per day—in the Rolls-Royce corruption case. AI systems can also reduce “false positives” in banks’ current transaction monitoring systems (TMS). According to a PwC industry survey, somewhere between 90% and 95% of all alerts are currently false positives, because traditional TMS are built on unsophisticated models that extract data from fairly broad, crude, human-identified risk factors. Reviewing all these alerts requires thousands of personnel, at an annual cost reaching hundreds of millions of dollars. Hence any decrease in false positives would significantly enhance AML efficiency and help investigators focus on the right cases.
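The alerting mechanism in the second point – one variable failing to respond as expected to another – can be sketched as a simple residual check: learn the expected relationship from historical data, then flag observations that break it. This is a toy illustration with invented variables and figures, not the Spanish system’s actual method.

```python
# Toy residual-based alerting: learn the expected relationship between
# two variables from the data, then flag observations where one variable
# fails to track the other. All variables and figures are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def alerts(xs, ys, tolerance):
    """Return the indices where y deviates from its fitted value by more than tolerance."""
    a, b = fit_line(xs, ys)
    return [i for i, (x, y) in enumerate(zip(xs, ys)) if abs(y - (a + b * x)) > tolerance]

# Hypothetical data: new bank branches vs. new company registrations.
branches  = [10, 12, 14, 16, 18, 20]
companies = [21, 25, 29, 33, 60, 41]   # the fifth observation breaks the pattern
print(alerts(branches, companies, tolerance=10))  # -> [4]
```

A real system would model many variables jointly, but the principle – alert when an observed value departs from its learned expectation – is the same.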

Of course, AI is no cure-all. Although AI reduces the need for human personnel to perform routine tasks, humans remain essential to steer AI in the right direction and to supply the labeled (“supervised”) training data when a system is first developed. AI systems also raise transparency concerns, as many stakeholders (and sometimes even the systems’ designers) cannot make sense of AI at the algorithmic level. Some are understandably uncomfortable entrusting important financial or personal information to a “black box,” and such concerns should be taken into account when designing or promoting AI technology in politically and socially sensitive areas such as anticorruption. Nevertheless, a hybrid of human effort and the transformative power of AI has the potential to enable compliance officers, governments, investigators, and others to unearth the truth hiding amid thickets of data, and in so doing empower both public and private sector actors to fight corruption more effectively.

12 thoughts on “The New Frontier: Using Artificial Intelligence To Help Fight Corruption”

  1. Very interesting, Helen! To your second point (and perhaps even third point), there is now software available to detect suspected corruption and money laundering at the level of the individual business or bank account, not just at the higher plane of geography or business sector. See the following report by the Economist, which discusses how such software is already in use by some European police departments.

    As the AI software becomes more manageable, cheaper, and accessible, we may increasingly see anti-corruption capabilities imported from the compliance sector to the enforcement sector.

  2. Helen, thanks for the great post. Regarding your first point, I wonder how an AI-generated individualized corporate compliance program would interact with potential corporate liability for corrupt acts. I know, for example, that the UK’s Bribery Act contains a safe harbor for those corporations that have implemented an effective compliance program. Would adopting an AI-generated program be sufficient to satisfy the safe harbor requirements? Would utilizing some form of AI eventually become necessary to meet those requirements? Fortunately, we may get answers to these questions despite the lack of litigation under statutes like the UK Bribery Act and the FCPA, because prosecutors can take AI-generated compliance programs into account at the charging stage of an investigation. The value of these programs in preventing corporate liability, then, is likely to rest in the hands of prosecutors, who will decide just how sufficient or necessary an AI-generated compliance program is. I wonder whether that is the right place for these decisions to be made.

    • Good point, Jason. And I wonder if at some point we may get to a place where companies of a certain size will be de facto guilty under, for example, a charge of “failure to prevent bribery” under UKBA Section 7 if they do not have such software but bribery occurs.

    • Thanks for raising those interesting points, Jason and Kees. Indeed, using an AI-generated compliance system may change the SFO’s conclusion as to whether a company’s compliance system satisfies the “adequate procedures” defence under UKBA Section 7(2). It may also change a company’s calculus in deciding whether or not to self-report suspected wrongdoing.
      However, it should be noted that a company’s geographical reach and size of operations also weigh into the SFO’s decision whether to prosecute and a jury’s verdict on whether to find the company guilty.
      I agree with Jason that the value of these AI programs in preventing corporate liability is in the hands of prosecutors, and with Kees that each case will turn on its own facts regarding the adequacy of a company’s compliance procedure.

      • Interesting discussion! Jason – I feel that in cases where a company wanted to use an AI-generated compliance program as a defense, it would need to provide evidence of why it was convinced that such a program actually works in that particular company (continuous risk analyses showing that the AI-generated compliance program covers all existing and emerging risks, tools in place to implement it, etc.). In other words, I think the AI-generated compliance program would be subject to the same scrutiny as any other compliance program at the end of the day. I find it hard to imagine that the mere fact that the company had an AI-generated program would ever be a defense in itself…

  3. This is a topic that will only become more prominent in years to come, and this article contributes by spreading the word. However, I believe it carries two important problems that undervalue its own efforts and those of researchers and practitioners.

    The first is the statement that some kind of corruption-prediction artificial intelligence system has been created. That is nothing but scientific sensationalism. A Self-Organizing Map (SOM) is a tool suited to exploring data sets and visualizing clusters; but at its core it simply provides another (albeit powerful) method for conducting causal analysis of multivariate, nonlinear data, something scholars were doing long before the current craze of labeling nearly any form of complex-systems analysis “artificial intelligence.” To suggest that “artificial intelligence” has been recruited to fight corruption completely distorts the current level of scientific progress in the field, and creates unreasonable expectations among the general audience.

    Second, it’s highly misleading to say that the study produced by the Spanish researchers can “predict” corruption. It can’t any more than regular regression models are able to, even if the authors would like to think otherwise. In order to test the predictive ability of a model, you need different training and test data sets, or to apply a resampling method. The paper doesn’t suggest either of those routes was taken. What do the authors mean by prediction, then? They mean they’ve identified economic factors that explain the inter-province variability in the frequency of corruption cases. Few other researchers would venture to call this a “predictive” system, even if the term has all but become the rule nowadays among those working with artificial neural networks.

    • Without going into the technical nitty-gritty of AI, which I cannot yet fully comprehend, I think the major advance in the Spanish research is that the collection and analysis of data were conducted entirely with neural networks, which is unprecedented, and which, according to the scientists, “show the most predictive factors of corruption.” This could potentially strengthen, and make more effective, the collective efforts we are now making to fight corruption.
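The holdout evaluation raised in this thread – fitting a model on one subset of the data and measuring error only on data it never saw – can be sketched as follows. The data here is synthetic, standing in for province-level figures, and the “model” is a deliberately trivial one-parameter fit.

```python
# Minimal holdout evaluation: fit on a training split, then measure
# error only on a held-out test split the model never saw. The data
# is synthetic: the outcome y follows 2*x plus uniform noise.

import random

random.seed(0)
data = [(x, 2.0 * x + random.uniform(-1.0, 1.0)) for x in range(1, 41)]
random.shuffle(data)
train, test = data[:30], data[30:]

def fit_slope(pairs):
    """Least-squares slope for a no-intercept model y = b*x."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

def mean_squared_error(pairs, b):
    return sum((y - b * x) ** 2 for x, y in pairs) / len(pairs)

b = fit_slope(train)
# Out-of-sample error is the honest measure of predictive ability;
# in-sample error alone can flatter the model.
print(round(b, 3), round(mean_squared_error(test, b), 3))
```

A model that only fits the data it was trained on has demonstrated explanation, not prediction; only the held-out error speaks to predictive ability.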

  4. Super interesting post. The power of AI to enhance anticorruption capacity – even if hard to comprehend technically speaking – is an exciting prospect. That said, the notion that AI will shape how we write regulations by highlighting loopholes in proposed drafts feels more uncertain. I’m not sure how receptive policymakers, often keen to accommodate special interests, would ever be to information that evaluates their proposals via an opaque algorithm.

    • Thanks, Hilary, for pointing out this caveat.
      It is very true that transparency is a major obstacle to the adoption of AI systems in many places. Many stakeholders, including governments and the general public, cannot make sense of AI at the algorithmic level or unpack the results of machine learning. They find it hard to trust – and are even troubled by the mere idea of trusting – such a “black box” with important financial and personal information. Even the impressive performance and positive results generated by AI systems are not enough to overcome such fear and distrust. It seems it will take a relatively long time before AI technology is fully integrated into our current compliance and oversight systems.

  5. Hi Helen,
    This was really interesting! Do you see any concerns for large multinational corporations who might want to use this software, particularly with the GDPR and individual data privacy laws? I’m wondering if increasing data privacy will limit or hinder access to the data necessary to make those risk assessments.
