Today’s guest post is from Dieter Zinnbauer of the Copenhagen Business School’s Sustainability Center:
Jim Anderson over at the World Bank blog and Matthew Stephenson on this blog kicked off an interesting discussion about how the new era of artificial intelligence—particularly natural language chatbots like OpenAI’s revolutionary ChatGPT—will affect the anticorruption field. As Matthew suggested, the ability of ChatGPT to generate plausible-sounding (if a bit bland) summaries and speeches on corruption-related topics should inspire all of us real humans to aim for more creative and original—and less bot-like—writing and speaking on anticorruption topics. And both Jim and Matthew suggested that in this field, as in many others, ChatGPT can also be a valuable aid for researchers and advocates, performing in seconds research and drafting work that might take a human being several hours.
Yet while ChatGPT may be able to assist with some tasks, we shouldn’t get too excited about it just yet, especially when it comes to research. Some of its limits as a research tool are already well known and widely discussed. But I wanted to call attention to another problem, based on a couple of recent experiences I had trying to use ChatGPT as a research aid.