
Regulating ChatGPT and Other Large Generative AI Models

March 6, 2023

By: Philipp Hacker, Andreas Engel & Marco Mauer (Oxford Business Law Blog)

The world’s attention has been captured by Large Generative AI Models (LGAIMs), the latest iteration of the much-anticipated and often misunderstood ‘Artificial Intelligence’ technology. These models are transforming the way we create, visualize, and interpret content, reshaping our work and personal lives in unprecedented ways. As the technology matures, every sector of society will be affected: business, medicine, academia, the art world, and the tech industry itself. Along with the enormous potential for good, however, come significant risks. LGAIMs are already used by millions of people to create human-like text, images, audio, and even video (ChatGPT, Stable Diffusion, DALL·E 2, Synthesia, and others). These tools may soon become part of systems used to screen and evaluate everything from job candidates to hospital and school admissions. The savings in time and labor could be substantial, allowing professionals to focus on more pressing matters; these engines may thus contribute to a better and more effective use of resources. Still, errors in this domain are costly, and the risks cannot be ignored. The potential for these systems to be misused for manipulation (fake news and malicious disinformation being prime examples) represents a whole new level of danger. How to regulate LGAIMs (or whether to leave them be) has therefore become an urgent question, one whose answer will have wide-ranging and long-lasting consequences.

In this paper, the authors argue that AI regulation, particularly the rules under consideration in the EU, is not ready for the rise of this new generation of AI tools. The EU has been at the vanguard of regulating new technologies, drafting sophisticated legal instruments that target AI systems directly (the AI Act, the AI Liability Directive) and the platforms that deploy them (the Digital Services Act, the Digital Markets Act). Yet LGAIMs require, and deserve, dedicated attention and tailored solutions from legislators. So far, regulation in the EU and beyond has mainly addressed conventional AI models, with all their limitations, and has not yet grappled with the new generation of tools that has emerged in recent months.

Considering this situation, the authors critically examine the EU AI Act, which aims to address the risks posed by AI but, because of the versatility of LGAIMs, fails to adequately capture their dangers and downsides. Requiring providers to address every conceivable risk through the comprehensive risk management system proposed in Article 9 of the AI Act may prove unnecessary, costly, and burdensome. The authors instead envision an alternative approach to regulating the risks of LGAIMs, one that focuses on concrete applications rather than on the base model…

CONTINUE READING…