By: Gabriela Ramos & Mariana Mazzucato (Project Syndicate)
LONDON – The tech world has generated a fresh abundance of front-page news in 2022. In October, Elon Musk bought Twitter – one of the main public communication platforms used by journalists, academics, businesses, and policymakers – and proceeded to fire most of its content-moderation staff, indicating that the company would rely instead on artificial intelligence.
Then, in November, a group of Meta employees revealed that they had devised an AI program capable of beating most humans in the strategy game Diplomacy. In Shenzhen, China, officials are using “digital twins” of thousands of 5G-connected mobile devices to monitor and manage flows of people, traffic, and energy consumption in real time. And with the latest iteration of ChatGPT’s language-prediction model, many are declaring the end of the college essay.
In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations improve the world, yet many technologies are currently being deployed in a regulatory vacuum. We need inclusive, mission-oriented governance structures centered on a true common good. Capable governments can shape this technological revolution to serve the public interest.
Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimize renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.
But with no effective rules in place, AI is just as likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may discriminate against families that are in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.
Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us…