Why The EU Wants To Regulate Artificial Intelligence Through A ‘Risk-Based’ Approach

By Aarathi Ganesan, Medianama

The European Commission proposed the Artificial Intelligence Act (AI Act) last April, after over two years of public consultations. The Act ‘lays down a uniform legal framework [across the EU] for the development, marketing and use of artificial intelligence in conformity with Union values.’ These ‘values’ include ‘democracy’, ‘freedom’, and ‘equality’.

The Act applies a ‘risk-based’ regulatory approach to all providers of AI systems in the EU, ‘irrespective of whether they are established within the Union or in a third country.’ It prohibits certain kinds of AI, places higher regulatory scrutiny on ‘High Risk AI’, and limits the use of certain kinds of surveillance technologies, among other objectives. To implement the regulations, the Act establishes a Union-level ‘European Artificial Intelligence Board’, while individual Member States are to designate ‘one or more national competent authorities’ to enforce the Act.

The Act was introduced amid growing recognition of the usefulness of AI in the EU—for example, investing in AI and promoting its use can provide businesses with ‘competitive advantages’ that support ‘socially and environmentally beneficial outcomes’. However, it also appears cognizant of the many risks associated with AI, which can harm protected fundamental rights as well as the public interest. The Act states that it attempts to strike a “proportionate” balance between supporting AI innovation and economic and technological growth on the one hand, and protecting the rights and interests of EU citizens on the other. Ultimately, the legislation aims to establish a ‘legal framework for trustworthy AI’ in Europe that helps instil consumer confidence in the technology.
