The EU has invested considerable effort in creating a human-centric legislative framework for artificial intelligence as part of its economy’s digital and green transitions. This piece aims to shed light on the main features and evolution of the proposal for the EU AI Act, and to critically assess some shortcomings that still need to be addressed. It also concentrates on the new regulatory mechanisms adopted by the proposed regulation in response to the dynamic nature of technologies and their effect on society. Focusing on regulatory sandboxes and standardization, the column explores these tools in the context of the AI Act and critically evaluates their pros and cons for the ultimate purpose of balancing innovation and regulation in a manner that fully and effectively protects EU fundamental rights and the public interest.

By Katerina Yordanova[1]

 

I. BRIEF DESCRIPTION OF THE AI ACT AND ITS EVOLUTION

The EU’s ambition to regulate artificial intelligence (“AI”) systems has been clearly demonstrated in recent years. The first significant action in that direction was the establishment of the High-Level Expert Group on AI (“HLEG”) in 2018, which paved the way for the President of the European Commission, Ursula von der Leyen, to declare the planned adoption of an AI legal instrument a top priority in her policy agenda.[2] In February 2020, the Commission published a White Paper on AI, presenting different policy options.
