The topic of regulating artificial intelligence has gained momentum in the past few years, most recently with the European Union's AI Act, released last year. At the heart of these discussions are the opacity of machine learning models, the risk of bias in AI systems, and questions of agency and keeping humans in the loop. Principles for ethical and responsible AI have proliferated, including sector-specific approaches and guidance, but there is also growing demand from stakeholder groups, especially civil society, to ensure these principles are adopted and implemented. While the AI governance landscape continues to evolve, businesses will have to prepare for emerging regulation, which includes elements such as certifications and conformity assessments for high-risk use cases (e.g. automated hiring). Governments, the private sector and civil society will have to work together on agile, multistakeholder approaches to governing AI that balance innovation and regulation.

By Jayant Narayan[1]


Consider these artificial intelligence and machine learning use-cases: an application trained on historical consumer data that can assess whether a loan should be disbursed to an individual, or one that can detect financial fraud. Or consider leveraging energy distribution and consumption data to better forecast energy demand. These and several other examples aren't use-cases on the horizon; they are current and

...