Artificial intelligence using machine learning ("AI/ML") is already providing countless benefits to society, but it also presents risks and concerns that require governance. Yet the rapid pace of AI/ML development, the diverse applications and industries in which it is being implemented, and the complexity of the technology itself all challenge effective governance. At the international level, no binding treaties or conventions are likely anytime soon, but organizations such as the OECD and UNESCO have developed non-binding recommendations that can help guide AI/ML governance by governments and industry. Other major AI powers, such as China and the European Union, are putting in place legislative frameworks for AI with uncertain impacts and effectiveness, whereas the U.S. Congress has enacted no substantive controls on AI/ML to date. Rather, various federal agencies have begun producing guidance documents and recommendations, primarily focused on discouraging algorithmic applications with biased or discriminatory impacts. Some state and local governments are also beginning to adopt restrictions on problematic AI/ML applications and uses. At this time, most governance of AI/ML consists of a variety of "soft law" programs. Given the central role of these programs in AI/ML governance, it is important to make them more effective and credible.

By Gary E. Marchant[1]


Artificial intelligence (“AI”) has surged in its applica

