As artificial intelligence permeates an increasing number of businesses, ethical issues such as algorithmic bias, data privacy, and transparency have gained growing attention, prompting renewed calls for policy and regulatory changes to address the potential consequences of AI systems and products. In this article, we build on original research to outline distinct approaches to AI governance and regulation and discuss the implications for firms and their managers in terms of adopting AI and ethical practices going forward. We examine how managers' perception of AI ethics increases with the prospect of AI-related regulation, but at the cost of AI diffusion. Such trade-offs are likely to be associated with industry-specific characteristics, which holds implications for how new and intended AI regulations could affect different industries differently. Overall, we recommend that businesses embrace new managerial standards and practices that detail AI liability under varying circumstances, even before such liability is prescribed by regulation. Stronger internal audits, as well as third-party examinations, would provide more information for managers, reduce managerial uncertainty, and aid the development of AI products and services that are subject to higher ethical, legal, and policy standards.

By Benjamin Cedric Larsen & Yong Suk Lee[1]

 

I. INTRODUCTION

Artificial intelligence (“AI”) application has expanded rapidly in the last decade
