
Explainable AI in M&A: Legal Incentives and Technical Challenges

June 1, 2020

By Phillippe Hacker (Oxford Business Law Blog)

Advanced machine learning (ML) techniques, such as deep neural networks or random forests, are often said to be powerful but opaque. However, a burgeoning field of computer science is committed to developing ML tools that are interpretable ex ante or at least explainable ex post. This has implications not only for technological progress but also for the law, as we explain in a recent open-access article.

On the legal side, algorithmic explainability has so far been discussed mainly in data protection law, where a lively debate has erupted over whether the European Union’s General Data Protection Regulation (GDPR) provides for a ‘right to an explanation’. While the obligations flowing from the GDPR in this respect remain quite uncertain, we show that more concrete incentives to adopt explainable ML tools may arise from contract and tort law.

To this end, we conduct two legal case studies, in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability, and demonstrate this effect in a technical case study.
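To make the accuracy–explainability trade-off concrete, the following is a minimal sketch (not the article’s actual case study) that contrasts an interpretable model with a more opaque ensemble on synthetic data; the dataset, models, and metric are illustrative assumptions only.

```python
# Illustrative sketch of the accuracy-vs-explainability trade-off:
# a logistic regression (readable coefficients) vs. a random forest
# (hundreds of trees, no single human-readable rule). All data here
# is synthetic and hypothetical.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary-classification task standing in for, e.g., outcome data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Interpretable ex ante: the fitted coefficients can be inspected directly.
interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Opaque, but often more accurate: requires ex post explanation tools.
opaque = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", accuracy_score(y_test, interpretable.predict(X_test)))
print("random forest accuracy:      ", accuracy_score(y_test, opaque.predict(X_test)))
```

Whether the opaque model actually outperforms the interpretable one depends on the data; the point of the sketch is simply that when it does, choosing the interpretable model has a measurable cost in accuracy.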
