
IFPMA Issues Principles Encouraging Ethical Use Of AI In Healthcare

July 21, 2022

By: Coran Darling (Technology’s Legal Edge)

The use of AI in healthcare continues to establish itself as a practice that benefits patients and healthcare providers alike. No more than a cursory search of the subject turns up a number of existing examples of AI intervention in healthcare, such as cancer detection. In some cases, such intervention exceeds the capabilities of even the most experienced practitioners.

However, each superhuman capability has its kryptonite, and AI is no exception. Owing to limited datasets, biased training data, or ineffective algorithms, AI has the potential to miss signs that would otherwise have been caught or queried further. Late last year, for example, a study found that AI in its current state is less effective at identifying skin cancer in persons of colour. Creators of AI and practitioners must therefore proceed with caution and ensure that an over-reliance on the technology does not emerge.

In an effort to address this, a number of initiatives have emerged around the world. In the EU, the proposed EU AI Regulation is set to enact a number of provisions aimed at protecting individuals who come into contact with AI. Failure to adhere to the standards and protections set out in the Regulation could result in substantial fines and, in some cases, criminal sanctions. In the UK, the NHS and the UK Government recently commenced a pilot implementing algorithmic impact assessments as a method of keeping AI applied to healthcare and treatment in check.

In a recent initiative, the International Federation of Pharmaceutical Manufacturers and Associations (“IFPMA”) have released their own contribution to shaping the use of AI in healthcare in the form of principles (“Principles”), focusing on the ethical use of AI in its general application.

Principles on Ethical Guidance

The IFPMA highlight that the Principles aim to provide those within healthcare with a set of guardrails that they should consider, adapt, and, where possible, operationalise in their organisations. The Principles have been designed to align with the data ethics principles released by the IFPMA in May 2021 and to work in harmony with those existing principles to establish a safe and ethical environment for data use (particularly in the case of algorithmic decision making)…
