Computer Scientists Can’t Treat Social and Ethical Impacts as an Afterthought

July 25, 2022

By: Edmund L. Andrews (Stanford HAI Blog)

When people sound alarms about ethical and social pitfalls in computing, especially artificial intelligence, they are often reacting to systems that are already in use. How should a social media platform handle algorithms that amplify hate speech and misinformation? Do systems that evaluate creditworthiness or job applications have hidden racial or gender biases? Does facial recognition jeopardize privacy?

But a new report from an advisory committee to the National Academies of Sciences, whose members include John Hennessy, the former president of Stanford and an advisor to Stanford HAI, argues that computing researchers and the institutions that fund them need to anticipate social and ethical risks long before they have a product.

If they don’t, the report warns, it may be too late.

“It is much easier to design a technology correctly from the start than it is to fix it later,” the report states. “Failure to consider the consequences early in research increases the risk of adverse societal or ethical impacts.”

That may sound obvious, but the authors — including luminaries in computer science, social science, and philosophy — say it requires a broad rethink by the institutions that fund and carry out research: universities, corporations, professional societies, and the government.

In part, that means reaching out early to stakeholders as well as to experts in social sciences, ethics, and moral reasoning. It also means thinking early and hard about the unexpected ways that a new technology might be used or misused…