A UAE official said artificial intelligence (AI) needs the same level of oversight as weapons-grade uranium.
UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications Omar Al Olama said there is a need for a global coalition to oversee AI, The National reported Friday (May 19).
“Even if we were the most progressive, most proactive country on Earth and put in place the best guardrails and safeguards, if [AI] goes off on the wrong tangent in China, or the U.S., or the U.K. — or anywhere else — because of our interconnectedness, it is going to harm our people,” Al Olama said during The National’s Connectivity Forum.
Al Olama said the world community needs the same sort of mechanisms for AI that it already uses to detect whether a country is enriching uranium, even when that country does not disclose it, according to the report.
“We need to have the same level of rigor, the same level of oversight on AI,” Al Olama said.
Other government officials and industry leaders have likewise called for regulation of AI.
OpenAI CEO Sam Altman told a U.S. Senate subcommittee Tuesday (May 16) that the technology needs oversight to prevent possible harm.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”
About two weeks earlier, on May 4, the White House underscored the importance of ensuring AI products are safe and secure as it announced new initiatives around the technology and as officials met with CEOs of leading companies in the field.
Vice President Kamala Harris said in a statement released after the meeting: “As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products.”
In March, Elon Musk, owner of Tesla, Twitter and SpaceX, was among the first signatories of an open letter on the potential dangers of AI published by the AI watchdog group Future of Life Institute.