The Federal Trade Commission (FTC) is keeping an eye on the artificial intelligence (AI) industry.
The regulator aims to ensure that the field isn’t dominated by existing Big Tech companies and that companies don’t overstate what their AI products can do, FTC Chair Lina Khan said during an event Monday (March 27), according to Bloomberg.
Because machine learning requires huge amounts of data and storage, that demand could cause “big companies to become bigger,” Khan said during an event hosted by the Justice Department, per the report.
The FTC has already begun an investigation into competition and data security in the cloud computing industry, the report said.
As for claims about the capabilities of AI-powered products, Khan said the FTC has already issued guidance covering this area, according to the report.
“Developers of these tools can potentially be liable if technologies they are creating are effectively designed to deceive,” Khan said, per the report.
The news comes the same day it was reported that the three largest cloud computing providers — Amazon, Microsoft and Alphabet’s Google — see generative AI as a business driver.
The companies have been putting this technology front and center in their sales pitches since OpenAI’s ChatGPT became a sensation, The Wall Street Journal (WSJ) reported Monday.
Microsoft has been promoting the efficiencies companies can gain by using AI through the cloud. Google has opened access to one of its previously in-house-only AI programs to software developers using its cloud services. Amazon has been telling customers that it provides access to a variety of AI models, according to the report.
In June 2022, the FTC expressed another concern about the expanding use of AI.
The agency issued a report to Congress warning about using AI to combat online harm and urging policymakers to exercise “great caution” when relying on AI as a policy tool.
“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said at the time.