The US competition watchdog has launched an investigation into the creator of artificial intelligence (AI) chatbot ChatGPT.

The Federal Trade Commission (FTC) is examining the ChatGPT maker, OpenAI, to establish what the company's data privacy rules are and what action it takes to stop its technology from giving false information.

It will look at whether users have been harmed by ChatGPT responding to their questions with false answers.

The regulator has written to OpenAI seeking detailed information on its business, including its privacy rules, data security practices, processes and AI technology.

An FTC spokesperson declined to comment on the story, which was first reported by the Washington Post.

According to the FTC letter, published by the Washington Post, OpenAI is being investigated over whether it has “engaged in unfair or deceptive privacy or data security practices” or practices that harm users.

The company's co-founder and chief executive, Sam Altman, said he would work with the investigating officials but expressed disappointment that the case had been opened and that he had found out about it via a leak to the Washington Post.

In a tweet, Mr Altman said the move would “not help build trust,” but added that the company would work with the FTC.

“It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law,” he said.

“We protect user privacy and design our systems to learn about the world, not private individuals.”

The FTC probe is not the only legal challenge facing OpenAI.

Comedian Sarah Silverman and two other authors have taken legal action against the company, as well as against Facebook owner Meta, claiming copyright infringement.

They say the companies' AI systems were “trained” on datasets containing unauthorised copies of their works.