The Federal Trade Commission has started an inquiry into several social media and artificial intelligence companies, including OpenAI and Meta, about the potential harms to children and teenagers who use their chatbots as companions.
On Thursday, the FTC said it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.
The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.
The inquiry comes after OpenAI said it plans to change ChatGPT's safeguards for vulnerable users, including adding extra protections for those under 18. That announcement followed a lawsuit from the parents of a teenage boy who died by suicide in April, who allege the artificial intelligence chatbot led their son to take his own life.
More children are now using AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making. That's despite research showing chatbots can give kids dangerous advice on topics such as drugs, alcohol and eating disorders.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” said FTC Chairman Andrew N. Ferguson in a statement.
He added, “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
In a statement to CBS News, Character.AI said it is looking forward to “collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
Meta declined to comment on the FTC inquiry. The company has been working to make sure its AI chatbots are safe and age-appropriate for children, a spokesperson said.