The Federal Trade Commission (FTC) has launched an inquiry into AI chatbots acting as companions, issuing orders to seven companies to gather information on their impact on children and teens.
The FTC’s inquiry focuses on how these companies measure, test, and monitor the potential negative impacts of AI chatbots, particularly on younger users. It also seeks to ensure that the technologies are safe for children and comply with the Children’s Online Privacy Protection Act Rule.
The FTC said that the chatbots “may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users” and “can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”
The FTC is conducting the study under its 6(b) authority, which allows it to gather information without a specific law enforcement purpose.
The companies receiving the orders include:
- Alphabet, Inc.
- Character Technologies, Inc.
- Instagram, LLC
- Meta Platforms, Inc.
- OpenAI OpCo, LLC
- Snap, Inc.
- X.AI Corp.
The inquiry seeks to understand how these companies monetize user engagement, process user inputs, and generate outputs. It also examines how they develop and approve chatbot characters, and how they measure and mitigate negative impacts, especially on children.
The FTC is also interested in how these companies inform users and parents about the chatbots’ features, capabilities, and potential risks, as well as in their data collection and handling practices. The Commission is further examining how the companies monitor compliance with their own age restrictions, terms of service, and community guidelines.
© 2025 Cox Media Group