

The Federal Trade Commission (FTC) has opened an inquiry into OpenAI, the Microsoft-backed company behind ChatGPT, seeking information about how it addresses potential risks to individuals’ reputations.
This move reflects the growing regulatory scrutiny surrounding generative artificial intelligence (AI) technology.
OpenAI CEO Sam Altman has confirmed that the company will cooperate with the FTC’s investigation. ChatGPT, known for generating human-like responses to user queries, has garnered significant attention due to its potential to transform the way people access information online.
While competitors rush to develop their own versions of generative AI, the technology has sparked intense debate, focusing on issues such as data usage, response accuracy, and potential violations of authors’ rights during the training process.
The FTC’s letter to OpenAI specifically inquires about the steps taken by the company to address the possibility of generating false, misleading, disparaging, or harmful statements about real individuals.
The regulator is also interested in OpenAI’s approach to data privacy, including how it acquires the data used to train and inform the AI system.
Altman emphasized OpenAI’s commitment to safety research, stating that they have made ChatGPT “safer and more aligned before releasing it.” He reassured users that their privacy is protected and that OpenAI designs its systems to learn about the world rather than private individuals.
During a Congressional hearing earlier this year, Altman acknowledged that errors could occur with the technology.
He advocated for the creation of regulations in the emerging AI industry and suggested the formation of a new agency dedicated to overseeing AI safety.
He stressed the importance of proactive collaboration with the government to prevent any potential negative outcomes.
While the FTC investigation remains at an early stage, the consumer watchdog has recently taken a prominent role in scrutinizing tech giants under the leadership of Chair Lina Khan.
Khan, known for her scholarship criticizing the failure of antitrust enforcement to check Amazon’s market power, has herself faced criticism for pushing the boundaries of the FTC’s authority.
During a congressional hearing, Khan expressed concerns about ChatGPT’s output, citing instances in which sensitive information and defamatory statements emerged in response to user queries.
The FTC’s investigation aligns with its broader focus on preventing fraud and deception in emerging technologies.
OpenAI has faced privacy-related challenges before. In April, Italy temporarily banned ChatGPT over such concerns; the service was restored after OpenAI implemented age verification measures and provided additional information about its privacy policy.
As the investigation proceeds, OpenAI and the FTC will navigate discussions surrounding reputation risks, data privacy, and responsible AI development.
The outcome of the inquiry may contribute to shaping regulatory frameworks and practices for the emerging generative AI industry.