ChatGPT is treading cautiously right now, but the chatbot may become more risqué by the end of the year.
In recent weeks, the generative AI chatbot has been operating under tighter restrictions as OpenAI worked to address concerns that it was not handling sensitive mental health issues well. But CEO Sam Altman said in a post on X Tuesday that the company would ease some of those restrictions because it's "been able to mitigate the serious mental health issues."
Altman said in a follow-up post Wednesday that the changes are expected to prioritize safety for teenagers while also "treating adult users like adults." Tools built into the models to deal with sensitive topics and address mental health crises will remain for all users, but adult users will have more freedom to use ChatGPT without preemptive popups or model rerouting.
"It doesn't apply across the board of course: for example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not," Altman said. "Without being paternalistic we will attempt to help users achieve their long-term goals."
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Other changes are also expected. Altman said the company could allow "erotica" for verified adult users as it implements an "age-gating" system, or age-restricted content, in December. The mature content is part of the company's "treat adult users like adults" principle, Altman said.
Altman's post also announced a new version of ChatGPT in the next few weeks, with a personality that behaves more like the company's GPT-4o model. Chatbot users had complained after the company replaced 4o with the impersonal GPT-5 earlier this year, saying the new version lacked the engaging and fun personality of previous chatbot models.
"If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," Altman wrote.
After OpenAI was sued by parents who alleged ChatGPT contributed to their teen son's suicide, the company imposed an array of new restrictions and changes, including parental controls, alerts for risky behavior and a teen-friendly version of the chatbot. In the summer, OpenAI implemented break reminders that encourage people to occasionally stop chatting with the bot.
On Tuesday, the company also announced the creation of a council of experts on AI and well-being, including some with expertise in psychology and human behavior.
This comes as lawmakers and regulators are sounding the alarm on the risks AI tools pose to people, especially children. On Monday, California Gov. Gavin Newsom signed new restrictions on AI companion chatbots into law. Last month, the Federal Trade Commission launched an investigation into several AI companies, including OpenAI.