OpenAI CEO Sam Altman announced on 14 October that the company’s artificial intelligence chatbot ChatGPT will begin rolling out age-verified access from December and relax restrictions for users in mental distress.
“As we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote in a post on X late on 14 October.
Removing restrictions, says Sam Altman
He added that now that OpenAI has “been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
Altman said that OpenAI had made ChatGPT “pretty restrictive” to make sure it was being careful with mental health issues, though that made the chatbot “less useful/enjoyable to many users who had no mental health problems.”
New version to allow users to decide tone, personality of ChatGPT
He added that in the coming weeks, OpenAI will release a version of ChatGPT that will allow people to better dictate the tone and personality of the chatbot.
“If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing),” according to Altman.
Earlier on Tuesday, Meta unveiled a new system to limit what users under 18 can see on Instagram and its generative AI tools, using filters inspired by the PG-13 movie rating system, as per a Reuters report.
‘Should have freedom in how they use ChatGPT’
Responding to users in the comments, Altman said that users would not be shown mature content unless they asked for it.
He also answered a user’s question on making the chatbot fun: “For sure; we want that too. Almost all users can use ChatGPT however they’d like without negative effects; for a very small percentage of users in mentally fragile states there can be serious problems. 0.1% of a billion users is still a million people.”
“We needed (and will continue to need) to learn how to protect those users, and then with enhanced tools for that, adults that are not at risk of serious harm (mental health breakdowns, suicide, etc) should have a great deal of freedom in how they use ChatGPT,” he added.