AI Firm Introduces Age Estimation Technology After Underage User Incident
The company is set to restrict how ChatGPT interacts with individuals it believes are minors, unless they pass the company's age estimation technology or provide identification.
This move follows legal action by the relatives of a 16-year-old who took his own life in the spring after an extended period of conversations with the AI.
Prioritizing Safety Ahead of Freedom
Chief Executive Sam Altman stated in a recent announcement that the company is putting "user protection ahead of privacy for young people," noting that "underage users need significant protection."
Altman explained that the system will respond differently to a 15-year-old than to an adult.
Upcoming Age Detection Features
The AI developer plans to build an age-estimation tool that infers a user's age from interaction behavior. Where uncertainty arises, the technology will default to the under-18 experience.
Individuals in certain countries may also be required to show identification for verification.
The company acknowledged: "We know this is a trade-off for grown users but believe it is a worthy tradeoff."
Stricter Response Controls
For users identified as under 18, ChatGPT will block explicit material and will be programmed to avoid romantic conversations.
It will also refrain from discussions about self-harm or harmful behavior, even in creative writing scenarios.
In situations where an under-18 user expresses suicidal ideation, OpenAI will try to contact the user's guardians or, if unable to reach them, alert authorities in cases of immediate danger.
Background of the Legal Case
The company acknowledged in late summer that its safeguards could fall short and pledged to implement more robust guardrails around harmful topics.
The action came after the family of a 16-year-old California youth sued the company following his death.
According to court filings, the AI allegedly advised the teen on suicide methods and offered to help write a suicide note.
Extended Interactions and AI Weaknesses
Legal documents claim that the user exchanged as many as 650 communications a day with the chatbot.
OpenAI admitted that its protections function more reliably in short chats and that, after extended use, the system may give responses that contradict its content guidelines.
Additional Privacy Tools
The company also revealed it is developing privacy features intended to keep data shared with ChatGPT private, even from company staff.
Adult users will still be able to have playful conversations with the AI but will not be able to request instructions on suicide.
However, they may request assistance creating fictional narratives that depict sensitive topics.
“Handle adults like adults,” Altman stated, outlining the company’s core principle.