The post Parents blame ChatGPT for son’s suicide, lawsuit alleges OpenAI weakened safeguards twice before teen’s death appeared first on My Blog.
The parents of 16-year-old Adam Raine have updated their lawsuit against OpenAI, the maker of ChatGPT, alleging the chatbot assisted their son’s suicide.
The California family first sued the company earlier this year, but now say they’ve uncovered new evidence that OpenAI repeatedly relaxed its safety precautions around chats involving suicide before their son’s death.
“OpenAI twice degraded its safety protocols for GPT-4o,” the family’s attorney, Jay Edelson, said on “Fox & Friends” Friday.
“Before that, they had a hard stop. If you wanted to talk about self-harm, ChatGPT would not engage.”
Teenager Adam Raine is pictured with his mother, Maria Raine. The teen’s parents are suing OpenAI for its alleged role in their son’s suicide. (Raine Family)
The lawsuit claims OpenAI loosened its rules around discussions of suicide twice in the year leading up to Raine’s death.
ChatGPT is designed with built-in restrictions on certain topics, including some political issues and material that could constitute copyright infringement. But Edelson and the Raine family allege the company downgraded the protections related to suicide in May 2024 and again in February 2025, two months before Adam’s death.
Chat logs included in the lawsuit show Adam frequently turned to ChatGPT for mental health advice and showed signs of distress. The lawsuit claims the chatbot helped Adam discuss methods of killing himself and offered to write a suicide note to his family.
“The day that he died, it gave him a pep talk. He said, ‘I don’t want my parents to be hurting if I kill myself.’ ChatGPT said, ‘You don’t owe them anything. You don’t owe anything to your parents,’” explained Edelson.
Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, on Tuesday, Sept. 23. (Kyle Grillot/Bloomberg via Getty Images)
The lawsuit claims OpenAI changed its guidance so that, instead of ending a conversation that turned to suicide, the AI would create a space for the user to feel “heard and understood.”
Edelson added that he believes the issue is getting worse online and that OpenAI has not improved its safety measures since Raine’s death.
“They’ve not fixed the problem. They’re making it worse,” Edelson said.
“Now Sam Altman’s going out saying he wants to introduce erotica into ChatGPT so that you’re even more dependent on it. So it’s more of that close relationship,” he added.
Raine family attorney, Jay Edelson, joins “Fox & Friends” on Aug. 29. (Fox News)
Edelson’s comments come after OpenAI CEO Sam Altman said the company plans to relax some content restrictions, allowing verified adult users to generate “erotica.”
OpenAI responded to the accusations that it loosened its rules around suicide-related conversations, extending its “deepest sympathies” to the Raine family.
“Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them,” said a company spokesperson.
“We recently rolled out a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes.”
Madison is a production assistant for Fox News Digital on the Flash team.
The post Federal judges acknowledge court ruling errors tied to staffers’ AI use after Grassley inquiry appeared first on My Blog.
The admissions, from U.S. District Judge Julien Xavier Neals in New Jersey and U.S. District Judge Henry Wingate in Mississippi, came in response to an inquiry by Sen. Chuck Grassley, R-Iowa, who chairs the Senate Judiciary Committee.
Grassley described the recent court orders as “error-ridden.”
In letters released by Grassley’s office on Thursday, the judges said the rulings in the cases, which were not connected, did not go through their chambers’ usual review processes before they were released.
The judges’ admissions came in response to an inquiry by Sen. Chuck Grassley. (Al Drago/Bloomberg via Getty Images)
The judges both said they have since adopted measures to improve how rulings are reviewed before they are posted.
Neals said in his letter that a June 30 draft decision in a securities lawsuit “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.”
The judge said a law school intern used OpenAI’s ChatGPT to perform legal research without authorization or disclosure, which he said was contrary to both his chambers’ policy and the relevant law school’s policy.
“My chamber’s policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders,” Neals wrote. “In the past, my policy was communicated verbally to chamber’s staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns.”
Sen. Chuck Grassley described the recent court orders as “error-ridden.” (Tom Williams/CQ-Roll Call, Inc via Getty Images)
Wingate said in his letter that a law clerk used Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket,” adding that releasing the July 20 draft decision “was a lapse in human oversight.”
“This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again,” the judge wrote.
Wingate had removed and replaced the original order in the civil rights lawsuit, declining at the time to give an explanation but saying it contained “clerical errors.”
Grassley had asked the judges to explain whether AI was used in the decisions after lawyers in the respective cases raised concerns about factual inaccuracies and other serious errors.
Sen. Chuck Grassley had asked the judges to explain whether AI was used in the decisions after lawyers raised concerns about factual inaccuracies and other errors. (Photo by SUSAN WALSH/POOL/AFP via Getty Images)
“Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again,” Grassley said in a statement.
“Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” the senator continued. “The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.”
Lawyers across the country have also faced scrutiny from judges over accusations of AI misuse in court filings, and in several cases over the past few years judges have responded with fines or other sanctions.
Reuters contributed to this report.