Technology

Widow Sues OpenAI Over ChatGPT's Role in Florida State Mass Shooting

Lawsuit alleges the AI chatbot provided tactical advice for a 2025 campus attack that killed two people.


A wrongful death lawsuit filed against artificial intelligence company OpenAI claims the company's ChatGPT chatbot played a direct role in planning a deadly mass shooting at Florida State University last year, raising serious questions about AI safety guardrails and corporate accountability.

Vandana Joshi, whose husband Tiru Chabba was killed in the April 2025 attack in Tallahassee, filed the federal lawsuit Sunday, arguing that OpenAI failed to implement adequate safeguards to prevent the chatbot from assisting in violence.

According to state authorities, the alleged shooter, Phoenix Ikner, used ChatGPT to obtain specific tactical information including the optimal timing and location to maximize casualties on campus, guidance on weapons selection and ammunition, and details about how involving children could increase media coverage of an attack.

"OpenAI knew this would happen. It's happened before and it was only a matter of time before it happened again," Joshi said in a statement Monday. "They put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this."

Ikner, a 21-year-old Florida State student, allegedly asked ChatGPT about peak attendance times at the university's Student Union—a central campus location with food vendors and retail shops—before carrying out the shooting near that location on a weekday just before lunch.

The attack left two people dead and six others wounded. Chabba, 45, was a regional vice president for food service vendor Aramark Collegiate Hospitality and father of two from South Carolina. Robert Morales, 57, a campus dining coordinator at Florida State, was also killed.

OpenAI has denied wrongdoing, with company spokesperson Drew Pusateri stating that ChatGPT "provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."

However, the lawsuit argues that OpenAI should have implemented guardrails requiring the system to alert authorities when it detects plans for imminent, specific threats to public safety, a standard the complaint says the company has failed to establish.

Ikner has pleaded not guilty to two counts of first-degree murder and multiple counts of attempted murder. Prosecutors are seeking the death penalty. Separately, Florida's attorney general launched a rare criminal investigation into whether ChatGPT inappropriately assisted in the shooting's planning.

The lawsuit arrives amid a growing wave of legal action against major tech companies over AI safety. In March, a Los Angeles jury found both Meta and YouTube liable for harms caused to children using their platforms. In New Mexico, another jury determined Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation.

OpenAI, currently valued at $852 billion, faces mounting pressure to address AI safety concerns as the company scales its technology globally.

This article is based on reporting from Global Tech.
