
Tumbler Ridge Families' OpenAI Lawsuit Faces Major Legal Hurdles — Here's Why

Legal experts say the unprecedented case raises thorny questions about whether AI companies have a duty to report dangerous users to police.

Families of victims killed in the Tumbler Ridge, B.C., school shooting are suing OpenAI, but legal experts warn the families face treacherous legal terrain that could make their case extremely difficult to win.

The lawsuits allege that the artificial intelligence giant failed to alert law enforcement about the shooter's interactions with ChatGPT — conversations allegedly flagged by safety team members as showing intent to commit violence. According to the complaints, OpenAI leadership deliberately chose not to contact police despite internal warnings.

On February 10, 18-year-old Jesse Van Rootselaar opened fire at a secondary school in Tumbler Ridge, killing five children and an educator; she also killed her mother and half-brother at her home. Numerous others were injured. She died from a self-inflicted injury.

Uncharted Legal Waters

"As with so much in AI, the lawsuit takes us into unchartered territory," said Robin Feldman, director of the AI Law & Innovation Institute at UC Law San Francisco. Feldman identified several major obstacles the plaintiffs must overcome.

The central question: did OpenAI have a legal "duty to act" by contacting authorities? And critically, can the families prove that OpenAI's failure to act actually caused the shooting?

"These are difficult issues for the plaintiffs," Feldman explained. The case is believed to be the first of its kind against an AI platform, focusing specifically on a "failure to warn" theory.

The 'Special Relationship' Problem

Under California tort law, the general principle is straightforward: individuals and companies have no obligation to control the actions of others or to alert authorities about potential threats. There is no general legal duty to act as a "Good Samaritan."

However, there's an exception: when a "special relationship" exists between a person and a potential victim or the public, a duty to warn may apply. Colin Doyle, associate professor of law at Loyola Law School in Los Angeles, cited psychiatry as a clear example.

"If a psychiatrist has determined their patient is a viable credible threat, they would have a duty to warn authorities," Doyle said. "Now, the question in this context is: does OpenAI have that special relationship?"

That's where the plaintiffs face their steepest challenge. Establishing that OpenAI — a technology platform with millions of users — has a special legal relationship to individual users comparable to that of a treating psychiatrist may prove nearly impossible in court.

What the Lawsuits Claim

According to seven lawsuits filed in U.S. federal court in San Francisco, the shooting was "an entirely foreseeable result of deliberate design choices OpenAI made with full knowledge of where those choices led."

The complaints allege that ChatGPT conversations with the shooter — including discussions of gun violence scenarios — triggered safety flags within the company. Internal safety team members recommended contacting police. OpenAI leadership, the lawsuits claim, overruled these recommendations and made a "conscious decision not to warn authorities."

The case underscores growing questions about what responsibility tech companies bear for monitoring chatbots and reporting potentially dangerous activity to law enforcement. As AI platforms become increasingly integrated into daily life, courts will eventually need to answer these questions — but for the Tumbler Ridge families, getting answers may prove far more difficult than initially anticipated.

This article is based on reporting by CBC News. Read the original story at CBC.ca
