Technology

OpenAI Banned Tumbler Ridge Shooter's Account Months Before Attack — But Stayed Silent

The AI giant's decision not to alert Canadian police has sparked a national debate over privacy, surveillance, and corporate responsibility.


OpenAI knew something was wrong. Eight months before Jesse Van Rootselaar carried out the Feb. 10 mass shooting in Tumbler Ridge, B.C., the American artificial intelligence company had already banned her ChatGPT account, flagging her messages for misusing its models "in furtherance of violent activities." What it did not do was pick up the phone and call Canadian police.

That decision has set off a political firestorm in Ottawa and reopened one of the most uncomfortable questions of the digital age: when does protecting individual privacy end, and when does public safety demand that tech companies act?

Ottawa Summons OpenAI — And Leaves Disappointed

Canadian AI Minister Evan Solomon wasted little time after the Wall Street Journal broke the story this week. He summoned senior OpenAI executives to Ottawa Tuesday night, demanding answers about the company's safety protocols and how it decides whether to escalate concerns to law enforcement.

The meeting did not go well. Solomon told reporters he and other officials left feeling let down, with no substantial new safety measures put on the table. In a public statement, the minister wrote that "internal review alone is not sufficient when public safety is at stake," and said OpenAI has promised to return with "more concrete proposals tailored to the Canadian context."

Justice Minister Sean Fraser made the government's position even blunter. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government's going to be making changes," Fraser said Wednesday during a press scrum, signalling that new legislation could be on its way.

OpenAI, for its part, confirmed the account ban and the timeline to BetaKit, calling the Tumbler Ridge attack "a devastating tragedy" and saying the company is doing "all we can to support the ongoing investigation." The RCMP confirmed OpenAI reached out to investigators after the shooting and said police are reviewing the shooter's online activity.

Experts Urge Caution on Surveillance Legislation

Not everyone believes Ottawa's instinct to legislate is the right one — or the safe one.

Michael Geist, a University of Ottawa law professor and one of Canada's leading experts on internet law and privacy, warned that rushing toward mandatory disclosure requirements carries serious risks for ordinary Canadians.

"We'd be reluctant to say that we want Google to actively monitor emails and report on them. I don't see a significant difference between that and what takes place in an AI chatbot context."
— Michael Geist, University of Ottawa

Geist's concern is that lowering the threshold for when tech companies must report user activity to police could effectively turn AI chatbots — and the vast range of digital tools millions of Canadians use daily — into surveillance instruments of the state.

Mike Zajko, an associate professor of sociology at the University of British Columbia who focuses on internet policy, echoed that concern. "Companies like OpenAI collect vast amounts of highly sensitive personal information, and have a great deal of discretion about what they do with it," Zajko told BetaKit. "Privacy and surveillance concerns already exist, and mandating information sharing with law enforcement amplifies these concerns."

A Fine Line Between Safety and Surveillance

Sharon Polsky, president of the Privacy and Access Council of Canada, offered a pointed observation: OpenAI was not necessarily legally obligated to report anything to the Canadian government — and she expressed surprise the company co-operated as readily as it did, noting that Meta famously ignored a parliamentary summons in 2021.

"On one hand, it's good PR... they have to play nice," Polsky told BetaKit, but she said the episode raises deeper questions about the degree to which private corporations should be made to act as extensions of the state's law enforcement apparatus.

The Wall Street Journal reported that Van Rootselaar's messages included references to gun violence and were caught by an automated content review system. OpenAI employees reportedly discussed whether to contact Canadian police before ultimately deciding against it and proceeding with the account ban in June 2024 — the same month Solomon was publicly emphasizing that Canada needed "light, tight, and right" AI regulation.

Solomon said the federal government is now "reviewing broader measures to ensure that AI systems and platforms operating in Canada have clear standards and accountability," and promised further announcements in the coming weeks.

The tragedy has forced Canadians — and their government — to confront an uncomfortable reality: the tools increasingly woven into daily life are collecting intimate details about their users, and the rules governing what happens with that information remain murky, inconsistent, and, as Tumbler Ridge has shown, potentially a matter of life and death.

Additional reporting sourced from BetaKit. WestNet News has independently verified key facts contained in this report.
