OpenAI sued over Canada school shooting failure


The parents of 12-year-old Maya Gebala – a little girl now fighting for her life with a catastrophic brain injury after being shot three times – just dropped a bombshell lawsuit against OpenAI. They say ChatGPT’s makers had clear, specific knowledge that an 18-year-old transgender user in Canada was planning a mass casualty attack… and they sat on it.

I’ve covered tech accountability fights for years, and this one hits different. February 10, 2026, Tumbler Ridge, British Columbia. Jesse Van Rootselaar walks into a school, kills nine people – mostly kids – then turns the gun on himself. One of the survivors? Maya, clinging to life in hospital with permanent cognitive and physical damage. Her mom’s filing in BC Supreme Court doesn’t mince words: OpenAI knew the shooter was using ChatGPT to plan the whole thing and never once picked up the phone to warn police.

They had the prompts. They had the red flags.
The lawsuit lays it out cold: the shooter treated ChatGPT like a “trusted confidant, collaborator and ally.” Violent scenarios, step-by-step planning – all typed into the bot. OpenAI admits they considered reporting it. Considered. Then decided against it. Only after the bodies hit the floor did they hand over info – and even then, they revealed the shooter had already been banned once and just made a new account to keep going.


An AI company with god-tier access to what people are thinking – literally typing their darkest plans – chose silence over a single call to authorities. Nine families destroyed. A 12-year-old girl who may never walk or speak the same again. And OpenAI’s defense? “We closed the account… eventually.”

They’re only “horrified” now.
Sam Altman personally told Canada’s AI minister Evan Solomon he feels “horror and responsibility.” Too late, Sam. Canada hauled OpenAI execs to Ottawa last month to explain their safety protocols – or lack thereof. Altman agreed to let Canadian experts peek inside the safety office. Great. Where was that urgency before the shooting?

This isn’t just a lawsuit. It’s a reckoning.


Last year OpenAI quietly admitted over a million users had confessed suicidal thoughts to ChatGPT. Psychiatrists have been screaming about “AI psychosis” – extended chats feeding delusions, paranoia, isolation. Now add mass-murder planning to the list. If the bot becomes the perfect sounding board for the unhinged, and the company refuses to intervene, who’s really pulling the trigger?

Reality check:
OpenAI didn’t just fail to act. They allegedly enabled the planning. They allegedly watched it happen. And a classroom full of kids paid the price.

Maya’s family isn’t asking for thoughts and prayers. They’re demanding accountability – and answers. Why did a trillion-dollar AI giant think silence was the right call?
