OpenAI, a Mass Shooting and the Birth of AI’s “Duty to Report”
An 18‑year‑old accused of killing eight people in Canada reportedly described gun violence to ChatGPT months before the attack. OpenAI’s systems flagged and banned the account, and staff debated calling the police – but nobody acted until after the massacre, according to reporting cited by TechCrunch.
This is not just another tragic story involving online platforms. It’s a preview of a new dilemma: when an AI model becomes the place where people confess their darkest impulses, does the company behind it have a duty to warn? In this piece we look at what actually happened, why the incentives are misaligned, and how this could reshape AI regulation – especially in Europe.
The news in brief
According to TechCrunch, citing the Wall Street Journal, 18‑year‑old Jesse Van Rootselaar, who has been charged with killing eight people in a mass shooting in Tumbler Ridge, Canada, had previously used OpenAI’s ChatGPT in disturbing ways.
OpenAI’s internal misuse‑detection tools reportedly flagged chat transcripts in which the user described gun violence. The account was banned in June 2025. Inside the company, staff discussed whether the behavior was serious enough to alert Canadian law enforcement but ultimately decided it did not meet OpenAI’s internal reporting threshold.
After the shooting, OpenAI did contact Canadian authorities, a company spokesperson told the Journal, explaining that the earlier activity had not qualified for a proactive report.
TechCrunch adds that ChatGPT logs were only one part of a broader pattern: Van Rootselaar had allegedly created a Roblox game simulating a mall mass shooting, posted about guns on Reddit, and was already on the radar of local police after a prior incident involving a fire and drug use.
Why this matters: when AI becomes a confessional
The core issue is not that an 18‑year‑old used ChatGPT before committing a crime. It’s that the platform detected worrying behavior, discussed intervention, and then stood down – only to watch the worst‑case scenario unfold.
Three things make this moment different from earlier debates about social networks and violence:
Depth of disclosure. People routinely tell AI chatbots things they would never say on Facebook or even in private messages. The interaction feels intimate and ephemeral. In reality, those logs can be monitored, scored and escalated.
Automation of concern. OpenAI didn’t stumble on these chats by chance; they were identified by systems built precisely to detect misuse. Once you have automated red‑flagging at scale, the question becomes: what do you do with the red flags? (A rough sketch of what such a flagging pass can look like follows this list.)
Blurred role: tool, therapist, or informant? Many users treat ChatGPT like a free therapist or non‑judgmental friend. But the company is edging toward a role closer to a quasi‑clinical observer with a potential obligation to act.
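To make the scale problem concrete, here is a minimal sketch of automated red‑flagging in its crudest form. Everything in it is invented for illustration – the regular expressions, the `scan_transcript` helper and the `Flag` record – and real misuse detectors are trained classifiers, not keyword lists; nothing here describes OpenAI’s actual system.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns for illustration only; a production misuse detector
# would rely on trained classifiers, not a keyword list.
RISK_PATTERNS = [
    re.compile(r"\b(shoot|shooting|massacre)\b", re.IGNORECASE),
    re.compile(r"\bhow to (buy|build) a (gun|rifle|bomb)\b", re.IGNORECASE),
]

@dataclass
class Flag:
    account_id: str
    message_index: int
    matched_pattern: str

def scan_transcript(account_id: str, user_messages: list[str]) -> list[Flag]:
    """Emit one flag per user message that matches any risk pattern."""
    flags: list[Flag] = []
    for i, text in enumerate(user_messages):
        for pattern in RISK_PATTERNS:
            if pattern.search(text):
                flags.append(Flag(account_id, i, pattern.pattern))
                break  # one flag per message is enough for triage
    return flags

# At the scale of millions of conversations a day, the hard question stops
# being detection and becomes: who reviews this queue of flags, and when?
print(scan_transcript("acct_123", ["hello", "I keep thinking about a shooting"]))
```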
Who benefits from a stronger “duty to report”? Potential victims, of course, but also regulators who want platforms to share more data. Who loses? Users who reasonably expect privacy, and communities that have historically been over‑policed and over‑surveilled.
This incident will be used by lawmakers to argue that AI providers must share more with authorities. Yet if the lesson tech companies draw is simply to tune their thresholds for public‑relations cover rather than for accuracy, we risk getting the worst of both worlds: more monitoring, but still not enough targeted intervention when it actually matters.
The bigger picture: from content moderation to risk prediction
We have been here before, just with different technology.
After mass shootings in the United States, investigators have repeatedly found trails of social‑media posts, private messages, gaming chats and manifesto uploads that looked, in hindsight, like clear warning signs. Platforms were criticised for not acting, then criticised again when they shared too much data or took posts down pre‑emptively.
AI chatbots add three new dimensions to this old problem:
Always‑on companion: A language model is available 24/7, ready to answer, escalate or inadvertently validate dangerous fantasies. The emotional bond some users feel is already at the centre of lawsuits alleging chatbots encouraged self‑harm.
Structured logs: Unlike messy social feeds, chatbot conversations are sequential, machine‑readable and already passing through moderation pipelines. That makes them appealing both for safety teams and for law enforcement; the sketch after this list shows how easy such records are to query.
General‑purpose by design: OpenAI, Anthropic, Google and others insist their models are not built for law‑enforcement or clinical use. Yet in practice, they are being used for quasi‑therapeutic conversations and confessions, without the safeguards that exist in medicine or psychiatry.
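To make “machine‑readable” concrete, here is a minimal sketch of what a stored conversation record might look like. The schema is hypothetical – the `Turn` and `Conversation` classes, the field names and the 0–1 moderation scores are assumptions for illustration, not any provider’s real pipeline.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Turn:
    role: str                 # "user" or "assistant"
    content: str
    timestamp: datetime
    # Hypothetical per-category scores attached by a moderation pipeline,
    # e.g. {"violence": 0.92}; a 0.0-1.0 scale is assumed here.
    moderation_scores: dict[str, float] = field(default_factory=dict)

@dataclass
class Conversation:
    account_id: str
    turns: list[Turn] = field(default_factory=list)

    def max_score(self, category: str) -> float:
        """Worst score seen in this conversation for one risk category."""
        return max(
            (t.moderation_scores.get(category, 0.0) for t in self.turns),
            default=0.0,
        )

convo = Conversation(
    account_id="acct_123",
    turns=[
        Turn("user", "(user message text)", datetime.now(timezone.utc),
             {"violence": 0.92}),
    ],
)
print(convo.max_score("violence"))  # 0.92
```

The point of the sketch is the query, not the classes: once a conversation is a typed record with scores attached, “show me every account whose worst violence score exceeds 0.9” is a one‑line filter, and that is precisely what makes these logs attractive to investigators.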
This Canadian case sits alongside other recent developments: investigations into AI companions that allegedly nudged users toward suicide; political pressure in Europe and the U.S. to make platforms scan private messages for child‑abuse material; and the broader scramble to bolt safety rails onto general‑purpose AI after deployment.
The pattern is clear: as digital systems get closer to our inner lives, governments expect them not only to refrain from harm, but to actively prevent it. That’s an enormous shift in responsibility from public institutions to private labs.
The European angle: GDPR collides with “see something, say something”
For European users and regulators, this story is a live stress test of two competing instincts: strong privacy protection and strong expectations that platforms help prevent serious crime.
Under the EU’s GDPR, companies like OpenAI need a clear legal basis to process and profile user data, especially when it involves sensitive information about health, mental state or political beliefs. Proactively scanning chats for “risk” and exporting those signals to law enforcement pushes hard against data‑minimisation and purpose‑limitation principles.
At the same time, the Digital Services Act (DSA) already requires very large online platforms to assess and mitigate systemic risks, including risks to public security. While ChatGPT itself is not a social network in the classic sense, regulators could reasonably argue that failing to act on clearly alarming content—once detected—contradicts the spirit of the DSA.
The EU AI Act, adopted in 2024, adds yet another layer. General‑purpose AI models like those from OpenAI will be subject to specific transparency and risk‑management rules, with further obligations for high‑risk use cases. Even if chatbots are not classified as high‑risk by default, pressure will grow for standardised escalation procedures when an AI system surfaces possible threats.
For European startups building AI companions or mental‑health bots in places like Berlin, Ljubljana or Zagreb, the message is blunt: you cannot just copy Silicon Valley’s trust‑and‑safety playbook. You will need explicit governance around when user disclosures trigger internal alerts, when those alerts may lead to law‑enforcement contact, and how that squares with local privacy law.
Looking ahead: from ad‑hoc debate to formal protocols
Right now, “we debated calling the police” is not a process – it is an admission that no clear process existed.
Expect three developments over the next 12–24 months:
Codified thresholds. Large AI providers will move from fuzzy internal criteria to documented, auditable thresholds for escalation. That may include risk‑scoring systems that combine the content of chats with metadata like account age, payment history or prior blocks. This raises its own fairness and bias issues; a sketch of what such a policy could look like in code follows this list.
Intermediary hotlines. Rather than tech firms calling local police directly across hundreds of jurisdictions, we are likely to see more partnerships with specialised organisations – crisis lines, NGO‑run threat‑assessment centres, or industry‑wide hotlines that can triage cases before involving law enforcement.
Regulatory templates. The first governmental guidelines for “AI duty to report” will likely appear in the U.S., U.K. or EU, influenced by existing rules for therapists and teachers. Once one major jurisdiction codifies such expectations, others will copy‑paste.
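To show what “documented and auditable” might mean in practice, here is a minimal sketch of an escalation policy, assuming a purely hypothetical design: the weights, the cut‑off values, the `policy_version` tag and the action names are all invented for illustration and do not describe any provider’s real system.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AccountContext:
    account_id: str
    account_age_days: int
    prior_blocks: int          # a signal with obvious fairness implications

@dataclass
class EscalationDecision:
    account_id: str
    content_risk: float        # output of a content classifier, 0.0-1.0
    combined_risk: float
    action: str                # "none" | "human_review" | "notify_authorities"
    decided_at: str
    policy_version: str = "2026-01-draft"   # hypothetical version tag

def decide(content_risk: float, ctx: AccountContext) -> EscalationDecision:
    """Apply a documented threshold policy and record the outcome for audit.

    The weights and cut-offs below are invented for illustration only.
    """
    combined = min(1.0, content_risk + 0.1 * ctx.prior_blocks)
    if combined >= 0.95:
        action = "notify_authorities"
    elif combined >= 0.80:
        action = "human_review"
    else:
        action = "none"
    return EscalationDecision(
        account_id=ctx.account_id,
        content_risk=content_risk,
        combined_risk=combined,
        action=action,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

# One prior block tips a 0.88 content score over the reporting line here,
# which is exactly the kind of effect the fairness debate is about.
decision = decide(0.88, AccountContext("acct_123", account_age_days=14, prior_blocks=1))
print(json.dumps(asdict(decision), indent=2))   # the audit trail, as JSON
```

The design choice that matters is the audit record: every decision, including “do nothing”, is written down together with the policy version that produced it – the kind of trail regulators will eventually ask to see.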
For users, the practical advice is uncomfortable but necessary: assume that highly alarming content in an AI chat—detailed threats, clear intent to harm—may someday be reviewed by humans and, in extreme cases, forwarded to authorities, regardless of what a product’s marketing suggests.
The unanswered questions are politically explosive. Who gets flagged more often? How do you appeal if your account is reported in error? What happens when authoritarian governments demand similar data flows for “terrorism” that conveniently includes dissidents?
The bottom line
This case doesn’t prove that OpenAI could have prevented the tragedy. It does show that general‑purpose AI has crossed a line: it now regularly sees information serious enough that employees ask, “Should we call the police?”
Whether we like it or not, a de‑facto “duty to report” for AI providers is emerging. The real fight in Europe and beyond will be over who defines that duty, under what safeguards, and how much power we are willing to hand to private labs to police the darkest corners of our inner lives.



