ChatGPT-maker flagged future Canadian school shooter months before massacre
South China Morning Post
by Associated Press, February 21, 2026
AI-Generated Deep Dive Summary
OpenAI, the maker of ChatGPT, revealed that it detected potentially violent activity by a user named Jesse Van Rootselaar in June 2023. Despite identifying signs of violent intent, OpenAI did not alert Canadian authorities at the time, judging the threat neither imminent nor credible enough for a police referral. That decision preceded one of the deadliest school shootings in Canada's history: last week, Van Rootselaar killed eight people in British Columbia.
The company banned the account in June 2023 after determining it violated OpenAI's usage policies. Following the shooting, OpenAI contacted Canadian authorities with information about the individual and their use of ChatGPT. The incident has sparked debate over the responsibility of AI companies to monitor for harmful content and decide when to intervene.
The case highlights the tension tech companies face between user privacy and public safety, and raises questions about how AI tools should be used to identify and head off potential threats before they materialize. As AI becomes more embedded in daily life, these dilemmas grow more pressing for developers and users alike.
While OpenAI shared information with authorities swiftly after the shooting, its earlier decision not to report has drawn scrutiny. The case underscores the need for clearer guidelines on when AI detection systems should trigger police involvement, balancing privacy rights against public safety.
