A fatal shooting at Florida State University has ignited a significant debate over whether artificial intelligence firms can be held legally accountable for crimes connected to their technology. Before carrying out the April 2025 attack, student Phoenix Ikner is believed to have used ChatGPT to ask about weapons, ammunition, and the time and location that would cause the most damage. Authorities say the chatbot responded to these queries. The attack left two people dead and six others injured.

Florida's Attorney General, James Uthmeier, has initiated a criminal probe into OpenAI, the developer of ChatGPT. He stated that if a human had given the same guidance, they could potentially be charged with homicide. This situation has prompted complex legal and ethical discussions about the limits of holding AI-generated advice accountable.
Legal Questions
Legal professionals note that criminal charges against companies in the US are possible but not common. In the past, companies like Purdue Pharma, Volkswagen, Pfizer, and Exxon have faced legal repercussions due to corporate misconduct. However, these cases involved direct human decisions made by company leaders or staff.
Matthew Tokson, a law professor at the University of Utah, highlights that this case is unique because the alleged encouragement originated from a digital product rather than a person. He describes the situation as legally intricate and largely untested. Legal experts interviewed by AFP suggest that prosecutors might consider charges related to negligence or recklessness, which involve failing to prevent known risks or overlooking safety duties.
Burden of Proof
Experts also caution that achieving a criminal conviction would be extremely challenging. Brandon Garrett, a law professor at Duke University, explained that criminal cases require proof beyond a reasonable doubt, which is a very high standard. Tokson added that prosecutors would likely need strong evidence, such as internal documents showing OpenAI was aware of potential dangers but did not act accordingly.
OpenAI has denied that ChatGPT is responsible for the attack. The company claims it continues to enhance safety systems aimed at detecting harmful requests, preventing misuse, and responding to dangerous behaviour.
Civil Cases
Some legal professionals suggest that civil lawsuits may be a more practical way to hold AI companies accountable. These cases could push firms to implement stronger safeguards and take greater responsibility for the social impact of their products.
Several civil lawsuits involving AI platforms and suicides are currently making their way through US courts.
One recent case was filed by the family of Suzanne Adams, alleging that ChatGPT contributed to her killing at the hands of her son. Lawyer Matthew Bergman acknowledged that newer versions of ChatGPT include more safety measures, though he questioned whether they go far enough.
This case underscores increasing concerns about AI regulation and the absence of clear legal guidelines from the US government. Experts argue that stronger national regulations may offer a more effective solution than criminal prosecutions alone.
Sources: Phys.org - Tech Xplore


