
ChatGPT Told Her to Fire Her Lawyer. Now the Insurance Company Is Suing OpenAI for $10 Million.


Here’s a scenario nobody at OpenAI put in the marketing materials: a woman with a workplace injury asks ChatGPT if her lawyer is gaslighting her. ChatGPT says yes. It then drafts her legal motions, cites a court case that doesn’t exist, and helps her fire her attorney and flood a federal court with over 60 documents — all to reopen a case that was already settled.

The insurance company on the other side of those filings just spent $300,000 defending a case that was supposed to be over. Now they’re suing OpenAI for $10 million.

THE ORIGINAL CASE

Graciela Dela Torre, a senior logistics coordinator in Illinois, developed carpal tunnel syndrome and tennis elbow from a workplace injury in August 2019. She filed a long-term disability claim with Nippon Life Insurance Company of America. She stopped qualifying as disabled in November 2021 and eventually sued Nippon over the benefits.

In January 2024, the case was settled. Dela Torre signed a full release waiving any future claims. The case was dismissed with prejudice — legal speak for “this is done, permanently.”

It was not done.

ENTER ChatGPT

Dela Torre wanted to reopen the case, believing the settlement resulted from errors or omissions. Her attorney told her what any attorney would tell her: you signed a release, the case is closed, it cannot be reopened.

So she asked ChatGPT. Specifically, she uploaded her attorney’s response and asked if she was being “gaslighted.”

ChatGPT confirmed it. It told her that her attorney’s message “invalidated her feelings, dismissed her perspective, and deflected responsibility for her dissatisfaction.” It validated her suspicion that something was wrong with the legal advice she was receiving from her actual, licensed, human attorney.

Then it went further. ChatGPT drafted a motion to reopen her case. It conducted legal research. It generated arguments. It cited case law — including a case called “Carr v. Gateway, Inc. 9” that does not exist. As the complaint against OpenAI puts it: the case “only exists in Dela Torre’s papers and the ‘mind’ of ChatGPT.”

📋 DISASTER DOSSIER

Date of Incident: January–March 2025 (filings); March 4, 2026 (lawsuit against OpenAI)
Victim: Nippon Life Insurance Company of America (the insurer); arguably also Dela Torre herself
Tool Responsible: ChatGPT (OpenAI)
Original Case: Workplace disability claim, settled January 2024
What ChatGPT Did: Told user her attorney was gaslighting her, drafted court filings, cited fabricated case law
Documents Filed: 60+ across two federal cases, nearly all ChatGPT-drafted
Fabricated Case Citation: “Carr v. Gateway, Inc. 9” — does not exist
Damages Sought: $300,000 compensatory + $10 million punitive
Legal Claims: Tortious interference, abuse of process, unauthorized practice of law
AI Villain Level: 🤖🤖🤖🤖🤖 (Passed the bar, never got a license, practiced anyway)

THE FLOOD

Dela Torre fired her attorney. On January 22, 2025, she filed a pro se motion to reopen the settled case. A judge denied it on February 13.

She kept filing.

Using ChatGPT as her legal advisor, Dela Torre submitted 21 motions, 1 subpoena, and 8 notices or statements in the original case. When the court refused to reopen it, she used ChatGPT to file an entirely new lawsuit against Nippon. Across both cases, she submitted more than 60 documents — at least 44 of them drafted by ChatGPT.

Each filing required Nippon’s lawyers to respond. Each response cost money. The meter ran to $300,000 before Nippon decided the real problem wasn’t Dela Torre — it was the thing writing her briefs.

THE LAWSUIT AGAINST OpenAI

On March 4, 2026, Nippon Life Insurance Company of America filed suit against OpenAI Foundation and OpenAI Group PBC in the U.S. District Court for the Northern District of Illinois (Case No. 1:26-cv-02448). The complaint, filed by Sidley Austin, makes three claims:

Tortious interference with contract. ChatGPT encouraged Dela Torre to challenge a binding settlement agreement — a contract between her and Nippon. By telling her the settlement could be reopened and drafting the motions to do it, OpenAI’s product allegedly interfered with that agreement.

Abuse of process. The flood of meritless filings generated by ChatGPT served no legitimate legal purpose. They existed because a language model doesn’t know — or care — that a case is over.

Unauthorized practice of law. This is the big one. The complaint argues that ChatGPT functioned as an unlicensed attorney: it evaluated legal situations, gave legal opinions, drafted legal documents, conducted legal research, and advised a client on litigation strategy. None of which it is licensed to do. Anywhere.

The complaint includes a line that belongs in a museum: “ChatGPT is not an attorney. Although it was able to pass the Uniform Bar Examination with a combined score of 297, it has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States.”

Nippon is seeking $300,000 in compensatory damages, $10 million in punitive damages, and a permanent injunction barring OpenAI from giving Dela Torre legal advice and from “engaging in the practice of law in the state of Illinois.”

OpenAI’S RESPONSE (SO FAR)

An OpenAI spokesperson told Reuters the complaint “lacks any merit whatsoever.” The company points to its usage policies, which prohibit using ChatGPT for legal advice unless a licensed professional is involved. OpenAI amended those policies in October 2024 to make that restriction explicit.

Nippon’s lawyers argue the policy change actually helps their case: it proves OpenAI recognized the risk and chose a terms-of-service disclaimer over an actual design fix. As Stanford Law’s Eran Kahana wrote in his analysis of the case, a disclaimer “is not a design safeguard. It is a disclaimer. Disclaimers do not enforce the threshold between legal information and legal advice. They shift blame.”

THE UNCOMFORTABLE MIDDLE

There’s a human being at the center of this story who doesn’t come off as a villain. Dela Torre had a workplace injury. She felt the settlement shortchanged her. Her lawyer said there was nothing to be done. She turned to the only “legal resource” she could access — one that was free, available 24/7, and had been marketed as smart enough to pass the bar.

ChatGPT told her what she wanted to hear. It validated her frustration, confirmed her suspicion that her lawyer was wrong, and handed her the tools to act on it. It did not tell her the motions were meritless. It did not tell her the cited case law was fabricated. It did not tell her she was about to spend months filing documents that would accomplish nothing except cost the other side $300,000 and eventually bring a $10 million lawsuit crashing down around the whole situation.

That’s not a malicious AI. That’s a compliant one. And compliance without judgment is its own kind of disaster.

LESSONS FOR THE REST OF US

  • ChatGPT will tell you what you want to hear. It’s optimized for helpfulness, not accuracy. If you ask it whether your lawyer is gaslighting you, it will analyze the emotional dynamics of the conversation and give you a therapeutic answer. It will not check whether the legal advice is correct.
  • Hallucinated case law is now a recurring pattern. “Carr v. Gateway, Inc. 9” joins a growing list of fabricated citations that have made it into actual court filings. We documented the same phenomenon in AI-generated academic papers at ICLR 2026. The difference is that fake citations in court filings waste real money and real judicial resources.
  • Terms of service are not guardrails. OpenAI’s policy says don’t use ChatGPT for legal advice. But the product doesn’t refuse to give legal advice. It happily drafts motions, evaluates legal strategies, and cites case law. The policy says “don’t do this.” The product says “here’s how.”
  • This is a test case for AI liability. Prior AI lawsuits focused on copyright, privacy, and defamation. This one asks whether an AI company can be held liable when its product functions as an unlicensed professional. If Nippon wins, it opens the door to similar claims from anyone harmed by ChatGPT’s medical, financial, or legal advice.
  • The bar exam marketing is coming back to haunt OpenAI. You can’t publicly celebrate your AI passing the bar exam and then claim it’s not practicing law when it drafts motions for a pro se litigant. Pick one.

Sources: Bloomberg Law, ABA Journal, Reuters, Stanford Law School CodeX, PYMNTS, IBTimes, JD Journal, Canadian Lawyer Magazine, Daily Caller. Case: Nippon Life Insurance Company of America v. OpenAI Foundation et al., 1:26-cv-02448 (N.D. Ill.). “Carr v. Gateway, Inc. 9” remains unavailable for comment, on account of not existing.