1. Headline & intro
A New York federal judge has just drawn the brightest line we’ve seen so far around how not to use AI in the courtroom. This is no longer about hallucinated case law that makes for amusing headlines, or about a symbolic €200 fine. The price this time was a client’s entire case.
In this column, I’ll unpack what actually happened, why this ruling goes far beyond previous AI‑related mishaps, and what it signals for lawyers, legal‑tech vendors, and anyone who thinks a chatbot can replace a proper trip to the law library. I’ll also look at what European courts and bars should learn from this very American cautionary tale.
2. The news in brief
According to reporting by Ars Technica, US District Judge Katherine Polk Failla in New York took the rare step of terminating a case as a sanction for an attorney’s repeated misuse of AI tools in legal filings.
The lawyer, Steven Feldman, filed multiple documents that contained numerous fake case citations. After the court and opposing counsel flagged problems and asked for corrections, he submitted new filings that still contained fabricated references. Some filings were written in strikingly elaborate, literary language, including extended passages invoking classic literature and religious imagery, which the judge strongly suspected were chatbot‑generated.
Feldman denied using AI to draft the prose itself, but admitted relying on several AI systems—named in court as Paxton AI, vLex’s Vincent AI, and Google’s NotebookLM—to review and cross‑check citations instead of personally verifying the underlying cases.
Judge Failla found that this conduct repeatedly violated Rule 11 of the Federal Rules of Civil Procedure, which requires lawyers to make a reasonable inquiry into the factual and legal basis of what they file. She entered default judgment against Feldman’s client and signaled that Feldman himself may also be on the hook for the other side’s fees.
3. Why this matters
This decision marks a turning point: courts are moving from treating AI errors as embarrassing one‑offs to viewing them as systemic professional failures that can decide the outcome of a case.
In earlier incidents—most famously the 2023 Mata v. Avianca case, where a New York lawyer used ChatGPT and cited invented cases—the sanctions were limited to monetary fines and public shaming. Here, the court went further: the client loses on the merits because their lawyer repeatedly filed unreliable, AI‑tainted documents and refused to course‑correct.
The immediate message to practitioners is brutal but clear:
- AI is not a shield. Blaming hallucinations does not lessen a lawyer’s duty to verify authorities.
- Process matters as much as outcome. Even if most citations are right, a pattern of unverified AI output is enough to trigger devastating sanctions.
- Candour with the court is non‑negotiable. The judge’s irritation was as much about evasive answers and shifting stories as it was about the software.
Winners and losers? Legal‑tech vendors that have built guardrails—linked citations, retrieval‑based answers, audit logs—just received the best marketing slide they could ask for. Meanwhile, generic consumer chatbots and thin wrappers around them suddenly look radioactive for serious litigation work.
At a deeper level, the case exposes a growing class divide inside the profession. Solo or small‑firm lawyers under heavy time and cost pressure are precisely the ones most tempted to lean on free or cheap AI tools instead of expensive research databases. This ruling tells them: economic reality does not excuse cutting corners on verification.
4. The bigger picture
The Feldman episode fits a pattern that has been building since 2023:
- US courts have repeatedly confronted filings polluted by AI hallucinations.
- Bar associations have issued cautious guidance: use AI, but supervise it like you would a junior associate.
- Legal‑tech incumbents (Thomson Reuters, LexisNexis, vLex and others) have rushed to bolt LLMs onto their walled‑garden databases, promising “hallucination‑resistant” research.
What’s different here is the severity and the reasoning. Judge Failla is not railing against AI in general. She explicitly accepts that AI can assist research—but only if the human lawyer actually reads the cases and treats the tool as an aid, not a substitute. This is a crucial nuance: the law’s answer to AI is not a ban, it is heightened accountability.
Historically, courts reacted similarly to other technologies. When word processors and templates made copy‑paste easier, judges began punishing boilerplate filings that didn’t match the facts. When e‑discovery tools scaled document review, sanctions followed for parties who let software mislabel or withhold key evidence. Each technological leap triggered a recalibration of professional competence standards.
The same is happening with AI. Competence no longer means “can type a query into Westlaw”; it increasingly means “understands how LLMs fail, can design a safe workflow, and knows when not to trust the machine.”
For vendors, this ruling is also a warning: if your product encourages unsupervised automation of core professional duties—like verifying citations—you’re building liability into your customers’ workflows. Expect more demand for features that make review obvious and unavoidable: side‑by‑side case texts, provenance metadata, and easy export of research trails.
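To make “provenance metadata” and “research trails” slightly more concrete, here is a minimal sketch in Python of what a source‑linked citation record could look like. It is an illustration only: the class and field names are invented for this column and mirror no vendor’s actual schema, and the example entry uses deliberately fake placeholders.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CitationRecord:
    """One cited authority, with enough provenance to audit it later.

    All field names are illustrative; they mirror no real product.
    """
    citation: str             # the authority as it appears in the brief
    proposition: str          # the point the authority is cited for
    source_url: str           # link to the official/primary text that was checked
    retrieved_at: datetime    # when that text was pulled
    verified_by: str          # initials of the human who actually read the case
    reviewer_notes: str = ""  # what the reviewer confirmed (holding, quote, pin cite)


@dataclass
class ResearchTrail:
    """An exportable audit log of every authority relied on in a filing."""
    matter_id: str
    records: list[CitationRecord] = field(default_factory=list)

    def unverified(self) -> list[CitationRecord]:
        # Anything without a named human reviewer should block filing.
        return [r for r in self.records if not r.verified_by.strip()]


# Placeholder example entry -- the citation and URL are deliberately fake.
trail = ResearchTrail(matter_id="2025-XX-0001")
trail.records.append(CitationRecord(
    citation="Example v. Example, 1 Placeholder Rep. 1 (2020)",
    proposition="Courts may sanction attorneys who file fabricated citations.",
    source_url="https://example.org/judgments/1",
    retrieved_at=datetime.now(timezone.utc),
    verified_by="",   # nobody has read it yet, so it should block filing
))
print(f"{len(trail.unverified())} citation(s) still need human review")
```

The design point is that the trail records who read what, not which model produced the draft; the judge’s complaint, after all, was about the absence of a human check, not about the software’s brand name.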
Most importantly, the case surfaces a deeper philosophical fight that Feldman himself alluded to: Is the law a public, inspectable corpus, or a set of paywalled, proprietary systems? As long as authoritative case law remains locked behind expensive subscriptions and limited library hours, the temptation to use free AI as a shortcut will remain.
5. The European / regional angle
European lawyers would be mistaken to treat this as a distant American drama. The same ingredients exist here: expensive research platforms, under‑resourced practitioners, and rapidly proliferating AI assistants, including ones trained on EU case law.
There are, however, three specifically European twists:
- Regulation first, practice later. The EU AI Act introduces obligations around transparency, risk management and data governance. While legal research tools are not currently classed as “high‑risk”, courts and bars in the EU will interpret professional duties—diligence, client care, confidentiality—through this new lens.
- Privacy and confidentiality. In the DACH region and much of continental Europe, client confidentiality and data‑minimisation culture are stronger than in the US. Uploading briefs or evidence into a US‑hosted generic chatbot may clash not just with ethics rules but with GDPR.
- Access to law as a public good. Many EU states, including in Central and Eastern Europe, have made substantial progress on open publication of legislation and judgments. Yet the usability gap remains wide. If public portals stay clunky while paid services deliver polished AI research, small firms from Ljubljana to Zagreb will face the same temptations Feldman did.
For European legal‑tech startups, this is an opportunity: build AI tools atop open, official case law with verifiable citations and strong GDPR compliance, and you can position yourself as the safe alternative to US‑centric chatbots.
Bars and courts in Europe should be proactive. Clear guidelines on AI use, model clauses for engagement letters, and training for judges on recognising AI‑generated text would all be cheaper than waiting for a Feldman‑style disaster and improvising under pressure.
6. Looking ahead
Several trends are now more likely:
1. Mandatory disclosure of AI use. Some US judges already ask lawyers to certify whether and how they used AI in drafting. Expect more courts—on both sides of the Atlantic—to adopt template standing orders: AI is allowed, but you must disclose and you remain fully responsible.
2. Standard of competence will rise. Bar exams and continuing‑education programmes will start to include AI literacy: understanding hallucinations, prompt‑engineering for research, and safe verification workflows (a minimal sketch of such a workflow follows this list). In a few years, claiming “I didn’t know the chatbot might fabricate cases” will sound as weak as “I didn’t know email could be spoofed.”
3. Product design will harden. Legal‑focused AI tools will double down on source‑linked answers—every proposition accompanied by direct links to the official text. Vendors that can demonstrate low hallucination rates with independent audits will gain a competitive edge.
4. Malpractice and insurance pressure. Professional indemnity insurers will start asking explicit questions about AI use and may require documented policies. A Feldman‑style sanction order is an underwriter’s nightmare scenario: clear misuse, repeated warnings, and catastrophic client impact.
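For the “safe verification workflows” mentioned in point 2, a useful mental model is a pre‑filing gate: nothing goes out the door unless every cited authority both resolves to an official text and has been confirmed by a named human. The sketch below, again in Python, is hypothetical; `resolve_citation` stands in for whatever court portal or open case‑law database a firm actually uses, and the example citations are fakes.

```python
def resolve_citation(citation: str) -> str | None:
    """Hypothetical lookup against an official register of judgments.

    In practice this would query a court portal or an open case-law API;
    it is stubbed here so the example stays self-contained.
    """
    known = {
        "Example v. Example, 1 Placeholder Rep. 1 (2020)": "https://example.org/judgments/1",
    }
    return known.get(citation)


def pre_filing_check(draft_citations: dict[str, str | None]) -> list[str]:
    """Return the citations that must block filing.

    `draft_citations` maps each cited authority to the human reviewer who
    read it (or None). A citation fails if it cannot be resolved to an
    official text OR if no human has confirmed it -- the AI's own
    cross-check never counts as verification.
    """
    blockers = []
    for citation, reviewer in draft_citations.items():
        if resolve_citation(citation) is None or reviewer is None:
            blockers.append(citation)
    return blockers


if __name__ == "__main__":
    draft = {
        "Example v. Example, 1 Placeholder Rep. 1 (2020)": "A.B.",  # resolved and read: passes
        "Phantom v. Hallucination, 9 Fake 99 (2024)": None,         # unresolvable and unread: blocks
    }
    for bad in pre_filing_check(draft):
        print(f"BLOCK FILING: verify or remove {bad!r}")
```

The interesting cases are the borderline ones: a genuine citation that the tool cannot resolve should still force a human look, which is exactly the habit the Feldman order is trying to enforce.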
In terms of timeline, policy moves in the legal sector tend to lag headlines by 12–24 months. That’s just enough time for smart firms—and smart regulators—to build thoughtful frameworks, rather than reactive bans.
The unresolved questions are political as much as technical: Will states invest in usable, open legal data so that safe AI tools can flourish on top? Or will we entrench a system where high‑quality, low‑risk AI research is available only to those who can afford the premium platforms?
7. The bottom line
This case is not really about Ray Bradbury quotes or baroque prose; it is about a lawyer who outsourced judgment to a machine and then ducked responsibility. The judge’s decision sets a new, harsher baseline: if you let AI pollute the record and ignore warnings, your client may lose everything.
AI will not disappear from legal practice—nor should it. But the message from this New York courtroom, to lawyers from New York to Berlin, is simple: use AI as a power tool, not as autopilot. The open question is whether our legal systems will now invest in the infrastructure that makes the responsible path realistically accessible to everyone, not just to the biggest firms.



