AI in the courtroom: why law is changing far slower than the hype

March 23, 2026
5 min read
Lawyer in a traditional robe using a laptop with AI icons in a courtroom

1. Introduction

Law may be the last great white‑collar profession still built on hourly billing, precedent books, and opaque expertise. Generative AI is now pressing against all three—but in a far messier way than the "AI lawyer" headlines suggest. From London coroners’ courts to global megafirms trimming back‑office staff, experiments are underway, yet very little is truly industrialized.

In this piece, we’ll look at what is actually happening inside the business of law, why AI’s impact will be uneven and political, and how European regulation could make or break the next wave of legal tech. Most importantly, we’ll ask who stands to gain power in an AI‑augmented legal system—and who risks being left out of justice altogether.

2. The news in brief

According to Ars Technica, summarizing reporting from the Financial Times, UK barrister Anthony Searle has been using tools like ChatGPT and specialised medical AI systems to prepare questions and understand complex clinical procedures in coroners’ inquests. He is careful not to feed client data into public models and manually checks all references.

The article describes how similar tools are being piloted more broadly in the UK justice system. Government plans reportedly include using AI for case listing, translation, and transcription in what has been billed as one of the biggest criminal justice overhauls in decades. Law firms are subscribing to platforms such as Harvey and Legora for contract analysis and drafting, and some, like Clifford Chance, have reduced back‑office headcount partly in expectation of automation.

Yet structural adoption remains limited. A LexisNexis survey cited in the piece found that while about half of barristers say they use AI in some way, only a tiny minority see it as fully embedded in their operations.

3. Why this matters

The temptation is to treat this as another "lawyers will be automated away" story. That’s the wrong frame. What is actually at stake is who controls the infrastructure of legal work.

On one side, you have underfunded public justice systems—like the UK’s coroners’ courts—where AI becomes a survival tactic. If a barrister can use a medical chatbot to ask sharper questions without paying for an expert report, that is a quiet but profound shift in access to expertise. Here, families and small claimants are the potential winners.

On the other side, you have global firms selling certainty and prestige to corporate clients. For them, AI is less about replacing lawyers and more about protecting margins: automating research, first‑draft contracts, and internal knowledge management so that expensive human time is reserved for strategy and client hand‑holding. Junior associates and support staff are the obvious losers; partners and shareholders are not.

There is also a competitive dynamic: the firms that master AI‑assisted workflows can quote fixed or blended fees more confidently. That directly attacks the billable hour, the industry’s core business model. General counsel will start asking why they pay for ten hours of research if a partner openly promotes having an "AI copilot". Expect friction between marketing departments that talk up AI and finance teams that still depend on selling time.

Finally, credibility risks are real. UK judges have already had to deal with filings polluted by invented citations from poorly used AI. Once a court publicly calls out fabricated authorities, every advocate using AI is under suspicion. That slows serious adoption and shifts power toward vendors who can prove traceability, logging and guardrails.
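To make "traceability, logging and guardrails" concrete, here is a deliberately minimal sketch (not any vendor's actual product) of what such a guardrail could look like: citations an AI suggests are checked against a trusted database before they reach a filing, and every check is logged with a hash so an auditor can detect later tampering. The case names and the allow-list below are invented placeholders, not real authorities.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical allow-list standing in for a real citator or law-report database.
KNOWN_AUTHORITIES = {
    "Smith v Jones [2019] EWCA Civ 100",   # invented example citation
    "R v Doe [2021] EWHC 42 (Admin)",      # invented example citation
}

def vet_citations(draft_citations, audit_log):
    """Split AI-suggested citations into verified and flagged lists,
    appending a tamper-evident record of each check to audit_log."""
    verified, flagged = [], []
    for cite in draft_citations:
        ok = cite in KNOWN_AUTHORITIES
        (verified if ok else flagged).append(cite)
        record = {
            "citation": cite,
            "verified": ok,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        }
        # A hash over the record lets an auditor spot later edits to the log.
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        audit_log.append(record)
    return verified, flagged

log = []
good, bad = vet_citations(
    ["Smith v Jones [2019] EWCA Civ 100",
     "Totally Invented v Court [2030] 1 AI 1"],
    log,
)
```

Real systems would query a live citator rather than a static set, but the shape is the same: verification plus an audit trail, which is precisely what courts burned by fabricated authorities will demand.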

4. The bigger picture

What we’re seeing now is the second major automation wave in law. The first was e‑discovery and computer‑assisted legal research in the 2000s and early 2010s: tools that scanned millions of documents and cases, then helped lawyers prioritise what to read. Those systems did not kill the profession; they reshaped it. Entire careers in document review vanished, but high‑end advisory work flourished.

Generative AI widens that pattern from searching to synthesising. Tools like Harvey, Legora or Microsoft’s Copilot in Office can already:

  • summarise lengthy contracts;
  • generate first drafts of clauses based on playbooks;
  • create structured chronologies from messy email threads;
  • surface relevant case law from natural‑language prompts.
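As a toy illustration of the "structured chronology" idea (not how any of the named products actually works): even without a language model, dated statements can be pulled out of message text and sorted into a timeline. The generative step is essentially a far more flexible version of this extraction, tolerant of dates and events phrased in natural language.

```python
import re
from datetime import datetime

# Matches ISO-style dates such as 2024-03-15. Real email threads would need
# far more tolerant parsing; this is a deliberately minimal sketch.
DATE_RE = re.compile(r"(\d{4}-\d{2}-\d{2})")

def build_chronology(messages):
    """Return (date, snippet) pairs for every dated line, oldest first."""
    events = []
    for msg in messages:
        for line in msg.splitlines():
            m = DATE_RE.search(line)
            if m:
                when = datetime.strptime(m.group(1), "%Y-%m-%d").date()
                events.append((when, line.strip()))
    return sorted(events, key=lambda e: e[0])

# Invented two-message thread, out of chronological order.
thread = [
    "Re: delivery\nWe signed the contract on 2023-11-02 as agreed.",
    "Fwd: update\nThe shipment finally arrived on 2024-01-15.\nThanks.",
]
timeline = build_chronology(thread)
```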

This mirrors what is happening in other expert industries such as radiology, software development, and financial analysis: AI does not replace the professional; it compresses the lower‑value layers of their work and raises expectations for the remaining human layer.

Compared to US practice, however, adoption in Europe is more entangled with regulation and public systems. American firms have more freedom to treat AI as a competitive weapon. In many European countries, the state is both a massive buyer of legal services and the operator of courts. That means AI in law isn’t just a productivity story; it is constitutional infrastructure.

Crucially, this shift is happening while trust in institutions is fragile. If people believe AI‑driven triage decides which cases get heard, or that chatbots are quietly influencing settlements, every bias or hallucination becomes a political scandal, not simply a technical bug.

5. The European / regional angle

Europe is uniquely conflicted about AI in justice. On paper, the EU AI Act classifies most uses of AI in courts as "high‑risk", demanding strict transparency, human oversight and quality controls. At the same time, many member states face crushing case backlogs, underfunded legal aid, and pressure to "do more with less". The UK, although outside the EU, faces a similar budget reality.

This tension will shape the market. Cloud‑based US tools that casually send data across borders collide with GDPR, data localisation rules and recent case law on international transfers. European law firms—especially in privacy‑sensitive countries like Germany or the Nordics—are already signalling a preference for models that can run in EU data centres or even on‑premise.

That opens space for European legal‑tech players to build sector‑specific models: trained on national case law, local languages, and integrated with e‑filing and case‑management systems. It also favours bar associations and courts that can define certification schemes: "AI‑ready" or "AI‑compliant" workflows could become a competitive advantage when pitching institutional clients.

For European citizens, the risk is a two‑tier system. Wealthy corporates and well‑funded firms will enjoy AI as an efficiency booster. Ordinary litigants may experience AI primarily as a gatekeeper—automated triage for legal aid, or chatbots instead of in‑person advice. Whether this leads to more or less access to justice will depend on how strongly regulators and civil‑society groups push for rights like contestability and human review.

6. Looking ahead

Over the next three to five years, expect AI in law to move through three concrete phases.

  1. Shadow adoption. Individual lawyers quietly use public models for brainstorming, simplifying technical material, or drafting emails—while officially claiming they "don’t rely on AI". We are already here.
  2. Managed pilots. Firms and courts sign contracts with a small number of vendors, integrate them with document management and time‑recording systems, and wrap them in policies. Metrics like turnaround time, write‑off rates, and client satisfaction will decide which pilots survive.
  3. Process redesign. Only once tools are proven will billing models and staffing truly change. Expect more fixed‑fee and subscription offerings, bundled with AI‑rich self‑service portals for routine matters.

What should readers watch? A few signals:

  • Bar association guidance moving from "don’t hallucinate" to concrete standards on logging, training data and liability.
  • Court rules that explicitly allow or restrict AI‑generated drafts.
  • Clients insisting on seeing how AI is used on their matters—and asking for a discount if it is.

The unanswered question is who will own the core legal knowledge graphs in this world: cloud providers, traditional publishers, or new European players. Whoever does will have extraordinary leverage over pricing and innovation.

7. The bottom line

AI will not replace lawyers, but it will quietly reorder who gains power and profit from legal work. Public justice systems may use it to stay afloat; elite firms will wield it to defend margins and win clients; citizens could experience it as both lifeline and barrier. The real battle is not about robots in the courtroom, but about who sets the rules for AI‑augmented law. As this infrastructure solidifies, do we want our future "legal operating system" designed mainly for efficiency—or also for fairness and democratic accountability?
