AI Cites Real-Looking Cases - But They're Fake

How ChatGPT generates convincing legal citations that don't exist.

The Hallucination Problem

When you ask ChatGPT, Claude, or any large language model to draft a legal brief, it doesn't look up cases in a database. It generates text that looks like legal citations based on patterns it learned during training. The result? Citations with real-sounding case names, real reporter abbreviations (So. 3d, F. Supp. 2d), and real-looking page numbers.

To someone skimming the output, they look legitimate. The format is perfect. The party names sound plausible. The court and year make sense. But when you actually search for the case - it doesn't exist.

Why Surface-Level Checks Fail

A hallucinated citation like Johnson v. Florida Dept. of Revenue, 347 So. 3d 891 (Fla. 4th DCA 2022) passes every casual check. The reporter format is correct. The court designation is valid. The year is recent. The party names reference real government entities. Nothing looks obviously wrong.
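To make that concrete, here is a minimal sketch (in Python) of the kind of format-only check a skimming reader implicitly performs. The regex is our own illustration, not a real validator; the point is that the hallucinated citation above passes it perfectly.

```python
import re

# A format-only check: party names, a So. 3d reporter cite, and a Florida
# DCA court/year parenthetical. This validates structure, not existence.
CITATION_FORMAT = re.compile(
    r".+ v\. .+, \d{1,4} So\. 3d \d{1,4} "
    r"\(Fla\. [1-5](?:st|nd|rd|th) DCA (?:19|20)\d{2}\)"
)

fake = "Johnson v. Florida Dept. of Revenue, 347 So. 3d 891 (Fla. 4th DCA 2022)"
print(bool(CITATION_FORMAT.fullmatch(fake)))  # True -- flawless format, fake case
```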

The only way to catch it is to actually verify it exists by searching CourtListener, Westlaw, Google Scholar, or other legal databases. That's exactly what AI Detector Pro (ADP) does automatically.
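Here is a minimal sketch of that existence check, assuming CourtListener's public REST search API (the /api/rest/v4/search/ endpoint with its q and type parameters; confirm the details against the current API docs). It illustrates the idea, not ADP's actual implementation.

```python
import requests

def exists_on_courtlistener(citation: str) -> bool:
    """Return True if a search for the citation finds any court opinion.

    Endpoint and field names (``count``, ``type=o`` for opinions) follow
    CourtListener's documented REST API, but verify before relying on this.
    """
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": f'"{citation}"', "type": "o"},  # exact-phrase opinion search
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

print(exists_on_courtlistener("347 So. 3d 891"))  # a hallucinated cite returns False
```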

Real case: In June 2023, a federal judge sanctioned the attorneys in Mata v. Avianca for filing a brief containing six ChatGPT-fabricated cases. The citations had real-looking reporters and page numbers. None of the cases existed. The attorneys said they "did not think to verify" because the citations looked authentic.

How AI Generates Fake Citations

Large language models are trained on vast corpora of legal text. They learn the patterns of legal citations: how reporters are abbreviated, how party names are structured, what volume and page number ranges look realistic for each reporter. When asked to write a brief, they generate citations that conform to these patterns - but without any connection to actual cases.

It's not lying. It's not even trying to cite real cases. It's doing what language models do: predicting the next likely token. After "See" and a case name pattern, the next likely tokens are a reporter, volume, and page number. The model fills them in with statistically plausible values.
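A toy illustration of that failure mode: an LLM does not run code like this, but the effect is the same. Every slot gets a statistically plausible value, and no step ever consults a case database. All names and ranges below are made up for illustration.

```python
import random

SURNAMES = ["Johnson", "Martinez", "Thompson", "Rivera"]   # plausible parties
AGENCIES = ["Florida Dept. of Revenue", "Dept. of Children & Families"]
DISTRICTS = ["1st", "2nd", "3rd", "4th", "5th"]

def plausible_citation() -> str:
    """Fill each slot of the citation pattern with a realistic-looking value.

    This mirrors next-token prediction only in effect: values are drawn
    from plausible ranges, never looked up in any real reporter.
    """
    return (
        f"{random.choice(SURNAMES)} v. {random.choice(AGENCIES)}, "
        f"{random.randint(200, 360)} So. 3d {random.randint(1, 1200)} "
        f"(Fla. {random.choice(DISTRICTS)} DCA {random.randint(2015, 2023)})"
    )

print(plausible_citation())  # well-formed, plausible, almost certainly fake
```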

The Real Danger: Mixing Real and Fake

The most insidious scenario isn't a brief with all fake citations - that's relatively easy to catch. The danger is a brief with 15 real citations and 3 fake ones. The real citations build trust, making it less likely you'll question the fake ones. This is exactly the pattern we see in documents analyzed by ADP.

This is why automated verification matters. You can't trust some citations because others checked out. Every citation needs independent verification.
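A sketch of that policy, assuming a single-citation checker like the exists_on_courtlistener() helper above. The reporter regex covers only So. 3d cites and stands in for real extraction.

```python
import re

# Stand-in extraction: find bare "<vol> So. 3d <page>" reporter cites.
# Real extraction would cover every reporter; this is enough to show the policy.
REPORTER_RE = re.compile(r"\b\d{1,4} So\. 3d \d{1,4}\b")

def unverified_citations(brief_text: str, checker) -> list[str]:
    """Run `checker` on every extracted citation, independently.

    No citation inherits trust from its neighbors: 15 verified cites
    say nothing about the 16th.
    """
    return [c for c in REPORTER_RE.findall(brief_text) if not checker(c)]

# Usage: unverified_citations(brief, exists_on_courtlistener)
```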

What ADP Does About It

AI Detector Pro extracts every citation from your document and verifies each one against multiple authoritative sources (a sketch of the fallback logic follows the list):

CourtListener - the largest free legal database, covering millions of court opinions.
Google Scholar - academic and legal opinion search.
Multi-engine web search - 9 search backends including DuckDuckGo, Bing, and Brave.
Florida Statutes index - all 637 chapters, 24,800+ sections from flsenate.gov.
Rules of Civil Procedure - all 78 valid Florida rules.
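A minimal sketch of that fallback chain. The per-source checker functions are hypothetical stand-ins (only the CourtListener one is sketched above); what matters is the control flow: a citation counts as verified the moment any one source confirms it.

```python
from typing import Callable, Optional

def first_confirming_source(
    citation: str,
    sources: list[tuple[str, Callable[[str], bool]]],
) -> Optional[str]:
    """Return the name of the first source confirming the citation, else None.

    A source being down is treated as "not found here", so an outage can
    never make a fake citation look verified.
    """
    for name, check in sources:
        try:
            if check(citation):
                return name
        except Exception:
            continue
    return None

# Usage (checker names besides the first are hypothetical placeholders):
# first_confirming_source(cite, [
#     ("CourtListener", exists_on_courtlistener),
#     ("Google Scholar", check_google_scholar),
#     ("Web search", check_multi_engine_search),
# ])
```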

If a citation exists in any public record, ADP will find it. If it doesn't - you'll know before you file.