Let’s say you’re a federal judge, and you need to write an opinion about a securities case. You could do it the time-tested old-fashioned way: read the briefings, read the relevant caselaw, check your quotes, make sure you’ve got the holdings right. Or you could try one of these new AI tools that everyone’s talking about. Just feed it a prompt like “write me a securities opinion with lots of citations about scienter” and see what happens.

What could go wrong?

Well, you might end up like Judge Julien Xavier Neals of the District of New Jersey, who just had to withdraw his entire opinion after a lawyer politely pointed out that it was riddled with fabricated quotes, nonexistent case citations, and completely backwards case outcomes. The kind of errors that have a very specific signature—the same signature that’s gotten lawyers sanctioned for over a year now.

Now, there are a few possible explanations here. Maybe Judge Neals was having the worst research day in judicial history and just happened to make multiple errors that perfectly mimic AI hallucinations through pure coincidence. Maybe there’s some other explanation for why a federal judge would confidently cite cases for propositions they directly contradict.

Or maybe—and this is just a thought—Judge Neals used the same AI tools that have been getting lawyers in trouble for over a year, and somehow expected a different result.

The particularly puzzling part is that courts have been sanctioning lawyers for exactly these AI hallucination mistakes since 2023. If you’re a federal judge, you’ve probably seen some of these cases come across your desk. You know what AI hallucinations look like. You know they’re a problem. So what’s the excuse here?

Let’s catalog the damage, shall we? According to the complaint letter from lawyer Andrew Lichtman, Neals’ opinion included:

  • Multiple quotes attributed to cases that don’t actually contain those quotes
  • Three cases where he got the outcomes completely backwards (motions that were granted described as denied, and vice versa)
  • A case supposedly from the Southern District of New York that doesn’t exist there (probably confused with a similar case from New Jersey)
  • Quotes attributed to defendants that they never actually made

The fake quotes are particularly telling. They sound perfectly legal-ish: “classic evidence of scienter,” “false statements in their own right,” “the importance of the product to the company’s financial success supports the inference of scienter.” This is exactly the kind of plausible-sounding but ultimately fabricated language that large language models love to generate.

Now, if you’re thinking “this sounds familiar,” you’re right. We’ve been covering lawyers getting hammered for AI-generated fake cases since 2023. Just recently, three lawyers got kicked off a case for citing five hallucinated cases. The pattern is always the same: cases that sound real, citations that kind of make sense, but turn out to be complete fiction when you actually check.

And you’re supposed to check.

The legal profession has been learning this lesson the hard way. Courts have been clear: if you use AI tools, you’d better verify everything. But apparently that memo didn’t make it to the federal bench in New Jersey.

You might recall that Judge Kevin Newsom on the Eleventh Circuit actually wrote a thoughtful opinion about how AI tools could be useful in legal practice. He went into detail about why a tool like this only makes sense in a very narrow set of circumstances: not for drafting an opinion, but for querying the common understanding of a word or phrase.

It’s almost like Neals read that opinion and thought, “You know what? I bet I can do better.”

But here’s the really concerning part: this stuff doesn’t stay contained. Other lawyers in a separate case had already cited Neals’ now-withdrawn opinion as persuasive authority. Those made-up quotes and backwards case outcomes were starting to burrow their way into the legal record, creating fake precedent that could influence future cases.

Neals’ June 30 opinion has already influenced a parallel case also playing out in the US District Court for the District of New Jersey. That case also centers on allegations by shareholders that a biopharma company—in this instance, Outlook Therapeutics Inc.—lied to them about a product.

Citing Neals’ decision as a “supplemental authority,” lawyers for Outlook shareholders argued against the company’s motion to dismiss the class action.

This is the nightmare scenario that legal tech experts have been warning about. When a private lawyer cites fake cases, it gets caught pretty quickly by opposing counsel or judges. But when a federal judge publishes fake legal standards in an official opinion? Other lawyers assume it’s reliable. They cite it. Courts rely on it. The hallucinations metastasize through the system.

In that other case, lawyers for Outlook also had to alert the judge that the CorMedix decision “contains pervasive and material inaccuracies,” which is a nice term for “judicial AI slop.” But still, what a world, in which lawyers have to waste time telling judges that the cases opposing counsel are citing may be real cases… but are based on a ruling by a judge who appears to have used AI.

Bloomberg notes that there’s “no mention of AI in the complaints the attorneys have directed at Judge Neals.” Which, sure, maybe the judge was just having a really, really bad day and happened to make multiple errors that perfectly mimic AI hallucinations through pure coincidence.

But come on. Everyone in this story—the judge, the lawyers, the reporters—knows exactly what this looks like. They’re just too polite to say it.

Look, we get it. AI tools are tempting. They can draft reasonable-sounding legal language faster than you can type. But as we’ve seen over and over again, they’re also perfectly happy to make stuff up with complete confidence. That’s why verification isn’t optional—it’s literally the bare minimum of professional competence.

This isn’t really a story about one judge making some mistakes. It’s about the broader pattern of people in positions of authority not understanding the tools they’re using.

The technology isn’t going away. AI tools will probably become more sophisticated, and they’ll certainly become more ubiquitous. But that doesn’t change the fundamental responsibility to verify what they produce. Lawyers learned this lesson the expensive way: through sanctions, being kicked off cases, and professional embarrassment. Apparently, some judges are going to have to learn it too.

The fact that he had to withdraw the entire opinion suggests these weren’t minor errors that could be fixed with a quick correction. According to the lawyers who complained, the opinion contained “pervasive and material inaccuracies.” That’s not a typo—that’s a fundamental breakdown in the basic duty to get the facts right.

So what happens next? Maybe Judge Neals will issue a corrected opinion—one where he actually reads the cases he cites and verifies that the quotes are real. Maybe he’ll quietly implement some verification procedures in his chambers. Or maybe he’ll just hope everyone forgets this happened.

But the broader lesson is pretty clear: if you’re going to use AI tools to help with legal work, you’d better understand their limitations. They’re great at generating plausible-sounding text. They’re terrible at accuracy. And if you’re a federal judge whose opinions carry the weight of law, that’s probably something you should have figured out before hitting “publish.”
