Purple prose meets prompt errors: A lawyer’s Bradbury-laced brief collapses under AI hallucinations
A lawyer tried to rescue a fatally flawed, AI-assisted brief with dramatic Ray Bradbury quotations. The court wasn’t impressed—and the case was lost. Here’s what happened, why it matters, and how legal practice is adapting to generative AI’s risks.
If the modern courtroom has a literary canon, Ray Bradbury isn’t on the required reading list. Yet a lawyer recently tried to elevate a faltering brief with bursts of Bradbury-esque flourish—only to have the filing unravel under the weight of basic, verifiable errors traceable to generative AI. According to reporting from Ars Technica, the court flagged the problems, found the arguments unsupported, and the lawyer’s client ultimately lost.
The moment is emblematic. Courts are now awash in AI-adjacent filings: some helpful, some harmless, and a few—like this one—spectacularly misjudged. What makes the episode notable isn’t that a lawyer reached for an AI assistant. It’s that the gloss of grand rhetoric was used to cover gaps that routine diligence would have caught.
Below, we unpack how legal writing met literary grandstanding, why that combination backfired, and what the profession is doing to avoid a repeat.
Background
Generative AI’s promise—and its well-known trap
Large language models (LLMs) draft quickly, synthesize patterns across prior text, and can mimic professional tone. For lawyers on tight deadlines, that looks like a superpower: first drafts in minutes, alternative arguments on demand, and the ability to experiment with structure and style.
But LLMs don’t actually know law. They predict plausible words. When the prompt asks for cases, the model may “hallucinate”—confidently outputting nonexistent decisions, garbled citations, or quotes that never appeared in the reported text. The model doesn’t experience these as errors; to it, they are statistically likely continuations.
The legal profession learned this the hard way:
- In 2023, a federal court in New York sanctioned attorneys who filed a brief containing invented case law produced by ChatGPT. The lesson spread quickly through law schools, firms, and bar associations: never file before verifying.
- By 2024–2025, several judges had added standing orders that either ban unverified generative AI in filings or require certifications that a human checked all citations and quotations.
- Legal tech vendors responded with tools that tether drafting to cited sources and offer “cite checkers” that flag doubtful authorities.
The norms are now plain: using AI isn’t itself unethical, but filing its output without verification almost certainly is.
Why lawyers still stumble
Even with those norms, mistakes persist. Common reasons include:
- Time pressure or budget constraints that tempt attorneys to skip verification.
- Overconfidence born of AI’s authoritative tone.
- A focus on style over substance—polished prose that masks thin research.
- Miscalibrated prompts (e.g., “Find me cases that say X”) that invite the model to fabricate.
The result: filings that read dramatically but crumble when a judge (or opposing counsel) checks the footnotes.
What happened
Ars Technica describes a filing that fused two risky tendencies: unverified, AI-generated legal argument and performative literary flair. The lawyer peppered their brief with quotes from Ray Bradbury—evoking themes of censorship and moral urgency—while leaning on citations and propositions that the court found unsound.
What courts expect is straightforward: clear issue framing, controlling authority, accurate citations, and an honest application of law to fact. What this filing delivered, by contrast, was melodrama. The judge identified multiple defects typically associated with AI-generated text used without rigorous checking, including:
- Citations to cases or quotations that could not be located in official reporters.
- Authorities from the wrong jurisdiction presented as controlling, without acknowledging conflicts or relevance limits.
- Sweeping legal propositions supported only by secondary sources or misread summaries.
- Rhetorical climax where legal analysis should be—extended passages of purple prose that added heat but no light.
Courts are not allergic to rhetoric; they are allergic to error. Here, the gap between confident tone and unverifiable authority proved fatal. The court rejected the brief’s core arguments, and the party relying on the filing lost. The flourishes—Bradbury among them—didn’t mitigate the credibility issues created by the citations and analysis. If anything, the theatrics arguably made the problems more conspicuous.
Ars Technica’s account places this episode in a now-familiar pattern: generative AI used as a shortcut for legal research or writing without a tight verification loop. When that happens, the result can look superficially sophisticated while resting on foundations that collapse under routine judicial scrutiny.
How the rhetoric backfired
Literary quotations in legal writing are not inherently improper. Appellate opinions occasionally open with a line of poetry; advocates sometimes use metaphor to frame a dispute. But such devices work only when they serve established legal analysis.
In this case, the quotations came across as ornamental—and, critically, as a distraction from the filing’s mechanical errors:
- Style can’t substitute for standards. Courts apply settled tests: jurisdiction, standing, pleading sufficiency, summary judgment burdens, evidentiary rules. A moving quote doesn’t establish any of these.
- The more dramatic the tone, the higher the implied confidence. When the citations behind that bravado don’t check out, the credibility drop is steeper.
- Judges are signaling a bright-line expectation: If AI is used, a human must ensure every citation exists, every quote matches the source, and every jurisdictional claim is correct.
The take-home lesson is not “never use literary references.” It’s “don’t use rhetoric to paper over factual or legal gaps,” especially not when those gaps were introduced by an AI tool.
How this fits into the broader legal-AI landscape
Court rules are tightening
Since the first publicized hallucination cases, several jurisdictions and individual judges have implemented requirements such as:
- Certifications that a human verified all citations and quotations.
- Disclosures if generative AI was used in drafting, along with a human attorney’s signature accepting responsibility.
- Local rules warning that AI-induced errors will not excuse sanctions.
While there is not yet a uniform national rule, the trajectory is clear: diligence duties apply regardless of the drafting tool, and courts are increasingly explicit about that point.
Malpractice and professional responsibility implications
Professional conduct rules already require competence, candor to the tribunal, and reasonable diligence. AI doesn’t change those duties—it changes how lawyers meet them. Malpractice carriers and bar regulators are issuing guidance that emphasizes:
- Verify every citation with a trusted database (e.g., Westlaw, Lexis, official court sites).
- Don’t cite case digests or headnotes as if they were the holding.
- Keep a research log demonstrating human review when AI was used.
- Train teams on prompt design that reduces hallucinations (e.g., ask for summaries only from provided documents); a rough sketch of that approach follows this list.
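To make that last point concrete, here is a minimal sketch of a source-constrained prompt. The helper names (build_constrained_prompt, call_llm), the file names, and the document excerpts are assumptions for illustration, not any particular vendor’s API; the model’s answer would still be a draft requiring human verification.

```python
# A minimal sketch (assumed helper names, not a vendor API): restrict the model
# to documents the lawyer supplies and forbid citing anything outside them.

def build_constrained_prompt(documents: dict[str, str], question: str) -> str:
    """Assemble a prompt that limits the model to the supplied sources."""
    source_block = "\n\n".join(
        f"[SOURCE: {name}]\n{text}" for name, text in documents.items()
    )
    return (
        "You are assisting with a legal memo. Use ONLY the sources below.\n"
        "Tag every point with [SOURCE: name]. If the sources do not answer the\n"
        "question, reply 'Not addressed in the provided materials.' Do not cite\n"
        "any case, statute, or quotation that does not appear in the sources.\n\n"
        f"{source_block}\n\nQUESTION: {question}"
    )


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API the firm actually uses."""
    raise NotImplementedError


if __name__ == "__main__":
    docs = {
        "record_excerpt.txt": "Plaintiff filed the complaint on March 3, 2024 ...",
        "smith_v_jones.txt": "We hold that the limitations period begins at discovery ...",
    }
    prompt = build_constrained_prompt(docs, "When does the limitations period begin?")
    print(prompt)  # the model's answer would still need human review before filing
```

The design choice is deliberate: the model is never asked to “find” law, only to organize what the lawyer already verified.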
The toolchain is evolving
Vendors are racing to harden AI for legal workflows:
- Retrieval-augmented generation (RAG) that limits model output to a curated corpus, with inline citations linking to the exact page of a source.
- “Cite gatekeepers” that block filing if a citation can’t be validated against a citator (a rough sketch appears below).
- Source-grounding UI that forces attorneys to confirm each quote before export.
These safeguards don’t eliminate human responsibility, but they help close the gap between fluent output and reliable authority.
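To illustrate the “cite gatekeeper” idea, here is a rough sketch, assuming reporter-style citations can be pulled out with a simple regular expression and compared against a list of authorities a human has already confirmed in a citator. The pattern, the gate_citations helper, and the sample draft are illustrative only; a real product would query a citator service directly.

```python
import re

# Illustrative sketch of a "cite gatekeeper": before export, every reporter-style
# citation in the draft must match an entry the attorney has already verified.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)|F\.\s?Supp\.(?:\s?[23]d)?)\s+\d{1,4}\b"
)


def gate_citations(draft: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that have not been human-verified."""
    found = set(CITATION_PATTERN.findall(draft))
    return sorted(found - verified)


if __name__ == "__main__":
    draft = (
        "Under 550 U.S. 544 the complaint must state a plausible claim, "
        "and 999 F.3d 1234 is said to extend that rule."  # second cite is a placeholder
    )
    verified = {"550 U.S. 544"}  # confirmed in a citator by a human reviewer
    unverified = gate_citations(draft, verified)
    if unverified:
        print("Blocked: unverified citations ->", unverified)
    else:
        print("All citations verified; draft may be exported.")
```

Blocking on anything unmatched errs toward false positives, which is the right default when the cost of one fabricated authority is sanctions.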
Key takeaways
- The court cares about verifiable authority, not literary flourish. Quotes from Bradbury (or anyone else) won’t rescue a brief riddled with errors.
- Generative AI is a drafting aid, not a research oracle. Treat its output as a first-draft hypothesis that requires independent confirmation.
- Verification is non-negotiable. Shepardize/KeyCite every case, confirm every quotation, and ensure jurisdictional relevance.
- Tone should match substance. Overwrought prose paired with weak analysis signals to courts that the citations may not withstand scrutiny.
- Build a defensible workflow. If you use AI, document your review steps; many courts and clients now expect that paper trail.
What to watch next
- More explicit local rules. Expect additional courts to require AI-use disclosures or certification of human verification.
- Sanctions trends. Judges have begun issuing monetary sanctions and referral orders in egregious cases; the line between “careless” and “reckless” may be clarified in forthcoming opinions.
- Vendor guarantees. Legal tech providers are inching toward “validated output” promises, backed by warranties or indemnities if a tool fails to catch a hallucination.
- Education and bar guidance. Continuing legal education (CLE) courses on AI competency are proliferating; some jurisdictions may bake AI literacy into competence standards.
- E-discovery spillover. As AI-generated content proliferates in corporate communications, courts will grapple with authenticity, deepfakes, and provenance in evidentiary hearings.
Practical checklist for lawyers using AI in filings
- Constrain the source: Feed the model the record, relevant statutes, and known cases; ask it only to summarize or structure, not to “find” law.
- Demand citations to provided materials: If the model references anything else, treat it as unverified and research independently.
- Run every authority through a citator: Confirm existence, jurisdiction, and precedential status.
- Verify every quotation: Open the source and compare verbatim text and context.
- Keep a review log: Note who checked what, when, and with what tool (see the sketch after this list).
- Calibrate tone last: Once substance is rock solid, edit for clarity and restraint. Avoid rhetorical excess.
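For the quotation and logging steps, the following sketch assumes the cited sources are saved locally as plain-text files. The file names, the verify_quote helper, and the JSON-lines log format are hypothetical; the point is simply that verbatim comparison and a timestamped record are easy to automate.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative sketch of two checklist steps: (1) confirm a quotation appears
# verbatim in its claimed source, and (2) append the result to a review log
# recording who checked what and when. All file names are hypothetical.

def verify_quote(quote: str, source_file: Path) -> bool:
    """True only if the quoted text appears verbatim in the source file."""
    text = source_file.read_text(encoding="utf-8")
    # normalize whitespace so line breaks in the source don't cause false misses
    return " ".join(quote.split()) in " ".join(text.split())


def log_check(log_file: Path, reviewer: str, quote: str, source: str, ok: bool) -> None:
    """Append one verification entry to a JSON-lines review log."""
    entry = {
        "reviewer": reviewer,
        "quote": quote,
        "source": source,
        "verified": ok,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
    with log_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    source = Path("smith_v_jones_slip_op.txt")  # hypothetical local copy of the opinion
    quote = "the limitations period begins at discovery"
    ok = verify_quote(quote, source) if source.exists() else False
    log_check(Path("review_log.jsonl"), "A. Attorney", quote, source.name, ok)
```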
FAQ
Q: Can I use generative AI to draft a brief?
A: Yes—but you remain responsible for accuracy. Use AI to outline, rephrase, or propose arguments drawn from sources you supply. Independently verify every citation and quotation before filing.
Q: Are literary quotations acceptable in legal filings?
A: Occasionally, and sparingly. They should illuminate the legal issue, not replace authority. When a brief leans on rhetoric to mask analytical gaps, it undermines credibility.
Q: How do I prevent AI from hallucinating cases?
A: Don’t ask it to invent. Provide the record and authorities yourself, then ask for synthesis. If you do ask for external cases, treat the output as a lead list to research, not as citations to file.
Q: What are red flags judges notice in AI-tainted filings?
A: Nonexistent or unreachable citations, quotes that can’t be verified, misapplied standards of review, authorities from the wrong jurisdiction dressed up as controlling, and a mismatch between dramatic tone and thin analysis.
Q: Will courts start banning AI outright?
A: Unlikely. Courts are focusing on accountability, not the tool. Expect more certifications and sanctions for failures of verification, not blanket prohibitions.
Q: What should I do if opposing counsel files an AI-hallucinated brief?
A: Verify their citations and document discrepancies. Move to strike or for sanctions if warranted, but focus your response on the legal merits while highlighting credibility defects.
Bottom line
Generative AI can accelerate good lawyering, but it accelerates bad lawyering, too. The episode highlighted by Ars Technica is a reminder that advocacy is a discipline grounded in verifiable authority. When a brief elevates style over substance—especially with AI-injected errors—the result is not persuasion but avoidable loss. The cure is simple, if not easy: constrain the tool, verify the work, and let the law—not literary pyrotechnics—do the convincing.
Source & original reading: https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-from-losing-case-over-ai-errors/