In April 2026, an Oregon woman walked out of federal court having lost a $12 million dispute over a family winery. The decisive blow wasn’t the merits of the case. It was a brief her attorney had filed containing 23 fabricated legal citations and 8 false quotations, all generated by an AI tool. In Couvrette v. Wisnovsky, U.S. Magistrate Judge Mark Clarke ordered the attorney to pay roughly $96,000 in fees and additional penalties; total monetary sanctions across counsel exceeded $110,000 — the largest such sanction recorded in the United States. The model provider that produced the hallucinations paid nothing.

Three hundred miles to the north, Oregon’s Department of Environmental Quality was, in the same season, charging packaging producers up to $25,000 per day for non-compliance with the state’s Recycling Modernization Act. The legal theory underneath that fine is the same one that should, eventually, govern the hallucination case: the entity that designs a product with embedded externalities ought to bear the cost of cleaning them up. We’ve decided this for plastic. We haven’t decided it for confabulated text. Yet.

The hallucination is the packaging. The fact-checking is the waste management. The producer walks.

The Producer Walks

For most of the twentieth century, the United States ran its municipal waste system on what the discard-studies scholars Max Liboiron and Josh Lepawsky call the “public collects, producer walks” settlement. A bottler designed a package without thinking about what happened after the consumer was done with it. A municipality picked it up. Property taxes covered the cost. Producers got the revenue from sales; the public got the bill for disposal.

The structural genius of this arrangement, from the producer’s perspective, was its invisibility. The fee for managing your soda bottle never appeared on your soda bottle. It was bundled into the property-tax line on a stranger’s mortgage payment, into the operating budget of a sanitation department, into the host-community fees of a landfill three counties away. The cost was real; it just wasn’t on the receipt.

Extended Producer Responsibility, or EPR, is the regulatory device that puts it back on the receipt. The mechanism is unglamorous. Producers register with a state-approved Producer Responsibility Organization (a PRO). They report the volume and material composition of the packaging they place on the market. They pay fees proportional to what they’ve put out there. The PRO uses the money to fund collection infrastructure, recycling-facility upgrades, and end-market development. Producers also have to meet recyclability and recycled-content targets, with penalties scaling from a few thousand dollars per violation up to $50,000 a day.
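
To see the fee mechanics concretely, here is a toy sketch in Python. Every rate in it is invented for illustration; real PRO fee schedules are set per material category and revised annually. But the shape is the whole mechanism: volume times an eco-modulated rate.

# Toy sketch of an EPR fee calculation. All rates are invented for
# illustration; real PRO schedules set them per material category.
# Eco-modulation: harder-to-recycle materials pay more per tonne.
FEE_PER_TONNE_USD = {
    "PET_bottle": 250,     # widely recycled, lower rate
    "flexible_film": 900,  # poorly recycled, higher rate
}

def annual_epr_fee(tonnes_by_material: dict[str, float]) -> float:
    """A producer's fee is proportional to what it placed on the market."""
    return sum(FEE_PER_TONNE_USD[m] * t for m, t in tonnes_by_material.items())

# A bottler reporting 1,200 tonnes of PET and 300 tonnes of film owes:
print(f"${annual_epr_fee({'PET_bottle': 1200, 'flexible_film': 300}):,.0f}")  # $570,000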

As of early 2026, seven US states have packaging EPR statutes on the books: Oregon, Colorado, California, Maine, Minnesota, Maryland, and Washington. Oregon was the first to actually enforce. The legislature passed SB 582 in 2021. The Department of Environmental Quality approved the Circular Action Alliance as the state’s PRO in February 2025. Active enforcement began on July 1, 2025 — four years from law to penalty notice. The lag isn’t a delay. It’s the realistic clock for a regulatory regime that requires entity registration, fee schedules, audit infrastructure, and a producer cohort large enough that the rules become politically durable.

Hold that timeline in mind: four years from “this is illegal” to “we’re charging you for it.”

The Externality That Already Has a Name

The current LLM market is in the pre-EPR producer-walks state. Model providers train and deploy systems that confidently generate false information. Downstream actors absorb the cleanup cost: developers debugging fabricated API references, lawyers fact-checking phantom case law, students chasing citations that lead to no paper, journalists rewriting press releases that weren’t written by anyone. The hallucination is the packaging. The fact-checking is the waste management. The producer walks.

What makes this analogy worth taking seriously, rather than dismissing as a catchy frame, is that the externality is already measurable and the cost is already concentrating in legible places.

On the rate side, Stanford researchers Matthew Dahl and colleagues ran roughly 800,000 legal queries through general-purpose LLMs and reported hallucination rates between 58% and 88%, depending on task complexity. Their 2025 study in the Journal of Empirical Legal Studies also tested two commercial AI legal tools that marketed themselves as “hallucination-free” — and found error rates between 17% and 33%. In a separate study of LLM-generated literature reviews, Walters and Wilder found that GPT-3.5 fabricated 55% of citations and GPT-4 fabricated 18%; even among real citations, 43% (GPT-3.5) and 24% (GPT-4) contained substantive errors. These are the defect rates of a mature consumer product.

On the cost side, the cleanest evidence sits in court records. The researcher Damien Charlotin maintains a public database that, by Q1 2026, had catalogued more than 1,350 cases worldwide involving AI hallucinations in legal proceedings. Industry estimates of the broader business cost — figures circulating in the trade press of $67 billion globally, or around $14,000 per knowledge worker per year on verification overhead — are best treated as order-of-magnitude rather than load-bearing, because they originate from vendor research with opaque methodologies. The court data, by contrast, you can look up.

The escalation curve is the part that should focus the mind. A year ago, US sanctions for AI-fabricated filings sat in the $2,500 range. By early 2026, the Fifth Circuit had sanctioned attorneys who had used vLex and Thomson Reuters CoCounsel — paid, enterprise products — for the same offense. A federal judge in the Southern District of Ohio called another lawyer’s filings “the most egregious violations of Rule 11” she had witnessed and assessed $7,500 plus a contempt finding. The Sixth Circuit imposed $30,000 in sanctions and dismissed a case as “almost entirely frivolous.” Then the Oregon vineyard ruling, Couvrette v. Wisnovsky: $110,000, the US record. Then, weeks later, the Nebraska Supreme Court suspended attorney Greg Lake’s license indefinitely after he submitted a brief with 63 citations, 57 of them defective and 20 outright hallucinated — the first license suspension in US history attributable to AI hallucinations. His client now faces around $52,000 in attorney’s fees on a custody case.

In each case, the cost flows the same direction. The attorney is the one who pressed Enter on the filing. The client paid the attorney. The opposing party spent paralegal hours discovering that the cited cases didn’t exist. None of it landed on the model provider.

The Mechanism, Mapped

The interesting question is not whether downstream parties are bearing a cost they didn’t generate — they obviously are — but whether a regulatory regime modeled on EPR could shift that cost without breaking the underlying market.

The structural mapping is tighter than I expected when I started writing this. EPR’s “covered material stream” — the set of packaging types subject to producer fees — corresponds to the question of which AI outputs would carry hallucination liability. Probably not all API responses; probably high-stakes domains first, the way Oregon began with rigid plastics and food serviceware before getting to flexible film. EPR’s PRO, which administers fees and runs collection programs, corresponds to a producer-funded verification infrastructure: citation-checking APIs, ground-truth databases, fact-verification middleware. EPR’s per-unit fees, scaled to material volume and recyclability, correspond to per-call levies scaled to query volume and measured accuracy on benchmark tasks. EPR’s enforcement penalties of $5,000 to $50,000 per day correspond to per-incident liability when a downstream party can document harm.
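
The per-call levy is the piece most worth making concrete. Below is a minimal sketch, assuming entirely invented rates, multipliers, and benchmark numbers; nothing here comes from a statute or an existing fee schedule. It is the packaging fee above, translated to inference.

# Toy model of the EPR-style fee mapping. All rates, thresholds, and
# domain weights are invented for illustration.
#
# Eco-modulation analog: a higher measured hallucination rate on a
# domain benchmark raises the per-call rate, the way hard-to-recycle
# materials raise packaging fees.
BASE_RATE_USD = 0.0001          # hypothetical levy per inference call
HIGH_STAKES_MULTIPLIER = {      # the "covered material stream" analog
    "legal": 10.0,
    "medical": 10.0,
    "general": 1.0,
}

def quarterly_levy(calls: int, domain: str, hallucination_rate: float) -> float:
    """Per-call fee scaled by volume, domain stakes, and measured defect rate."""
    modulation = 1.0 + 4.0 * hallucination_rate   # 0% errors -> 1x, 25% -> 2x
    return calls * BASE_RATE_USD * HIGH_STAKES_MULTIPLIER[domain] * modulation

# A provider doing 1B legal-domain calls at an 18% benchmark error rate:
print(f"${quarterly_levy(1_000_000_000, 'legal', 0.18):,.0f}")  # $1,720,000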

Two structural features of EPR are worth borrowing specifically. The first is registration. You can’t run an EPR fee program without a registry of producers, because you need to know who’s putting what on the market. An equivalent registry of foundation-model providers — already partially constructed by export-control and EU AI Act compliance regimes — would be the cheap step. The second is reporting. EPR producers report packaging volumes; AI producers could report inference volumes by domain, model version, and accuracy benchmarks, with attestation requirements analogous to financial disclosures. None of this requires invention. It requires political will and a regulator with subpoena power.
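
What would the reporting layer actually contain? A sketch of a quarterly disclosure record follows, with every field name hypothetical; no such filing format exists today. The point is that it is a disclosure schema, not new science.

# Sketch of the reporting layer: a quarterly disclosure record,
# analogous to an EPR producer's packaging-volume report. Every
# field name here is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceDisclosure:
    provider_id: str           # from the registry: the "producer" registration
    model_version: str         # defects attach to a specific shipped version
    domain: str                # legal, medical, general: the covered stream
    quarter: str               # e.g. "2026-Q1"
    inference_calls: int       # volume, the analog of packaging tonnage
    benchmark_suite: str       # which accuracy benchmark was run
    hallucination_rate: float  # measured defect rate on that suite
    attestation_sig: str       # signed by an officer, like a financial filing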

In the EU, this is already happening on a parallel track. The revised Product Liability Directive entered into force in December 2024 and must be transposed into national law by December 9, 2026. The PLD explicitly classifies software as a “product,” which means AI systems are covered. It also extends defectiveness past the factory gate: because models receive continuous updates, defects arising after market placement can ground liability. The proposed AI Liability Directive adds a fault-based framework with disclosure mechanisms, though its political future is uncertain — the European Parliament’s IMCO Committee called it “premature” in 2025. The US has nothing comparable at the federal level. The patchwork of legal-malpractice sanctions, state consumer-protection statutes, and the still-disputed reach of Section 230 immunities is the pre-EPR producer-walks state.

Where the Analogy Stops Working

I want to argue this analogy is worth building policy on. To do that honestly, I have to name the seams.

A plastic bottle has mass. It occupies landfill space for centuries. A hallucinated citation can be retracted and corrected, in theory, the moment it’s caught. The persistence profiles differ; the harm from one bad answer is usually narrower than the harm from one bad bottle. That cuts in the AI industry’s favor when arguing against the steepest fee schedules.

A second seam: packaging producers are identifiable. The brand is on the label. The AI supply chain is composable in a way the packaging supply chain isn’t. When a foundation-model output passes through a fine-tuning shop, into a retrieval pipeline, into a chatbot, into a lawyer’s brief — the question of who is the “producer” of the hallucinated citation has no clean answer. Liability law has handled composable products before (the Restatement (Third) of Torts on component-part suppliers is the obvious starting point), but the composability is denser here.

A third seam: scale. Packaging waste grows roughly linearly with population and consumption. AI inference volume grows exponentially. A fee structure calibrated to 2026 query volume may be irrelevant by the time it reaches enforcement.

And a fourth, more uncomfortable seam: EPR isn’t a slam dunk in its home domain. Liboiron and Lepawsky note that EPR programs frequently become industry-managed regimes that narrow the definition of “covered packaging” to optimize fees rather than environmental outcomes. The Circular Action Alliance is currently the only approved PRO in five of the seven EPR states. That’s a quasi-monopoly on compliance infrastructure, and it sets the methodology by which producer fees are calculated. Importing the structure to AI without learning from this would yield an industry-funded “AI Verification Authority” that defines acceptable error thresholds at exactly the rate the largest model providers can already meet — converting hallucination from an unsolved problem into a managed cost of doing business.

The strongest steelman for keeping the producer-walks status quo is that markets and tort law are already responding. Sanctions are escalating roughly an order of magnitude per year. Sullivan & Cromwell, a 900-lawyer firm, issued a public apology in April 2026 for hallucinations that occurred despite a comprehensive AI governance program. Gordon Rees Scully Mansukhani — a $759M firm — was sanctioned multiple times despite a formal cite-checking policy. The market is producing pressure. Insurance is starting to price it. Maybe that’s enough.

It probably isn’t, for the same reason markets weren’t enough on packaging. The cost falls on parties who didn’t design the product and can’t influence its defect rate, and tort actions against model providers are slow, expensive, and rarely available to the individual consumer. The Air Canada chatbot ruling in 2024 — where the airline was held liable for its bot’s invented bereavement-fare policy — got attention precisely because it was unusual; the typical hallucinated-citation case never puts a model provider at the defense table. But the steelman deserves more than a wave-off, and any serious EPR-for-hallucinations design has to argue specifically why the tort system will systematically underdeliver.

The Wishcycling Trap, Translated

One pattern in the case data is worth pulling out, because it doesn’t fit the usual “lawyer didn’t bother to check” narrative. In the Raja Rajan case in April 2026, the attorney used one AI system to verify another AI system’s citations and still submitted six false references. He had been sanctioned $2,500 for AI hallucinations once before. The second fine was $5,000.

This is the hallucination equivalent of what recyclers call “wishcycling”: performing the gesture of remediation without the substance. Putting a greasy pizza box in the recycling bin feels like environmental responsibility but contaminates the entire bale. Asking a second model “are these citations real?” feels like verification but produces correlated errors with the first model. Both behaviors are strongest evidence, in their respective domains, that the quality burden cannot be cleanly relocated to the consumer. In recycling, it’s why source-separation programs underperform single-stream. In AI, it’s why downstream verification keeps failing in patterns that look like effort.
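
The correlated-errors point is checkable with arithmetic. Here is a back-of-envelope simulation, with all rates invented for illustration: if the verifier shares the generator’s blind spots, the “verified” output stays contaminated at several times the rate an independent check would leave.

# Back-of-envelope simulation of why "ask a second model to verify"
# fails: if the verifier's errors correlate with the generator's
# (shared training data, shared blind spots), the residual error
# barely moves. All rates are invented for illustration.
import random

P_FABRICATED = 0.20      # generator fabricates 20% of citations
P_VERIFIER_MISS = 0.20   # verifier wrongly blesses a fake citation
CORRELATION = 0.8        # chance the verifier shares the generator's blind spot

def surviving_fakes(correlation: float, trials: int = 100_000) -> float:
    survived = 0
    for _ in range(trials):
        if random.random() >= P_FABRICATED:
            continue                       # citation was real
        if random.random() < correlation:  # shared blind spot: verifier agrees
            survived += 1
        elif random.random() < P_VERIFIER_MISS:
            survived += 1                  # independent miss
    return survived / trials

print(f"independent check: {surviving_fakes(0.0):.1%} of output still fake")  # ~4%
print(f"correlated check:  {surviving_fakes(CORRELATION):.1%}")               # ~17%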

The structural fix in both cases is the same: move the quality burden upstream, to the entity that designed the product, rather than downstream to the user who cannot actually perform the sorting.

The Practical Take

If you’re a developer or a tech leader reading this, the useful insight isn’t that you should lobby Congress — you won’t, and it won’t matter for years. It’s the timeline. Oregon’s law took four years from enactment to first penalty notice. If a US federal AI hallucination liability regime begins serious legislative work in 2026, expect first enforcement around 2030. Until then, you are the waste authority. The cost falls on you.

That has two operational consequences.

First, treat verification infrastructure as a permanent line item, not a temporary friction. The lawyers who got sanctioned in 2026 for using vLex and CoCounsel — the paid, enterprise tools — are the ones who treated the vendor’s marketing as an accuracy warranty. It isn’t. The premium product doesn’t move the externality; it just adds a logo to the receipt. Build verification you control: deterministic citation checks (every cited URL gets a HEAD request and a content-hash match before the output ships), structured outputs whose schema violations are rejected at the API boundary rather than silently coerced, retrieval grounded against sources you own with passage-level provenance attached to every claim, and a separate model with different training data running an adversarial check on numerical and factual extractions. Assume any model output that crosses a high-stakes boundary (legal filing, medical record, financial disclosure, customer-facing claim) is a contaminated input until proven otherwise.
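
Here is a minimal sketch of the first of those checks, the deterministic citation gate. The helper names (Citation, verify_citation, gate_output) are hypothetical, not from any library; the only assumptions are the requests package and a content hash captured when the citation was first grounded. The design choice that matters is failing closed: a network error counts as a verification failure, never a pass.

# Minimal sketch of a deterministic citation gate, run before any
# output crosses a high-stakes boundary. Names are illustrative.
import hashlib
from dataclasses import dataclass

import requests

@dataclass
class Citation:
    url: str
    # SHA-256 of the source text captured when the citation was grounded.
    expected_sha256: str

def verify_citation(cite: Citation, timeout: float = 5.0) -> bool:
    """Reject unless the cited URL resolves AND its content still matches."""
    try:
        # Cheap existence check first: a HEAD request catches dead links
        # without downloading the body.
        head = requests.head(cite.url, allow_redirects=True, timeout=timeout)
        if head.status_code >= 400:
            return False
        # Content-hash match: fetch the body and compare against the hash
        # recorded at grounding time, so silent edits also fail the gate.
        body = requests.get(cite.url, timeout=timeout).content
        return hashlib.sha256(body).hexdigest() == cite.expected_sha256
    except requests.RequestException:
        # Fail closed: network failure is a verification failure, not a pass.
        return False

def gate_output(draft: str, citations: list[Citation]) -> str:
    failures = [c.url for c in citations if not verify_citation(c)]
    if failures:
        raise ValueError(f"unverified citations, refusing to ship: {failures}")
    return draft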

Second, watch what happens in Oregon — both the EPR program and the courts. The state is now running the most advanced packaging EPR regime in the United States and has produced the highest hallucination sanction in the country. It’s a useful natural experiment in the same jurisdiction’s appetite for shifting cleanup costs upstream. When the case law there starts naming model providers as defendants — and it will, eventually, because the arithmetic gets too obvious to ignore — you’ll have several years of warning before the regulatory analog arrives elsewhere.

The producer is going to stop walking. The only real question is whose four-year clock you’re on.


Sources: Couvrette v. Wisnovsky (D. Or., April 2026, $110,000 sanction); Oregon SB 582 (2021); Oregon DEQ approval of Circular Action Alliance (Feb. 2025); Oregon EPR enforcement (July 1, 2025); Dahl et al., “Large Legal Fictions,” Journal of Empirical Legal Studies (2025); Walters & Wilder, citation-fabrication study (2023); Damien Charlotin, AI Hallucination Cases database (Q1 2026); Nebraska Supreme Court, Greg Lake disciplinary order (2026); Fifth Circuit sanctions (vLex / Thomson Reuters CoCounsel, 2026); Sullivan & Cromwell public apology (April 2026); Gordon Rees Scully Mansukhani sanctions (2026); Raja Rajan sanctions (April 2026); Moffatt v. Air Canada (2024); EU Revised Product Liability Directive 2024/2853 (Dec. 2024); proposed EU AI Liability Directive; Restatement (Third) of Torts: Products Liability; Liboiron & Lepawsky, Discard Studies: Wasting, Systems, and Power (MIT Press, 2022).
