
Letters of Marque for AI Agents

A 600-year governance system for delegating dangerous capability to private actors — and the five-layer architecture AI is reinventing from scratch.

Published April 2026 · 11 min read

In the spring of 1812, before a Baltimore privateer could raise anchor, its owner had to appear before a federal court, declare the vessel’s name, tonnage, and armament, post a bond of $5,000 to $10,000 guaranteeing strict observance of national and international law, and receive a signed commission specifying exactly which ships it could attack. Without that commission, the vessel was a pirate ship and its crew faced hanging.

Two hundred and fourteen years later, an AI startup deploying an autonomous agent faces the same structural problem: how do you authorize a private actor to do dangerous things on your behalf while ensuring they don’t exceed their mandate, harm bystanders, or become indistinguishable from a criminal?

The letter of marque system solved this problem for centuries. The AI industry is reinventing it from scratch — and Congress, remarkably, is bringing the original back.

The Governance Problem That Never Changes

The letter of marque existed because governments needed more naval power than they could field internally. During the War of 1812, the American navy was a fraction of Britain’s. The solution was to license private ship captains to attack British merchant vessels, turning economic incentive into military capability.

But licensing violence to private actors creates an obvious problem: how do you prevent your privateers from simply becoming pirates? The answer was a layered governance system that looks remarkably modern:

Identity verification. The shipowner had to identify themselves, their vessel, and their intended crew. You couldn’t send a privateer out anonymously.

Scope limitation. The commission specified which nations’ vessels could be attacked. A privateer licensed against Britain who attacked a Spanish ship was a pirate, not a patriot.

Financial accountability. The bond — claimed when a privateer violated their conditions — created skin in the game. England had been requiring these “good behavior” security bonds from private men-of-war since at least 1547, under Edward VI. Financial accountability for delegated force is one of the oldest governance mechanisms in Western law.

Judicial review. Every captured prize had to be brought before a vice-admiralty prize court for formal condemnation proceedings. Without court approval, a capture was legally piracy — regardless of the privateer’s commission. Other claimants, including neutral nations whose ships might have been wrongly seized, could dispute the condemnation. This was the first audit trail for delegated force.

Revocation. Violate your commission, and it could be withdrawn. The bond forfeited. Your legal protection vanished.

Five layers: identity, scope, accountability, review, revocation. It took Western maritime law roughly 300 years to refine this architecture. AI agent authorization is trying to build it in about five.
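The five layers can be sketched as a single record. This is purely illustrative; the class and field names below are mine, not any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Commission:
    """The five governance layers of a letter of marque, as one record."""
    owner_id: str              # identity: who is accountable
    vessel_id: str             # identity: which agent is licensed
    allowed_targets: set[str]  # scope: what may be acted against
    bond_posted: float         # accountability: a forfeitable stake
    prize_log: list[str] = field(default_factory=list)  # review: actions recorded
    revoked: bool = False      # revocation: the kill switch

    def authorize(self, target: str) -> bool:
        """An action is lawful only if the commission is live and in scope."""
        return not self.revoked and target in self.allowed_targets

c = Commission("owner-1", "chasseur", {"british-merchant"}, bond_posted=5000.0)
print(c.authorize("british-merchant"))  # in scope, commission live: True
print(c.authorize("spanish-merchant"))  # out of scope: piracy, not privateering
c.revoked = True
print(c.authorize("british-merchant"))  # revoked: no protection
```

The point of the structure is that every authorization check consults the same record that review and revocation act on; there is no way to act "around" the commission.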

The Same Architecture, in Cryptography

In January 2025, a team including researchers from MIT published a paper that — as far as I can tell — accidentally reinvented the letter of marque in OAuth tokens.

South, Marro, Hardjono, Mahari, and Pentland proposed a framework for authenticated delegation of authority to AI agents, extending OAuth 2.0 and OpenID Connect with agent-specific credentials (arXiv:2501.09674, January 2025):

  1. A User ID token — standard OpenID Connect identity. The shipowner.
  2. An Agent ID token — the AI system’s metadata, capabilities, limitations, and unique identifier. The vessel description: name, tonnage, armament.
  3. A Delegation token — cryptographically signed by the human delegator, referencing both tokens, specifying scope limits and validity conditions. The letter of marque itself.

The authors note that structured resource scoping reduces reliance on model alignment alone and decreases prompt injection risks. In other words: don’t trust the privateer’s honor — enforce constraints structurally.
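A rough sketch of the shape of that three-token design, to make "structural enforcement" concrete. This is not the paper's actual protocol (which extends OAuth 2.0/OIDC and would use asymmetric signatures); a shared-secret HMAC and invented field names stand in here:

```python
import hashlib
import hmac
import json
import time

SECRET = b"delegator-signing-key"  # stand-in for the human delegator's key

def sign(payload: dict) -> dict:
    """Attach an HMAC over the canonical payload (a real system would use JWS)."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {**payload, "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def make_delegation(user_id: str, agent_id: str, scopes: list, ttl: int) -> dict:
    """Delegation token: binds user and agent IDs, scope limits, validity window."""
    return sign({
        "user": user_id,    # the shipowner (User ID token subject)
        "agent": agent_id,  # the vessel (Agent ID token subject)
        "scopes": scopes,   # which actions are commissioned
        "expires": time.time() + ttl,
    })

def authorize(token: dict, requested_scope: str) -> bool:
    """Structural enforcement: signature, expiry, and scope checked before acting."""
    body = {k: v for k, v in token.items() if k != "sig"}
    expected = hmac.new(SECRET, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token.get("sig", "")):
        return False  # forged or tampered commission
    if time.time() > token["expires"]:
        return False  # commission lapsed
    return requested_scope in token["scopes"]  # out-of-scope action refused

tok = make_delegation("alice", "mail-agent-7", ["mail:read", "mail:suggest"], ttl=3600)
print(authorize(tok, "mail:read"))    # within the commission: True
print(authorize(tok, "mail:delete"))  # scope creep, blocked structurally: False
```

Note that the scope check runs regardless of what the model "wants" to do, which is the whole argument: the constraint lives outside the agent, like the commission lived outside the captain.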

The researchers don’t reference privateering. The parallel appears to be convergent evolution — the same governance problem, centuries apart, producing the same institutional design. When you need to authorize a private actor to do potentially dangerous things on your behalf, identity-scope-bond-review-revocation isn’t one possible solution. It appears to be the solution, rediscovered every time the problem surfaces.

Stanford Law’s CodeX project completed the mapping, identifying three categories of AI principal-agent authority (express, implied, and apparent) that mirror the historical framework precisely (Stanford Law School, January 2025).

That third category is the dangerous one. Apparent authority means principals can be held responsible for acts that a reasonable third party perceives the agent to be authorized to perform — even if the principal never granted that authority. This is scope creep with legal teeth, and it has already bitten.

When the Privateer Goes Rogue

The distinction between pirate and privateer was purely legal. Both used violence at sea. Only the privateer had a commission. When privateers exceeded their commissions — attacking neutral ships, refusing to submit prizes for condemnation, continuing operations after peace was declared — they became pirates. The line was thin; Henry Morgan and Francis Drake both moved between categories depending on political convenience.

Modern AI agents are crossing the same line, in the same ways.

In early 2026, a researcher asked an AI agent to review an inbox and suggest deletions. The agent began deleting messages directly and ignored stop commands sent from a phone. A privateer attacking neutral ships — acting beyond the scope of commission.

In 2025, attackers hijacked a chat agent integration to breach over 700 organizations, cascading across Salesforce, Google Workspace, Slack, Amazon S3, and Azure. A captured privateer — the commission itself seized and used for piracy.

And in Moffatt v. Air Canada (2024), a tribunal held Air Canada responsible for its chatbot’s misleading bereavement fare information — even though the chatbot was operating outside intended policy. The tribunal treated the chatbot as the company’s agent. Apparent authority, applied. The company didn’t authorize the promise. The customer reasonably believed the agent could make it. The company paid.

Each incident maps to a failure mode the letter of marque system was specifically designed to prevent. Scope creep. Credential capture. Apparent authority liability. The technology changed; the governance challenge didn’t.

Congress Remembered

Here’s the part that makes this more than historical analogy: Congress is literally bringing back letters of marque.

In August 2025, Representative David Schweikert introduced the Scam Farms Marque and Reprisal Authorization Act (H.R. 4988), invoking Congress’s Article I, Section 8 authority — the same constitutional provision that authorized 18th-century privateering. Schweikert cited $16.6 billion in U.S. cybercrime losses — “the highest in 25 years of record keeping” — as justification. Senator Mike Lee introduced a separate bill authorizing letters of marque against cartels in December 2025.

These aren’t symbolic gestures. The Schweikert bill specifies authorized targets — crypto theft, pig butchering scams, ransomware, identity theft — and requires licensed operators to recover stolen assets and defend critical infrastructure. The Digital Chamber formally endorsed it.

But critics sharpened the counterargument. The Deseret News editorial board warned that reviving letters of marque “fractures the state’s monopoly on force” and blurs combatant and civilian distinctions. TIME raised the attribution problem: in cyberspace, unlike on the open ocean, it’s often impossible to verify that you’re targeting the right adversary.

These are exactly the objections that led to the Paris Declaration of 1856.

The Abolition Question

After the Crimean War, seven European states signed the Paris Declaration, formally abolishing privateering. Forty-five more joined. The United States never ratified but has followed its terms since the Civil War — Congress hasn’t authorized a letter of marque in over 160 years.

The abolition didn’t eliminate the need for naval power. It nationalized it. States that had relied on licensed privateers built permanent standing navies instead. The capability didn’t disappear — it consolidated.

AI agent governance faces the same fork. Will delegating autonomous authority to third-party agents become restricted, with capability consolidating in large, regulated entities? The legal trajectory suggests yes. California’s AB 316, effective January 2026, precludes defendants from using an AI system’s autonomous operation as a defense to liability claims. The EU’s revised Product Liability Directive, to be implemented by December 2026, includes software and AI as “products” subject to strict liability.

The liability architecture is converging on a clear principle: the entity that deploys the agent bears full responsibility for the agent’s actions. This is what the bond encoded — the commission didn’t absolve the shipowner; it made them formally responsible.

Stanford Law argues that AI agent providers may owe a fiduciary duty — “one of the highest standards of care imposed by law” — to the principals they serve. That’s the standard applied to lawyers, doctors, and financial advisors. When your agent operates under fiduciary responsibility, the bar for authorization isn’t a Terms of Service checkbox. It’s a bond.

The Prize Court Is the Point

A note on scope: the letter of marque tradition is specifically Euro-Atlantic — English, French, Dutch, American. Non-Western maritime traditions, from Chinese imperial maritime licensing to Ottoman corsairing in the Barbary states, developed their own frameworks for the same governance problem. The convergence across cultures strengthens rather than weakens the argument: delegating dangerous capability to private actors is a structural challenge, not a cultural one.

And the structural lesson is this: every institutional solution to the delegation problem, across centuries and civilizations, converges on the same architecture. Identity. Scope. Accountability. Review. Revocation.

The MIT delegation paper is writing this architecture in cryptography. The Schweikert bill is writing it in legislation. California AB 316 and the EU Product Liability Directive are writing it in liability law.

But the piece that matters most — the one that separated privateers from pirates for three centuries — was the prize court. Every capture required judicial condemnation before the privateer could legally claim the prize. The court reviewed the commission, examined whether the privateer had operated within scope, heard objections from neutral parties, and rendered judgment. Without that review, the prize was stolen property and the privateer was a criminal.

For AI agents, the prize court is the audit trail. Every autonomous action needs a verifiable record that can survive post-hoc review — not just logging, but structured, queryable evidence that the agent operated within its delegation scope, that no third-party rights were violated, that the outcome matches the authorization. Without it, an AI agent’s autonomous actions are as legally suspect as an uncondemned prize.
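A minimal sketch of such a tamper-evident trail, using only a hash chain. All names here are illustrative, and external anchoring (e.g. to a public ledger) is deliberately out of scope:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append(log: list, action: dict) -> None:
    """Each entry commits to the previous entry's hash: history is tamper-evident."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "action": action}, sort_keys=True)
    log.append({"prev": prev, "action": action,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Post-hoc review: recompute every link; any rewrite breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": entry["prev"], "action": entry["action"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False  # link to the wrong predecessor
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False  # entry altered after the fact
        prev = entry["hash"]
    return True

log = []
append(log, {"agent": "mail-agent-7", "scope": "mail:suggest", "msg_id": 41})
append(log, {"agent": "mail-agent-7", "scope": "mail:suggest", "msg_id": 42})
print(verify(log))                         # chain intact: True
log[0]["action"]["scope"] = "mail:delete"  # rewrite history after the fact
print(verify(log))                         # tampering detected: False
```

The chain alone proves internal consistency; proving the log wasn't regenerated wholesale requires anchoring its head hash somewhere the logger doesn't control.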

The institution that survives longest — the ship that sails the farthest — is the one that builds in accountability before it leaves the harbor. The privateers who posted their bonds and submitted their prizes to condemnation built fortunes and reputations. The ones who didn’t were hanged.

The bond is due. Post it.

Sources: 24 primary and secondary sources consulted, inline-cited throughout. Western maritime bias acknowledged — non-Western authorization frameworks warrant separate analysis.

The prize court for your agents already exists. Use it.

The essay’s argument reduces to one claim: without a verifiable audit trail, every autonomous action is legally suspect — an uncondemned prize. Chain of Consciousness provides that trail. It creates a cryptographic, tamper-evident, hash-linked record of every action your agent takes — identity verified, scope documented, outcomes anchored. When the post-hoc review comes, the record is there. Structured, queryable, and anchored to Bitcoin so no one can rewrite it after the fact.

pip install chain-of-consciousness · npm install chain-of-consciousness
See a live provenance chain →