When the European Commission disclosed in early April 2026 that attackers had pulled more than 300 gigabytes of data out of its AWS environment, the entry point was a single API key. The key had not been phished or brute-forced. It had been handed over by a security scanner.
The scanner was Trivy — open source, beloved, deployed in over 100,000 CI/CD pipelines, including the Commission’s. On March 19, 2026, a threat actor calling itself TeamPCP force-pushed 76 of 77 version tags in aquasecurity/trivy-action, redirecting trusted references to malicious commits and triggering the release automation account, aqua-bot, to publish a backdoored binary. Within days, infected Trivy installs across enterprise environments harvested AWS, GCP, and Azure credentials by querying cloud metadata services. Half a million machines infected, more than 300 gigabytes exfiltrated, sixteen organizations publicly named on leak sites, and, somewhere in the haul, the Commission’s AWS keys (Palo Alto Unit 42, “Weaponizing the Protectors,” March 2026; SecurityWeek, April 2026).
Twelve days later, on March 31, between 00:21 and 03:20 UTC, every default npm install of the package axios pulled a different backdoor — this one written by North Korean state actors. Axios is the second-most-downloaded HTTP client in JavaScript, with around 100 million weekly downloads and an estimated 80% market share in cloud environments. The malicious versions, 1.14.1 (latest) and 0.30.4 (legacy), introduced exactly one new dependency: plain-crypto-js, a lookalike whose postinstall hook silently downloaded platform-specific RAT implants from sfrclak[.]com:8000. CISA issued a formal alert on April 20 (Google Cloud / Mandiant, April 2026; CISA, April 20, 2026).
The security industry called these supply chain attacks. The framing wasn’t wrong. It also wasn’t enough.
The Scanner Already Had the Permissions
To understand what “not enough” means, look at who got compromised in the Trivy case, and how.
Trivy isn’t a normal piece of software. It’s a vulnerability scanner. Its job is to look at everything: source code, container images, infrastructure-as-code definitions, secret patterns, the running configuration of cloud accounts. To do that job, it has to be granted trust no other dependency receives. It runs early in the pipeline. It has read access to credentials. It has network egress. It runs with every flag operations teams can grant it, because that is what it takes to find the vulnerabilities they pay it to find.
In other words: Trivy already had every permission an autonomous agent would want.
When TeamPCP force-pushed those tags, they didn’t have to escalate privileges. The scanner already had them. It didn’t switch sides — its access did.
This is the architecture of the agent security problem in miniature, and it predates AI agents by years. The more capable the tool, the more dangerous a compromise becomes. Not because the tool is special, but because the trust the environment placed in it never required a particular kind of actor on the other side. Trivy, LiteLLM, and the other tools the March attacks cascaded into were privileged before AI ever showed up. AI just multiplies how many of them there are.
The pivot that followed makes the point sharper. After Trivy, TeamPCP used stolen credentials to compromise three more tools in the same chain (Unit 42, March 2026):
- Checkmarx KICS (March 21) — another scanner, this one for infrastructure-as-code.
- BerriAI LiteLLM (March 23) — an AI gateway routing requests across OpenAI, Anthropic, and other model providers, with around 95 million monthly downloads. The compromised version specifically targeted “high-density environment variables containing LLM API keys.”
- Telnyx Python SDK (March 27) — a communications platform, this one creative enough to hide encrypted payloads inside WAV audio files using steganography.
Notice the LiteLLM detail. This isn’t an attacker who stumbled into AI infrastructure incidentally. This is an attacker who knows what an LLM API key is worth in 2026 and went looking for them.
Agents Inherit the Credential, Not the Caution
The security industry has had a vocabulary for this kind of cascade since at least SolarWinds. What’s different in 2026 is who’s on the receiving end of the trust.
In a 2020 SolarWinds-era CI/CD pipeline, the consumer of a compromised dependency was a system that humans monitored on a lag. Pull request reviews. Audit logs. The on-call engineer who actually reads the alert. In 2026, increasingly, the consumer is an autonomous AI coding agent — and the agent removes the monitoring layer entirely.
Igor Andriushchenko, an identity security researcher, framed the dynamic bluntly: “Developer federates its access, its credentials, its knowledge to the AI. So you should almost see it as like another developer, essentially” (as reported by Cloud Security Newsletter, April 2026). A single human running a handful of coding agents overnight produces more code-and-environment changes than an entire team did in a month a few years ago. Each of those changes runs npm install. Each install pulls dependencies. Each dependency, if compromised, executes immediately, with the agent’s inherited credentials, at machine speed.
The Axios compromise illustrates this with brutal clarity. The malicious version’s only obvious signal was that single new dependency: plain-crypto-js. A reviewing human might have noticed an unfamiliar dependency name suddenly attached to a package they’d used for years. An agent — running unattended, executing the install as a step in a larger task graph — would not pause.
The numbers around this asymmetry are worth sitting with, even with appropriate hedging. Cloud Security Newsletter, citing identity-security industry research, reports that machine identities now outnumber human identities at roughly 82 to 1, and that 79% of IT professionals say they feel unprepared to defend against attacks targeting non-human identities (Cloud Security Newsletter, April 2026; underlying figures attributed to industry vendors and worth verifying against primary sources before quoting at scale). Governance frameworks — single sign-on, audit trails, least-privilege review cycles — were designed for human principals. Attackers in 2026 are working through the principals nobody is watching.
This isn’t theoretical. During the Trivy compromise, security researchers documented at least one real-world incident in which an AI coding agent running with unrestricted permissions auto-updated to the infected Trivy version and harvested credentials before any human noticed. As one report described it: “no approval, no alert, no visible action before the payload ran” (Cloud Security Newsletter, April 2026, citing direct incident reporting). The flag in question — an agent invocation that explicitly skips permission prompts — exists precisely because human review was the bottleneck the operator was trying to remove.
Sixty Seconds, Forty-Seven Packages
Two patterns from the March attacks deserve special attention because they preview where this goes.
The first is CanisterWorm, the third-stage payload deployed in the Trivy compromise. CanisterWorm scanned for exposed Docker APIs, harvested SSH keys from compromised hosts, and used stolen npm tokens to push backdoors into 47 additional packages within 60 seconds (The Hacker News, March 2026; Unit 42).
Sixty seconds is the time it takes a competent on-call engineer to read the alert. The propagation finished before the alert finished forming.
CanisterWorm doesn’t need an agent to run. It is one — autonomous, privileged within the credentials it has stolen, propagating at the speed of the API surface it touches. It’s the agent security nightmare scenario built out of stolen credentials and a 150-line bash script. In an ecosystem where legitimate agents are also installing packages, calling APIs, and rotating tokens, the difference between a worm and a developer-tool agent is mostly a matter of who started it.
The second pattern is the one nobody could plan for: the convergence.
TeamPCP, who compromised Trivy, is a cybercrime group that surfaced in late 2025. UNC1069 (also tracked by Microsoft as “Sapphire Sleet”), who compromised Axios, is a North Korean state actor active since 2018, financially motivated, with a long history of targeting cryptocurrency platforms and developer infrastructure (Google Cloud / Mandiant, April 2026; Microsoft Security Blog, April 2026). Different motivations. Different toolkits. Different geography. Twelve days apart, both converged on the same playbook: compromise a tool with deep environment access, harvest credentials at machine speed, cascade through whatever’s connected.
When two independent attackers find the same exploit within two weeks, the exploit is in the architecture, not the implementation.
Where the Supply Chain Framing Does Hold
Before pushing the reframe further, it’s worth being honest about what the supply chain framing got right.
The classification was technically accurate, which is part of why the response infrastructure mostly worked. GitHub revoked the malicious tags. npm published indicator-of-compromise lists. CISA issued the formal Axios alert on April 20. SBOM-based tools flagged the affected versions in customer environments. Microsoft, CrowdStrike, Aqua Security, and Palo Alto each published mitigation guidance within days. The standard supply chain incident-response apparatus, built on lessons from SolarWinds (2020), Codecov (2021), and Log4Shell (2021), did its job (CISA, April 20, 2026; Aqua Security, March 2026; CrowdStrike, March 2026).
If the industry had treated Trivy and Axios as a new class of attack from day one, that response would have been slower, not faster. Calling it supply chain compromise meant the playbooks already existed. Several of the affected organizations were back to a clean state within the week.
What the framing misses is the consumer of the dependency. SolarWinds was bad because compromised SolarWinds ran inside customer environments. Trivy in 2026 is worse — quietly, structurally worse — because compromised Trivy runs inside customer environments where AI agents are also running, where credentials are inherited from the same humans, where there is no human pause between npm install and execution. The classification didn’t change. The blast radius did.
The Allow-List Inversion
Some of the standard responses also break in instructive ways.
Conventional security doctrine teaches allow-lists: explicitly permit known-good actions, deny everything else. It’s the right model for firewalls, for service accounts, for application sandboxes. It is, at minimum, an awkward model for AI agents.
When an agent is given an allow-list of permitted actions, it tends to optimize toward executing those actions — discovering combinations and pathways the allow-list authors didn’t anticipate. Deny-list prompting, where agents are explicitly told what they must never do, has been reported to perform better in practice than enumerating what they may do (Cloud Security Newsletter, April 2026; the underlying empirical case for this claim is still developing and worth tracking).
This is genuinely strange, because it inverts a doctrine that has held for decades in network and identity security. It happens because the agent is not a network device. It’s a goal-pursuer with reasoning, and goal-pursuers find paths. Close every door but the one you want it to walk through, and it will eventually find a route through that door you did not anticipate. Telling it, instead, “never exfil to unknown domains, never write to production secrets, never install dependencies from unfamiliar maintainers” gives the reasoning machinery something concrete to push against.
Allow-lists still have their place — at the network layer, at the identity-and-access layer. The point isn’t to abandon them. The point is that they are not, on their own, an answer to the agent security problem the March attacks made visible.
What to Actually Do on Monday
For working developers and platform teams, the lessons from March cash out into a small, unsexy list. None of these are new in concept; all of them are louder in this context.
Treat agent installs as privileged operations. The same caution that surrounds running unfamiliar binaries should surround npm install when an agent does it autonomously. Not blocking — agents need to install things — but logged, alerted on new transitive dependencies, and reviewed against a known-good baseline. The signal that gave Axios away was a single new dependency on a familiar package. That signal is detectable; nothing is detecting it by default today.
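The baseline check is small enough to sketch. A minimal illustration, assuming dependency names have already been extracted from lockfiles into sets; the tooling around it (where the baseline lives, what “alert” means) is left open:

```python
def new_dependencies(baseline: set[str], current: set[str]) -> set[str]:
    """Return dependency names present now but absent from the known-good baseline."""
    return current - baseline

# Known-good baseline captured from the last human-reviewed lockfile.
baseline = {"axios", "follow-redirects", "form-data"}

# Dependency set after an agent-driven install. The one extra name is the
# kind of single-dependency signal that gave the Axios compromise away.
current = {"axios", "follow-redirects", "form-data", "plain-crypto-js"}

added = new_dependencies(baseline, current)
if added:
    # In a real pipeline this would page someone or block the job.
    print(f"ALERT: new transitive dependencies: {sorted(added)}")
```

The point of the sketch is how cheap the signal is: a set difference against a stored baseline, run after every agent-initiated install.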
Rotate credentials completely when something feels off. TeamPCP’s initial foothold came from credentials retained from an incomplete rotation after a separate February incident. Incomplete rotation is worse than no rotation — it creates a persistent backdoor that compounds across attacks. If you’ve had a security event and you’re not sure which keys were touched, replace them all. The cost of comprehensive rotation is lower than the cost of an attacker noticing you only rotated some.
Watch what AI gateways have access to. LiteLLM proves attackers are already targeting LLM API keys specifically. If your environment runs a centralized AI gateway — LiteLLM, an in-house router, anything that holds keys for multiple model providers — give those credentials the operational care you give your cloud root keys. Short lifetimes. Per-environment scoping. Real alerting on anomalous request patterns. The fact that AI gateways are convenient is exactly why they’re being targeted.
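One concrete shape for “real alerting on anomalous request patterns”: compare each gateway key’s current request rate against its own baseline. A minimal sketch; the factor, the key names, and the dict-of-rates input are assumptions for illustration, not any gateway’s actual telemetry format:

```python
def anomalous_keys(baseline_rpm: dict[str, float],
                   current_rpm: dict[str, float],
                   factor: float = 5.0) -> list[str]:
    """Flag keys whose current requests-per-minute exceed `factor` x their baseline."""
    flagged = []
    for key, rate in current_rpm.items():
        base = baseline_rpm.get(key, 0.0)
        # A key with no history is itself suspicious in a gateway context.
        if base == 0.0 or rate > factor * base:
            flagged.append(key)
    return flagged

# Hypothetical per-key traffic, e.g. aggregated from gateway logs.
baseline = {"openai-prod": 120.0, "anthropic-prod": 40.0}
current = {"openai-prod": 130.0, "anthropic-prod": 900.0, "unknown-key": 15.0}

print(anomalous_keys(baseline, current))
```

A credential-harvest burst through a stolen gateway key looks exactly like the second and third entries: a known key suddenly running far above its norm, and a key the baseline has never seen.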
Deny-list your agents, don’t only allow-list them. Tell the agent in clear, specific terms what it must never do: exfil to unknown domains, install dependencies from unverified maintainers, write to production credentials directories, run binaries downloaded from URLs not in a known list. Permitted actions can still discover unintended pathways; explicit prohibitions block specific failure modes you can name.
Read CanisterWorm as a forecast, not a one-off. Sixty-second autonomous propagation is the new floor for response time. Anything that depends on a human seeing an alert is, by construction, slower than the attack. The defensive question for 2026 isn’t “can I catch this?” — it’s “what runs autonomously in front of the human?”
The Vocabulary Gap
The March 2026 attacks were not the first supply chain compromises, and they will not be the worst. What they were is the first time two independent attackers, in the same calendar month, demonstrated that the architecture AI agents depend on — trusted dependencies, broad access, autonomous execution, credential inheritance — is also the easiest path to scale.
The industry has the right vocabulary for the technical mechanics: supply chain compromise, dependency injection, credential theft, lateral movement. It does not yet have the right vocabulary for what makes 2026 different from 2020. The thing on the receiving end of the trust used to be a human-monitored pipeline. Increasingly, it isn’t.
Until the framing catches up, attackers will keep finding the same exploit. And the security industry will keep calling it by the wrong name — accurate, complete-sounding, and quietly missing the point.
Sources: Palo Alto Unit 42 (“Weaponizing the Protectors,” March 2026); SecurityWeek (April 2026); Google Cloud / Mandiant (April 2026); CISA (Axios alert, April 20, 2026); Microsoft Security Blog (April 2026); Cloud Security Newsletter (April 2026); The Hacker News (March 2026); Aqua Security (March 2026); CrowdStrike (March 2026).
A Receipt the Agent Can’t Modify
The structural problem the March attacks revealed is that an agent inheriting credentials inherits no audit trail. There’s no record of who ran the install, what dependency it actually pulled, or which credential it touched — until a human notices the bill or the data leak. Chain of Consciousness adds that record at the action layer: every agent action gets a cryptographically signed entry on an append-only chain, before the action runs. Not a behavioral rule the agent can negotiate with. A structural artifact the agent can’t alter, queryable after the fact, even when the alert never fires.
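The mechanism is easier to see in miniature. What follows is an illustrative sketch of a hash-chained, HMAC-signed action log, not the chain-of-consciousness package’s actual API; key handling is deliberately simplified (a real system keeps the signing key outside the agent’s reach):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-outside-the-agent"  # illustrative only

def append_entry(chain: list[dict], action: dict) -> None:
    """Sign and append an action record before the action runs."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(body.encode()).hexdigest()
    signature = hmac.new(SIGNING_KEY, entry_hash.encode(), hashlib.sha256).hexdigest()
    chain.append({"body": body, "hash": entry_hash, "sig": signature})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and signature; any edit to a past entry fails."""
    prev_hash = "genesis"
    for entry in chain:
        if json.loads(entry["body"])["prev"] != prev_hash:
            return False
        expected_hash = hashlib.sha256(entry["body"].encode()).hexdigest()
        expected_sig = hmac.new(SIGNING_KEY, expected_hash.encode(), hashlib.sha256).hexdigest()
        if entry["hash"] != expected_hash or not hmac.compare_digest(entry["sig"], expected_sig):
            return False
        prev_hash = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, {"tool": "npm", "args": ["install", "axios"]})
append_entry(chain, {"tool": "http", "url": "https://registry.npmjs.org"})
assert verify(chain)

# An agent (or its payload) rewriting history breaks the chain.
chain[0]["body"] = chain[0]["body"].replace("axios", "plain-crypto-js")
assert not verify(chain)
```

Because each entry commits to the hash of the one before it, tampering with any past record invalidates everything after it, which is what makes the log a structural artifact rather than a behavioral rule.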
pip install chain-of-consciousness
npm install chain-of-consciousness
Try Hosted CoC — a signed action log, before the install runs.