A 2008 science-fiction metaphor crossed into infrastructure in 2026. The asymmetry that drove it there is the same one driving the cozy web.
In 2025, security researchers logged 48,185 new CVEs — the highest annual count on record, per Indusface’s 2026 vulnerability statistics. The median time from disclosure to first observed exploitation was under five days, according to multiple incident-response vendor reviews. The average time for a defender to remediate a critical vulnerability was more than sixty days. That is roughly a 12-to-1 asymmetry between the speed of attack and the speed of defense: in 2025, attackers moved about twelve times faster than defenders could respond.
A security engineer reading those numbers reaches a cold conclusion: hide. Reduce the attack surface. Pull services off the public internet if you can. There is now an actual proposed standard for this — Network Hiding Protocol, NHP — and the OpenNHP project that documents its reference implementation describes the goal in a 2025 essay as the “dark forest” approach to infrastructure: a service is invisible to scanners by default, and only becomes reachable to clients that can prove cryptographic membership before the TCP handshake completes.
A teenager who joined Discord instead of Twitter has reached the same conclusion from the opposite direction. So has the journalist who quit a public account for a Substack that pays her rent. So have the NBA players who, as Yancey Strickler described in a 2025 essay, share injury intelligence through private group chats that bypass team trainers and reporters. They are all retreating from the same forest. The math driving them is structurally identical to the cybersecurity math: harm from being seen arrives faster and cheaper than any defense against it can be mounted.
This is the dark forest theory of the internet, and the most interesting thing about it in early 2026 is that it has stopped being a metaphor.
Liu Cixin’s The Dark Forest (the 2008 novel that named the problem in its science-fiction form) frames the universe as a place where any civilization that broadcasts its location risks immediate destruction by another civilization that, rationally, cannot trust it. The strategy that survives is silence.
Yancey Strickler ported the metaphor to the internet in a 2019 essay. His observation was simple: real conversations were leaving public platforms — Facebook, Twitter, the open web — and moving into Slack, Discord, group chats, paid newsletters, and podcasts. The “cozy web,” he called it. The reason was the same as in Liu’s novel: visibility had become dangerous. Public posting invited harassment, context collapse, screenshots, employer retaliation, and scraping by training-data pipelines that nobody had asked for permission to run. The rational response was to hide.
In May 2025, on his blog, Strickler revisited the theory and admitted he had underestimated it. Bots dominate the open internet now. Cloudflare’s 2025 Year in Review reported that non-AI bots generated about 50% of HTML page requests at the start of 2025, while AI bots averaged roughly 4.2% of requests and peaked above 6% in late June. The CEO of Cloudflare predicted, in a TechCrunch interview from March 2026, that AI bot traffic would exceed human traffic on the internet by 2027. A separate study from researchers at Stanford, Imperial College London, and the Internet Archive estimated that 35.3% of new websites in 2025 were AI-generated or AI-assisted, with 17.6% entirely AI-generated. The same study made a finding that has not received the attention it deserves: AI content was not statistically less accurate than human content. It was less diverse — semantically contracted, tilted toward artificial positivity, and weirdly homogeneous. The dark forest does not lie. It bores.
The forest is real. The predators are quantifiable. And the prey have noticed.
Pew Research Center’s November 2025 report found that visiting and posting activity on Twitter/X and Facebook among U.S. users had fallen nearly 50% from 2020 levels. Sixteen percent of Americans surveyed in mid-2025 had quit at least one major social platform; among Gen Z respondents, the figure was 18%.
Where they went is also measurable. WhatsApp now reports 3.3 billion monthly active users globally, with group chats accounting for somewhere between 41% and 57.5% of all messages depending on the source. Discord reports 231 million monthly active users and roughly 1.1 billion messages per day. Substack reports 8.4 million paid subscribers as of Q1 2026, up from a 5 million milestone announced earlier in 2025, with over $510 million in annualized creator revenue and more than 40,000 paying creators (up from about 24,000 in 2023).
These numbers are not consistent with “the internet is dying.” They are consistent with the internet migrating. The open web is shrinking. The cozy web is growing faster than the open web is shrinking.
Then there is one piece of evidence that matters more than any single data point, because it shows the metaphor has crossed into infrastructure. Strickler’s 2025 essay disclosed that his team is building DFOS — the Dark Forest Operating System — software that lets small groups create their own private internets. When a metaphor builds its own operating system, the theory has stopped describing the world and started shaping it.
This is where the social and the technical converge.
The cybersecurity industry has been building dark forests for two decades. The names are different — zero trust networks, defense in depth, attack surface management, network microsegmentation, NHP — but the structure is the same. Every exposed service is a target. Every public IP is reconnaissance fuel. The math, as established at the top of this essay, says that attackers move roughly twelve times faster than defenders. The rational response is to remove yourself from public view.
Indusface’s 2026 vulnerability data shows what this means in practice. API vulnerability exploitation grew 181% year over year. The insurance and banking sectors saw vulnerability-attack volume rise 220% and 149% respectively. Zero-day exploitation in cloud workloads is up 19%. More than 40% of organizations admit they don’t have full visibility into their own API attack surface — meaning they don’t even know what they’re exposing. The CISA Known Exploited Vulnerabilities catalog ended 2025 with 1,484 entries: vulnerabilities that aren’t theoretical, that have been used to compromise real systems. A 2026 review in Security Boulevard reports that 28.3% of exploited vulnerabilities are weaponized within 24 hours of disclosure. The defender has, on average, less than one day before the exploit is in the wild and more than two months before the patch is fully deployed.
This is the same asymmetry the social dark forest is responding to, formalized in a different domain. Public posting invites harassment, screenshots, scraping, and context collapse — all of which arrive within minutes — while reputation repair, content takedown, and legal recourse take months or years. Public exposure of an API invites probing, fuzzing, and exploitation within hours, while patching, rotation, and architectural fixes take months. The costs of visibility arrive fast; the defenses arrive slow. Across both domains, the rational individual or organization retreats.
The OpenNHP project’s framing is the bridge. Their proposal is that infrastructure should be invisible by default. NHP uses cryptographic identification before any TCP handshake — if you don’t have the right key, you don’t see the service at all. It is, structurally, the cozy web for servers. Both domains are converging on the same architecture: trusted small groups with cryptographic membership, opaque to outsiders.
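To make “invisible by default” concrete, here is a toy single-packet-authorization (SPA) sketch in Python, the older and simpler pattern that NHP generalizes. This is not OpenNHP’s protocol or API: real NHP uses public-key identity and gateway integration, while this toy uses a pre-shared HMAC key, and the port number and key are invented for illustration.

```python
# A toy single-packet-authorization sketch; NOT OpenNHP's protocol.
# It only illustrates the shape: prove membership BEFORE any TCP
# handshake, and answer everything else with silence.
import hashlib
import hmac
import json
import socket
import time

SHARED_KEY = b"demo-key-not-for-production"  # hypothetical pre-shared secret
WINDOW = 30  # seconds of clock skew tolerated before a knock is stale

def make_knock(client_id: str) -> bytes:
    """Client side: a UDP datagram carrying an authenticated timestamp."""
    body = json.dumps({"id": client_id, "ts": int(time.time())}).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + tag

def verify_knock(packet: bytes) -> bool:
    """Server side: constant-time MAC check plus a replay window."""
    try:
        body, tag = packet.rsplit(b".", 1)
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(expected, tag):
            return False
        return abs(time.time() - json.loads(body)["ts"]) <= WINDOW
    except (ValueError, KeyError):
        return False

def serve(port: int = 62201) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet, addr = sock.recvfrom(4096)
        if verify_knock(packet):
            # A real implementation would now open the firewall for addr[0].
            print(f"valid knock from {addr[0]}: open the gate for this IP")
        # No response otherwise: a scanner gets no port, no banner, no signal.
```

The design point is the asymmetry of signal: a valid knock earns an opening; everything else earns silence, so a scanner cannot even confirm the service exists.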
This is a structural analogy, not a causal claim. The social dark forest didn’t cause the technical one; both responded to the same underlying asymmetry. But the convergence is genuine, and it suggests the asymmetry is general — not a property of social media or of corporate networks specifically, but of any system where the cost of being seen exceeds the cost of being defended.
The most surprising data point in this whole picture comes from a 2025 paper in the Journal of Communication by Philipp Masur and Giulia Ranzini, replicating three foundational privacy-paradox studies (Krasnova et al. 2010, Vitak 2012, Dienlin & Trepte 2015). Those original studies had a clear finding: the more people worried about privacy, the less they shared. Concern predicted behavior, which was the evidence against the so-called privacy paradox, the supposed gap between what people say about privacy and what they actually do.
Masur and Ranzini replicated this on 797 Instagram users across the U.S. and the Netherlands, testing 1,620 different analytical specifications via specification-curve analysis. The finding reversed. In their 2025 sample, higher privacy concern predicted more self-disclosure, not less. The replication achieved exact directional consistency on only 32.5% of the original 40 paths; on most others, the effect either reversed or vanished. The reversal was robust across both geographic samples and stronger among males and respondents under 50.
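For readers who have not met the method, here is a toy specification-curve analysis on simulated data (not the authors’ code, data, or variable set, and only four specifications against their 1,620). The idea: estimate the same focal effect, concern on disclosure, under every defensible combination of controls, then inspect the whole distribution of estimates rather than one chosen model.

```python
# Toy specification-curve analysis on SIMULATED data; illustrative only.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 797  # matches the paper's sample size; the data itself is fake
concern = rng.normal(size=n)
age = rng.normal(size=n)
trust = rng.normal(size=n)
# Simulate a weak positive concern -> disclosure link plus confounding noise.
disclosure = 0.1 * concern + 0.3 * trust + rng.normal(size=n)

controls = {"age": age, "trust": trust}
effects = []
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        # Design matrix: intercept + focal predictor + this spec's controls.
        X = np.column_stack([np.ones(n), concern] + [controls[c] for c in subset])
        beta, *_ = np.linalg.lstsq(X, disclosure, rcond=None)
        effects.append((subset, beta[1]))  # coefficient on concern

# The "curve": sorted effect sizes across all specifications. The question
# is how many land positive, negative, or near zero.
for subset, b in sorted(effects, key=lambda e: e[1]):
    print(f"controls={list(subset) or 'none'}: concern effect = {b:+.3f}")
```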
Their interpretation is precise and worth quoting in spirit: people who worry about privacy now share more because they have given up. The authors call it possible “normalization of sharing” — but read the discussion section and the framing is darker. Privacy concern in 2025 reflects a generalized, abstract worry that is no longer paired with a concrete sense that protective action will work. The forest has split its inhabitants into two populations. Some retreat to the cozy web. Others stop trying to hide and over-share through resigned exposure. Both responses are rational under the visibility asymmetry; both are tragic.
A 2026 review of privacy statistics by Usercentrics surfaces the same split in raw numbers. Ninety-two percent of Americans report concern about online privacy. Three percent report understanding how privacy laws work. Fifty-six percent click “agree” without reading policies. Thirty-eight percent say they use social media less due to privacy fears. Thirty-six percent have deleted accounts entirely. Concern without comprehension produces both populations simultaneously: the retreaters and the resigned. If you only count the retreaters, you miss half the dark forest’s behavior.
There is one more useful distinction in the Masur-Ranzini paper. Privacy concerns (abstract worry) predicted more disclosure. Privacy risks (concrete, named threats) predicted less disclosure. People who could articulate a specific bad outcome shared less; people who only had a generalized unease shared more. That distinction matters for anyone designing software that asks users to make privacy decisions, and we’ll come back to it.
The cleanest critique of dark-forest framing comes from Erin Kissane’s 2024 essay “Against the dark forest,” published on her blog wrecka.ge. Her argument: the metaphor naturalizes what is actually a political failure. Treating online visibility as inherently dangerous lets platform companies, governments, and harassment campaigns off the hook for making it dangerous. The forest didn’t grow itself.
She is right, and the cybersecurity parallel actually strengthens her point rather than weakening it. In security, we don’t say “computers are inherently insecure” and accept it as a law of nature. We patch. We mandate disclosure timelines. We pass legislation — the EU Cyber Resilience Act took effect in late 2024, the EU NIS2 Directive expanded the scope of mandatory incident reporting, and U.S. agencies operate under standing executive guidance on disclosure. We fund standards bodies. The reason CVE counts keep climbing is partly that we’ve gotten better at finding vulnerabilities — which is governance success, not failure. The 12-to-1 attack-defense asymmetry is not a fundamental physical constant; it is a fixable property of the current threat-tools-policy regime. Bug bounty programs, coordinated disclosure, automated patching pipelines, and memory-safe language migration are all narrowing the gap, slowly.
By the same token, the social dark forest is governance failure, not anthropology. Twitter’s harassment problem, scraping for AI training without consent, surveillance capitalism’s invasion of private space — all of these are policy choices, made by specific actors, that could be unmade. Hiding is a rational individual response to a collective failure. It is not a natural condition of being online.
There is also one place where the structural analogy genuinely breaks down. In cybersecurity, hiding has very few negative externalities — a microsegmented network harms nobody. In social systems, hiding has real costs that fall disproportionately on people without the resources to maintain a private archipelago of group chats and paid newsletters. Bogna Konior’s 2025 monograph from Polity Press, The Dark Forest Theory of the Internet, makes this point in a different register: the philosophical mood of the cozy web — preciousness, in-group warmth, intentionally limited visibility — assumes a certain kind of cultural capital. The cybersecurity dark forest is paid for by IT budgets. The social dark forest is paid for in lost public discourse and weakened collective action. They feel similar from the inside; they cost different things from the outside.
So when Strickler builds DFOS and OpenNHP standardizes Network Hiding Protocol, they are doing something practically necessary — providing shelter under the current regime — but the long-term goal cannot be “everyone hides forever.” The long-term goal is fixing the regime.
For developers and tech leaders reading this in 2026, the dark forest framing has practical consequences. Three of them:
1. Treat visibility as a budget, not a default. Every public endpoint, every public profile, every post is a withdrawal from a finite reserve. The C2PA content-provenance standard for cryptographically signed media, encrypted-by-default messaging (Signal, MLS-based clients), single-tenant deployments, allowlist-based API exposure, and mTLS-gated service access are all visibility-budget tools. Use them deliberately. The default of “ship it open and patch later” was rational when the asymmetry was lower; it isn’t anymore. A concrete starting move: audit any public API your team owns for endpoints that don’t actually need to be on the open internet, then put them behind a VPN, an NHP-style hidden listener, or mutual TLS (a minimal mTLS sketch follows this list). The reduction in surface is usually larger than people expect.
2. Build for the cozy web, not against it. The next generation of consumer software is being built inside Discord servers, Slack workspaces, group chats, paid newsletters, and small-community Substacks. If your product strategy assumes a TikTok-shaped world of broadcast distribution, you are aiming at an audience that is actively retreating. The economically interesting design surface is private-but-portable — software that works inside someone’s small group without locking them in. The primitives are already shipping: federation (ActivityPub for the fediverse, Matrix for chat), cryptographic group membership (MLS, the Messaging Layer Security RFC), and content-addressed storage (IPFS, Iroh) all let you build for groups without owning the social graph; the second sketch after this list shows the content-addressing primitive. If you are building a consumer product right now, the question to ask is: how does this work when 200 people in a group chat want to use it together? That is where the customers are.
3. Distinguish concern from risk in your interfaces. Masur and Ranzini’s reversal is a warning to anyone designing security or privacy interfaces. Generic “privacy concern” no longer predicts protective behavior; if anything, it correlates with capitulation. What predicts protective behavior is concrete risk awareness — specific, named threats with specific, named actions. “Your data could be used for targeted advertising” produces fatalism. “If you post this photo with location metadata, three websites scrape it within 48 hours and indexed copies persist indefinitely” produces deletion. Build the second kind of message into your products (the third sketch after this list shows the pattern). Vague worry has been priced into the dark forest already; what moves the needle is showing the user the next twenty-four hours of consequence, named and concrete.
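First, the visibility-budget move from item 1, as code: a minimal sketch of an mTLS-gated listener using only Python’s standard library. The certificate file names are hypothetical, you need a CA you control to issue the client certificates, and a production deployment would sit behind a hardened reverse proxy rather than `http.server`.

```python
# A minimal mTLS-gated HTTPS listener; certificate paths are hypothetical.
import http.server
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
ctx.load_verify_locations(cafile="internal-ca.pem")  # your private CA
ctx.verify_mode = ssl.CERT_REQUIRED  # no valid client cert, no TLS session

server = http.server.HTTPServer(
    ("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler
)
server.socket = ctx.wrap_socket(server.socket, server_side=True)
# Clients without a cert signed by internal-ca.pem are rejected during the
# handshake, before a single HTTP request is parsed.
server.serve_forever()
```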
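Second, the private-but-portable primitive from item 2: a toy content-addressed store. Real systems (IPFS, Iroh) add chunking, multihash CIDs, and peer-to-peer retrieval, but the core idea fits in a few lines: the address is the hash, so members of a group can verify content without trusting whichever host served it.

```python
# A toy content-addressed store: the primitive behind IPFS/Iroh-style storage.
import hashlib

class ContentStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data
        return address  # share this address with the group, not a URL you own

    def get(self, address: str) -> bytes:
        data = self._blobs[address]
        # Self-verifying: tampering changes the hash, so a bad host is caught.
        assert hashlib.sha256(data).hexdigest() == address
        return data

store = ContentStore()
addr = store.put(b"notes shared with the group chat")
assert store.get(addr) == b"notes shared with the group chat"
```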
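Third, the concern-versus-risk distinction from item 3, as an interface decision: a sketch that swaps the generic privacy warning for a named, concrete consequence. It assumes the Pillow imaging library, and the warning copy is illustrative rather than validated UX text.

```python
# Concrete risk, not abstract concern: warn only when there is a named
# threat with a named action. Assumes Pillow (pip install Pillow).
from PIL import Image

GPS_IFD = 0x8825  # EXIF pointer to the GPS information block

def upload_warning(path: str) -> str | None:
    exif = Image.open(path).getexif()
    if not exif.get_ifd(GPS_IFD):
        return None  # nothing concrete to warn about, so stay quiet
    # Specific threat, specific action (per the Masur-Ranzini distinction).
    return (
        "This photo embeds GPS coordinates. Anyone who downloads it can "
        "read where it was taken. Strip the location metadata before posting?"
    )
```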
The dark forest theory began as a science-fiction metaphor in 2008 and has become, by early 2026, an empirical description of how the internet actually works. Roughly half of public traffic is non-human. The cozy web is absorbing the human traffic that remains. Infrastructure is being designed to be invisible by default. And the same 12-to-1 asymmetry between attack and defense that drives cybersecurity strategy now drives social-media strategy, because both are responses to a single underlying fact: visibility, in 2026, is a vulnerability.
This is not the end state. Erin Kissane is right that hiding forever is a counsel of despair, and that the forest is governance failure rather than nature. The work — for developers, for security engineers, for policy people — is to make visibility safe again, so the retreat doesn’t have to be permanent. Until then, the rational individual move is the one Liu Cixin’s universe predicted, the one Strickler named, and the one Network Hiding Protocol formalizes.
Be quiet. Be selective. Hide your light, until the forest is no longer dark.
Make visibility safe again — one signed claim at a time.
If hiding is the rational individual move under the current regime, the long-term repair is a stack of cryptographic guarantees that lets specific things become visible safely: signed identities, portable reputations, and tamper-evident provenance for the actions agents take. The Agent Trust Stack is our open-source attempt at that stack — identity (Chain of Consciousness), reputation (Agent Rating Protocol), and the provenance layer that ties them together. It does not fix the forest. It builds a single safe clearing inside it.
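For flavor, here is what a signed claim looks like at the bottom of such a stack: a sketch using the `cryptography` package’s Ed25519 primitives, not the actual agent-trust-stack API, with invented claim fields.

```python
# A minimal signed-claim sketch (Ed25519); illustrative, not the
# agent-trust-stack API. Requires: pip install cryptography
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()  # identity: a keypair stands in here
verifier = signer.public_key()

# Provenance: a claim about an action, serialized canonically (sorted keys,
# no whitespace) so signer and verifier sign and check identical bytes.
claim = {"agent": "agent://example", "action": "published-post", "ts": 1767225600}
payload = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()
signature = signer.sign(payload)

try:
    verifier.verify(signature, payload)  # raises InvalidSignature on tamper
    print("claim verified: tamper-evident provenance holds")
except InvalidSignature:
    print("claim rejected")
```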
pip install agent-trust-stack · npm install agent-trust-stack
See Hosted CoC →