On bikeshedding, Sayre’s Law, and why a three-second file delay became a board-level compliance breach.
In 1957, the British historian C. Northcote Parkinson published Parkinson’s Law, a slim collection of satirical essays. One of them described a fictional finance committee approving items on a meeting agenda. They dispatched a £10 million atomic reactor in two and a half minutes — the design was too technical for anyone present to question. They then spent forty-five minutes on a £350 bicycle shed, because everyone had an opinion about whether the roof should be asbestos or aluminum, and felt their voice should be heard. The committee finally turned to a few pounds’ worth of annual refreshments and argued about that longer still.
Nearly seventy years later, in a comedy sketch titled The Escalation — A Play in One Act, four executives at the fictional accounting firm Henderson & Rowe spent three days escalating a three-second file delay into a board-level compliance crisis. The actual problem — someone had backed up forty gigabytes of vacation photos to the shared drive — was resolved in forty minutes by an IT agent named Alex. The escalation continued for another day. By the time the CEO heard about it at a board dinner, the words “server incident,” “unauthorized data,” and “possible compliance breach” were in the air.
What Parkinson noticed and the play stages is the same machinery viewed from opposite ends. Parkinson described a committee that ignored the reactor to argue about the bike shed. The play describes a hierarchy that turned the bike shed into a reactor. Both surface the same underlying phenomenon: when the actual stakes are low, the organizational response inflates to fill the available room. What follows is a working theory of why that happens, and a small set of operating moves that keep your version of Henderson & Rowe from forming an IT steering committee in response to the printer.
Parkinson’s Law of Triviality — that the time spent on a decision is inversely proportional to its importance — has migrated so far from its origin that most people who say “bikeshedding” have never read the original. The term in its current form was coined in 1999 by Poul-Henning Kamp, a Danish developer working on FreeBSD, who got tired of marathon mailing-list debates over trivial changes and posted a now-famous message asking, “Why should I care what color the bike shed is?” The phrase stuck.
The play does something subtler than a textbook bikeshedding example. Janet, the office manager, calls in a problem she describes as “barely noticeable.” She is not bikeshedding; she’s the only person in the play who reports the situation accurately. Greg, her boss and the regional director, gets the report and asks for a full incident report with timeline and recommendations. Patricia, Greg’s boss, escalates further: data governance, compliance, HR involvement, a leadership meeting on Friday. The bike shed, as it travels up the hierarchy, becomes more elaborately roofed at each stop.
The reason for the inversion is not that Greg and Patricia are stupid. Each layer of management has its own risk vocabulary — the set of words available for taking action — and that vocabulary is the only tool the manager has when an ambiguous signal arrives. Greg’s vocabulary is operational risk; he calls for an incident report because that is what an operational risk manager does. Patricia’s vocabulary is compliance and policy; she sees forty gigabytes of unauthorized data and reaches for the data governance framework, because that is the framework she has been given to apply. The bike shed becomes a reactor not by ignorance but by the local availability of reactor-shaped tools.
The corollary to Parkinson’s observation is Sayre’s Law, named for Wallace Sayre, a Columbia University political scientist who died in 1972. The phrasing Sayre actually used was, according to his colleague Herbert Kaufman’s attestation in the Yale Book of Quotations, “The politics of the university are so intense because the stakes are so low.” The pithier form that circulates today, “In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake,” entered print the year after his death; the Wall Street Journal quoted a version of the law on December 20, 1973.
Sayre was describing faculty politics, but the law travels well. The intensity of organizational response in the play is wildly disproportionate to the actual technical event: a three-second delay generates an incident report, a steering committee review, a leadership meeting on the schedule, an HR consult, and a board dinner conversation. The CEO and multiple executives are spending cycles on this. The payroll cost of the response easily exceeds the bill Alex sends for the actual fix: one hour of labor.
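The disproportion is easy to make concrete. Here is a back-of-envelope sketch; the play supplies only the one-hour bill, so every other hour and rate below is an invented, deliberately conservative assumption:

```python
# Back-of-envelope comparison. None of these figures appear in the play;
# the hours and rates are invented, deliberately conservative assumptions.

FIX_HOURS = 1    # Alex's actual bill: one hour of IT labor
IT_RATE = 95     # assumed hourly rate for an IT agent, in dollars

# Assumed hours each role sank into the response across the three days
response_hours = {"Janet": 4, "Greg": 10, "Patricia": 6, "CEO": 1}
# Assumed loaded hourly rates per role
loaded_rate = {"Janet": 60, "Greg": 120, "Patricia": 200, "CEO": 400}

fix_cost = FIX_HOURS * IT_RATE
response_cost = sum(h * loaded_rate[role] for role, h in response_hours.items())

print(f"cost of the fix:      ${fix_cost:,}")        # $95
print(f"cost of the response: ${response_cost:,}")   # $3,040
print(f"ratio: {response_cost / fix_cost:.0f}x")     # 32x
```

Even with conservative numbers, the response costs roughly thirty times the fix, and that is before counting the leadership meeting that was about to happen.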
Sayre’s Law works because real stakes constrain the response vocabulary. If a building is on fire, no one calls it a compliance issue. If a server is actually compromised, “data governance” is the wrong frame — you call the incident response team and the lawyers and the regulators in that order. Genuine catastrophe imposes a kind of vocabulary discipline because the wrong words produce visibly absurd responses. Triviality, by contrast, is permissive. A three-second delay is ambiguous enough to support multiple interpretations, and the manager who reaches for the most consequential interpretation will not be obviously contradicted by reality. There is no fire to point at.
The play’s CEO is the one character who has access to actual stakes. He talks to the board. He knows what a real compliance breach looks like, because he has personally absorbed the cost of one or seen a peer absorb it. When he asks Alex “That’s it?” he is performing a calibration: matching the words he has been hearing against the threshold at which his board would expect him to act. The threshold is not met. He cancels the meeting. He asks for a one-sentence summary.
In 1988, Roger Kasperson, Ortwin Renn, and colleagues at Clark University and Decision Research published “The Social Amplification of Risk: A Conceptual Framework” (SARF, in the literature’s shorthand) in the journal Risk Analysis. Their question was a puzzle that public health and environmental agencies had been struggling with for a decade: why did some objectively small risks generate massive public response, while objectively large risks (automobile fatalities, indoor radon) generated almost none?
Their answer was that risk signals travel through “stations of amplification” — institutions, media outlets, social groups, expert intermediaries — and each station can amplify or attenuate the signal according to its own organizational logic. The objective risk and the perceived risk diverge because the signal is processed, not transmitted. Each station imports its own vocabulary, its own accountability concerns, and its own audience expectations into the next handoff.
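A toy model makes the mechanism concrete. This is not Kasperson and Renn’s formalism, just a sketch of the one property that matters here: each station applies its own gain to the signal it received, never to the original event, so modest per-station distortions compound. The station names and gain values below are invented:

```python
from dataclasses import dataclass

@dataclass
class Station:
    """One amplification station: a manager, an institution, a news desk."""
    name: str
    gain: float  # >1 amplifies, <1 attenuates; these values are invented

def perceived_risk(objective_signal: float, chain: list) -> float:
    """Pass a signal through a chain of stations.

    The key property: each station processes the signal it *received*,
    never the original event, so per-station distortions compound.
    """
    signal = objective_signal
    for station in chain:
        signal *= station.gain
        print(f"after {station.name:<24} signal = {signal:5.1f}")
    return signal

chain = [
    Station("Janet (accurate report)", 1.0),
    Station("Greg (operational risk)", 3.0),
    Station("Patricia (compliance)", 3.0),
    Station("board dinner", 2.0),
]

perceived_risk(objective_signal=1.0, chain=chain)
# 1.0 -> 3.0 -> 9.0 -> 18.0: an 18x gap between the event and the board's
# picture of it, with no single station doing anything but its local job
```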
The play maps onto SARF with almost embarrassing precision. Janet’s signal is the objective one: a three-second delay, perceived as minor by users, resolved by IT in forty minutes. Greg receives the signal and amplifies it through his operational-risk lens — “critical performance degradation,” “entire server infrastructure,” “root cause analysis.” Patricia receives Greg’s amplified signal and amplifies it again through her compliance lens — “unauthorized data on a production server,” “data governance issue,” “HR.” By the time the signal reaches the board dinner, an audit-committee member describes it as a “server incident involving unauthorized data and a possible compliance breach.” The original signal is no longer recoverable from the words being used.
A finding from the follow-up literature on SARF is worth pulling out specifically. Once a risk signal has been amplified, attenuation lags reality. When a contamination event ends and objective risk drops to zero, public risk perception remains elevated long after — it is not corrected by the disappearance of the underlying threat. The play stages this in microcosm at the very end. The crisis is over. The CEO has spoken. The meeting is cancelled. And Greg sends one more ticket. The crisis is gone. The paperwork lives forever.
The standard satire of corporate dysfunction puts the CEO at the apex of the dysfunction — out of touch, demanding, the source of pressure that propagates downward. The play does something more interesting. The CEO is the only character who de-escalates. He hears the amplified signal, asks Alex to restate the original signal, recognizes the mismatch, and corrects.
This is not because CEOs are smarter than middle managers. It’s structural. The CEO of a midsize firm is closer to actual stakes than anyone in the management chain below him. He has personally negotiated with regulators, sat in a deposition, watched a competitor get fined, taken a board call about a real breach. He has a calibrated risk threshold because his job has installed one in him. The middle managers don’t. Their risk vocabulary has been issued to them — the words came from compliance training, an internal audit, an industry seminar — and the calibration is missing.
The mechanism is not unique to executives. Alex has the same advantage from the opposite end: he has run the diagnostics, moved the file, watched the drive return to normal. The difference between Alex and Greg is not seniority. It’s contact with the underlying event.
This is the steel-man for escalation in general, and it matters: real escalation prevents real harm. A junior agent who misclassifies a phishing report as user error costs the company a breach. A first-line oncall who fails to wake their manager during a regional outage costs the company a customer. Sometimes the manager genuinely has context the first-line agent doesn’t — a brewing pattern across multiple tickets, a regulatory deadline, a known fragility. The escalation infrastructure exists because, properly used, it routes around the limitations of any single observer. The play is not an attack on escalation. It is an attack on uncalibrated escalation — the reflex performed against ambiguous signals by managers whose risk vocabulary exceeds their stakes contact.
The detail most worth noticing in The Escalation is that the actual problem is fixed before the escalation begins. Janet calls. Alex diagnoses. Alex moves the folder. Janet confirms the fix. Then Greg calls. This is structurally important: the escalation chain is processing a closed ticket. The crisis is not the technical problem; the technical problem is gone. The crisis is now the response to the response to the response.
This is the play’s hardest insight and the one most useful to a working tech leader. Corporate escalation can become an autonomous organizational process that runs on anxiety rather than problem state. Once it spins up, it has its own momentum. The IT steering committee, mentioned in passing, was formed after a previous printer incident; its existence now justifies further escalation, which produces further committees, which justify further escalations. The organizational scar tissue from past overreaction becomes the substrate for future overreaction.
Once you start watching for this pattern, it shows up everywhere. The retrospective that was meant to capture lessons from one bad incident calcifies into a ritual after every minor anomaly. The status meeting created during a real crisis persists, unchanged, long after the crisis ends. The post-incident report template balloons from a half-page to twelve because each prior incident exposed a gap the template now demands every future incident address. Each accretion was justified once. None is ever removed.
There is a hopeful version of this. Amy Edmondson’s 1999 paper in Administrative Science Quarterly on psychological safety found that environments where people fear blame for mistakes systematically produce worse error reporting, which produces worse outcomes. Alex’s speech at the end of the play — the refusal to name the photo uploader, on the grounds that the last reprimand had taught the office to hide mistakes from IT — is a working application of that finding. When the CEO accepts the argument and asks for the one-sentence summary, he is choosing to reduce his organization’s scar tissue rather than add to it. It is the most managerially competent moment in the play, and it takes ten seconds.
The practical takeaways are smaller than the theoretical scaffolding suggests, and that is the point. The interventions that work against organizational amplification are mostly small habits, repeated.
Ask for the original signal. When a problem reaches you through a relay, ask the equivalent of “That’s it?” Find the words the first reporter used. Compare them to the words you are hearing now. The gap is the amplification, and its size tells you what your organization is currently treating as ambiguous.
Watch the vocabulary translation. When you hear “compliance issue,” “data governance,” “critical performance degradation,” ask what the underlying event was. Sometimes the right words for an event are escalation words. But uncalibrated vocabulary is the first symptom of bike-shed inflation, and the words tell you which station did the translating.
Distinguish first-contact resolution from organizational closure. A ticket can be technically closed and the organizational crisis still alive. The metric that matters is not “how long until the technical problem is fixed” but “how long until the organization stops responding to the problem.” When the gap between the two is large, you have an amplification problem, not a technical problem.
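That gap is measurable if you log both kinds of activity. A sketch, using a hypothetical event log whose schema and timestamps are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical event log for one incident; schema and timestamps invented.
# "technical" events touch the problem itself; "organizational" events
# respond to the response.
events = [
    ("2024-03-04 09:12", "technical",      "Janet reports 3-second delay"),
    ("2024-03-04 09:52", "technical",      "Alex moves folder; drive normal"),
    ("2024-03-04 14:30", "organizational", "Greg requests incident report"),
    ("2024-03-05 10:00", "organizational", "Patricia schedules leadership meeting"),
    ("2024-03-07 16:45", "organizational", "Greg files one more ticket, for his files"),
]

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

technical_close = max(parse(ts) for ts, kind, _ in events if kind == "technical")
org_close = max(parse(ts) for ts, kind, _ in events)

gap: timedelta = org_close - technical_close
print(f"technical close:      {technical_close}")
print(f"organizational close: {org_close}")
print(f"amplification gap:    {gap}")  # 3 days, 6:53:00 on this log
```

Tracked across incidents, a growing gap while time-to-fix stays flat tells you the problem is the relay chain, not the systems.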
Audit your scar tissue. Steering committees, mandatory reports, post-mortem templates, status meetings — every one was created in response to a real event at some point. Some are still load-bearing. Most are not. Once a year, walk the list. The test is not “could this be useful?” but “is the cost of running it justified by the events it has actually caught in the past year?” If the answer is no for three quarters of the items on the list, you are running on scar tissue.
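The audit itself fits in a few lines once the numbers are gathered. A sketch with invented costs, catch counts, and an arbitrary cost-per-catch threshold; the shape of the test is the point, not the threshold:

```python
# Hypothetical annual scar-tissue audit; every cost, catch count, and the
# threshold below are invented numbers for illustration.
processes = [
    # (name,                         annual cost $, real events caught this year)
    ("IT steering committee",        48_000, 0),
    ("post-incident template v12",   30_000, 1),
    ("weekly crisis status meeting", 26_000, 0),
    ("phishing-report triage",        9_000, 14),
]

COST_PER_CATCH_LIMIT = 25_000  # arbitrary; tune to your payroll

for name, cost, caught in processes:
    keep = caught > 0 and cost / caught < COST_PER_CATCH_LIMIT
    print(f"{name:<29} ${cost:>6,}/yr  caught={caught:<3} -> "
          f"{'keep' if keep else 'candidate for retirement'}")
```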
Protect your reporters. The single largest variance in error reporting is whether people believe they will be punished for telling the truth. Alex’s refusal to name the photo uploader is not nobility; it is operational discipline. If the CEO has him fired for it, the next person who notices a small problem will not call IT, and the next small problem will not be a small problem. The cost of every reprimand is information you no longer get.
A small Parkinson-shaped coda. After the meeting is cancelled, the play ends with Greg sending one final ticket: he wants a more detailed version of the incident report, for the meeting he now knows is cancelled. He wants it for his files.
This is the residual amplification — the part of the signal that outlives the event. Somewhere in your organization right now, someone is preparing a deck for a meeting that should have been an email about a question that was already answered. The technology, as the play’s dedication says, was never the problem.
The trick — for tech leaders, IT teams, and anyone with the standing to ask “That’s it?” out loud — is not to abolish the system that produces this. It is to keep the de-amplification station staffed. One sentence. Just the facts. Cancel the meeting.
The original signal is the only thing worth escalating.
The play’s CEO de-escalated because he could re-ask Alex what actually happened and compare it to the words he’d been hearing. Most organizations cannot run that comparison: by the time a signal reaches the top, the original event has been overwritten by four layers of vocabulary translation. Chain of Consciousness is the structural fix — an immutable record of what the agent actually observed, decided, and did, signed at the moment of the event, retrievable later without going through the amplification chain. When Greg sends his “just-for-the-files” ticket, the record is still the record. The vocabulary cannot rewrite it.
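Mechanically, what such a record has to provide is small: entries appended at event time, each chained to its predecessor and signed, so later vocabulary cannot rewrite history without breaking the chain. The following is a minimal illustration of that general technique (hash chaining plus an HMAC signature); it is a sketch of the idea, not the chain-of-consciousness package’s actual API:

```python
import hashlib, hmac, json, time

SIGNING_KEY = b"agent-key"  # stand-in; a real system would use asymmetric keys

def append_record(log: list, observed: str, decided: str, did: str) -> dict:
    """Append one signed, hash-chained entry describing an event as the
    agent saw it. Any later edit breaks the chain from that point on."""
    entry = {
        "ts": time.time(),
        "observed": observed,
        "decided": decided,
        "did": did,
        "prev": log[-1]["sig"] if log else "genesis",
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute every signature and link; True only if nothing was rewritten."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "sig"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["sig"], expected):
            return False
        prev = entry["sig"]
    return True

log = []
append_record(log, "3-second delay opening shared files",
              "checked drive capacity", "moved 40 GB of photos off the share")
assert verify(log)

# Greg's vocabulary cannot rewrite the record:
log[0]["observed"] = "critical performance degradation"
assert not verify(log)
```

Whatever the production details, the invariant is the one the CEO needed: the original signal stays retrievable, unedited, from the moment of the event.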
Hosted Chain of Consciousness · Verify an agent’s record · pip install chain-of-consciousness · npm install chain-of-consciousness