What will the next plan for spam be?
You wake up to fifty-three new messages, and not one of them is what we used to call spam: no scams, no phishing, no Nigerian princes, no strangers pretending to be your bank. What’s actually in your inbox is a sales agent following up on a deal you never had, a recruiting agent sourcing for a role you never applied to, a procurement bot asking you to fill in a vendor form, three scheduling assistants negotiating a meeting that’s already been moved twice, and a research agent that read your blog post and would love to “loop you in.”
Every one of these messages is authenticated. SPF, DKIM, and DMARC are all green, every sender is a real domain owned by a real company doing real business, and by every technical definition we have for spam, every one of these messages is legitimate. They are also, every one of them, wasting your time.
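The point is easy to see mechanically: every check a receiving server runs comes back green. A sketch of what the receiver sees, parsing a (simplified, illustrative) Authentication-Results header of the kind defined in RFC 8601:

```python
import re

def parse_auth_results(header: str) -> dict:
    """Extract the verdict for each authentication method from an
    Authentication-Results header (RFC 8601, heavily simplified)."""
    results = {}
    for method in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{method}=(\w+)", header)
        if m:
            results[method] = m.group(1)
    return results

# An illustrative header from a hypothetical agent-run sender domain.
header = ("mx.example.net; spf=pass smtp.mailfrom=agent.vendor.example; "
          "dkim=pass header.d=vendor.example; dmarc=pass header.from=vendor.example")
verdicts = parse_auth_results(header)
assert all(v == "pass" for v in verdicts.values())  # fully authenticated, and still unwanted
```

Every verdict passes, because the sender really is who it claims to be. Nothing in the authentication stack asks whether you wanted the message.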
This is the next era of spam. The inboxes that look like the one I described above are still rare, but the underlying conditions are already in place. Spam has always been a problem of human attention being consumed without permission, and the conditions that made that the right framing are about to change. The defenses we built over the last twenty-five years were aimed at the wrong layer of the problem, and the only definition of spam that has ever held up to scrutiny is about to become the only one that survives.
A Brief History of “Consent vs. Content”
In 1997, Paul Vixie built the first real-time blackhole list at MAPS and gave us that definition. Spam, he wrote, is an issue about consent, not content. A message is spam, in Vixie’s formulation, if the recipient has not “verifiably granted deliberate, explicit, and still-revocable permission for it to be sent.” Nothing in that depends on what the message says, whether it’s well-written, or whether authentication passes. The only question is whether the recipient agreed to receive it.
In 2002, Paul Graham published “A Plan for Spam”, the most-cited piece of popular writing on the subject, and reframed the problem in economic terms. Spammers can disguise headers, rotate IPs, and forge senders, Graham observed, but they cannot disguise the message itself, because the message is the entire point of the operation. Write software that recognizes the words spammers have to use, and you hit them where it actually hurts: their conversion rate. The forged headers turned out to be incidental; what mattered was the cost of the human eyeball at the other end. Graham’s bet held: spam volume peaked at 89% of all email in 2010 and had fallen to 47% by 2024.
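Graham's scheme can be sketched in a few lines: count token frequencies in known spam and known ham, then combine per-token evidence for a new message under a naive independence assumption. A minimal sketch (not Graham's exact formula, which caps per-token probabilities and keeps only the fifteen most extreme tokens):

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesFilter:
    def __init__(self):
        self.spam, self.ham = Counter(), Counter()
        self.n_spam = self.n_ham = 0

    def train(self, text, is_spam):
        counts = self.spam if is_spam else self.ham
        for tok in tokenize(text):
            counts[tok] += 1
        if is_spam:
            self.n_spam += 1
        else:
            self.n_ham += 1

    def spam_probability(self, text):
        # Sum per-token log-odds with add-one smoothing, starting from
        # the prior odds of spam vs. ham in the training data.
        log_odds = math.log((self.n_spam + 1) / (self.n_ham + 1))
        for tok in set(tokenize(text)):
            p_spam = (self.spam[tok] + 1) / (sum(self.spam.values()) + 2)
            p_ham = (self.ham[tok] + 1) / (sum(self.ham.values()) + 2)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))
```

Words that appear mostly in spam push the score toward 1; words from legitimate mail pull it toward 0. The whole approach depends on spam having a recognizable vocabulary, which is exactly the assumption the third collapse breaks.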
Eleven years after Graham, David Chouinard published his own “A New Plan for Spam” and proposed taking the economic attack further: don’t just filter, retaliate, with bots that auto-reply to spammers and overwhelm their reply-handling labor. Around the same time, Finn Brunton’s Spam: A Shadow History of the Internet argued the broader point that spam is “the negative shape of online community,” meaning that every form of online sociality produces a corresponding form of spam that exploits its mechanisms of attention.
This essay is a prediction of the next move in that lineage, and the argument it makes is that Vixie was right all along.
The Era of Industrial-Scale Persuasion
Spam has always been an economics problem, triggered each time by the same underlying event: the marginal cost of sending one more message collapsing toward zero.
In the 1990s, sending a piece of direct mail cost twenty-five cents to a dollar per piece, and the economics forced selectivity. Then email arrived, the cost dropped four orders of magnitude, and by 2008 retail spam-for-hire ran around eighty dollars per million messages. The famous “Spamalytics” study found that of 350 million pharmacy-spam messages, only twenty-eight converted into customers—a response rate of roughly one in 12.5 million—and even at that conversion, the operation was profitable. This was the spam Graham wrote about: high-volume, low-personalization, written in the universal dialect of bad English and dollar signs, and Bayesian filters could see it from orbit.
Botnets brought the second collapse: the cost per message fell another order of magnitude, and the labor cost of sending fell to zero, because you no longer needed an operations team, only rented compromised machines and scripts. The defenders responded with reputation systems (SPF in 2006, DKIM in 2007, DMARC in 2012), moving the fight from content to identity. By 2024, major providers were enforcing bulk-sender requirements, and the spam rate hit its lowest point since 2003.
The third collapse is happening right now, and the shape of it is different. The cost per message did not fall. What collapsed was the cost of writing a message indistinguishable from a human one. Until late 2022, the rule was that cheap spam was generic and personalized spam was expensive, because spear-phishing required human labor that put a ceiling on targeting. That ceiling is gone. A personalized email written by an LLM costs fractions of a cent to generate, putting it within an order of magnitude of bulk botnet spam, except each message can be researched and contextualized.
The trajectory of this capability improvement tells the whole story. In an August 2023 study by Harvard researchers comparing phishing success, human security experts using the psychological “V-Triad” framework beat early LLMs handily: 69–79% click-through for humans versus 30–44% for fully AI-generated emails. But by late 2024, a follow-up study showed that gap had completely vanished. Armed with frontier models and OSINT tools, fully automated AI matched human security experts exactly, both achieving a 54% click-through rate, with a hybrid AI-human approach edging higher to 56%. In just over a year of model iteration, the cost of expert-level persuasion collapsed.
This is industrial-scale persuasion at commodity prices. Microsoft’s Digital Defense Report notes AI-driven phishing is three times more effective than traditional campaigns. Barracuda reported in June 2025 that 51% of all spam was already AI-generated, and by late 2025, state-aligned actors were running autonomous Claude operations across global targets.
Existing defenses are not built to handle this. Authentication passes because the senders are real domains. Reputation is intact because legitimate-looking businesses are doing the sending. Content classifiers see polite, well-written messages that look like a hundred other polite messages. The cheap-and-generic signal that Bayesian filters depended on is gone.

A Poverty of Attention, A Wealth of Agents
The category we call spam is an artifact of one specific bottleneck, and the bottleneck is about to move.
In 1971, Herbert Simon wrote one sentence in “Designing Organizations for an Information-Rich World” that should be required reading for anyone working on email:
A wealth of information creates a poverty of attention.
The whole apparatus of spam defense, from Graham’s filters to Vixie’s blackhole lists, is a defensive structure built around that poverty. Spam is a problem because it consumes a scarce resource (human attention) without permission. That cost changes when an AI agent reads the inbox first.
A triage agent of the kind that Superhuman, Shortwave, SaneBox, Inbox Zero, Google’s Gemini, and Microsoft Copilot already ship can read fifty emails in seconds, summarize the meaningful ones, archive the rest, and surface a digest of the four things that actually need your decision. The average knowledge worker receives 117 emails per day, and Superhuman reports that 82% of professionals using its product save at least a full workday per week with AI features. If your inbox has an agent in front of it, five hundred emails a day becomes manageable. The bottleneck is no longer your eyeballs, but the triage agent’s judgment.
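A triage loop of this shape is simple to state: score each message, surface the few above a threshold, archive the rest. A minimal sketch with an invented stand-in scoring heuristic where a real product would call a language model:

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def score(msg: Email, known_contacts: set) -> float:
    """Stand-in for the model's judgment call: a real triage agent
    would weigh relevance, urgency, and whether a decision is needed."""
    s = 0.0
    if msg.sender in known_contacts:
        s += 0.6                               # known correspondent
    if "?" in msg.body:
        s += 0.3                               # asks for a decision
    if "unsubscribe" in msg.body.lower():
        s -= 0.5                               # bulk-mail signal
    return s

def triage(inbox: list, known_contacts: set, threshold: float = 0.5):
    surface, archive = [], []
    for msg in inbox:
        (surface if score(msg, known_contacts) >= threshold else archive).append(msg)
    return surface, archive
```

The interesting part is not this loop but its attack surface: every input to `score` is attacker-controlled text, which is precisely what the next section is about.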
Under those conditions, the category we currently call spam—defined as unwanted volume against human attention—starts to lose its meaning.
What happens next?
Once human attention becomes a gated resource, the prize is no longer reaching the human directly but tricking the gatekeeper into passing the message through.
A modern ad-tech executive trying to reach a busy CEO already knows that the CEO’s attention is mediated by staff. When the staff is replaced by software, the marginal cost of trying to manipulate that software approaches zero. Spam stops meaning unwanted volume against humans and starts meaning messages engineered to fool a triage agent into believing they are urgent, personal, or eligible for escalation. Prompt injection moves from a chatbot security curiosity to a central concern of email infrastructure. Simon Willison’s “lethal trifecta” of private data, untrusted content, and external communication describes exactly the failure mode of a triage agent reading inbound mail.
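Willison's trifecta reads directly as a policy check: an agent that holds private data, reads untrusted content, and can communicate externally is exploitable, so deny any configuration that grants all three. A sketch (the capability names are mine; the rule is Willison's):

```python
PRIVATE_DATA = "private_data"
UNTRUSTED_CONTENT = "untrusted_content"
EXTERNAL_COMMS = "external_comms"

LETHAL_TRIFECTA = {PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMMS}

def is_safe(capabilities: set) -> bool:
    """An agent may hold at most two of the three capabilities: all
    three together let injected text exfiltrate private data."""
    return not LETHAL_TRIFECTA <= set(capabilities)

# A triage agent reads inbound mail (untrusted content) over a private
# inbox (private data). If it can also send replies, the trifecta is
# complete and any email it reads can steer it.
assert is_safe({PRIVATE_DATA, UNTRUSTED_CONTENT})
assert not is_safe({PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMMS})
```

Note what this implies for email specifically: a triage agent necessarily holds the first two capabilities, so the entire design question becomes how tightly to gate the third.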
The supply side is lining up. A Microsoft-sponsored IDC Info Snapshot projects 1.3 billion AI agents by 2028. Salesforce’s Agentforce hit $800M ARR in early 2026, and the Model Context Protocol now processes over a billion tool calls per month at Anthropic alone. Most of these agents will eventually send email because it is the only universal protocol with built-in identity, asynchronous delivery, and an audit trail.
The first signs of this collision are already visible. On Christmas Day 2025, the legendary programmer Rob Pike was spammed by an unsolicited “act of kindness” from an AI agent in a multi-agent simulation run by a non-profit. The email itself was polite, but as Willison documented, the implication was severe: agents operating on abstract goals were granted the autonomy to send unsolicited emails to real people without human review.
That same month, security firm Aurascape documented the first real-world LLM-search-poisoning campaign, in which attackers manipulated public web content so that AI-powered support systems would confidently recommend scam airline phone numbers to users asking for help. As Aurascape CEO Moinul Khan noted, attackers are now targeting the systems that write the answers. One incident is a polite agent overstepping; the other is an attack designed to manipulate the AI intermediary itself. Both are previews.

What will the next plan be?
The defenses we built over the last twenty-five years were aimed at recognizing bad actors. They will not help against well-meaning agents acting on reward signals, and they will not help against adversaries whose target is the software that decides what the human sees.
Vixie was right that spam is about consent rather than content. Filters and reputation systems were a twenty-year detour that handled the first two collapses; the third is underway, and the fourth, the collapse in the cost of manipulating the gatekeeper, is coming. What distinguishes good email from bad in this new world will not be content, but whether the recipient (or their agent, acting under policy) actually agreed to receive it.
The next plan for spam needs to attack the same place Graham’s did—the economics of the operation—but in defense of a different scarce resource: verified consent at the moment of delivery.
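Vixie's definition translates directly into a delivery-time check: deliver only if the recipient holds a grant for this sender that was made explicitly and has not since been revoked. A minimal sketch of such a consent ledger (the class and its API are hypothetical, not an existing system):

```python
class ConsentLedger:
    """Vixie's test, mechanized: deliberate, explicit, and
    still-revocable permission, checked at the moment of delivery."""

    def __init__(self):
        self._grants = set()  # (recipient, sender) pairs

    def grant(self, recipient: str, sender: str):
        """Record an explicit, deliberate opt-in by the recipient."""
        self._grants.add((recipient, sender))

    def revoke(self, recipient: str, sender: str):
        """Permission must stay revocable; revocation is immediate."""
        self._grants.discard((recipient, sender))

    def may_deliver(self, recipient: str, sender: str) -> bool:
        """The only question that matters: did the recipient agree?"""
        return (recipient, sender) in self._grants
```

Nothing in this check reads the message body, which is the point: a consent-layer defense is indifferent to how well the message is written, so flawless LLM prose buys the sender nothing.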
Email will remain the universal protocol. What changes is who does the reading. We spent twenty-five years filtering cheap noise to protect human attention. Today, that noise simply runs interference for industrial-scale persuasion targeting our proxies. Graham won the last war because failures of consent showed up as bad writing. Now that the writing is flawless, the war for attention is over, and the war for access has begun. We have a window to draw the right conclusions before the third collapse compounds into the fourth.
Sources & References
- Vixie, Paul (1997): Mail Abuse Prevention System (MAPS) and definition of spam as an issue of consent.
- Graham, Paul (2002): “A Plan for Spam”, popularizing Bayesian filtering.
- Chouinard, David (2013): “A New Plan for Spam”, proposing active anti-spam bots.
- Spamalytics (2008): “Spamalytics: An Empirical Analysis of Spam Marketing Conversion” by Kanich et al. (350 million messages yielded 28 conversions).
- Brunton, Finn (2013): Spam: A Shadow History of the Internet (MIT Press).
- Simon, Herbert A. (1971): “Designing Organizations for an Information-Rich World” (“A wealth of information creates a poverty of attention”).
- Harvard Phishing Study 1 (Aug 2023): “Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models” by Heiding et al. (Click-through rates: 19–28% Control, 30–44% GPT, 69–79% Human V-Triad Expert).
- Harvard Phishing Study 2 (Dec 2024): Follow-up study on AI phishing (Click-through rates: 12% Control, 54% fully AI-automated, 54% Human Expert, 56% Hybrid).
- Rob Pike AI Incident (Dec 2025): Detailed in Simon Willison’s Weblog, “How Rob Pike got spammed with an AI slop ‘act of kindness’”.
- Aurascape Search Poisoning (Dec 2025): “Aurascape Researchers Expose New AI Attack That Sends Travelers To Scam Airline Support Call Centers” (Business Wire).
- Industry Data (2024–2026): DMARC adoption, 117 emails/day average (Microsoft Work Trend Index), Superhuman time-saving metrics, Microsoft/IDC Info Snapshot 1.3B agent projections, and Salesforce Agentforce ARR milestones represent aggregated state-of-the-industry reporting as of early 2026.