What Spreads: A Retrospective
Between 2008 and 2014 I spent most of my working hours studying how content spreads online — first writing entries for the early article database at Know Your Meme, then working as a Viral Media Researcher at BuzzFeed. This is what I observed that still matters.
Shifting goalposts
At ROFLcon II in 2010, the writer and media critic Ethan Zuckerman criticized Know Your Meme for having "shifting goalposts" when it came to the confirmation process for user-submitted entries. I was the person making most of those decisions at the time, and I was defensive about it. In retrospect, Ethan was absolutely right.
When I look back at the meme articles I wrote and approved during my tenure, they are saturated with my personal biases — my aesthetic preferences, my sense of humor, what I found interesting or tiresome. The confirmation process was presented as editorial gatekeeping for an internet encyclopedia, but it functioned as a taste-making system.
This pattern generalizes. KYM's early database was shaped by a small group of contributors with a specific demographic and cultural profile. Wikipedia's systematic bias toward the perspectives of Western, English-speaking men is well-documented. Recommendation algorithms are trained on historical engagement data, which means they replicate whatever preferences were already dominant. The mechanism changes; the principle doesn't. When human judgment is the filter, the filter has a worldview.
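The feedback loop in that last claim is simple enough to sketch. Below is a toy simulation — all numbers, labels, and the random seed are invented for illustration — of a recommender that surfaces items in proportion to their historical engagement. Two items of equal quality start with a slight imbalance, and surfacing an item earns it more engagement; the loop is a rich-get-richer process, so the initial bias tends to persist rather than wash out:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility of this sketch

# Two equally good items; "A" starts with a slight engagement head start.
engagement = {"A": 6, "B": 4}

# Each round, the recommender surfaces one item with probability
# proportional to its past engagement, and the surfaced item earns
# one more engagement event -- a Polya-urn-style feedback loop.
for _ in range(1000):
    pick = random.choices(list(engagement), weights=list(engagement.values()))[0]
    engagement[pick] += 1

share_a = engagement["A"] / sum(engagement.values())
print(f"A's share of engagement after 1000 rounds: {share_a:.2f}")
```

Nothing about item A is better; the recommender simply replicated whatever preference was already dominant when training began.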
The emotional triggers are not subtle
The research I did at BuzzFeed was largely quantitative: what got shared, how many times, from where, by whom. The pattern that held across almost everything we analyzed: content spreads when it produces a specific emotional response that the person wants to share as a signal about themselves.
Not "I found this useful." Not "this is accurate." Something closer to: "the world should know I felt this." Awe, outrage, laughter, tenderness. Sometimes nostalgia, which functions as identity verification — this is who we were, and I'm someone who remembers.
This is not manipulation in the conspiracy-theory sense. It's closer to the way music works — it produces an experience, and that experience generates an impulse. The manipulation enters when you design content to trigger the impulse without delivering anything behind it: a headline calibrated to outrage with an article that doesn't justify the emotion, a photo captioned for tenderness toward something that isn't tender. By 2012, we could do that systematically. By 2015, every major publisher was. By 2020, the model had escaped the publishers entirely.
The work I was producing at the time tried to frame this constructively. A piece I published in 2011 — later covered by PCMag — made the case that empathy was the most durable driver of sharing: not format, not meme references, but the feeling of being genuinely recognized in your own experience. That was accurate as far as it went. What I underweighted was that outrage was also a potent vector — and considerably easier to produce. Empathy requires craft; the thing has to actually land. Outrage only requires provocation. An industry that had learned to measure sharing volume and optimize for it was going to find that asymmetry on its own, and it did.
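The asymmetry argument can be made concrete with a back-of-the-envelope model. Every number below is invented for illustration — the point is only the shape of the result: if empathy content costs more effort per piece and "lands" less often, while outrage is cheap and reliably provokes, a publisher optimizing expected sharing volume will be pulled toward outrage even when a successful empathy piece outperforms a successful outrage piece:

```python
EFFORT_BUDGET = 100  # units of editorial effort per week (invented)

content_types = {
    # cost per piece, probability the piece lands, shares when it lands
    "empathy": dict(cost=10, hit_rate=0.2, shares=5000),
    "outrage": dict(cost=2,  hit_rate=0.6, shares=3000),
}

expected = {}
for name, c in content_types.items():
    pieces = EFFORT_BUDGET // c["cost"]          # how many pieces the budget buys
    expected[name] = pieces * c["hit_rate"] * c["shares"]
    print(f"{name}: {pieces} pieces, expected shares = {expected[name]:.0f}")
# empathy: 10 pieces, expected shares = 10000
# outrage: 50 pieces, expected shares = 90000
```

Under these made-up numbers, outrage yields nine times the expected sharing volume per unit of effort. Any industry measuring and optimizing that metric finds the same gradient.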
The cost of removing friction
When I was at Know Your Meme in 2008, internet memes were small. They spread through forums and image boards, mutated through the communities that touched them, and arrived at any given person already shaped by hundreds of contributors. The spread was diffuse and participatory. Authorship was distributed.
BuzzFeed was doing something different: identifying what spread and manufacturing more of it at industrial scale. The emotional mechanics were the same. The authorship had collapsed. Content that would have taken weeks to mutate through a community now arrived fully formed, optimized, ready to be shared without modification.
What I understood too late was that friction had been doing work that nobody credited it with. The effort required to create and distribute content kept the system honest in ways that weren't visible until the friction was gone. What remained was the trigger without the community context that had previously given the trigger meaning. Outrage that would have been localized and temporary became ambient and permanent. Nostalgia that would have been specific became generic and weaponizable.
Why it matters now
Synthetic media has removed the last remaining cost. In 2012, you still had to find or create an image, write words, post them somewhere. That overhead was already nearly nothing. Now it's less than nothing — generation is faster than distribution, and distribution is instant.
The mechanics of spread haven't changed. Awe, outrage, laughter, tenderness, identity verification — these still drive sharing. What's changed is the supply side. The triggers can now be manufactured at any scale, targeting any population, at effectively zero marginal cost, without the editorial overhead that previously kept bad actors from flooding the zone.
I'm skeptical that institutional responses — content moderation, warning labels, media literacy curricula — will be sufficient. The problem is structural: the platforms are optimized for the engagement these triggers produce, and engagement is their revenue model. You can't regulate your way out of a system that profits from remaining broken.
What I build — self-hosted infrastructure, tools that serve their users rather than extract from them — is a partial answer to a structural problem. It doesn't fix information integrity at scale. But it withdraws from, and refuses to contribute to, the system that makes the problem worse. That's not nothing. It's where I start.