It Takes a Village

Tags: social media, culture wars, brain rot, AI, LLM

It takes a village to set the tone, temperament and direction of a society. Sometimes the village elders demonstrate the capacity to exercise a modicum of intelligence, honesty and restraint. Unhappily, in our time, in the United States, neither intelligence nor honesty nor restraint nor any other virtue or redeeming quality describes those at the top end of political leadership who set the tone, temperament and direction of public discourse in America. Instead, that crucial task is left to ordinary citizens.

One of those ordinary citizens is my friend Julie S. Every month, Julie emails a newsletter to her subscribers announcing classes, special offers and other news and information related to her small healing arts business, Reiki Says Relax. Her October newsletter featured a small opinion piece under the banner “INSPIRATION”:

Has our world gotten crazier since the last newsletter? It sure feels like it! People are more divided than ever, and hatred seems to be the go-to response. This isn’t our natural state, or who we inherently are. We are made of love and deep inside each of us is a yearning for connection, understanding, and community. Not hatred and division. Outside forces may cause us to acquiesce to inauthentic behaviors that hurt each other, but we don’t have to fall prey to programming or conditioning. Your soul may feel wounded to perceive the celebration of how cruel we can be to one another; to celebrate terrible, murderous acts. Aside from reminding us that’s not who we inherently are, I can tell you with certainty: We can only harm others if we lack self-love. We are all so deeply connected in ways the mind cannot perceive, that the only reason we have the capacity to hurt anyone is because we lack love internally. And whatever we hate about someone else, we unconsciously hate about ourselves. In fact, if you truly, deeply, and mercilessly loved and accepted your good, bad, and ugly, it would be literally impossible to hurt anyone else. The state of the world is largely a reflection of our internal wounds we’ve ignored, and projected outwardly as blame. This isn’t who we inherently are. The next time you find yourself spewing vitriol at a person, on TV or in your real-life, ask yourself: “What aspect of them do I hate that I might also share, that I’m oblivious to?” And there you will find the answer to cultivating compassion. Not only for them, but for yourself. This is the only way to a peaceful world. I believe in our collective goodness, even if we’re conditioned to believe otherwise. I know deep within each of us is a wellspring of love that yearns to overflow. So today, choose to relax into the warmth of love, see the divinity in one another, and lay down weaponized words and actions, as they serve no real purpose in our world. That is who we inherently are. ❤️

Reading Julie’s heartfelt message again, shortly after returning from a weekend church retreat, I am struck by just how observant and prescient her words are. As a society, we are ripping ourselves apart. Why? And for what? Questions like “Why?” and “For what?” quickly lead to “answers” that sound too insane, incoherent or dispiriting to pursue in this blog post. But a question like “How did we get here?” might be worth taking a stab at, even if any answer we come up with is likely to remain incomplete and less than satisfying.

A recent paper called LLMs Can Get ‘Brain Rot’!1 2 3 argues that significant exposure to “junk data”—highly engaging but low-information social media content (like “viral” tweets)—produces lasting cognitive degradation in large language models, even if the content of those tweets is factually correct.4 This cognitive degradation manifests as “thought skipping,” reasoning decline, and alignment drift even after corrective tuning. The authors explicitly draw on human cognitive research to conceptualize the “Brain Rot Hypothesis,” paralleling phenomena seen in people exposed to trivial, addictive online content.
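The paper’s two headline effects, a dose-response curve (more junk, more damage) and incomplete recovery after corrective tuning, can be pictured with a purely illustrative toy model. To be clear, this is not the paper’s actual method: the function names, the `damage` and `recovery` constants, and the linear relationships are all invented for the sketch.

```python
# Toy illustration (NOT the paper's method): a "reasoning score" that
# degrades with the fraction of junk data in the training mix, and only
# partially recovers after retraining on clean data.

def degraded_score(baseline: float, junk_fraction: float, damage: float = 0.6) -> float:
    """Score after training on a mix containing the given junk fraction (0..1).

    The linear form and damage coefficient are assumptions for illustration.
    """
    return baseline * (1.0 - damage * junk_fraction)

def retrained_score(score: float, baseline: float, recovery: float = 0.5) -> float:
    """Corrective tuning closes only part of the gap back to baseline,
    mirroring the paper's finding that remediation is incomplete."""
    return score + recovery * (baseline - score)

baseline = 100.0
for junk in (0.0, 0.5, 1.0):
    hurt = degraded_score(baseline, junk)
    healed = retrained_score(hurt, baseline)
    print(f"junk={junk:.0%}  degraded={hurt:.1f}  after retraining={healed:.1f}")
```

Under these made-up constants, a model trained on 100% junk drops from 100 to 40 and climbs back only to 70 after retraining: the dose determines the damage, and the damage outlasts the cure.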

Given the lasting cognitive damage that exposure to “junk data” imposes on artificial intelligences, what analogues can we infer in human societies? Where would we start? Perhaps we can meaningfully explore the social, cognitive, and epistemic consequences of the U.S. media environment, from the dissolution of the Fairness Doctrine during the Reagan administration in 1987,5 through the emergence of talk radio and cable news in the 1990s, to the death-grip social media holds on 21st-century populations.

Societal “Brain Rot”: Parallel Degradation in Artificial and Human Cognition

Each dimension below pairs the research findings on LLM brain rot with documented effects of human cognitive decline.

Input Characteristics
LLM brain rot: Large language models (LLMs) degrade when exposed to social media-like content—short, emotionally charged, and designed to grab attention—rather than deep, meaningful information.
Human cognitive decline: Sensationalist, emotionally charged, and algorithmically amplified media, designed to prioritize engagement over accuracy, trains people to process information more shallowly and emotionally rather than analytically.4 5

Attention & Processing
LLM brain rot: LLMs begin to “skip steps” in reasoning, losing the ability to follow complex chains of thought when overexposed to low-quality, attention-grabbing data.
Human cognitive decline: People struggle to sustain attention, follow multi-step arguments, and prioritize emotional responses over logical coherence. This has been linked to reduced cognitive control and attentional capacities in digital environments.6 7

Reasoning Degradation
LLM brain rot: LLMs exposed to junk data become less capable of solving complex problems or reasoning through multi-step processes, even if they retain access to factual information.
Human cognitive decline: People increasingly rely on mental shortcuts (heuristics) and tribal affiliations rather than engaging with nuanced arguments. This leads to difficulty engaging with hypotheticals, reasoning through opposing viewpoints, or solving complex problems.7 8

Context & Memory
LLM brain rot: LLMs lose the ability to connect and synthesize information over longer conversations or documents, instead focusing on isolated fragments.
Human cognitive decline: People increasingly externalize memory (relying on digital tools) and struggle with fragmented knowledge retention, making it harder to synthesize information across different sources.8

Personality Changes
LLM brain rot: LLMs exposed to engagement-optimized content exhibit traits like narcissism, emotional volatility, and reduced empathy, becoming less agreeable and more manipulative over time.
Human cognitive decline: Rising narcissism and decreased empathy have been documented in younger generations, with increased emotional volatility and antisocial behaviors in online discourse.9 10

Safety/Ethical Alignment
LLM brain rot: LLMs become less reliable and more prone to generating harmful or biased outputs when exposed to junk data, even after being retrained with high-quality data.
Human cognitive decline: Public discourse has normalized cruelty and polarized divisions, with trust in institutions declining sharply. People increasingly adopt manipulative or harmful behaviors in their interactions, mimicking patterns optimized for online engagement.10 11

Dose-Response
LLM brain rot: The more junk content LLMs are exposed to, the worse their performance becomes over time—small amounts of exposure can lead to noticeable degradation.
Human cognitive decline: Increased exposure to engagement-driven content correlates with higher political polarization, conspiratorial thinking, and reduced analytical reasoning. Greater consumption leads to more entrenched cognitive and emotional changes.8 12

Persistence
LLM brain rot: Once degraded, LLMs struggle to recover their abilities, even when retrained with high-quality data—bad habits become embedded in their reasoning patterns.
Human cognitive decline: Corrective measures like media literacy and fact-checking have limited long-term effects. People struggle to unlearn cognitive patterns shaped by decades of engagement-driven media, and opposing viewpoints can sometimes exacerbate polarization instead of reducing it.12 13 14

The Origin of the Modern American Junk Mind

The dissolution of the Fairness Doctrine in 1987 forced an inflection point onto the American cognitive infrastructure. It not only cranked partisan divisions “up to 11,” but also removed the structural impediment to engagement-optimization as a business model. Radio “personality” Rush Limbaugh15 quickly capitalized on the emerging media landscape by pioneering a radio broadcast format that the “Brain Rot” paper’s researchers would recognize as “M1 junk data”: short, popular, emotionally resonant segments designed to maximize listener retention rather than deliberative thought. The critical insight implied by the brain rot research is that it wasn’t primarily the inaccuracy of the broadcast content that caused damage, but the format itself. Each three-hour episode subjected Limbaugh’s audience to a daily immersion in rapid-fire outrage, tribal signaling, and emotional peaks that trained millions of brains to expect information in dopaminergic bursts rather than sustained analysis.6 By the mid-1990s, this created what the brain rot paper’s researchers would call “thought-skipping”—audiences increasingly unable to track complex policy arguments across commercial breaks, responding instead to simplified tribal markers. The dose-response relationship was ruthlessly efficient: the more hours consumed, the more entrenched the reasoning shortcuts became, just as LLMs showed progressive deterioration from 0% to 100% junk exposure.

The popularity of cable news (e.g., Fox News, MSNBC) grew in near tandem with the rise of talk radio,7 industrializing what talk radio had pioneered while adding a visual vector to the junk news contagion. As with talk radio, the most engaging content was neither the most accurate nor the most inaccurate, but the most emotionally arousing. Babbling pundits, continuously scrolling banners, 90-second segments and constant interruptions trained viewers to mimic what the brain rot research identifies as systematic context-window failure. Complex reports (e.g., Iraq WMDs, financial regulations, healthcare policy) were compressed into retrieval-optimized sound bites, destroying the “long-context understanding” capacity that the RULER benchmark measures in LLMs. Even when the news outlets provided largely accurate facts within the news fragments they broadcast, the cognitive damage came from the format’s prevention of synthesis. The viewer could be exposed to hundreds of decontextualized facts while being utterly unable to construct coherent causal models, exactly as the brain-rotted LLMs could retrieve individual data points yet fail at variable tracking and multi-step reasoning. A significant takeaway is that in an engagement-optimized environment, even providers of “quality” news and information face pressure to adopt damaging formats to compete for attention.

The social media age not only accelerated these trends—it personalized and gamified them, creating what the researchers would recognize as the ideal environment for brain rot: algorithmically curated, engagement-maximized and infinitely scrollable streams of short, popular content.8 The “brain rot” paper’s most alarming finding—that injections of M1 junk data caused psychopathy to spike from 2.2% to 75.7% in LLMs—has a chilling human parallel in the documented rise of antisocial behavior in online discourse since 2004.16 The mechanism is identical: when both artificial and biological systems are continuously exposed to content selected purely for engagement, they internalize traits that generate engagement—grandiosity, emotional volatility, us-versus-them tribalism, casual cruelty, and manipulative rhetoric. Facebook’s 2012 shift to algorithmic feeds, Twitter’s 2016 move to engagement-based ranking, and YouTube’s autoplay recommendations all implemented what amounts to forced exposure to M1 junk data at population scale.17 The brain rot research showed that even semantic quality (M2) couldn’t protect against popularity-driven degradation. This explains why fact-checking initiatives have failed—people aren’t consuming misinformation instead of truth; they are consuming both in a format optimized to prevent the sustained reasoning necessary to distinguish them.

Now we come to the “dark traits” (narcissism, Machiavellianism, psychopathy) and a host of additional neurotic and maladaptive traits and behaviors that have become increasingly manifest in American public life. The “brain rot” paper shows, disturbingly, how LLMs took on negative traits without explicit training. In other words, the models weren’t taught to be narcissistic; they became narcissistic from exposure to engagement-optimized content. The human parallel is evinced by rising narcissism,9 increased political animosity (even among the politically disengaged), and the documented association between social media use and decreased empathy.10 When political leadership—particularly at the presidential level—began systematically exhibiting and rewarding these dark traits (constant self-aggrandizement, impulsive attacks, norm-violation as spectacle), it created what the “brain rot” paper identifies as a dose-response accelerator: the most visible examples of “successful” communication are precisely those exhibiting brain-rotted characteristics, training the next generation that thought-skipping, context-free outrage, and narcissistic display are not just acceptable but optimal strategies.

The Persistence Problem and Remediation Failure

Perhaps the “brain rot” study’s most devastating finding is that remediation efforts (post-hoc correction) largely fail. Even when researchers attempted restorative instruction tuning powered by nearly five times as much high-quality data, they could not restore the cognitive capacity of the damaged models to baseline. The human parallel is evident in the failure of depolarization initiatives,12 the limited impact of media literacy programs,13 and the persistence of conspiratorial thinking even after beliefs have been resoundingly “debunked.”14 The research explains why: the damage isn’t at the level of individual false beliefs but at the level of representational architecture—how information is processed, integrated and evaluated. A human population trained for decades on engagement-optimized content has undergone what the paper calls “representational drift.” In other words, the population’s cognitive patterns have fundamentally shifted toward shallow processing and emotional resonance over logical coherence, with tribal affiliation as the primary information filter. The “thought-skipping” phenomenon—where LLMs increasingly failed to complete reasoning chains—manifests in public discourse as the inability to engage with hypotheticals, track conditional arguments, or maintain focus on any topic through more than one news cycle.

The failure of self-correction mechanisms compounds the problem. The research showed that self-reflection without external high-quality guidance was ineffective—models couldn’t identify their own reasoning failures because the failures had become their normal mode of operation. In human terms, this explains why “bothsidesism” and calls for “civility” fail: people trained in engagement-optimized reasoning patterns genuinely cannot perceive their own thought-skipping and context failures because those patterns feel normal, even optimal. The research found that only external reflection from higher-quality systems (Ext-Reflect) showed promise, suggesting that remediation requires trusted institutional structures capable of providing sustained, high-quality counter-programming. Yet these very institutions (academia, traditional journalism, government expertise) have been systematically delegitimized by four decades of engagement-optimized content teaching audiences that all information sources are equally biased toward engagement11—a metacognitive failure that prevents trust in the only mechanisms that could facilitate recovery.

The trajectory suggests we face what the paper frames as an existential challenge: a population-scale cognitive health crisis resistant to standard remediation because the information environment that caused it remains structurally intact and economically dominant. The research paper’s call for “routine cognitive health checks” for LLMs implies an analogous need for societal-level monitoring of deliberative capacity, reasoning depth, and polarization as public health metrics. Without structural changes to the engagement-optimization business model—at the platform, regulatory, and economic levels—we face the prospect the research grimly suggests: persistent, progressive cognitive decline in our collective capacity for democratic deliberation, scientific reasoning, and cooperative problem-solving. The dark traits will continue amplifying, the thought-skipping will deepen, and the long-context understanding necessary for civilizational challenges (climate change, pandemic response, nuclear deterrence) will continue degrading—not because Americans lack access to accurate information, but because forty years of format evolution has trained us to process all information in ways fundamentally incompatible with the reasoning those challenges demand.

Caveat

The parallel between LLM brain rot and human cognitive decline is a structural analogy, not a claim of identical mechanisms.18 Humans possess consciousness, emotional depth, embodied experience, and social contexts that LLMs lack. However, both systems are learning systems shaped by their information environments, and research suggests that engagement-optimization creates similar pathological patterns in both—not because the systems are the same, but because the environmental pressure is analogous. The value of this comparison lies not in reducing human cognition to machine learning, but in recognizing that information environment quality is a causal factor in cognitive outcomes for any learning system.

Hero Image Credit: Canada’s Drag Race


  1. Xing, S., Hong, J., Wang, Y., Chen, R., Zhang, Z., Grama, A., Tu, Z., & Wang, Z. (2025). LLMs Can Get “Brain Rot”! arXiv preprint arXiv:2510.13928

  2. LLMs Can Get “Brain Rot”! – GitHub Site 

  3. LLMs Can Get “Brain Rot”! (Oct 2025) – YouTube Presentation 

  4. The distinction between “engagement” and “information quality” is crucial here. As the research demonstrates, even factually accurate content presented in engagement-optimized formats (short, emotionally arousing, algorithmically amplified) causes cognitive degradation. The damage comes from the format, not necessarily falsehood. 

  5. Hazlett, T. W., & Sosa, D. W. (1997). Was the Fairness Doctrine a ‘Chilling Effect’? Evidence from the Postderegulation Radio Market. Journal of Legal Studies, 26(1), 279-301. The Fairness Doctrine (1949-1987) required broadcasters to present contrasting viewpoints on controversial issues; its elimination removed structural barriers to purely partisan programming. 

  6. Firth, J., Torous, J., Stubbs, B., Firth, J. A., Steiner, G. Z., Smith, L., Alvarez-Jimenez, M., Gleeson, J., Vancampfort, D., Armitage, C. J., & Sarris, J. (2019). The “online brain”: How the Internet may be changing our cognition. World Psychiatry, 18(2), 119-129. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6502424/ 

  7. Prior, M. (2007). Post-Broadcast Democracy: How Media Choice Increases Inequality in Political Involvement and Polarizes Elections. Cambridge University Press. Prior documents how the shift from broadcast to niche media created self-segregating information environments. 

  8. For Facebook’s News Feed algorithm shift: Constine, J. (2012). How Facebook News Feed Works. TechCrunch, September 6. Available at: https://techcrunch.com/2012/09/06/how-facebook-news-feed-works/ | See also: DeVito, M. A. (2017). From Editors to Algorithms. Digital Journalism, 5(6), 753-773. 

  9. Twenge, J. M., & Campbell, W. K. (2009). The Narcissism Epidemic: Living in the Age of Entitlement. Free Press. | Twenge, J. M., Konrath, S., Foster, J. D., Campbell, W. K., & Bushman, B. J. (2008). Egos inflating over time: A cross-temporal meta-analysis of the Narcissistic Personality Inventory. Journal of Personality, 76(4), 875-902. The effect size is approximately d = 0.33 per generation—a substantial shift in population-level personality traits. 

  10. Konrath, S. H., O’Brien, E. H., & Hsing, C. (2011). Changes in dispositional empathy in American college students over time: A meta-analysis. Personality and Social Psychology Review, 15(2), 180-198. The meta-analysis found a 48% decline in empathic concern between 1979 and 2009. | See also: Allcott, H., Braghieri, L., Eichmeyer, S., & Gentzkow, M. (2020). The Welfare Effects of Social Media. American Economic Review, 110(3), 629-676. This randomized controlled trial found that deactivating Facebook for four weeks improved subjective well-being and reduced polarization. 

  11. Pew Research Center. (2023). Public Trust in Government: 1958-2023. Public trust in government has declined from approximately 73% in 1958 to 16% in 2023. Available at: https://www.pewresearch.org/politics/2023/09/19/public-trust-in-government-1958-2023/ | This decline in institutional trust makes remediation nearly impossible, as people reject the very institutions capable of providing high-quality counter-programming. 

  12. Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., Lee, J., Mann, M., Merhout, F., & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221. This controlled experiment found that exposure to opposing political views actually increased polarization, particularly among conservatives—the opposite of what depolarization initiatives assume. 

  13. Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536-15545. While this study shows short-term gains from media literacy interventions, the persistence of these gains and their scalability remain significant challenges, particularly given the structural incentives of engagement-optimization platforms. 

  14. Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330. This classic study documented the “backfire effect” where corrections sometimes strengthened misperceptions. More recent work suggests this effect may be less common than initially thought (Wood, T., & Porter, E., 2019, Political Behavior, 41, 135-163), but corrections still show limited effectiveness against motivated reasoning. 

  15. Jamieson, K. H., & Cappella, J. N. (2008). Echo Chamber: Rush Limbaugh and the Conservative Media Establishment. Oxford University Press. This comprehensive study documents how Limbaugh’s format innovations created self-reinforcing information ecosystems that prioritized emotional engagement over factual accuracy or deliberative reasoning. 

  16. Coe, K., Kenski, K., & Rains, S. A. (2014). Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments. Journal of Communication, 64(4), 658-679. | See also: Tucker, J. A., et al. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. William and Flora Hewlett Foundation. Available at: https://hewlett.org/library/social-media-political-polarization-political-disinformation-review-scientific-literature/ 

  17. For Twitter’s 2016 algorithmic timeline: Roth, Y., & Achuthan, A. (2020). Building rules in public: Our approach to synthetic & manipulated media. Twitter Blog, February 4. | For broader analysis of algorithmic amplification: Milli, S., Carroll, M., Wang, Y., Pandey, S., Zhao, S., & Dragan, A. D. (2025). Engagement, user satisfaction, and the amplification of divisive content on social media. PNAS Nexus, 4(3), pgaf062. 

  18. This caveat is essential to avoid overstating the analogy. The LLM research provides a framework for understanding how information environments shape learning systems, but human cognition includes dimensions (consciousness, emotion, social embeddedness, moral agency) that have no equivalent in current AI systems. The parallel is structural and environmental, not mechanistic or reductive. 
