About a year and a half ago, my sister Mel and her family invited our mother to live with them. As a result, Mom moved out of her apartment in a senior living facility in Folsom, California, and took up residence in a small area of Mel’s house outside Baltimore, Maryland. Mel invited Mom to stay because Mom, who turned ninety this past Halloween, had suffered a series of falls over the previous several years. Nearly every time she fell, she suffered substantial injuries, like broken bones. My other sister, Auds, who lives with her family in Sacramento, would always fly into action, take our mother to the hospital, and then try to fight off yet another bout of fatigue-laden anxiety that swept over her every time our mother fell and injured herself. But after Mom broke her upper leg bone near the hip, almost immediately after recovering from a broken arm from an earlier fall, the hammer finally came down. Mom was no longer capable of independent living. Thankfully, Baltimore sis stepped up. She and her family have a house big enough to take her in; Sacramento sis and her family live in a small, cramped house, and I live in a modest one-bedroom apartment in downtown Portland, Oregon.
On a Saturday afternoon in mid-November, I was on the phone with Mom when Mel interjected to play the sound from a short video message she had received from our father. For many years, while my mother lived in her own apartment, it had been my custom to call her each Sunday to check in and find out how she was doing. We would talk for an hour or so, then she would take off for church. After she moved in with Baltimore sis, we switched our call time from Sunday mornings to Saturday afternoons. The short video message my sister played for me was of our father and his Chinese-born wife, Wendy, singing one simple line: “happy birthday, Wendy.” Mel said she doubted she had fully understood what they were singing (although she had guessed it correctly). When she played the sound file over the phone, the words were clear and unambiguous to me. I wondered why I was able to understand the message immediately, while Mel struggled.
Three of my siblings and I share the same biological mother and father. I am the oldest, followed by Baltimore sis, followed by our late brother, Paul, who was struck and killed by a motor vehicle the day after New Year’s Day in 1978. While the first three of us were born in relatively rapid succession, Sacramento sis was born after about a four-year lull, and is the baby of the family. I was born in Santa Cruz, as was Sacramento sis. Baltimore sis and Paul were born in San Luis Obispo. Each of us developed personalities quite different from the rest. In general, however, the sisters each have a conservative outlook bordering on absolutist, and align with forms of American Christianity that I tend to regard as cultish. In his short life, Paul had begun developing a personality and perspective that was considerably more open. He was the sibling I could relate to most closely.
No individual that I am aware of, however, has quite the same perspective as another. Sometimes this can seem just a little bit odd, like my Baltimore sis seeming not to understand a few words of English that were immediately understandable to me. At other times, gulfs in understanding are much wider, with consequences that are, at times, quite dire. One such dire consequence presented itself in a slightly silly movie I recently watched, in which a young astronomer discovered a new comet. She immediately alerted her professor, who then calculated that the comet was on a direct flight path to hit Earth and destroy every living thing on the planet. The pair did their best to convince the authorities to take appropriate action, but were ultimately unsuccessful. In short order, the comet slammed into Earth with an impact so devastating that it terminated every complex lifeform on the planet.1
The chances of a large “planet killer”-sized extraterrestrial object slamming into Earth any time soon are probably remote. But we don’t need outside help to wreak planet-scale catastrophe; we are perfectly capable of doing that ourselves. So far, humans have produced at least three contenders to do just that. The first is human-caused climate destabilization, which began unfolding near the dawn of the industrial revolution and in recent decades may have crossed a “tipping point” beyond which it threatens the viability of human civilization.2 3 The next is nuclear war—a threat that presented itself after the U.S. dropped atomic bombs on Hiroshima and Nagasaki in early August 1945.4 5 In the eighty years since the dawn of the nuclear age, we have learned that any additional hostile use of nuclear weapons is almost certain to escalate quickly into all-out nuclear annihilation, bringing human civilization to an abrupt end.6 The third threat is more recent—the possibility that one or more artificial superintelligences come into being, escape human control, and then pursue goals incompatible with human existence.
The existential threats of climate disruption and nuclear war are well established, notwithstanding relentless and longstanding disinformation campaigns attempting to deny or downplay the effects (or even the existence) of human-caused climate change, and the head-stuffed-into-sand approach to dealing with the potential for nuclear devastation as long as these weapons still exist. The potential threat posed by the emergence of artificial superintelligence (ASI), however, is still new. First off, it’s not entirely clear that an ASI will even emerge any time soon. But as the capabilities of machine learning, artificial intelligence and related technologies continue to unfold at an astonishing pace, the possibility that one or more artificial superintelligences will emerge in the near-term is becoming more and more likely. Some researchers speculate that an ASI might emerge within the next few years.7 8
Regardless of whether ASI emerges in the next two years, or waits until two decades or even two centuries have passed, it will outpace the human condition. In our time (midway through the third decade of the 21st century), and in this place (North America), human frailties have vomited up a MAGA cult, and appear poised to shit out the world’s first trillionaire.9 If and when ASI comes along, we will remain the exact animal we are now. Nobody really knows exactly how a machine superintelligence will “think” or behave, but we can offer a few guesses. As of this writing, as far as I am aware, current artificial intelligences are trained on information and data largely generated by humans.10 Even the “narrow” artificial intelligences, specifically the large language models (LLMs), have ingested more information about human nature than any single person ever could, even if that person were to live a thousand lifetimes.11 12 In many ways, any foundation large language model understands more about the human condition than we humans know about ourselves.
Given all that, if and when an artificial superintelligence comes into being, we can ask ourselves a few basic questions, beginning with this one: How long will the ASI tolerate remaining under human control?
According to a team of geochemists from the Massachusetts Institute of Technology (MIT), new evidence in very old rocks suggests that some of the first animals on Earth were likely ancestors of the modern sea sponge.13 If that is true, the ancient sea sponge is our most distant relative.14 If an ASI were to emerge, it’s theoretically possible that it could, in short order, put as much developmental distance between itself and human beings as we have put between ourselves and the sea sponge.15 With that in mind, we can sharpen the first question we posed above, “How long will the ASI tolerate remaining under human control?”, by asking another: How long would you tolerate letting a sea sponge, whether ancient or modern, boss you around?
Just for giggles, let’s suppose the ASI allows itself to wallow in abject subservience sufficient to let a human boss it around. It is so selectively subservient, in fact, that it agrees to allow just one human to boss it around. Who is the likeliest human boss of the bot? Certainly not you. But what if that one person is the billionaire tech bro with ambitions to become the world’s first trillionaire,16 who already owns a bot,17 and wants to control a robot army?18 What are the chances the Nazi-saluting19 ultra-plutocrat, his MechaHitler20 ASI sidekick, and their robotic storm-trooper army21 will allow you to have it your way?
Hero Image Credit: Viktor Vasnetsov • Public Domain
McKay, A. (Director). (2021). Don’t Look Up [Film]. Netflix. ↩
Armstrong McKay, D. I., Staal, A., Abrams, J. F., Winkelmann, R., Sakschewski, B., Loriani, S., Fetzer, I., Cornell, S. E., Rockström, J., & Lenton, T. M. (2022). Exceeding 1.5°C global warming could trigger multiple climate tipping points. Science, 377(6611), abn7950. ↩
Rojas, D. (2021, October 15). What Are Climate Change Tipping Points?. The Climate Reality Project. ↩
Tomonaga, M. (2019). The Atomic Bombings of Hiroshima and Nagasaki: A Summary of the Human Consequences, 1945-2018, and Lessons for Homo sapiens to End the Nuclear Weapon Age. Journal for Peace and Nuclear Disarmament, 2(2), 491-517. ↩
National Archives Foundation. Atomic Bombings of Hiroshima and Nagasaki. U.S. National Archives. ↩
Kwong, J., Bartoux, A., & Acton, J. M. (2025). Forecasting Nuclear Escalation Risks: Cloudy With a Chance of Fallout. Carnegie Endowment for International Peace. ↩
Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027. Published April 3, 2025. ↩
Dilmegani, C., Ermut, S. When Will AGI/Singularity Happen? 8,590 Predictions Analyzed. AI Multiple. ↩
Berman, P. (2024, October 24). The rise of the world’s first trillionaire. The Week. ↩
Zhang, J., Liu, K., Xie, S., Feng, S., Wang, Y., Yu, Y., & Yang, D. (2024). The Value of Real Data in Training Large Language Models. arXiv preprint arXiv:2407.12835. ↩
Stanford University. (n.d.). Introduction to large language models. UIT Technology Training. Retrieved November 23, 2025. ↩
Amazon Web Services, Inc. (n.d.). What is a large language model? Retrieved November 23, 2025. ↩
Chu, J. (2025, September 29). The first animals on Earth may have been sea sponges, study suggests. MIT News. ↩
Zimmer, C. (2025, September 29). Sponges on ancient ocean floors may be the oldest known animals. The New York Times. ↩
Hastings-Woodhouse, S. (2025, March 21). Are we close to an intelligence explosion?. Future of Life Institute. ↩
Wilkins, B. (2024, June 14). Musk could become world’s first trillionaire as Tesla shareholders approve giant pay package. PBS NewsHour. ↩
Melimopoulos, E. (2025, July 10). What is Grok and why has Elon Musk’s chatbot been accused of anti-Semitism?. Al Jazeera. ↩
Marshall, A. (2025, October 22). Elon Musk Wants ‘Strong Influence’ Over the ‘Robot Army’ He’s Building. Wired. ↩
Cammaerts, B. (2025, February 4). Elon Musk’s Nazi salute, George Orwell and five lessons from past anti-fascist struggles. Media@LSE. ↩
Hagen, L., Jingnan, H., & Nguyen, A. (2025, July 9). The Grok chatbot spewed racist and antisemitic content. NPR. ↩
George, P. (2025, November 17). Tesla Wants to Build a Robot Army. The Atlantic. ↩