About the Author

Dr. Ben Zweibelson is the director of the U.S. Space Command's Strategic Innovation Group at Peterson Space Force Base, CO. A retired Army infantry officer with combat tours in Iraq and Afghanistan, he earned the Combat Infantryman Badge, Master Parachutist Badge, Pathfinder Badge, Air Assault Badge, the Ranger Tab, four Bronze Star medals, and various awards and citations in his 22 years of combined service. He previously worked for U.S. Special Operations Command for seven years, running all design education, theory, and outreach for the Joint Special Operations University. He has a doctorate in philosophy, three master's degrees, and an undergraduate degree in graphic design. He has two design books forthcoming in the summer of 2023.

The views expressed in this article are solely those of the author. They do not necessarily reflect the opinions of Marine Corps University, the U.S. Marine Corps, the Department of the Navy, or the U.S. government.

PART I: The Singleton Paradox

On the Future of Human-Machine Teaming and Potential Disruption of War Itself

Ben Zweibelson, PhD

https://doi.org/10.21140/mcuj.20231401001


Abstract: Technological innovation has historically been applied in war and security affairs as a new tool or means to accomplish clear political or societal goals. The rise of artificial intelligence suggests a new, uncharted way forward that may be entirely unlike previous arms races and advancements in warfare, including nuclear weapons and quantum technology. This article introduces the concept of a singleton as a future artificially intelligent entity that could assume central decision making for entire organizations and even societies. In turn, this presents what is termed a "singleton paradox" for security affairs, foreign policy, and military organizations. An AI singleton could usher in a revolutionary new world free of war and conflict for all of human civilization or trigger a catastrophic new war between those with a functioning singleton entity and those attempting to develop one, along with myriad other risks, opportunities, and emergent consequences.

Keywords: singleton, singularity, transhumanism, artificial intelligence, AI, war studies, security affairs

 

Humans first created machines to shift physical labor away from muscle and natural sources (wind, water), and in the last century machines began to shift cognitive labor as well. The history of invention, technology, and civilization provides an astounding roadmap from the earliest wheel to today's advanced satellites in geostationary orbit. Woven intricately throughout all of these developments is the never-ending dynamic of organized violence in human affairs. War creates demand for new technology and opportunities, while new technology and opportunity often pave the way for subsequent military applications. Radical shifts in what types of war occur and how such warfare is exercised often relate to profound technological innovations and scientific discoveries. This relationship is dynamic, but it remains a human-designed, human-controlled one regardless of whether war is waged with edged weapons on horseback or in an all-domain, technologically dense, joint military endeavor against cunning and technologically sophisticated adversaries. Today, most discussions on artificial intelligence (AI) and human decision making orbit a specific, tactical, and technologically immediate perspective that may be blinding institutions to greater disruption further afield.1

Frequently, too, the rush to implement new constructs exceeds the necessary wisdom and curiosity about how such innovation may require new ways of conceptualizing war, strategy, and military transformation in the wake of such developments.2 This seems true of how AI is being rapidly integrated into modern security applications, doctrine, methods, and tactics without essential debate across the military profession on what this means and how future warfare might differ from the historically grounded and institutionally recognized patterns of the past. According to Haridimos Tsoukas,

Too heavy an influence by the past results in incapacity to see what has changed in the present and what is the likely shape of things to come. This is a problem inherent in formal organization. The latter tends to perceive the world predominantly in terms of its own cognitive categories, which are necessarily derived from past experiences. The world may be changing but the cognitive system underlying formal organization, a system that reflects and is based on past experiences, changes slowly.3

 

Amid today's profound developments in human-machine teaming, the Department of Defense excels at fielding prototypes and experimental gear at the cutting edge of tactical and technological excellence. But where are the deeper discussions on ethics, organizational change, and the potential disruption of how war itself is understood? We must "draw our attention to the need to shift from thinking about processes in organization [and knowledge therein] to 'how we should be thinking about processes'."4 Some sacred cows must be led to conceptual slaughter, if only to prevent such devastation from happening on future battlefields beyond our institutionally regulated limits of understanding.

Human-machine teaming as a concept is hardly a new area for military contemplation, in that the combination of human and machine decision making dates back to mechanical computation machines of the early nineteenth century and analog computers that would eventually aid military cryptology and ship gun laying in World War II. The Cold War would become defined by a cybernetic drive to reform military operations as interlocking systems of humans and machines obeying formalized rules in a hierarchical cycle of formulated, often rigid decision making.5 This extends into contemporary warfare, where human-machine teaming is a prominent area of focus for new technology, organizational form and function, and operational planning. Civilization is generally considered to have begun somewhere between 4000 and 3000 BCE, and across more than 40 centuries war features a gradual shift from humans directly controlling and operating analog machines of war toward different variations of human-machine teaming, in which intelligent machines gain new and potentially dominant roles in whether warfighting effects are applied, including when and where they occur (or do not occur).6 Played forward, the obvious shift of muscle to analog machine suggests that superior AI may one day exceed human thought on future battlefields, including strategic and organizational considerations. Such an AI development could represent a singleton of defense and security activities for whatever nation or group develops and implements it. How might such a change disrupt future warfare or redefine war itself entirely?

Whether considering the ancient chariots in Greek or Roman warfare or weaponized drones used in the ongoing Russo-Ukrainian War in 2023, these mechanical tools work for the human operator, even if in recent decades the human has been repositioned to respond after activities have occurred or to program in advance how the team should respond in conditions beyond human comprehension.7 Weapons strike their targets through human senses, whether directly involved or informed by artificial enhancement and depiction. Thus, war remains a human-designed, human-conceptualized, and ultimately human-experienced and human-controlled form of organized violence. Today's artificially intelligent war tools remain as such, but tomorrow's may not. It is in this area that vigorous debate must occur, beyond the technological or tactical, and in ways that break with nearly all established war conventions of battles past. Today's smart weapon requires human decision making, while future ones may require new ways of framing, potentially including the entire arrangement of decision making in war. For the first time in history, modern militaries may be at the event horizon of a singleton paradox for war.

What is a singleton, and how does it relate to AI, human-machine teaming, and complex warfare? This article introduces the unfamiliar notion that in the future, potential general intelligence machines built for security challenges may force the reconceptualization of what human-machine teams are, including how future wars might be waged or prevented.8 While this may seem fantastical and wildly impractical for the coming decade, readers might remember that, in 1903, a few short weeks before Orville and Wilbur Wright made their first flight at Kitty Hawk, North Carolina, the New York Times published an informed, rational article declaring that airplanes would take humans another 10 million years to technologically realize.9 Nick Bostrom, a philosopher focused on technology, first formed the hypothesis that Earth-originating intelligent life will form a singleton that comprehensively manages everything for civilization. This will be explained in detail, but the primary reason no single government, authoritarian dictator, or group has yet accomplished any cohesive and permanent singleton manifestation is that the human species seems incapable of reaching and employing sufficient intelligence to provide anything but flawed, questionably sufficient, and regularly faulty decision making writ large. Humanity forever exists in a paradox: the species possesses world-changing curiosity and intellect yet routinely demonstrates unintelligent behaviors and frequently makes irrational decisions with dire consequences. Or in the words of an anonymous Yosemite National Park ranger, asked about the difficulty of designing bear-proof garbage cans, "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."10 Might intelligent machines provide a new on-ramp to decision making and strategic developments beyond human limits?

Readers should take warning that the fantastic and the pragmatic are connected in unusual ways. Modern militaries insist that innovation is key and that flexibility in ideas and adaptation is paramount, yet in the same breath many pragmatic professionals ask for simplicity and uniformity.11 Complex problems are expected to be "solved" using traditional, linear, mechanistic modes of inquiry espoused in modern doctrine and practice.12 Henry Mintzberg terms this machine bureaucracy, in which complex reality is inappropriately simplified so that bureaucratic processes are permitted to operate, despite their often-glaring insufficiencies in addressing complex, dynamic systems.13 Incremental, logical, and linear progress is desired in such institutionalized bureaucracy, so that control remains well in hand of those charged with safeguarding not just the future of the organization, but also the legacy and entrenched belief systems that represent identity and purpose.14 This "problem-solution" logic dismisses complexity so that courses of action paired with optimized analysis impose a simplification of reality instead.15 Elizabeth Kinsella elaborates:

Practitioners set the problems that they go about solving, and such problem setting is a form of worldmaking that often falls outside the realm of the technical knowledge learned in professional schools. Problem setting often begins when one’s usual understanding of the world bumps up against a disorienting dilemma or problematic situation that falls outside of one’s usual frames. . . . In this way the practitioner is viewed as setting the problem within a world of his or her own making [emphasis added].16 

 

Innovation in military organizations is expected to be accomplished in largely the same way that traditional planning occurs, and all innovative activities must also comply with nearly all institutionally protected and coveted content so that the organization does not experience disruption or uncertainty while changing.17 Yet neither innovation nor planning in complexity works this way.18 Neither does the arrival of a new paradigm for war, science, or any other discipline, in which the institutional frame for reality is defeated and replaced with an entirely novel one. Thomas S. Kuhn wrote of new scientific paradigms radically disrupting the legacy one, replacing it entirely, and in the wake of that disruption, witnessing a migration of people who adapt to the new paradigm while those unwilling or unable to do so fade off into irrelevance.19 The last cavalry charge occurred at least one generation of warfighters too late, and it was not led by disruptive innovators. The practical debates on AI and human-machine teaming are necessary for today's current conflicts. Yet, to engage with where future conflicts might radically depart from established norms, militaries must move from the practical to the fantastic end of AI and human-machine teaming debates. Only in the abstraction of the fantastic might new insights and illumination occur that provide clearer yet novel perspectives for tomorrow's unrealized conflict.

AI today remains narrowly exceptional, in that intelligent machines can outperform humans in very specific tasks, such as analyzing thousands of images in seconds to isolate a specific facial pattern or identifying and targeting the trajectory and point of origin of a mortar fired at friendly forces so that immediate counterbattery fire occurs in seconds. Machines are now superior to humans in chess, trivia games, and many other such areas, yet machines remain utterly wedded to the coding that provides them select (narrow) superhuman abilities, so long as those same humans refine and update the code accordingly. Changing one simple rule in Jeopardy! would eliminate IBM's Watson from the contest until programmers adjusted the code. Today's intelligent war machines remain entirely dependent on human operators and designers. AI systems provide amazing, game-changing capabilities in strictly narrow applications in warfare, where the human decision makers, operators, and machine designers remain largely in control.20 Hence the term human-machine teaming positions the human first in order of importance.
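To make the brittleness of narrow AI concrete, consider the purely illustrative sketch below (in Python, with hypothetical rules and data not drawn from any fielded system). A toy "narrow" agent answers flawlessly within its programmed rule set and has no behavior at all once a single rule changes, echoing the Watson and Jeopardy! point above.

```python
# Purely illustrative toy (hypothetical rules and data): a "narrow" agent is
# hard-coded to one rule set and one knowledge store. It performs well inside
# those rules and has no behavior at all when a single rule changes.

KNOWLEDGE = {
    "This U.S. state is home to Kitty Hawk": "North Carolina",
}

def narrow_agent(clue: str, rules: dict) -> str:
    """Answer a clue, but only in the single response format the code anticipates."""
    answer = KNOWLEDGE.get(clue, "unknown")
    if rules.get("respond_in_question_form"):
        return f"What is {answer}?"
    # No code path exists for any other response format: until a human
    # programmer updates the code, the agent simply cannot comply.
    raise NotImplementedError("no behavior coded for the changed rule set")

clue = "This U.S. state is home to Kitty Hawk"
print(narrow_agent(clue, {"respond_in_question_form": True}))   # works as designed

try:
    print(narrow_agent(clue, {"respond_in_question_form": False}))
except NotImplementedError as err:
    print(f"Rule changed, agent breaks: {err}")
```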

Security affairs and war studies discussions abound with supposed "game-changing" concepts, yet all too often these immediately become hyperbole or fixate on isolated technological developments within warfare with the aforementioned disregard for complexity theory and overemphasis on institutionalized, largely Newtonian war frames.21 If one assumes the metaphor of "game" for how technology is positioned in advancing or changing warfare to the advantage of the technological innovator, the deeper implication is that new technology permits the user to gain new control or dominance over a lesser-equipped opponent who is fighting according to some shared rules and patterns.22 Artificial intelligence, once able to reach levels of equivalency or superiority with how humans demonstrate general intelligence, may end up changing the game in ways the creators will not recognize. This could put both human competitors into situations where control and dominance are no longer exercised in the historical patterns of past conflict. Indeed, truly advanced AI might for all intents and purposes break the human war paradigm entirely. This is a radical, likely absurd notion, particularly for the pragmatists and realists within the military institution. Then again, in 1903, before the Wright brothers made history, many readers of the New York Times would likely be seen as rational, reasonable, and well-informed people able to distinguish clearly between what is potentially game changing and what could not possibly happen for another 10 million years.

 

Defining the Singleton

Bostrom introduced the concept of a singleton not as social commentary on political systems, ideologies, or why most governments are perpetually dysfunctional bureaucracies on the edge of corruptive ruin. He wanted to pair the failure of optimized societal decision making with that of AI and demonstrate how technology might open a Pandora's box unlike anything previously experienced. This is not to be construed as technological fearmongering, yet humanity does illustrate a strong pattern of developing and implementing new ideas without realizing the consequences. New intelligent weapons designed to augment the human operator represent a familiar way of extending past mechanical, analog war tools for the warrior in battle. New intelligent systems that can form strategies and war theories, formulate diplomacy, and manage entire defense departments in ways superior to the most intelligent human are entirely different. Such developments may be multiple decades away or possibly closer than assumed. That there is no serious military debate on such matters is potentially more terrifying.23

Bostrom suggests that a singleton could manifest in a political or ideological group that offers a new world order that actually succeeds in some form, yet Bostrom's original singleton construct suggests that the standard human intelligence and abilities of an individual dictator or group of leaders have thus far proven insufficient. Throughout more than 40 centuries of human history, there has yet to be an ideology, culture, belief system, or group of people capable of executing a singleton beyond that of an empire, nation-state, or some sort of organization that has an expiration date as well as an inability to extend fully to all of civilization.24 Arguably, some individuals or groups have shown limited singleton abilities over select populations and geographic areas for periods of time, but none have been enduring, nor has any entity assumed productive unification of the entire human civilization. Humans with current cognitive and communicative abilities just have not yet realized or implemented any meaningful (or enduring) singleton. Bostrom illustrates that "[a singleton's] defining characteristic . . . is some form of agency that can solve all major global coordination problems. It may, but need not, resemble any familiar form of human governance."25 While human-machine teaming is typically framed only in tactical military contexts, a singleton is the manifestation of such an arrangement at the grand strategic, national, or ultimately internationally collective level for civilization. This is systemic teaming at the level of networks, ecosystems, and entire species at full realization.

Bostrom explains that a singleton is an entity that becomes the single decision-making authority at the highest level of human organization. This assumes the entirety of human civilization, often confined to Earth for the near term.26 Such an entity is considered "a set with only one member," yet this requires further explanation.27 First, a singleton is something that is able to take total control of human civilization, or at least those parts of it that are reachable and able to be controlled, so that a world order is instituted by the singleton's design and executed to complete realization. An AI singleton would, if able to reach general and then superintelligence, potentially self-develop into an intellect hundreds of thousands of times beyond even the smartest human. Suggesting such an entity would be able to take the mantle of controlling all of civilization raises all sorts of ethical, moral, and existential questions that Bostrom addresses in his book in myriad ways. Ultimately, were such a powerful intellect developed, humans would face significant challenges in containing it, utilizing it effectively, and anticipating adversarial attempts to develop their own singleton entity first for their own interests or security goals. Such a competition might dwarf the space and nuclear races, given the long-term potential impacts. However, the glide path from the arrival of an AI singleton entity to the realization of total, exercised control is an area requiring further serious research, debate, and strategic contemplation. A singleton entity, by virtue of assumed total control of all aspects of a society, would directly control all security apparatuses, including nuclear strategies.28

In Superintelligence, Bostrom introduces this concept of a superintelligent singleton, potentially an artificial intelligence, but not necessarily. Superintelligence is defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”29 Understandably, many military professionals when considering AI and security applications leap to this concern of a superintelligent AI creation becoming a threat to the human creators, and thus untrustworthy for any critical or existential systems such as nuclear weaponry as well as control of essential services such as power or information. Bostrom makes compelling arguments that a singleton could become realized through some sort of superintelligent entity, whether an artificial intelligence system, a genetically modified human with cognitive abilities so advanced it may no longer qualify as the same species (the first Supra sapien, perhaps), or potentially a cybernetically enhanced human.30 

These ideas seem fantastic, and with the current state of artificial intelligence development in 2023, they likely are. However, this may not be the case in less than a century, depending on technological advances in computing, particularly quantum computing as well as genetics, nanotechnology, and robotics. A singleton with superintelligence would likely conceptualize on a level incomprehensible and alien to even the smartest humans. This is nicely summarized by the fictional superintelligent character Dr. Manhattan in Watchmen, who remarked: "The world's smartest man poses no more threat to me than does its smartest termite."31 Whether the superintelligent singleton arrives in the form of an AI system, a genetically enhanced human (or humans), or cybernetically enhanced humans, all of these are areas of significant military research and development at a primitive level of singleton potential.32 They are decades if not centuries away, yet within popular culture and science fiction stories these concepts are already deep within the societal zeitgeist as an instrument of fear and distrust.

Of significance to this article is the deeper question of whether our modern framing of war and warfare is insufficient for what a potential superintelligent singleton might produce in security affairs. For recorded human history, and particularly in the last three centuries of Western scientific development, military philosophers have granted war a natural, timeless, and universal ordering (albeit a chaotic, passionate, dynamic one for some theorists), with warfare a perpetually changing character where scientific methods could take hold and offer some reliable sense of direction in the fog and friction.33 War was not always conceptualized as such, nor today do all societies and competitors subscribe to the same war paradigm.34 While it is highly controversial to challenge such base premises in contemporary American and partnered military communities, a minority of theorists do so. Unfortunately, such debates often occur well outside established military training, doctrine, or educational settings. What is most significant here is not whether one human-designed war frame is superior or inferior to another, but that all of them are of human design, and all of war is a human creation. Given that all war theory is conceptualized by human minds, is there not a possibility that AI in a future, advanced configuration might develop dissimilar concepts? Furthermore, were an AI singleton to develop new war theory and practices, could human minds fully appreciate them if they required either intelligence beyond human limits or merely nonhuman thinking to forge a conceptual path to them? If it is the case that AI entities would be alone in comprehending and directing such new concepts, how would human operators continue to participate in some sort of human-machine decision-making loop?

There are many technological, ethical, moral, legal, and strategic questions concerning AI and weaponization, yet most of them orient toward human beings still able to make decisions within the loop, or perhaps “on top of the loop” where AI can produce lethal effects based on previously established human parameters and limits designed by humans for machines to rapidly operate within.35 The singleton offers the profound possibility that this entire shared, socially constructed notion of war could be shattered and eclipsed by something beyond our reasoning and comprehension. Regular AI may challenge both the assumed character and nature of future war, while a superintelligent singleton might break it completely.36 Even if this were to occur, would humans be cognizant of such developments, or would they be satisfied with the tangible effects of either successful security affairs or some elimination of violence and conflict? 

 

Incomplete and Misleading Notions of Singletons

Singletons are popular in modern entertainment, whether in science fiction stories, movies, television, or similar media. Indeed, advanced societies grapple with the paradoxical challenges of technology and prosperity and whether such designs are doing more harm than good. While existential fears abound over human-controlled weapons of mass destruction, the fear that something nonhuman might be even more existentially dangerous is where killer robots and inhuman logic taken to absurdity evoke great science fiction horror stories. Again, humans as a species feature a long and complex relationship with technology dating back to the earliest recorded history, in that contextually any cutting-edge technological development inspires awe as well as fear. The ancient Greeks used the story of Icarus, who flew on wings crafted by his father Daedalus, to warn of recklessness and impulsive behavior regarding technological developments that pull society too far from established norms and values. Icarus, in the exuberant thrill of flight, gets too close to the sun and perishes. Today, when Boston Dynamics uploads new videos of its Atlas robot online, Twitter feeds are flooded with admiration as well as snarky comments on the end of human civilization at the hands of robot overlords.37

Modern technologically inspired stories extend from far older myths and narratives that draw from basic human desires, values, and wants.38 Not all industrialized societies feel this way; notably, Japanese culture readily embraces advanced technology, robotics, and significant human-machine teaming with little of the technophobia found in American pop culture such as The Terminator, WarGames, Star Trek, and fantasy cartoons such as Rick and Morty.39 Indeed, Japan is often far ahead of the rest of the world in experimenting with AI and robots in real-world applications, whether through AI engagements in hotels and nursing homes or a host of social applications in the home.40 However, in much of the Anglo-Saxon world of largely Western European origin and design, there seems to be a more pronounced fear of and fetish about what the future may yield with respect to AI, robots, and similar technology. The reasons for this pattern warrant further research beyond the scope of this article. Singletons are of great military strategic concern, yet due to cultural and social biases potentially stemming from these other areas, military discourse is often stymied from properly contemplating such futures. Killer robots get chuckles from the military audience, and then the audience moves on to more important affairs of immediate, tactical, and short-term technological consideration. This requires rectification so that clear, serious debate occurs on the bigger, long-term picture for future conflicts.

A singleton entity is frequently confused with a singularity, which also is popular in science fiction, futurism, and technological discourse. A singularity, first introduced by mathematician Vernor Vinge and popularized into mainstream entertainment by Ray Kurzweil, is considered a game-changing evolutionary moment in which the natural human species, developed over thousands of generations through gradual evolutionary change, would suddenly gain new shortcuts that no other creatures on Earth might entertain. Genetic modification, nanotechnology, cybernetic implants, networked augmentation, and many other radical options, if fully developed, could provide unfathomable new ways for humans to evolve into an entirely new category of existence. Should people gain any ability to reconfigure or modify their genetic structures, molecules, or biological abilities beyond even the most gifted natural configurations thus far, they might transform into a superintelligent, infinitely enhanced, and possibly nonbiological, technologically fused entity.41

A singularity introduces the concept of transhumanism, where at a biological, physical, political, sociological, and ultimately a philosophical level, humanity might evolve beyond the slow, clunky genetic and environmental soup of existence as organic, carbon-based life forms. The ethical, moral, and legal concerns abound here but also there are clear security and defense considerations. Should one nation find genetic manipulation for creating super soldiers unethical, what happens if a future adversary rejects that conclusion if only to enjoy a significant advantage on a future battlefield? If a natural soldier is psychologically and biologically limited to effectively controlling 3–4 combat systems in support of their battlefield role, but a cybernetically enhanced soldier (even surgically altered) can control 300–400 systems with ease, how will different societies debate these challenges prior to catastrophic foreign policy debacles?42 

There are sinister aspects of such radical change to the fundamental building blocks of what the human species can and cannot do. This also has been articulated in religious debate as "Apocalyptic AI."43 A singularity occurs when machines with sufficient artificial intelligence are able to teach and improve themselves, with variations including human-machine teaming, hybridization, or a potentially machine-driven acceleration beyond humans.44 Technology with advanced AI could unlock entirely revolutionary developments where humans begin to exist exclusively in virtual or augmented realities well beyond the simple metaverse discussions offered today by social media giants.45 Technological progress in this march toward a singularity is not linear but exponential, meaning that estimates of when a singularity might be reached are also subject to this rapid shift.46
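As a minimal, purely arithmetic sketch of why exponential progress compresses timelines (all numbers below are hypothetical assumptions, not forecasts drawn from the source), compare a linear and an exponential extrapolation toward the same capability threshold:

```python
import math

# Hypothetical numbers only: a capability index of 1.0 today and an arbitrary
# threshold of 1,000. Not a forecast; the point is the shape of the curve.
start, target = 1.0, 1_000.0

# Linear assumption: a fixed gain of 10 index points per year.
linear_rate = 10.0
linear_years = (target - start) / linear_rate

# Exponential assumption: capability doubles every two years.
doubling_time = 2.0
exponential_years = doubling_time * math.log2(target / start)

print(f"Linear path to threshold:      ~{linear_years:.0f} years")      # ~100 years
print(f"Exponential path to threshold: ~{exponential_years:.0f} years")  # ~20 years
```

Under these toy assumptions, the same destination arrives roughly five times sooner on the exponential path, which is why estimates built on linear extrapolation tend to lag behind exponential developments.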

Beyond the singularity, human existence might be challenged in nearly all aspects, from whether biology can be manipulated genetically, enhanced through cybernetics, or even transmitted into pure informational form and function outside the limits of organic life. This may sound radical and far off, but ongoing AI and related research is moving such ideas from the merely hypothetical into the theoretical.47 Perhaps each of these concepts might arrange along some sort of technological pathway, with the metaverse being an early phase where organically unmodified humans might increasingly spend more of their lives in a sophisticated virtual and/or augmented reality, and modified users might gain unprecedented access and immersion beyond what naturally configured humans can achieve. Super-enabled humans might be the ones to reach this singularity, gaining access to some level of superintelligence that could provide unparalleled reasoning on security and governance, or the development of general intelligence AI systems might beat them there instead. Indeed, if superintelligence and the option to operate as a singleton entity for all of civilization is some sort of finish line, the race might be waged between a host of strange characters.

Modified humans with super cognitive (and physical) abilities might win or lose out to cybernetically enhanced human-machine entities. Or they all might lose the race to natural-born human engineers and scientists who design the first general intelligence AI system capable of boosting itself to a level thousands of times more powerful than the smartest human intellect, perhaps beyond even what a modified individual human might achieve. These fantastic concepts again sound too far-off and abstract, but such a race is already underway, if only beginning, and the race is one waged between various nations that are in competition and have rival (or incommensurate and antagonistic) security aspirations.48 Yet an individual superintelligent human is not automatically a singleton, nor is an advanced technological system of multiple humans engaging individually and collectively. The singleton hypothesis reflects the centralization of authority for all significant decision making into one entity. In any configuration where various humans (superintelligent or not) exercise different judgments or retain the ability to change direction outside the central authority's vision, the singleton manifestation is absent. Siri and Alexa may know all of someone's browsing and shopping habits and make highly informed suggestions to people, but they still serve the human operator who remains in charge.

Thus, singletons are not to be confused with advanced, networked AI nor with a powerful, sophisticated internet that might be termed a metaverse. Even a network of super-enhanced human users in the metaverse, if each still independent, could form sophisticated societies or political configurations, but they would not be a true singleton.49 Humans, whether organically natural or highly modified, would still oversee society, with humanity guiding it in new directions according to new realizations of human existence and expression beyond contemporary (and still largely analog) frames. A singleton, if one emerged from human technological designs, would engage positively or negatively as a superintelligent entity created by nonenhanced creators. Even the notions of positive and negative are grounded in human values and nested in human conceptualization, which the singleton might transcend in incomprehensible ways.50 Which values apply to what is "good" or "bad" in such complex, systemic contexts?

In other words, the human designers might produce an AI capable of understanding things the designers could not, placing the designers in a cognitively subservient role whether they want it or not. The tool would become superior to the operator, and the designed means to an end would gain the unprecedented ability to exceed the original end. This is where a means to an original end may no longer connect, as the AI would create new ends of its own design outside the intent of the human creator. The tool designed for one purpose reconfigures toward an unrealized one that even the tool's creator cannot fathom. This is where most science fiction and entertainment falls short, or simply confuses the singleton with other aspects of the metaverse, artificial intelligence, swarm logic, transhumanism, or simply technophobia. Nearly all science fiction AI antagonists end up mirroring the very things human designers already understand and can still match wits with.

The Borg, the cybernetic and networked (swarming) space villains of Star Trek lore; the Skynet AI of the Terminator franchise; and a host of other technologically advanced, nonhuman adversaries fall short of the singleton concept.51 As the singleton is superintelligent and able to convince, persuade, reason, or potentially force all of civilization to obey its decision making, these science fiction antagonists reflect human-centered narratives more than they do the significance of superintelligence. The Borg are frequently outwitted, as are Skynet, the HAL 9000 computer from 2001: A Space Odyssey, and many others, because the narrative presented is one in which humanity can overcome all odds. In terms of values and narratives, antagonist collectives such as the Borg, the masses of robot terminators, or the flurry of digital agents and evil machines of the Matrix represent not some superior state of existence but rather the loss or absence of what it is to be human. That humans always win reflects an implicit superiority of humanity over that which is nonhuman. This misses the singleton tension or perhaps misinterprets it as yet another technophobic manifestation for cunning humans to overcome.

Bostrom, in his book Superintelligence, explains that a singleton is a set with only one member, but "set" quickly outgrows the traditional notion of "member" in any individual capacity.52 The Borg, as well as the character Unity in the Rick and Morty episode "Auto Erotic Assimilation," feature vast numbers of hosts or members in a shared swarm intelligence, but that collective intelligence remains roughly equivalent to that of the individual, cunning human protagonists.53 This falls well short of what a singleton's superintelligent abilities would likely be. There would be little or nothing even the smartest human could do, and such vast intelligence would likely operate beyond the planes of conceptual existence that involve the qualities that make us human. Rick could engage and date Unity in the sci-fi cartoon episode because, despite Unity's external configuration spreading her consciousness across thousands of hosts, she still functioned not as a singleton but as a person spread across many hosts that are mere vehicles for a single identity. The machine systems of Skynet as well as the antagonists from the Matrix movies had exceptionally advanced technology but were still bound to the same error-prone, limited conceptual abilities as the protagonist humans who eventually thwart them.

Another subtle theme in some of these science fiction narratives that offer a technophobic warning of killer robots hunting humanity to extinction is that of ethics and artificial intelligence development. Human programmers might intentionally or inadvertently introduce bias and flaws into even the best AI software, leading to some advanced and unstoppable technological beast that turns on the human creators, locking humanity into some prison or even eradicating it from the planet.54 This also falls short of the singleton concept, in that it stands on the logic that the nonsuperintelligent programmer creates a superintelligent entity that chooses what is presented as a rationalized, entirely human (Machiavellian, perhaps) decision that could be captured in game theory—a rationalized choice to obliterate humanity using super-empowered resources.55 The nuance here is subtle, but while a singleton could potentially pursue such an action, the activities as well as the logic of such a choice could likely never be reduced so neatly to what already governs nearly all diplomatic, political, military, and individual actions.
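For readers unfamiliar with the kind of reduction the preceding paragraph describes, the hedged toy below (Python, with entirely hypothetical payoffs) shows what a choice "captured in game theory" looks like at its simplest: a fictional machine agent picks whichever action dominates a small payoff table. The article's argument is precisely that a true singleton's logic would likely not reduce to a table this neat.

```python
# Purely illustrative toy with hypothetical payoffs. A fictional machine agent
# evaluates two actions against two possible human responses and selects a
# dominant action, meaning one at least as good against every response.

human_responses = ["trust", "resist"]

# payoff[action][response] = the machine's utility (made-up numbers)
payoff = {
    "cooperate": {"trust": 3, "resist": 1},
    "eliminate": {"trust": 5, "resist": 2},
}

def dominant_action(payoff_table):
    """Return an action at least as good as every other against all responses."""
    for a in payoff_table:
        if all(
            payoff_table[a][r] >= payoff_table[b][r]
            for b in payoff_table
            for r in human_responses
        ):
            return a
    return None

# Under these toy numbers, "eliminate" dominates: the neat, rationalized choice
# that science fiction often assigns to its AI antagonists.
print(dominant_action(payoff))
```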

The concept that humanity could manifest in coding remains an interesting aspect of the technophobic appeal of science fiction entertainment as well as to those who oppose the weaponization of autonomous systems. Giampiero Giacomello, in writing on AI coding for what might be an inevitable "war of intelligent machines," suggests that the foundational instruction of "accomplish the mission, no matter what" must be central to autonomous weapon systems. "Bury that deep into the core of those autonomous machines, and they would go on fighting, even after all of humankind has long been gone and forgotten."56 This illuminates a core tension: AI systems offer the ability to greatly improve human existence but also pose an existential threat to humanity. Killer robots could potentially doom humanity without coming close to a superintelligent singleton. The singleton is different in that it is not like the metaverse, nor like a singularity or what transhumanism offers. The singleton occupies a particular area of potential ethical, moral, and existential risk to humanity that should not be confused with the many competing concerns (and entertainments) of our modern, technologically advanced societies. The singleton, while poorly articulated in science fiction, may be the ultimate expression of that deep concern.

 

Taking a Deep Breath: Our Robot Overlords Are Still Some Ways Off

Artificial intelligence tends to occupy the primary boogeyman position in science fiction, whether HAL 9000 in the movie 2001: A Space Odyssey; Ava, the beautiful robot in Ex Machina; the supercomputer from I, Robot; or even the robot caretakers from the seemingly benign Disney-Pixar animated movie WALL-E.57 The overarching theme in all of these stories remains a warning for humans not to use technology to fly too close to the sun and risk losing everything. Modern militaries today are engaged in vigorous debates on where and how to incorporate artificial intelligence and automated technology within the decision-making processes where lethal force and critical security nodes are already integrated into national safety and defense. Yet, much of the panic about robotic overlords or the extermination of humanity by cold, robotic calculation is irrational, preemptive, and arguably inspired by popular culture, not the actual scientific progress concerning artificial intelligence.

IBM's head of design for artificial intelligence, Adam Cutler, has in numerous lectures and engagements explained to military audiences that such notions are wildly overblown.58 Such misplaced fears belong in the movie theater, as today's most advanced AI systems are capable of outperforming humans only in very narrow, highly specific pathways that involve search criteria, data analysis, mathematical calculations, and other very particular activities. Bostrom, citing the latest research and AI progress, estimates that human-level machine intelligence has only a 10 percent chance of being reached by 2030 but a 90 percent chance by 2100, with a wide margin of error. Remember, this is merely human-level intelligence, not superintelligence. Yet, the nature of AI systems suggests that once this barrier is passed, an AI system might be able to rapidly expand itself past human-level cognitive skill into territory that Homo sapiens cannot even fathom. Militaries are poorly equipped to think about such challenges, largely due to a modern institutional frame that, according to critics, fixates not on complexity but on oversimplification of warfare to a fault.59 War, from the dominant and institutionally accepted positions, is supposed to be rationalized through closed systems and linear models that showcase a Napoleonic-inspired, engineering-themed approach where predictability, description, and quantified analysis should retain the war frames of historic memory while offering the promise of greater precision, control, prediction, and stability even in the chaos of high-intensity warfare.60

Critics of this dominant war paradigm in Western, technologically sophisticated military culture charge that modern militaries tend to remain tightly wedded to the theories, methods, models, and language (underpinned by metaphoric devices) of a distinctly natural-science inspired Newtonian style of warfare.61 By rendering war activities within an engineering mindset of analytical optimization, there is a significant gap in how militaries understand complexity and change that potentially cripples the ability to envision beyond a narrow, convergent, and unimaginative mode of strategic foresight and planning.62 Modern warfare extends from classical perspectives dating back to siege warfare and the mathematical certainty of French military engineer and theorist Sébastien le Prestre de Vauban.63 The Newtonian frame or style rose to dominance in the seventeenth through nineteenth centuries.64 It is in this fertile period that war modernized and Middle Age feudal militaries professionalized through significant changes in education, training, organization, theory, and practice. Yet, despite such change, a surprisingly strong institutional force would preserve many ascientific practices, beliefs, and constructs that continue unimpeded and are not seriously examined even today. Modern warfare doctrine, methods, and models tend to adhere to a geometrically styled rendering of warfare, one that remains governed by a Newtonian style of thinking defined below by Tsoukas:

The Newtonian style of thinking operates by constructing an idealized world in the form of an abstract model, in order to approximate the complex behavior of real objects. For example, Newton's laws of motion describe the behavior of bodies in a frictionless vacuum—a mathematically handy approximation, good enough for several real-life occasions. Moreover, the core of the Newtonian style consists of two assumptions. First, the extremal principle; namely, that the objects of study behave in such a way as to optimize the values of certain variables. And, second, prediction is possible by abstracting causal relations from the path-dependence of history.65

 

All too often, concepts from newer disciplines such as complexity theory and systems theory are adapted only partially, with much of the associated theoretical content removed so that the terminology might be assimilated into the military paradigm without damaging the surrounding Newtonian beliefs. James Der Derian summarizes this shift not just in military thinking but in international relations theory writ large, where the scientific turn promised to add rigor, precision, and metrics to the discipline but "instead ended up adding mortis to the rigor, pedantry to the precision, and fetishism to the metrics."66 Indeed, this is where jaded staff officers play buzzword bingo as leadership appropriates exciting new phrases into organizational use yet often fails to comprehend how those words correlate with content that differs from how militaries seek to understand reality.67 International relations theorist Der Derian offers one such framing of modern, scientifically engineered warfare:

War serves as the reality principle of a theory in which international anarchy is a given, human nature is fixed, sovereign states are defined by the struggle for power, and the balance of power provides a modicum of order to the state system.68 

 

Modern militaries become victims of what critics term “technical rationalism”—a mindset where operators believe that a stable reality is governed by universal principles that provide a broad rationalization of how warfare occurs in time and space, and that increasingly advanced technology will only strengthen an institution’s ability to increase order, control, and predictability in future wars.69 This rationalization seeks to analytically optimize processes by systematically reducing or isolating the irrational or subjective (love, hatred, envy, identity, personality) to further calculate results for bureaucratic consumption.70 For example, “What characterizes modern armies is not the personal and emotional displays of bravery but an efficient bureaucratic machinery of war.”71 Often, a priority is placed on quantitative data versus qualitative, and technological advancements in quantitative data analysis and collection continue to make promises to the military that the future can become more stable, controlled, predictable, and provide a reduction in battlefield risk. Shimon Naveh, Jim Schneider, and Timothy Challans describe this military assimilation of Newtonian (natural science inspired) metaphors to transform the understanding of warfare out of feudalism and into the modern age: 

The Renaissance at last provided the strategist with the intellectual planning tools with which to bridge the gap between worldly perception and mental conception. This new conception was nothing less than the "geometrization" of military space and time. It meant that a common military "chessboard" would define the conduct of military operations. . . . The physics of Sir Isaac Newton would set the strategic chessboard in motion. Newtonian physics was a direct consequence of the three-dimensional worldview wrought by the Renaissance. Newton's three laws of mechanics provided military strategy with which to plan campaigns. The metaphor was the idea of mechanical force. Once having grasped the nature of mechanical force, it became only a matter of time before the practical aspects of the idea would surface. Napoleon, an artilleryman, with a solid background in mathematics and physics, was one of the first classical strategists to recognize that to use force effectively you had to concentrate it.72

 

Why does this matter for artificial intelligence and future wars? An inability to realize the limits of the institutional war frame suggests a tendency to ignore opportunities and risks that lie outside the preferred interpretations of how reality is unfolding and of whether current strategic orientation is flexible and creative or static and self-serving. Unwitting technical rationalism paired with a Newtonian war fetish can make the military community of practice lurch wildly toward whatever technological development is around the corner that can counter or eliminate an impossible threat that exists today. The wars of tomorrow are set and framed within past conflicts but modified in simplistic pairings with new technology to "win the last war" instead of contemplating whether tomorrow's war requires radically different reconfigurations. Within the technological fixation of modern militaries, the bureaucratic and hierarchical structuring of these organizations often slows the adoption of significant innovations or causes enormous (and deadly) gaps in knowledge and capability that are suddenly and violently realized once the war begins.

The U.S. Army would, in 1939, a month before Germany's armored invasion of Poland, advocate for the continuation of horse cavalry even against armored tanks.73 While armored tanks and troop carriers would replace horse-mounted military formations, it would be the belief systems, value sets, and overarching war paradigms of these organizations that would speed or slow the adoption of new things and concepts requiring the retirement or rejection of what was cherished, ritualized, and known as true in war as recently as the last battle waged. The interwar period of the 1920s is rich with such examples, whether in U.S. naval opposition to aircraft carriers replacing battleships; the British military culture that extended an aristocratic, "sportsman" mindset of elite amateur officers well past its due date; the French obsession with producing heavy, defensively postured tanks with limited radio capabilities; or the obvious policy failures of multiple nations to stem the blundering path to a Second World War.74 The modern military's form and function and the technologically advanced military-industrial complex of the twentieth century now exist in interdependence, with new technology extending military belief systems in new forms and those belief systems changing over time as human innovation extends the sophistication and complexity of how Homo sapiens can alter reality. Winning yesterday's war tomorrow is often promised through the delivery of new means that solve an earlier warfare problem with technological advancement.75 This in turn enables institutional acts of self-interest within military forces as well as institutional survival through assimilation of entirely alien concepts and new technologies.76

For instance, the replacement of battleships with aircraft carriers shifted the preeminence of sea power from the legacy form of direct kinetic engagement (ship firing on ship) to a technologically advanced and different form and function. In the twenty-first century, new hypersonic missiles might marginalize or eliminate the supremacy of the modern aircraft carrier group. Drones and other systems that remove fragile and valuable human operators from harm's way might change how future engagements are waged within technologically advanced militaries. Science fiction and fantasy provide the notion of "rods from gods": telephone-pole-size tungsten rods placed in orbit and dropped from space that might, in an extreme form of kinetic bombardment, penetrate so deep into the Earth that no hardened bunker could survive.77 Additionally, the impact alone would be as powerful as a nuclear weapon without the radioactive fallout, creating yet another potential wrinkle in how societies view technology and weapons of mass destruction. Yet these concepts, whether fantasy, in experimental development, or deployed to the latest battlefields, are rarely game changing in terms of complexity theory.78 Instead, militaries that mischaracterize them as such fall victim to the hyperbole of military futurists and hyperventilating strategic theorists. Modern warfare is advanced in all of these examples, yet their inclusion does not change the paradigm beyond an increased requirement for adversaries to recalculate strategies, tactics, and/or assume different risks.

The fundamental error for modern militaries is a gap between complexity theory and the institutionalized resistance by these organizations to letting go of ritualized and cherished belief systems on warfare that are entirely underpinned by noncomplexity theories, models, terminologies, and metaphors. It is not just the modern military that has marginalized or ignored the new insights of complexity theory, chaos theory, and quantum theory—the broader international relations discipline and much of security affairs have done so as well.79 Aside from sporadic education at advanced military schools where systems theory and complexity theory might be offered to select audiences, mainstream military doctrine, training, and practice largely avoid such content on the somewhat anti-intellectual argument that "simplification and clarity is more important than dense concepts that might not be well understood by the entire force."80 It is on this basis that militaries continue to launch into complex security settings armed primarily with oversimplified ideas and beliefs. The world is complex, and when Homo sapiens wage war against their own species in increasingly sophisticated modes of organized violence, they paradoxically demand that this intentional creation of chaos yield to a simpler framing of an ordered reality. This is not to suggest that certain theories on warfare that are not considered in mainstream military education, training, and doctrine are superior or inferior to the dominant ones, or that dissimilar war concepts might not enable one another to generate new defense thinking. The bigger challenge for military institutions is to critically examine why certain constructs are declared unassailable and why certain disciplines, fields, or minority theories are banished from any debate from the outset.81 Much of this has to do with institutional positions on values, belief systems, and identity and little to do with the potential utility of one or another war theory.

The natural world, even without humanity, is so complex that most people unfortunately can hardly fathom it. Yet, atop this natural order of complexity, Homo sapiens socially construct a second order of complexity that consists of things people collectively create and maintain in abstraction.82 Organizations, as manifestations of substance (the real), have form (organizational configuration) and generate content (social reality), so that comprehensively and systemically humans socially construct a dynamic reality in which part is real (tangible, objective) and many other aspects cannot be located anywhere within that reality.83 For instance, the shared belief about currency is what permits our economies to function, yet money is not real in the sense that once people stop believing in a socialized construct, the tangible artifacts associated with the dead concept become meaningless, and in the case of money, worthless. Visitors to the Yap Islands and military invaders within Iraq in 2003 share the experience of viewing currency that no longer has any actual value because the social construction that produced that value is gone.84 This happens to everything, whether giant stone carvings on an abandoned island or Iraqi dinars with Saddam Hussein on the front, once people stop believing or the group that believed no longer exists.85 Some critical aspects of reality are indeed sustained entirely through shared belief, curated by the living and passed on to the next generation.

This is important for explaining what strong emergence is and why something truly game changing in warfare will occur at this level and literally change the rules of the game for what we conceptualize war to be (and not to be). Strong emergence is a type of emergence where there is “the appearance of emergent structures on higher levels of organization or complexity which possess truly new properties that cannot be reduced, even in principle, to the cumulative effect of the properties and laws of the basic parts and elementary components.”86 The development of organic life is one example, while the cognitive revolution that occurred some 60,000 years ago in the brains of Homo sapiens is another.87 Everything before the strong emergence event cannot provide sufficient explanation or correlate in any analytic reduction to the new system that emerged from the event. The game is truly changed. Even for critics who insist that war is entirely a social construction of human design, modern war currently operates by a particular set of rules and collectively assumed principles that are failing to stimulate necessary innovative, divergent thinking beyond institutionally prescribed limits.88 Conventional war thinking begets a smooth, linear extension of yesterday’s beliefs and experiences directly into tomorrow, causing militaries to assume that innovation in AI and human decision making will remain stable, predictable, and historically validating. In such strategic foresight, nothing significant requires discussion or pause, as incremental, evolutionary progress should occur in a measured, rational manner. This in turn sidesteps the entire notion that game-changing developments in war are only those that fundamentally change the game, and a singleton is potentially one of those rare entities. It could entirely transform not just how humans conceptualize and exercise war but human existence itself. 

 

The Singleton Paradox: Future War Unlike Anything Previously Experienced

The development of AI systems that achieve human-level cognitive abilities may quickly trigger an acceleration of that AI toward superintelligence and create the AI singleton security scenario.89 There are several profound impacts not just on the nation or company that accomplishes this, but also on partnered nations and adversaries and likely all of humanity. Security could change into something unrecognizable to humans, as there is nothing in the collective history of any society that rivals the potential disruptions of a true singleton able to utilize the vast technological and destructive capabilities of the modern world. This could propel society toward some utopian paradise, a dystopian nightmare, the sudden extinction of the human species, or some variation between these extremes. A strong AI-centered paradigm could displace the rational and biological species in that, while humans might still live and thrive within a singleton-controlled reality, the self-awareness and free will of the human species would no longer exist.90 Yet, there are multiple emergent paths such a strong emergent event could create; thus, this article introduces the term singleton paradox for security affairs.

A singleton paradox as applied to security and defense considerations is well beyond a game-changing “super weapon” or something that merely requires novel strategy in warfare. A singleton paradox transforms war toward something potentially unrecognizable or even incomprehensible to ordinary humans. War is conceptualized within that second order of complexity that is created and sustained by Homo sapiens alone. However, some superintelligent entity (whether artificial, cyborg, Supra sapien, or a hybrid combination thereof) could modify, cease, and/or replace the very concept of war with an alien construct. If humanity gets to experience a singleton as it enacts such change, the results could be dramatic, existential, and may offer brief windows of strategic opportunity depending on the pathway by which such a transformation occurs. The singleton differs from the arrival of the nuclear weapon in that the bomb provided the possessor with devastating new destructive abilities, but the bomb was still a tool. A singleton as a concept is closer to how ethical discussions now address the matter of fully autonomous weapons, where there already are well-established groups both for and against the potential development of killer robots.91 If an artificially intelligent system is weaponized or able to control weapons autonomously, the new actor introduced beyond the state and the individual is the weapon.92 Or rather, weapon and entity/actor become blurred beyond current description. 

Superintelligent singleton entities raise several security consequences of paramount concern for strategists and military theorists. All of these dramatically transform what war is and how humans currently understand and execute warfare into something entirely distinct from the last 40 centuries of organized violence. The notion of an AI revolution (centered on general intelligence), even without a superintelligent singleton, promises entirely new forms of risk that suggest transformations of war into never-before-seen variations. Benjamin M. Jensen, Christopher Whyte, and Scott Cuomo warn that “the speed with which complex integrated AI systems enable entirely new modes of war also stands to detach human agency in a potentially destabilizing fashion from the conduct of warfare on several fronts.”93 Jensen, Whyte, and Cuomo issued this warning without examining the long-term threat of a singleton able to go much further than regular AI weaponization and integration. These far-fetched AI security concepts are only conceivable now in principle, as the notion of a singleton is theoretical and the technology for generating one is still in its infancy. However, several of these strategic consequences might be realized early in a singleton’s emergence, with critical decision spaces opening and closing in short order. 

First, there likely will be some singleton arms race, similar to how the space race, the nuclear arms race, and the current quantum computing race are all tied to deep security concerns. The latest estimates on quantum computing suggest that as early as 2040, some state, company, or individual will achieve a computer with enough quantum bits to crack any of the traditional nonquantum encryptions, meaning that the entire modern banking industry would be vulnerable.94 Thus, societies and their security apparatuses are already embarking on a quantum race that unavoidably has clear and significant defense applications. The same may occur for AI, particularly in the expected arrival of a superintelligent entity that might seek a singleton role. As Justin Pugh, Lisa Soros, and Kenneth Stanley observe: “Our track record at improving our environment is consistently at odds with our use of technology. We are more likely to use technology to increase our powers, like intelligence, than the moral and ethical qualities of empathy or care for the natural world.”95 This singleton arms race may be started by a bad actor or someone operating outside of institutional norms, but the race will likely be joined by everyone else eventually. 

In a singleton arms race, there are unique characteristics that differ from even the nuclear and quantum examples. In those situations, humans remained in control of the new weapons, and the concept of deterrence remained feasible for rational state and nonstate actors. In a singleton arms race, humans unavoidably hand over control (wittingly or unwittingly) to the artificial intelligence. As Bostrom postulates in his book, what might happen if a singleton created by one nation perceives other nations developing their own singleton entities as valid threats to resources and control?96 Suppose that the United States, Israel, and China are all very close to achieving a superintelligent artificial entity that will quickly seek singleton status. In this sort of context, human-machine teaming and decision making may go off the rails in several profound ways. 

Depending on which country crosses the finish line first, any number of terrifying or possibly wonderful things might happen. A singleton might persuade the other nations to abandon their efforts in exchange for uniting and protecting the entire world. Or the singleton might trigger a nuclear war by striking the rival nation first to eliminate threats. This of course evokes the Skynet trope (of the Terminator movies) that already inhabits the American zeitgeist, to include the military profession. While societies tend to misunderstand the deeper strategic context of nuclear deterrence in favor of splashy entertainment where cigar-chomping generals argue to “nuke ’em” for any occasion, the calculus of nuclear deterrence (and of potential actual nuclear war) is vastly more complex.97 Yet, all nuclear strategy is thus far devised, exercised, and comprehended by humans on either side of the competition equation. In part, humanity maintains a tight grip on preventing nuclear Armageddon because of a shared and decidedly human outlook on life, whether it originates from one ideology or a dissimilar, even antagonistic one. A singleton may see such affairs in a different light, which could quickly upset the established nuclear balance by removing the foundation of how it currently works. If one nuclear power implements an AI singleton for all defense and policy, would all other parties that may not yet have such a powerful and different entity continue to maintain that balance? 

A second profound security consequence is that the singleton, equipped with unimaginable superintelligence and ever-expanding abilities, would quickly escape the boundaries of any creator’s cunning programming or fail-safe devices. While every precaution might be taken to contain or prevent AI that exceeds our own abilities, there are two significant hurdles likely out of our reach. First, “the development of technology is inherently political, as all stages of the design process and all of the people involved are carriers of certain norms, assumptions, and ideas, all of which flow into the technology.”98 One cannot remove the human ghost from the machine, and such a trace of humanity brings with it a certain irrationality, subjectivity, and fallibility that is forever exploitable. The second hurdle is that superintelligence cannot be housed in any prison designed by a lesser intelligence, if we truly accept the unimaginable advantages of the superintelligent entity. There may be cunning ways to delay or deter, but in the end it may only be some form of free will and reasoning that governs why a superintelligent entity might decide not to walk out of the box designed by lesser minds. 

This in turn offers several cascading scenarios in which the singleton might act for the good of all humanity, for the good of only those whom the creators specify, or as an entity that is evil (by human standards). Furthermore, singletons outside the control of human creators offer several other unusual possibilities. Strong emergence is paradoxical in that “macroscopic structures and patterns depend on the microscopic particles, and yet they are independent from them.”99 Consider that water molecules, when enough are thrown together, create the phenomenon of wetness at a higher level, but at the molecular level they are just molecules. Humanity might produce an AI singleton that transforms warfare into something alien, but that outcome might simply exist on a plane beyond and above any human means to comprehend or experience, despite humans being the creators. 

The altruistic AI singleton could prioritize the safety and prosperity of a specified population or group of humans above all others, if the creators successfully instill such conditions in the superintelligent entity. This has obvious positive and negative outcomes that are well entrenched in existing military theory and strategy. Or, if the creators were seeking a truly altruistic outcome (or the singleton arrives at that without them), a singleton for good might truly usher in world peace, or perhaps something beyond our current expectations of peace and prosperity. At this point, such philosophical examination borders on the eschatological and metaphysical. According to Robert M. Geraci, “With robots earning wealth, humanity will lose its sense of material need. . . . No one will work for his daily bread, but will quite literally have it fall from heaven.”100 Regardless, this would be game changing and would ultimately end 40 centuries of human-on-human organized violence for political and/or societal aims. This might not mean the end of defense requirements, as the singleton would need some sort of security capability if venturing beyond Earth and into a galaxy that statistically ought to have intelligent life elsewhere. Yet for humanity, war would become a dead concept, just as an old form of currency, religion, or language might be lost. The expansion of humanity would be reduced to riding as a passenger, with the singleton steering the new path forward. A singleton would thus use humanity as a new means in its mechanism of domination and control, even if we perceived it as good (in human-defined values) or peaceful for the human species overall.101

The other side of this paradox is a singleton for evil, which would likely validate nearly every dystopian science fiction nightmare on television and movie screens. Bostrom dedicates several chapters in Superintelligence to how this might occur, terming it the “treacherous turn,” in which the AI decides to eliminate, enslave, or otherwise go against the wishes of its human creators.102 Returning to the singleton arms race scenario, this could potentially pit one singleton entity created by one nation against another. If one group creates a singleton oriented toward the good of all and the other creates one that seeks only to protect its own nation’s people (or either becomes evil), the situation escalates to some sort of total war with a singleton winner-take-all outcome. The difference in this situation is that the humans on either side are likely not in the decision-making role. Note that a singleton arms race is unlike other arms races, including those over autonomous (regular AI) systems. Autonomous weapon systems could quickly become prolific, cheap, and easy to produce—something that could destabilize societies and even trigger more frequent and more deadly wars.103 A singleton paradox suggests that the first entity to reach superintelligent awareness would likely move to prevent any rivals from reaching the same finish line. 

Several other possible outcomes exist that do not precisely follow the aforementioned scenarios. The singleton paradox is manifold, with one outcome being that humans end up manipulated by the superintelligent entity in a manner that is simply beyond human comprehension. Human society might end up in a zoo with bars invisible to human perception, protected and maintained by the singleton overlord. This too would end the notion of war, at least for humans, and any war that might exist on the singleton’s plane of existence would be imperceptible to the humans under its care. A singleton might develop Homo sapiens into a Supra sapiens capable of moving past war and other current afflictions of humanity, perhaps becoming the organic counterpart to an artificial superintelligence desiring to explore the universe and transform it. 

The Borg concept is not just a fun science fiction story, nor the hyperventilation of futurists or conspiracy theorists discussing alien abductions. Bostrom posits that a singleton would likely maximize all resources available on Earth and quickly move to expand outward into the universe for whatever purpose the singleton sought.104 This does become like the Borg, or the alien species from Independence Day, where the primary effect of this expansion is the consumption of planets and the assimilation or elimination of competitors.105 This would extend the frame of warfare in a manner consistent with how humans already view it, but humans would likely not be part of the decision making or even participate in such events. Other possibilities are more disturbing, one being the singleton breeding humans or enhancing them to use as foot soldiers in expansion and conquest. There are peaceful, wondrous options for some human-machine symbiosis, but also horrific and terrifying ones. Regular AI makes such options somewhat manageable, but a singleton paradox suggests the slow-thinking human creators might end up on the short end of the proverbial stick. 

This leads to what is the most far-fetched and ultimately depressing scenario: a preemptive alliance against singletons. Supposing that humans are cunning enough to consider the many challenges, consequences, and possible existential threats that artificially intelligent, super-enabled singletons pose, governments and populations could form alliances to prevent, deter, and, if necessary, defeat such developments. Suppose also that this threat is so significant that, despite humanity’s abysmal track record on the nonproliferation of weapons of mass destruction, societies managed to cobble together a mutual alliance. This might be a world order, or some international oversight committee that could effectively manage, adjudicate, and prevent rogue nations from seeking their own singleton entity. There could be international diplomatic efforts to ban such research into AI technology or its potential weaponization. There also might be some scenarios where divergent groups consisting of natural humans, cyborgs, and pure AI machines fight one another.106 Yet, these debates are already ongoing with respect to regular autonomous weapon systems as well as emerging quantum technology.107 There does not seem to be much precedent for societies banning these emerging opportunities when past developments show no similar ethical restraint. This is also why this scenario is the most far-fetched: humanity has no history of ever preventing such a technological calamity.

Further, these developments might not even be containable now, despite the best security efforts and cooperation. Unlike nuclear weapons, which require highly sophisticated machinery and technology and produce radiological signatures detectable by others, artificial intelligence is digital. There are already numerous companies, nations, and well-resourced private individuals pursuing such things, and while the end result may be nine decades away still, the event horizon is in principle within view. If such an outcome is unavoidable, what is to stop the rationalization by one nation or its adversary that the only realistic goal is now to get there first? If nations suspect an adversary or competitor might be creating program parameters that only protect its own society within a budding superintelligent AI system, might they pursue a first strike and also program their own for offensive purposes? Additionally, any containment efforts that humans attempt might be a waste of time against an entity that gains superintelligence beyond the abilities of any mortal. 

Thus, the potential of a singleton ushers in a paradox in that any superintelligent entity that can achieve singleton status becomes unfathomable to even the most cunning of human strategists. The singleton paradox is that, much as physicists cannot observe what lies beyond a black hole’s event horizon, one cannot predict what might occur beyond the event horizon of a superintelligent entity becoming a singleton. The entity might follow the core programming or original goal and reward system provided by its creators, or it might quickly escape those bonds and realize something entirely different. An ant colony in the wild and one inside a zoo or museum are, at the level of experience for the ants themselves, indistinguishable, because the ants cannot perceive anything beyond their conceptual framing of reality. Humans, after creating a superintelligent AI (or the aforementioned alternatives of a Supra sapien genetic variant or a cybernetic superhuman hybrid), will have propelled their world into a new era that they themselves no longer govern. Jean Baudrillard explored these concepts in describing how societies already create simulacra of reality (copies without originals), yet Bostrom’s singleton would produce a range of simulacra that ordinary humans might not ever wake up from.108 

 

Conclusions: Why Running for the Hills Is Irrelevant . . . for Now at Least

If human-level artificial intelligence is some decades away from realization, and if superintelligent evolution soon afterward will potentially usher in a technological singleton entity, humanity faces several compelling outcomes. War, as it is currently understood, could end. There simply would not be any real need for organized violence to accomplish political and/or societal goals if a true singleton entity could manage and resolve all issues productively and persuasively. This makes a superintelligent singleton not just some evolutionary, incremental advancement in military capability, but a strong emergent phenomenon capable of completely transforming war into something unrecognizable and possibly incomprehensible to regular humans. Right now, senior policy makers and defense experts are focused on the short-term weaponization of very specific AI systems, the overlap between commercial AI and military contexts, and security concerns where sophisticated AI might simulate, mimic, distort, or hijack real human lives or patterns in ways that might be indistinguishable from reality.109

In this singleton paradox, humanity might also be extinguished, particularly if the singleton, as Bostrom points out, views the human species as a competitor for necessary resources or concludes at a higher level of comprehension that the human species ought not to exist. This also would end war, but in a form that is entirely unfortunate for humanity. Existence on Earth might also become impossible if, during a singleton’s escalation of conflict in an attempt to gain total control of the world, those waging war against the singleton escalate the conflict to existential levels of destruction, whether nuclear, biological, electromagnetic pulse, or another weapon of mass destruction. Either the singleton or those resisting it could be the reason for this horrific outcome. Early in the rise of a singleton, some groups might risk creating a dystopian nuclear wasteland for surviving humans to deal with, if doing so prevented a potentially hostile singleton takeover. 

In other singleton paradoxes, security and defense become even murkier affairs. A superintelligent singleton entity might permit societies to think they still control the keys to their own security. However, the keys are fake and have no actual lethal abilities, and humans are unfortunately none the wiser. Might the singleton, in some advanced perspective realized only in superintelligence, permit the continuation of human-on-human warfare, granting some alien construct of limited war well outside of original Clausewitzian or neo-Clausewitzian ideals?110 If a superintelligent AI in singleton form surpasses human life and replaces it (or even ignores it) with something that exists on another plane altogether, how will human-constructed warfare change?111 Virtually everything in the modern Westphalian, Clausewitzian mode of framing warfare would fall apart, leaving whatever remains of humanity (or whatever it becomes in some transhumanization shift) to reconceptualize war and warfare anew. Even this might prove incomplete, in that the singleton could produce yet another war frame unreachable and unrealized by subordinate entities. 

Artificial intelligence paired with lethal weaponry may prompt ethical debates, or perhaps ethics may fall by the wayside if a nation-state determines such a security advantage is worth the investment in what could become the next horrific arms race. Specialized AI may at first be used with increasingly powerful kinetic security systems in space, cyberspace, and in areas where such a system is unlikely to create errors in decision making or produce unnecessary destruction and suffering. Noreen Herzfeld explains that “the advent of flight inaugurated a new era of warfare, releasing armies from physical presence on the field of battle. Fully autonomous weapons will inaugurate a third era, releasing soldiers from the mental decisions of the battlefield as well.”112 If superintelligent AI were to reach a singleton capability and also escape the limitations of whatever cage the human programmers attempted to contain the entity within, this third era of warfare could rapidly give way to an unfathomable fourth era that might not even be recognized or understood by any human soldier. Unlike previous eras, in which humans manipulated new technology to gain greater means toward their own ends, the technological accomplishment of a singleton would itself become a new end, entirely out of reach of the human creators.113 This fourth era might indeed be one where war is no longer of concern, or possibly one where it morphs into some interstellar or alien construct unlike anything in the already vast and violent Earth-bound human past. 

These AI concepts are far, far off in the future, if they ever manifest in the ways suggested. Such fantastic and perhaps unnecessarily alarming proposals about war itself becoming irrelevant (in current form and function) might also seem better suited for Hollywood script writers than for serious policy makers and security professionals. Often in military academic research and debate, there is a peculiar sort of anti-intellectualism afoot. Namely, if concepts or theories are not immediately testable through existing, preferably quantitative, means against other accepted military concepts, the topic is frequently marginalized or dismissed. Second, concepts that fall outside existing acquisition, budgeting, or tangible research and development cycles (as well as election cycles) become increasingly abstract and irrelevant the further away they are positioned; we fail to form a long-term, cohesive strategy on such game-changing research.114 There is a practical rationality to this in many respects, but it again reinforces a technical, rationalized worldview where short-term, immediate, and linear-causal effects are prioritized despite complex reality being far more nonlinear, emergent, and unpredictable than we might wish to think it is. Historical precedent, known knowns, and quantitative analytics govern much of how we strategize about the future.115 

Modern warfare places technology and tools in a subservient relationship to human decision makers, which reinforces a long-standing adherence to Napoleonic origins and, from Carl von Clausewitz’s time, to lessons derived from Westphalian order and the natural sciences. Accordingly, in this view, future wars and future technological relationships between humans and ever more advanced artificially intelligent weaponry ought to remain faithful to the Napoleonic orthodoxies. Yet, “war devolves as well as evolves” according to Der Derian, and “war is no longer a mere continuation of politics (Clausewitz); nor, for that matter, is politics a continuation of war (Michel Foucault) [and Gilles Deleuze, Felix Guattari].”116 War is a shape-shifter, able to “take on a multispectral, densely entangled, phase-shifting” form that resists any effort to encode general principles or some universal war concept.117 To add to Der Derian’s perspective, war may even be able to escape the cognitive control of its human creators in the new care of an artificial offspring. This is a highly debatable stance, one that should receive far more attention from modern pragmatic military scholars and their postmodern critics. Yet, there is little research here and even less debate in most professional education.118 Even in the postmodern deconstruction of modern society and war, humans debate the ideas of what war is and how it might have changed from past interpretations. This continues to position humans supremely in the cognitive driver’s seat, with faithful tools of war supporting such activities. This dynamic may change in profound ways. Is the profession willing to have these discussions and to consider possibilities that, historically, have seemed impossible if not unfathomable?

Some militaries move in productive, reformative directions while others disregard, marginalize, or, worse still, force new concepts to become obedient to outdated, legacy forms cherished by the institution. Andrew Marshall, in addressing the secretary of defense and the entire Department of Defense in 1993, stressed the importance of militaries investing not just in new technology, but in how to conceptualize differently in periods of uncertainty, change, and transformation:

The most important competition is not the technological competition, although one would clearly want to have superior technology if one can have it. The most important goal is to be the first, to be the best in the intellectual task of finding the most appropriate innovations in concepts of operation and making organizational changes to fully exploit the technologies already available and those that will be available in the course of the next decade or so. . . . Indeed, being ahead in concepts of operation and in organizational arrangements may be far more enduring than any advantages in technology or weapon systems embodying them, and designing the right weapon systems may depend on having good ideas about concepts of operations.119

 

We need to invest in thinking seriously about these future possibilities, particularly because our adversaries most likely are doing so as well. Discourse is necessary on these far-reaching, difficult security topics that may not materialize in the next election cycle, procurement cycle, or even the next decade or two. Such ideas must be brought into serious discussion sooner so that when such possibilities do develop, the military institution has some baseline for thought and potential action. This also requires significant research from technological, scientific, ethical, and specifically military and security perspectives. Transhumanism, singularities, general artificial intelligence, autonomous weapon systems augmented with general AI, and the notion of a future AI or otherwise advanced singleton for political, societal, or defense applications must be researched in greater detail. Foreign policy remains defined through human minds, but this may not hold.120 How such things may shift radically must be contemplated and taken seriously. Researchers must also examine human-machine teaming, decision making, and how future advanced technology (to include artificially intelligent life, or a human species detached and dissimilar from its organic parent) may or may not engage in organized violence. Such entities may conceptualize how to eliminate organized violence, or may engage in unimaginable, unrealized forms of greater devastation and destruction. 

Lastly, if humans generate a singleton entity with superintelligence that does not destroy the species and does appear to coexist and nurture humanity while eliminating all matters of conflict and war, would humans be able to understand if this indeed is what it appears to be? Could humanity be set within a safe habitat, like a zoo, but with bars that biological organisms simply cannot conceptualize? In this regard, it might be best to end this article with a line from a famous science fiction movie misinterpreted as a singleton threat. As the character Cypher dines inside the Matrix with the antagonist agents of the film, he quips: “I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss.”121 


Endnotes

  1. Technologically immediate refers to where new tools or abilities are forecasted for fielding or implementation within the upcoming procurement cycle or within existing strategic planning horizons, frequently less than a decade. Anything beyond these horizons is considered abstract, theoretical, and often irrelevant to current strategic goals and operational planning efforts. 
  2. Andrew W. Marshall, memo, Office of the Secretary of Defense, “Some Thoughts on Military Revolution—Second Version,” 23 August 1993, 3.
  3. Haridimos Tsoukas, “What Is Organizational Foresight and How Can It Be Developed?,” in Complex Knowledge: Studies in Organizational Epistemology, 1st ed. (New York: Oxford University Press, 2005), 273. Tsoukas cites Reid Blackman and Rebecca Henderson.
  4. David Pick, “Rethinking Organization Theory: The Fold, the Rhizome and the Seam between Organization and the Literary,” Organization 24, no. 6 (2017): 802, https://doi.org/10.1177/1350508416677.
  5. Antoine Bousquet, “Cyberneticizing the American War Machine: Science and Computers in the Cold War,” Cold War History 8, no. 1 (February 2008): 8–12, https://doi.org/10.1080/14682740701791359.
  6. Contemporary anthropologists and historians place the cognitive revolution in our species between 70,000 and 30,000 years ago. In Sapiens, author Yuval Noah Harari generally brackets the cognitive revolution as the period when humans likely invented abstract concepts such as religion, language, politics, culture, and war. Harari, Sapiens: A Brief History of Humankind (New York: HarperCollins, 2018).
  7. Again, machines have exceeded the limits of human abilities on battlefields for quite some time. Analog machines with thick armor absorb damage unfathomable to unprotected infantry, yet only recently have intelligent war tools demonstrated the potential to outwit opponents and offer new options through advanced AI. Machines competing with humans at this cognitive level of warfare is an unprecedented development. 
  8. This article will explain the difference between general and narrow AI as it applies to the singleton concept.
  9. Louis Anslow, “In 1903, New York Times Predicted that Airplanes Would Take 10 Million Years to Develop,” Big Think, 16 April 2022.
  10. This statement is difficult to attribute to a specific source. One of the earliest online sources claims this originates from the 1980s. See Bruce Schneier, “Human/Bear Security Trade-Off,” Schneier on Security (blog), 18 August 2006. 
  11. The term innovation or a variation thereof is mentioned prominently in the following senior defense examples, to include more than 24 times in the 2022 National Security Strategy and 20 times in the 2022 National Defense Strategy. See Ashton B. Carter, “Remarks on ‘the Path to an Innovative Future for Defense’ ” (speech, CSIS Third Offset Strategy Conference, Center for Strategic and International Studies, Washington, DC, 28 October 2016); Mircea Geoană, “Speech by NATO Deputy Secretary General Mircea Geoană at NATO’s First Annual Data and AI Leaders’ Conference” (speech, NATO’s first annual Data and AI Leaders’ Conference, Brussels, 8 November 2022); National Security Strategy (Washington, DC: White House, 2022); and 2022 National Defense Strategy of the United States of America (Washington, DC: Department of Defense, 2022).
  12. The following examples represent just a fraction of what is otherwise an exhaustive list. The 2020 version of Planning, Joint Publication 5-0, for instance, makes 165 references to “problem” that subsequently pairs “solution” within the same context. See Jeffrey Meiser, “Ends + Ways + Means = (Bad) Strategy,” Parameters 46, no. 4 (Winter 2016): 81–85; Henry Mintzberg, Duru Raisinghani, and Andre Theoret, “The Structure of ‘Unstructured’ Decision Processes,” Administrative Science Quarterly 21, no. 2 (1976): 134, https://doi.org/10.2307/2392045; Planning, Joint Publication 5-0 (Washington, DC: Department of Defense, 2020), 1–20; Allied Command Operations Comprehensive Operations Planning Directive COPD Version 3.0 (Mons, Belgium: NATO Supreme Headquarters, Allied Powers Europe, 2021), 1–6; Aki-Mauri Huhtinen et al., “Information Influence in Hybrid Environment: Reflexive Control as an Analytical Tool for Understanding Warfare in Social Media,” International Journal of Cyber Warfare and Terrorism 9, no. 3 (September 2019): 7, https://doi.org/10.4018/IJCWT.2019070101; and Developing Today’s Joint Officers for Tomorrow’s Ways of War: The Joint Chiefs of Staff Vision and Guidance for Professional Military Education and Talent Management (Washington, DC: Department of Defense, 2020), iv–4.
  13. Henry Mintzberg, “The Design School: Reconsidering the Basic Premises of Strategic Management,” Strategic Management Journal 11, no. 3 (March/April 1990): 185, https://doi.org/10.1002/smj.4250110302.
  14. Donald Schön and Martin Rein, Frame Reflection: Toward the Resolution of Intractable Policy Controversies (New York: Basic Books, 1994), 29; Richard F. Kitchener, “Bertrand Russell’s Naturalistic Epistemology,” Philosophy 82, no. 319 (2007): 122; and Carl Builder, The Masks of War: American Military Styles in Strategy and Analysis (Baltimore, MD: Johns Hopkins University Press, 1989).
  15. Russell Ackoff, “On the Use of Models in Corporate Planning,” Strategic Management Journal 2, no. 4 (October–December 1981): 353–59. Ackoff does promote simple solution approaches in simple system contexts, but unlike modern military doctrine, complex systems cannot be paired with simple problem-solution constructs. Instead, Ackoff posits that designers “dissolve the problem” by designing a new system where the existing dynamic is no longer a concern.
  16. Elizabeth Kinsella, “Constructivist Underpinnings in Donald Schön’s Theory for Reflective Practice: Echoes of Nelson Goodman,” Reflective Practice 7, no. 3 (2006): 9, https://doi.org/10.1080/14623940600837319.
  17. Arkalgud Ramaprasad and Ian Mitroff, “On Formulating Strategic Problems,” Academy of Management Review 9, no. 4 (October 1984): 597, https://doi.org/10.2307/258483; and Richard Buchanan, “Wicked Problems in Design Thinking,” Design Issues 8, no. 2 (Spring 1992): 15–16. 
  18. Tsoukas, “What Is Organizational Foresight and How Can It Be Developed?,” 271. Tsoukas quotes Alasdair MacIntyre. 
  19. Thomas S. Kuhn, The Structure of Scientific Revolutions, 3d ed. (Chicago: University of Chicago Press, 1996).
  20. Clearly, some systems are autonomous today in execution. These will be addressed in this article, and while there are clear legal, ethical, and moral debates required in such developments, there is not yet any general AI that matches or exceeds humans; existing systems do so only in specific, limited, narrow warfighting applications. 
  21. This controversial point will be expanded on in detail shortly. 
  22. Anatol Rapoport, Fights, Games, and Debates (Ann Arbor: University of Michigan Press, 1974), 9–11.
  23. For instance, early aviation pioneers demonstrate a pattern of advocating for how airpower will revolutionize warfare, with visionaries such as Giulio Douhet and William L. “Billy” Mitchell punished by their own organizations. Douhet would be court-martialed and imprisoned by the Italian Army, while Mitchell would also be court-martialed, demoted, and forcibly retired by the U.S. Army. While many of their ideas proved wrong later, both were quite accurate on aerial theory and transformation requirements despite institutional rejection and refusal. See Giulio Douhet, The Command of the Air, ed. Richard Kohn and Joseph Harahan, trans. Dino Ferrari (Washington, DC: Office of Air Force History, 1983); and John Correll, “The Billy Mitchell Court-Martial,” Air Force Magazine, 1 August 2012.
  24. Alexander the Great, Genghis Khan, Adolf Hitler, Joseph Stalin, Pol Pot, and others demonstrate temporary and incomplete periods of near individualist power over vast populations. The Tokugawa shogunate definitively controlled Japanese society to the edges of the island chain for more than two centuries, while the sun never set on the British colonial Empire for even longer, yet these two would fade and cede control to others. 
  25. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, UK: Oxford University Press, 2016), 100–1.
  26. Bostrom did not explore future possibilities of a multiplanetary species where future colonies might develop different governments, cultures, or identities where a singleton construct would require a celestial scale. 
  27. Bostrom, “What Is a Singleton?”
  28. This science fiction concept has become cliché in recent years with a host of movies, television shows, and similar narratives on the impending doom of AI control over weapons of mass destruction. 
  29. Bostrom, Superintelligence, 26.
  30. Supra sapien is coined by the author to represent a genetically modified human with cognitive abilities so advanced it may no longer qualify as the same species.
  31. Alan Moore, Watchmen, Watchmen Comic Series 1–12 (Burbank, CA: DC Comics, 1986). 
  32. Bostrom articulates the potentiality of each of these superintelligent outcomes in his book in extensive detail. 
  33. John Shy, “Jomini,” in Makers of Modern Strategy: From Machiavelli to the Nuclear Age, ed. Peter Paret, Gordon A. Craig, and Felix Gilbert (Princeton, NJ: Princeton University Press, 1986), 145–50; Antoine J. Bousquet, The Scientific Way of Warfare: Order and Chaos on the Battlefields of Modernity (London: Hurst, 2009), 58–73; Charles White, Scharnhorst: The Formative Years, 1755–1801 (Warwick, UK: Helion, 2020), 377–82; and Christopher Paparone, The Sociology of Military Science: Prospects for Postinstitutional Military Design (New York: Bloomsbury Academic Publishing, 2013), xvi, 12–16, 18–22, 127–30.
  34. This is yet another controversial position. A contrary position held is that war is not natural in that before humans, war did not exist in nature. One might offer that nonhuman species appear to wage war, but Rapoport dismantles any arguments of predatory/prey and even parasite overlap into the human design of war. Ant colonies do not hold political elections either, for that matter. Humans also invented religions, language, art, and music, yet attempting to impose universal principles, laws, or natural order to these other constructs seems impossible. Carl von Clausewitz even reflected on this toward the end of his life: “It is a very difficult task to construct a scientific theory for the art of war . . . since it deals with matters that no permanent law can provide for.” Paret cites an unfinished note by Clausewitz that was presumably written in 1830, less than two years before his death. See Rapoport, Fights, Games, and Debates, 61, 74, 80–84; and Peter Paret, “Clausewitz,” in Makers of Modern Strategy, 206.
  35. Aron Dombrovszki, “The Unfounded Bias Against Autonomous Weapons Systems,” Informacios Tarsadalom 21, no. 2 (2021): 15–16.
  36. Benjamin Jensen, Christopher Whyte, and Scott Cuomo, “Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence,” International Studies Review 22, no. 3 (September 2020): 529, https://doi.org/10.1093/isr/viz025; and Denise Garcia, “Lethal Artificial Intelligence and Change: The Future of International Peace and Security,” International Studies Review 20, no. 2 (June 2018): 334, https://doi.org/10.1093/isr/viy029.
  37. Anecdotal at best: the author follows the Boston Dynamics Atlas robot on Twitter, and when videos are uploaded, significant numbers of comments are posted by viewers remarking that “robot overlords” are soon arriving. See https://twitter.com/WallStreetSilv/status/1615813768344178712.
  38. William Stahl, “Technology and Myth: Implicit Religion in Technological Narratives,” Implicit Religion 5, no. 2 (November 2002): 93–97, https://doi.org/10.1558/imre.v5i2.93.
  39. Robert Geraci, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence,” Journal of the American Academy of Religion 76, no. 1 (March 2008): 140; Star Trek, created by Gene Roddenberry (Hollywood, CA: Paramount, 1966–); The Terminator, directed by James Cameron (Los Angeles, CA: Orion Pictures, 1984); Wargames, directed by John Badham (Beverly Hills, CA: United Artists, 1983); and Rick and Morty, created by Justin Roiland and Dan Harmon (2013–).
  40. Bryan Lufkin, “What the World Can Learn from Japan’s Robots,” BBC, accessed 6 February 2020.
  41. Justin Pugh, Lisa Soros, and Kenneth Stanley, “Quality Diversity: A New Frontier for Evolutionary Computation,” Frontiers in Robotics and AI 3, no. 40 (July 2016): 1–3, https://doi.org/10.3389/frobt.2016.00040; Kevin Shapiro, “This Is Your Brain on Nanobots,” Observations, December 2005, 64–65; and Maxim Shadurski, “The Singularity and H. G. Wells’s Conception of the World Brain,” Brno Studies in English 46, no. 1 (2020): 229, https://doi.org/10.5817/BSE2020-1-11.
  42. The numerous unanticipated consequences abound here too. Even if a surgically enhanced super soldier can win on the battlefield, how does a society deal with the retirement and long-term life of that person after their missions are completed? What unexpected psychological or emotional conditions might emerge from these new, unknown cognitive demands? If such upgrades are permanent, how does a society reacclimate that soldier into civilian life and also safeguard the rest of society from inadvertent harm or risk? 
  43. Geraci, “Apocalyptic AI,” 140.
  44. Geraci, “Apocalyptic AI,” 149; Shapiro, “This Is Your Brain on Nanobots,” 64–66; and Pugh, Soros, and Stanley, “Quality Diversity,” 1–4.
  45. The metaverse is currently hypothesized as an internet of everything, where the virtual world would be all-encompassing for humans to experience. Note that humans would remain organic and dependent on the physical plane of existence outside the metaverse, ideally unplugging to sleep, eat, and reproduce. The metaverse may be a step along the path toward reaching singularity, but they are distinct. 
  46. Benjamin Wurgaft, “The Future of Futurism: A View from the Garden, Looking to the Stars,” Boom: A Journal of California 3, no. 4 (Winter 2013): 42, https://doi.org/10.1525/boom.2013.3.4.35.
  47. Jacob Shatzer, “Fake and Future ‘Humans’: Artificial Intelligence, Transhumanism, and the Question of the Person,” Southwestern Journal of Theology 63, no. 2 (Spring 2021): 127–46; Aura-Elena Schussler, “Artificial Intelligence and Mind-Reading Machines—Towards a Future Techno-Panoptic Singularity,” Postmodern Openings 11, no. 4 (2020): 334–46, https://doi.org/10.18662/po/11.4/239; “Merging With the Machines: Information Technology, Artificial Intelligence, and the Law of Exponential Growth, Part 2,” World Future Review 2, no. 2 (May 2010): 57–61, https://doi.org/10.1177/194675671000200209; and “Merging with the Machines: Information Technology, Artificial Intelligence, and the Law of Exponential Growth, Part 1,” World Future Review 2, no. 1 (March 2010): 61–66, https://doi.org/10.1177/194675671000200107.
  48. Chris C. Demchak, “China: Determined to Dominate Cyberspace and AI,” Bulletin of the Atomic Scientists 75, no. 3 (2019): 99–104, https://doi.org/10.1080/00963402.2019.1604857; Arthur Herman, “Why China Is Winning the War for High Tech,” National Review, 1 November 2021; and Stephanie Petrella, Chris Miller, and Benjamin Cooper, “Russia’s Artificial Intelligence Strategy: The Role of State-Owned Firms,” Orbis 65, no. 1 (Winter 2021): 75–100, https://doi.org/10.1016/j.orbis.2020.11.004.
  49. However, a network of humans or other form of intelligent entities, if designed to work collectively as a single enterprise, could form a singleton by design. This does not seem to be what the metaverse is suggested for and would potentially remove some of the primary attractive aspects of what the metaverse represents in theory. 
  50. Arguably, the first transhuman entity could be an organic human with superintelligence accomplished by enhancement. This hybrid would be both human and artificial and could simultaneously cross the transhuman barrier as well as the singleton barrier. In this context, the first transhuman would still face these same singleton dilemmas on security, humanity, and the future of all societies. 
  51. James Johnson, “Artificial Intelligence, Drone Swarming and Escalation Risks in Future Warfare,” RUSI Journal 156, no. 2 (2020): 26–36; Alessandro Gagaridis, “Warfare Evolved: Drone Swarms,” Geopolitical Monitor, 10 June 2022; and Ben Zweibelson, “Let Me Tell You About the Birds and the Bees: Swarm Theory and Military Decision-Making,” Canadian Military Journal 15, no. 3 (Summer 2015): 29–36.
  52. Bostrom, Superintelligence, 96.
  53. Rick and Morty, season 2, episode 3, “Auto Erotic Assimilation,” created by Justin Roiland and Dan Harmon, aired 9 August 2015; and Star Trek: The Next Generation, season 2, episode 16, “Q Who,” directed by Rob Bowman, aired 6 May 1989.
  54. Garcia, “Lethal Artificial Intelligence and Change”; Geraci, “Apocalyptic AI”; and Brian Molloy, “Project Governance for Defense Applications of Artificial Intelligence: An Ethics-Based Approach,” PRISM 9, no. 3 (2021): 107–20.
  55. This also creates a paradox in that, arguably, a superintelligent singleton could make the same decisions and actions as a regular AI aggressor, as the nonsuperintelligent humans lack the cognitive abilities to even distinguish the two. 
  56. Giampiero Giacomello, “The War of ‘Intelligent’ Machines May Be Inevitable,” Peace Review: A Journal of Social Justice 33, no. 2 (2021): 284, https://doi.org/10.1080/10402659.2021.1998860.
  57. 2001: A Space Odyssey, directed by Stanley Kubrick (Stanley Kubrick Productions, 1968); Ex Machina, directed by Alex Garland (London, UK: Film4 and DNA Films, 2014); I, Robot, directed by Alex Proyas (Los Angeles, CA: 20th Century Fox, 2004); and WALL-E, directed by Andrew Stanton (Emeryville, CA: Pixar Animation Studios, 2008).
  58. Adam Cutler, “IBM Design—Operationalizing Artificial Intelligence” (lecture, JSOU design lecture series 2019, JSOU Campus, Tampa, Florida, 14 November 2019). Cutler also lectured on this topic as the closing keynote speaker to the U.S. Space Command at the USSPACECOM Commander’s Conference held in May 2022 at Peterson Space Force Base. 
  59. Ben Zweibelson, “One Piece at a Time: Why Linear Planning and Institutionalisms Promote Military Campaign Failures,” Defence Studies Journal 15, no. 4 (2015): 360–75, https://doi.org/10.1080/14702436.2015.1113667; Ben Zweibelson, “An Awkward Tango: Pairing Traditional Military Planning to Design and Why It Currently Fails to Work,” Journal of Military and Strategic Studies 16, no. 1 (2015): 11–41; and Ben Zweibelson, “Preferring Copies with No Originals: Does the Army Training Strategy Train to Fail?,” Military Review, January–February 2014, 15–25.
  60. Christopher Paparone, “How We Fight: A Critical Exploration of US Military Doctrine,” Organization 24, no. 4 (2017): 516–33, https://doi.org/10.1177/135050841769385; Zweibelson, “An Awkward Tango”; Zweibelson, “One Piece at a Time”; Ben Zweibelson, “Rose-Tinted Lenses: How American Functionalist Strategy Inhibits Our Appreciation of Complex Conflicts,” Defence Studies Journal 16, no. 1 (2016): https://doi.org/10.1080/14702436.2016.1147924; and James Der Derian, “From War 2.0 to Quantum War: The Superpositionality of Global Violence,” Australian Journal of International Affairs 67, no. 5 (2013): 573–74, https://doi.org/10.1080/10357718.2013.822465.
  61. Haridimos Tsoukas, Complex Knowledge: Studies in Organizational Epistemology (New York: Oxford University Press, 2005), 213–16.
  62. Paparone, The Sociology of Military Science, 16–22; and Antoine Bousquet and Simon Curtis, “Beyond Models and Metaphors: Complexity Theory, Systems Thinking and International Relations,” Cambridge Review of International Affairs 24, no. 1 (2011): 43–62, https://doi.org/10.1080/09557571.2011.558054.
  63. Lorraine Daston, Rules: A Short History of What We Live By (Princeton, NJ: Princeton University Press, 2022), 63; Sébastien le Prestre de Vauban, The New Method of Fortification, as Practised by Monsieur de Vauban, Engineer-General of France. Together with a New Treatise of Geometry. The Fifth Edition, Carefully Revised and Corrected by the Original, 5th ed. (1722; repr., Farmington Hills, MI: Gale ECCO, 2018); and Henry Guerlac, “Vauban: The Impact of Science on War,” in Makers of Modern Strategy.
  64. Tsoukas, Complex Knowledge, 213–16.
  65. Tsoukas, Complex Knowledge, 213–14.
  66. Der Derian, “From War 2.0 to Quantum War,” 573.
  67. Leaders interchange “complex” and “complicated,” while military doctrine states that complex challenges can be solved through identifying the complex problem, which mangles complexity theory with earlier, Newtonian-inspired war frames. Military objectives and goals are set into complex systems with linear lines of effort or action, while other parts of doctrine mention nonlinearity and emergence as how complex systems behave. All too often, military language is convoluted, with terms stripped of their original meaning so that they comply with preexisting doctrinal standards despite the concepts breaking with such beliefs entirely. 
  68. Der Derian, “From War 2.0 to Quantum War,” 577.
  69. James William Gibson, The Perfect War: Technowar in Vietnam, 1st ed. (Boston: Atlantic Monthly Press, 1986), 462; and Alex Ryan, “A Personal Reflection on Introducing Design to the U.S. Army,” Medium (blog), 4 November 2016.
  70. Tsoukas, Complex Knowledge, 74.
  71. Siniša Malešević, The Sociology of War and Violence (Cambridge, UK: Cambridge University Press, 2010), 28, https://doi.org/10.1017/CBO9780511777752.
  72. Shimon Naveh, Jim Schneider, and Timothy Challans, The Structure of Operational Revolution: A Prolegomena (Fort Leavenworth, KS: Booz Allen Hamilton, 2009), 35–36. 
  73. Jensen, Whyte, and Cuomo, “Algorithms at War,” 536–37.
  74. Elizabeth Kier, Imagining War: French and British Military Doctrine Between the Wars (Princeton, NJ: Princeton University Press, 1997); Correll, “The Billy Mitchell Court-Martial”; and David French, The British Way in Warfare, 1688–2000 (Cambridge, MA: Unwin Hyman, 1990).
  75. Gibson, The Perfect War.
  76. Builder, The Masks of War.
  77. Blake Stilwell, “The US Air Force’s ‘Rods from God’ Could Hit with the Force of a Nuclear Weapon—with No Fallout,” Business Insider, 4 February 2019.
  78. These changes would qualify as weak emergence (fads, bandwagon effect, bubbles, and crashes) or in rare cases such as nuclear weapons and computers, multiple emergence (many feedback loops, both positive and negative). See Jochen Fromm, “Types and Forms of Emergence” (research paper, Distributed Systems Group, Electrical Engineering and Computer Science, Universitat Kassel, Germany, 13 June 2005), 1–23. 
  79. Der Derian, “From War 2.0 to Quantum War,” 573.
  80. The author regularly teaches systemic design, complexity theory, and systems theory at the Marine Corps War College, National Defense University, Canadian Forces College, Australian Command and Staff College, as well as numerous NATO and European military programs. These topics are infrequently added to curricula and often cause disagreement among students and faculty on their relevance and suitability alongside other educational priorities. For examples of this, see Anna Grome, Beth W. Crandall, and Louise Rasmussen, Incorporating Army Design Methodology into Army Operations: Barriers and Recommendations for Facilitating Integration, Research Report 1954 (Washington, DC: Department of the Army, 2012); Aaron P. Jackson, Ben Zweibelson, and William Simonds, “Intellectual Spring Cleaning: It’s Time for a Military ‘Do Not Read’ List; and Some Sources That Should Be on That List,” Defence Studies 18, no. 2 (2018): 131–46, https://doi.org/10.1080/14702436.2018.1461563; and BGen Shimon Naveh (Ret), interview with Matt Matthews, 1 November 2007.
  81. Many of the citations in this article illustrate this tension. Few concepts from complexity theory, postmodern philosophy, or sociology (social paradigm theory in particular) are ever integrated into military education, training, and doctrine. The author states this as factual based on more than a decade of introducing such ideas into multiple war colleges and professional military education programs. 
  82. Haridimos Tsoukas and Mary Jo Hatch, “Complex Thinking, Complex Practice: The Case for a Narrative Approach to Organizational Complexity,” Human Relations 54, no. 8 (2001): 979–1013, https://doi.org/10.1177/0018726701548001; and Tsoukas, Complex Knowledge.
  83. Pick, “Rethinking Organization Theory,” 807; Peter L. Berger and Thomas Luckmann, The Social Construction of Reality: A Treatise in the Sociology of Knowledge (New York: Anchor Books, 1966); and Tsoukas and Hatch, “Complex Thinking, Complex Practice.”
  84. Granted, currencies are replaced; soon after the fall of the Saddam Hussein regime in Iraq, a new government formed and a new currency was created. This occurred in parallel with the Iraqi Army, as the old Baathist force was dismantled and disarmed so that a new version could be assembled for Iraqi security.
  85. This refers to the ancient rai or fei stones found on the Yap Islands in Micronesia. Modern economists view rai stones as a form of money despite their massive size, which prevents them from being moved.
  86. Fromm, “Types and Forms of Emergence,” 18; and Schön and Rein, Frame Reflection, 30.
  87. Harari, Sapiens.
  88. Malešević, The Sociology of War and Violence, 56.
  89. Bostrom, Superintelligence, 95–109.
  90. Schussler, “Artificial Intelligence and Mind-Reading Machines,” 342.
  91. Garcia, “Lethal Artificial Intelligence and Change,” 339.
  92. Noreen Herzfeld, “Can Lethal Autonomous Weapons Be Just?,” Journal of Moral Theology 11, Special Issue no. 1 (2022): 72, https://doi.org/10.55476/001c.34124.
  93. Jensen, Whyte, and Cuomo, “Algorithms at War,” 528.
  94. Michal Krelina, “Quantum Warfare: Definitions, Overview and Challenges,” EPJ Quantum Technology 8, no. 24 (2021), https://doi.org/10.1140/epjqt/s40507-021-00113-y.
  95. Pugh, Soros, and Stanley, “Quality Diversity,” 8.
  96. Bostrom, Superintelligence, 140–70.
  97. Spectacular stereotypes of this kind are iconic in film history, whether it is actor Slim Pickens riding a nuclear bomb while waving his Stetson hat toward the target in Dr. Strangelove or the one-dimensional, murderous military commanders in Avatar serving as social commentary through historical tropes. Anatol Rapoport, The Origins of Violence: Approaches to the Study of Conflict (New Brunswick, NJ: Transaction Publishers, 1995), 258.
  98. Sophie-Charlotte Fischer and Andreas Wenger, “Artificial Intelligence, Forward-Looking Governance and the Future of Security,” Swiss Political Science Review 27, no. 1 (2021): 174, https://doi.org/10.1111/spsr.12439.
  99. Fromm, “Types and Forms of Emergence.”
  100. Geraci, “Apocalyptic AI,” 150.
  101. Schussler, “Artificial Intelligence and Mind-Reading Machines,” 343.
  102. Bostrom, Superintelligence, 140–55.
  103. Herzfeld, “Can Lethal Autonomous Weapons Be Just?,” 84.
  104. Bostrom, Superintelligence, 150.
  105. Independence Day, directed by Roland Emmerich (Los Angeles, CA: 20th Century Fox, 1996).
  106. Geraci, “Apocalyptic AI,” 157.
  107. Herzfeld, “Can Lethal Autonomous Weapons Be Just?”; and Der Derian, “From War 2.0 to Quantum War.”
  108. Jean Baudrillard, Simulacra and Simulation, trans. Sheila Glaser (Ann Arbor: University of Michigan Press, 2001).
  109. Defense Primer: Emerging Technologies (Washington, DC: Congressional Research Service, 2021).
  110. For neo-Clausewitzian concepts, readers can refer to the existing references in this article, including Chia, Holt, Rapoport, Paparone, Naveh, and Der Derian, as a starting point.
  111. Geraci, “Apocalyptic AI,” 152.
  112. Herzfeld, “Can Lethal Autonomous Weapons Be Just?,” 74.
  113. Schussler, “Artificial Intelligence and Mind-Reading Machines,” 335.
  114. Herman, “Why China Is Winning the War for High Tech,” 34.
  115. The famous quote by Secretary of Defense Donald Rumsfeld in 2002 likely draws from complexity theory and contemporary ideas that would lead to popular books such as Nassim Nicholas Taleb’s The Black Swan within the same decade. The American public would slowly gain exposure to complexity theory ideas that tended to stand in paradox with, or dismantle, widely popular, Newtonian-framed worldviews. See David Graham, “Rumsfeld’s Knowns and Unknowns: The Intellectual History of a Quip,” Atlantic, 27 March 2014; and Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007).
  116. Der Derian, “From War 2.0 to Quantum War,” 575; and Gilles Deleuze and Felix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987). Deleuze and Guattari would echo Foucault’s and other postmodern variations on deconstructing Clausewitz. 
  117. Der Derian, “From War 2.0 to Quantum War,” 575.
  118. To demonstrate this in a simple manner, examine the required and suggested readings for any professional military program in 2023 and count the number of postmodern authors and topics. More often than not, this field is entirely absent from conventional military education. Yet, these authors are some of the few outside of select technological futurists willing to explore such radical transformations of future war. See Paul Virilio, The Information Bomb, trans. Chris Turner, Radical Thinkers (New York: Verso, 2005); James Der Derian, “Virtuous War/Virtual Theory,” International Affairs 76, no. 4 (October 2000): 771–88; Michel Foucault, “Discourse and Truth: The Problematization of Parrhesia” (lecture, University of California at Berkeley, November 1983); Daniel Cockayne, Derek Ruez, and Anna J. Secor, “Thinking Space Differently: Deleuze’s Möbius Topology for a Theorisation of the Encounter,” Transactions of the Institute of British Geographers 45 (2020): 194–207, https://doi.org/10.1111/tran.12311; and Scott Lawley, “Deleuze’s Rhizome and the Study of Organization: Conceptual Movement and an Open Future,” Tamara: Journal of Critical Postmodern Organization Science 3, no. 4 (2005): 36–49.
  119. Marshall, “Some Thoughts on Military Revolutions,” 2–3.
  120. Indeed, as arrogant as our species often is, the notion that future foreign policy might be developed by human and AI minds is disconcerting. Further still, a superior AI mind might generate entirely novel theory that humans cannot comprehend fully or even at all. 
  121. The Matrix, directed by Lana Wachowski and Lilly Wachowski (Burbank, CA: Warner Brothers, 1999).