

Journal of Advanced Military Studies

Marine Corps University Press
Quantico, Virginia

JAMS, vol. 16, no. 2

Conscientious Centaurs

Lethal Autonomous Weapons Systems, Human-Machine Teaming, and Moral Enmeshment

Lieutenant Commander Jonathan Alexander, USN

https://doi.org/10.21140/mcuj.20251602003


 

 Abstract: In modern warfare, lethal autonomous weapons systems (LAWS) introduce artificial intelligence–enabled capabilities that may execute lethal actions without final human judgment. Their development envisions human-machine teams (HMT) that combine machine precision with human situational awareness in what has been termed “centaur warfighting.” Although some argue that autonomous weapons will shield personnel from the psychological burdens of combat, others contend that such systems risk morally distancing human operators from lethal outcomes. This article instead examines how humans may become “enmeshed” with machines within HMT sociotechnical structures, producing new forms of moral exposure and potentially contributing to morally injurious experiences at the tactical level.

Keywords: autonomous weapons, human-machine team, HMT, centaur warfighting, moral injury, moral enmeshment, moral luck

 

Introduction1

In the history of war, technology has been discovered, designed, and deployed to increasingly distance combatants from their enemy. In modern warfare, the development and likely future deployment of lethal autonomous weapons systems (LAWS) represents a potentially new era, as artificial intelligence (AI)-enabled weapons may make lethal decisions apart from the final judgment and input of human warfighters. Paul Scharre argues in Army of None that “technology has brought us to a crucial threshold in humanity’s relationship with war. In future wars, machines may make life-and-death engagement decisions on their own.”2 The state of military affairs currently sits at this “crucial threshold,” as illustrated by defense-related news sources regularly publishing stories on the development of autonomous systems and on real-time innovation and experimentation with militarized emerging technologies in the Russo-Ukrainian War.3 Numerous factors contribute to the “relentless drive toward autonomy”—the nation’s decreased appetite for physical risk to military personnel; lower costs per unit of unmanned autonomous systems; a militarized emerging technologies arms race with peer and near-peer adversaries; the need for hardened systems in cyber-contested wartime environments; and the ever-increasing speed of algorithmic warfare that may not permit humans to stay meaningfully engaged in the kill chain.4

Current LAWS development does not envision armies of lethal autonomous robots flying, marching, and sailing through the battlespace independent of all collaborative human partnership and control. Some of the sensationalized rhetoric around LAWS makes this seem the case, but most of the literature and future operational concepts embrace human-machine teaming (HMT).5 Within this hybridized sociotechnical system of the HMT, the relationship between the human and machine has been metaphorically and cleverly depicted as “centaur warfighting.”6 The mythological centaur is a creature with a human head and upper body and a horse’s lower body. Paul Scharre leverages this image to describe the unique advantages that humans and machines both provide: “The best systems will combine human and machine intelligence to create hybrid cognitive architectures that leverage the advantages of each. Hybrid human-machine cognition can leverage the precision and reliability of automation, without sacrificing the robustness and flexibility of human intelligence.”7 As LAWS are currently developed and likely deployed in the future, a crucial question emerges—in the “centaur” HMT relationship, how might the machine’s autonomous lethal actions morally and psychologically impact the human warfighter as the end user at the tactical level of war?

While there has been significant scholarship on the ethical, moral, and legal dimensions of LAWS, to date there has been sparse research and discussion on the potential moral and psychic effects on those who will employ such weapon systems. The limited literature that does exist predominantly suggests that the deployment of LAWS in place of human combatants will either minimize war-related traumas like post-traumatic stress disorder and moral injury or will morally displace, distance, or desensitize servicemembers from the consequences of the lethal robot’s autonomous actions.8 However, as this article argues, LAWS may neither serve as a prophylactic against wartime moral and psychic distress nor simply displace, distance, or desensitize the humans who employ them; some philosophies of technology and studies on anthropomorphizing in human-robot relationships theorize how the human may become enmeshed and entangled with the machine in the sociotechnical architecture of the HMT. This enmeshment may very well extend to the moral effects and consequences caused by autonomous weapons, especially if the laws of war (LOW), namely the jus in bello principles of discrimination and proportionality, are violated.9 Given the possibility of enmeshment and entanglement, if a lethal machine violates LOW, this could create a potentially morally injurious experience for the human warfighter. When it comes to the possibility of moral enmeshment in the hybridized relationship between the human and machine, perhaps Scharre’s description of HMT as centaur warfighting is more prescient than his clever metaphorical use intends.

The structure of the article is as follows. It begins with brief overviews of both LAWS and moral injury, followed by a theoretical exploration of how the human combatant in a sociotechnical HMT may become morally enmeshed with the machine and ultimately feel a sense of moral responsibility for the machine’s autonomous actions. The term theoretical exploration is not used to avoid rigorous scholarship or deflect any burden of proof. In the same way that engineers study potential shortcomings or failures in structures yet to be built, the methodology of this exploration is the application of established philosophical and psychological theories. The investigation of technological and relational enmeshment occurs via: (1) an analysis of Bruno Latour’s philosophy of technology and his idea of the “collective”; (2) research on warfighters relationally anthropomorphizing the robots they team with in combat; and (3) an application of the philosophical concept of moral luck.10 The article concludes with recommended areas for future research so that militarized HMTs can be conscientious centaurs.

As emerging technologies are increasingly integrated into the future military context, illustrated by the Department of Defense’s (DOD) Replicator Initiative and the North Atlantic Treaty Organization’s (NATO) Allied Command Transformation, it is vital for civilian and military leadership—policy makers, defense planners, technologists, and commanders alike—to recognize critical inflection points and reflect on the human dimension of autonomous warfare.11 This not only incorporates considerations on how to wage war justly (jus in bello) but also includes intentional efforts to provide “the moral and psychological armor needed to preserve honor and perhaps even humanity during and after war.”12 This type of multifaceted reflection can foster an even greater freedom for servicemembers to fulfill the mission and fight in good conscience because they will trust that ethical and moral issues have been considered by their leaders in the design, development, and deployment of the technologies at their disposal. Warfighters and their families deserve nothing less.

 

Lethal Autonomous Weapon Systems

While space constraints limit a thorough discussion of LAWS and other conceptual issues (e.g., continuums of autonomy and lethality; technical components and concepts; ethical, moral, and legal concerns), a brief overview and definition of LAWS is useful.13 Department of Defense Directive (DODD) 3000.09, Autonomy in Weapon Systems, defines LAWS as

a weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.14 

 

Additionally, it is worth quoting in full the DOD’s definition of autonomy, contrasted with automation, in the 2018 Unmanned Systems Integrated Roadmap, 2017–2042:

Autonomy is defined as the ability of an entity to independently develop and select among different courses of action to achieve goals based on the entity’s knowledge and understanding of the world, itself, and the situation. Autonomous systems are governed by broad rules that allow the system to deviate from the baseline. This is in contrast to automated systems, which are governed by prescriptive rules that allow for no deviations. While early robots generally only exhibited automated capabilities, advances in AI and ML [machine learning] technology allow systems with greater levels of autonomous capabilities to be developed. The future of unmanned systems will stretch across the broad spectrum of autonomy, from remote controlled and automated systems to near fully autonomous, as needed to support the mission.15

 

LAWS are also often described by the place of the human operator “in/on/out of” the observe, orient, decide, act kill chain loop (a.k.a. John R. Boyd’s OODA loop).16 “In” the loop refers to the operator deciding what the robotic system does at every stage from target acquisition to engagement. “In” the loop is sometimes also referred to as semiautonomous weapons that “only engage individual targets or specific target groups that have been selected by a human operator.”17 “On” the loop means that the system will carry out most of its functions without a human operator, but the human may intervene at any time. “Out of” the loop (sometimes called “off the loop”) is a fully autonomous system where the machine carries out each action of OODA. DODD 3000.09 currently directs humans to stay in or on the loop, but it leaves room for future LAWS development and deployment.
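For readers who think in computational terms, the in/on/out of the loop distinction can be summarized in a minimal illustrative sketch. The Python below is purely conceptual; the names HumanRole and engagement_permitted are hypothetical and do not correspond to DODD 3000.09 language or to the control logic of any actual weapon system.

from enum import Enum, auto

class HumanRole(Enum):
    """Conceptual placement of the human relative to the OODA kill chain."""
    IN_THE_LOOP = auto()      # operator approves each engagement (semiautonomous)
    ON_THE_LOOP = auto()      # system acts; a supervising operator may override
    OUT_OF_THE_LOOP = auto()  # system completes observe-orient-decide-act unaided

def engagement_permitted(role: HumanRole, operator_approval: bool,
                         operator_override: bool) -> bool:
    """Illustrative only: whether a notional system may engage a selected target.

    'In the loop' requires affirmative operator approval, 'on the loop' proceeds
    unless the supervising operator overrides, and 'out of the loop' proceeds on
    the system's own decision after activation.
    """
    if role is HumanRole.IN_THE_LOOP:
        return operator_approval
    if role is HumanRole.ON_THE_LOOP:
        return not operator_override
    return True  # OUT_OF_THE_LOOP: no further human input after activation

# Example: an operator-supervised ("on the loop") system halts when overridden.
print(engagement_permitted(HumanRole.ON_THE_LOOP,
                           operator_approval=False,
                           operator_override=True))  # prints False

The sketch simply makes the taxonomy concrete: as the human moves from in to on to out of the loop, the conditions under which human judgment can stop an engagement narrow and then disappear.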

In operational concepts of future HMT, the human warfighter and LAWS may team together on the physical battlefield or be separated in the battlespace but connected via high-definition sensors and screens. As an example of the former, one operational scenario might have an infantry squad employing an autonomous ground robot acting as the “point man,” minimizing risk to human combatants in the event of initial contact with the enemy. A physically distanced scenario might look like an operator launching a lethal autonomous drone swarm from another continent yet still witnessing kinetic effects via sensors and screens. 

 

Moral Injury

Moral injury (MI) is the “damage done to one’s conscience or moral compass when that person perpetrates, witnesses, or fails to prevent acts that transgress one’s own moral beliefs, values, or ethical codes of conduct.”18 Brett T. Litz et al.’s seminal article describes MI as “perpetrating, failing to prevent, bearing witness to, or learning about acts that transgress deeply held moral beliefs and expectations.”19 Jonathan Shay defines MI as “a betrayal of what’s right by someone who holds legitimate authority in a high stakes situation.”20 In What Have We Done: The Moral Injury of Our Longest Wars, veteran war reporter David Wood draws on interviews with combat veterans, mental health clinicians, and military chaplains to develop an insightful experiential explanation of moral injury:

Moral injury is a jagged disconnect from our understanding of who we are and what we and others ought to do and ought not to do. Experiences that are common in war . . . challenge and often shatter our understanding of the world as a good place where good things should happen to us, the foundational beliefs we learn as infants. The broader loss of trust, loss of faith, loss of innocence, can have enduring psychological, spiritual, social, and behavioral impact.21

 

Signs and symptoms of MI include: (1) inappropriate guilt and shame; (2) social or relational issues (e.g., avoiding intimacy, anger and aggression, reduced trust in other people and cultural contracts); (3) spiritual and existential problems (e.g., loss of spirituality or weakened religious faith, negative attributions toward God or higher power, lack of forgiveness, crisis in meaning); (4) substance abuse and other attempts at self-handicapping; and (5) suicide and other self-harm behaviors.22 

Moral injury is typically viewed as a phenomenon distinct from post-traumatic stress disorder (PTSD). When Shay, serving as a psychiatrist for the U.S. Department of Veterans Affairs Boston Outpatient Clinic, began to study Vietnam veterans’ combat experiences, he did not see PTSD as an adequate explanation of the psychic trauma experienced by those he counseled. Thus, he coined the term moral injury. In his 2014 journal article “Moral Injury” in Psychoanalytic Psychology, he reflects:

The DSM [Diagnostic and Statistical Manual of Mental Disorders] diagnosis, Posttraumatic Stress Disorder (PTSD), does not capture either form of moral injury [i.e., Shay’s own betrayal-focused or Litz et al.’s perpetrator-focused MI]. PTSD nicely describes the persistence into life after mortal danger of the valid adaptations to the real situation of other people trying to kill you. However, pure PTSD, as officially defined, with no complications, such as substance abuse or danger seeking, is rarely what wrecks veterans’ lives, crushes them to suicide, or promotes domestic and/or criminal violence. Moral injury—both flavors—does.23

 

Clinical researchers Sonya B. Norman and Shira Maguen note overlapping symptomology of PTSD and MI—guilt, shame, betrayal, and loss of trust. They also highlight nosological differences, especially PTSD’s fear-based reactions, hyperarousal, startle response, memory loss, and flashbacks, and MI’s sorrow, regret, shame, and alienation.24 Research and scholarship related to understanding and treating moral injury, including perspectives from both clinical and humanities disciplines, has proliferated in the past two decades because, as Wood observes, MI is “the signature wound of this generation of [Global War on Terrorism] veterans.”25

 

Human-Machine Teaming and Moral Enmeshment

War as a Moral Arena

War is a moral arena. Article 15 of Francis Lieber’s 1863 Instructions for the Government of Armies of the United States in the Field (a.k.a. the Lieber Code, issued as General Orders No. 100 under President Abraham Lincoln) states: “Men who take up arms against one another in public war do not cease on this account to be moral beings, responsible to one another and to God.”26 Psychotherapist Edward Tick also reflects on the moral nature of war:

During warfare, we human beings take over the Divine functions of granting life or administering death and of determining the destinies of peoples and nations. . . . Taking life is the essence of war and is also in essence a moral and spiritual act . . . . Precisely because we become arbiters of life, death, and fate, we enter religious and spiritual dimensions and are in a world of ultimate matters that Karl Marlantes calls “this wartime sacred space, this Temple of Mars.”27

 

This “moral and spiritual act” in the arena of war is why warfighters may experience MI if they “perpetrate, witness, or fail to prevent acts that transgress one’s own moral beliefs, values, or ethical codes of conduct.”28 While this article argues that moral enmeshment between the human and machine may be possible in the HMT, and therefore the use of LAWS may potentially contribute to moral injury in the warfighters who deploy them at the tactical level of war, as noted above, the opposite is often suggested—the introduction of LAWS may reduce occurrences of MI. For instance, while Scharre does not necessarily espouse the future use of LAWS, he writes, “In a world where autonomous weapons bore the burden of killing, fewer soldiers would presumably suffer from moral injury. There would be less suffering overall. From a purely utilitarian consequentialist perspective, that would be better.”29 His brief mention of MI comes at the end of a lengthier section on whether death by LAWS violates human dignity, yet Scharre never entertains how servicemembers who believe that it does may themselves be morally injured by deploying said weapons. Massimiliano L. Cappuccio, Jai Galliott, and Fady Alnajjar also contend that deploying LAWS in the place of humans will minimize war-related traumas like MI and is, therefore, ethically imperative.30 Like Scharre, the authors do not illustrate ways in which moral enmeshment may occur between human and machine, nor do they evaluate how LAWS may potentially contribute to moral injury in those who deploy them. These perspectives assume there will be little to no moral connection or moral responsibility between the future LAWS operator and the autonomous robot under their command. In essence, they imply that lethal autonomous technology removes the human combatant from the moral arena of war. Scharre notes, “If we lean on algorithms as a moral crutch, it weakens us as moral agents. . . . Someone should bear the moral burden of war. If we handed that responsibility to machines, what sort of people would we be?”31 Again, this presumes that LAWS operators will be removed from the moral arena of war and not feel a sense of moral responsibility or moral burden for the lethal decisions autonomous robots make while under their tactical authority.

 

Bruno Latour’s “Collective”

Philosophies of technology theorize and provide insight into how a future operator and LAWS may possibly become morally enmeshed within a human-machine, sociotechnical system, and therefore, how the LAWS operator may experience the moral responsibility and burden for the lethal actions of the autonomous machine, thereby potentially contributing to MI. In French philosopher and sociologist Bruno Latour’s 1999 essay “A Collective of Humans and Nonhumans: Following Daedalus’s Labyrinth,” he presents the concept of the “collective—defined as an exchange of human and nonhuman properties inside a corporate body.”32 His concept eschews modernity’s strict subject-object dualism and explains how human actors and nonhuman actants are collectively “entangled” in the process and pursuit of a goal. The subject-object duality treats the human as subject and the nonhuman artifact as object, but Latour avers that this distinction in practice does not hold. When describing the “technical mediation” relationship(s) between humans and nonhumans, a “third agent emerges from the fusion of the other two.”33 To illustrate, Latour uses the example of a chimpanzee wielding a stick to knock down a banana from a tree. In this scene, how does one identify subject and object? Modernity’s dualism views the chimpanzee as the subject instrumentalizing the stick as object to accomplish the goal of knocking down the banana (another object). However, Latour proposes that hybridization and enmeshment occurs when the chimpanzee as actor partners with the stick as actant, and this entanglement of chimpanzee and stick comprises a new “collective” to accomplish the goal—“The chimp plus the sharp stick reach (not reaches) the banana.”34 Notice the syntax of “reach” versus “reaches.” Latour’s approach employs third-person plural as the chimp plus the stick collectively reach. In his construct, the “imbroglios of humans and nonhumans on an ever-increasing scale” occur because of “successive crossovers through which humans and nonhumans have exchanged their properties” in a “deepened intimacy, a more intricate mesh, between the two.”35

How might Latour’s theory of the “collective” illuminate the potential of moral enmeshment between the human and machine in a militarized HMT, thereby offering an explanation as to how the LAWS operator remains in the moral arena of war and personally experiences the moral weight of the machine’s lethal action? In a traditional dualistic dichotomy, the operator is the subject, and the LAWS is the object. Operator and LAWS are ontologically distinct. However, in Latour’s view of enmeshment, operator and LAWS are sociotechnically fused and entangled in pursuit of the goal of engaging a target. In military practice, this is intimated and illustrated by a Marine and a rifle as articulated in the “Marine’s Rifle Creed”—“This is my rifle. There are many like it, but this one is mine. My rifle is my best friend. It is my life. I must master it as I master my life. My rifle, without me, is useless. Without my rifle, I am useless. . . . We will become part of each other.”36 While the latter phrase is included as an implied symbolic, and not ontological, union between the Marine and the rifle, according to Latour’s idea of the collective, there might be more going on that actually creates some kind of ontological, or at least psychological, union or enmeshment. With this sense of oneness between actor (the Marine) and actant (the rifle), the human Marine has moral agency and therefore takes moral responsibility and feels the moral burden for the outcomes of this hybridized entanglement. When the bullet (another subactant in the sociotechnical imbroglio) is fired from the weapon and strikes another human being, the Marine as human actor feels a sense of moral agency, moral responsibility, and moral burden. The Marine as actor does not dichotomize and disengage from the rifle as actant when dealing with the effects of the rifle’s actions because operating together in a collective, they are one enmeshed system. 

There may be a similar human-machine collective teaming with the future operator and LAWS. One might take issue with this comparison, claiming that with the rifle, the Marine as human actor is involved in every stage of the OODA loop, and the moral component of lethal action is in the ultimate decision to act. However, once the rifle’s firing pin strikes the primer, the bullet is now an “uncontrollable” actant, at least in the sense that the Marine cannot definitively control where it strikes (e.g., accuracy of the shot, another individual stepping in the way or being in close proximity to the intended target, minute differences in bullet manufacturing, wind conditions, etc.). Yet, through the “oneness” of the Marine-rifle system, the Marine still retains moral agency as well as moral responsibility and the weight of moral burden.

To expand the exploration of moral responsibility residing beyond just the individual’s ultimate decision to lethally act, one could also apply this concept to a Marine infantry company commander deploying Marines. Even though discipline has been instilled through rigorous training, the individual Marine, once deployed, is essentially autonomous and uncontrollable by the commander. Yet, the commander still retains a morally weighted “command responsibility” and experiences a moral burden for what those under their command “autonomously” choose to do.37 If an individual Marine violates LOW, the commander still feels a sense of moral responsibility, experiences the moral burden, and may even in some cases be held legally accountable for the actions of the individual “autonomous” Marine. This moral weight is illuminated by Latour’s collective—the commander has been morally enmeshed with the individual Marine (yet another hybridized actor/actant system). 

Caroline Holmqvist, senior lecturer in war studies at the Swedish National Defence College, similarly conceptualizes potential enmeshment and entanglement in militarized technologies in her article “Undoing War: War Ontologies and the Materiality of Drone Warfare.” She labels the HMT as a “complex human-material assemblage” noting that virtual war is still “humanly experienced,” contra the argument that it is game-like:

Contrary to common perception, drone warfare is “real” also for those staring at the screen and, as such, the reference to video games is often simplistic. . . . The relationship between the fleshy body of the drone operator and the steely body of the drone and its ever-more sophisticated optical systems needs to be conceptualized in a way that allows for such paradoxes to be made intelligible.38 

 

Holmqvist uses similar language to Latour regarding the potential enmeshment of human and machine in contemporary warfare, explaining that “the human experience is continually altered by human beings’ encounters with technology . . . and to understand the human being in war, we need to consider the way in which fleshy and steely bodies associate, interact, merge—the dissolution between the corporeal and the incorporeal.”39

While Latour’s idea of the collective does not present an unassailable argument for the moral responsibility and burden of the individual operator who deploys an autonomous system, it does provide philosophical and technological insight into why and how the human in the HMT may feel and experience a sense of moral distress for the lethal effects of the autonomous weapon under their control, even when the human is not specifically the element of the HMT acting in the OODA kill chain loop. Counterintuitively, it could also be that the removal of the human operator’s authority and decision to act in a lethal scenario causes moral stress. As Jai Galliott suggests, “If increasingly autonomous systems limit the exercise of autonomy or exert undue power and control over an operator’s ability to oversee the execution of lethal action in a just manner or that which accords with one’s own values systems and that of their military organization, they may be ‘morally injured’.”40 Latour’s theory that technologies potentially diminish the modern notion of subject-object distinction supports the exploratory thesis that human moral agency is not deferred or disengaged with the employment of LAWS but instead becomes a complex phenomenon of moral entanglement via this enmeshed, hybridized relationship between human and machine in the militarized HMT.  

 

Anthropomorphism in Militarized Human-Machine Teaming

A form of this HMT relational enmeshment and entanglement is documented in warfighters who anthropomorphize the robots they team with in combat. Cappuccio, Galliott, and Eduardo Sandoval cite numerous research studies in human-robot interaction regarding anthropomorphism—the human tendency to perceive various kinds of nonhuman agents, including machines, as human-like. The authors observe:

Explosive Ordnance Disposal (EOD) operators deployed in Iraq and Afghanistan report at least four remarkable anecdotes about their relationships with the robots that assisted them in carrying out their task in active war scenarios . . . which testify to the pervasiveness of anthropomorphism:

1. EOD robots were assigned names and gendered identities by the soldiers who worked with them in Iraq and Afghanistan;

2. at times, when one of these robots was damaged, its loss was not simply experienced as the destruction of an expensive piece of equipment, but also grieved like the death of a teammate, and in some cases, it was accompanied by funeral-like rituals;

3. when one of these robots was sent to the headquarters for repair after suffering structural damage, its human mates requested that its mechanical parts were not replaced, but accurately fixed to preserve the robot’s individual identity;

4. in rare occasions, soldiers have endangered themselves to protect the robot from enemy assaults. . . .

Therefore, anthropomorphistic attributions can be fueled by empathy and narratives of sacrifice, which reinforce each [other’s] . . . effects: thus, if robots are portrayed as entities that “sacrifice” themselves, then their destruction feels like a “death,” which in turn reinforces the empathic perception of them as person-like entities.41

 

Julie Carpenter similarly researches the sociotechnical relationship between EOD personnel and the robots they use to locate improvised explosive devices. She writes:

From the armed forces projected standpoint, the robot stands in for the EOD [human] operator as a critical doppelganger or extension of their physical self. . . . From the results of research on Human-Robot Interaction, the case is made that EOD personnel may form pseudo-relationships with robots and attribute mental states and sociality to them. The human instinct to anthropomorphize non-living things may be amplified and exploited by the addition of humanlike characteristics in robots, as well as people engaging in prolonged proximity and interactive situations with them.42

 

Victoria Groom et al.’s insightful paper “I Am My Robot: The Impact of Robot-Building and Robot Form on Operations” presents the idea of human self-extension into objects such as robots:

When interacting with autonomous robots or robots tele-operated by another person, people respond in much the same way they respond to other people. In contrast, teleoperation and other immersive interactions through robots enable interactions between humans and robots that, in the moment of using the robot, may make people feel like the robot is part of one’s self.43 

 

The ideas of collective, assemblage, anthropomorphization, and self-extension may seem initially disparate, but explored together, they support the concept of a deeper sociotechnical relationship in the HMT. In the context of the human operator and LAWS, this hybridized and enmeshed social relationship may explain how a “moral” connection is created between the two, especially if the robot is viewed anthropomorphically as a teammate or as an extension of the human operator. Therefore, taking into consideration these philosophies of technology and theories of social robotics, one can hypothesize how the moral implications of a lethal robot, especially in cases of LOW violations, may be felt and carried by the human operator, thus contributing to potentially morally injurious experiences for warfighters functioning in HMT. Again, the use of a centaur to describe the human-robot relationship in HMT is perhaps more accurate than initially intended.

 

Moral Luck

In addition to philosophies of technology and psychological theories on HMT, the philosophical concept of moral luck provides further insight as to how the LAWS operator may indeed be more deeply connected and “enmeshed” in the moral arena and effects of war even as autonomous lethal decisions are executed beyond the operator’s direct control. Dana K. Nelkin provides a description of moral luck:

Moral luck occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control. . . . The problem of moral luck arises because we seem to be committed to the general principle that we are morally assessable only to the extent that what we are assessed for depends on factors under our control (call this the “Control Principle”). At the same time, when it comes to countless particular cases, we morally assess agents for things that depend on factors not in their control [i.e., luck].44

 

Moral philosopher Bernard Williams’s classic example of a truck driver, who “through no fault of his own,” runs over a child who darted in front of the moving vehicle, offers a helpful analogical parallel to the operator whose LAWS executes a lethal algorithmic action violating LOW.45 In Williams’s case, the driver is not “at fault” due to the agency of the child, but nonetheless, the driver should feel a sense of remorse and regret. One might even suggest that something would be morally amiss with the driver if they did not feel some sense of moral remorse or regret. Therefore, the driver feels and experiences the moral weight of an action outside of their direct control. David Sussman further explains this seemingly irrational but very real and understandable moral notion: “We expect the truck driver to be strongly inclined to blame himself, even though it would be wrong for the rest of us to feel anything like resentment or indignation toward him. . . . Although the driver should ultimately be ‘let off the hook,’ he nevertheless should not so release himself, but instead put up real resistance to our attempts to exonerate him.”46 Sussman continues, “Regret often brings with it some kind of self-reproach. . . . We can rationally feel regret for events that are completely and obviously beyond our influence. We need to see that even when there is no culpability, there can still be inescapable forms of personal antagonism that, although innocent, can nevertheless involve many of the features of personal wrongdoing.”47 

A more contemporary moral luck thought experiment might include the use of a self-driving car. Although the environment differs significantly, the moral dynamics of delegation to civilian autonomous systems can illuminate similar tensions in militarized applications. Imagine someone owning a self-driving car and riding as a passenger when the vehicle hits and kills a pedestrian. The vehicle’s owner and operator, even when riding as a passenger in self-driving mode, will likely feel a sense of moral responsibility and regret even though the car made an algorithmically programmed, autonomous decision resulting in the fatality. From Williams’s “unfortunate” (i.e., bad luck) truck driver example and the self-driving vehicle thought experiment, one can easily identify the parallel to the LAWS operator. The concept of moral luck and the feeling of moral responsibility for an outcome shaped by factors beyond one’s control serve to demonstrate how the human operator in the HMT may experience the moral weight, regret, and potential self-reproach for an autonomous lethal decision made outside of their control.

 

War as a Moral Arena (Reprise)

When one surveys the literature on LAWS, authors frequently employ the descriptor “moral.” For example, when discussing HMT, Scharre frequently describes humans as “moral beings” and “moral agents” with “moral responsibility.”48 Others echo this use—James L. Boggess’s “personal moral code,” Mark Coeckelbergh’s “moral responsibility,” C. Anthony Pfaff’s “moral assessment,” and Gary E. Marchant et al.’s “moral judgment.”49 If “moral” is used so pervasively in the literature when describing the moral weight and burden humans must retain in war, then why would every warfighter who deploys future LAWS completely and permanently disengage morally when using these types of weapons, even if robotic warfare technology potentially contributes to different classes of moral displacement? More importantly, if the technology does indeed morally displace, distance, or desensitize servicemembers in real time from the kinetic results of the moral arena of war, one cannot assume that they will not at some point in the future evaluate and reflect on their wartime service. Tick writes:

A unique dimension of modern war with as yet unknown impact is that with modern technology, people take lives on the other side of the world but are not in danger of being killed in return. . . . Many troops engaged in distant forms of military action often feel detached from the experience of killing, their victims, and their own status as combat veterans. They may not rehumanize the foe or reconcile with their own histories until long after their service, if at all.50

 

Tick notes that the psychospiritual impact of modern technology is unknown, yet also states that distanced warfare may contribute to moral disengagement. However, what is also undetermined, as Tick suggests, is whether the distanced warfighter will or will not morally reflect on their wartime actions. Later in his volume, he further intimates, “Today, warfare has become more deadly, debilitating, and invisible than ever. . . . We can surmise that the greater the destructive reach of our weaponry, the greater the moral stress and burden on troops and the nation, and the more penetrating yet mysterious the invisible wound will be.”51 These citations are not highlighted to call Tick to task for proffering seemingly opposing statements regarding the potential moral impact of using militarized emerging technologies. They merely reveal the unknown and mysterious moral and psychic distress that may result in the warfighter who deploys LAWS due to potential moral enmeshment. 

Understanding and further exploring the potential of moral enmeshment may serve as a protective factor for those who will one day deploy LAWS. If the current narrative continues that LAWS will either (1) shield human combatants from the traumas of war or (2) create a context of systematic killing that is construed as dehumanizing, desensitized, and dispassionate with few moral or psychic consequences on the commander or operator, then the military will not be ready for the potential moral distress and harm that results from the use of autonomous weapons in future war. While not all will emerge morally injured, some will—and this number matters.

 

Conclusion and Recommendations

The aim of this article has been to explore how the human may become enmeshed with the machine in the sociotechnical architecture and hybridized relationship of the HMT. This enmeshment may extend to the moral effects and consequences caused by LAWS, thus contributing to morally injurious experiences for the human warfighter at the tactical level of war, especially if LOW principles of discrimination and proportionality are violated. Using the doctrine, organization, training, materiel, leadership and education, personnel, facilities, and policy (DOTmLPF-P) framework, below are three recommendations and areas of research that should be pursued in future studies of LAWS and the potential of moral harm on operators.52

The first recommendation involves personnel and policy such as multidisciplinary inflection points in DOD LAWS development and deployment policy. In this author’s opinion, to date, the DOD has satisfactorily addressed ethical, moral, and legal issues concerning LAWS and other militarized AI-enabled systems.53 Dialogue has included military leadership, policy makers, academics, private industry leaders, ethicists, technologists, and engineers. Because military chaplains often focus directly on servicemembers and serve as vanguards in treating psychospiritual injuries, they and other behavioral health care providers with expertise in moral injury should be incorporated into this discussion. The DOD must continue to prioritize and value regular inflection points and pauses in policy development for ethical, moral, and legal considerations before, during, and after the deployment of LAWS. If the nation desires ethical and morally fit warfighters, institutional processes of ideation, development, acquisition, deployment, and evaluation of emerging technologies must model this ethical and moral integrity.

The second area of focus or recommendation is to pursue leadership and education remedies by conducting further research on LAWS and potential psychic and moral harm. The only data and anecdotal information currently existing on militarized emerging technologies and moral distress derives from research on remotely piloted aircraft (RPA, i.e., drone) crews. It is vital to understand how future LAWS might affect the psychospiritual health of warfighters. While some may see this discussion as premature, analogical models such as studies on RPA crews can be leveraged to discern potential adverse results. This does not necessitate slowing or suspending development or deployment of LAWS, but the conversation must continue regarding holistic care of servicemembers. Retired Army psychologist Dave Grossman’s proposition that physically distanced weapons reduce the innate resistance to killing applies to conventional weapons (e.g., artillery, naval gunfire, bombers, etc.), but RPA with high-definition sensors and screens minimize this distance psychologically via “empathic bridging” and “distant intimacy.”54 M. Shane Riza shares the comments of a former wing commander at Creech Air Force Base in Nevada, where a majority of RPA operations are flown, who explained that “it’s not really 8,000 miles away, it’s 18 inches away. We’re closer . . . than we’ve ever been as a service. There’s no detachment. Those employing the system are very involved at a personal level in combat.”55

Research reveals RPA crews experiencing PTSD and MI at rates similar to, and in some studies higher than, those of crews of conventional manned aircraft. Jean L. Otto and Bryant J. Webber state, “There was no significant difference in the rates of [mental health] MH diagnoses, including post-traumatic stress disorder, depressive disorders, and anxiety disorders between RPA and [manned aircraft] MA pilots. Military policymakers and clinicians should recognize that RPA and MA pilots have similar MH risk profiles.”56 They also noted that the additional challenges facing RPA crews could “increase susceptibility to PTSD.” Rajiv K. Saini, V. K. Raju, and Amit Chali cite a “variety of studies conducted on drone crews[, which] have consistently provided [a] higher incidence of psychiatric symptoms than their compatriots who operate manned aircraft.”57 Joseph O. Chapa believes that the psychological risks to RPA crews may be higher than the psychological risk to other combatants due to the empathic bridging with the targets they are tracking.58

While RPA warfare maintains a human in each aspect of the kill chain, the lessons learned, especially regarding psychological, moral, and spiritual care of technologically distanced operators, will be helpful in developing preventative and responsive care best practices for future LAWS operators. Among branches of the U.S. military, the Air Force currently executes the most RPA missions and has unit-embedded mental health providers and chaplains. Their best practices for crew care modalities may provide a good foundation for practices of care for those who one day deploy LAWS.

The third recommendation is to integrate robust ethics and moral decision-making education and training for military leaders and warfighters as end users. Even though LAWS will make final lethal decisions, their use on the battlefield will require more, not less, moral decision-making education and training for military personnel. Christian Brose observes in The Kill Chain:

As intelligent machines become capable of performing these kinds of technical tasks more effectively than humans can, allowing them to do so can liberate more members of the military to do work of greater ethical value. They can spend more of their days solving complex problems with other people, making operational and strategic decisions, contextualizing critical information, distinguishing between right and wrong, and commanding people and machines to perform critical missions. These are the kinds of jobs that Americans actually join the military to do. In this way, intelligent machines could enable more human beings to concentrate on the ethics of warfare than ever before.59

 

If this is the case, it will require robust moral instruction, which can occur through focused ethics education and training in basic, intermediate, and advanced schools for officers and enlisted alike, especially as the Services train robotics and unmanned systems military occupational specialties (e.g., the Navy’s robotics warfare specialist enlisted rating and the Marine Corps’ 73XX Unmanned Aircraft System [UAS] Occupational Fields). Additional instruction and application may include ethical dilemmas with AI-enabled decision systems woven into training exercises, as well as regular ethical and moral decision-making discussions conducted at the unit level. As the military Services embrace and employ more emerging technologies and correspondingly expect “human beings to concentrate on the ethics of warfare,” warfighters must be trained to think ethically and make morally sound decisions.

The late Isaac Asimov once said, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”60 While this has traditionally been the case, the next chapter of history begins now. As LAWS are likely deployed in the not-so-distant future and the character of war continues to evolve, the citizenry has a moral and social obligation to holistically care for the warfighters who fight the nation’s wars. Among other things, this includes sincere reflection on the potential morally injurious effects caused by the weaponized emerging technologies placed in their hands. As the American Civil War General William Tecumseh Sherman once lamented, “War is hell,” but it is, and always will be, a moral realm. However, when warfighters see this level of concern for their holistic health from the society that sends them to war, it bolsters their confidence in the cause and empowers them to fight with honor in emerging warfare as “conscientious centaurs”—as those who have considered the moral dynamics and weight of employing autonomous weapons in HMT. Not only is this an obligation, but this is also wisdom.

 


Endnotes

1. Portions of this article are taken from the author’s PhD dissertation. For additional material in a broader context, see Jonathan Alexander, “Lethal Autonomous Weapon Systems and the Potential of Moral Injury” (PhD diss., Salve Regina University, Newport, RI, 2024). Additionally, all research on LAWS originates from open-source, unclassified references.

2. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton, 2018), 4. Emphasis added.

3. For a recent analysis of militarized technologies being deployed in the Russo-Ukrainian War, see Neil Renic and Johan Christensen, Drones, the Russo-Ukrainian War, and the Future of Armed Conflict (Copenhagen, Denmark: Djof Publishing and the Centre for Military Studies, 2024); and August Cole et al., Artificial Intelligence in Military Planning and Operations: Ethical Considerations, PRIO Paper (Oslo, Norway: Peace Research Institute Oslo, 2024).

4. George R. Lucas Jr. is credited with the phrase “relentless drive toward autonomy.” See George R. Lucas Jr., “Engineering, Ethics, and Industry: The Moral Challenge of Lethal Autonomy,” in Killing by Remote Control: The Ethics of an Unmanned Military, ed. Bradley J. Strawser (New York: Oxford University Press, 2013), 211, http://dx.doi.org/10.1093/acprof:oso/9780199926121.003.0010.

5. Although science fictionalized characterizations of militarized emerging technologies (e.g., Terminator, 2001: A Space Odyssey, iRobot) can provide creative visualizations of the potential dystopian downsides of AI and LAWS, the still-future and often implausible orientation and sensationalism harnessed by advocates of a preemptive ban are not constructive in furthering a calm, reasoned, and systematic approach in evaluating the ethics, morality, and legality of LAWS. Noel Sharkey, a pro-ban on LAWS advocate, provides a good example of this reasoned approach, “It is important to clarify what is meant by ‘robot autonomy’ here. This is often confused with science fiction notions of robots with minds of their own with the potential to turn on humanity. The reality is very different. The autonomous robots being discussed for military applications are closer in operation to your washing machine than to a science fiction Terminator.” Noel Sharkey, “Saying ‘No!’ to Lethal Autonomous Targeting,” Journal of Military Ethics 9, no. 4 (2010): 376, https://doi.org/10.1080/15027570.2010.37903.

6. Paul Scharre, “Centaur Warfighting: The False Choice of Humans vs. Automation,” Temple International & Comparative Law Journal 30, no. 1 (March 2016): 164. Robert Sparrow and Adam Henschke take the metaphor a step further, proposing that the future of human-machine teaming is more likely to be expressed with “teams of humans under the control, supervision, and command of artificial intelligence.” Robert J. Sparrow and Adam Henschke, “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming,” Parameters 53, no. 1 (2023): 115, https://doi.org/10.55540/0031-1723.3207.

7. Scharre, “Centaur Warfighting,” 152.

8. One exception to the above perspectives is Jai Galliott, “The Soldier’s Tolerance for Autonomous Systems,” Paladyn, Journal of Behavioral Robotics no. 9 (2018): 131–32, https://doi.org/10.1515/pjbr-2018-0008, where he discusses how LAWS’ removal of the human operator’s decision to use lethal force may contribute to moral distress.

9. For a concise summary of jus in bello principles, see “Jus ad bellum and jus in bello,” International Committee of the Red Cross, accessed 16 September 2025.

10. Bruno Latour, “A Collective of Humans and Nonhumans: Following Daedalus’s Labyrinth,” in Pandora’s Hope: Essays on the Reality of Science Studies (Cambridge, MA: Harvard University Press, 1999).

11. Jim Garamone, “Hicks Discusses Replicator Initiative,” DOD Manufacturing Technology Program, 7 September 2023; and “Allied Command Transformation,” North Atlantic Treaty Organization, 24 September 2024. At the time of the original writing, the name change from Department of Defense to Department of War had not yet been made.

12. Christopher Toner, “Military Service as a Practice: Integrating the Sword and Shield Approaches to Military Ethics,” Journal of Military Ethics 5, no. 3 (November 2006): 184–85, https://doi.org/10.1080/15027570600911993.

13. For one of the most accessible volumes discussing these issues, see Scharre, Army of None. See also Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Conflict (New York: Oxford University Press, 2021). Emphasis added.

14. Department of Defense Directive 3000.09, Autonomy in Weapon Systems (Washington, DC: Department of Defense, 25 January 2023), 21.

15. Unmanned Systems Integrated Roadmap, 2017–2042 (Washington, DC: Office of the Assistant Secretary of Defense for Acquisition, Department of Defense, 2018).

16. Col John R. Boyd, U.S. Air Force fighter pilot and strategist, began developing the OODA concept in the 1950s to describe the process of reacting to a stimulus. In combat, the adversary with the shortest OODA loop has the advantage. Current dynamic targeting methodology (the kill chain) is referred to as find, fix, track, target, engage, assess (F2T2EA) by air and naval forces and decide, detect, deliver, assess (D3A) by land component forces. With F2T2EA, LAWS’s lethal use of force would take place at engage and for D3A at deliver. Within special operations, find, fix, finish, exploit, analyze, disseminate (F3EAD) is used, with finish being the point of lethal action. For simplicity, OODA is used in this article. See Joint Fire Support, Joint Publication 3-09 (Washington, DC: Department of Defense, 2019), xii; and Jimmy A. Gomez, “The Targeting Process: D3A and F3EAD,” Small Wars Journal, 16 July 2011.

17. Kelley M. Sayler, Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems, In Focus (Washington, DC: Congressional Research Service, 2025).

18. “What Is Moral Injury?,” Moral Injury Project, Syracuse University, accessed 22 November 2021.

19. Brett T. Litz et al., “Moral Injury and Moral Repair in War Veterans: A Preliminary Model and Intervention Strategy,” Clinical Psychology Review 29, no. 8 (December 2009): 699, https://doi.org/10.1016/j.cpr.2009.07.003.

20. Jonathan Shay, “Moral Injury,” Psychoanalytic Psychology 31, no. 2 (April 2014): 183, https://doi.apa.org/doi/10.1037/a0036090.

21. David Wood, What Have We Done: The Moral Injury of Our Longest Wars (New York: Little, Brown, 2016), 8.

22. Jacob K. Farnsworth et al., “The Role of Moral Emotions in Military Trauma: Implications for the Study and Treatment of Moral Injury,” Review of General Psychology 18, no. 4 (December 2014): 249–62, https://dx.doi.org/10.1037/gpr0000018.

23. Shay, “Moral Injury,” 184.

24. Sonya B. Norman and Shira Maguen, “Moral Injury,” PTSD Quarterly 33, no. 1 (2022).

25. David Wood, “The Grunts: Damned If They Kill, Damned If They Don’t,” Huffington Post, 18 March 2014.

26. Instructions for the Government of Armies of the United States in the Field (Lieber Code) (Washington, DC: Adjutant General’s Office, 1863).

27. Edward Tick, Warrior’s Return: Restoring the Soul After War (Boulder, CO: Sounds True, 2014), 74–75. Emphasis added. Karl Marlantes, Vietnam War veteran, emphasizes that warriors must wage war morally with justice. Evoking the Greek god of war Ares (or Mars in the Roman pantheon), he writes, “The connection between the war God and God of justice is evident in the hill in the midst of Athens called the Areopagus, the hill of Ares. The Areopagus is where the Athenians had their principal Court of Justice. Judges were called areopagitae.” Karl Marlantes, What It Is Like to Go to War (New York: Atlantic Monthly Press, 2012), 251.

28. “What Is Moral Injury?”

29. Scharre, Army of None, 290.

30. Massimiliano Lorenzo Cappuccio, Jai Christian Galliott, and Fady Shibata Alnajjar, “A Taste of Armageddon: A Virtue Ethics Perspective on Autonomous Weapons and Moral Injury,” Journal of Military Ethics 21, no. 1 (2022): 10, https://doi.org/10.1080/15027570.2022.2063103.

31. Scharre, Army of None, 290.

32. Latour, “A Collective of Humans and Nonhumans,” 193.

33. Latour, “A Collective of Humans and Nonhumans,” 178. Latour additionally explains four relational meanings and facets of the technical mediation between humans and nonhumans: (1) goal translation; (2) composition; (3) reversible blackboxing; and (4) delegation. He describes the following example of the chimpanzee, stick, and banana relationship and “system” as “composition.” Latour, “A Collective of Humans and Nonhumans,” 182. Emphasis added.

34. Latour, “A Collective of Humans and Nonhumans,” 182.

35. Latour, “A Collective of Humans and Nonhumans,” 201, 196.

36. “Marine’s Rifle Creed,” Marine Corps University, accessed 18 February 2023. Emphasis added.

37. While command responsibility is technically used as a legal term in LOW, there is an implicit moral responsibility of the commander within the definition, and the moral burden is being emphasized here. See James M. Dubik, “Human Rights, Command Responsibility, and Walzer’s Just War Theory,” Philosophy and Public Affairs 11, no. 4 (Autumn 1982): 354–71. Dubik remarks, “Command responsibility . . . [is] moral responsibility” (p. 355). For further discussion of command responsibility and inherent moral responsibility, see also James M. Dubik, “Social Expectations, Moral Obligations, and Command Responsibility,” International Journal of Applied Philosophy 2, no. 1 (Spring 1984): 39–48, https://doi.org/10.5840/ijap1984212.

38. Caroline Holmqvist, “Undoing War: War Ontologies and the Materiality of Drone Warfare,” Millennium: Journal of International Studies 41, no. 3 (2013): 541–42, https://doi.org/10.1177/0305829813483350.

39. Holmqvist, “Undoing War,” 548. Emphasis added.

40. Galliott, “The Soldier’s Tolerance for Autonomous Systems,” 131.

41. Massimiliano L. Cappuccio, Jai Galliott, and Eduardo B. Sandoval, “Saving Private Robot: Risks and Advantages of Anthropomorphism in Agent-Soldier Teams,” International Journal of Social Robotics 14, no. 2 (2021): 2140, https://doi.org/10.1007/s12369-021-00755-z. Peter Singer shares a similar story about an explosive ordnance disposal team’s “emotional connection” with their robot in Iraq that was destroyed while disarming an improvised explosive device. See Peter W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (New York: Penguin Press, 2009), 19–21.

42. Julie Carpenter, “Just Doesn’t Look Right: Exploring the Impact of Humanoid Robot Integration into Explosive Ordnance Disposal Teams,” in Handbook of Research on Technoself: Identity in a Technological Society, ed. Rocci Luppicini (Hershey, PA: IGI Global, 2013), 621, 623–24, https://doi.org/10.4018/978-1-4666-2211-1.ch032. Emphasis added.

43. Victoria Groom et al., “I Am My Robot: The Impact of Robot-building and Robot Form on Operations” (paper presented at the 4th ACM/IEEE International Conference on Human Robot Interaction, La Jolla, CA, 11–13 March 2009), https://doi.org/10.1145/1514095.1514104. Emphasis added.

44. Dana K. Nelkin, “Moral Luck,” Stanford Encyclopedia of Philosophy, 20 January 2025. Emphasis added.

45. Bernard Williams, “Moral Luck,” in Moral Luck: Philosophical Papers, 1973–1980 (Cambridge, UK: Cambridge University Press, 1981), 28, https://doi.org/10.1017/CBO9781139165860.003. Emphasis added.

46. David Sussman, “Is Agent-Regret Rational?,” Ethics 128, no. 4 (July 2018): 791, https://doi.org/10.1086/697492.

47. Sussman, “Is Agent-Regret Rational?,” 793, 802.

48. Scharre, Army of None, 290 (moral beings), 294 (moral responsibility), 322 (moral agents).

49. James L. Boggess, “More Than a Game: Decision Support Systems and Moral Injury,” in Samuel R. White, Closer Than You Think: The Implications of the Third Offset Strategy for the U.S. Army (Carlisle, PA: U.S. Army War College, 2017), 3; Mark Coeckelbergh, “Drones, Information Technology, and Distance: Mapping the Moral Epistemology of Remote Fighting,” Ethics and Information Technology 15, no. 2 (June 2013): 88, https://doi.org/10.1007/s10676-013-9313-6; C. Anthony Pfaff, “The Ethics of Acquiring Disruptive Military Technologies,” Texas National Security Review 3, no. 1 (Winter 2019/2020): 44; and Gary E. Marchant et al., “International Governance of Autonomous Military Robots,” Science and Technology Law Review no. 12 (2011): 296, https://doi.org/10.7916/D8TB1HDW.

50. Tick, Warrior’s Return, 83. Emphasis added.

51. Tick, Warrior’s Return, 102. Emphasis added.

52. DOTmLPF-P is part of the DOD’s Joint Capabilities Integration and Development System. See Manual for the Operation of the Joint Capabilities Integration and Development System (Washington, DC: Department of Defense, 2021).

53. For example, see Department of Defense Directive 3000.09, 21; “DOD Adopts Ethical Principles for Artificial Intelligence,” Department of Defense, 24 February 2020; and U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway (Washington, DC: Department of Defense, 2022).

54. See LtCol Dave Grossman, On Killing: The Psychological Cost of Learning to Kill in War and Society, rev. ed. (New York: Back Bay Books, 2009), 13. Grossman writes, “There is within most men an intense resistance to killing their fellow man. A resistance so strong that, in many circumstances, soldiers on the battlefield will die before they can overcome it” (p. 4); and LtCol Wayne Phelps, On Kill Remotely: The Psychology of Killing with Drones (New York: Little, Brown, 2021), 63.

55. M. Shane Riza, Killing without Heart: Limits on Robotic Warfare in an Age of Persistent Conflict (Washington, DC: Potomac Books, 2013), 263.

56. Jean L. Otto and Bryant J. Webber, “Mental Health Diagnoses and Counseling Among Pilots of Remotely Piloted Aircraft in the United States Air Force,” Medical Surveillance Monthly Report 20, no. 3 (March 2013): 2.

57. Rajiv K. Saini, V. K. Raju, and Amit Chali, “Cry in the Sky: Psychological Impact on Drone Operators,” Industrial Psychiatry Journal 30, no. 1 (2021): 17, https://doi.org/10.4103/0972-6748.328782.

58. Joseph O. Chapa, “Remotely Piloted Aircraft, Risk, and Killing as Sacrifice: The Cost of Remote Warfare,” Journal of Military Ethics 16, nos. 3–4 (2017): 263, https://doi.org/10.1080/15027570.2018.1440501. See also Seth Davin Norrholm et al., “Remote Warfare with Intimate Consequences: Psychological Stress in Service Member and Veteran Remotely-Piloted Aircraft (RPA) Personnel,” Journal of Mental Health and Clinical Psychology no. 7 (2023): 37–49, https://doi.org/10.29245/2578-2959/2023/3.1289.

59. Christian Brose, The Kill Chain: Defending America in the Future of High-Tech Warfare (New York: Hachette Books, 2020), 126. Emphasis added.

60. Isaac Asimov, Isaac Asimov’s Book of Science and Nature Quotations, ed. Isaac Asimov and Jason A. Shulman (New York: Weidenfeld & Nicolson, 1988), 281.

 

About the Author

LtCdr Jonathan Alexander is an active-duty U.S. Navy chaplain currently serving at the Naval Chaplaincy School in Newport, RI. Prior to the Navy, he served in the U.S. Army as an infantry officer. He holds a doctor of philosophy in humanities and philosophy of technology from Salve Regina University, and his dissertation research was on lethal autonomous weapons systems and the potential of moral injury. His research interests are in the areas of ethics and emerging technologies, especially as they relate to what it means to be human in an age of advanced technology.

https://orcid.org/0009-0008-4301-1961.

The views expressed in this article are solely those of the author. They do not necessarily reflect the opinion of Marine Corps University, the U.S. Marine Corps, the Department of the Navy, or the U.S. government.