Blog & Opinion Pieces

Autonomous Systems and Moral De-Skilling: Beyond Good and Evil in the Emergent Battlespaces of the 21st Century

4 November 2019

As we move deeper into the 21st Century, our traditional understandings and experiences of war are being challenged in a number of ways. In part, this may be attributed to the rise and spread of advanced technologies, particularly Autonomous Weapon Systems (AWS), which are underpinned by Artificial Intelligence (AI) paradigms. Strategic, operational and tactical matters aside, these challenges also trigger important ethico-moral concerns.

One such concern is the potential for the “moral deskilling” of the soldier. Taking its cue from Braverman’s (1974) study of the “deskilling” of artisanal workers with the onset of the Industrial Age, this concern asserts that the introduction of AWS will undermine the soldier’s ability to exercise moral skills in combat, which, in turn, will lead to, among other things, the loss of the “warrior ethic”. Put differently, the concern is not just that, with the rise of AWS (and related technologies), the exercise of moral skills would atrophy, but also that moral knowledge itself would be diminished.

A critical engagement with the related literature suggests that there are a number of operative assumptions which may not withstand rigorous scrutiny. Thus, for example, while the invocation of Braverman’s concept of “deskilling” at first glance appears attractive and relevant, a closer examination suggests that the notion of “moral deskilling” as applied to the soldier in the context of AWS (1) reflects a logico-positivistic view of knowledge and skills that ignores the distinction between “tacit” and “functional” knowledge; (2) “[assumes] that the content of tacit knowledge can be fully objectified”; and (3) does not account for the fact that “moral skills”, which constitute a body of tacit knowledge, extend beyond the military profession. In sum, the employment of “deskilling” as an analytic tool to engage with the ethico-moral concerns resulting from the introduction of AWS is less than optimal.

Further, regarding the potential erosion of the “warrior ethic”, it is important to recognize – though the literature on the subject elides the issue – that while most strategic-military systems do propound a “warrior ethic”, the use of the term “warrior” is anachronistic and thus a misnomer. The “warrior” is an entity that harkens back to a tribal era, whereas the soldier is a modern entity in service of the modern State. This is reflected in how the soldier is trained and in the ethos with which he/she is inculcated, an ethos designed, in the first instance, to foster and maintain “unit cohesion” – of critical importance because, particularly under combat conditions, it is a guarantor of the somatic integrity of the larger strategic-military superstructure. Thus, the question is not so much one of “moral deskilling” but of “morale deskilling”, which assumes importance as advanced AWS and related systems are introduced in increasingly combat-centric roles.

Lastly, it is also worth recognizing that the nexus between war and these emergent technologies has often given rise to some extraordinary claims about 21st Century warfare, some of which veer close to science fiction. Indeed, there appears to be a tacit working assumption that the AWS expected to populate emergent battlespaces will possess and exhibit some sort of sentience, which would result in human soldiers surrendering their moral skills to them. This is not only highly speculative but also reflects a profound misunderstanding of the technologies involved. Invoking such a speculative framework only serves to obfuscate some of the more critical ethico-moral concerns related to the introduction and use of AWS.


Autonomy in Weapons Systems ― No Trust without Tolerance[1]

13 September 2019

(U.S. Marine Corps photo by Cpl. Matthew Callahan/RELEASED)


Autonomous military robots are here, but they cannot yet do everything in the soldiering profession, which is, perhaps, for the best. As time goes by, however, military personnel will almost certainly find themselves putting their lives, or the lives of others, in the hands of highly autonomous walking military robots, armed aerial robots, or perhaps flying drone ambulances. Their decision on whether or not to trust the judgment of a machine in the heat of battle may have life-or-death consequences.

And so, ‘trust’ is what military leaders typically invoke in conceptualizing the relationship between human and machine. It is commonly presented as a human-machine engineering problem with a largely technical solution. While substantial investments are being directed into human-robot interaction research worldwide,[2] the question remains whether engineering alone is capable of delivering solutions to all problems.

While it may do so over time, what should not be overlooked today is that the problem for military robotics and automation is also a matter of tolerance, understood here as a power process: the capacity of soldiers to endure subjection to technology, and the extent to which automation impinges upon one’s autonomy or otherwise affects the soldier’s wellbeing. To illustrate: when soldiers discuss a mission with their superiors or amongst each other and then make a joint effort to succeed in that mission, their collective agential need to engage in the power process is fulfilled. If, however, they work under orders so rigid, or in a group so large, that they are left no flexibility to exercise their own initiative and virtually no capacity for autonomous decision making, the power process is not served. Soldiers, of course, are not automatons and do not practice blind obedience to authority in conflict, even though they act within a firm hierarchy. They are not trained to kill blindly, and in executing and interpreting orders they make fine-grained moral and legal decisions. The argument here is that the introduction of autonomous robotics could usurp autonomy in this context and impinge upon fulfillment of the power process, and that, if this occurs, soldiers’ tolerance of – and trust in – autonomous systems will wane.

With that in mind, what are the factors that are likely to impact one’s level of tolerance? How are soldiers likely to adjust to challenges posed by increasing automation of technological systems beyond simply disengaging from technology? What strategies can be implemented to limit disruption of the power process and enhance soldiers’ wellbeing, thus building tolerance and, as a consequence, also trust in automated robotic systems? These are all complex questions, addressed in the first of the VDST Group deliverables here.




[1] Jai Galliott, ‘The Soldier’s Tolerance for Autonomous Systems’, Paladyn, Journal of Behavioral Robotics 9, no. 1 (2018): 124–36.

[2] See, for example, Trusted Autonomous Systems, Defence CRC.


Airmen and Unmanned Aerial Vehicles: The Danger of Generalization

13 September 2019

(U.S. Air Force photo by Val Gempis)

Unmanned Aerial Vehicles (aka ‘Remotely Piloted Aircraft’ or ‘drones’) are often perceived as the beginning of a slippery slope to a machine takeover of warfare. While some ground this belief in misconceptions still surrounding UAVs – keywords here would be ‘robotic’ or ‘autonomous precision weapons’ – others point to developments in the nature and operational use of the technology. Indeed, UAVs have advanced significantly since October 2001, when the first-ever Predator combat sortie resulted in a successful strike on a vehicle belonging to the personal guards of Mullah Omar, the Taliban leader in Afghanistan. That sortie stands in marked contrast to the Reaper sorties of today. Unlike the MQ-1 Predator of 2001, which spent its operative years supporting land and special operations forces in pursuit of mission objectives, its successor, the MQ-9 Reaper, has recently demonstrated its ability to achieve mission objectives as a true theater asset, executing strikes, close air support, and surveillance in a single mission.[1]

The past decade has witnessed growing popular and academic interest in these systems, the legal and ethical questions surrounding their use, and their impact on armed conflict and society more generally. To date, however, only a handful of protagonists (pilots and sensor operators) have spoken openly about their experience. Official security policies prohibit aircrews from discussing the details of their work with anyone who does not have a proper level of security clearance and a need to know. Those few who have seized the opportunity to tell their story in detail, however, lament that the exhausting US government censorship processes take longer to complete than an aircrew member may need to write a book-length monograph.[2]

The challenges of gaining insight into the work of UAV operators notwithstanding, a number of narratives have been fashioned and maintained in popular and scholarly discourse that present operators in a particular light. For example, aircrews are portrayed either as courageously restrained heroes who, due to the nature of their profession, suffer heavy psychological trauma or, conversely, as gung-ho joystick warriors responsible for fashioning and sustaining a culture of ‘convenient killing.’[3] Given that little insight on the topic has been offered by the operators themselves, the assertiveness and even boldness of some of the suggested narratives is indeed striking.

The VDST Group’s interest in firsthand accounts by former UAV operators has been motivated foremost by the continuing sophistication of technology and, specifically, the potential of novel types of military and security systems to become successfully integrated into existing force structures. Given that the human operator is projected to remain a central element of future weapons and support systems, the success of the integration process depends squarely on how humans adjust to ongoing developments in technology. In that regard, current UAVs offer an example of technologically advanced systems (yet to be incorporated into the arsenals of many States) that have tested the human capacity to adapt and whose users have described the experience of adaptation.

We analyse a selected number of predominant narratives created around UAV operators and the technology they operate. We specifically address the allegation that the nature of operators’ labour enables emotional detachment and psychological disassociation from the consequences of targeting decisions. We also assess the claim that UAVs constitute ‘autonomous precision weapons’ that lower an operator’s task load to the point where boredom negatively affects vigilance. It is our view that the frequently expressed criticisms of UAVs and the personnel who control them do not hold up well under more detailed scrutiny, because the images created in public perception can be supported by reference to firsthand testimonies as much as they can be opposed by reference to the same testimonies. Those testimonies are sufficient, however, to point to the danger of generalising personal experiences. In fact, those few in the academic community who have investigated the issue carefully suggest that the impact of engaging in remote warfare on operators remains both unclear and underinvestigated.[4]

As autonomous and (un)manned aircraft are likely to remain tools of air warfare for decades to come, it is important now to focus the debate on how the technology will interact with and affect those in charge of it. Developing a better understanding of the nature and implications of the interaction between currently used systems and their operators is essential to ensure that technology is developed in ways that serve rather than negatively impact society. The challenge will be designing the right human-machine balance to maximise the relative advantages of both service member and machine in a future fighting system.

 

For more detail, see: Dr. Natalia Jevglevskaja and Dr. Jai Galliott, ‘Airmen and Unmanned Aerial Vehicles: The Danger of Generalization’, Journal of Indo-Pacific Affairs 2, no. 3 (Fall 2019): 33–65.




[1] Joe Chapa, ‘The Sunset of the Predator: Reflections on the End of an Era’, War on the Rocks, 9 March 2018.

[2] Brett Velicovich and Christopher S. Stewart, Drone Warrior: An Elite Soldier’s Inside Account of the Hunt for America’s Most Dangerous Enemies (New York: Harper Collins Publishers, 2017), ix.

[3] See, for example, Chris Cole, Mary Dobbing, and Amy Hailwood, Convenient Killing: Armed Drones and the “Playstation” Mentality (Fellowship of Reconciliation: Oxford, 2010). See also Laurie Calhoun, ‘The End of Military Virtue’, Peace Review: A Journal of Social Justice 23, no. 3 (2013): 377, 382.

[4] Peter Lee, Reaper Force: Inside Britain’s Drone Wars (London: John Blake, 2018); Joseph L. Campo, From a Distance: The Psychology of Killing with Remotely Piloted Aircraft (dissertation, Air University, 2015).