EC Library Guide on artificial intelligence, security, defence and warfare: Selected articles
Selected articles
- AI-enabled remote warfare: Sustaining the Western warfare paradigm?
Rossiter, A., International Politics, 60 (August), 2023.
The most prominent feature of Western approaches to warfare in recent decades has been the centrality of precision-strike systems and related capabilities—most notably unmanned platforms—for delivering lethal force with ever-greater remoteness. Comparative advantages derived from this ‘remote warfare’ are waning due to competitors’ partial adoption of precision weapon systems and the development of countermeasures. Analyses by military experts and technology enthusiasts in the West propose that Artificial Intelligence (AI), properly harnessed, will soon resuscitate former advantages derived from remote warfare, which have been subject to diminishing returns. The assumptions underpinning this conclusion, however, rest on weaker ground than is claimed. First, AI boosters—unwittingly or otherwise—frequently overstate the near-term impact of AI on important aspects of remote warfare, downplaying enduring technological challenges, and overlooking vulnerabilities associated with greater reliance on AI-enabled systems. Furthermore, it is far from clear whether over the longer-term AI will enhance and entrench the central aspects of remote warfare. Indeed, the technology may lean toward methods of warfare antithetical to the Western warfare paradigm, such as mass over precision or the widespread deployment of lethal autonomous weapons systems (LAWS).
- AI and warfare: A rational choice approach
Basuchoudhary, A., Eastern Economic Journal, (June), 2024.
Artificial intelligence has been a hot topic in recent years, particularly as it relates to warfare and military operations. While rational choice approaches have been widely used to understand the causes of war, there is little literature on using the rational choice methodology to investigate the role of AI in warfare systematically. This paper aims to fill this gap by exploring how rational choice models can inform our understanding of the power and limitations of AI in warfare. This theoretical approach suggests (a) an increase in the demand for moral judgment due to a reduction in the price of AI and (b) that without a human in the AI decision-making loop, peace is impossible; the very nature of AI rules out peace through mutually assured destruction.
- The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare
Johnson, J., Journal of Military Ethics, 21 (3-4), 2022.
Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI "rational" efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining "meaningful" human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights of human-machine interactions to elucidate how AI shapes our capacity to think about future warfare's political and ethical dilemmas. It argues that through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become a de facto strategic actor in warfare - the "AI commander problem."
- Artificial intelligence and information warfare in major power states: How the US, China, and Russia are using artificial intelligence in their information warfare and influence operations
Hunter, L.Y., Albert, C.D., Rutland, J., Topping, K. and Hennigan, C., Defense & Security Analysis, 40 (2), 2024.
Previous research in security studies contends that information warfare (IW) is becoming a critical element in states' overall security strategies. Additionally, many researchers posit that artificial intelligence (AI) is quickly emerging as an important component of digital communications and states' military applications worldwide. However, less is known regarding how states are incorporating AI in their information warfare and influence operations (IWIO). Thus, given the growing importance of AI and IW in global security, this paper examines how the United States, China, and Russia are incorporating AI in their IWIO strategies and tactics. We find that the US, China, and Russia are utilizing AI in their IWIO approaches in significant ways depending on each state's overall IW strategy, with important implications for international security.
- Autonomous drone swarms and the contested imaginaries of artificial intelligence
Weber, J., Digital War, 5 (January), 2024.
AI-based autonomous weapon systems (AWS) have the potential of weapons of mass destruction and thereby massively add to the intensifying dialectic of fear between ground and space and the pervasive mass human vulnerability of being tracked and targeted from above. Nevertheless, the dangerous effects of the proliferation of AWS have not been and still are not widely acknowledged. On the one hand, the capabilities and effects of AWS are downplayed by the military and the arms industry, which stage these systems as precise and clean; more recently, it has also been argued that they can be built on the basis of a ‘responsible’ or ‘trustworthy’ artificial intelligence (AI). On the other hand, inadequate sociotechnical imaginaries of AI as a conscious, evil super-intelligence, circulated by Hollywood blockbuster films such as 'Terminator' or 'Ex Machina', dominate the public discourse. Their massive overstatement of the power of the technology, and their focus on often irrelevant imaginaries such as the ‘Terminator’, hinder a realistic understanding of AI’s capabilities. Against this background, arms control advocates are developing new imaginaries to show the loss of ‘meaningful human control’ and its problematic consequences. In October 2023, the deployment of autonomous military systems on the battlefield was officially confirmed by a Ukrainian drone company.
- The biopolitics of algorithmic governmentality: How the US military imagines war in the age of neurobiology and artificial intelligence
Tängh Wrangel, C., Security Dialogue, 55 (4), 2024.
With the objective to predict and pre-empt the emergence of political violence, the US Department of Defence (DoD) has devoted increasing attention to the intersection between neurobiology and artificial intelligence. Concepts such as ‘cognitive biotechnologies’, ‘digital biosecurity’ and large-scale collection of ‘neurodata’ herald a future in which neurobiological intervention on a global scale is believed to come of age. This article analyses how the relationship between neurobiology and AI – between the human and the machine – is conceived, made possible, and acted upon within the SMA programme, an interdisciplinary research programme sponsored by the DoD. By showcasing the close intersection between the computer sciences and the neurosciences within the US military, the article questions descriptions of algorithmic governmentality as decentring the human, and as juxtaposed to biopolitical techniques to regulate processes of subjectivity. The article shows that within US military discourse, new biotechnologies are seen to lend algorithmic governmentality a biopolitical dimension, capable of monitoring and regulating emotions, thoughts, beliefs, and subjectivity at the population level, particularly targeting the minds and brains of ‘vulnerable’ populations in the global South.
- Bridging the civilian-military divide in responsible AI principles and practices
Azafrani, R. and Gupta, A., Ethics and Information Technology, 25 (27), 2023.
Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by contemplating responsible principles and practices for AI system design. At the same time, defense institutions are contemplating ethical guidelines and requirements for the development and use of AI for warfare. However, varying ethical and procedural approaches to technological development, research emphasis on offensive uses of AI, and lack of appropriate venues for multistakeholder dialogue have led to differing operationalization of responsible AI principles and practices among civilian and defense entities. We argue that the disconnect between civilian and defense responsible development and use practices leads to underutilization of responsible AI research and hinders the implementation of responsible AI principles in both communities. We propose a research roadmap and recommendations for dialogue to increase exchange of responsible AI development and use practices for AI systems between civilian and defense communities. We argue that generating more opportunities for exchange will stimulate global progress in the implementation of responsible AI principles.
- Ethical governance of artificial intelligence for defence: Normative tradeoffs for principle to practice guidance
Blanchard, A., Thomas, C. and Taddeo, M., AI & Society, (February), 2024.
The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative choices and corresponding tradeoffs that are involved in specifying guidance for the implementation of AI ethics principles in the defence domain. These correspond to: the AI lifecycle model used; the scope of stakeholder involvement; the accountability goals chosen; the choice of auditing requirements; and the choice of mechanisms for transparency and traceability. We provide initial recommendations for navigating these tradeoffs and highlight the importance of a pro-ethical institutional culture.
- Explainable AI in the military domain
Wood, N.G., Ethics and Information Technology, 26 (29), 2024.
Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the military domain, numerous bodies have argued that autonomous and AI-enabled weapon systems ought not incorporate unexplainable AI, with the International Committee of the Red Cross and the United States Department of Defense both explicitly including explainability as a relevant factor in the development and use of such systems. The article presents a cautiously critical assessment of this view, arguing that explainability will be irrelevant for many current and near-future autonomous systems in the military (which do not incorporate any AI), that it will be trivially incorporated into most military systems which do possess AI (as these generally possess simpler AI systems), and that for those systems with genuinely opaque AI, explainability will prove to be of more limited value than one might imagine.
In particular, the author argues that explainability, while indeed a virtue in design, is a virtue aimed primarily at designers and troubleshooters of AI-enabled systems, but is far less relevant for users and handlers actually deploying these systems. He further argues that human–machine teaming is a far more important element of responsibly using AI for military purposes, adding that explainability may undermine efforts to improve human–machine teaming by creating a prima facie sense that the AI, due to its explainability, may be utilized with little (or less) potential for mistakes. The article concludes by clarifying that the arguments are not against XAI in the military, but are instead intended as a caution against over-inflating the value of XAI in this domain, or ignoring the limitations and potential pitfalls of this approach.
- Generative AI and wargaming: What is it good for?
Hinton, P., RUSI Journal, 168 (7), 2023.
It has been suggested that generative AI might be used to support military wargaming activity. Patrick Hinton contends that, while there are potential uses for models in the execution of wargames through decision support, their very nature makes wargames a human endeavour and people will remain central to wargames going forward.
- Insurmountable enemies or easy targets? Military-themed videogame ‘translations’ of weaponized artificial intelligence
Qiao-Franco, G. and Franco, P., Security Dialogue, 55 (1), 2024.
- Le soldat augmenté, quel impératif ? [The augmented soldier: what imperative?]
Héritier, M. and Gheorghiev, C., Annales Médico-psychologiques, revue psychiatrique, 181 (1), 2023.
The demands of warfighting effectiveness lead to the expression of an aggressiveness that turns subjects into individuals, from the Latin "individuus", that is, men undivided by speech. The project of technologically augmenting the soldier is today of interest to France and cannot be held back, at the risk of a capability gap for the armed forces. However, a defence ethics committee met in 2020 to limit the associated risks, and it relies on the expertise of the Service de santé des armées to carry out risk assessment, to inform command and service members, and to provide continuous psychological follow-up. What will be the effects of any augmentation process on the subject? Of what malaise is it a sign? And how should we respond in the field of psychiatry? The primary challenge will be not to confuse the ethics claimed by the committees with that of the subject of psychoanalysis, that is, of the unconscious. Failing to make this distinction, we will only subordinate our work to operational effectiveness, far from the defence of patients.
- The moral case for the development and use of autonomous weapon systems
Riesen, E., Journal of Military Ethics, 21 (2), 2022.
Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an otherwise just war) because such systems decrease the psychological and moral risk of soldiers and would-be soldiers. Whereas drones protect against lethal risk, AWS protect against psychological and moral risk in addition to lethal risk. Second, we have a prima facie obligation to develop such technologies because, once developed, we could employ forms of non-lethal warfare that would substantially reduce the risk of suffering and death for enemy combatants and civilians alike. These two arguments, covering both sides of a conflict, represent the normative hill that those in favor of a ban on autonomous weapons must overcome. Finally, I demonstrate that two recent objections to AWS fail because they misconstrue the way in which technology is used and conceptualized in modern warfare.
- Public perceptions of the use of artificial intelligence in defence: A qualitative exploration
Hadlington, L., Karanika-Murray, M., Slater, J., Binder, J., Gardner, S. et al., AI & Society, (February), 2024.
There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is the first to explore public perceptions of and attitudes towards AI in Defence. A series of four focus groups were conducted with 20 members of the UK public, aged between 18 and 70, to explore their perceptions and attitudes towards AI use in general contexts and, more specifically, applications of AI in Defence settings.
Thematic analysis revealed four themes and eleven sub-themes, spanning the role of humans in the system, the ethics of AI use in Defence, trust in AI versus trust in the organisation, and gathering information about AI in Defence. Participants demonstrated a variety of misconceptions about the applications of AI in Defence, with many assuming that a range of different technologies involving AI are already being used. This highlighted a confluence of information from reputable sources with narratives from the mass media and conspiracy theories. The study demonstrates gaps in knowledge and misunderstandings that need to be addressed, and offers practical insights for keeping the public reliably, accurately, and adequately informed about the capabilities, limitations, benefits, and risks of AI in Defence.
- A risk-based regulatory approach to autonomous weapon systems
Blanchard, A., Novelli, C., Floridi, L. and Taddeo, M., Centre for Digital Ethics (CEDE) Research Paper Series, (April), 2024.
International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management. This requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Climate Change (IPCC). It examines the interactions among key risk factors—determinants, drivers, and types—to evaluate the risk magnitude of AWS and establish risk tolerance thresholds through a risk matrix informed by background knowledge of event likelihood and severity. Further, it proposes a methodology to assess community risk appetite, emphasizing that such assessments and resulting tolerance levels should be determined through deliberation in a multistakeholder forum. The paper highlights the complexities of applying risk-based regulations to AWS internationally, particularly the challenge of defining a global community for risk assessment and regulatory legitimization.
- When tomorrow comes: A prospective risk assessment of a future artificial general intelligence-based uncrewed combat aerial vehicle system
Salmon, P.M., McLean, S., Carden, T., King, B.J., Thompson, J. et al., Applied Ergonomics, 117 (May), 2024.
There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity; however, as AGI does not yet exist it is difficult to prospectively identify risks and develop requisite controls. We applied the Work Domain Analysis Broken Nodes (WDA-BN) and Event Analysis of Systemic Teamwork-Broken Links (EAST-BL) methods to identify potential risks in a future ‘envisioned world’ AGI-based uncrewed combat aerial vehicle system. The findings suggest five main categories of risk in this context: sub-optimal performance risks, goal alignment risks, super-intelligence risks, over-control risks, and enfeeblement risks. Two of these categories, goal alignment risks and super-intelligence risks, have not previously been encountered or dealt with in conventional safety management systems. Whereas most of the identified sub-optimal performance risks can be managed through existing defence design lifecycle processes, we propose that work is required to develop controls to manage the other risks identified. These include controls on AGI developers, controls within the AGI itself, and broader sociotechnical system controls.
- Who acts when autonomous weapons strike? The act requirement for individual criminal responsibility and state responsibility
Gaeta, P., Journal of International Criminal Justice, 21 (5), 2024.
This essay examines the theories according to which 'actions' carried out by autonomous weapon systems enabled by strong artificial intelligence in detecting, tracking and engaging with the target ('intelligent AWS') may be seen as an 'act' of the weapon system for the purpose of legal responsibility. The essay focuses on the material act required for the commission of war crimes related to prohibited attacks in warfare. After briefly presenting the various conceptions of the act as an essential component of the material element of criminal offences, it argues that the material act of war crimes related to prohibited attacks is invariably carried out by the user of an 'intelligent AWS'. This also holds true in the case of so-called 'unintended engagements' during the course of a military attack carried out with an intelligent AWS. The essay moves on to examine the question of whether, in the case of the use of intelligent AWS by the armed forces of a state, the 'actions' of intelligent AWS - including those not intended by the user - are attributable to the state. It demonstrates that under a correct understanding of the concept of 'act of state' for the purpose of attributing state responsibility under international law, such attribution is unquestionable. It underlines that suggesting otherwise would bring to a breaking point the possibility of establishing violations by states of international humanitarian law in the conduct of hostilities.
- Last Updated: Oct 25, 2024 3:04 PM
- URL: https://ec-europa-eu.libguides.com/ai-and-warfare