EC Library Guide on artificial intelligence, security, defence and warfare: Selected publications
Selected publications from international organisations
- Above the law: Drones, aerial vision and the law of armed conflict – a socio-technical approach
Krebs, S., International Review of the Red Cross, (924), 2023.
Aerial visuals play a central – and increasing – role in military operations, informing military decision-makers in real time. While adding relevant and time-sensitive information, these visuals construct an imperfect representation of people and spaces, placing additional burdens on decision-makers and creating a persuasive – yet misleading – virtual representation of the actual conditions on the ground. Based on interdisciplinary analysis of critical security studies, behavioural economics and international law literature, as well as rich data from US and Israeli military investigations into four military operations spanning from 2009 to 2021, this article identifies three types of challenges stemming from the mounting reliance on aerial visuals to inform military operations: technical challenges, relating to the technical capabilities and features of aerial vision technologies; cognitive challenges, relating to decision-making biases affecting human decision-makers; and human-technological challenges, relating to the human–machine interaction itself. The article suggests ways to mitigate these challenges, improve the application of the law of armed conflict, and protect people, animals and the environment during armed conflicts.
- Artificial intelligence and machine learning in armed conflict: A human-centred approach
International Committee of the Red Cross, International Review of the Red Cross, 102 (913), 2021.
AI and machine learning systems could have profound implications for the role of humans in armed conflict, especially in relation to: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making. In the view of the ICRC, governments, militaries and other relevant actors in armed conflict must pursue a genuinely human-centred approach to the use of AI and machine learning systems based on legal obligations and ethical responsibilities. The use of AI in weapon systems must be approached with great caution.
- Governance of artificial intelligence in the military domain
United Nations Office for Disarmament Affairs, UNODA Occasional Papers, (42), 2024.
The integration of artificial intelligence (AI) into military applications, such as weapon systems, decision-support tools, and various other tasks, poses opportunities and challenges to international peace and security. Military applications of AI can exacerbate and amplify existing risks and could also lead to new unintended consequences. Rapid developments in this technological domain have outpaced the development of guardrails to mitigate such risks. This publication aims to enhance the international community’s understanding of the governance of AI in the military domain. It outlines the opportunities and risks associated with military applications of AI, highlights areas of contention within the expert and diplomatic communities, and offers policy recommendations and options for its multilateral governance.
- The militarization of artificial intelligence
Sisson, M., Spindel, J., Scharre, P., et al., United Nations, 2020.
Artificial Intelligence (AI) has the potential to improve the health and well-being of individuals, communities, and states, and help meet the UN’s Sustainable Development Goals. However, certain uses of AI could also undermine international peace and security by raising concerns about safety and security of the technology, accelerating the pace of armed conflicts, or loosening human control over the means of war.
In 2019, the United Nations Office for Disarmament Affairs, the Stanley Center and the Stimson Center partnered in a workshop and series of papers to facilitate a multistakeholder discussion among experts from Member States, industry, academia, and research institutions, with the aim of building understanding about the peace and security implications of AI. This publication captures that conversation and shares assessments of the topic from US, Chinese, and Russian perspectives. It is intended to provide a starting point for more robust dialogues among diverse communities of stakeholders as they endeavor to maximize the benefits of AI while mitigating the misapplication of this important technology.
- NATO-Mation: Strategies for leading in the age of artificial intelligence
Gilli, A., NDC Research Paper, (15), NATO Defense College, 2020.
The increasing power of processors, accuracy of algorithms, and availability of digital data are driving the dramatic artificial intelligence (AI)-centred technological transformation now in progress. These changes have already turned companies, industries and markets upside down, and we are also starting to see their effects on the battlefield. The employment of unmanned vehicles, reliance on big data for target detection, identification and acquisition, and the potential of machine learning in other critical functions such as logistics and maintenance are only some examples of how warfare will evolve in the near future. The North Atlantic Treaty Organization (NATO) and its Allies cannot be bystanders during this technological transition.
- NATO decision-making in the age of big data and artificial intelligence
Lucarelli, S., Marrone, A. and Moro, F.N. (eds), NATO, 2021.
This publication is the result of the conference 'NATO Decision-Making: Promises and Perils of the Big Data Age', organized by NATO Allied Command Transformation (ACT), the University of Bologna and the Istituto Affari Internazionali (IAI) of Rome. The digital revolution has substantially transformed the world we live in, providing great opportunities but also making societies more vulnerable. Technology makes external interference cheaper, faster and all-encompassing: citizens can potentially become direct targets of information warfare, and all members of a society can be drawn into conflicts in one way or another. From advanced weaponry to command and control, most security-related domains are undergoing deep transformations as data availability and transmission increase exponentially. In this context, the publication explores three broad, interconnected aspects with a view to NATO's future evolution. First, the organizational challenges that Big Data poses for the Alliance, which has to adapt in order to exploit its potential fully while mitigating the risks it entails. Second, the hybrid threats to Allies' decision-making through cyberspace, whereby, for instance, artificial intelligence (AI) and Big Data enable more effective information warfare by hostile actors. Third, the adoption of AI in the defence domain, from equipment to procedures, where NATO can play a positive role, including in the dialogue with private companies.
- The risks of autonomous weapons: An analysis centred on the rights of persons with disabilities
Díaz Figueroa, M., Henao Orozco, A., Martínez, J. and Muñoz Jaime, W., International Review of the Red Cross, (922), 2023.
Autonomous weapons systems have been the subject of heated debate since 2010, when Philip Alston, then Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, brought the issue to the international spotlight in his interim report to the United Nations (UN) General Assembly 65th Session. Later, in 2013, Christof Heyns, who was Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions at the time, published a report that elaborated further on the issues raised by what he called “lethal autonomous robotics”. Following a recommendation by the Advisory Board on Disarmament Matters at the UN General Assembly 68th Session, the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2001, started discussing autonomous weapons systems in 2014. The Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS) was then created in 2016 to focus on this issue. While the group has kept meeting since then, no clear steps have yet been taken towards a normative framework on autonomous weapons as of September 2022.
In all these years, persons with disabilities – including conflict survivors – have not been included in discussions, nor has the disability perspective been reflected in international debate on autonomous weapons. Only recently has there been any effort to consider the rights of persons with disabilities when examining ethical questions related to artificial intelligence (AI). In this article, we will examine how and why autonomous weapons have a disproportionate impact on persons with disabilities, because of the discrimination that results from a combination of factors such as bias in AI, bias in the military and the police, barriers to justice and humanitarian assistance in situations of armed conflict, and the lack of consultation and participation of persons with disabilities and their representative organizations on issues related to autonomy in weapons systems.
- Science & Technology trends 2020-2040: Exploring the S&T edge
Reding, D.F. and Eaton, J., NATO Science & Technology Organization, 2020.
Science & Technology Trends: 2020-2040 provides an assessment of the impact of S&T advances over the next 20 years on the Alliance. This assessment is based on a review of selected national and international S&T foresight and futures studies, multi-national workshops, and technology watch activities conducted by the Science & Technology Organization. I gratefully acknowledge the collaboration and support provided by Alliance and Partner defence R&D communities, the NATO International Staff, Allied Command Transformation (ACT), and the NATO Communication and Information Agency (NCIA).
As the world changes, so does our Alliance. NATO adapts. We continue to work together as a community of like-minded nations, seeking to develop military capabilities fit for the geostrategic challenges of today and the future. As such, NATO nations must remain at the forefront of innovation, S&T-based or otherwise, while facing challenges from all strategic directions and across all operational domains. To do so requires an appreciation of the potential future security environment, especially the military and security challenges presented by emerging or disruptive S&T. Drawing upon the intellectual strength and knowledge advantage of the Alliance, Science & Technology Trends: 2020-2040 provides just such an assessment. The informed insights and information provided will help guide NATO at all levels and the Alliance as we prepare to evolve and adapt to the future security environment and the challenges ahead.
- The threat of killer robots
Sychev, V., UNESCO, 2018 (updated in 2023).
Artificial intelligence (AI) has a growing number of applications in the security and military areas. It facilitates manoeuvres in the field, and can save lives when things go wrong. It also boosts the performance of armies by providing robot allies to combat forces. According to some experts, Lethal Autonomous Weapons Systems (LAWS) are creating a “Third Revolution” in warfare, after gunpowder and nuclear weapons. It is time we start worrying about the day when armies of robots are capable of conducting hostilities with full autonomy, without humans to command them.
- Last Updated: Oct 25, 2024 3:04 PM
- URL: https://ec-europa-eu.libguides.com/ai-and-warfare