EC Library Guide on large language models and generative artificial intelligence: Selected publications
Selected EU publications
- Adopt AI study – Final study report
European Commission, Directorate-General for Communications Networks, Content and Technology (CNECT), 2024.
A study commissioned by the European Commission highlights the significant potential of Artificial Intelligence (AI) to improve public sector services across the EU. The report emphasizes that AI can enhance citizen-government interactions, boost analytical capabilities, and increase efficiency in key areas such as healthcare, mobility, e-Government, and education. These sectors are identified as among the most ready for large-scale AI deployment, with applications ranging from autonomous vehicles and smart traffic systems to AI-driven healthcare solutions and education technologies.
However, the study also outlines several challenges hindering AI uptake in the public sector. These include complex public procurement processes, difficulties in data management, a lack of regulatory clarity, and concerns about bias in AI decision-making. In response, the report provides a series of policy recommendations aimed at accelerating AI adoption. These include increasing funding and resources for AI in public services, ensuring transparency and accountability in AI systems, promoting cross-border data sharing, and aligning industry and public sector expectations. The European Commission is advised to create a clear regulatory framework for AI, prioritise long-term implementation, and foster human-centric, trustworthy AI solutions. By addressing these challenges, the EU aims to position itself as a global leader in the development of trustworthy and sustainable AI technologies for the public sector.
- AI watch: Revisiting technology readiness levels for relevant artificial intelligence technologies
European Commission: Joint Research Centre, Martínez-Plumed, F. and Caballero, F., Publications Office of the European Union, 2022.
Artificial intelligence (AI) offers the potential to transform our lives in radical ways. However, we lack the tools to determine which achievements will be attained in the near future, and we usually underestimate what various AI technologies are capable of today. Certainly, the translation from scientific papers and benchmark performance to products is faster in AI than in other, non-digital sectors. However, it is often the case that research breakthroughs do not directly translate into a technology that is ready to use in real-world environments. This report constitutes the second edition of a study proposing an example-based methodology to categorise and assess several AI technologies by mapping them onto Technology Readiness Levels (TRL), i.e. maturity and availability levels.
We first interpret the nine TRLs in the context of AI and identify different categories in AI to which they can be assigned. We then introduce new bidimensional plots, called readiness-vs-generality charts, where we see that higher TRLs are achievable for low-generality technologies focusing on narrow or specific abilities, while high TRLs are still out of reach for more general capabilities. In an incremental way, this edition builds on the first report on the topic by updating the assessment of the original set of AI technologies and complementing it with an analysis of new AI technologies. We include numerous examples of AI technologies in a variety of fields and show their readiness-vs-generality charts, serving as a base for a broader discussion of AI technologies. Finally, we use the dynamics of several AI technologies at different generality levels and moments of time to forecast some short-term and mid-term trends for AI.
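As a rough illustration of the chart format described above, the following minimal sketch shows how a readiness-vs-generality chart could be drawn; the technology names and scores are hypothetical placeholders, not values from the report.

```python
# Illustrative sketch of a readiness-vs-generality chart.
# The technologies and scores are hypothetical placeholders, NOT values
# taken from the JRC report.
import matplotlib.pyplot as plt

# Each entry: (label, generality score 0-1, Technology Readiness Level 1-9)
examples = [
    ("Narrow speech-to-text system", 0.2, 9),
    ("Domain-specific chatbot", 0.4, 7),
    ("General-purpose language model", 0.7, 5),
    ("Open-ended autonomous agent", 0.9, 3),
]

fig, ax = plt.subplots()
for label, generality, trl in examples:
    ax.scatter(generality, trl)
    ax.annotate(label, (generality, trl), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Generality (narrow -> general)")
ax.set_ylabel("Technology Readiness Level (1-9)")
ax.set_title("Readiness-vs-generality chart (illustrative data)")
ax.set_xlim(0, 1)
ax.set_ylim(0, 10)
plt.show()
```

The downward slope in this toy example mirrors the report's observation that higher TRLs tend to be reached by narrow technologies, while more general capabilities remain at lower readiness levels.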
- ChatGPT: The impact of large language models on law enforcement
Europol, Publications Office of the European Union, 2023.
The release and widespread use of ChatGPT – a large language model (LLM) developed by OpenAI – has attracted significant public attention, chiefly due to its ability to quickly provide ready-to-use answers that can be applied to a vast range of different contexts. These models hold masses of potential. Machine learning, once expected to handle only mundane tasks, has proven itself capable of complex creative work. LLMs are being refined and new versions rolled out regularly, with technological improvements coming thick and fast. While this offers great opportunities to legitimate businesses and members of the public, it can also be a risk for them and for the respect of fundamental rights, as criminals and bad actors may wish to exploit LLMs for their own nefarious purposes.
In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across the organisation to explore how criminals can abuse LLMs such as ChatGPT, as well as how it may assist investigators in their daily work. The experts who participated in the workshops represented the full spectrum of Europol’s expertise, including operational analysis, serious and organised crime, cybercrime, counterterrorism, as well as information technology. The objective of this report is to examine the outcomes of the dedicated expert workshops and to raise awareness of the impact LLMs can have on the work of the law enforcement community. As this type of technology is undergoing rapid progress, this document further provides a brief outlook of what may still be to come, and highlights a number of recommendations on what can be done now to better prepare for it.
- EDPS TechDispatch #2/2023: Explainable artificial intelligence
European Data Protection Supervisor and Bernardo, V., Publications Office of the European Union, 2023.
The adoption of artificial intelligence (AI) is rapidly growing in sectors such as healthcare, finance, transportation, manufacturing and entertainment. Its increasing popularity in recent years is largely due to its ability to automate tasks, such as processing large amounts of information or identifying patterns, and its widespread availability to the public. Large language models (LLMs), like ChatGPT, and text-to-image models, like Stable Diffusion, are two examples of AI that have gained great popularity in recent years. However, despite the growing use of AI, many of these systems operate in ways that are opaque to those providing AI systems (‘providers’), those deploying them (‘deployers’), and those affected by their use. In the complex realm of AI systems, even the providers of these systems are often unable to explain the decisions and outcomes of the systems they have built. This phenomenon is commonly referred to as the “black box” effect.
- FABLS - framework for autonomous behaviour-rich language-driven emotion-enabled synthetic populations: Modelling autonomous emotional AI-driven agents in their spatiotemporal context
Hradec, J., Ostlaender, N. and Bernini, A., Publications Office of the European Union, 2023.
The research presented in this Technical Report investigates how large language models (LLMs), through their extensive training and their linguistic capabilities, emerge as reservoirs of a vast array of human experiences, behaviours, and emotions. Building upon prior work of the JRC on synthetic populations, it presents a complete step-by-step guide on how to use LLMs to create highly realistic modelling scenarios and complex societies of autonomous emotional Artificial Intelligence agents (AI agents). An AI agent is defined as a program that employs artificial intelligence techniques to perform tasks that typically require human-like intelligence. The report describes how the agents of a small subset of the existing synthetic population generated by Hradec and colleagues (2022) were instantiated using LLMs and enriched with personality traits using the ABC-EBDI model, which combines the psychotherapeutic ABC model (Activation, Belief, Consequence) with the EBDI model (Emotions, Belief, Desire, Intent).
These intelligent agents were then equipped with short- and long-term memory, access to detailed knowledge of their environment, as well as the use of tools such as a “mobile phone with a contact list” and the possibility to call friends and “public services”. We found that this setting of embodied reasoning (Huang et al., 2023) significantly improved the agents' problem-solving capabilities. Hence, when subjected to various scenarios, such as simulated natural disasters, the LLM-driven agents exhibited behaviours mirroring human-like reasoning and emotions, inter-agent patterns and realistic conversations, including elements that reflect critical thinking. The study shows how these LLM-driven agents can serve as believable proxies for human behaviour in simulated environments, which has vast implications for future research and policy applications, including impact assessment of different policy scenarios. The next level of implementation would cover a setting where all agents of the synthetic population have access to their complete environment, a comprehensive network of contacts, functioning public services and an actual synthetic economy.
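To make the setup more concrete, the following minimal sketch shows what such an LLM-driven agent with memory and a “phone” tool could look like. It is not the FABLS implementation: query_llm() is a placeholder for whatever LLM backend is used, and the personality, memory and contact handling are heavily simplified assumptions.

```python
# Minimal sketch of an LLM-driven agent with memory and a contact list,
# loosely inspired by the setup described above. Not the FABLS code:
# query_llm() is a placeholder for an actual LLM call.
from dataclasses import dataclass, field


def query_llm(prompt: str) -> str:
    """Placeholder for a call to a real LLM backend."""
    return "I call my neighbour Ana and ask whether the road to the shelter is open."


@dataclass
class EmotionalAgent:
    name: str
    personality: str  # e.g. traits derived from an ABC-EBDI-style profile
    contacts: list[str] = field(default_factory=list)  # "mobile phone with a contact list"
    short_term_memory: list[str] = field(default_factory=list)
    long_term_memory: list[str] = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Compose a prompt from personality, memories and the current situation.
        prompt = (
            f"You are {self.name}. Personality: {self.personality}.\n"
            f"Long-term memories: {self.long_term_memory}\n"
            f"Recent events: {self.short_term_memory}\n"
            f"Contacts you can phone: {self.contacts}\n"
            f"Current situation: {observation}\n"
            "Describe, in first person, what you feel and do next."
        )
        action = query_llm(prompt)
        # Remember the exchange so later steps can build on it.
        self.short_term_memory.append(f"{observation} -> {action}")
        return action


# One scenario step for a single agent in a simulated natural disaster.
agent = EmotionalAgent(
    name="Maria",
    personality="anxious but cooperative, cares for elderly neighbours",
    contacts=["Ana (neighbour)", "112 emergency services"],
    long_term_memory=["The river flooded the lower streets five years ago."],
)
print(agent.act("A flood warning siren sounds in your district."))
```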
- How artificial intelligence works
Boucher, P., European Parliament, 2019.
This briefing provides accessible introductions to some of the key techniques that come under the AI banner, grouped into three sections to give a sense of the chronology of its development. The first covers early techniques, described as ‘symbolic AI’, while the second focusses on the ‘data-driven’ approaches that currently dominate, and the third looks towards possible future developments. By explaining what is ‘deep’ about deep learning and showing that AI is more maths than magic, the briefing aims to equip the reader with the understanding they need to engage in clear-headed reflection about AI’s opportunities and challenges, and meaningful debates about its development.
- Technology foresight for public funding of innovation: Methods and best practices
European Commission: Joint Research Centre, Dannemand Andersen, P., Vesnic-Alujevic, L., et al., Publications Office of the European Union, 2023.
In times of growing uncertainties and complexities, anticipatory thinking is essential for policymakers. Technology foresight explores the longer-term futures of Science, Technology and Innovation. It can be used as a tool to create effective policy responses, including in technology and innovation policies, and to shape technological change. In this report we present six anticipatory and technology foresight methods that can contribute to anticipatory intelligence in terms of public funding of innovation: the Delphi survey, genius forecasting, technology roadmapping, large language models used in foresight, horizon scanning and scenario planning. Each chapter provides a brief overview of the method with case studies and recommendations. The insights from this report show that only by combining different anticipatory viewpoints and approaches to spotting, understanding and shaping emergent technologies can public funders such as the European Innovation Council improve their proactive approaches to supporting ground-breaking technologies. In this way, they will help innovation ecosystems to develop.
- Use of Large Language Models for location detection on the example of the terrorism and extremism event database
European Commission: Joint Research Centre, Bosso, F., Valisa, J., et al., Publications Office of the European Union, 2023.
This technical report discusses the potential use of Large Language Models for location detection, with a focus on the JRC Terrorism and Extremism Database. The report highlights the current inaccuracies in the database’s location detection algorithm, which uses ad hoc created embeddings, struggles with contextual issues, and has difficulty with translated location names. These issues can lead to misclassified events, negatively impacting the quality of the tool. The report suggests exploring more accurate and sophisticated approaches, taking advantage of recent advancements in Artificial Intelligence, to increase the accuracy of the automated classification and thus help reduce human intervention.
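To illustrate the direction the report points towards, the following sketch shows how an LLM could be prompted to extract a structured location from a free-text event description. It is illustrative only, not the JRC pipeline: query_llm() is a placeholder for whatever model would actually be used, and the prompt wording is an assumption.

```python
# Minimal sketch of prompting an LLM to extract the event location from a
# free-text event description. Illustrative only, not the JRC pipeline.
import json


def query_llm(prompt: str) -> str:
    """Placeholder for a call to a real LLM; returns a JSON string here."""
    return '{"city": "Vienna", "country": "Austria"}'


def extract_location(event_text: str) -> dict:
    prompt = (
        "Identify the location where the following event took place. "
        "Answer only with JSON of the form "
        '{"city": ..., "country": ...}; use null if a field is unknown.\n\n'
        f"Event: {event_text}"
    )
    return json.loads(query_llm(prompt))


# Context and translated place names ("Wien" vs "Vienna") are left to the
# model itself, rather than to ad hoc embeddings over location strings.
print(extract_location(
    "Police arrested two suspects after an attack near the main station in Wien."
))
```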
- Last Updated: Oct 25, 2024 3:55 PM
- URL: https://ec-europa-eu.libguides.com/llm-and-genAI