Artificial Intelligence, Ethics, Bias and Fairness
Selected EU publications
-
Adopt AI study – Final study report
European Commission, Directorate-General for Communications Networks, Content and Technology (CNECT), 2024.
A study commissioned by the European Commission highlights the significant potential of Artificial Intelligence (AI) to improve public sector services across the EU. The report emphasizes that AI can enhance citizen-government interactions, boost analytical capabilities, and increase efficiency in key areas such as healthcare, mobility, e-Government, and education. These sectors are identified as among the most ready for large-scale AI deployment, with applications ranging from autonomous vehicles and smart traffic systems to AI-driven healthcare solutions and education technologies.
However, the study also outlines several challenges hindering AI uptake in the public sector. These include complex public procurement processes, difficulties in data management, a lack of regulatory clarity, and concerns about bias in AI decision-making. In response, the report provides a series of policy recommendations aimed at accelerating AI adoption. These include increasing funding and resources for AI in public services, ensuring transparency and accountability in AI systems, promoting cross-border data sharing, and aligning industry and public sector expectations. The European Commission is advised to create a clear regulatory framework for AI, prioritise long-term implementation, and foster human-centric, trustworthy AI solutions. By addressing these challenges, the EU aims to position itself as a global leader in the development of trustworthy and sustainable AI technologies for the public sector.
-
AI-based solutions for legislative drafting in the EU – Summary report
European Commission: Directorate-General for Digital Services, Fitsilis, F. and Mikros, G., AI-based solutions for legislative drafting in the EU – Summary report, Publications Office of the European Union, 2024.
This publication provides an overview of the results of a European Union (EU) funded study entitled “Overview of smart functionalities in drafting legislation in LEOS”. The full study has been published on the European Commission's (EC) Joinup platform and centres on the concept of smart functionalities in law-making, i.e. advanced information and communication technology (ICT) services that assist legal drafters and policy developers in their daily work. The underlying research was conducted in view of the development of an “augmented LEOS”, an open-source solution developed by the EC for drafting legislation.
The work draws on the results of a 2022 study on “Drafting legislation in the era of AI and digitisation”, referred to as the reference study. The present study offers a thorough examination of various development steps of the "augmented LEOS" system. It confirms, updates, and expands upon the findings of the reference study. Moreover, it provides a detailed assessment of the business value associated with the proposed smart functionalities. The prioritisation of these functionalities is carried out based on their perceived business value. Furthermore, the study conducts an in-depth investigation into the implementation of these functionalities, addressing their deployment. Additionally, recognising the emergence of Large Language Models (LLMs), the study explores their utilisation in drafting legislation. In this context, potential implications and applications of LLMs in the legislative processes are analysed. Finally, the study suggests a high-level framework and roadmap for further work, outlining the necessary steps and milestones for the successful realisation of the augmented LEOS system.
-
AI Act – Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
European Data Protection Supervisor, AI Act Regulation (EU) 2024/1689 – Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance), Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2804/4225375
The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
-
AI evidence pathway for operationalising trustworthy AI in health – An ontology unfolding ethical principles into translational and fundamental concepts
Griesinger, C. B., Reina, V., Panidis, D. and Chassaigne, H., AI evidence pathway for operationalising trustworthy AI in health – An ontology unfolding ethical principles into translational and fundamental concepts, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2760/8107037
Health, inherently rich in multi-modal data, could profit significantly from artificial intelligence (AI). Yet, adoption of AI in health remains challenging due to three key issues: (1) The “trust barrier”: while a plethora of documents based on AI (ethical) principles are available, there remains a significant interpretation gap between high-level desiderata and detailed actionable concepts. This hampers determination of both the type and level of evidence that would render AI tools sufficiently trustworthy for adoption and integration into use contexts and environments. This is further complicated by the heterogeneous landscape of principles used by various organisations, despite robust evidence of convergence towards roughly 10 principles.
(2) The “complexity barrier”: health is complex in terms of life cycle and value chains, involving specialised communities that need to develop and translate AI governance into pragmatic approaches that integrate up- and downstream life cycle stages in terms of evidence requirements. This requires networked thinking, forward-looking planning and bridging of disciplines and domains. However, out-of-domain literacy is typically limited, impeding effective collaboration for trustworthy AI. (3) The “technical barrier”: interoperability and infrastructure needs that may collide with the underfunding of health systems.
To tackle these issues, we propose an ‘AI evidence pathway for health’ aimed at collaboration for evidence on trustworthy AI. The present ontology is its cornerstone. It lays out a pathway for evidence identification, using 10 consensus ethical principles which are unfolded into 42 high-level ‘translational concepts’ that branch into a further 110 lower-level concepts (part A of the ontology). The translational concepts connect to 12 clusters of 179 fundamental socio-ethical, scientific, technical, and clinical concepts relevant for AI design, development, evaluation, use and monitoring (part B). Relationships between individual concepts are indicated throughout. The ontology defines user communities for AI innovation in health and outlines a comprehensive life cycle and value chain framework. We introduce the concept of “algorithm-to-model transition” to capture all decisions that may affect the benefits and risks of a model – throughout the life cycle and across value chains. The ontology embraces the benefit-risk ratio concept, emphasising the need for robust real-world evidence on possible benefits of AI tools. The concept descriptions are enriched by a total of ca. 900 publication references. The ontology provides an innovative and comprehensive knowledge resource to support the bridging of relevant actor communities and foster collaboration in view of ‘operationalising’ trustworthy AI in health.
-
Analysis of EU AI Office – Stakeholder consultations – Defining AI systems and prohibited applications – Final study report
European Commission: Directorate-General for Communications Networks, Content and Technology and Centre for European Policy Studies (CEPS), Analysis of EU AI Office – Stakeholder consultations – Defining AI systems and prohibited applications – Final study report, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2759/6218665
This report analyses the results of stakeholder consultations conducted by the EU AI Office regarding two critical aspects of AI regulation: the definition of AI systems and prohibited AI applications (European Commission, 2024). This report synthesises stakeholder feedback to 88 questions and informs the development of clear, practical, and effective AI regulatory frameworks. The report was drafted by the Centre for European Policy Studies (CEPS).
-
Analysis of the generative AI landscape in the European public sector
European Commission: Directorate-General for Digital Services, Brizuela, A., Combetto, M., Kotoglou, S., Galasso, G. et al., Analysis of the generative AI landscape in the European public sector, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2799/0409819
This report provides a broad description of the adoption of generative AI (or GenAI) within the European public sector. It focuses on (i) guidelines and policies adopted within administrations to regulate the use of this emerging technology; and (ii) the multiple applications and use cases found in the Public Sector Tech Watch observatory. The public sector is quickly adopting GenAI solutions, but administrations are facing daily challenges related to implementation processes and effective public-private collaborations. Administrations are also facing other challenges in their regulatory efforts, primarily centred around human oversight; accountability; the importance of data protection; and governance, safety, fairness and transparency.
-
Annual Report 2024 – European Data Protection Supervisor
European Data Protection Supervisor, Annual Report 2024, Publications Office of the European Union, 2025.
The EDPS Annual Report 2024 is about acting for the future of data protection, preparing for the diverse possibilities and risks that the digital landscape presents.
-
Artificial intelligence (AI) and human rights – Using AI as a weapon of repression and its impact on human rights – In-depth analysis
European Parliament: Directorate-General for External Policies of the Union and Ünver, A., Artificial intelligence (AI) and human rights – Using AI as a weapon of repression and its impact on human rights – In-depth analysis, Publications Office of the European Union, 2024.
This in-depth analysis (IDA) explores the most prominent actors, cases and techniques of algorithmic authoritarianism together with the legal, regulatory and diplomatic framework related to AI-based biases as well as deliberate misuses. With the world leaning heavily towards digital transformation, AI’s use in policy, economic and social decision-making has introduced alarming trends in repressive and authoritarian agendas. Such misuse grows ever more relevant to the European Parliament, resonating with its commitment to safeguarding human rights in the context of digital transformation.
By shedding light on global patterns and rapidly developing technologies of algorithmic authoritarianism, this IDA aims to produce a wider understanding of the complex policy, regulatory and diplomatic challenges at the intersection of technology, democracy and human rights. Insights into AI’s role in bolstering authoritarian tactics offer a foundation for Parliament’s advocacy and policy interventions, underscoring the urgency for a robust international framework to regulate the use of AI, whilst ensuring that technological progress does not weaken fundamental freedoms. Detailed case studies and policy recommendations serve as a strategic resource for Parliament’s initiatives: they highlight the need for vigilance and proactive measures by combining partnerships (technical assistance), industrial thriving (AI Act), influence (regulatory convergence) and strength (sanctions, export controls) to develop strategic policy approaches for countering algorithmic control encroachments.
-
Artificial intelligence and civil liability – A European perspective
European Parliament: Directorate-General for Citizens’ Rights, Justice and Institutional Affairs and Bertolini, A., Artificial intelligence and civil liability – A European perspective, European Parliament, 2025, https://data.europa.eu/doi/10.2861/0075079
This study, commissioned by the European Parliament's Policy Department for Justice, Civil Liberties and Institutional Affairs, critically analyses the EU's evolving approach to regulating civil liability for artificial intelligence systems.
-
Assessing technology in law enforcement – A method for ethical decision-making
Europol, Assessing technology in law enforcement – A method for ethical decision-making, Publications Office of the European Union, 2025.
Europol was mandated in 2019 by the EU Justice and Home Affairs ministers to create an Innovation Lab to support the law enforcement community in the area of innovation. The Lab aims to identify, promote and develop concrete innovative solutions in support of the EU Member States’ operational work. These will help investigators and analysts to make the most of the opportunities offered by new technology to avoid duplication of work, create synergies and pool resources. The activities of the Lab are directly linked to the strategic priorities as laid out in Europol Strategy 2020+, which states that Europol shall be at the forefront of law enforcement innovation and research.
The European Clearing Board for ‘Tools, Methods and Innovations in the field of technical support of operations and investigations’ (EuCB) was launched by the Heads of Europol National Units (HENUs) in their meeting of 5 November 2020. It is composed of Single Points of Contact (SPoCs) from the Europol Innovation Lab, all EU Member States and the four Schengen-associated countries. SPoCs meet regularly in plenary meetings, during which they update each other on innovative projects and tools and decide on new joint collaboration activities. The Strategic Group on Technology and Ethics was founded in 2021 under the umbrella of the EuCB. Currently, the group is composed of representatives from Australia, the Netherlands, Norway, Slovenia, Spain, Sweden and the UK. One of the objectives of the group has been to create these guidelines ‘Assessing technology in law enforcement: A method for ethical decision-making’ for the benefit of all EuCB members.
-
The development of generative artificial intelligence from a copyright perspective
European Union Intellectual Property Office, The development of generative artificial intelligence from a copyright perspective, European Union Intellectual Property Office, 2025, https://data.europa.eu/doi/10.2814/3893780
This study is designed to clarify how GenAI systems interact with copyright – technically, legally, and economically. It examines how copyright-protected content is used in training models, what the applicable EU legal framework is, how creators can reserve their rights through opt-out mechanisms, and what technologies exist to mark or identify AI-generated outputs. It also explores licensing opportunities and the potential emergence of a functioning market for AI training data. Although the study is intended for experts in the field, it lays the groundwork for developing clear and accessible informational resources for a broader audience.
-
Ethics for AI in aviation – Aviation professionals survey results 2024/2025
European Union Aviation Safety Agency and Berlenga, I., Ethics for AI in aviation – Aviation professionals survey results 2024/2025, European Union Aviation Safety Agency, 2025, https://data.europa.eu/doi/10.2822/7047867
The analysis presented in this review provides an overview of the activities carried out in 2024 by the European Union Aviation Safety Agency (EASA) in response to safety recommendations, as well as a comparison with historical data. This review also highlights a range of safety issues and safety improvement actions that will be of interest to the European aviation community and the wider public.
EASA has a key role in addressing safety concerns which emerge during safety investigations, and the processing of safety recommendations in a systematic manner constitutes one of its core responsibilities. This has been reflected in the establishment of a proven process for managing the safety recommendations received and tracking them through to closure. Due to its central position in the European aviation safety system, EASA can take actions with respect to systemic problems and risk management.
-
A European model for artificial intelligence
European Commission: Directorate-General for Research and Innovation, Renda, A., Balland, P., Soete, L. and Christophilopoulos, E., A European model for artificial intelligence, Publications Office of the European Union, 2025.
There is a consensus on the urgent need for a cohesive European response to the challenges posed by Artificial Intelligence (AI). Increased investment, policy alignment and skill development are crucial to leverage this ground-breaking technology for societal good. The potential of AI in science, government and industry underlines the need for the EU public sector to use its unique position to develop and support a strategic, cross-border and cooperative approach to AI development in Europe: a distinctly European model for AI.
-
Explainable AI in education – Fostering human oversight and shared responsibility – By the European Digital Education Hub’s Squad on artificial intelligence in education
European Commission: European Education and Culture Executive Agency, Explainable AI in education – Fostering human oversight and shared responsibility – By the European Digital Education Hub’s Squad on artificial intelligence in education, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2797/6780469
This report provides an in-depth analysis of explainable artificial intelligence (XAI) in the context of education, with a focus on promoting human oversight, ethical governance, and shared responsibility. It is the outcome of the European Digital Education Hub’s squad (online working group) on XAI in education. The report defines and distinguishes key concepts – transparency, interpretability, explainability, and understandability – positioning them along technical and human-centered dimensions essential for building trustworthy AI systems.
-
General-purpose GenAI chatbots – Preserving the public interest
European University Institute and Gori, P., General-purpose GenAI chatbots – Preserving the public interest, European University Institute, 2025, https://data.europa.eu/doi/10.2870/5121057
Generative Artificial Intelligence (GenAI) tools – in particular, general-purpose GenAI chatbots – are now widely used by society. The training methodology and the quality of the data on which they are trained impact their outputs. These tools are trained on large sets of mainly public data, and their outputs (i.e., responses provided to the user) depend on the prediction of the next word based on the statistically most-used word in a given context (they do not possess knowledge or reasoning capacities per se).
As such, their responses may be biased, misleading, or inaccurate (a phenomenon that is often referred to as ‘hallucinations’). These tools, which have high energy and carbon impacts, can also negatively impact our neural and cognitive capacities. Policy responses should support a balanced use of general-purpose GenAI chatbots, integrate responses across sectors (including those outside the technology sphere), support research, and ensure that tech companies act in the public interest.
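The next-word-prediction mechanism described in this entry can be illustrated with a deliberately simplified sketch. The toy Python example below (all names and the miniature corpus are hypothetical, and real chatbots use neural language models trained on vastly larger data, not a bigram table) shows the basic principle of emitting the statistically most frequent continuation, and why fluent output can nonetheless be ungrounded in facts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "large sets of mainly public data" mentioned above.
corpus = (
    "the chatbot predicts the next word . "
    "the chatbot predicts text from patterns . "
    "the model has no knowledge of facts ."
).split()

# Build a bigram table: for each word, count which words follow it in the corpus.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily emit the statistically most frequent continuation of the last word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# The output looks fluent but is purely statistical: the model has no notion of
# truth, so plausible-sounding yet wrong continuations ("hallucinations") are a
# natural consequence of the mechanism rather than an occasional malfunction.
```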
-
Generative AI and copyright – Training, creation, regulation
European Parliament: Directorate-General for Citizens’ Rights, Justice and Institutional Affairs and Lucchi, N., Generative AI and copyright – Training, creation, regulation, European Parliament, 2025, https://data.europa.eu/doi/10.2861/9120512
This study examines how generative AI challenges core principles of EU copyright law. It highlights the legal mismatch between AI training practices and current text and data mining exceptions, and the uncertain status of AI-generated content. These developments pose structural risks for the future of creativity in Europe, where a rich and diverse cultural heritage depends on the continued protection and fair remuneration of authors. The report calls for clear rules on input/output distinctions, harmonised opt-out mechanisms, transparency obligations, and equitable licensing models. To balance innovation and authors’ rights, the European Parliament is expected to lead reforms that reflect the evolving realities of creativity, authorship, and machine-generated expression. This study was commissioned by the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs at the request of the Committee on Legal Affairs.
-
Generative AI outlook report – Exploring the intersection of technology, society, and policy
Abendroth-Dias, K., Vespe, M., Arias Cabarcos, P., Kotsev, A., Bacco, M. et al., Generative AI outlook report – Exploring the intersection of technology, society, and policy, Vespe, M. (editor), Kotsev, A. (editor), Van Bavel, R. (editor) and Navajas Cawood, E. (editor), Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2760/1109679
This Outlook report, prepared by the European Commission’s Joint Research Centre (JRC), examines the transformative role of Generative AI (GenAI) with a specific emphasis on the European Union. It highlights the potential of GenAI for innovation, productivity, and societal change. GenAI is a disruptive technology due to its capability of producing human-like content at an unprecedented scale. As such, it holds multiple opportunities for advancements across various sectors, including healthcare, education, science, and creative industries. At the same time, GenAI also presents significant challenges, including the possibility to amplify misinformation, bias, labour disruption, and privacy concerns. All those issues are cross-cutting and therefore, the rapid development of GenAI requires a multidisciplinary approach to fully understand its implications.
Against this background, the Outlook report begins with an overview of the technological aspects of GenAI, detailing their current capabilities and outlining emerging trends. It then focuses on economic implications, examining how GenAI can transform industry dynamics and necessitate the adaptation of skills and strategies. The societal impact of GenAI is also addressed, with a focus on both the opportunities for inclusivity and the risks of bias and over-reliance. In light of these challenges, the regulatory framework section outlines the EU’s current legislative framework, such as the AI Act and horizontal data legislation, which promote trustworthy and transparent AI practices. Finally, sector-specific ‘deep dives’ examine the opportunities and challenges that GenAI presents. This section underscores the need for careful management and strategic policy interventions to maximize the potential benefits while mitigating the risks. The report concludes that GenAI has the potential to bring significant social and economic impact in the EU, and that a comprehensive and nuanced policy approach is needed to navigate the challenges and opportunities while ensuring that technological developments are fully aligned with democratic values and the EU legal framework.
-
The role of artificial intelligence in processing and generating new data – An exploration of legal and policy challenges in open data ecosystems
Graux, H., Gryffroy, P., Gad-Nowak, M. and Boghaert, L., The role of artificial intelligence in processing and generating new data – An exploration of legal and policy challenges in open data ecosystems, Publications Office of the European Union, 2024.
The general impact of artificial intelligence (AI) systems on businesses, governments and the global economy is currently a hot topic. This is not surprising, considering that AI is believed to have the potential to bring about radical, unprecedented changes in the way people live and work. The transformative potential of AI stems to a large extent from its ability to analyse data at scale, and to notice and internalise patterns and correlations in that data that humans (or fully deterministic algorithms) would struggle to identify. In simpler terms: modern AI systems flourish especially when they can be trained on large volumes of data, and when they are used in relation to large volumes of data.
-
The role of artificial intelligence in scientific research – A science for policy, European perspective
Purificato, E., Bili, D., Jungnickel, R., Ruiz-Serra, V., Fabiani, J. et al., The role of artificial intelligence in scientific research – A science for policy, European perspective, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2760/7217497
Artificial Intelligence (AI) is fundamentally transforming the scientific process across all stages, from hypothesis generation and experimental design to data analysis, peer review and dissemination of results. In many research fields examined in the report, such as protein structure prediction, materials discovery and computational humanities, AI accelerates discovery, fosters interdisciplinary collaboration and enhances reproducibility, while improving access to advanced analytical and computational capabilities.
These developments align with the European Union (EU)’s vision to make AI tools and infrastructure more accessible, strengthening research in areas of strategic importance such as climate change, health, and clean technologies. However, this progress introduces new challenges, including concerns about algorithmic bias, the proliferation of hallucinations and fabricated data, and the potential erosion of critical thinking skills. AI adoption remains uneven across scientific domains, and addressing these risks requires robust governance, transparency and alignment with open-science principles. This report serves as the scientific evidence base for the European Strategy for AI in Science, offering insights to help policymakers navigate the challenges and opportunities of AI. It supports efforts to maximize the benefits of AI for research excellence and competitiveness in the EU, while maintaining a firm commitment to ethical, inclusive, and European values.
-
Study exploring the context, challenges, opportunities, and trends in algorithmic management in the workplace – Annexes to the final report
European Commission: Directorate-General for Employment, Social Affairs and Inclusion and Visionary Analytics, Study exploring the context, challenges, opportunities, and trends in algorithmic management in the workplace – Annexes to the final report, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2767/0610147
A comprehensive mapping and overview of the existing and ongoing academic literature was carried out in the areas of algorithmic management (AM), AI, and digitisation in the workplace. To ensure a wider range of insights and provide a more comprehensive understanding, the literature was collected in a number of different languages. The members of the core team and the national experts gathered the relevant literature, with the former concentrating on English-language literature and the latter on literature in other EU languages.
Moreover, the identified literature consisted of a variety of document types, such as academic articles in the areas of economics, law, sociology, philosophy, and medical research, as well as studies, institutional reports, and evaluations. Grey literature, such as independent company reports, discussion and working papers, was also reviewed. In total, 622 documents were collected in the areas of AM, AI, and digitisation in the workplace.
-
Study on the deployment of AI in healthcare – Final report
European Commission: Directorate-General for Health and Food Safety, EEIG, Open Evidence and PwC, Study on the deployment of AI in healthcare – Final report, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2875/2169577
Present-day healthcare systems face several complex challenges, including rising demand due to an ageing population, the increasing prevalence of chronic and complex conditions, rising costs, and shortages in the healthcare workforce. Artificial intelligence (AI) has the potential to address some of these by improving operational efficiency, reducing administrative burdens, and enhancing diagnosis and treatment pathways. Despite the promise and availability of AI-based tools on the market, their deployment in clinical practice is slow.
Using a mixed-methods approach, entailing a literature review and consultation activities, the study identifies a range of challenges to AI deployment in healthcare, spanning technological and data-related issues, legal and regulatory complexities, organisational and business challenges, and social and cultural barriers. It also highlights successful strategies (accelerators) employed by hospitals globally to overcome these common obstacles, offering valuable inspiration in the broader European Union (EU) context. The EU is uniquely positioned to support the safe, effective, ethical and equitable scale-up of AI deployment in healthcare, balancing the need to nurture innovation with safeguarding the fundamental rights of patients. This report presents considerations for future action and proposes a monitoring and indicators framework that could enable progress to be tracked, with a view to enabling the sustainable integration of AI into healthcare systems.