AI, Algorithms and
the Risk of Discrimination
Introduction
AI systems must be technically robust to ensure they are fit for purpose and do not produce biased results, such as false positives or false negatives, that disproportionately affect marginalised groups, including groups defined by racial or ethnic origin, sex, age and other protected characteristics.
High-risk AI systems will also need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair biases being embedded in the model, and to ensure that any such biases can be addressed through appropriate bias detection, correction and other mitigating measures.
Source: European Commission
About this Library Guide
This Library Guide has been compiled to support the work of the European Commission. It may also be of interest to students, researchers and the wider public.
The Library Guide presents a curated selection of relevant sources on the topic: EU websites, EU publications, EU law, EU research results, international publications, peer-reviewed research journals and articles, books, think tank reports, and news updates.
Use the Find-eR search box on the left to discover information sources on other topics that matter to you.
The resources listed in the EC Library Guides do not necessarily represent the positions, policies or opinions of the EU institutions and bodies.