EC Library Guide on large language models and generative artificial intelligence: Selected reports
Selected think tank reports
- Artificial intelligence: Overview, recent advances, and considerations for the 118th Congress
Harris, L.A., Congressional Research Service, 2023.
A notable area of recent advancement has been in generative AI (GenAI), which refers to machine learning (ML) models trained on large volumes of data to generate content. Technological advancements in the underlying models since 2017, combined with the open availability of these tools to the public in late 2022, have led to widespread use. The underlying models for GenAI tools have been described as “general-purpose AI,” meaning they can be adapted to a wide range of downstream tasks. Such advancements, and the wide variety of applications for AI technologies, have renewed debates over appropriate uses and guardrails, including in the areas of health care, education, and national security.
In the 118th Congress, as of June 2023, at least 40 bills had been introduced that either focused on AI/ML or contained AI/ML-focused provisions, though none had been enacted. Collectively, bills in the 118th Congress address a range of topics, including federal government oversight of AI; training for federal employees; disclosure of AI use; export controls; use-specific prohibitions; and support for the use of AI in particular sectors, such as cybersecurity, weather modeling, wildfire detection, precision agriculture, and airport safety.
- Economic arguments in favour of reducing copyright protection for generative AI inputs and outputs
Martens, B., Bruegel, 2024.
Artificial intelligence (AI), like any workplace technology, changes the division of labour in an organisation and the resulting design of jobs. When used as an automation technology, AI changes the bundle of tasks that make up an occupation. In this case, implications for job quality depend on the (re)composition of those tasks.
The licensing of training inputs slows down economic growth compared to what it could be with competitive and high-quality GenAI.
- Generative artificial intelligence and data privacy: A primer
Busch, K.E., Congressional Research Service, 2023.
Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—by learning patterns from pre-existing data. It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress.
- Highlights of the 2023 executive order on artificial intelligence for Congress
Harris, L.A. and Jaikaran, C., Congressional Research Service, 2023.
On October 30, 2023, the Biden Administration released Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It establishes a government-wide effort to guide responsible artificial intelligence (AI) development and deployment through federal agency leadership, regulation of industry, and engagement with international partners.
- How Europe can make the most of AI
Meyers, Z. and Springford, J., Centre for European Reform, 2023.
AI may give Europe a chance to raise growth, especially in its lethargic services sector. Productivity levels in European services firms are lower than in the US. And European companies adopt new technology about 10-15 years later than American ones do: it could be many years before most EU firms adopt AI. In 2021, only 8 per cent of EU enterprises used the technology in any form. And the EU’s ageing population means that increasing the productivity of its workforce will become increasingly important. It often takes many years for innovations to mature and be put to use in raising productivity. There are some reasons to hope that AI will be taken up more quickly. AI requires high-end chips, cloud computing services, data centres and sufficient energy to run them – but the firms that want to use AI do not typically need to invest in new infrastructure. Regulation has an important role in making globally agreed rules enforceable. The aim of EU policy should be to hasten the adoption of AI, rather than impede it. Firms learn by doing. Only after AI is more widely adopted can we see whether it will raise productivity growth, disrupt labour markets, or pose risks that mean more regulation is needed.
Nonetheless, European governments and the EU should do more to promote its take-up by businesses. They should focus on three things: First, the EU should ensure there is vigorous competition between companies providing AI ‘foundation’ models – the basic models used to analyse language, imaging and data. Second, the EU should support AI research and development, ensure that enough skilled AI workers are available through subsidised training and plentiful immigration visas, and remove regulatory barriers to AI adoption. Third, the EU and its member-states should conduct overarching reviews of how existing regulation applies to AI, inform businesses about their responsibilities under existing legislation, and ensure regulators are ‘AI-ready’. However, AI could introduce new risks because it is so potentially powerful, and these could dissuade businesses from adopting the technology.
- Recalibrating assumptions on AI towards an evidence-based and inclusive AI policy discourse
Holland Michel, A., The Royal Institute of International Affairs, 2023.
This paper makes a bid to recalibrate the AI policy discourse. It highlights, analyses and offers counterpoints to four core assumptions of AI policy: 1) that AI is ‘intelligent’; 2) that ‘more data’ is a requisite for better AI; 3) that AI development is ‘a race’ among states; and 4) that AI itself can be ‘ethical’. It focuses on these four assumptions because they have gone particularly unchallenged in policy documentation, and because they demonstrate how real harms can result from policy built on assumptions that negate counterpoint perspectives. In challenging these assumptions, the paper offers a rubric for addressing other problematic AI assumptions. By illustrating how a more evidence-based, inclusive discourse yields better policy, it advocates for an ecosystem of policy innovation that is more structurally diverse and intellectually accommodating.
- Reconciling the AI value chain with the EU's artificial intelligence act
Engler, A.C. and Renda, A., Centre for European Policy Studies, 2022.
The EU Artificial Intelligence Act (AI Act), proposed by the European Commission in April 2021, is an ambitious and welcome attempt to develop rules for artificial intelligence, and to mitigate its risks. The current text, however, is based on a linear view of the AI value chain, in which one entity places a given AI system on the market and is made accountable for complying with the regulation whenever the system is considered ‘high risk’. In reality, the AI value chain can present itself in a wide variety of configurations. In this paper, in view of the many limitations of the Act, we propose a typology of the AI value chain featuring seven distinct scenarios, and discuss the possible treatment of each one under the AI Act. Moreover, we consider the specific case of general-purpose AI (GPAI) models and their possible inclusion in the scope of the AI Act, and offer six policy recommendations.
- Last Updated: Oct 25, 2024 3:55 PM
- URL: https://ec-europa-eu.libguides.com/llm-and-genAI