The AI Act: Background and Development
Author: Alain Álvarez
Date: 28-12-2023
Background
In April 2021, the European Commission proposed the groundbreaking Artificial Intelligence Act (AI Act) with the primary goal of establishing a unified regulatory framework for artificial intelligence (AI) across EU member states. This pivotal legislation, while excluding military applications, casts a wide net, covering a broad range of sectors and AI technologies.
At the heart of the AI Act lies a distinctive approach to classification and regulation based on the risk levels associated with different AI applications. The regulatory framework delineates three main categories: banned practices, high-risk systems, and other AI systems. Each category is strategically designed to mitigate potential risks and strike a balance between fostering innovation and safeguarding societal interests.
Banned practices are strictly prohibited: the AI Act makes clear that applications deemed ethically or socially unacceptable will not be tolerated within the EU's AI landscape. At the other end of the spectrum, high-risk systems face comprehensive oversight and stringent requirements to ensure responsible development, deployment, and use. This tiered approach allows for nuanced scrutiny tailored to the potential impact and sensitivity of each AI application.
Funding, research and development
The EU's strategic approach encompasses not only fostering the development of AI but also cultivating a vibrant innovation hub, ensuring societal benefits, and establishing leadership in high-impact sectors.
At the core of this initiative is the collaboration between the Commission and Member States, channeling joint policies and investments into an annual commitment of €1 billion to AI through the Horizon Europe and Digital Europe programs. The ambitious goal is to raise combined public and private investment in AI to €20 billion per year over the course of the digital decade, showcasing a significant commitment to advancing AI capabilities.
The Recovery and Resilience Facility contributes significantly to these aspirations, earmarking €134 billion for digital initiatives. This financial injection is poised to help make Europe a global leader in trustworthy AI, laying the groundwork for innovation and reliability in AI technologies.
Recognizing the pivotal role of data, the EU places emphasis on access to high-quality data. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act, the Digital Markets Act, and the Data Governance Act lay the foundation for a robust infrastructure supporting the development of high-performance and reliable AI systems.
In tandem with these initiatives is the EU's dedication to building trustworthy AI. Through three interconnected legal proposals, the EU addresses fundamental rights and safety risks and adapts liability rules to the digital age. This includes a revision of sectoral safety legislation, establishing a comprehensive framework to manage and mitigate the risks associated with AI use.
The proposed legal framework, aptly named the AI Act, adopts a clear and understandable approach based on four distinct levels of risk: minimal risk, high risk, unacceptable risk, and specific transparency risk. Notably, the framework introduces dedicated rules for general-purpose AI models, providing developers, deployers, and users with the clarity they need.
Envisioning a future where Europe sets the global gold standard for AI regulations, the EU's comprehensive strategy promotes transparency and innovation within the AI landscape. As we navigate the Digital Decade, the EU is forging a resilient path, ensuring that people and businesses can harness the benefits of AI within a secure and protected environment.
Important Achievements towards AI Regulation in Europe
This timeline outlines key developments in the regulation and governance of Artificial Intelligence (AI) in the European Union:
- March 2018: Call for the establishment of the European AI Alliance and the high-level expert group on AI.
- April 2018: Introduction of the "Artificial Intelligence for Europe" initiative, including the Declaration of Cooperation on AI.
- June 2018: Launch of the European AI Alliance and the high-level expert group on AI.
- December 2018: Release of the coordinated plan on AI and the initiation of stakeholder consultation on draft ethics guidelines for trustworthy AI.
- April 2019: European Commission Communication on "Building trust in human-centric artificial intelligence" and the release of ethics guidelines for trustworthy AI.
- June 2019: First European AI Alliance Assembly and the presentation of policy and investment recommendations by the high-level expert group on AI.
- December 2019: Piloting of the assessment list of trustworthy AI by the High-Level Expert Group on AI.
- February 2020: European Commission White Paper on AI, emphasizing a European approach to excellence and trust, accompanied by public consultation.
- July 2020: Inception impact assessment on ethical and legal requirements for AI and the final assessment list for trustworthy AI (ALTAI) by the High-Level Expert Group on AI.
- October 2020: 2nd European AI Alliance Assembly.
- April 2021: European Commission's Communication on fostering a European approach to AI, along with the proposal for a regulation on harmonized rules for AI and an updated coordinated plan on AI.
- June 2021: Public consultation by the European Commission on civil liability in the context of the digital age and AI.
- November 2021: Council of the EU presents the Presidency compromise text on the AI Act, and a High-Level Conference on AI is held.
- December 2021: The Committee of the Regions and the European Central Bank deliver their opinions on the AI Act.
- June 2022: Spain launches the first AI regulatory sandbox, advancing work on AI regulation.
- September 2022: Proposal for an AI liability directive.
- December 2022: General approach of the Council on the AI Act.
- June 2023: European Parliament establishes its negotiating position on the AI Act.
- December 2023: Political agreement is reached by co-legislators on the AI Act.