Lawmakers will continue to work out the detailed regulations in the coming weeks, with a view to finalizing the process early next year so the rules can apply from 2026.
Until then, companies are encouraged to sign up to the voluntary AI Pact to fulfil the key obligations of the rules.
Below are the main points of the agreement reached by the EU.
High-risk systems
So-called high-risk AI systems – those deemed likely to cause significant harm to health, safety, fundamental rights, the environment, democracy, elections and the rule of law – will have to comply with a range of requirements, such as undergoing fundamental rights impact assessments and meeting obligations to gain access to the EU market.
Low-risk systems, meanwhile, would be subject to lighter transparency obligations, such as labeling AI-generated content so users can make informed decisions about whether to use it.
AI in law enforcement
Law enforcement agencies are only permitted to use real-time remote biometric identification systems in public spaces to identify victims of kidnapping, human trafficking and sexual exploitation, and to prevent specific and imminent terrorist threats.
Authorities will also be allowed to use AI technology to track down people suspected of terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization and environmental crimes.
General-purpose AI (GPAI) systems
GPAI systems and the models underlying them will be subject to transparency requirements such as producing technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used to train the models.
High-impact GPAI models deemed to pose systemic risk will be required to conduct model evaluations, assess and mitigate those risks, carry out adversarial testing, report serious incidents to the European Commission, ensure cybersecurity and report on energy consumption.
Until harmonised EU standards are published, GPAI models posing systemic risk may rely on codes of practice to comply with the regulation.
Banned AI systems
Prohibited practices and systems include: Biometric categorization systems that use sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation, and race;
Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
Emotion recognition in the workplace and educational settings;
Social scoring based on social behavior or personal characteristics;
AI systems that manipulate human behavior to circumvent people's free will;
AI used to exploit people's vulnerabilities due to their age, disability, or economic or social circumstances.
Sanctions
Depending on the violation and the size of the company involved, fines will start at €7.5 million ($8 million) or 1.5% of annual global turnover, rising to €35 million or 7% of global turnover.
(According to Reuters)