EU member states had previously agreed to limit the use of facial-scanning technology in public places to certain law enforcement situations, a "red line" for those countries heading into negotiations with the European Parliament and the Commission.
Some centre-right lawmakers proposed exceptions that would allow biometric tracking to help find missing children or prevent terrorist attacks, but these amendments were rejected in the plenary vote.
Lawmakers also agreed to impose additional obligations on generative AI systems such as GPT-4, requiring companies like OpenAI and Google to conduct risk assessments and disclose which copyrighted material was used to train their models.
The EU takes a risk-based approach to regulation, focusing on how AI is used rather than on the technology itself: it bans some applications, such as social scoring, outright and sets standards for deployment in "high-risk" situations.
The full text of the draft AI Act was adopted on June 14, paving the way for "trilogue" negotiations between the European Parliament, member states and the European Commission.
The Commission hopes to reach agreement by the end of the year to put the AI Act into effect for companies as early as 2026. Meanwhile, some officials are pushing for a voluntary “code of conduct” for companies that would apply to the G-7 nations, along with India and Indonesia.
The EU's tightening regulation of artificial intelligence could have a major impact on a sector estimated to be worth more than $1.3 trillion over the next decade, as breaches of the bloc's rules could result in fines of up to 6% of annual turnover.
(According to Bloomberg)