The European Commission set out guidelines on Friday to help AI models that it has determined carry systemic risks, and which therefore face tougher obligations to mitigate potential threats, comply with the European Union's artificial intelligence regulation (AI Act).
The move aims to counter criticism from some companies about the AI Act and its regulatory burden, while providing more clarity to businesses, which face fines ranging from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover for violations.
The AI Act, which became law last year, will apply from Aug. 2 to AI models with systemic risks and foundation models such as those made by Google, OpenAI, Meta Platforms, Anthropic and Mistral. Companies have until Aug. 2 next year to comply with the legislation.
The Commission defines AI models with systemic risk as those with very advanced computing capabilities that could have a significant impact on public health, safety, fundamental rights or society.
The first group of models will have to carry out model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the Commission and ensure adequate cybersecurity protection against theft and misuse.
General-purpose AI (GPAI) or foundation models will be subject to transparency requirements such as drawing up technical documentation, adopting copyright policies and providing detailed summaries about the content used for algorithm training.
“With today’s guidelines, the Commission supports the smooth and effective application of the AI Act,” EU tech chief Henna Virkkunen said in a statement.
($1 = 0.8597 euros)
(Reporting by Foo Yun Chee; editing by Elaine Hardcastle)




































