
Artificial Intelligence Act: EU agrees on the Regulation of Artificial Intelligence

The EU has reached an agreement on a new regulation for the use of Artificial Intelligence, the Artificial Intelligence Act (AI Act). Will it strike a balance between regulation and the competitiveness of European companies? On the one hand, the AI Act sets clear guidelines for the use of AI; on the other, it presents numerous challenges for companies.

Citizen Trust and Competitiveness as EU Goals

Already in April 2021, long before ChatGPT made AI a hot topic, the EU Commission presented a proposal on how Artificial Intelligence should be used in Europe. With the proposed AI regulation, the EU aimed to strengthen citizens' trust in what is likely a groundbreaking new technology and to create a legal framework for its competitive use within the EU.

The negotiations on the AI Act were overtaken by the release of the chatbot ChatGPT, which uses generative AI to produce text, images, or source code in real time. At the end of 2023, the trilogue between the EU Commission, the EU Parliament, and the Council of the European Union resulted in an agreement. The final text is expected to be definitively adopted in March 2024, and the AI regulation is set to enter into force shortly thereafter. However, it will only take full effect after a transition period of two years, likely in the summer of 2026. For companies planning to deploy AI applications, this is no reason to rest on their laurels.

Risk-Based Approach in the Classification of AI Systems

The AI Act first creates a uniform framework that classifies AI systems based on risk. The law distinguishes between AI systems with unacceptable, high, low, or minimal risk. The impact can be summarized briefly: the higher the risk, the stricter the requirements for the respective AI system, up to a general ban on AI systems with an unacceptable risk.

The EU sees an unacceptable risk in all AI applications where AI is used to influence the behavior of individuals in such a way that harm is inflicted on the person or a third party. It also prohibits practices aimed at exploiting or influencing vulnerable groups (age, disability, social situation) or at using social scoring to the detriment of the individuals concerned. The use of real-time remote biometric identification systems in public spaces for law enforcement is generally prohibited, with a few specific exceptions.

The key area of regulation is high-risk AI applications. These are systems that pose a significant risk to health, safety, or fundamental rights; they are subject to strict requirements regarding transparency, data accuracy, and human oversight. High-risk systems include AI applications in the field of autonomous driving or medical technology, but a wide range of other systems also fall into this category, including AI systems in critical infrastructure, education, employment, and law enforcement.

For systems that pose only a low risk, the AI Act provides for a simplified catalog of obligations, with transparency obligations at the forefront. These are to ensure that end users know that they are using a system with AI.

AI systems with minimal risk, however, are not covered by the AI Act and can therefore be used without restrictions. The EU had in mind simple AI systems, such as automated elements of firewalls or spam filters. With the increasing spread of generative AI, though, more and more AI systems are likely to fall within the scope of the AI Act in the future.
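This risk-based logic can be expressed compactly. The following Python sketch is purely illustrative (the AI Act itself prescribes no data model or software terminology); it simply maps the four tiers described above to their headline consequences:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers distinguished by the AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOW = "low"
    MINIMAL = "minimal"

# Headline consequence per tier, as summarized in this article.
CONSEQUENCES: dict[RiskTier, str] = {
    RiskTier.UNACCEPTABLE: "generally banned (e.g. harmful manipulation, social scoring)",
    RiskTier.HIGH: "strict requirements: transparency, data quality, human oversight",
    RiskTier.LOW: "simplified catalog of obligations, mainly transparency",
    RiskTier.MINIMAL: "not covered by the AI Act, no restrictions",
}

def consequence(tier: RiskTier) -> str:
    """Return the headline regulatory consequence for a given risk tier."""
    return CONSEQUENCES[tier]

print(consequence(RiskTier.HIGH))
```

The point of the sketch is the monotonic rule the article states: the higher the tier, the heavier the obligations.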

High Requirements for High-Risk Systems

Companies that develop, distribute, or want to use high-risk AI systems must comply with a multitude of requirements. In addition to the general transparency obligations introduced with the AI Act for the regulated classes, numerous other obligations must be implemented. First, a comprehensive risk analysis must be carried out and, comparable to a data protection impact assessment, measures must be taken to minimize the risks. Furthermore, it must be ensured that the AI used in the system has been trained only with reliable and high-quality data.

This is to avoid bias and inaccurate results. The systems must also be particularly secure against manipulative interventions, such as cyber attacks. In addition, providers must ensure that human control of the AI is possible: a human must be able to intervene correctively in the activity of the high-risk system or stop it altogether. A typical example is driver intervention in a semi-autonomous vehicle.
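How such an override is implemented is left to the provider; the AI Act does not prescribe any technical design. As a purely illustrative sketch, a stop mechanism controlled by a human operator could look like this:

```python
import threading

class HumanOversightGuard:
    """Illustrative human-in-the-loop stop mechanism, not an AI Act construct."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self) -> None:
        # Invoked by the human operator, e.g. a driver taking over control.
        self._halted.set()

    def run_step(self, action) -> None:
        # Execute one system action unless the operator has intervened.
        if self._halted.is_set():
            raise RuntimeError("system halted by human operator")
        action()

guard = HumanOversightGuard()
guard.run_step(lambda: print("autonomous step executed"))
guard.halt()  # after this call, run_step refuses to act
```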

If the system processes personal data, additional data protection requirements must be observed. Providers must also keep detailed records of the development, training, deployment, and use of high-risk AI systems to ensure traceability and accountability.
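What such records contain in detail is for providers to design. As a hypothetical sketch, an append-only audit trail covering the phases named above might be structured as follows (all names and entries are invented for illustration):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in an illustrative audit trail for a high-risk AI system."""
    phase: str  # e.g. "development", "training", "deployment", "use"
    event: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[AuditRecord] = []
audit_log.append(AuditRecord("training", "dataset v3 approved after quality review"))
audit_log.append(AuditRecord("deployment", "model v1.2 released to production"))

for record in audit_log:
    print(record.timestamp.isoformat(), record.phase, record.event)
```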

With these high requirements, the EU intends to create a framework that harnesses the benefits of AI while simultaneously minimizing risks and protecting fundamental values and rights. Companies that develop, offer, or use such systems must fully comply with the requirements. Otherwise, they face fines of up to 35 million euros or 7% of the global annual group turnover, whichever amount is higher.
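For a sense of scale, a one-line calculation (the turnover figure is hypothetical):

```python
def max_fine_eur(group_turnover_eur: float) -> float:
    """Upper fine limit for the most serious violations: 35 million euros
    or 7% of global annual group turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * group_turnover_eur)

# Hypothetical example: for a group with 1 billion euros of annual turnover,
# 7% (70 million euros) exceeds the fixed 35-million-euro threshold.
print(f"{max_fine_eur(1_000_000_000):,.0f} EUR")  # 70,000,000 EUR
```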

Risk-Independent Requirements for "General Purpose AI"

Since language models like ChatGPT only spread after the first draft of the AI Act, the legislator felt compelled to impose additional regulations on AI systems that have a broad, general area of application or purpose. These regulations apply regardless of the risk classification described above, including to systems with minimal risk.

All providers of "General Purpose AI" must implement comprehensive transparency obligations. This is especially true concerning the use of such language models for generating or manipulating texts and images.

Additional requirements apply if the systems are particularly powerful and may pose systemic risks. Their providers will have to comply with further obligations, such as monitoring serious incidents or evaluating their models. The rights of authors are also strengthened, making it easier for them to object to the training on, or processing of, their copyrighted works.

Since the currently most widely used large language models, such as ChatGPT or Gemini, do not originate from providers in the EU, the EU also had these providers in mind with the AI Act: it imposes numerous obligations not only on German providers but on all those who distribute their products in the EU or use data from the EU.

Do Not Sleep on the Implementation of Requirements

Even though most of the AI Act's regulations will not fully take effect until mid-2026, companies that want to integrate Artificial Intelligence into their products and services are well advised to familiarize themselves with the requirements of the AI Act early on. In particular, the obligations that already apply to the training and development of such systems should be implemented promptly to avoid a rude awakening once the responsible supervisory authorities begin monitoring and verifying compliance with the AI Act.

Note: In early June 2024, face-to-face events on the topic of "Artificial Intelligence" in small and medium-sized enterprises will take place in Hamburg, Cologne, and Stuttgart as part of the "Focus Law" event series. Sven Körner, author, founder, and AI researcher, will give a keynote speech. Subsequently, the IT law experts from RSM Ebner Stolz will present the legal framework in the context of implementing innovative AI projects. Further information on these events and registration options will be available here shortly.
