
Artificial intelligence: EU regulation plans move forward

Artificial Intelligence (AI) has been on everyone's lips at least since ChatGPT was made available to the public in November 2022. But AI is much more than the chatbot developed by OpenAI. AI systems can be used in the most diverse areas of everyday (business) life. The resulting opportunities and risks have now prompted the EU to create an initial framework for the legal regulation of the technology.

The planned legal framework for artificial intelligence is intended to set uniform standards for the protection of security and fundamental rights throughout Europe. At the same time, the EU wants to use it to promote acceptance of and investment in AI.


Proposal of the EU Commission for the regulation of AI

In April 2021, the EU Commission presented a first draft for the regulation of artificial intelligence in the EU. The regulation is to apply not only to providers, i.e. the developers of AI systems, but also to their users, provided they use AI for professional activities. This means that the entire corporate use of AI is covered, especially if the AI is integrated into a company's own products.

The EU Commission's proposal follows a risk-based approach, comparable to the GDPR. Depending on the intended use, the use of AI is divided into four different risk categories: unacceptable risk, high risk, limited risk and minimal risk.

For example, AI systems that perform biometric identification or categorization, or that are used in law enforcement or critical infrastructure such as transportation, are to be classified as high-risk AI. In contrast, the free use of AI-powered spam filters will be classified as low risk.
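To make the four-tier logic tangible, here is a minimal sketch in Python. The category names follow the draft regulation, but the mapping of example use cases and the `classify_use_case` helper are purely illustrative assumptions, not part of the legal text:

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers of the draft EU AI Regulation."""
    UNACCEPTABLE = "unacceptable risk"  # use of AI prohibited
    HIGH = "high risk"                  # heavily regulated
    LIMITED = "limited risk"            # transparency requirements
    MINIMAL = "minimal risk"            # no additional obligations

# Hypothetical mapping of intended uses to risk tiers, following the
# examples named in the Commission's draft and in the text above.
USE_CASE_RISK = {
    "social scoring": RiskCategory.UNACCEPTABLE,
    "biometric identification": RiskCategory.HIGH,
    "critical infrastructure": RiskCategory.HIGH,
    "chatbot": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskCategory:
    """Look up the risk tier for an intended use (illustrative only)."""
    return USE_CASE_RISK.get(use_case, RiskCategory.MINIMAL)

print(classify_use_case("spam filter"))     # RiskCategory.MINIMAL
print(classify_use_case("social scoring"))  # RiskCategory.UNACCEPTABLE
```

The point of the sketch is that the tier depends on the intended use of the system, not on the underlying technology.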

Risk groups

Unacceptable Risk ➤ Use of AI is prohibited

AI systems that pose a threat to people will be prohibited.

They in­clude:

  • Cognitive behavioral manipulation of individuals or certain vulnerable groups
  • Social scoring: classifying people based on behavior, socioeconomic status and personal characteristics
  • Real-time remote biometric identification systems

High-risk AI systems ➤ The use of AI is continually being evaluated

AI systems posing a high risk to the health and safety or fundamental rights of natural persons.

  • AI systems in products covered by EU product safety regulations
  • AI in eight specific areas that must be registered in an EU database:
    • Biometric identification and categorization of natural persons
    • Management and operation of critical infrastructure
    • Education and training
    • Employment, workforce management and access to self-employment
    • Access to and use of essential private and public services and benefits
    • Law enforcement
    • Management of migration, asylum and border control
    • Assisting in the interpretation and application of laws

These AI systems must be assessed and verified before they are placed on the market and throughout their lifecycle.

Generative AI ➤ Additional transparency requirements

Generative AI that generates content based on requests and specifications, such as ChatGPT, must meet additional transparency requirements, such as disclosing that the content was generated by AI.
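What such a disclosure could look like in practice is sketched below. The `generate` stand-in and the wording of the notice are hypothetical assumptions; the draft prescribes the disclosure obligation, not a specific format:

```python
def generate(prompt: str) -> str:
    """Stand-in for a call to a real generative model (hypothetical)."""
    return f"Generated answer to: {prompt}"

AI_DISCLOSURE = "Notice: this content was generated by an AI system."

def generate_with_disclosure(prompt: str) -> str:
    """Attach the transparency notice to every generated response."""
    return f"{generate(prompt)}\n\n{AI_DISCLOSURE}"

print(generate_with_disclosure("Summarize the draft AI Regulation."))
```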

Limited risk ➤ Low transparency requirements

Limited-risk AI systems are subject to minimal transparency requirements intended to allow users to make informed decisions.

Different obligations and requirements should apply depending on the categorization of the AI system in question. Systems with unacceptable risk, i.e. those that contradict the EU's ethical principles, are to be banned. According to the EU Commission, this should apply, for example, to social scoring systems.

High-risk AI is to be the most heavily regulated. According to the Commission's draft, providers and users of such systems are to be subject to the following obligations, among others (a sketch of how the recordkeeping duty might look in practice follows the list):

  • Ensuring high data quality
  • Providing information to end users
  • Human oversight measures to minimize risk
  • Recordkeeping and documentation obligations
  • Implementing risk assessment and mitigation systems
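How the recordkeeping and documentation obligation might translate into practice is sketched below; the JSON-lines format, the `log_decision` helper and the field names are illustrative assumptions, not requirements of the draft:

```python
import json
from datetime import datetime, timezone

LOG_FILE = "ai_decision_log.jsonl"  # hypothetical log location

def log_decision(system_id: str, input_summary: str,
                 output_summary: str, human_reviewer: str = "") -> dict:
    """Append one decision of a high-risk AI system to a JSON-lines
    log so that it stays traceable over the system's lifecycle."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input": input_summary,
        "output": output_summary,
        "human_reviewer": human_reviewer,  # human oversight measure
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision("credit-scoring-v2", "applicant profile #4711",
             "score: 0.73", human_reviewer="j.doe")
```

An append-only log of this kind would also support the lifecycle assessments mentioned above, since each recorded decision can be re-examined later.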

Providers and commercial users of limited-risk AI systems, on the other hand, are primarily required to comply with certain transparency requirements.

According to the EU Commission, the vast majority of AI systems currently deployed in the EU fall into the "low risk" category. Such systems can be developed and deployed without additional legal obligations.

Note: In addition to the AI Regulation, the EU Commission has presented a draft AI Liability Directive, which is intended to regulate liability for damage caused by AI systems.

Negotiating position of the EU Parliament

On June 14, 2023, the EU Parliament published its final position on the Commission's draft, making some further changes to the AI Regulation and introducing it into the legislative process.

Among other things, the already broad definition of AI systems was broadened again. Accordingly, AI is now defined as "a machine-based system that is designed to operate with varying degrees of autonomy and that can generate results, such as predictions, recommendations, or decisions, for explicit or implicit goals that affect the physical or virtual environment".

Particularly noteworthy is the EU Parliament's addition of so-called generative AI, which includes the ChatGPT tool, to the regulation. In addition to transparency obligations, providers of such models are to ensure that the system does not produce illegal content and must publish detailed summaries of the copyrighted data they have used for training purposes.

To accompany this, the EU Parliament wants to reduce the level of fines for violations of the rules, with only a few exceptions. In addition, exemptions for research activities and AI components made available under open-source licenses are intended to encourage AI innovation.

Further procedural steps

With the EU Parliament's position in place, the final negotiations in the trilogue procedure can now begin. As part of the coordination between the EU Parliament, the Council of Ministers and the EU Commission, the final draft of the AI Regulation is to be drawn up and agreed upon. This process is expected to be completed by the end of 2023. If successful, the regulation would enter into force this year, and the majority of the provisions would have to be implemented by the companies concerned within a period of 24 months.

Note: Due to their design as a regulation, the rules are directly applicable. Prior transposition into national law is not required.
