
Deep Dive: How the EU can succeed in clever AI regulation


Since mid-June, the EU Parliament has been on course toward the AI Act, a law regulating artificial intelligence (AI). Particularly anticipated was how new services such as OpenAI's ChatGPT would be classified compared with the European Commission's initial draft.


“The basic idea back then was that classification systems in particular should be regulated,” says Sandra Wachter of the Oxford Internet Institute, because such systems are often used to support decision-making. “Should I hire this person, should I give them a loan, should I let them go to university? Then came ChatGPT, and suddenly the world looked completely different.”

The lawyer Sandra Wachter has worked for many years on AI ethics, explainable AI and the question of how artificial intelligence should be regulated. In the new podcast episode, she talks with TR editor Wolfgang Stieler about, among other things, the bias problem of large AI models and the ambiguities that remain in the draft of the AI Act.

Wachter welcomes the fact that including generative AI in the law has not meant abandoning the risk-based approach, and that ChatGPT and its peers are not classified as high-risk technology from the outset but are still to be strictly regulated. To ensure transparency, training data should be largely open; that way, the causes of possible distortions, i.e. bias or prejudice, should become visible and correctable.



She is critical, on the other hand, of the fact that the manufacturers of such systems certify via a “self-assessment” that their products conform to the AI Act. Even vaguer, she notes, is, for example, the requirement “that you should make an effort, or put in ‘best effort’, to make sure that you deal with the bias,” criticizes Wachter. She also gives an insight into what the law means for users and businesses, and how much influence lobbyists have on the regulation.


More on this in the full episode, available as an audio stream (RSS feed):

(wst)
