AI: “Even smaller models can pose a danger”

by admin

The EU’s new AI law classifies risks, but experts warn of security threats it leaves unaddressed.

In Brussels, the EU member states agreed on the world’s first law to regulate artificial intelligence. NurPhoto/Getty Images

After long negotiations, the EU states, the EU Commission and the EU Parliament have reached a fundamental agreement on the legal regulation of artificial intelligence (AI). The so-called “AI Act” is the world’s first AI law. This framework will protect the security and fundamental rights of people and companies, commented EU Commission President Ursula von der Leyen.

The EU Commission had already proposed such a law in April 2021, but there were many points of contention arising from the differing interests of individual EU states. The law has been negotiated for years, and even now some important details still need to be clarified before the European “AI Act” can actually be submitted for adoption.

In particular, an agreement was reached last week on the most controversial issue: the biometric surveillance of people in public spaces using AI. The draft law proposes banning such systems for automated facial recognition. However, the EU states were able to push through exceptions to this ban – for example, where national security is concerned. There are also exceptions for military AI applications.

Another point of contention was whether the powerful foundation models, which can subsequently be used for a wide range of applications, need to be regulated at the source – or whether it is enough to regulate only the specific applications built on top of them. In Europe, only a few companies develop their own foundation models.

It is not surprising that the very countries in which these companies are based are skeptical about regulating foundation models. Regulation stricter than that faced by non-European competitors could ultimately harm the competitiveness of these companies in Germany, France and Italy.

There is agreement that AI applications that pose high risks to the safety of individuals and society should be prevented. But how can it be determined objectively – and in a way that holds up in court – whether a specific AI system is a high-risk technology and should therefore be regulated?

Politics is, as is well known, the art of compromise, and so the “AI Act” will probably boil down to simply using the amount of computation used to train an AI as the criterion for the potential risk posed by a foundation model.

Under this compromise, foundation models would be regulated – but only once more than 10 to the power of 26 computing operations have been carried out for their training. This number is so large that, for the time being, no AI company in Europe will produce a model of that size. For comparison: not even the ChatGPT language model from the US company OpenAI, which is widely known to be powerful, would fall under a regulatory obligation under this definition. For that, the bar would have to be lowered to 10 to the power of 24 operations.
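To get a feel for what such a compute threshold means in practice, here is a minimal sketch (not part of the law itself) that estimates a model’s training compute using the widely cited rule of thumb of roughly 6 × parameters × training tokens and compares it with the two figures mentioned above. The model size and token count used here are purely hypothetical assumptions for illustration.

def training_flops(parameters: float, tokens: float) -> float:
    # Common rule-of-thumb estimate: total training compute is roughly 6 * N * D
    return 6 * parameters * tokens

REGULATION_THRESHOLD = 1e26  # compute threshold reportedly agreed for foundation models
LOWER_BAR = 1e24             # the stricter bar mentioned for comparison

# Hypothetical model: 1 trillion parameters trained on 10 trillion tokens (assumed figures)
flops = training_flops(1e12, 1e13)
print(f"Estimated training compute: {flops:.2e} operations")
print("Exceeds the 1e26 threshold:", flops > REGULATION_THRESHOLD)
print("Would exceed a 1e24 threshold:", flops > LOWER_BAR)

With these assumed figures the estimate lands at about 6 × 10^25 operations – below the agreed threshold, but above the stricter bar mentioned for comparison.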

Professor Sandra Wachter of the Oxford Internet Institute at the University of Oxford is skeptical about this type of regulation. “Even smaller AI models can pose a danger,” says the scientist. For example, she considers AI systems for recognizing human emotions to be a high-risk technology.

Much smaller AI models are sufficient for this application, yet they would not be regulated. As another example of a high-risk application, Wachter cites so-called predictive policing – that is, directing police operations on the basis of AI-generated forecasts.

So would it be better to check each individual AI application against certain standards? To some extent, that is what is planned. In the future, it must be made transparent whether copyrighted material was used to train an AI. And the quality of the training data must meet certain requirements in order to prevent discrimination when AI is used.

There should also be a labeling requirement for texts, images and videos created by artificial intelligence. “The problem is that there is usually not enough time to adequately test the AI system,” says Professor Philipp Hacker from the European University Viadrina Frankfurt (Oder). “We definitely need more security research.”

Hacker sees a particularly high risk of misuse in AI-supported information gathering for criminal and terrorist purposes. For example, AI systems should under no circumstances tell a requester how to build a bomb or a bioweapon.

To prevent this, certain safeguards must be built into AI systems. And that, for Hacker, is a central argument against open-source AI systems: if the source code of an artificial intelligence is publicly available, those safeguards can also be removed by anyone so inclined.

The EU is a global pioneer with its law regulating AI; most other states have so far only issued regulations and decrees. Proponents of the European “AI Act” hope that the law could become a blueprint for countries that find the rules of the USA too loose or those of China too restrictive.
