
Debate about AI: Cheap polemics don’t help


First of all: I do not want to unconditionally defend the open letter in which more than 1,000 signatories are now demanding a mandatory pause in the development of large AI models.

Yes, I also know where the “Future of Life Institute” comes from. The “longtermism” of this institution, its “concern for the long-term survival of mankind”, looks thoroughly sympathetic at first glance. On closer inspection, however, it is an ideological-philosophical chamber of horrors in which the principle of “earning to give” (get rich as quickly as possible, for example with cryptocurrency, in order to be able to donate a lot afterwards) is still one of the comparatively sympathetic ideas – some of these people even end up at eugenics.

After studying physics, Wolfgang Stieler switched to journalism in 1998. He worked at c’t until 2005, after which he became editor of Technology Review. There he oversees a wide range of topics from artificial intelligence and robotics to network policy and questions of future energy supply.

Second caveat: sure, the proposal in the paper is technocratic, elitist, and undemocratic. Companies and researchers are supposed to develop and exchange procedures and rules for the safe development and application of this technology among themselves. That is essentially voluntary self-regulation – a principle that has failed often enough.

And, last but not least: yes, I also know that it is silly to warn about the hypothetical dangers of a superhuman AI that may one day exist. The existing systems are already causing problems, which we have discussed in detail from various points of view.

But has anyone actually read it? It contains sentences that I would not have believed these Silicon Valley technocrats capable of. There is talk of “capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm”.

Supervisory authorities, auditing and certification, product responsibility and liability, mandatory labeling of machine-generated content – all of this is also part of the EU’s draft AI Act. However, the law regulating AI applications is being held up by a bitter dispute over whether large language models should be classified across the board as high-risk technology. Critics of this idea fear it would stifle “innovation” in Europe – an argument pushed, among others, by the lobby of American tech companies. (The so-called trilogue, the negotiations between the EU Commission, the European Council and the EU Parliament, is scheduled to begin in April – and will probably initially end in deadlock.)

But what really gets on my nerves is the level of polemics, personal attacks, cheap rhetorical tricks and contempt that is poisoning an important debate – and thereby making it increasingly impossible. When Emily Bender, for example, dismisses the preprint “Sparks of Artificial General Intelligence” as mere “fan fiction” that has not been sufficiently independently reviewed anyway, she is not just denying, almost in passing, the scientific relevance of a preprint platform like arXiv. She is also indirectly demanding a kind of attitude check: anyone who has not demonstrated through previous publications that they are highly critical of the development and use of large language models is simply not to be taken seriously.

That may be appropriate for the League for the Defense of Offended Computational Linguists, and while the occasional wrestling-level slugfest is hilarious, it gets no one anywhere. On the contrary: throwing this paper into the argumentative dustbin with a cheap rhetorical trick does real harm.

Sure, there is plenty to criticize about this study. Among other things, Sébastien Bubeck and his colleagues themselves complain that they received no information about GPT-4’s training data, so they could only guess which answers the model reproduced from that training data and which it generated anew. But the psychological tests the researchers ran on GPT-4, for example, can be read as more than naïve AI enthusiasm declaring a “theory of mind” – the ability of the AI to put itself in the shoes of a counterpart. Even from a skeptical point of view, these tests reveal the possibilities for manipulation that open up when an AI can assign emotions to a human interlocutor based on their behavior. Whether the machine actually understands those emotions is beside the point.

In short: self-commitment and a moratorium are useless. What we need is a debate about the regulation of generative AI. It must be conducted now, and it must produce results quickly, because when it comes to AI applications we are currently in a kind of Wild West. And that has to stop. As quickly as possible.


(jl)
