
Scientist explains dangers: “AI could stealthily take over”


Philosopher of science Simon Friederich explains in an interview what threats he sees in artificial intelligence (AI) and how humanity could best protect itself from an AI takeover. Friederich is Associate Professor of Philosophy of Science and Academic Head of the Humanities Department at the University of Groningen, as well as an external member of the Munich Center for Mathematical Philosophy.

Mr. Friederich, you signed a statement warning of the dangers of artificial intelligence. Prominent figures such as OpenAI CEO Sam Altman and AI pioneer Geoffrey Hinton have done so as well. What was your intention behind it?

Simon Friederich: I am not an expert in machine learning, but I have a doctorate in physics and philosophy and teach philosophy of science in Groningen. There I also give a course on major risks to humanity, and in that context I have engaged intensively with artificial intelligence. And I see a big risk there, which is why I signed.

What kind of risk do you see?

I agree with the statement that artificial intelligence carries risks comparable to nuclear weapons or pandemics, and may even lead to human extinction. That sounds like science fiction, and measured against today's systems it may seem premature. But it's better to talk about it now than when it's too late.


The end of the world has been invoked many times before. Why now through artificial intelligence as well?

There are two big problems with AI: the systems are getting more powerful, and we are giving them more and more control. This could lead to a creeping, perhaps at some point sudden, AI takeover. Humans would then lead only a shadowy existence or be "eliminated" entirely. The other concern is that AI will lead to a concentration of power. Individual actors, that is, governments and companies, would gain extreme power because humans are becoming superfluous as workers, especially for cognitive tasks.


So you're saying human skills could atrophy if people outsource too much to AI?

Yes, that's right. On a small scale, I can already see this in my students, whose motivation to write and argue sometimes suffers. But that is understandable when I see how acceptable the output of language models already is. And if, as a student, I ask myself what my cognitive skills will later be needed for on the job market, I can understand that reaction, at least at first glance.

Which threats do you consider most realistic, and which might be science fiction?

It's not about robots suddenly gaining consciousness, as in the movies. It is more about the fact that an AI could, for example, accumulate resources such as computing power or energy in order to achieve certain goals, and that we could stand in its way. We don't know what an AI takeover would look like: maybe it would kill us, maybe it would be a creeping takeover over several centuries. The concern is that we won't know until it's too late.

Which AI specifically caused you to worry?

I had read about developments like the ones we are currently experiencing long ago, for example in Nick Bostrom's book "Superintelligence". But back then, in 2014, I thought they were very far away, maybe centuries. When GPT-3 came out, I was very surprised at what it could do. With version 4 there was another giant leap. At the end of last year, GPT-3 couldn't solve my logic problems; now GPT-4 achieves the highest mark in my philosophy of science exam. That's impressive.


And how do you adapt to this in teaching?

In very different ways. Many of my colleagues carry on as before. I am concerned that we will lose parts of our cognitive abilities if we outsource everything. That's why I try to keep phones out of the seminar room as far as possible and use the technology in a targeted way at specific moments.

Many of the signatories are now calling for regulation of AI, or at least a code of ethics. But can there be good regulation at all when individual actors can always break ranks, so that the result is a rat race?

I believe that the signatories have extremely different ideas about what constitutes good and bad regulation. There are likely to be huge differences between scientists and entrepreneurs alone.

Why would a number of entrepreneurs sign an indirect call for regulation? Do they want to destroy their own business?

No, certainly not. I think the motivations are very different, and sometimes they may have been extremely far apart. The founders of OpenAI, for example, were motivated not so much by the potential of the technology as by its risks. Their reasoning seems to have been: "Generally applicable AI, artificial general intelligence, can be incredibly dangerous. It's better if responsible people like us build it." A bit like the atomic bomb, which the US wanted to develop before Hitler had it.

In your opinion, where should good regulation start?

Personally, I'm concerned that things are moving too quickly and that we're not ready for systems that may cognitively overtake us in a few years. But whether the forthcoming EU regulation or a moratorium is the right solution at all, I don't know …

There are also dissenting voices on the statement, for example from Meta's AI chief Yann LeCun, who considers the criticism premature. Current AI, he argues, is not even smarter than a dog, and as long as that is the case, there is no need to talk about an apocalypse. What do you make of that?


Yann LeCun, along with Geoffrey Hinton and Yoshua Bengio, won the 2018 Turing Award, something like the Nobel Prize of computer science, for his AI research. Interestingly, he is the only one of the three who has not signed. And beyond him, there are enough serious voices who consider now the right time.

Another criticism revolves around the question of how great the potential of artificial intelligence actually is. Some critics say that AI is at best a parrot that reproduces what it has been fed. That would speak against the progress narrative. They also point out that this leads to the stigmatization of minorities, because historical training data records, for example, poorer qualifications for Black people. What do you think of this criticism?

I think the concern about discrimination is absolutely justified. There are plenty of examples of this. The childcare benefits affair here in the Netherlands was particularly dramatic, with tragic consequences for the victims.

Do you also see opportunities in AI?

My basic expectation of technological progress is positive. So far, technological developments have mostly made the world a better place, whether through the industrial revolution, the Internet, and so on. And of course, if we can now automate processes that many people find tedious and thereby increase productivity, that's great. So I also see huge opportunities. However, we should be careful not to cede every "cognitive niche" on this planet to AI.

Jannik Tillar spoke to Simon Friederich.

This interview first appeared on capital.de
