Deep Dive: For a future after ChatGPT and Co.

by admin

General artificial intelligence, i.e. an AI with human-like or even superhuman abilities, has long been considered completely unrealistic, or at least a very, very ambitious goal, depending on who you asked. But now many are wondering: if ChatGPT can do so much, what comes next? Maybe really something like general artificial intelligence?

In the latest episode of the Deep Dive podcast, TR editor Wolfgang Stieler discusses with Katharina Zweig how intelligent or stupid the large language models that already exist really are, how we should use them now and in the future, and what we should better keep our hands off of.

Katharina Zweig is a professor at TU Kaiserslautern-Landau, where she also heads the Algorithmic Accountability Lab. She is not only a scientist but also a much sought-after speaker and the author of popular science books on AI.

Here you will find an overview of our three podcast formats: the weekly news podcast “Weekly” and the monthly podcasts “Unscripted” and “Deep Dive”.

For something to be called intelligent, she says, citing the philosopher Brian Cantwell Smith, it must be existentially dependent on its worldview: “I’ve just been riding my bike, and I constantly have to assess whether there are cars behind me, whether they can see me, whether I can still cross the street. My existence depends on being able to correctly assess the speeds of cars.” She does not see this ability in a language model: “The technological basis does not give a language model any interest in the correctness of its worldview,” says Zweig. “That is why hallucinations occur: there are no consequences, for example.”

In order to use ChatGPT and its possibly even smarter successors as responsibly as possible, we would have to rethink our relationship with AI. “We don’t yet have a good word for what the language models can currently do, because of course a kind of understanding is involved,” says Zweig. “I think we have to invent new words for the kind of supposed understanding that the language model reflects back to us, words that make clear the difference from the understanding of a child or an adult.” Only if we have a good grasp of what this technology can do, and of where it is less reliable than we are or works differently, “will we work very well together with it.”

The entire episode – as an audio stream (RSS feed):

(wst)
