RightsCon: It’s Time to Speak About the Real AI Risks​

by admin

At the beginning of June, RightsCon, the world's largest digital rights conference, took place. After several years of virtual-only discussions, leading internet ethicists, activists, and policymakers met again in person in Costa Rica.

Not surprisingly, everyone was talking about artificial intelligence (AI) and the recent rush to large language models. Ahead of the conference, the United Nations issued a statement urging participants to focus on AI oversight and transparency. What was surprising, however, was how different the discussions of generative AI's risks at RightsCon were from the warnings coming from the big Silicon Valley voices in the news.

In recent weeks, tech luminaries like OpenAI CEO Sam Altman, ex-Googler Geoff Hinton, top AI researcher Yoshua Bengio, Elon Musk, and many others have called for regulation and urgent action to address the “existential risks”, up to and including human extinction, that AI supposedly poses to humanity.

Certainly, the rapid deployment of large language models without risk assessments, without disclosure of training data and processes, and seemingly without much attention to how the technology could be misused is worrying. However, in several panels at RightsCon, speakers reiterated that this AI gold rush is a product of corporate profit-seeking, not necessarily of regulatory clumsiness or technological inevitability.

In the very first session, Gideon Lichfield, editor-in-chief of Wired and former editor-in-chief of MIT Technology Review, and Urvashi Aneja, founder of the Digital Futures Lab, exchanged blows with Google’s Kent Walker. “Microsoft’s Satya Nadella said he wanted to make Google dance. And Google danced,” Lichfield said. “We’re all now leaping into the void, holding our noses, because these two companies are trying to outdo each other.” In his response, Walker emphasized the social benefits that advances in artificial intelligence could bring in areas like drug discovery and reaffirmed Google’s commitment to human rights.


The next day, AI researcher Timnit Gebru addressed the talk of AI’s existential risks head-on: “Ascribing agency to a tool is a mistake, and that’s a red herring. And when you see who’s talking like that, it’s literally the same people who have poured billions of dollars into these companies.”

Gebru continued: “Just a few months ago, Geoff Hinton was talking about GPT-4 and how it is the world’s butterfly. ‘Oh, it’s like a caterpillar that takes in data and then turns into a beautiful butterfly,’ and now it’s suddenly an existential risk. I mean, why do people take these people seriously?”

Frustrated by the narratives surrounding AI, experts like Frederike Kaltheuner, director of technology and human rights at Human Rights Watch, suggest addressing the risks we already know are associated with AI rather than speculating about what might come. In fact, there are some clear, well-documented harms from current uses of AI. Some examples:

Increased and Reinforced Misinformation: Recommendation algorithms on social media platforms like Instagram, Twitter, and YouTube have been shown to prioritize extreme and emotionally engaging content, regardless of its accuracy. Large language models contribute to this problem by producing persuasive misinformation known as “hallucinations”.

Kaltheuner particularly warns against the use of generative AI chatbots in risky contexts such as psychotherapy: “I worry about completely ill-considered use cases of generative AI for things the technology is simply not designed or suited for.”

Gebru reiterated her concern about the environmental impact of the enormous computing power needed to run large language models. She was fired by Google after raising these and other concerns in internal research. In addition, the poorly paid content moderators working on ChatGPT have experienced post-traumatic stress disorder from their work mitigating the models’ toxic outputs.

Regarding the concerns about the future of humanity, Kaltheuner asks: “Whose extinction? The extinction of the entire human race? We are already seeing people who are historically marginalized being hurt. So I find that a bit cynical.”
