
The AIs are out of (democratic) control


We urgently need more attention, funds and human resources to introduce regulatory systems for AI, such as those in the aviation, pharmaceutical and food industries, says the scientist Lê Nguyên Hoang.

This content was published on April 14, 2023

On March 29, an open letter appeared, calling for a pause on giant AI experiments. So far, more than 20,000 academics and tech leaders have signed it. This appeal was long overdue.

Over the past decade, impressive algorithms have been hastily developed and deployed at scale, most notably ChatGPT and Midjourney. Similar artificial intelligences have been widely commercialized for fraud detection, résumé filtering, video surveillance and customer service, often even though their defects and biases were known.

But their main use is arguably in marketing. Many of the tech giants of our time, such as Google, TikTok and Meta, make money primarily from targeted advertising. ChatGPT's first publicly known customer is none other than Coca-Cola. This alone should be a red flag.

Le Nguyen Hoang

Lê Nguyên Hoang is co-founder and CEO of the cybersecurity startup Calicarpa, as well as co-founder and president of the not-for-profit Tournesol Association. Hoang's YouTube channel "Science4All" has garnered more than 18 million views since it launched in 2016.


In addition, we have seen how algorithms spread misinformation, recommend pseudo-medicine that endangers mental health, and were used to coordinate illegal (even slavery-related) markets. They have fueled hatred, helped destabilize democracies and, according to the United Nations and Amnesty International, even contributed to genocides. Algorithms threaten national security.


Nevertheless, their development remains extremely opaque. Hardly any outside authority has insight into the algorithms of Google, Meta or OpenAI. Even internal checks have been removed: Google fired its ethics team, Meta dismantled its responsible innovation department, and Microsoft's ethics team had to leave after raising the alarm about a hasty, unethical and unsafe release. Powerful, for-profit corporations have successfully created a world in which their algorithms face little accountability.

Effective regulation of AI is urgently needed

The software industry is far from the first to be out of control. For decades, the airline, auto, pharmaceutical, food, tobacco, construction and energy industries, among many others, put untested products on the market, costing millions of lives. Eventually, civil society pushed back against this lack of accountability. All democracies now have strong laws and powerful, well-resourced regulatory agencies that exercise democratic control over these markets. The software industry needs similar oversight.

We urgently need to prioritize safe and ethical technologies, rather than demanding that our countries be at the forefront of the race to build the most impressive AIs. In particular, how impressive the algorithms that power our electricity grids, cars, planes, power plants, data centers, social media and smartphones are should matter far less than their cybersecurity.

As a colleague and I warned in a 2019 book: if these algorithms are fragile or vulnerable, if they contain backdoors for an untrustworthy operator or have been outsourced to one outright, or if they violate human rights, which is usually the case, then we are all in great danger.


Nevertheless, the software industry and academia, as well as current legal and economic incentives, mostly work against the security mindset. Too often, the most cited, most celebrated and best-funded scientists, the highest-paid jobs in the software market, and the most successful companies are those that neglect cybersecurity and ethics. As a growing number of experts recognize, this needs to change. Urgently.

Our democracies probably cannot afford the decades it took to establish laws and oversight agencies for other industries. Given the speed at which ever more sophisticated algorithms are being developed and released, we have a very small window in which to act. The open letter that I and other AI researchers have signed aims to widen this window a little.

What you, your organizations and our institutions can do

Bringing today's most critical algorithms under democratic control is urgent. It is a vast and demanding undertaking that will not be achieved in time without the involvement of a large number of people with diverse talents, expertise and responsibilities.

A first challenge is attention. We all urgently need to invest more time, energy and resources in ensuring that our colleagues, organizations and institutions pay far more attention to cybersecurity.

Big Tech employees should no longer be invited and celebrated, especially by universities and the media, without being confronted about the safety and ethics of the products they make their living from. More generally, the question "What could go wrong?" should be asked far more often in discussions about technology.

A second challenge is institutional. New laws are needed, but today's big algorithms are probably already breaking existing ones, for instance by profiting from ad-based fraud. Yet the current near-total absence of external oversight bodies means these laws go unenforced. We must demand that legislators set up well-funded regulatory agencies to enforce the law online.


Switzerland has often served as a role model in setting democratic norms, and this is one way to continue that noble tradition. Moreover, the Lake Geneva region has recently set itself the goal of becoming a "Trust Valley" for digital trust and cybersecurity. To be recognized as such worldwide, strengthening oversight and cybersecurity organizations is key.

A third challenge lies in creating democratically governed, secure alternatives to today's most powerful algorithms. That is what I have mostly worked on for the past five years. My colleagues and I have built the non-profit project Tournesol for this purpose. Essentially, Tournesol's algorithm is the product of a secure and fair vote by the Tournesol community, which is open to everyone.

The sooner we put the security of our information ecosystems first, the sooner we will have a chance to protect our societies from today's massive cybersecurity vulnerabilities.

A pause in further development? AI researcher Jürgen Schmidhuber believes there won't be one.

Edited by Sabina Weiss. Translated from English by Benjamin von Wyl.

The views expressed in this article are solely those of the author and do not necessarily reflect the position of SWI swissinfo.ch.

