
AI: Researchers urge US agency to freeze GPT-4 – Italy shuts down ChatGPT

by admin

A non-profit research organization has asked the US Federal Trade Commission (FTC), the agency responsible for consumer protection, to investigate OpenAI. The Center for AI and Digital Policy ("Policy Center" for short) filed an official complaint this week. The central allegation is that OpenAI's market launch of GPT-4 violates US commercial law: the product deceives and endangers people, is biased, and poses a risk to privacy and public safety.

In the 46-page document, which is publicly available, the Policy Center calls on federal agency officials to investigate and, if necessary, regulate the generative AI systems created by OpenAI. The Federal Trade Commission addressed in it is an independent federal authority based in Washington, DC. Its powers go beyond those of a competition authority: it also regulates consumer protection in the USA.

In the case of direct complaints from consumers, organizations or companies, the authority takes action against individual companies. It can also intervene in response to inquiries from Congress or after reports in the media. The authority's aim is to ensure a functioning competitive market and to counter unfair, deceptive, or anti-competitive practices.

Disinformation and manipulation campaigns, the proliferation of conventional and unconventional weapons, and cybersecurity are considered specific threats that OpenAI itself has acknowledged. The AI company has warned that AI systems have significant potential to spread and entrench ideologies, worldviews, truths and untruths. The Policy Center is particularly critical of the fact that OpenAI excludes any liability for consequences and damage caused by the use of its AI system.

The US consumer protection agency has made the use of artificial intelligence in products subject to several conditions: transparency, explainability, fairness, an empirical foundation and traceability, as well as clear responsibility, which entails both accountability and liability in the event of damage. According to the complaint, GPT-4 meets none of these conditions, so it is time for the agency to intervene. The Policy Center proposes independent oversight and evaluation of commercial AI products coming to market in the United States. Measures are required to protect consumers, companies and trade as a whole, and further releases of AI products by OpenAI must be prevented.

The Center for AI and Digital Policy is a Washington, DC-based not-for-profit research organization that includes a global network of AI regulatory experts and legal professionals from 60 countries. The Policy Center offers professional training for decision makers in the field of AI regulation (future AI policy leaders). The organization advises US state governments and international organizations on regulatory decisions related to AI and emerging technologies.


The US authority had already warned against deceptive AI advertising in February and took a critical look at providers' market claims. At the time, it warned companies doing business in the US to heed its official AI guidance from April 2021 ("Aiming for truth, fairness, and equity in your company's use of AI") – even before launching new products. In October 2022, the US government published the Blueprint for an AI Bill of Rights – a prospective charter of fundamental rights for the AI age intended to prevent discrimination by algorithms. Data protection and explainability are among the principles set out in it.

What both documents have in common is that they do not have direct legal force, but are intended to set an example. Since 2021, the Federal Trade Commission has been developing new specifications for products and services in the IT sector (“FTC Explores Rules Cracking Down on Commercial Surveillance and Lax Data Security Practices”). Similar to the legislative process in the EU surrounding the upcoming AI Act, the assessment and classification of risks is at the heart of the emerging legislation. A possible ban on AI applications such as ChatGPT is in the air.

Italy has blocked ChatGPT "until further notice", and OpenAI faces a fine running into the millions there – for data protection reasons. The Italian data protection authority, the Garante per la Protezione dei Dati Personali ("Garante della privacy" for short), is the first authority in the world to prevent the use of a generative AI chatbot. The trigger for the restrictive measure was a security problem on OpenAI's side that made chat histories and payment information of other users visible on March 20th. According to the authority, the massive storage and use of personal data for "training purposes" is not transparent and does not comply with the protection of personal data in the EU. There is also no age filter to protect minors from disturbing content. OpenAI apparently now has three weeks to implement protective measures and "get the problem under control," said the lawyer and data protection expert Guido Scorza of the authority – in two languages, in his own blog and to the IT portal Wired.

As Scorza explained to Wired, the ban is a temporary measure and concerns the processing of personal data: if ChatGPT operates without personal data, further use is not a problem. In practice, OpenAI will have to block access to its software from Italy, or at least limit it to functions that do not involve personal data. How long this may take after the publication of the decision of March 30th is unknown – and it is still unclear whether OpenAI will comply with the request. In addition, the authority cannot intervene in connections via virtual private networks (VPNs), which route traffic through networks outside Italy and can thus bypass a possible block. The decision is available in two languages (Italian and English) in the documents area of the Garante della privacy.

An open letter is making waves in this context: the manifesto published by the Future of Life Institute raises the question of the risks of large AI systems and calls for a moratorium of at least six months. Over 1,700 figures, some of them prominent, from the world of AI and technology have publicly signed the call. Experts are divided: while some argue that a six-month pause on work on large AI models does not go far enough, others call for acceleration instead of a forced break in order to master the risks and catch up with OpenAI and GPT-4.

In a TIME commentary on the open letter, the well-known alignment researcher and AI critic Eliezer Yudkowsky went so far as to project dramatic conditions into the future, call for bombings to destroy AI data centers, and suggest tracking all GPUs sold for AI use – explanations that read like science fiction.

More moderate voices, concerned with security as well as research and competition, can perhaps be found in a petition from LAION, the Large-scale Artificial Intelligence Open Network: its members propose a CERN for AI research and the training of large open-source models. The petition includes a research-driven rationale worth reading, as well as viewpoints and focal points of the ongoing discussion on AI risks. Engaging with the arguments of both sides can help readers arrive at an objective assessment of their own.


Update, 31.03.2023, 20:00:

Updated the section on suspension of ChatGPT in Italy based on Italian sources. OpenAI has received a request to block country access for Italy. Functions of the application that do not require personal data are exempt from the decision of the Italian data protection authority. It is still unclear whether and to what extent the US company will comply.


(sih)



