
Why courts could dictate American AI rules


Little by little, it is becoming clear that in the US it will probably not be politicians who regulate artificial intelligence, but the courts. The Federal Trade Commission (FTC) recently launched an investigation into whether OpenAI violated consumer protection laws by using people’s online data to train its popular AI chatbot ChatGPT. That process, too, will probably end up before a judge. Artists, authors and photo agencies such as Getty are already hiring lawyers and taking action against AI companies like OpenAI, Stability AI and Meta for allegedly infringing copyright when training their models.


If these lawsuits succeed, they could force OpenAI, Meta, Microsoft and others to change the way AI is built, trained and deployed so that it is fairer and more equitable, at least that is the hope. Courts could also create new ways for artists, writers and other creatives to obtain licenses or royalties when their works are used for training.

The boom in generative AI systems had revived American politicians’ enthusiasm for passing AI-specific laws. But with a divided US Congress and intense lobbying by tech giants, new regulation is unlikely to arrive as early as next year, says Ben Winters, senior counsel at the digital rights nonprofit Electronic Privacy Information Center (EPIC). Even the most prominent attempt to create new AI laws, Senator Chuck Schumer’s “SAFE Innovation Framework”, does not contain any concrete policy proposals.

“It seems the easier path [to an AI rulebook] is to start with the existing laws,” agrees Sarah Myers West, executive director of the AI Now Institute, an AI research organization. And that means litigation, in all areas. Existing laws do indeed provide ample ammunition for those who claim their rights have been violated by AI companies.


Over the past year, AI companies have been hit by a spate of lawsuits, most recently from comedian and author Sarah Silverman, who alleges that OpenAI and Meta illegally harvested her copyrighted material from the internet to train their models. Her claims resemble those of artists in another class action lawsuit alleging that popular AI image software used their copyrighted images without consent. Microsoft, OpenAI and GitHub are also facing a class action over the AI-powered programming tool Copilot, alleging that the product enables “piracy of software on an unprecedented scale” because it was trained on existing code harvested from the web.

Meanwhile, the FTC is investigating whether OpenAI’s data security and privacy practices are “unfair and deceptive” and whether the company may have harmed consumers, including through reputational damage, in training its AI models. The agency says it has concrete grounds for its concerns: earlier this year, a bug in OpenAI’s systems exposed users’ chat histories and even some payment information. In addition, AI language models often produce inaccurate and invented content, sometimes about specific people.


OpenAI is calm about the FTC investigation, at least in public. When asked for comment, the company only shared a Twitter thread by CEO Sam Altman, in which he said the company was “confident that we are complying with the law”. An agency like the FTC can take companies to court to enforce industry standards and push for better business practices, says Marc Rotenberg, president and founder of the Center for AI and Digital Policy (CAIDP), a nonprofit AI policy organization. CAIDP itself filed a complaint with the FTC in March, asking it to investigate OpenAI. The agency also has the power to create new rules, or “guardrails”, that tell AI companies what they can and cannot do, says AI Now’s Myers West.


The FTC could order OpenAI to pay fines, delete illegally collected data, and even shut down the algorithms that were trained on that data, Rotenberg says. In the most extreme case, ChatGPT could be taken offline entirely. There is even a precedent: in 2022, the agency forced the diet company Weight Watchers to delete data and algorithms after it illegally collected information from children.

Other US government agencies could also open investigations. The Consumer Financial Protection Bureau (CFPB), for example, has signaled that it wants to examine the use of AI chatbots in banking. And should generative AI play a decisive role in the upcoming 2024 US presidential election, the Federal Election Commission (FEC) could step in as well, according to Winters. In the meantime, the first lawsuit rulings should start to arrive, although it will likely take at least a few years for the class actions and the FTC investigation to work their way through the courts.

Mehtab Khan, a resident fellow at Yale Law School who specializes in intellectual property, data governance and AI ethics, fears that many of the lawsuits filed this year could be dismissed by judges as “too broad.” But they serve an important purpose nonetheless: the lawyers cast a wide net and see what they catch. That paves the way for more precisely targeted cases, which could prompt companies to change how they develop and use their AI models, the expert adds.

The lawsuits could also force companies to improve how they document the data they use, Khan says. So far, technology companies have only a rudimentary idea of what data actually goes into their AI models. More transparency could uncover illegal practices, but it could also help companies defend themselves in court.

