Regulate AI like nuclear weapons: OpenAI executives publish a post discussing "superintelligence" regulation – WSJ


Some time ago, OpenAI CEO Sam Altman, who usually dresses casually, appeared before the public in a suit and tie to attend a hearing titled "Oversight of AI: Rules for Artificial Intelligence." He was joined by longtime AI critic Gary Marcus and Christina Montgomery, IBM's chief privacy and trust officer.

The hearing was the first in a series: lawmakers hope to learn more about the potential benefits and harms of artificial intelligence and eventually "set the rules" for it, avoiding a repeat of their mistake of regulating social media too late.

At the hearing, Altman largely agreed with lawmakers on the need to regulate the increasingly powerful AI technologies being developed by his company and others such as Google and Microsoft. At this early stage of the conversation, though, neither the companies nor lawmakers could say what that regulation should look like.

Now, a week after the hearing, OpenAI's executives have laid out their views on regulation. In a blog post titled "Governance of superintelligence," Altman et al. write, "Now is a good time to start thinking about the governance of superintelligence — future AI systems dramatically more capable than even AGI."

Altman et al. believe that within the next decade, AI systems will exceed expert skill levels and carry out productive activity comparable to today's largest corporations in most fields. At the same time, the potential benefits and risks of superintelligence will be greater than those of any previous technology. To achieve a more prosperous future, we must manage its risks and ensure its smooth integration into society.

To achieve this goal, Altman et al. argue, there is much we can do. First, we need coordination among the leading development efforts to ensure that superintelligence is developed safely and controllably. Second, we need an international agency, similar to the International Atomic Energy Agency, to oversee research on and application of superintelligence, verify compliance with safety standards, and place limits on deployment and security levels. Finally, we need to develop the technical capability to make superintelligence safe and controllable.


But Altman et al. also caution that, while we must mitigate the risks of today's AI technologies, similar regulatory mechanisms should not be applied to models below a significant capability threshold.

How to define that threshold, however, is itself a thorny question.

The following is the original blog post:

Given what we’re seeing today, it’s conceivable that within the next decade, AI systems will surpass the level of experts in most domains and be able to perform productive activities comparable to the largest corporations today.

In terms of potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we must manage risk to get there. Given the possibility of existential risk, we cannot simply be reactive. Nuclear energy and synthetic biology are commonly cited examples of technologies with this property.

We must mitigate the risks of today’s AI technologies, but superintelligence will require special handling and coordination.

A starting point

Many ideas matter for our chances of successfully navigating this development. Here, we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that superintelligence is developed in a manner that both maintains safety and helps these systems integrate smoothly into society. This could be done in a number of ways: major governments around the world could set up a project that many of the current efforts become part of, or we could collectively agree (with the backing of a new organization, like the one suggested below) to limit the annual growth rate of frontier AI capability to a certain rate.


Of course, individual companies should be held to extremely high standards and fulfill their responsibilities.

Second, we may end up needing something like the International Atomic Energy Agency (IAEA) to oversee work on superintelligence: any effort beyond a certain threshold of capability (or computing resources, etc.) would need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on. Tracking computing-resource and energy usage could go a long way toward realizing this idea.
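To make the compute-tracking idea concrete, here is a minimal sketch of how a training run's compute might be estimated and checked against an oversight threshold. Everything in it is an illustrative assumption: the threshold value is hypothetical, and the 6 × parameters × tokens FLOP estimate is a common rule of thumb, not anything the blog post specifies.

```python
# Minimal sketch: estimate a training run's compute and compare it to a
# hypothetical oversight threshold. The ~6 * N * D FLOP estimate
# (N = parameters, D = training tokens) is a common rule of thumb;
# the threshold below is purely illustrative, not a real regulatory figure.

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical threshold for illustration


def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute via the ~6 * N * D rule of thumb."""
    return 6.0 * n_parameters * n_tokens


def requires_oversight(n_parameters: float, n_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the hypothetical threshold."""
    return estimated_training_flop(n_parameters, n_tokens) >= REPORTING_THRESHOLD_FLOP


if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flop = estimated_training_flop(70e9, 2e12)
    print(f"Estimated compute: {flop:.2e} FLOP; "
          f"oversight required: {requires_oversight(70e9, 2e12)}")
```

In practice the hard part is the measurement itself (chip-level reporting, data-center energy accounting), not the comparison; the sketch only illustrates why a compute threshold is an attractive regulatory trigger: it is a single number that can, in principle, be tracked.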

As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require; as a second, individual countries could implement them. It would be important that such an agency focus on reducing existential risk, rather than on issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make superintelligent systems safe. This is an open research problem that we and others are putting a lot of effort into.

Regulatory scope

We believe it is important to allow companies and open-source projects to develop models below a significant capability threshold without the kind of regulation we describe here (including burdensome mechanisms such as licenses or audits).

Today's systems will create enormous value for the world, and while they do carry risks, the level of those risks appears on par with that of other Internet technologies, and society's likely responses seem appropriate.

By contrast, the systems we focus on will possess power beyond any current technology, and we should be careful not to undercut our focus by applying similar criteria to technologies far below that level.


Public input and potential

However, the governance of the most powerful systems, as well as decisions about their deployment, must be subject to strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. Within these broad bounds, we continue to believe that individual users should have a great deal of control over how the AI they use behaves.

Given the risks and difficulties, it’s worth pondering why this technology is being built.

At OpenAI, we have two basic starting points. First, we believe superintelligence will lead to a much better world than we can imagine today (we are already seeing early examples of this in areas such as education, creative work, and personal productivity). The world faces many problems that we will need much more help to solve; this technology can improve our societies, and the creativity of everyone using these new tools will certainly astonish us. The economic growth and improvement in quality of life will be astounding. Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence: the upsides are tremendous, the cost of building it falls every year, the number of actors building it is rising rapidly, and it is inherently part of the technological path we are on.

Original link: https://openai.com/blog/governance-of-superintelligence

Source of this article: Heart of the Machine. Original title: "Regulate AI like nuclear weapons: OpenAI executives publish a post discussing 'superintelligence' regulation."

