
People’s Daily Online: Fasten the “seat belt,” and generative artificial intelligence will develop better


The birth of general-purpose large artificial intelligence models has shown the world the power of AIGC (artificial-intelligence-generated content). At the same time, issues such as AIGC’s credibility, privacy protection, and ethics have sparked unprecedented controversy.

The relevant departments in China responded quickly. On April 11, the Cyberspace Administration of China issued the “Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment)” (hereinafter referred to as the “Administrative Measures”), proposing 21 measures. The industry believes this is the first time China has explicitly solicited opinions on the management of generative artificial intelligence services, which will help promote the healthy and stable development of the AIGC industry.

AIGC’s negative impacts are emerging, and the industry urgently needs regulation

At present, artificial intelligence is increasingly used to generate text, images, videos, and more, and has become a new mode of content production (AIGC) after professionally generated content (PGC) and user-generated content (UGC).

However, from the moment AIGC entered the public eye, it has been surrounded by controversy.

Copyright disputes bear the brunt. In early 2023, the commercial image library Getty Images repeatedly accused the AI company Stability AI of copying and using, without authorization, more than 12 million copyrighted images from its library to train its AI image-generation model. The Wall Street Journal also reported that News Corp is preparing to file lawsuits against companies such as OpenAI, Microsoft, and Google, demanding compensation for the use of its content in AI training.

Data security and privacy issues follow close behind. According to South Korean media reports, there have been three cases of misuse and abuse of ChatGPT within Samsung, including two “equipment information leaks” and one “meeting content leak,” exposing confidential content such as semiconductor equipment measurement data and product yield rates. Companies such as JPMorgan and SoftBank have begun restricting employees from using ChatGPT in the workplace.

Some have even exploited AIGC’s powerful content-generation capabilities to fabricate information, repeatedly triggering heated discussion. For example, a photo of a woman on the Guangzhou subway was recently “one-click undressed” by others using AI technology, and the fake images spread widely online; someone used AI to generate a picture of himself kissing a female celebrity; and in February this year, someone used ChatGPT to fabricate a news story claiming that “the Hangzhou municipal government will cancel the license-plate-number driving restriction policy,” which many people believed to be true.

Many countries have launched investigations into ChatGPT. In early April, the Italian personal data protection authority announced a ban on the chatbot ChatGPT, restricted its developer OpenAI from processing Italian users’ information, and opened an investigation. Canada’s federal privacy regulator also announced an investigation into OpenAI over its alleged “collection, use and disclosure of personal information without consent.” Germany’s federal data protection commissioner likewise stated that, out of data security considerations, a ban on ChatGPT could not be ruled out.


AI development is accelerating, and AI governance must accelerate with it. Many industry experts told this People’s Daily Online reporter that China’s introduction of the “Administrative Measures” is very timely, and that such detailed management regulations are rare worldwide.

Qi Jiayin, vice chairman of the China Information Economics Association and professor at the School of Cyberspace Security of Guangzhou University, noted that the “Administrative Measures” were released only four months after OpenAI opened ChatGPT to global public testing, and may become the world’s first rules and regulations specifically governing generative artificial intelligence. They reflect China’s agile response capability in the field of AI governance and will undoubtedly play an important leading role in regulating and promoting the development of AIGC. They will provide a template for global AIGC supervision, help accelerate the construction of AI governance systems, and promote the orderly and standardized development of the AIGC industry.

Emphasizing safety in order to better promote development

Industry experts believe the “Administrative Measures” emphasize the protection of rights and interests and are mainly aimed at preventing abuse: on the one hand, preventing platforms from abusing artificial intelligence technology to infringe on user privacy; on the other, preventing users from abusing generative artificial intelligence to infringe on the rights and interests of others.

Ye Zhenzhen, chairman and president of People’s Daily Online and director of the National Key Laboratory of Communication Content Cognition, believes that the “Administrative Measures” issued by the competent authority are not intended to restrict the development of artificial intelligence, but rather to “fasten the seat belt” through rules and regulations so that AI can develop faster and better. He emphasized: “On the issue of coordinating development and security, no development is the greatest insecurity.”

China attaches great importance to the development of artificial intelligence technology and regards it as a strategic emerging technology driving technological innovation, industrial upgrading, and productivity improvement. In recent years, relevant departments have successively released policy documents such as the “New Generation Artificial Intelligence Development Plan,” the “Three-Year Action Plan for Promoting the Development of the New Generation Artificial Intelligence Industry,” the “Guiding Opinions on Accelerating Scenario Innovation and High-Level Application of Artificial Intelligence to Promote High-Quality Economic Development,” and the “Notice on Supporting the Construction of New-Generation Artificial Intelligence Demonstration Application Scenarios,” providing important guidance for research on core AI technologies, the application of products, and the exploration of new models and paths for development.

At this year’s National Two Sessions, the relevant official of the Ministry of Science and Technology also specifically emphasized that “artificial intelligence will be regarded as a strategic emerging industry and a new growth engine,” that strong support will continue to be given to basic research, application promotion, and open cooperation in AI, and that the establishment of a safe and controllable AI governance system will be promoted at the same time.


Qi Jiayin pointed out that the newly released “Administrative Measures” draw the bottom line and red line for AIGC research, development, and application: the provision of generative AI products or services must comply with the requirements of laws and regulations and respect social morality, public order, and good customs. At the same time, they make clear that the “provider” of AIGC bears responsibility for R&D and application: organizations and individuals that use generative AI products to provide services such as chat and text, image, or sound generation must take responsibility for product applications, security assessments, the legality of algorithms, the legality of data, and so on. In addition, the “Administrative Measures” also provide correction mechanisms for AIGC R&D and application, including continuous optimization and correction, AIGC labeling reminders, a review mechanism, a supervision and guidance mechanism, a service termination mechanism, and a punishment mechanism.

“The ‘Administrative Measures’ establish rules for AIGC research, development, and application for the first time, providing a regulatory framework for the field and playing a very important and positive role in regulating and guiding its development,” Qi Jiayin said.

Experts call for greater attention to potential risks

The reporter noticed that the “Administrative Measures” also propose preventive measures for possible derivative problems, such as requiring that content generated by generative AI “should reflect the core socialist values,” preventing all kinds of discrimination, and stipulating that providers “shall not use advantages in algorithms, data, platforms, and the like to engage in unfair competition,” must “take measures to prevent the generation of false information,” and must “prevent harm to the physical and mental health of others.”

“This deserves close attention.” Qi Jiayin explained that, taking ChatGPT as an example, it has powerful human-like self-learning and interactive text-processing abilities, and its automatically generated language has a strongly human-like style. This may allow ChatGPT to shed its identity as a mere technology and take on a human-like social role, becoming an “agent” that deeply participates in and intervenes in social relations. If unrestricted, the “human-machine” interaction brought about by AIGC may replace real “human-human” interaction and become a core element interfering with real human social relations, bringing about an alienation of both technology and humanity that must be guarded against.

Ye Zhenzhen pointed out that AI platforms have positions and AI-generated content has orientations; so-called “technology neutrality” does not exist. Therefore, whether it is the corpus data fed to AI or the algorithms of the models themselves, close attention must be paid to their political direction, public opinion orientation, and value orientation. AI cannot be allowed to escape human oversight, the potential risks of AIGC’s “generation is dissemination” characteristic cannot be ignored, and the path of AI application and governance must be actively explored.


Regarding the cultivation of AIGC values, Professor Shen Yang of Tsinghua University suggested more “human intervention.” The key is to make large language models more explainable, which is very important for correcting AI’s value biases and knowledge prejudices.

It is understood that the National Key Laboratory of Communication Content Cognition, which People’s Daily is building with People’s Daily Online as its mainstay, has preliminarily established an AI ideological risk assessment system that uses human-machine collaboration for evaluation.

To prevent “technological derailment” at the source, in terms of institution building, the Ministry of Science and Technology issued an announcement on April 4 soliciting opinions on the “Measures for the Review of Science and Technology Ethics (Trial),” proposing that units engaged in scientific and technological activities in fields such as life sciences, medicine, and artificial intelligence should establish a science and technology ethics (review) committee if their research involves sensitive areas of science and technology ethics.

Shen Yang said that each AIGC platform should have an ethics review committee to ensure compliance with legal and ethical rules when large-scale emergent behavior occurs. “We are not paying enough attention to trainers, psychologists, ethicists, and prompt engineers, and we need to improve further.”

Qi Jiayin pointed out that as technology develops rapidly, relevant rules and regulations will need continuous optimization, and recommended further introducing AIGC security assessment norms and management process specifications.

“We should improve a multi-dimensional standard system covering R&D, review, application, and safety, and improve the AIGC technology innovation ecosystem.” Yang Song, director of the Science and Technology Development Department of the National Key Laboratory of Communication Content Cognition, suggested accelerating the construction of an integrated innovation-chain and industrial-chain system guided by national top-level planning, led by leading enterprises, and jointly supported by universities and research institutions, improving data-sharing mechanisms, and vigorously promoting the pooling of data resources.

(Article source: People’s Daily Online)


Article author: Zhao Zhuqing

Original title: Fasten the “seat belt,” and generative artificial intelligence will develop better

