
GPT-5 raises concerns about AI security risks; Musk takes the lead in a petition to halt development. Does Zhou Hongyi push back because he “can’t afford to lose”?


(Original title: GPT-5 raises concerns about AI security risks; Musk takes the lead in a petition to halt development. Does Zhou Hongyi push back because he “can’t afford to lose”?)

Zhou Hongyi posted on Weibo today, giving his views on the petition by Musk and thousands of others to halt the development of GPT-5.

Zhou Hongyi said that the petition by Musk and others will not affect China’s development of its own large language models. Some Chinese companies, including 360, have already demonstrated their work, but they are still about two years away from the level of GPT-4, so it is too early to worry about the risks.

While ChatGPT has set off a global boom, concerns about AI ethics and security risks have always existed.

Tian Feng, dean of SenseTime’s Intelligent Industry Research Institute, pointed out that the governance challenges accompanying the AIGC boom have prompted sober reflection from all parties. The New York City Department of Education, for example, banned access to ChatGPT on its networks and devices over concerns about cheating, negative impacts on student learning, and the accuracy of the tool’s output.

Beyond education, Tian Feng noted that AIGC has also triggered discussions about copyright protection and has raised the difficulty of governing online content and managing cybersecurity risks: criminals may use it to generate fake photos indistinguishable from real ones, as well as disinformation, fraud scripts, and malicious code.

Yang Xiaoguang, CPO of Donghua Software Zhuyun Technology and an expert in digital transformation, said, “The results produced by artificial intelligence models such as ChatGPT depend on very large data sets, and the construction of those data sets can be manipulated by people or institutions. Because they rely on large and potentially biased data sets, it is entirely possible for ChatGPT and similar tools to give users inaccurate, biased, or even dangerous answers, and this risk is bound to grow as AI becomes more advanced and more widely used.”


Regarding how humans will respond to the future security challenges brought about by AI, Zhou Hongyi said that no one can give a definite answer.

“I once thought about embedding a code phrase in the program, such as ‘the king of heaven subdues the earth tiger; the pagoda suppresses the river demon’, so that as soon as the phrase is spoken, the AI stops working. But the way things are developing now, it can no longer be controlled by a code word or by pulling the plug. The evolution of artificial intelligence no longer depends on anyone’s personal will, not even Musk’s.”

But even in the face of unknown challenges, Zhou Hongyi still firmly believes that “a lack of development is the greatest insecurity.”

Yang Xiaoguang also told the “Kechuangban Daily” (Science and Technology Innovation Board Daily) reporter that this petition will not have a major impact on the development of artificial intelligence.

“These calls for a pause are futile. It is like an arms race: no one dares to stop, because all previous efforts would otherwise be wasted, and that risk is even greater.”

Some industry analysts pointed out that the “spear” and the “shield” advance together: as applications such as ChatGPT take off, anti-generative-AI technology is also expected to see explosive demand.

Essence Securities believes that as deep-synthesis and generative AI technologies are open-sourced and related products and services multiply, the technical threshold for producing such content keeps falling, putting the technology in ordinary hands. Once deep synthesis and generative AI are abused, they create security risks and substantial harm, damaging the likenesses, reputations, and property of individuals and companies. Demand for anti-generative-AI technology is therefore expected to surge.


CITIC Securities stated that phenomenon-level hit products such as ChatGPT signal that a new round of artificial intelligence technology will quickly be industrialized, and that the future industry pattern will be large foundation models overlaid with various vertical and scenario-specific models. As AI collides and integrates further with security, both AI-enabled cybersecurity and the security of AI itself deserve attention.

Yang Xiaoguang pointed out that applications such as ChatGPT will bring technical challenges and new opportunities.

“Security issues are extremely important to the country and individuals. We must be prepared for danger in times of peace, be vigilant, and prevent problems before they happen. This will be an important topic in the field of digital security in the future.”

Some industry insiders told the “Science and Technology Innovation Board Daily” reporter that Zhou Hongyi’s pushback against Musk also stems from the fact that he “can’t afford to lose” this round of AI evolution and competition. “Given that GPT is an invention that surpasses the Internet and the iPhone, which Internet company or tech giant, 360 included, dares to neglect it?!”
