
ChatGPT is powerful but there are still key issues unresolved

by admin

On January 27, Reuters reported that Sciences Po, one of France’s top universities, announced a ban on the use of ChatGPT. Penalties for using the software may be as severe as expulsion from the school, or even from French higher education altogether.

“ChatGPT software is raising a serious issue of fraud and plagiarism for educators and researchers around the world,” Sciences Po said in the notice. Without transparent attribution, and except for specific course purposes, students may not use the software to produce any written work or presentations.

Sciences Po is not the first school to announce a ban on ChatGPT. Less than a month after its release, ChatGPT had already attracted great attention from the American education community. Many American middle schools and universities announced campus bans on ChatGPT, and by cutting back on take-home assignments tried to stop students from cheating with ChatGPT over their home networks.

The New York City Department of Education went further, asking students and teachers in New York City not to use the AI tool, and blocked access to ChatGPT on devices and networks under the department’s jurisdiction.

The ethical and legal issues of AI-generated content are triggering extensive discussions and thinking from all walks of life.

Academic worries behind AI

“ChatGPT is fun, but it’s not the author,” said Holden Thorp, editor-in-chief of the journal Science, in an editorial on artificial intelligence published on January 26.

Holden Thorp pointed out that “originality” is the basis for Science to publish papers, and the act of using ChatGPT to write text is equivalent to plagiarizing from ChatGPT.

“We are updating our editorial policy to ask authors not to use text, data, images, or graphics generated by ChatGPT (or any other artificial intelligence tool). Violation of this policy will be regarded as academic misconduct by the Science journals, on par with image manipulation or plagiarism.” However, Holden Thorp also said that the policy does not cover datasets generated by AI for research purposes.

In AI academia, ICML (International Conference on Machine Learning), one of the best-known machine learning conferences, also recently announced that papers containing text generated by large language models (LLMs) such as ChatGPT are prohibited (except where such models are themselves the subject of the research), to avoid “unintended consequences and unsolvable problems”.

For academic journals and top conference papers, the biggest problem with AI-generated content is knowledge ownership and responsibility determination.

As the author of a paper, the researcher is undoubtedly responsible for its viewpoints and content, but how can AI be responsible for what it writes? If content generated by AI is false, inappropriate, fabricated, or even plagiarized, who should be held accountable?


Given the widespread problems caused by “AI cheating”, everyone from OpenAI itself to academic publishers and grassroots developers is now studying how to tell whether the “author” of an article is a human or a machine.

Currently, OpenAI is developing corresponding AI detection tools.

OpenAI visiting researcher Scott Aaronson said in a talk at the University of Texas that they are fighting cheating by “watermarking” AI-generated content.

This technique adjusts the rules by which ChatGPT chooses words, planting “pseudo-random” word choices at specific positions in the generated content. The pattern is difficult for readers to detect, but, like an encrypted code, anyone who “holds the key” can easily judge whether the content was generated by ChatGPT.
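The idea can be sketched in a few lines. This is only a toy illustration of a keyed watermark, not OpenAI’s actual scheme: a secret key drives a pseudo-random score for each word given its predecessor; a watermarking generator would bias sampling toward high-scoring words, so the key holder can later check whether a text scores suspiciously above the random baseline. The key, threshold, and function names here are all hypothetical.

```python
import hmac
import hashlib

SECRET_KEY = b"example-key"  # hypothetical; in practice only the provider holds this


def token_score(prev_token: str, token: str) -> float:
    """Keyed pseudo-random score in [0, 1) for a token given its predecessor.

    Readers without the key see only ordinary-looking text; with the key,
    the same deterministic score can be recomputed for every word pair.
    """
    msg = f"{prev_token}|{token}".encode()
    digest = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def looks_watermarked(tokens: list[str], threshold: float = 0.7) -> bool:
    """Flag text whose average keyed score is suspiciously high.

    A watermarking generator biases its sampling toward high-scoring
    tokens, so watermarked text averages well above the ~0.5 baseline
    expected from human-written (effectively random) word choices.
    """
    scores = [token_score(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(scores) / len(scores) > threshold
```

Because the score is an HMAC of the text itself, the check requires no metadata in the document: possession of the key alone is enough to run detection.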

Nature’s publisher, Springer Nature, is also developing technology that can detect LLMs.

And not long ago, Edward Tian, a 22-year-old Chinese-American student at Princeton University, released an app called GPTZero designed to catch ChatGPT. It measures a text’s “perplexity” and “burstiness” to judge whether the content is human-created or machine-generated.
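The two signals can be approximated very roughly. The sketch below is only a stand-in for intuition: real detectors like GPTZero score perplexity with a neural language model, whereas this toy uses a unigram model built from the text itself, and it treats burstiness simply as the variance of sentence lengths (human writing tends to mix long and short sentences; model output is often more uniform).

```python
import math
from collections import Counter


def perplexity(text: str) -> float:
    """Toy unigram perplexity: how 'surprising' the word choices are.

    A text that repeats the same words has perplexity near 1; varied
    vocabulary pushes it higher. (A real detector uses an LLM here.)
    """
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    log_prob = sum(math.log(counts[w] / n) for w in words)
    return math.exp(-log_prob / n)


def burstiness(text: str) -> float:
    """Variance of sentence lengths, split naively on periods."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    mean = sum(lengths) / len(lengths)
    return sum((l - mean) ** 2 for l in lengths) / len(lengths)
```

A detector would combine such scores: low perplexity together with low burstiness leans “machine-generated”, while high values of both lean “human”.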

AI raises copyright issues

Outside the schools, the ethics of AI itself has led artists to seriously question the new AI drawing tools, pointing directly to another crucial issue: copyright.

Since AIGC became popular, producing illustrations has become much easier in many industries. Take the media industry: serious outlets have high requirements for news illustrations, which must match the meaning of the text and have clean copyright.

Therefore, whether artwork or photograph, an illustration is either created in-house or licensed from a gallery; but many articles are constrained by practical conditions, and finding a picture that meets the requirements is not easy.

With AI drawing, many media workers began trying to generate pictures with AI, even drawing portraits of news figures directly. But does posting pictures you made yourself with AI avoid infringing copyright? The answer is not certain.

Portraits of Biden & Trump generated by 6penAI

In the second half of 2022, just as AIGC was becoming popular, some well-known artists protested against AI drawing. Erin Hanson and many other American artists launched a protest against Stable Diffusion, arguing that some of the paintings it generated plagiarized their styles and that this behavior seriously violated their legal rights.

Although AIGC is called AI generation, the AI cannot truly create on its own; it can only imitate human creative processes. AI drawing is a typical example: training requires “feeding” the model a huge number of human paintings, from which it learns human composition and painting technique to realize AIGC (AI-Generated Content).


However, in this generation process the AI learns so thoroughly that many of its paintings directly match the styles of the human painters in its training data.

“Any artist with a bit of a reputation has probably encountered incidents of their work being stolen and their ideas infringed,” said Xiang Huxiu, an art director at a well-known domestic game company. Although the AI does not wholesale copy an artist’s work, plagiarism of an artist’s style is also very hard to accept, regardless of whether the plagiarist is a human or an AI.

“If someone holds up a painting exactly like mine, I don’t consider that study or reference. And if it was ‘generated’ by him, it’s no different from claiming it as original. Commercial use or not, I would be seriously offended.”

AI plagiarism and cheating should not be treated as moral issues alone; they also need to be constrained by laws and regulations. At present, AI sits in legal blind spots in many fields.

Legislation on AI in China is still at the stage of establishing basic rules. Two AI-related regulations have been promulgated and have come into force: the “Internet Information Service Algorithm Recommendation Management Provisions” and the “Internet Information Service Deep Synthesis Management Provisions”.

In addition, scattered provisions appear in the “Civil Code”, the “Data Security Law”, the “Cybersecurity Law”, the “Personal Information Protection Law”, the “Regulations on the Management of Network Audio and Video Information Services”, and the “Provisions on the Ecological Governance of Network Information Content”, along with some documents supporting industrial development, but these have not yet formed a complete system.

“AI-generated content is an emerging technology. Because the law lags behind and values stability, it has not yet specifically regulated this phenomenon of suspected plagiarism by AI.”

Wang Yuwei, a partner at Guantao Law Firm, said that, judging from foreign industry practice, more and more platforms and AI drawing tools are imposing strict requirements on the copyright of works, to guard against copyright infringement caused by AI plagiarism.

In China, at present, one must still judge under the Copyright Law whether AI-generated content is substantially similar to a human artist’s work, and on that basis determine whether it constitutes plagiarism.


Beyond AIGC content suspected of plagiarizing artists’ work, even randomly generated content and paintings can still raise copyright issues.

When AI drawing was first taking off on Douyin and Xiaohongshu, imaginative netizens proposed posting AI-generated pictures to paid galleries as chargeable resources, so as to “earn money while lying down”.

Faced with such maneuvers, Adobe chose to open up AI drawing “with restrictions”. Generative AI artwork may be uploaded for sale on the Adobe Stock image library as long as it meets certain criteria, but AI-generated content must be labeled before uploading, and the uploader must own the commercial rights to any images or text it references.

Stability AI and RunwayML, the developers of the open-source Stable Diffusion, believe that since the model is open source, its output should likewise follow the open-source CC0 license: the copyright belongs to the public, and anyone may use it freely, including commercially.

For AIGC models that are not open source, such as DALL-E 2 and Midjourney, ownership of the generated content is comparatively vague.

There is also considerable copyright chaos in the AIGC market. Some vendors who build applications on Stable Diffusion claim that the copyright of images generated by their AI programs belongs to them, and some even mint NFTs from those images to sell to users.

In this regard, Wang Yuwei believes that when users create with such AI applications, the generated pictures can constitute works under the Copyright Law if they meet the requirements of originality.

However, if the vendor has clearly informed users of the copyright ownership, that is equivalent to a contractual agreement, so the copyright of such works should belong to the vendor. It is not that works generated by AI applications built on open-source models have no copyright; rather, copyright ownership should be determined according to the provisions of the Copyright Law.

“The biggest obstacle and resistance to the formulation of relevant laws at present may be how the law maintains the balance between technological development and ethical values.”

Wang Yuwei pointed out that, on the one hand, the rapid development of AI technology may conflict with basic human ethical understandings and violate public order and good morals; on the other hand, regulation that is too strict could hinder the technology’s further development.

