Plagiarism and cheating by AI should not be treated only as moral issues; they also need to be constrained by laws and regulations. Even randomly generated content and artwork can still raise copyright problems.
On January 27, Reuters reported that Sciences Po, one of France's top universities, announced that it would ban the use of ChatGPT. Penalties for using the software can be as severe as expulsion from the school, or even from French higher education as a whole.
"ChatGPT software is raising a serious issue of fraud and plagiarism for educators and researchers around the world," Sciences Po said in its notice. Except for specific course purposes, and absent transparent referencing, students may not use the software to produce any written work or presentations.
Sciences Po is not the first school to announce a ChatGPT ban. Less than a month after its release, ChatGPT drew intense attention from the American education community. Many American middle schools and universities announced campus bans on ChatGPT, and cut back on take-home assignments to keep students from cheating by accessing ChatGPT over their home networks. The New York City Department of Education went further, asking the city's students and teachers not to use the AI tool at all and blocking access to ChatGPT on devices and networks under its jurisdiction.
The ethical and legal issues raised by AI-generated content are triggering extensive discussion and reflection across society.
The academic worries behind AI
"ChatGPT is fun, but it is not an author." So ran an editorial on artificial intelligence published on January 26 by Holden Thorp, editor-in-chief of the journal Science. Thorp pointed out that originality is the basis for publishing papers in Science, and that using ChatGPT to write text amounts to plagiarizing from ChatGPT.
"We are updating our editorial policy to ask authors not to use text, data, images or graphics generated by ChatGPT (or any other artificial intelligence tool). Violations of this policy will be considered academic misconduct by the Science journals, no different from image manipulation or plagiarism." However, Thorp also said that the rule does not cover data legitimately generated by AI for research purposes in AI papers.
In AI academia, ICML (the International Conference on Machine Learning), one of the best-known machine learning conferences, also recently announced a ban on papers containing text generated by large language models (LLMs) such as ChatGPT, except where such text is itself the subject of the research, in order to avoid "unintended consequences and unsolvable problems."
For academic journals and top conference papers, the biggest problem with AI-generated content is knowledge ownership and the assignment of responsibility. As the author of a paper, a researcher is undoubtedly responsible for its opinions and content; but how can an AI be held responsible for what it writes? If AI-generated content is false, inappropriate, fabricated, or even plagiarized, who should be held accountable?
Given the widespread problems caused by "AI cheating," everyone from OpenAI itself, to academic journal publishers, to grassroots developers is now studying how to distinguish whether the "author" of an article is a human or a machine.
OpenAI is currently developing its own AI detection tools. OpenAI visiting researcher Scott Aaronson said in a talk at the University of Texas that they are adding a "watermark" to AI-generated content to combat cheating. The technique adjusts the rules by which ChatGPT samples words, planting pseudo-random word choices at specific positions in the generated text. The pattern is difficult for a reader to detect, but, like an encrypted code, anyone who "holds the key" can easily judge whether a piece of content was generated by ChatGPT.
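The general idea can be illustrated with a toy sketch. This is not OpenAI's actual scheme (which has not been published); it follows the spirit of keyed "green-list" watermarks discussed in the research literature: a secret key plus the previous word pseudo-randomly marks half the vocabulary as "preferred," the generator samples only preferred words, and anyone holding the key can measure how often a text's words fall in the preferred set. All names (`VOCAB`, `KEY`, the functions) are illustrative, not any real API.

```python
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]
KEY = "secret-key"  # shared between generator and detector

def green_list(prev_token: str) -> set:
    # Seed a PRNG with a keyed hash of the previous token, then
    # pseudo-randomly mark half the vocabulary as "green" (preferred).
    seed = hashlib.sha256((KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(length: int, seed: int = 0) -> list:
    # Toy "language model": picks uniformly, but only from the green
    # list, so watermarked text is biased toward green tokens.
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    # Detector: with the key, count how many tokens fall in the green
    # list determined by their predecessor. Unwatermarked text should
    # score near 0.5; watermarked text scores near 1.0.
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

Without the key, the bias is invisible: the green list changes with every preceding word, so the text looks statistically ordinary to a reader.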
Nature's publisher, Springer Nature, is also developing technology to detect LLM output. And not long ago, Edward Tian, a 22-year-old Chinese-American student at Princeton University, built an app called GPTZero designed to catch ChatGPT: it measures a text's "perplexity" and "burstiness" to judge whether the content was written by a human or generated by a machine.
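Both signals can be sketched in a few lines. This is a deliberately simplified illustration, not GPTZero's implementation: a real detector scores perplexity under a large language model, whereas the stand-in below uses a unigram model fit on the text itself, and burstiness is approximated as the spread of sentence lengths (human writing tends to mix short and long sentences; LLM output tends to be more uniform).

```python
import math
import statistics
from collections import Counter

def unigram_perplexity(text: str) -> float:
    # Toy stand-in for model perplexity: score each word under a
    # unigram model estimated from the text itself. Uniformly varied
    # word choice yields higher perplexity than heavy repetition.
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    log_prob = sum(math.log(counts[w] / n) for w in words)
    return math.exp(-log_prob / n)

def burstiness(text: str) -> float:
    # Approximate "burstiness" as the standard deviation of sentence
    # lengths in words: near zero for machine-flat prose, larger for
    # human prose that alternates short and long sentences.
    ends = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in ends.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)
```

A detector built on these ideas flags text whose perplexity and burstiness both fall below thresholds calibrated on known human writing.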
AI raises copyright issues
Outside of schools, the "morality" of AI itself has led artists to raise serious doubts about the new AI drawing tools. This points to another crucial question: copyright.
Since AIGC became popular, producing illustrations has become much easier in many industries. Take the media industry: serious outlets have high standards for news illustrations, which must match the meaning of the text and have cleared copyright. Whether artwork or photographs, images are either created in-house or sourced from an authorized gallery; yet many articles are constrained by practical conditions, and finding a picture that meets the requirements is not easy.
With AI drawing tools available, many media workers began trying AI-generated illustrations, even drawing portraits of newsmakers directly. But can they be sure that AI-generated pictures made for their own use will never infringe copyright? That is far from certain.
In the second half of 2022, just as AIGC became popular, some well-known artists protested against AI drawing. Erin Hanson and many other American artists launched a protest against Stable Diffusion, arguing that some of the paintings it generated plagiarized their styles and that this seriously violated their legal rights.
Although AIGC is called AI generation, the AI cannot create from nothing; it can only imitate what humans have already produced. AI drawing is a typical example: during training, the model must be "fed" a large number of human paintings so that it learns human composition and painting technique, thereby realizing AIGC (AI-Generated Content).
However, in the generation process the AI imitates so closely that many output paintings directly match the styles of the human painters in its training data.
"Any artist with a bit of a reputation has probably had works stolen and ideas infringed," said Xiang Huxiu, an art director at a well-known domestic game company. Although AI does not wholesale copy an artist's work, plagiarism of an artist's style is also very hard to accept, regardless of whether the plagiarist is a human or an AI. "If someone is holding a painting exactly like mine, I don't consider that study or reference. If it was 'generated' by him, that's no different from claiming it as original. Commercial use or not, I would be seriously offended."
AI plagiarism and cheating should not only be moral issues; they also need to be bound by laws and regulations. At present, AI sits in legal blind spots in many fields.
Legislation on AI in China is still at the stage of establishing basic rules. Two AI-related regulations have been promulgated and have come into force: the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services and the Provisions on the Administration of Deep Synthesis in Internet Information Services. Beyond these, there are only scattered individual provisions in the Civil Code, the Data Security Law, the Cybersecurity Law, the Personal Information Protection Law, the Provisions on the Administration of Online Audio and Video Information Services, the Provisions on the Ecological Governance of Online Information Content, and assorted industry-development policy documents; these have not yet formed a complete system.
"AI-generated content is an emerging technology, and the law, which lags behind events and values stability, has not yet specifically regulated suspected AI plagiarism," said Wang Yuwei, a partner at Guantao Law Firm. In practice abroad, more and more platforms and AI drawing tools have imposed strict copyright requirements on works, guarding against infringement caused by AI plagiarism. In China, for now, one must still judge under the Copyright Law whether AI-generated content is substantially similar to a human artist's work, and on that basis determine whether it constitutes plagiarism.
Beyond AIGC content allegedly plagiarizing artists' work, there is another side to the problem: even randomly generated content and artwork can still involve copyright issues.
When AI drawing was just emerging on Douyin and Xiaohongshu, imaginative netizens proposed uploading AI-generated pictures to paid image libraries as chargeable resources, so as to "earn money while lying down."
Faced with such maneuvers, Adobe chose to open up to AI drawing "with restrictions": generative AI artwork may be uploaded for sale in the Adobe Stock image library as long as it meets certain criteria. However, AI-generated content must be labeled before uploading, and the uploader must own the commercial rights to its reference images or text.
Stability AI and RunwayML, the developers of the open-source Stable Diffusion model, hold that since the model is open source, its output should accordingly follow the CC0 arrangement: the copyright belongs to the public, and anyone may use it freely, including for commercial purposes. For closed-source AIGC models such as DALL-E 2 and Midjourney, ownership of the generated content is relatively vague.
There is also plenty of copyright chaos in the AIGC market. Some vendors who build applications on top of Stable Diffusion claim that the copyright of pictures generated by their AI programs belongs to them, and some even mint NFTs from those images to sell to users.
On this point, Wang Yuwei believes that when users create with AI applications, the generated pictures can constitute works under the Copyright Law if they meet the requirement of originality. However, where the vendor has clearly informed the user of the copyright ownership, which is equivalent to a contractual agreement, the copyright of such works should belong to the vendor. It is not that works generated by AI applications built on open-source models have no copyright; rather, copyright ownership should be determined according to the provisions of the Copyright Law.
"At present, the biggest obstacle to formulating relevant laws may be how to balance technological development against ethical values," Wang Yuwei pointed out. On the one hand, the rapid development of AI technology may violate basic human ethical understandings and offend public order and good customs; on the other hand, the law cannot set overly strict standards for AI technology, which would hamper its further development.
Author: Qi Jian. Source: Tiger Sniff (Huxiu) APP. Original title: "ChatGPT is very powerful, but key issues remain unresolved."
Risk Warning and Disclaimer
Markets are risky; invest with caution. This article does not constitute personal investment advice, nor does it take into account the particular investment objectives, financial situation, or needs of individual users. Users should consider whether any opinions or conclusions expressed herein are appropriate to their particular circumstances. Invest accordingly at your own risk.