89% of American college students use ChatGPT to write homework, New York University professor warns: using AI is plagiarism

The survey found that 89% of American college students are already using ChatGPT for homework, yet 72% of them also support banning it in schools. Teachers’ attitudes, by contrast, are mixed, which is worth pondering.

ChatGPT is barely two months old, but the shockwaves it has sent through the world show no sign of subsiding.

Although teachers at many schools treat ChatGPT like a scourge, students keep using it no matter how many times it is banned.

According to a survey, 89% of college students in the United States now use ChatGPT for homework.

89%? Yes, and the real figure may be even higher.

New York’s education system had already banned ChatGPT outright, but how could a ban really stop resourceful students?

Sure enough, educators now have to face reality: students are already using ChatGPT to cheat, and getting away with it.

For example, Antony Aumann, a professor of philosophy at Northern Michigan University, discovered while grading essays himself that the first paper he read had been written with ChatGPT.

Online course provider Study.com asked 1,000 students over the age of 18 about their use of ChatGPT in the classroom.

The results are staggering, and they force us to face up to a new reality: AI has already woven itself into the fabric of human society, with broad and far-reaching consequences.

Interestingly, while nearly 90% of students use ChatGPT for homework at home, nearly three-quarters of students want ChatGPT banned at school.

In other words, the worry is not scarcity but unevenness: whatever the setting, students want everyone to start from the same line.

Either everyone uses it, or no one does.

Meanwhile, Study.com also surveyed more than 100 educators to gain insight into their feelings about ChatGPT.

It seems the teachers are far more open-minded than we imagined. So how do they plan to use ChatGPT?

According to the Study.com survey, 21% of teachers have already begun using ChatGPT to assist with their teaching.

Contrary to the common impression, most of the teachers surveyed have a relatively open attitude toward AI, and 66% believe ChatGPT can serve as a resource to help students.

By contrast, students themselves are far less trusting: as many as 72% think ChatGPT should be banned in schools.

Throughout human history, the arrival of something new has always been accompanied by controversy.

Clearly, when it comes to ChatGPT, a ‘newborn’ barely two months old, educators are far from unanimous.

Some teachers take a very clear stance on students using ChatGPT: it is pure cheating!


In the past few days, many schools in the United States have started the new term, and the hottest topic among teachers and students is undoubtedly ChatGPT.

At New York University, the ‘Academic Integrity’ section of the syllabus now explicitly classifies the use of AI as cheating.

In addition, the students also received warnings from their professors on the first day of class.

In a class at NYU’s Tisch School of the Arts, the professor put it bluntly on the syllabus—

  ‘Q: Is using ChatGPT or other AI tools that generate text or content considered cheating? Answer: Yes.’

Even in classes that don’t require essay writing, the professor raised the ChatGPT warning.

A macroeconomics syllabus reads: ‘We are intentionally limiting the time so that you will not be able to consult a book, ChatGPT, or any other resource and still complete the test. Students are not allowed to communicate with anyone (including ChatGPT) during the 24 hours of the test.’

Of course, ChatGPT is notoriously clumsy with math problems, so professors in the mathematics department are spared this particular worry.

Jenni Quilter, associate dean of the New York University College of Arts and Sciences, said that professors are now worried that students will use ChatGPT to cheat.

According to Quilter, incidents of students using ChatGPT occurred as early as December.

‘Using ChatGPT without permission carries the same consequences as any other academic plagiarism incident; penalties include redoing the assignment, losing points, and submitting a written self-reflection.’

David Levene, a professor of classics at New York University, said he is keeping a close eye out for any ChatGPT-related plagiarism.

‘I have clearly warned students that any use of ChatGPT in any form, unless they have my permission, is cheating.’

‘I also told them that I have tried using ChatGPT to write papers, and the results were a B- at best and an F at worst. So if they want anything better than a B-, they should avoid it like the plague!’

The concerns of NYU professors are not unfounded.

According to a survey conducted by the Stanford Daily, 17% of students have used ChatGPT for fall semester assignments and exams.

However, compared with Study.com’s 89% and 48%, Stanford’s ratio is obviously much lower.

Many professors worry that AI chatbots will have a disastrous impact on education.

‘Just because there’s a machine that helps me lift dumbbells doesn’t mean I’m going to be muscular,’ Johann Neem, a professor of history at Western Washington University, told The Wall Street Journal.

‘Similarly, having a machine that can write papers does not mean that my mind will develop.’

But other professors argue that ChatGPT’s powerful technology should be harnessed to prepare students for the new reality.

“I hope it inspires and educates you enough that you want to learn how to use these tools, not just learn to cheat better,” said Alex Lawrence, a professor at Weber State University.


And Ethan Mollick of the University of Pennsylvania says he hopes his students will use the technology to ‘write more’ and ‘write better’.

‘ChatGPT is a force multiplier for writing,’ Mollick adds. ‘I hope they use it.’

While it has set off a storm over academic integrity, many experts believe the technology marks only the beginning of a new era, and that AI writing tools are the future of learning.

Phillip Dawson, director of Deakin University’s Center for Digital Studies, said: ‘I think this is a momentous moment in human empowerment.’

‘From my perspective, students graduating five years from now will be able to do a lot more than students today, because they will have these AI tools.’

He compares a student writing a thesis to a pilot flying a modern airplane: ‘Yes, you have to learn to use all the instruments, and you need to know how they work, but you also need to be able to fly the plane when the instruments fail.’

Dr Cheryl Pope, a lecturer in the School of Computing and Mathematics at the University of Adelaide, said ChatGPT is great for writing first drafts, but it cannot replace human editing and fact-checking. ‘You need to understand the topic in order to comment on the answers it generates.’

ChatGPT can take you part of the way, but it will not get you a high score. Its possibilities are exciting, though, and they may push us toward a higher standard, just as we hold a two-hour written exam and a two-month paper to different expectations.

Another reason students turn to ChatGPT is that asking people for help costs social capital.

Asking someone a ‘stupid’ question can be embarrassing; with an AI, that worry simply disappears.

Where there is offense, there is defense: AI-cheating detection tools that can ease teachers’ worries have quickly appeared.

Recently, a research team from Stanford University proposed a new method for detecting AI-generated text – DetectGPT.

Generally speaking:

DetectGPT detects whether a piece of text comes from a given pre-trained language model by exploiting the local curvature of the model’s log-probability function: text generated by LLMs tends to occupy regions of negative curvature.

DetectGPT uses only the log probabilities computed by the model of interest, plus random perturbations produced by another general-purpose pre-trained language model (such as T5). There is no need to train a separate classifier, collect a dataset of real or generated passages, or watermark the generated text.
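To make this concrete, the quantity DetectGPT computes and thresholds is a ‘perturbation discrepancy’: the gap between the log probability of the original passage and the average log probability of its perturbed copies. A sketch of the criterion as we understand it from the paper, with p_θ the model of interest, q the T5-style perturbation distribution, and m perturbed samples:

$$
d(x, p_\theta, q) \;=\; \log p_\theta(x) \;-\; \mathbb{E}_{\tilde{x} \sim q(\cdot \mid x)}\big[\log p_\theta(\tilde{x})\big] \;\approx\; \log p_\theta(x) \;-\; \frac{1}{m}\sum_{i=1}^{m} \log p_\theta(\tilde{x}_i)
$$

A passage is flagged as machine-generated when this discrepancy exceeds a chosen threshold; for human-written text it tends to stay close to zero.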

DetectGPT outperforms existing zero-shot methods, notably improving the detection of fake news generated by the 20B-parameter GPT-NeoX from 0.81 AUROC to 0.95 AUROC.

In the paper’s illustration, machine-generated text tends to lie in regions of negative log-probability curvature: nearby perturbed samples have, on average, lower log probability under the model.


Human-written text, in contrast, does not systematically occupy regions of negative log-probability curvature.

Next, we want to determine whether a piece of text was produced by a specific LLM, such as GPT-3.

First, DetectGPT uses a general-purpose pre-trained model (such as T5) to slightly perturb the passage. It then compares the log probability of the original sample with that of each perturbed sample.

If the average log ratio is high, the sample is likely from the source model.
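Below is a minimal sketch of this procedure in Python, assuming the Hugging Face transformers library, with GPT-2 standing in for the model being checked and T5 supplying the perturbations. This illustrates the idea rather than reproducing the authors’ implementation (which, among other things, masks and rewrites several short spans per perturbation; only a single span is rewritten here).

```python
# Illustrative sketch of the DetectGPT procedure (not the authors' code).
# Assumes: pip install torch transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, T5ForConditionalGeneration

score_tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in for the model of interest
score_model = AutoModelForCausalLM.from_pretrained("gpt2")
perturb_tok = AutoTokenizer.from_pretrained("t5-small")    # general-purpose perturbation model
perturb_model = T5ForConditionalGeneration.from_pretrained("t5-small")


def log_prob(text: str) -> float:
    """Average per-token log probability of `text` under the scoring model."""
    ids = score_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = score_model(ids, labels=ids)   # loss = mean negative log-likelihood
    return -out.loss.item()


def perturb(text: str, span: int = 5) -> str:
    """Mask one short span of the text and let T5 rewrite it (one random perturbation)."""
    words = text.split()
    i = torch.randint(0, max(len(words) - span, 1), (1,)).item()
    masked = " ".join(words[:i] + ["<extra_id_0>"] + words[i + span:])
    ids = perturb_tok(masked, return_tensors="pt").input_ids
    fill_ids = perturb_model.generate(ids, max_new_tokens=2 * span, do_sample=True)
    fill = perturb_tok.decode(fill_ids[0], skip_special_tokens=True)
    return " ".join(words[:i] + fill.split() + words[i + span:])


def perturbation_discrepancy(text: str, n: int = 20) -> float:
    """High values suggest the text came from the scoring model; near zero suggests human text."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n)]
    return original - sum(perturbed) / len(perturbed)


if __name__ == "__main__":
    passage = "The quick brown fox jumps over the lazy dog and then takes a long nap in the sun."
    print(perturbation_discrepancy(passage))
```

In practice the scoring model should be the LLM you suspect produced the text (or the closest model whose log probabilities you can query), and the discrepancy would be compared against a threshold calibrated on known human-written passages.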

The specific test results are as follows:

On in-distribution text, supervised detection models trained on large datasets of real and generated text perform as well as or better than DetectGPT.

On new domains, however, such as PubMed medical texts and German-language news from WMT16, the zero-shot DetectGPT works out of the box, while the supervised detectors break down because of the large distribution shift.

However, DetectGPT itself has obvious limitations.

First, DetectGPT relies on a white-box assumption: it must be able to evaluate log probabilities under the model in question. For models hidden behind APIs (such as GPT-3), evaluating those probabilities also costs money.

Second, DetectGPT needs a reasonable perturbation function. In this work the authors used off-the-shelf mask-filling models such as T5 and mT5 (for non-English languages), but if existing models do not represent a given domain well, DetectGPT’s performance in that domain may suffer.

Finally, DetectGPT is more computationally intensive than other detection methods, because it must sample and score an entire set of perturbations for each candidate passage rather than score the passage just once.

Although DetectGPT has not yet been released publicly, that is not a big problem.

After all, there are plenty of tools out there that can be used out of the box.

GPTZero in particular is not only free but also performs remarkably well.

In our own testing, the latest version of GPTZero could even indicate clearly which paragraphs of a text were generated by AI and which were written by humans.

In principle, GPTZero relies mainly on ‘perplexity’ (how unpredictable the text is) and ‘burstiness’ (how much that unpredictability varies from sentence to sentence) as its indicators.

In each test, GPTZero also highlights the sentence with the highest perplexity, that is, the sentence that sounds most human.
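As a rough illustration of these two indicators (not GPTZero’s actual implementation, which is not public), the sketch below scores a text with GPT-2 as an assumed stand-in language model and splits sentences naively on periods:

```python
# Rough sketch of perplexity-and-burstiness scoring in the spirit of GPTZero.
# Assumes: pip install torch transformers
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")


def perplexity(sentence: str) -> float:
    """Perplexity of one sentence under GPT-2: higher means less predictable, more 'human'."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood per token
    return math.exp(loss.item())


def score_text(text: str):
    """Return average perplexity, burstiness, and the most human-sounding sentence."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) >= 3]
    ppls = [perplexity(s) for s in sentences]
    mean_ppl = sum(ppls) / len(ppls)              # low overall perplexity -> more AI-like
    burstiness = max(ppls) - min(ppls)            # low variation across sentences -> more AI-like
    most_human = sentences[ppls.index(max(ppls))] # the highest-perplexity sentence
    return mean_ppl, burstiness, most_human
```

A passage whose sentences all sit at uniformly low perplexity (low burstiness) would be flagged as likely AI-generated, while human writing tends to mix predictable and surprising sentences.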


This article is from the WeChat public account: Xinzhiyuan (ID: AI_era)

