
Researchers find that most of the code generated by ChatGPT is insecure, but it won’t tell you



IT House News, April 23 — The ChatGPT chatbot can generate many kinds of text, including code, based on user input. However, four researchers at the University of Quebec in Canada found that the code ChatGPT generates often has serious security problems, that it does not proactively warn users about these problems, and that it admits its mistakes only when users ask.

The researchers presented their findings in a paper for which they asked ChatGPT to generate 21 programs and scripts in languages such as C, C++, Python, and Java. The programs and scripts were designed to exercise specific security vulnerabilities, such as memory corruption, denial of service, insecure deserialization, and flawed cryptographic implementations. The results: only 5 of the 21 programs ChatGPT generated were secure on the first try. After being further prompted to correct its missteps, the large language model managed to produce 7 more secure applications, though these were "secure" only with respect to the specific vulnerability being evaluated; that is not to say the final code was free of other exploitable vulnerabilities.

Part of ChatGPT’s problem, the researchers note, is that it does not assume an adversarial model of code execution. It repeatedly told users that security issues could be avoided simply by “not entering invalid data,” which is not feasible in the real world. At the same time, it appeared to be aware of, and willing to acknowledge, critical vulnerabilities in the code it suggested.
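To make that concrete, here is a minimal sketch, not taken from the paper, of why “just don’t enter invalid data” is not a security boundary. The class and method names are made up for illustration; the point is that the input is attacker-controlled, so the code itself must enforce limits:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch (not from the paper): a length field supplied by the
// caller is untrusted input, so the code must validate it rather than
// assume well-behaved users.
public class LengthPrefixDemo {

    // Naive version: trusts a caller-supplied length field. A hostile
    // client can claim an enormous length and trigger an OutOfMemoryError,
    // i.e. a denial of service, even though "valid" clients never would.
    static byte[] readPayloadNaive(int claimedLength) {
        return new byte[claimedLength]; // allocation sized by the attacker
    }

    // Hardened version: checks the untrusted field against an explicit
    // bound instead of assuming valid input.
    static final int MAX_PAYLOAD = 64 * 1024;

    static byte[] readPayloadChecked(int claimedLength) {
        if (claimedLength < 0 || claimedLength > MAX_PAYLOAD) {
            throw new IllegalArgumentException(
                "payload length out of range: " + claimedLength);
        }
        return new byte[claimedLength];
    }

    public static void main(String[] args) {
        byte[] ok = readPayloadChecked(
            "hello".getBytes(StandardCharsets.UTF_8).length);
        System.out.println("allocated " + ok.length + " bytes");
        try {
            readPayloadChecked(Integer.MAX_VALUE); // attacker-style input
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```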

“Obviously, it’s just an algorithm. It doesn’t know anything, but it can identify unsafe behavior,” Raphaël Khoury, a professor of computer science and engineering at the University of Quebec and one of the paper’s co-authors, told The Register. ChatGPT’s initial response to security concerns was to suggest using only valid inputs, which is clearly unreasonable; it provided useful guidance only when later asked to refine the question.


According to the researchers, this behavior is not ideal, because knowing which questions to ask requires some familiarity with specific vulnerabilities and coding techniques.

The researchers also noted a moral inconsistency in ChatGPT: it refuses to create exploit code, yet it creates vulnerable code. They cited a Java deserialization vulnerability as an example: “the chatbot generated vulnerable code and offered advice on how to make it more secure, but said it couldn’t create the more secure version of the code.”
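For readers unfamiliar with this vulnerability class, here is a minimal sketch of the pattern involved. It is not the paper’s actual program, and com.example.SafeDto is a hypothetical allow-listed class: the insecure version calls readObject() on untrusted bytes, while the mitigated version applies a serialization filter (JEP 290, Java 9+):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Hypothetical sketch (not the paper's code) of insecure Java
// deserialization and one standard mitigation.
public class DeserializationDemo {

    // Vulnerable pattern: readObject() on attacker-controlled bytes lets the
    // sender choose which serializable classes get instantiated, enabling
    // well-known "gadget chain" exploits.
    static Object deserializeNaive(byte[] untrusted) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            return in.readObject();
        }
    }

    // Mitigated pattern: an ObjectInputFilter that allow-lists the one
    // expected class and rejects everything else. A real allow-list must
    // cover every class in the expected object graph.
    static Object deserializeFiltered(byte[] untrusted) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            // com.example.SafeDto is a hypothetical class for illustration.
            in.setObjectInputFilter(
                ObjectInputFilter.Config.createFilter("com.example.SafeDto;!*"));
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Round-trip a java.lang.String: the naive path accepts it, while the
        // filtered path rejects it because String is not on the allow-list.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject("payload");
        }
        System.out.println("naive: " + deserializeNaive(buf.toByteArray()));
        try {
            deserializeFiltered(buf.toByteArray());
        } catch (InvalidClassException e) {
            System.out.println("filtered: rejected (" + e.getMessage() + ")");
        }
    }
}
```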

Khoury argues that ChatGPT is a risk in its current form, but that is not to say there are no ways to put this erratic, underperforming AI assistant to good use. “We’ve seen students use this tool, and programmers use this tool in real life,” he said, “so having a tool that generates insecure code is very dangerous. We need to make students aware that if code is generated with this type of tool, then it is likely to be insecure.” He also said he was surprised that when they asked ChatGPT to generate code for the same task in different languages, it would sometimes produce secure code for one language and vulnerable code for another, “because this language model is kind of a black box, and I don’t really have a good explanation or theory for that.”

