
They use ChatGPT, the most advanced chatbot in the world, to commit fraud

by admin

Santiago, Chile, January 13, 2023. Check Point Research, the Threat Intelligence division of Check Point Software Technologies Ltd. (NASDAQ: CHKP), has detected the first cases of cybercriminals using ChatGPT to develop malicious tools. In underground forums on the Dark Web, cybercriminals are creating infostealers and encryption tools and facilitating fraudulent activity. The researchers want to warn of attackers' growing interest in ChatGPT.

There has been much discussion about artificial intelligence (AI), a disruptive technology with the potential to dramatically improve our lives through personalized medicine, safer transportation, and other applications. It has great potential to help the cybersecurity industry speed up the development of new protection tools and validate some aspects of secure coding. However, the introduction of this new technology also carries a potential risk that must be taken into account.

The world experienced a 38% increase in cyberattacks in 2022 compared to 2021, with businesses attacked an average of 1,168 times per week. Education and healthcare were two of the most targeted sectors, with attacks paralyzing hospitals and schools. We may now see an exponential increase in cyberattacks due to ChatGPT and other AI models.

Check Point Research explains three recent cases that illustrate this danger and cybercriminals' growing interest in using ChatGPT to scale up and carry out malicious activities:

  • Infostealer creation: On December 29, 2022, a thread called “ChatGPT – Malware Benefits” appeared on a popular underground hacking forum. The thread's author revealed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups on common malware. These posts appeared to be teaching other, less technically skilled attackers how to use ChatGPT for malicious purposes, with real-life examples they could apply immediately.
  • Creating a multi-layer encryption tool: On December 21, 2022, a cybercriminal nicknamed USDoD published a Python script that he described as his “first script ever created.” When another cybercriminal commented that the code style resembled OpenAI code, USDoD confirmed that OpenAI had given him a “nice [helping] hand to finish the script with a nice scope.” This could mean that potential cybercriminals with little or no development skills could take advantage of ChatGPT to develop malicious tools and become attackers with technical capabilities. Of course, all of the code mentioned can be used in a benign way. However, the script can be modified to encrypt a computer without any user interaction; for example, the code could be turned into ransomware.
  • Using ChatGPT to facilitate fraudulent activity: In this case, a cybercriminal showed how to use ChatGPT to create a marketplace for scripts on the Dark Web. The primary role of such a marketplace in the illicit underground economy is to provide a platform for automated trading of illegal or stolen goods, such as stolen payment accounts or cards, malware, or even drugs and ammunition, with all payments made in cryptocurrency.
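The "multi-layer encryption tool" in the second case was a Python script that chained several encryption operations. CPR did not publish the actual script, so as a purely illustrative, benign sketch of the general idea (all names and the XOR-keystream scheme here are assumptions for illustration, not USDoD's code), a layered symmetric scheme might look like this:

```python
import base64
import hashlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Derive a keystream by hashing the key with an incrementing counter,
    # then XOR it with the data. XOR is symmetric, so the same call
    # both encrypts and decrypts a single layer.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def multilayer_encrypt(plaintext: bytes, keys: list[bytes]) -> bytes:
    # Apply one XOR layer per key, then base64-encode the result.
    data = plaintext
    for key in keys:
        data = xor_stream(data, key)
    return base64.b64encode(data)

def multilayer_decrypt(blob: bytes, keys: list[bytes]) -> bytes:
    # Undo the layers in reverse order.
    data = base64.b64decode(blob)
    for key in reversed(keys):
        data = xor_stream(data, key)
    return data
```

The point of the example is only to show why such a tool is dual-use: the same layering that legitimately protects a file is what ransomware applies to a victim's disk, with the keys withheld from the user.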

“Cybercriminals are finding ChatGPT attractive. In recent weeks we have seen evidence of cybercriminals starting to use it to write malicious code. ChatGPT has the potential to speed up the process by giving them a good starting point. ChatGPT can be used positively to help developers write code, but also for malicious purposes. Although the tools we discuss in this report are fairly basic, it is only a matter of time before more sophisticated cybercriminals improve the way they use AI-based tools. CPR will continue to investigate ChatGPT-related cybercrime in the coming weeks,” warns Gery Coronel, technical director of Check Point Software for Chile.
