
Generative AI like ChatGPT used to distribute malware

by admin

Since March, Meta's security experts have observed cybercriminal groups distributing malicious browser extensions and applications disguised as tools for ChatGPT and similar generative AI software.

At least ten malware strains, belonging to different families of malicious code and likely distributed by different groups, have already been identified, demonstrating attackers' growing interest in using generative AI software as a lure.

"Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise Internet accounts. For example, we have seen threat actors create malicious browser extensions available on official stores that claim to offer ChatGPT-related tools," reads Meta's Q1 2023 Security Report. "Indeed, some of these malicious extensions included working ChatGPT functionality alongside the malware. This is likely to avoid suspicion from the stores and from users."

The malicious code is hidden inside working software, whose execution kicks off the infection process. These applications are advertised through posts on social networks or shared on instant messaging platforms.

Meta revealed that it has blocked over 1,000 links shared across its platforms that pointed to these applications.
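Blocking lure links of this kind typically comes down to matching shared URLs against a list of known-bad indicators. The sketch below is a hypothetical, minimal illustration of that idea; the domains are invented examples, not real indicators, and this is not Meta's actual implementation.

```python
# Minimal sketch: screen a shared URL against a set of known-bad domains.
# BLOCKED_DOMAINS entries are invented placeholders, not real indicators.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"chatgpt-free-download.example", "openai-tools.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches, or is a subdomain of, a blocked domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://chatgpt-free-download.example/install"))  # True
print(is_blocked("https://example.org/chatgpt"))                    # False
```

A production system would of course use continuously updated threat-intelligence feeds rather than a static set, but the matching logic is the same.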

The IT giant has also reiterated the importance of sharing threat indicators, once discovered, with other companies operating social networking and social media platforms, in order to prevent the spread of this malicious software.

The rapid evolution of AI-powered technologies is attracting a growing number of malicious actors, which is why Meta recommends vigilance about the evolving threat landscape.


"ChatGPT is the new cryptocurrency," said Guy Rosen, Chief Information Security Officer of Meta, referring to the interest the topic has catalysed.

Finally, the report notes that the company's research teams are working on projects to develop generative-AI-based systems for detecting and blocking online disinformation campaigns.

Unfortunately, attackers could also use generative AI for other malicious purposes, for example creating phishing and spear-phishing emails whose content reads as legitimate to a human recipient, who may then be induced to share sensitive information.

Generative AI could also be used to create fake social media accounts that are coordinated to spread false news or promote various types of fraud. It can likewise be used to fabricate fake content (i.e. images, video, or audio) capable of manipulating public opinion on specific issues.

Finally, systems based on generative AI could assist expert programmers in developing malicious code.

Here we have only illustrated possible abuses of generative AI; it is essential to remember that, as Meta anticipates, the same technology can also be used for defensive purposes.

There are several projects underway to use these systems to prevent and detect attacks.
