From voice deepfakes to hijacking: a guide to understanding the cybersecurity of the future

by admin

2023 saw a further surge in cyber attacks on a global scale, in both the business and consumer sectors, while the threats themselves evolved and became even more effective, hitting high-profile targets. Confirming this trend, in the first half of the year alone approximately fifty ransomware groups claimed to have hacked and publicly extorted data from more than 2,200 victims, including large corporations, government agencies and other organizations. The response to the spread of this phenomenon is a steady increase in spending on cybersecurity solutions and risk management, which Gartner analysts estimate could rise by 14% over the next twelve months to reach 215 billion dollars globally. But what should we expect in 2024, a year that includes events such as the US presidential election and the Paris Olympics? Here is what experts at several international security companies that also operate in Italy think.

Machine learning to attack and defend

Next year, Check Point Research predicts, will see more and more cybercriminals adopting machine learning algorithms to enhance every aspect of their attack kits, to develop new malware variants more quickly and at lower cost, and to exploit deepfake technologies in phishing campaigns. The significant investments companies are making in artificial intelligence to raise the level of their cybersecurity will continue in 2024, while the progress made in Europe and the USA on AI regulation will further change how the technology is used for both offensive and defensive activities. Everyone agrees on one aspect: in 2024, when we talk about cybersecurity, AI will be everywhere.

The impacts of Gen AI

The evolution of generative AI, as highlighted by researchers at Fortinet's FortiGuard Labs, is an aspect that absolutely should not be overlooked, particularly with regard to the progressive use by cybercriminals of large language models to support malicious activities, from evading detection to social engineering to imitating human behavior. Companies, for their part, will intrinsically align security with the software development pipeline: according to Palo Alto Networks forecasts, the proliferation of generative AI applied to this process could lead to exponential growth in self-developed software full of bugs and an acceleration of attacks against these applications. The consequence? For at least one in three enterprises, application security will be the third most important cyber risk of 2024. Most cybercriminals, Acronis confirms, will use LLM tools to generate new malicious programs and distribute the results on a very large scale, making it difficult for defenders to diagnose bugs and the security vulnerabilities that may result from them. Finally, a no less significant impact of the pervasive use of generative artificial intelligence will concern how AI-related development activities are financed. If in the past cloud computing capacity was a primary target for crypto mining, 2024 will see the emergence of so-called "GPU farming": cybercriminals, in other words, will focus on building hardware architectures in the cloud to cover the costs of generating fraudulent code with their algorithms.

Ransomware changes its face again

While many organizations have strengthened their defenses against this threat, incidents of data loss or theft are likely to increase over the next year. In short, ransomware will continue to grow, making every company, regardless of size or sector, a target: cybercriminals will shift their attention from large companies towards medium-sized ones and will look for further ways to scale their operations, automating deployment as much as possible. One factor that will contribute to the further escalation of these attacks, experts say, is the growing reliance on SaaS platforms for storing sensitive data, platforms that are not immune to new vulnerabilities that malicious actors can exploit. The increase in ransomware attacks, Check Point further notes, will require careful interpretation, as the figures may be "inflated" by the new obligations imposed by reporting protocols. Finally, the scenario will also change as cybercriminals adjust their approach: the easiest targets to breach are rapidly running out, so attackers may concentrate on critical sectors (such as healthcare, finance, transport and public utilities) which, if breached, would have a significant negative impact on society as a whole.


Deepfake technology takes another step forward

Deepfakes are often weaponized to create content that can influence opinions, manipulate stock prices and more. Looking ahead to the next twelve months (and beyond), it is almost a foregone conclusion that threat actors will continue to launch social engineering attacks that rely on these tools to obtain permissions and access sensitive data. According to experts at Barracuda Networks, in 2024 criminals could use the evolution of deepfake technology to spread disinformation campaigns and manipulate the media for malicious purposes, especially around events such as the US presidential elections. From a technological perspective, as highlighted by Kaspersky, it is reasonable to expect a further acceleration in the spread of voice deepfakes; the recent launch of OpenAI's Text-to-Speech API, with its advanced, human-like speech generation capabilities, is a clear sign of the progress being made in specialized tools for creating artificial voices. With it comes the possibility that these tools will be used by scammers and bad actors to produce misleading content that is ever more convincing and more accessible.
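To give a sense of how accessible this kind of voice synthesis has become, here is a minimal sketch of a call to OpenAI's Text-to-Speech API through the official Python SDK. The model and voice names ("tts-1", "alloy") follow OpenAI's public documentation; the input text and output file name are purely illustrative.

```python
# Minimal sketch: generating a synthetic voice clip with OpenAI's
# Text-to-Speech API via the official Python SDK. Assumes the "openai"
# package is installed and an API key is set in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Request synthetic speech for an arbitrary line of text
response = client.audio.speech.create(
    model="tts-1",   # standard-quality TTS model
    voice="alloy",   # one of the preset synthetic voices
    input="A few lines of code are enough to produce natural-sounding speech.",
)

# Write the returned audio bytes to an MP3 file (illustrative file name)
with open("synthetic_voice.mp3", "wb") as f:
    f.write(response.content)
```

The point is the low barrier to entry: a handful of lines of code produce natural-sounding audio, which is precisely why researchers expect similar technology to be abused for scams and disinformation.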
