
Zscaler, AI transforms social engineering schemes

by admin

With the arrival of AI, digital security is transforming and traditional social engineering schemes are becoming even more insidious. Here is Zscaler's view.

In the security world, we are starting to see how old attack patterns are being reused with new technologies.

When AI is used creatively

An example: "deepfake" is a term increasingly seen in newspapers to describe digitally manipulated videos used to recreate a person's image or falsify an identity. The latest deepfake case, in which a $25 million wire transfer was authorized following a spoofed video call, has caught attention for several reasons. First of all, because of the amount of money cybercriminals managed to steal with a single spoofed video call. In itself, the operational scheme used to deceive the victim is nothing new. However, this deepfake case once again demonstrated how sophisticated the level of forgery becomes when AI is used creatively.

The fear of new things

Generally, people fear a relatively new technology such as AI. This is because they cannot immediately grasp its full potential and are afraid of what they do not know. Likewise, technological developments scare people when they perceive them as a threat to their sense of security or their professional life, for example the possibility of losing one's job to artificial intelligence.

How AI transforms traditional social engineering schemes

The social engineering techniques used by cybercriminals are constantly evolving, and criminals are usually faster at turning new technologies to their advantage than security companies are at protecting their victims. We find examples of this in the not-too-distant past. In the days of dial-up connectivity, common malware would take over a modem in the middle of the night and dial a premium-rate number, resulting in high bills. A few years ago, a series of malicious Android applications hijacked cell phones to dial premium-rate numbers for quick and easy money, essentially a modern form of the old modem-dialer tactic. Cryptominers that exploit the computing power of infected systems represent the next step in this evolution.

The human risk factor


History has shown us a number of examples of old social engineering tactics being reused. The technique of faking the voice of a high-level executive by reusing publicly available audio clips to pressure users into taking action is already quite well known. The forgery of video sessions showing several people on a live, interactive call, however, demonstrates the new (and scary) level of fakery cybercriminals have reached. This has sown a new level of fear regarding the technological evolution of AI. It is the perfect demonstration of how easily humans can be tricked or coerced into action, and how cybercriminals can exploit that to their advantage.

Lack of ad hoc training

This attack also highlights how new technology allows cybercriminals to carry out the usual schemes, but in a more efficient way. And, of course, bad actors are taking advantage of this. Unfortunately, there is not yet a general awareness of the constant evolution of social engineering techniques. The general public does not follow security news and believes these types of attacks will never affect them. This is what makes traditional, effective IT security awareness training difficult. Users, as individuals, do not believe they can be targeted, so when it happens they are unprepared and fall victim to a social engineering attack.

Human beings are not machines

In the wake of this recent attack, questions have also been raised about whether an employee would have any chance of realizing a call was fake, if AI really is that good at making these videos look realistic. The fact is that humans are not machines and, as the first line of defense within a company, will always be a risk factor. This is because their level of security awareness will vary, regardless of how good the internal training process may be.


Artificial intelligence never rests

Let's imagine a user who has had a bad evening or returned home late from a business trip or a sporting event. The next day they may simply not be as focused on detecting modern social engineering techniques or paying attention to details. Artificial intelligence, on the other hand, does not have bad days: its mode of operation remains constant.

Traditional social engineering schemes, how AI transforms them

The fact that these types of schemes continue to prove effective demonstrates that companies have not yet adapted their security and organizational processes to manage them. One way to counteract deepfake videos starts at the (security) process level. The first idea that comes to mind is simple: have video conferencing systems include a feature to authenticate a logged-in user as a human. A simple plugin could do the trick, employing two-factor authentication to verify identity within Zoom or Teams, for example. Such an API would hopefully be quite easy to develop and would be a huge step forward in preventing spoofing attacks, even over the phone.
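As an illustration of that idea, here is a minimal sketch of the verification step such a plugin might perform. The `totp` function follows the standard RFC 6238 scheme, but the `verify_participant` hook, its name, and the idea of gating call admission on it are assumptions for this sketch, not a real Zoom or Teams API.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Standard time-based one-time password (RFC 6238, SHA-1, 30 s step)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_participant(secret_b32, submitted_code, at=None):
    """Hypothetical plugin hook: admit a call participant only if the
    one-time code they type matches the one their enrolled device shows."""
    return hmac.compare_digest(totp(secret_b32, at), submitted_code)
```

A deepfaked participant can imitate a face and a voice, but cannot produce the code from a colleague's enrolled authenticator device.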

Use technology to stay ahead of criminals

Furthermore, the cultural approach that fears artificial intelligence must change. AI is amazing technology, not only a tool for misuse; society just needs to understand its limitations. AI can actually be deployed to stop these types of modern attacks if security managers learn to control the problem and use technology to stay ahead of cybercriminals. Deception-based technologies already exist, and AI can be used to detect anomalies much more quickly and effectively, demonstrating its positive potential. From a broader perspective, adopting a Zero Trust approach to security can enable companies to continuously improve their security posture across processes.
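As a toy illustration of the anomaly detection the text alludes to, a simple statistical baseline check (a deliberate stand-in, not any Zscaler product) can already surface behavior that deviates strongly from the norm:

```python
import statistics

def anomalies(values, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean.

    A deliberately simple stand-in for ML-based anomaly detection; real
    products model far richer behavioral baselines than a single mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation in the baseline, nothing to flag
    return [v for v in values if abs(v - mean) / stdev > z_threshold]
```

Fed, say, a history of wire-transfer amounts, a single extreme outlier stands out immediately even with this crude approach.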

The Identity Threat Detection and Response solution


Zero Trust solutions can help not only at the connectivity level but also in improving security workflows, for example by checking whether all participants in a call are authenticated against an internal directory. Zscaler's Identity Threat Detection and Response (ITDR) solution mitigates threats that target user identities. With the new service, identity risk becomes quantifiable, misconfigurations are detected, and real-time monitoring of privilege escalation helps prevent breaches.
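To make the directory check concrete, here is a hedged sketch of vetting a call roster against an internal directory. The directory contents, domain, and function names are illustrative assumptions, not Zscaler's ITDR API.

```python
ALLOWED_DOMAIN = "example.com"  # assumed corporate domain

# Stand-in for an internal directory: e-mail address -> expected display name.
DIRECTORY = {
    "cfo@example.com": "Dana Rossi",
    "it.admin@example.com": "Sam Lee",
}

def vet_participants(roster):
    """Split a call roster of (display_name, email) pairs into participants
    whose identity is backed by the directory and those to be flagged."""
    verified, flagged = [], []
    for name, email in roster:
        known = DIRECTORY.get(email) == name
        internal = email.endswith("@" + ALLOWED_DOMAIN)
        (verified if known and internal else flagged).append((name, email))
    return verified, flagged
```

An impostor who clones an executive's face but joins from an unknown account would land in the flagged list before the call even starts.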

Why and how AI transforms traditional social engineering schemes

Finally, returning to the initial example of the successful deepfake, it is hard to believe that so much money can be transferred out of a company without verification processes operating in the background. It is critical that companies assess the overall risk level of these processes within their infrastructure. If solid administrative processes were put in place to reduce risks, not only from a security perspective but also in operational processes such as payment authentication, the barriers against attacks would rise significantly. Not everything needs to be improved by a technological solution. Sometimes a new process in which two people have to sign off on a funds transfer can be the step that saves a company from losing $25 million.
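The dual sign-off rule described above (the "four eyes" principle) fits in a few lines; the threshold and role names here are assumptions for illustration:

```python
APPROVAL_THRESHOLD = 10_000  # assumed cutoff above which two sign-offs are required

def transfer_allowed(amount, approvers):
    """Release a transfer only if enough *distinct* people have signed off.

    Deduplicating approvers means one person approving twice counts once,
    so a single deepfaked authorization can never move a large sum alone."""
    required = 2 if amount > APPROVAL_THRESHOLD else 1
    return len(set(approvers)) >= required
```

Under this rule, the $25 million transfer from the opening example would have stalled until a second, independent person confirmed it.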
