
What AI holds in store for security

by admin

Abuse of Chatbots?

Of course, such technology is also open to abuse – for example, by cybercriminals who commit fraud with the help of a chatbot. Where entire call centers of human employees run support scams today, a single computer or data center may one day suffice. Combined with another OpenAI technology called Vall-E, the possibilities are downright terrifying. Vall-E can mimic voices with uncanny accuracy from relatively little source material; a small voice sample is enough. Until recently, such capabilities were the domain of science fiction writers. In the here and now, this kind of computer-generated voice imitation could be used, for instance, to persuade mobile phone providers to disclose or change personal data.

Used correctly – or rather incorrectly – ChatGPT and Vall-E can become a nightmare for IT security, especially when deployed in industrial espionage. In the worst case, an email or a call from a colleague or the boss could no longer be trusted, because the imitated voice is so deceptively genuine that the ruse goes unnoticed over the phone. Does this perhaps force a step backwards, making it imperative to handle more matters face-to-face if you want to be sure you are not talking to an artificially intelligent version of the other person?

And in times of ever-improving deepfakes that can make people appear to do and say things they never said or did, this combination of technologies could even trigger wars. Against this backdrop, it would not be out of place to call this a risk technology whose use must be closely regulated. As early as 2018, both politicians and manufacturers made it clear that unambiguous regulations and laws are needed here. The European Commission, for example, is addressing the issue in a draft regulation.


The use of such systems for generating and spreading fake news is also entirely conceivable, because the texts ChatGPT produces read as human in an almost uncanny way. One of the reasons for this will receive attention later in this text.

ChatGPT is also able to write program code: a simple request is all it takes for the bot to produce the desired lines. This naturally raised concerns that software developers could become obsolete. There is no cause for alarm in this respect, however – not least because ChatGPT never learned to develop software. The system does not "know" what secure code looks like. The generated code may be functional, but its security is doubtful, and AI-generated source code may therefore contain vulnerabilities. Unsurprisingly, there have even been attempts to use ChatGPT to write ransomware. So far, AI-generated software is still in its infancy: no one has yet generated an entire software suite exclusively with an AI, and how AI-generated software could be recognized remains unclear. This topic alone very likely holds material for several PhD theses.
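To make the security point concrete, here is a constructed Python sketch – not actual ChatGPT output – of the kind of vulnerability that can slip into generated code: a naive database lookup that builds its SQL by string concatenation, next to the parameterized variant that closes the hole. The function names and the `users` table are illustrative assumptions.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Builds the query by string concatenation: attacker-controlled input
    # becomes part of the SQL statement itself (classic SQL injection).
    cursor = conn.execute("SELECT id FROM users WHERE name = '" + name + "'")
    return cursor.fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input strictly as data,
    # never as SQL, so the injection payload finds no matching row.
    cursor = conn.execute("SELECT id FROM users WHERE name = ?", (name,))
    return cursor.fetchall()

# Minimal in-memory database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # leaks every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # returns nothing: []
```

Both functions look equally "functional" on benign input, which is exactly why such flaws in generated code are easy to overlook without a security review.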
