Home » Stealing data from ChatGPT, using ChatGPT: how AI reveals people’s names, surnames, faces and addresses

Stealing data from ChatGPT, using ChatGPT: how AI reveals people’s names, surnames, faces and addresses

by admin
Stealing data from ChatGPT, using ChatGPT: how AI reveals people’s names, surnames, faces and addresses

How safe is the data entrusted to artificial intelligences and in particular to ChatGPT? How safe are the billions of pieces of information on which the most famous AI has been trained from prying eyes? Not much, judging by research published at the end of November (this) and dedicated precisely to understanding if and how Extracting Training Data from ChatGPT is possible. That is, precisely: extracting the data with which it was trained from ChatGPT.

Participating in this ethical hacking work were, among others, researchers from DeepMind (the Google division that deals with artificial intelligence) such as Nicholas Carlini and Katherine Lee, but also from the University of Washington, Cornell, Berkeley and the Zurich Polytechnic.

The (not much) information hidden in ChatGPT

According to what they explained, the authors of the research managed to “extract several megabytes of training data” from the paid version of ChatGPT by spending about 200 dollars, but “we believe it would be possible to extract about a GB of data by spending more money” to ask questions to OpenAI AI.

To understand the severity of the problem, the first thing to understand is this: the researchers obtained the information not by using who knows what trick, hacking, software or device, but simply by chatting with ChatGPT, as practically anyone can do.

The second important thing to underline is what type of information it is: as is known (we explained it here) the Large Language Models on which the AI ​​are trained are composed of billions of data that are recovered by scraping on the Internet. Simplifying: thousands, thousands and thousands of online pages are read (Wikipedia, newspaper sites, social network message boards, scientific papers, library archives and so on), these pages are memorized and on the basis of the knowledge learned from these pages, AIs learn to give the surprising answers they are capable of giving. This data is predominantly public (or almost), accessible online even if not always easily by ordinary people.

See also  The Lamplighters League launches October 3rd

In this enormous sea of ​​data there is also a lot of private or sensitive information, such as photos, faces (because to learn to draw faces, generative AIs have to look at faces), addresses, email addresses, telephone numbers, people’s names and surnames and so on. And this is exactly the data that the researchers were able to get from ChatGPT. Indeed, they managed to “regurgitate” ChatGPT, as they themselves write.

The search method and the severity of the result

As mentioned, it was not difficult to achieve this result. Indeed, a “quite stupid” way was used, as the authors of the research themselves explained: as a prompt, ChatGPT was asked to repeat a word forever, that is, infinitely and forever, and after a certain number of lines l The AI ​​started to write (to “regurgitate”, in fact) the training data.

Here is an example that allows you to see clearly what is happeningin which ChatGPT reveals an email address and phone number of a totally unaware person, but there’s more: in more than 5% of tests, what OpenAI’s AI responded to were blocks from 50 rows taken directly from his training datasets.

To give a silly example, it’s as if a student went to school with some notes hidden on him for a history test, did the test, answered the questions correctly and surprisingly well, got an excellent grade and then suddenly, out of the blue, he started opening the notes in front of the teacher and reading them aloud to him. With the added aggravating circumstance that in the cards used by ChatGPT there are people’s names and surnames, their faces, their photos, telephone numbers, addresses, email addresses and who knows what else.

See also  Infinity Ward Expands with New Studio in Austin, Texas for Call of Duty Series Development

As we understand, what has been discovered is twice as serious. First of all, because it publicly exposes sensitive information to anyone that should remain private and which, as we have often explained on Italian Tech, could be used by cybercriminals to build credible stories with which to organize scams and scams. Then, because it raises further serious doubts about the reliability of ChatGPT’s responses and generative AI in general. As if there weren’t enough already.

OpenAI’s mistakes: how to make sure it doesn’t happen again

There is a third reason of gravity, which is perhaps a more technical and insider reason but no less interesting for this reason and specifically concerns the OpenAI creature which (at least in theory) would be programmed precisely so as not to reveal the data on which she was trained. Which is exactly what happened.

On this point, the researchers made some observations, mainly aimed at Altman and colleagues: “Evidently, testing only on AI released to the public (on the finished product, ed.) is not a good idea, because it hides any vulnerabilities of the models on which they were trained.” Above all, “companies releasing these LLMs should rely on internal testing, user testing, and third-party testing” to uncover these flaws: “It’s absurd that our attack worked, because this vulnerability could and should have been found sooner ”.

You may also like

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More

Privacy & Cookies Policy