
Can the Guarantor stop artificial intelligence?

by admin

The reasons given raise legal questions, while the decision itself creates a clear discrimination against ChatGPT compared with other AI-based services.

The Guarantor's decision to block ChatGPT in Italy because, in its view, the service does not comply with privacy regulations has prompted reactions and judgments of various kinds. Two reasons are given: the handling of personal data and the absence of a system for verifying the age of minors.

In the provision, the Privacy Guarantor notes the lack of information given to users and to all interested parties whose data is collected by OpenAI, but above all the absence of a legal basis justifying the mass collection and storage of personal data used to “train” the algorithms underlying the platform. The Guarantor reinforces its decision with the observation that “the information provided by ChatGPT does not always correspond to the real data, thus determining an inaccurate treatment of personal data”.

Reading the Guarantor's communication, the feedback that users can give ChatGPT to comment on or correct the answers it provides, which is used to improve the language model on which it operates, is therefore considered personal data.

According to the Guarantor, however, “personal data is information that identifies or makes identifiable, directly or indirectly, a natural person, and that can provide details on their characteristics, habits, lifestyle, personal relationships, state of health, economic situation, and so on”. The question I ask myself is whether correcting a wrong answer, for example about what a CUP is (which happened to me), amounts to providing personal data. In this case, I do not believe that explaining that a CUP is a system that allows users to book medical exams and visits reveals my habits, my lifestyle, my personal relationships, my health, or my economic status.


According to the Guarantor, a legal basis is needed to do this, that is, a law on artificial intelligence. The Guarantor seems to ignore, or fail to consider, that we have been living for some time in a world where AI is already present and already provides services to us as users. The content recommendations of any streaming service come from AI trained on the choices we make. Searching photos by subject is possible thanks to training on our images, just as traffic information or the crowding of public venues comes from acquiring and processing our movements. And what about social media?

So I ask the Guarantor: on what legal basis do these services rest? Do we want to block them too?

As for the absence of “any filter for verifying the age of users”, which “exposes minors to answers that are absolutely unsuitable for their degree of development and self-awareness”, I would ask the Guarantor what protection exists on the web for children under the age of 13 with respect to the information they can search for and consult on the many sites that have no filters and contain inaccurate or unsuitable content. Is ChatGPT more harmful to a young person than a porn site?

It is not clear what criterion the Guarantor uses to formulate its measures. The one concerning ChatGPT places Italy on a short list of countries that have banned it, countries whose reasons have nothing to do with privacy and everything to do with controlling sources of information.
