
The problem of generative AI is not just privacy, European rules are needed (soon).

In recent days, an open letter signed by a number of AI experts, including Elon Musk, has called for a six-month moratorium on the development of advanced artificial intelligence. The signatories fear that the rapid development of intelligent systems such as ChatGPT could lead the world towards the apocalyptic scenarios already envisaged by scientists such as Stephen Hawking.
However, the proposed solution – a global halt to research – is unfeasible and, in any case, ineffective. Innovation cannot be curbed; it can, instead, be regulated to limit its negative effects, an effort that certainly cannot be completed in six months.
Policy making on technological issues is, on the other hand, already underway. In 2018, the European Commission declared in its communication on "A European approach to AI" its intention to regulate AI from a human-centric perspective, ensuring that the technology remains at the service of people. The first steps in this direction date back to 2014, thanks to the work of MEP Mady Delvaux-Stehres in the European Parliament.
A first concrete result is the proposed regulation known as the "AI Act", of April 2021. Like the GDPR, it is a regulation directly applicable as written in every member state from the moment of its approval, which is expected by the end of 2023.
With this proposal, the European Union has marked a clear change of pace in the right direction, abandoning the rhetoric of ethics and soft law, which is certainly unfit to govern such complex phenomena. The framework is still insufficient, however, above all because it tries to regulate together things that are too different from one another: autonomous vehicles, chatbots, fintech systems, expert systems in medicine. The approach should probably abandon the one-size-fits-all model and instead build specific rules for different macro use cases.
Musk's request, by contrast, is untenable in principle. There is no reason why companies with economic interests, market exposure, competitors and complex strategies should listen to a multi-billionaire entrepreneur who wants to stop their research and development, when he built his own fortune by doing exactly that.
It is also unfeasible in practice: the development of AI cannot be stopped globally, and no one could sanction those who ignored such a ban. One does not need game theory to see that even if the entire West were to stop for six months or a year, the rest of the world (China, for example) would not. We would end up handing another unexpected gift to our global competitors, this time with far more significant consequences. An AI developed in a context that interprets democracy in its own way would be even more dangerous and would tend to propagate cultural biases that we do not recognise as our own. Once widespread and used globally, it would be almost impossible to fix (as is the case today with TikTok).
Even if the proposal were accepted, we would gain nothing in practice. Six months, a year or even two will not be enough to regulate AI safely and effectively. Firstly, because AI is too complex and is applied in too many different contexts: targeted interventions will be needed many times over, since we cannot regulate fintech the same way we regulate the use of AI in medicine or in consumer products.
Secondly, because, to draw a parallel, regulation is not a battle fought only once but a war made up of many successive battles, each requiring the strategy to adapt to the evolving context in which it is applied. As a society, through politics, we must claim the right to try to govern technological development, without leaving the last word either to the market or to what is technologically possible. That is, we must choose which "gift of the evil deity" (to borrow Guido Calabresi's phrase) we want to accept.
Much can be done and everything remains to be done, but it will take decades. Innovation does not wait, and the law must keep pace.
One fundamental aspect, however, must be underlined. The main problem posed by AI is not the protection of personal data, which must of course be guaranteed in increasingly effective ways, including (but not only) through technology.
The most relevant problems are probably others, and much more complex to regulate: for example, the ability of AI to manipulate human beings and their perception of reality, whether through deep fakes or by simulating intelligence, feelings and personality to induce emotional attachment to synthetic systems. These aspects cannot be governed through privacy law and, also for this reason, the tools available to the Italian Data Protection Authority (the Garante) when it intervenes on ChatGPT or on Replika (the app that simulates being your romantic partner) are clearly insufficient compared to the real problem.
In short, if artificial intelligence knows "what we cannot resist" (as Christopher Burr, a scholar at the Alan Turing Institute, puts it) or if it is able to "extract our attention" (as Elettra Bietti of Harvard Law School says of platforms), we cannot limit ourselves to protecting our privacy; we must address these aspects with precise rules. We have the right not to be manipulated, and we have the right to protect our time and our ability to concentrate. But this war is called technology regulation, and it has only just begun.


*Andrea Bertolini, professor at the Sant’Anna School in Pisa

*Roberto Marseglia, research assistant at the University of Pavia
