
Why is the Italian government on the side of those who don’t want to regulate AI?

by admin

For months now, we have been following the events surrounding the most recent developments in artificial intelligence, generative AI, and foundation models. Some people even experience it like a soap opera, especially given the recent changes at the top of OpenAI, which remind me more and more of certain episodes of the TV series Succession. At a certain point, we too become convinced that it is all a game of power and marketing, a story that repeats itself endlessly. That is the gossip; then there is reality.

In March, shortly after the release of GPT-4, a group of researchers and entrepreneurs published an open letter asking governments for a six-month pause in the development of "giant artificial intelligence experiments". They wrote that they were worried about the future of human beings, because the potential of such a large, revolutionary, intelligent tool, one that would soon surpass us, first needed to be better understood. Several things have happened since that letter. We understood, and tried to make people understand, that it is a gigantic bubble made up of market competition (which in the following months led to the birth of several alternative companies founded by the signatories of that letter, and to the recent changes of chair at OpenAI) and contradictory theories (among others, "doomerism" and "longtermism").

We tried to counter this harmful science fiction with public discussion of the real risks of artificial intelligence: the everyday ones that have a huge impact on people's lives, and which have existed since well before generative AI.

We asked Europe, which fortunately was already working to regulate this terrain, to establish clear, binding rules on foundation models. A very difficult undertaking, but the only way to guarantee both that these models respect the precarious balance of our societies and that a healthy market for these technologies develops. We do not believe in self-regulation by companies, and we think that for technology in particular it has never worked.


In reality, even US entrepreneurs, like Sam Altman himself, have at various times appealed to Europe, begging it to regulate them: it was the only way to avert the impending catastrophe. A gigantic contradiction for a market that wanted to expand dramatically while shouting "please regulate us, we have a technology that will destroy humanity!" (not the latest one, Q*, but the one before, which in any case was also supposed to destroy civilization, according to them). We apparently shared the same request, but with profoundly different motivations and methods: no catastrophe, simply the need to set requirements for an uncertain technology so that it is governed with care. And above all, the only interest these companies had in demanding strong regulation of the generative models they themselves developed was to make open-source development of those same models impossible.

Europe accepted the invitation and proposed additions to the draft text of the Regulation on Artificial Intelligence (AI Act), with specific rules for suppliers of foundation models, including the obligation to carry out independent audits, safety and cybersecurity testing, data governance measures, risk assessments and risk mitigation efforts.

Let's think about some use cases: generative models used to create images that can then be used to spread (dis)information, or models used to create texts for schools or to generate hate speech. At the end of October, there seemed to be a consensus between the Commission, Parliament and the Spanish Presidency of the Council on the type of rules to introduce for the foundation models with the greatest impact on society.

The AI Act, which sets out different rules for technologies based on their risk, is currently in the final stage of the legislative process, with the three main EU institutions meeting in so-called trilogues to hammer out the final provisions of the law. It is a phase complicated above all by the differing national interests represented in the Council. And in fact, in recent days there is news that some countries are backtracking, reneging on the consensus already reached on the rules for generative AI, with the intention of blocking negotiations until a new agreement is found, one leaning towards complete deregulation. Among these countries is Italy.


During a meeting of a technical working group of the EU Council of Ministers, the representatives of France, Germany and Italy opposed any type of regulation of foundation models. As Connor Dunlop, EU Policy Lead at the Ada Lovelace Institute in London, explains on Euractiv, this new position is driven by the lobbying efforts of a French start-up, Mistral, supported by the former French Secretary of State for Digital Cédric O, and of Germany's leading AI company, Aleph Alpha.

Italy's interests, by contrast, are neither clear nor as transparent as the others': apparently, we are not protecting any business made in Italy. So why?

For months our government, and the Prime Minister herself, has spoken of artificial intelligence as a crucial issue. Two commissions have been appointed: one chaired by Giuliano Amato and one convened by the Undersecretary for Innovation, Alessio Butti. Both involve the participation of experts: the first is responsible for providing opinions and guidelines on the relationship between algorithms, media and information, while the second, wanted by Butti, is tasked with relaunching the now forgotten national strategy on AI.

Despite this apparent commitment, confirmed during the press conferences dedicated to the G7, Italy continues not to adopt a public position on political decisions about technology. We are all up to date on what is happening to companies in Silicon Valley, yet we do not know that our country is pushing to entrust control over artificial intelligence to a governmental (and not independent) authority. And now it sides, at the European level, with those who ask that all responsibility be lifted from the companies that provide generative models. Regulation of foundation models is fundamental because they require obligations different from those already foreseen by the AI Act, which was designed for different models deployed in specific environments: models that are not as scalable or systemic and do not carry such exorbitant development and investment costs. It is essential to evaluate and manage the risks arising from these characteristics comprehensively, throughout the value chain. Furthermore, compared to other software, it is even more complicated to fix foundation models ex post, once problems are found.


Responsibility must rest in the hands of those who have the ability and the means to deal with these risks: very few companies. And in fact, the regulatory proposal that Italy is helping to block would have a very narrow scope of application, essentially limited to fewer than 20 entities in the world (those with a capitalization exceeding 100 million dollars). If these actors were effectively regulated, thousands of downstream implementers could benefit. And indeed the DIGITAL SME Alliance, representing 45,000 ICT SMEs in Europe, has called for a fair attribution of responsibility along the value chain. This proves that innovation and rules are not at all in conflict. Large developers can and should take responsibility for risk management, and we cannot accept voluntary, and therefore unenforceable, codes of conduct as the guarantee over this power over the future.

The European Union has a long history of regulating technologies that pose risks to public safety, and artificial intelligence cannot be an exception. For this reason, the Italian organizations Privacy Network, Hermes Center for Transparency & Digital Human Rights, The Good Lobby Italia, StraLI, Period Think Tank and GenPol Insights have sent a letter to more than 500 actors, including institutional ones, asking for a repositioning of the Italian government in line with the requests of experts and civil society, and for more details on its position to be reported to Parliament.
