Artificial intelligence, the responsibility must be attributed to the producers

by admin

Well-designed market rules do not slow innovation; they encourage it, because they coordinate the actions of different operators across complex supply chains and ecosystems for the benefit, at least in theory, of a common good. The case of generative artificial intelligence and large language models, such as ChatGPT, is particularly instructive. An informed discussion is urgent, given that in recent days the "trilogue" between the Council, the Parliament and the European Commission will decide the fate of the AI Act, a proposal for systemic regulation of the AI market.

Over 300 Italian scientists and experts have contributed an open letter to clarify the situation and to urge the government to pay maximum attention to this matter.

Generative language models such as GPT-4, of enormous complexity, are obtained by training on vast data resources drawn from varied sources (web pages, books, social media, and more). They have demonstrated impressive performance on a wide variety of language tasks. ChatGPT introduced such models to the general public worldwide, while systems such as Stable Diffusion and Midjourney have revolutionized the creation of images from textual descriptions.

These generalist, pre-trained generative models can be used by developers of a myriad of specialized applications in different domains: education, health, science, technology, industry, public administration, and so on. They fuel an innovative ecosystem and value chain, with disruptive effects on society and the economy.

On the other hand, these generative models are the product of a recent and still partially immature technology, and they show clear gaps in reliability and safety: a lack of transparency about the training data and its origin, the presence of biases and unpredictable errors (hallucinations), ease of use for manipulative purposes (the production of misinformation), and the difficulty of interpreting or explaining the answers they produce and the errors they make. The manufacturers themselves have expressed concerns on the matter.
