
Artificial intelligence? Let's touch iron


What developments can we foresee for artificial intelligence in the short and medium term? Let's try to address the question with a critical and informed approach, beyond the emotional wave that accompanies the spread of this technology.

To do this we don't need a textbook definition; it is enough to analyze the elements AI is made of. Regardless of how it has been conceived since its birth in the 1950s and how it will evolve, it rests on three pillars: algorithms, data and "iron", that is, the computers on which algorithms and data meet and become AI. So let's try to understand how algorithms will evolve, what will change in the management and use of data, and how hardware will support these developments.


Algorithms

If we look at the last three decades, when AI left the laboratories and entered our lives, we can say that the first turning point occurred in the 1990s. It came thanks to SVMs (Support Vector Machines) and kernel methods, systems based on metrics adapted to non-linear spaces, which significantly extended the scope and capabilities of AI, making it possible to effectively tackle complex problems that require more than a simple yes-or-no answer. This is how spam filters and biometric recognition were born, for example.
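To make the idea concrete, here is a minimal sketch, assuming the scikit-learn library (the article names no tools): on two concentric rings of points, which no straight line can separate, a linear SVM fails, while an RBF-kernel SVM, whose metric adapts to the non-linear geometry of the data, succeeds.

```python
# Minimal kernel-SVM sketch; scikit-learn is an assumption, the article
# does not name any library.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric rings of points: no straight line separates the classes.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear SVM can only draw a straight boundary; the RBF kernel
# implicitly maps the points into a space where the rings come apart.
linear_svm = SVC(kernel="linear").fit(X_train, y_train)
kernel_svm = SVC(kernel="rbf", gamma=2.0).fit(X_train, y_train)

print("linear kernel accuracy:", linear_svm.score(X_test, y_test))  # ~0.5
print("RBF kernel accuracy:   ", kernel_svm.score(X_test, y_test))  # ~1.0
```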

In the following years, we first cashed in on these innovations with practical applications, then extended them to other types of data, such as sounds, text, DNA strings, and space-time series of banking transactions.

Around 2009-2010, however, these methods began to show their limits, and problems emerged that were hard to overcome with the available techniques. Then, in 2012, the pioneering work of Hinton, Krizhevsky and Sutskever on convolutional networks marked a revolution in the processing of visual data, allowing algorithms to learn directly from the data, with highly scalable models and a large number of parameters. This time too we cashed in with practical applications, and then extended the field of action to other kinds of data.
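What "learning directly from the data" means can be shown with a minimal sketch, assuming PyTorch (again, the article names no framework): the convolutional filters below are not hand-designed features but parameters that training adjusts, and the model scales simply by adding layers, channels and data.

```python
# Tiny convolutional network sketch; PyTorch is an assumption, the
# article names no framework. The filters are learned, not hand-crafted.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learned filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a batch of eight fake 32x32 RGB images.
model = TinyConvNet()
logits = model(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 10])
```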


Again we ran into obstacles that forced a different approach: we moved from vector data to tokens, which led us into the era of Large Language Models and Transformers.
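To see what the shift from vectors to tokens amounts to, a toy sketch in plain Python: text becomes a sequence of integer token IDs, the units a Transformer actually consumes. The whitespace vocabulary here is invented purely for illustration; real LLM tokenizers use subword schemes such as byte-pair encoding.

```python
# Toy tokenizer: text -> integer token IDs, the input format of
# Transformers. The vocabulary is invented for illustration only.
sentence = "algorithms data and iron"

# Build a tiny vocabulary from the words we happen to see.
vocab = {word: idx for idx, word in enumerate(sorted(set(sentence.split())))}

def tokenize(text: str) -> list[int]:
    """Map each whitespace-separated word to its integer token ID."""
    return [vocab[word] for word in text.split()]

print(vocab)               # {'algorithms': 0, 'and': 1, 'data': 2, 'iron': 3}
print(tokenize(sentence))  # [0, 2, 1, 3]

# A Transformer maps each ID to a learned embedding vector and then
# processes the whole sequence with attention.
```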

From the standpoint of algorithms, then, we have had innovation cycles of 10-12 years; the cycle we are in now is just beginning, and I don't expect revolutions for at least another 6-8 years. A new turning point will come when we run into further obstacles and there is another conceptual barrier to overcome.

Data

The other component is data: the more, the better. For decades, those who built the web infrastructure allowed us to use it as a showcase to advertise our opinions, ourselves, our products as companies, and in exchange they used the data we produced. That tacit pact no longer holds: in 2024, access to web infrastructure is too low a price in return, as shown for example by the legal battles of Disney and the New York Times, which demand fair compensation for their data used in training AI models. The intellectual property issue is crucial to the development of AI, and I believe it is only a matter of time before companies are followed by individual citizens.

In the next 5-10 years new compensation models might emerge, such as free access to certain services, but also more complex strategies, perhaps mediated by government agreements: if the big tech companies don't pay enough taxes to the states, let them at least pay compensation for the data they use. What is certain is that web scraping, once common practice for training AI models, is becoming less and less ethically and legally sustainable.


Hardware

The third component of artificial intelligence is the machine in its concreteness, "iron", although of course it is actually silicon. To train and use algorithms we need computers with adequate computing capacity, electricity, and lots of water to cool the machines. Furthermore, chip production requires a very high level of precision, which few companies can guarantee: it is no accident that Nvidia is on its way to becoming the most highly valued hi-tech company in the world.


Thus the development of AI is becoming increasingly tied to the real possibility of accessing iron. And only those who can do so can think of creating a proprietary Large Language Model, a new generative AI capable of challenging the big players like OpenAI.

And let's think about education: how can we teach boys and girls what artificial intelligence is and how it works without giving them the chance to touch iron? Unfortunately, today this is possible only in very few areas of the world; in others, potential new AI talents will never get that chance and will not be able to grow; in still others they will have only limited access.

Italy

Let's take Italy: the example often cited is Leonardo, which with a computing capacity of 10⁵ computational units is the sixth most powerful computer in the world. It is invoked for basic and applied research, for large companies and state-owned companies: but will its power be enough for all this? Will we really be able to train all the new Italian talents approaching AI in our universities and in the National Doctorate in Artificial Intelligence, help all the new and existing AI startups, and support the adoption of AI by all Italian companies and the public administration, with just one piece of iron, however powerful?



It's true, something is moving: with PNRR funds, for example, the construction of a National Center for Supercomputing has begun. But to progress in the only possible direction, that of the digital economy, we also need something more: we must support research, training, retraining and industrial development. We must guarantee the security of computing centers, which must be protected like aqueducts, electricity grids and other essential infrastructure.

Without an overall strategy, the human capital of talent, skill and inventiveness that we possess in Italy will not be able to express itself. And if it does, it will do so abroad, where it will be able to touch iron: not out of superstition, but because this is the only possible way to count for something in the world of artificial intelligence.

* Lecturer at the Polytechnic of Turin, where she directs the AI Hub. She is one of the co-founders of the European Laboratory for Learning and Intelligent Systems (ELLIS) Society, and in 2021 she received an honorary doctorate from the University of Southern Denmark. She is co-founder and president of Focoos AI, a spin-off of the Polytechnic of Turin.

(text collected and adapted by Bruno Ruffilli)
