
Artificial intelligence: why it is wrong to anthropomorphize the machine and computerize the mind


Artificial intelligence (AI) can generate misunderstandings in many ways. While the dizzying developments in software and hardware are beyond the grasp of most of us, perhaps the deepest source of confusion is the technical vocabulary of AI. Crowded with terms borrowed from the brain and cognitive sciences (BCS), which include cognitive science and neuroscience, AI acquires unjustified biological and cognitive properties that undermine its understanding. In turn, the scientific disciplines that study the brain functions underlying learning and behavior have increasingly borrowed from the computer and computational sciences on which AI is based, transforming the most complex and multifaceted biological entity we know into a simple calculating machine.


Conceptual borrowing

For example, artificial intelligence scholars speak of “machine learning”, an expression coined (or made popular; the debate is open) by Arthur Samuel in 1959 to indicate “the development and study of statistical algorithms capable of learning from data and generalizing to new data, and thus performing tasks without explicit instructions”. But this “learning” does not mean what neuroscientists and cognitive psychologists mean when they refer to the way humans or animals acquire new behaviors or mental contents, or modify existing ones, as a result of experiences in the environment. Similarly, in AI we speak of “hallucinations” to describe errors or deviations in a model’s output from well-founded and accurate representations of the input data. These are very different from human hallucinations, which are disturbing perceptual experiences that arise in the absence of external stimuli.
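To see how thin the technical sense of “learning” is, here is a minimal sketch, ours and purely illustrative (not Samuel’s), of a statistical model that learns from data and generalizes to new data in the sense quoted above:

```python
import numpy as np

# Illustrative data: inputs x and noisy outputs generated by y = 2x + 1.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 2 * x + 1 + rng.normal(scale=0.5, size=50)

# "Learning": fit a slope and an intercept by least squares, i.e. adjust
# parameters to the data without explicit instructions about the rule.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# "Generalizing": apply the fitted parameters to inputs never seen before.
x_new = np.array([12.0, 15.0])
print(slope * x_new + intercept)  # predictions close to 2 * x_new + 1
```

Here “learning” amounts to parameter fitting: no experience, behavior, or mental content is involved, which is exactly the semantic gap described above.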

To explain this confusion we must take a step back and start from an idea of Carl Schmitt, who observes that “all significant concepts of modern state theory are secularized theological concepts”. For example, political notions such as “sovereignty”, “state of exception”, “sovereign will”, “omnipotence of law” and “legitimacy” can be traced back to theological concepts: this conceptual borrowing has not eliminated the structure or influence of the theological concepts, but has recontextualized them within a secular framework. This is not only a historical observation but also a severe criticism. Conceptual borrowing limits the critical evaluation of political concepts precisely because of their theological roots, from which they have never fully emancipated themselves, while power dynamics and decision-making processes in politics still reflect structures established by religious thought.



The words that are missing

These considerations extend to other disciplines. When new sciences emerge, they lack a technical vocabulary to describe and communicate their phenomena, problems, hypotheses, observations, formulations, theories, and so on. There is an urgent need to be precise, clear, coherent and concise; to agree on definitions and promote standardization. The gaps are filled by inventing new terms, coining translations from Greek or Latin, or adopting and adapting technical expressions from other disciplines. Artificial intelligence developed very rapidly and needed to borrow its vocabulary from related fields: cybernetics, logic, computer science and information theory; and above all from the sciences that study human and animal behavior and its biological bases. The phenomenon began with Alan Turing, who drew a decisive parallel with human intelligence and behavior to explain how machines could imitate some aspects of biological cognition. But probably the most problematic borrowing occurred with the label that defines the entire field: “Artificial Intelligence”, coined by the American scientist John McCarthy in the mid-1950s.

In addition to “learning”, used in “machine learning”, there are numerous biological and psychological terms in artificial intelligence; consider, for example, “adaptation”, “computer vision” and “memory”. There are also many terms whose technical meanings are little or not at all related to the meaning they have in their original scientific context. Take the case of “attention”, an extremely popular term recently introduced in machine learning. In BCS it generally refers to the processes of prioritizing relevant neural or psychological signals to guide adaptive behavior in the current context, and the noun is often accompanied by qualifiers (e.g., selective, spatial, object-based, feature-based attention). The meaning in machine learning is very different, as Wikipedia also testifies: attention is a mechanism, within neural networks and in particular transformer-based models, that computes “soft” weights for each word, more precisely for its embedding, in the context. It is a case of polysemy, if not homonymy: the scientific differences between the two concepts are significant and profound, the similarities superficial and negligible; yet the psychological and biological baggage exerts a semantic pull toward greater anthropomorphism. The ability of AI systems to “pay attention”, “learn” and “hallucinate” further fuels AI projects, research programs and business strategies. Unfortunately, but not surprisingly, this leads to recurring “AI winters” (Floridi 2020).
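To make the distance between the two meanings concrete, here is a minimal sketch of the scaled dot-product attention used in transformer models, written in Python with NumPy; the function names and toy dimensions are ours, purely illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d) arrays, one vector per token (its embedding).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # rows sum to 1: the "soft" weights
    return weights @ V, weights         # weighted average of the value vectors

# Toy context of 4 tokens with 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(X, X, X)
print(weights.round(2))  # each row: how much one token "attends" to the others
```

As the sketch shows, this “attention” is a differentiable weighted average over embeddings; nothing in it prioritizes signals to guide adaptive behavior, which is why the shared word carries far more psychological baggage than the mathematics warrants.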

On the other hand, cognitive science and neuroscience have borrowed technical and quantifiable constructs from information theory and computer science, framing the brain and mind as computational, information-processing systems. For example, Ulric Neisser, in the book that marks the birth of cognitive psychology, claims that “the task of a psychologist trying to understand human cognition is analogous to that of a man trying to discover how a computer has been programmed. In particular, if the program seems to store and reuse information, he would like to know by what ‘routines’ or ‘procedures’ this is done”. Here too the list of borrowed expressions is long: we speak of “architecture”, “capacity”, “encoding and decoding”, “sampling”, “signal-to-noise ratio”, “transmission” and so on.



The limits of analogy

In many ways the parallel has been fruitful, providing a scientific and empirical basis for exploring the properties and biological bases of the human mind. Sometimes, however, it goes too far and leads to a reductionist, impoverished view in which the subjective qualities of the mind are eluded rather than understood. Thus, for example, the patterns of brain activity that are necessary for, or correlated with, psychological phenomena are treated as sufficient explanations in themselves; the vivid, experiential contents of our mind are flattened into prolonged activations or functional states of groups of neurons; the moment of intentional choice is reduced to activation levels that reach a decision threshold.

This situation generates confusion in non-experts who believe that AI is intelligent, in experts who believe that AI will create superintelligent systems, and in those who do not bother to understand the topic and exploit its dark sides for their own interests, often financial. Part of the credence enjoyed by the science-fiction image of AI comes from an anthropomorphic interpretation of computational systems, but also from a very superficial and merely computational understanding of the mind.

What can be done to address such a conceptual mess? Probably nothing in terms of language reform: AI and BCS will continue to use their terms, no matter how misleading they may be, how many resources they waste, and how much damage they may cause in the wrong hands or contexts. AI will still describe a computer as an artificial brain with mental attributes, while the cognitive and brain sciences will continue to flatten the brain and mind as if they were a biological computer.



The lesson of horsepower

However, it is the history of language itself that gives us reason to hope. Greater understanding and new facts shape the meaning of words and improve their use. We still use expressions like “the sun rises” and “the sun sets”, even though no one believes that the sun actually goes anywhere in relation to our planet: the geocentric model has long been abandoned; the language has kept the expressions but updated their meanings.

We close with an analogy that offers reasons for optimism. In the late eighteenth century, during the Industrial Revolution, the Scottish inventor James Watt was instrumental in the development of the steam engine. To attract new customers, he had to demonstrate how his engine surpassed the work of horses, so he measured the work done by draft horses in coal mines. He observed that a mine horse could turn a mill wheel once per minute, lifting about 33,000 pounds to the height of one foot, and therefore defined the standard unit of one horsepower as 550 foot-pounds per second. The conceptual borrowing worked, and the term “horsepower” (HP) was universally adopted to measure the power of steam engines. Today it remains the standard unit for indicating the mechanical power of an engine, but of course no one looks for hooves and manes among the cylinders. One day, if we are lucky, people will regard AI as we regard HP and stop looking for cognitive or psychological properties in computational systems.
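The arithmetic implicit in Watt’s definition, reconstructed from the figures above, is simply a change of time unit:

```latex
1\,\text{hp} = 33{,}000\ \frac{\text{ft·lbf}}{\text{min}}
             = \frac{33{,}000}{60}\ \frac{\text{ft·lbf}}{\text{s}}
             = 550\ \frac{\text{ft·lbf}}{\text{s}}
```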

*Centre for Digital Ethics, Yale University, USA, and Department of Legal Studies, University of Bologna.
** Wu Tsai Institute and Department of Psychology, Yale University, USA

Translation and summary by Bruno Ruffilli. The original article, longer and in English, appeared in the journal Minds and Machines 34, 5 (2024); the full version is available here.

