The conceptual fallacy of using “made by” when talking about AI

by admin

More and more often, the media and casual observers of the AI phenomenon refer to the results obtained through these tools as if they were the product of an autonomous will. The latest case in chronological order is that of ChatGPT, a rather dull natural-language processor used by the curious and by millenarians to obtain answers to epochal questions, as if it were the Pythia or the Oracle of Delphi. Inevitably, some reactions were scandalized that, in the past, a chatbot "held a heretical opinion" on responsibility for epochal tragedies such as Nazism, while others were self-reassuring about the fact that this is no longer the case. In the background loom the concerns, by now commonplace, about an AI that will one day dominate humans if it is not stopped in time with "special laws".

However, with all due respect to those who persist in the illusion of living in The Caves of Steel, perhaps in the company of the Bicentennial Man and a Replicant, Dall-e, Stable Diffusion, ChatGPT and all the variations on the theme are merely more sophisticated chisels, colors and brushes, and they remain just that: tools. They have no "consciousness", "awareness" or "free will" but function, contrary to popular belief, according to the decisions and will of those who built them. Thus Dall-e cannot be used to generate certain content, nor can it be made to interpret certain words, because it was programmed that way. Version 2.0 of Stable Diffusion was limited following legal claims by copyright holders, concerned about the (alleged) ease of producing content "similar" to protected works. ChatGPT provides Pontius-Pilate-like answers, essentially compilations of existing material, because, admittedly, it has been "trained" to avoid controversy.


The fact that these programs operate with a considerable degree of autonomy, producing "intelligent" results, is nothing new. Any product is "intelligent" because it is designed and built to be. To understand this, one need only read the simple, timeless words that Bruno Munari dedicated to design in Da cosa nasce cosa. A fork is no less "intelligent" than an AI platform; on the contrary, it is certainly more so, because unlike an AI it effectively achieves the purpose for which it was built, and at decidedly lower cost. Unlike the fork, an AI can function autonomously, but this does not mean, and this is the point, that it turns from "object" into "subject", as happens instead when one improperly uses "by" instead of "with".

"Made by" and "made with" are two radically different concepts: the first describes a human action, while the second refers to tools. The Pietà was carved "by" Michelangelo "with" hammer and chisel. The Last Supper was painted "by" Leonardo "with" brushes and colors. The Betrothed was written "by" Alessandro Manzoni "with" pen and paper. The comic strip on the vicissitudes of an AI was made "by" me "with" Dall-e.

Using "by" instead of "with" when talking about text-to-image systems, chatbots or AI-based music and sound generators is technically wrong and therefore false, and as classical logic teaches, ex falso quodlibet: any conclusion can be deduced from a falsehood. To say that a sound, an image or a text is made "by" a piece of software implies the risk, in fact the certainty, considering the positions expressed by legislators and experts, of confusing the creator of an intellectual expression with the tools he uses, and therefore of treating the tool as a "subject" holding autonomous "will" and "rights". In the use of "by" there is also a more or less unexpressed desire to control the demon through "technological exorcisms", which conveys the perception of possessing superior esoteric knowledge, when in the end all one has done is type some text into a form and press a button.


Using "with" puts things back in order, because it respects roles and, above all, responsibilities, doing justice to the alleged "legal issues", which in reality are just as non-existent as the "subjectivity of AI". A text-to-image system or a chatbot does not "own" the copyright on the results that, as tools, they have produced. Similarly, a platform that manages the safety of a vehicle is not "responsible" for its "choices", because control always remains in the hands of whoever designed, built and deployed it. And, to preempt the objection: if you cannot control how such a platform works, then quite simply you should not use it.

From the (apparently) irrelevant difference between "by" and "with" we thus arrive at the real nature of the problem afflicting the diffusion of AI: the attempt to shed responsibility for the consequences and damage caused by the product. Rooting the belief that things are made "by" an AI rather than "with" an AI means lifting the weight of responsibility from the shoulders of those who should bear it and offloading it onto an inanimate object which, as such, can have neither will nor guilt, but above all no rights.
