
Someone stop the software houses – La Stampa

by admin

Yesterday’s news of flight disruptions in Miami “due to a computer glitch” in the airport’s radar system is neither the first nor the last incident caused by design errors, bugs, or unintended interactions with some other component of a system. From the explosion of the Ariane 5 rocket in 1996, to the much-hyped “Year 2000” bug, to the Boeing 737 Max disaster of 2019, to the incredible case of the “ghost measure” generated by the digital platform of Italy’s Council of State in November 2022, history is full of accidents caused by software malfunctions. More correctly, however, these events were not caused “by” the software but by those who conceived it, built it and put it into operation. Once again, though, even in the case of the Miami radars it is always “the computer’s fault”, as if these machines had a life of their own and were essentially uncontrollable. Evidently this is not true, but the narrative is by now so entrenched that it has obscured a trivial, common-sense consideration: software is a “product”, and whoever builds it should do so following safety rules and should be legally responsible for the consequences caused by the “artifact”.

While homes, cars, appliances and work tools are strictly regulated, programs are not. Apart from generic requirements such as those of the personal data protection regulation, cumbersome and useless “certifications” for “national security”, and little else, building software carries essentially no liability. Life, safety and the protection of people’s rights are therefore entrusted to products that are largely out of control. If there is a problem, the industry’s marketers say, it is a feature, not a bug; in any case, just wait for the next version and it will (in theory) disappear. Meanwhile, nobody pays.


Some sharp jurist will certainly point out that the EU and Italy treat software as a work of the intellect protected by copyright and that, therefore, one cannot speak of product liability. That may be so, but I have never read on the back cover of the Divine Comedy that “you will enjoy this book for 90 days from reading the first page”, on the plaque identifying a Caravaggio that “we do not guarantee that this painting will satisfy the needs of all visitors”, or on the booklet of Bitches Brew that “listening to this record is not permitted in mission-critical environments”.

So, applying the Duck Test: if software is made like a product, functions like a product and is used like a product, then it is a product. The sooner lawmakers realize this, the better for everyone. Unfortunately, this purely factual consideration has already been missed at the EU level. Parliament and Commission, prey to a sensational technical and cultural misunderstanding of the nature of “artificial intelligence”, are stubbornly moving towards the approval of a regulation that differentiates liability for the use of “smart” software from liability for the use of “stupid” software. In other words, it is as if only the former were capable of causing damage while the latter is, all in all, fine: after all, how do you blame a “stupid”? Here too the facts contradict the EU’s political choice: “stupid” software is also an extremely sophisticated object, capable of causing incalculable damage. Differentiating liability regimes according to technological choices rather than consequences is therefore simply wrong. What matters, in other words, is whether a piece of software is developed in a way that reduces the risk of causing damage. The “how” is completely irrelevant.


Even before the forthcoming AI regulation, the processing of personal data (including utterly banal lists of e-mail addresses) has for years been subject to a liability regime similar to that for the management of nuclear power plants (so-called “liability for dangerous activities”). The activity of the software developer, by contrast, has been exempt from any liability thanks to the shield, even from criminal liability, granted by copyright law. Although software houses in fact treat programs as if they were manufactured goods, they are very rigorous in invoking the right to secrecy granted by intellectual property legislation to escape not only, and not so much, liability for the damage caused by their products, as the higher costs that developing secure software would entail.

These considerations potentially open a Pandora’s box.

As long as the damage caused by software malfunctions is the result of human error, there is always liability for negligence (that it is practically impossible to prove is another matter). If, however, the placing on the market of defective “products” were the result of unscrupulous commercial strategies, the scenario would be totally different. We would be dealing with conduct of criminal relevance, justifying very severe penalties also, and above all, against top management rather than some obscure “senior software engineer”.

Only access to the source code and the development documentation would allow us to understand whether and where an error exists, and above all whether we are dealing with an error or with a deliberate choice to release a dangerous product. Since, however, this access can be legitimately denied to those who have suffered damage, it is clear that a claim for compensation will hardly succeed. If we then consider that software consists of hundreds of thousands or even millions of lines of code, even if we could gain access to it and afford to bear the related costs, it would be difficult to extract the information needed to assert one’s rights.

See also  Ornithopter from Dune: LEGO presents new set

A practical and cost-free solution would be to apply to software development the liability for dangerous activities initially envisaged (and later eliminated) for the processing of personal data. By inverting the so-called burden of proof, the injured party would only have to prove that they suffered damage because of the software, while it would be up to the producer, pardon, the “author”, to demonstrate that it did everything possible to avoid the damage.

Some will probably consider this proposal unacceptable because it “would slow down innovation” or “block the digital transition”. In some ways that could even be true, if the idea of innovation one has in mind is to flood the market with malfunctioning products and reduce those who use them to unwitting testers or, more often, guinea pigs, without even being willing to bear the adverse effects. If, on the other hand, progress means improving the quality of life of as many people as possible, then what is often presented as “innovation” is just greed to be satisfied at any cost, including endangering human life.

Practicing this peculiar vision of progress is not necessarily a problem; the important thing is that, as they say in Rome, we understand each other.
