
Generative AI self-censorship reaches and surpasses Orwellian Newspeak

by admin

In 2003, commenting on the work of the “Open-source Commission” set up by the government then in office, I wrote in the glorious (and sadly defunct) Linux&C magazine: “Generations of functional illiterates are being created, subservient to the uncritical use of a single platform. Users who already operate systems with no awareness of what they are doing. And so, when the spell checker says that the word ‘democracy’ is not in the dictionary, they will simply stop using it without asking questions. And stop thinking about it.”

Exactly twenty years have passed, and those words remain extraordinarily relevant when applied to what is happening with generative AI, before everyone’s eyes and amid everyone’s substantial indifference.

At the time, we worried about losing command of our native language through lazy, uncritical use of spelling and grammar checkers. Those tools were little more than toys compared with what is possible today, but the underlying issue is exactly the same: the appropriation of language, and therefore of ideas, by private companies.

Microsoft’s AI, ChatGPT/DALL-E 3 and Stable Diffusion 2.0 are just a few examples of how the filtering applied while building a generative AI model translates into interventions ranging from the uncritical application of rules worthy of the blindest bureaucracy to acts of outright preventive censorship.

An example of the first case is the refusal to generate images depicting copyright-protected content. On more than one occasion, using the OpenAI platform (for a fee), I was told that the prompt I had entered referred to protected works and therefore could not be processed. Except that my request was entirely legitimate and lawful: I wanted to use the images in my digital law course, and thus within the so-called “free uses” (fair use) permitted even under US law.


The point, to be clear, is not to claim a right to violate copyright, but to be able to exercise all the legitimate prerogatives the law guarantees. In other words, if ChatGPT must be built to respect copyright law, it must respect it fully, allowing the exercise of free uses as well, rather than limiting itself to protecting the interests of rights holders.

An example of the second case occurred on another occasion when, asking DALL-E to generate a “head shot”, I was reprimanded for using inappropriate language. Except that “head shot” is a perfectly legitimate and inoffensive term: in photography it identifies a particular framing used for portraits, and not, as the software’s obtuse automated moderation (or the upstream choice of whoever programmed it) decided, a “headshot” in the violent sense.

Of the two scenarios, this is the closer to what we hypothesized twenty years ago about the impact of spell checkers, and certainly the more dangerous: the choice to “filter” not only the data a model is trained on, so as to condition its output, but also to “moderate” the prompts themselves represents an unacceptable preventive limitation of the freedom to express one’s thoughts.

Of course, these systems can be used to violate laws and rights, and there is no question that both should be protected by sanctioning those who break them. But this cannot happen in a preventive, generalized way, and above all not in relation to “illicit” content (whose ban could at least be debated) but to perfectly legal content hypocritically classified as “inappropriate” on the basis of “ethical values” imposed by who knows whom and in the name of who knows what (except in those places where theocracy reigns, and where there is therefore no difference between ethics and law).


The most disturbing aspect of this preventive censorship (by default and by design, as personal data protection experts would say) is that it is practiced not at the behest of states or governments, as in China for example, but by private companies that worry less about the rights of people and businesses than about the risk of media criticism, shitstorms and legal action brought by individuals or by supervisory authorities such as the European data protection regulators.

We are therefore faced with yet another example of how Big Tech has arrogated to itself the right to decide what counts as a right, and how it may be exercised, outside and above any public debate.

This drift toward the systematic compression of constitutionally guaranteed rights is the result of replacing the culture of sanction with the culture of prohibition.

A great achievement of liberal (criminal) law is the idea that a person’s freedom extends to the point of being able to violate the rights of others, but that every violation must be punished. The law does not “forbid” killing; it punishes those who kill. That is the substantial difference between an ethical-religious imposition that applies “regardless”, and a principle of freedom under which a person must accept losing that freedom if they choose not to respect the rules.

Of course, a generative AI “without underwear” can sometimes be embarrassing, like Michelangelo’s David in Japan or the statues of the Capitoline Museums during a visit by foreign dignitaries, but the consequences of using such a tool are solely and exclusively the result (and the responsibility) of the choices of those who use it. Applying preventive justice, and private justice at that, is a way to strip the individual of responsibility and to establish the notion that respect for rights, including, above all, those of victims, can be exercised by and through a machine, with no one able to do anything about it.


“Computer says no,” Carol Beer of Little Britain invariably replied to every request from her customers; almost twenty years on, what was then “just” a scathing satire of British mores has turned out to be an accurate, dystopian prediction of the world we are letting others build.
