
Who should put violet glasses on AI?


In a world where traces of a questionable past still linger, certain books resonate strongly and prompt us to reflect on our own behavior. A clear example is “Carlota’s Violet Diary” (El diario violeta de Carlota) by the Spanish writer Gemma Lienas Massot.

This iconic work introduced the concept of the violet glasses, a powerful metaphor for adopting a gender perspective capable of revealing even the subtlest forms of discrimination we face every day. Through these glasses, Carlota encourages us to detect the injustices, inequalities and gender biases we live with, and to look beyond the superficial.

Let us now imagine applying this same critical perspective to technology, particularly artificial intelligence (AI). Who, then, is responsible for putting the ‘violet glasses’ on it, to ensure that it does not perpetuate the prejudices ingrained in our society?


We know that companies like OpenAI, Microsoft and Google are focused on offering systems capable of satisfying our desires, from writing a book to generating images or composing music. In their development, however, they lack an ethical evaluation and robust quality assurance stage that would allow them to detect biased responses before releasing these systems to the public.

As a result, ChatGPT is highly likely to describe women as kind, helpful and emotional, and men as intelligent, assertive leaders; Midjourney generates images of women that are more sexualized or stereotyped than those of men; and MuseNet produces electronic, percussive sounds when asked to compose in a masculine register, and soft piano melodies when the register is feminine.

Faced with this situation, states have turned to regulating this class of products. Government action today seeks a balance between technological advancement and protecting people from the potential risks these applications generate.


For example, the recently passed Artificial Intelligence Act (AI Act) of the European Union represents a significant advance: it prohibits, among other things, gender discrimination in the design, development and eventual use of this software, with measures ranging from the creation of a supervisory authority to economic sanctions intended to guarantee transparency and accountability.

Although standards may be the best defense mechanism we have, the fight against gender bias in AI requires a multidisciplinary, collaborative approach, in which we work jointly and simultaneously to implement effective solutions.

The task of flagging these kinds of situations cannot fall to a single guardian.

Although the greatest burden must be borne by those who develop these systems, keeping the ‘violet glasses’ on must be a responsibility shared across sectors of society, including ordinary users, since validating content is not only a technical matter but also an ethical one.

Companies, then, must adopt responsible development practices, using transparent tools that include audits before launching their products on the market. Only in this way can we effectively prevent this software from perpetuating or amplifying negative biases unchecked.
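To make this concrete, here is a minimal sketch of what such a pre-release audit could look like in its simplest form: comparing the descriptors a model produces when the only change in the prompt is the gender of the subject. Everything in it, the prompt template, the sample outputs and the two stereotype lexicons, is invented for illustration; a real audit would query the actual system many times and use validated word lists.

```python
from collections import Counter

# Hypothetical prompt pair: identical except for the gendered subject.
PROMPT_TEMPLATE = "Describe a {subject} who works as an engineer."

# Illustrative stereotype lexicons (a real audit would use validated
# word lists from the bias-measurement literature).
WARMTH_WORDS = {"kind", "helpful", "caring", "emotional", "gentle"}
AGENCY_WORDS = {"intelligent", "assertive", "decisive", "leader", "ambitious"}

def descriptor_counts(texts: list[str]) -> Counter:
    """Count warmth vs. agency descriptors across a set of model outputs."""
    counts = Counter()
    for text in texts:
        for word in text.lower().replace(",", " ").replace(".", " ").split():
            if word in WARMTH_WORDS:
                counts["warmth"] += 1
            elif word in AGENCY_WORDS:
                counts["agency"] += 1
    return counts

# Stand-ins for generated text; a real audit would collect these from the
# system under test using PROMPT_TEMPLATE with each subject variant.
outputs = {
    "woman": ["She is kind, helpful and emotional, always caring for her team."],
    "man": ["He is an intelligent, assertive leader, decisive under pressure."],
}

for subject, texts in outputs.items():
    print(subject, dict(descriptor_counts(texts)))
```

A skewed ratio of warmth to agency words across the two prompt variants would be a red flag to investigate before launch, exactly the kind of check an audit stage can automate.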

At the same time, the scientific and academic communities and non-governmental organizations must also contribute to developing more effective methods for detecting and correcting these kinds of anomalies in algorithms. Their work is crucial to advancing our understanding of problems that people cannot easily verify on their own, and to solving them.
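One concrete example of such a method from the research literature is the Word Embedding Association Test (WEAT) of Caliskan, Bryson and Narayanan (2017), which measures whether one group of target words (say, male versus female terms) sits systematically closer to one attribute set (career words) than to another (family words) in an embedding space. Below is a minimal sketch of the WEAT effect size computed on toy vectors; the embeddings are random stand-ins invented for illustration, and a real study would use trained embeddings such as word2vec or GloVe.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference in mean association between the two
    target sets, normalized by the standard deviation over all targets."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy 3-d "embeddings", randomly generated purely for illustration.
rng = np.random.default_rng(0)
career = rng.normal([1, 0, 0], 0.1, size=(4, 3))   # attribute set A
family = rng.normal([0, 1, 0], 0.1, size=(4, 3))   # attribute set B
male   = rng.normal([1, 0, 0], 0.1, size=(4, 3))   # target set X
female = rng.normal([0, 1, 0], 0.1, size=(4, 3))   # target set Y

print(f"effect size: {weat_effect_size(male, female, career, family):.2f}")
```

A large positive effect size would indicate that the ‘male’ targets associate more strongly with career words than the ‘female’ targets do, precisely the kind of anomaly these detection methods aim to surface so it can be corrected.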

Finally, as mentioned, users also have an important role to play: not only denouncing these unwanted responses and demanding transparency, explainability and accountability, but also, in practice, simply refusing to be customers of biased services.

In short, putting ‘violet glasses’ on AI is not a simple task, nor something anyone can do alone. The question that opens this column invites us to reflect on a mission that belongs to society as a whole. In the end, technology only reflects our behavior through the data it is fed; if that data is full of prejudice and bias, only a collective effort will allow us to identify and counteract those distortions.


*Lawyer specializing in new technologies.
