
The automation of our emotions – la Repubblica


Happiness, sadness, disgust, anger, surprise and fear. According to the emotion recognition market, these are the six human emotional states worth detecting, each assigned a variable percentage by software. All the others are secondary, resulting from combinations of these primary states.

All over the world we hear more and more often about emotion AI and affective computing, terms that indicate technologies, subsets of artificial intelligence, programmed to measure, understand and identify our emotions and to categorize our behavior. These are often computer vision and deep learning techniques capable of reading images, facial expressions, gaze direction, gestures and voice. They can also be integrated with readings of heart rate, body temperature and breathing, for an even more detailed response.
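To give a concrete sense of the mechanics, here is a minimal sketch using the open-source DeepFace library, chosen purely as an illustrative stand-in: the commercial systems discussed in this article keep their models proprietary, and the exact return format shown below can vary between library versions. A photograph of a face goes in; a percentage for each basic emotion, plus a “neutral” catch-all, comes out.

```python
# Illustrative sketch only, not the commercial software discussed in this article.
# Assumes the open-source DeepFace library (pip install deepface)
# and a local photo named "face.jpg".
from deepface import DeepFace

# Ask the library to estimate the emotional state of the face in the image.
results = DeepFace.analyze(img_path="face.jpg", actions=["emotion"])

# Recent versions return a list with one entry per detected face.
face = results[0] if isinstance(results, list) else results

# The classifier assigns a percentage to each basic category
# (happy, sad, angry, fear, surprise, disgust) plus "neutral".
for emotion, score in sorted(face["emotion"].items(), key=lambda kv: -kv[1]):
    print(f"{emotion}: {score:.1f}%")

print("dominant emotion:", face["dominant_emotion"])
```

A few lines of code are enough to reduce a face to six numbers and a label, which is precisely the kind of reduction this article questions.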

Can we still talk about dystopias and make constant references to sci-fi literature? No: doing so only makes us naive in the face of what has already been happening for years. Unfortunately, we have become accustomed to hearing about biometric identification, the recognition of what is probably the most personal data we have: the data that translate our faces, our gaze and our bodily identity into mathematical language. Their importance (and the need to defend them) derives from the fact that they identify us so precisely as individuals that we could almost compare them to DNA.

Precisely because they are so precious, they are now processed by intelligent software capable of recognizing them and connecting them to other data concerning us, such as personal records. Increasingly, this software is sold on the promise that it can infer our gender and our sexual, political and religious orientation from a scan of our face. The best-known technology that exploits our biometric data is facial recognition, the protagonist of many scandals that have emerged in recent years and of just as many scientific studies demonstrating its high error rates when used on Black people, migrants, LGBTQ+ people and women. Very often, people are denied rights because of errors made by this software, news that many of us have become used to reading every day.

Lately I find myself feeling defeated by what I study: continuous discrimination and enormous consequences for people’s lives caused by decisions that some human being has chosen to delegate to a machine. The pandemic and the rise of control and surveillance technologies at work, in schools and in many other spaces have exacerbated these trends, not only in the United States or China but also in the Netherlands, the United Kingdom and France; to this is added the awareness that the proposed European regulation on artificial intelligence will by no means be sufficient to stem these risks. It is tempting to think that any critical effort is useless and that we are up against something truly unstoppable, because the improvements are infinitely smaller than the risks.


But how is it possible that a practice so invasive of our freedoms is not being seriously debated? Because we are not really discussing it. As often happens, innovation and technological change are not subjects of public debate and are not treated as matters of extreme urgency, but only as something to be passively received. Automated emotion recognition has been around for years (Cogito, one of the first startups to offer it, was founded in 2007), and unlike better-known biometric applications such as facial recognition – which identifies our bodies – it claims to infer our inner emotional state. Its applications are increasingly integrated into critical aspects of daily life: in addition to advertising (as in the case of the famous billboards of Piccadilly Circus in London) and to law enforcement and judicial authorities, many schools and universities, including in Italy, have monitored students’ behavior in online classes, as reported by Privacy Network.

Who pays the price for this innovation? A few days ago, a BBC article reported how a sophisticated camera system equipped with artificial intelligence and facial recognition was tested on Uyghurs in Xinjiang to reveal their emotional states. The Chinese province is often described as a surveillance laboratory, where the Uyghur people are systematically oppressed by one of the most sophisticated control ecosystems in the world. People are forced to sit in front of these cameras, which identify their emotions and every micro-expression on their faces, with no consideration of how the coercive context can alter mental and emotional states.

In the last few weeks, I have been experimenting with some of these applications, developed by technology companies in Europe and the United States. In addition to software sold for marketing, recruiting and online profiling purposes, there are plenty of games and filters that track our facial expressions, such as Snapchat filters or custom emoji generators. Trying both the Emojify online tool, created to spark a discussion on the topic, and AffdexMe, the mobile application of the well-known company Affectiva, I found that my typical expressions do not fit easily into the six main emotions, and that the “neutral” component is always very high. I also found that I couldn’t mimic surprise, my favorite emotion: I spent at least an hour trying to fake the expression I would normally make, but there was no way. A few days later I tried again and the software recognized it immediately.


The inability to properly recognize our emotions, combined with the claim to be able to analyze even micro-expressions, immediately recalls phrenology and physiognomy, pseudoscientific disciplines that claim to derive (erroneous) hypotheses about internal states from external appearances. What this software, and above all those who market it, perpetrates is a reductionist theory of knowledge: Occam’s razor applied to every aspect of our lives, treating every phenomenon as a physical object. Our identities are increasingly simplified, and technology continues to be governed through the pure categories of determinism and neutrality, thus alienating it from our experience and creating the illusion that we cannot control it.

Those who have long worked on automated emotion recognition argue that it is a natural and necessary evolution of biometric technology. If human beings communicate with each other by watching one another and adapting their responses to the emotional feedback they receive, then a machine, to interact effectively with us, must also be able to determine how we feel and how we react to its inputs. The stated purpose is to improve interaction with our devices and make them more sensitive to our wishes and needs. In recent years I have talked to people in marketing and communication who, if they could, would apply these technologies instantly, without asking too many questions about their effectiveness. The emotion AI industry, worth 19.5 billion dollars last year, is expected to reach 37.1 billion by 2026. This is happening despite the fact that the technology enjoys weak consensus in society, as shown by a study carried out in the United Kingdom between 2015 and 2018, in which only 8% of people agreed to have data on their emotions linked to personal information.

The main problem is precisely their effectiveness, since they appear to be based on essentially flawed assumptions: the theories of the psychologist Paul Ekman, who in the 1960s argued for the first time that all human beings exhibit a small number of universal, innate and cross-cultural emotions. But in 2019 a group of researchers, after analyzing several studies on emotion recognition technologies, concluded that there is a lack of scientific evidence to “confidently deduce happiness from a smile, anger from a frowning face, or sadness from a pout”, and that emotions are expressed in many different ways depending on the cultural context. Yet these theories are used today to make decisions about people, regardless of how emotions – just like technologies – are actually constructed and co-produced by the societies in which we live. The tendency to ignore this aspect is very dangerous, and should push us to reject the idea that these tools can decide our opportunities or take their place in our public and private spaces.


Legislative efforts in this regard are not reassuring. Current legislation does not require consent to capture data on emotions in public places as long as the data is not personal (that is, not capable of identifying us as individuals), and the recent proposal for a European regulation on artificial intelligence does not explicitly address emotion recognition unless it is combined with real-time remote biometric identification. In addition to pushing for Europe to take these shortcomings into account and address them in the final text, we cannot wait for the AI Act to regulate these technologies. Just think of what the situation was two years ago to realize how many things could change before the adoption of the law: would we ever have thought that facial recognition would enter our lives as European citizens so forcefully? In this regard, there is a campaign we can sign to ask for a ban on biometric identification, not only in real time, to take back control over our faces and to start really discussing it. In Italy, the Reclaim Your Face coalition is carrying out numerous activities to raise awareness of these technologies, because when it comes to defending our data, timing is everything.

National pressure is needed to put these issues on the political agenda and push the authorities to step in and set clear limits and prohibitions (some have already been proposed in the Chamber of Deputies), and moral pressure must be applied to organizations as well as to legislators. Once we grant automated access to our feelings, there will be nothing left that we can keep to ourselves. Subjectivity will be transported entirely into the terrain of manipulative objectification, and the risk concerns us not only as individuals but also as a community: it is the generalizations we have to think about, the binary divisions typical of machines that promise to make us perfectly distinguishable and enclose us in categories. And of that, I’m sure, we’ll have to talk again soon.
