The need for balance in facial recognition

by admin

In the world we live in, we are surrounded by sensors capable of detecting our movements, our geographical position, and some of our vital parameters, and of deriving from this information data about our behavior, our propensity to purchase certain products, and our state of health.

In some cases we ourselves want this data to be collected: for example, we need an app on our smartphone to know our geographical location, or we find it useful for a wearable device to monitor our heart rate during a training session.

In other cases, however, we hand over some of our data in exchange for access to services we perceive as free, but which actually use that data to build our commercial profile and target advertising more likely to convert into a purchase of the advertised product or service. This is the case with social networks and with many services we do not pay for directly with our credit card, but which we in fact sustain by allowing them to track and analyze our browsing for commercial purposes.

However, there is a third way that is totally out of our control: the situation we may suffer when devices managed by others, essentially cameras, try to recognize us entirely without our knowledge, analyze our behavior, and draw conclusions about our actions or our potential intentions, associating all this information with the images of our face, and therefore potentially with our identity, and storing everything within a behavioral profile far more extensive and complex than anything we knowingly grant to social networks.

This is a very different use of the artificial-intelligence and machine-learning algorithms normally employed as biometric authentication tools, for example to unlock our smartphone, to let us enter areas with controlled access, or to authorize an online payment. In those cases the use is perfectly lawful and, above all, fully consented to by the user. When the same technology is applied without the user's knowledge, or without their full awareness of what is happening, things can be much more complex and, in certain contexts, even very dangerous.

To better understand the difference, consider the security cameras used in a shopping center. If the goal is to detect, even automatically, suspicious behavior and report it to the security staff so that they can check that everything is in order, there is no particular problem, because no personal identification takes place. If, on the other hand, the faces of everyone who has previously tried to steal in that shopping center are stored in a database and the system tries to recognize those faces among the people entering, in order to deny them access, things become much thornier. Firstly, because the algorithms are not perfect: they could label as a "thief" a person who simply resembles a face in the knowledge base, or they could bar access to someone who did commit a theft in the past but has since paid their debt to justice.
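The false-positive risk described above can be made concrete with a toy sketch. This is not how any real deployed system works; the watchlist entries, embedding values, and threshold below are all invented for illustration. Face matchers typically compare numeric "embedding" vectors and flag anyone whose similarity to a stored face crosses a threshold, so a mere lookalike can trigger a match:

```python
# Toy illustration (NOT a real face-recognition system): matching is done by
# comparing face "embeddings" (numeric vectors) against a stored watchlist.
# All names, vectors, and the threshold are hypothetical.
import math

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical watchlist of embeddings of past offenders.
watchlist = {
    "offender_1": [0.9, 0.1, 0.3],
    "offender_2": [0.2, 0.8, 0.5],
}

THRESHOLD = 0.95  # chosen by the operator; lower values mean more false alarms

def matches_watchlist(visitor_embedding, threshold=THRESHOLD):
    """Return the first watchlist entry whose similarity exceeds the
    threshold, or None. The system has no notion of actual identity,
    only of similarity between vectors."""
    for name, stored in watchlist.items():
        if cosine_similarity(visitor_embedding, stored) >= threshold:
            return name
    return None

# An innocent visitor whose face merely resembles offender_1's is flagged:
lookalike = [0.88, 0.12, 0.28]
print(matches_watchlist(lookalike))  # prints "offender_1" - a false positive
```

The point of the sketch is that the decision reduces to a similarity score crossing a threshold: there is no step at which the system verifies who the person actually is.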

Expanding the scenario further, consider these tools applied to population control. With these technologies, any government could analyze the footage from a city's security cameras, learning in detail the behavior and movements of a large part of the population and obtaining a list of everyone who attended a certain protest, or of people with a certain political orientation. Going further, algorithms could be created (and in some cases this has been done) capable of recognizing individuals belonging to certain ethnic groups or minorities, with the aim of subjecting them to stricter controls or preventing them from accessing certain places, all on an ethnic basis.

However, these technologies can also serve noble and very useful purposes for the community: think, for example, of automatically identifying missing persons, or of helping doctors detect certain rare diseases early. They should not be demonized merely because they have the potential to be used harmfully; otherwise the same reasoning could be applied to any other technology, starting with the hammer in our toolbox.

What is needed is awareness, on the part of legislators, of the potential and the risks inherent in these technologies, and balance in deciding which fields of application are ethically acceptable and which, on the contrary, must be considered inadmissible.

As often happens, the problem is not technological, but linked to the impact that a certain technology can generate on the world around us.
