
Discriminations, prejudices and energy consumption, the dark side of generative AI

by admin


Tina Eliassi-Rad is an American scientist and a professor at Northeastern University in Boston, where she studies the evolutionary scenarios of artificial intelligence. In June she received the Lagrange Prize, the most important international award for the science of complex systems and data, established and financed by the Crt Foundation (Turin Savings Bank) and coordinated by the Isi Foundation (Institute for Scientific Interchange). Internationally, she is considered a critical voice on AI.

What are some of the potential dangers or risks associated with artificial intelligence?

The potential dangers and risks associated with the use of AI technology are already present. These include lack of accountability, exploitation of workers, further concentration of power, unchecked surveillance, unregulated automation and worsening climate change. Take climate change, for example. Big tech companies do not disclose the carbon footprint of their generative AI technology, such as OpenAI’s ChatGPT. However, the best estimates suggest that training ChatGPT emitted 500 metric tons of carbon dioxide, equivalent to more than a million miles driven by the average gasoline-powered car. If we have many of these generative AI technologies, their contribution to climate change cannot be ignored.
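As a rough sanity check on that comparison (an assumption on our part, not a figure from the interview), one can use the commonly cited estimate of about 400 grams of CO2 per mile for an average gasoline-powered car:

```python
# Back-of-the-envelope check of the "million miles" comparison above.
# Assumption: ~400 g CO2 per mile for an average gasoline car (a commonly
# cited US EPA figure); actual emissions vary by vehicle and fuel.

training_emissions_tons = 500          # estimated CO2 from training, metric tons
car_emissions_g_per_mile = 400         # assumed grams of CO2 per mile

training_emissions_g = training_emissions_tons * 1_000_000  # metric tons -> grams
equivalent_miles = training_emissions_g / car_emissions_g_per_mile

print(f"{equivalent_miles:,.0f} miles")  # -> 1,250,000 miles, i.e. over a million
```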

How can AI systems be vulnerable to bias and discrimination, and what are the implications of these biases?

Almost all modern AI systems use machine learning. As defined by Arthur Samuel in 1959, machine learning is the “field of study that gives computers the ability to learn without being explicitly programmed.” Bias and discrimination can creep in at any stage of the machine learning lifecycle: from task definition, to dataset construction, to model definition, to training, testing, and deployment, and finally to feedback.

Let’s take task definition. A popular task is risk assessment, where the AI system outputs a value between 0 and 10 to indicate risk. For example, Jack’s risk of defaulting on a loan is 8, while Jill’s is 2; thus, Jill is more likely to get the loan. AI researchers and practitioners know a lot about risk assessment, and human decision makers appreciate risk scores because they are easy to understand. However, there are numerous problems associated with risk assessment. Here are just two of them. First, most AI systems for risk assessment do not provide uncertainty values, that is, they do not indicate how confident or uncertain the system is about its risk score. In one case in the United States, a judge annulled a plea agreement negotiated by lawyers simply because the AI system gave a defendant a high risk score. If the AI system had said it was only 40% confident, the judge might have acted differently. Second, most AI systems are black boxes: a human cannot understand why the system assigned someone a certain risk score, and there is no way to explain or cross-examine the AI system.
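To make the point about missing uncertainty concrete, here is a minimal illustrative sketch, not taken from the interview, of how a risk-assessment model could report a confidence alongside its 0–10 score. It uses scikit-learn's logistic regression on synthetic loan data; the features, function names, and the simple probability-based "confidence" are all hypothetical, and a real system would need proper calibration or ensembling to quantify uncertainty.

```python
# Hypothetical sketch: a 0-10 risk score reported together with the model's
# confidence, rather than as a bare number.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic loan data: two features (say, income and debt ratio) and a
# binary default label. Purely illustrative.
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) < 0).astype(int)

model = LogisticRegression().fit(X, y)

def risk_with_confidence(applicant):
    """Return (risk score on 0-10, confidence on 0-1) for one applicant."""
    p_default = model.predict_proba([applicant])[0, 1]
    risk_score = round(10 * p_default)          # map probability to a 0-10 score
    confidence = max(p_default, 1 - p_default)  # how sure the model is, crudely
    return risk_score, confidence

score, conf = risk_with_confidence([0.2, 1.5])
print(f"risk score: {score}/10, model confidence: {conf:.0%}")
```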
