Home » The Big Five warning of AI-risk before it was cool

The Big Five warning of AI-risk before it was cool

by admin

A supermarket chain in New Zealand offers an AI-based app that customers can use to generate new recipes from different ingredients. When humans started experimenting with the software a few days ago, they found that the artificial intelligence not only generated fun glue sandwiches and bug spray baked potatoes, but also a recipe that created chlorine gas, a deadly chemical warfare agent. Disappointed by its customers’ willingness to experiment, the supermarket was forced to add a warning.

On the one hand, this example shows that AI software is only as good as its input: Anyone who asks a chatbot about chlorine gas will receive chlorine gas, depending on how much the software has been “aligned” with fine-tuning methods, i.e. adapted to human values. This is not an example of the much-vaunted hallucinations in which an AI generates nonsense, but simply an AI system doing exactly what it is supposed to – in this case a recipe based on water, bleach and ammonia – but which is not adapted to its task.

On the other hand, this example also shows that AI software is not yet ready for mainstream use. Especially an AI-based chatbot for the preparation of food and drinks must be subject to strict controls and generally comply with the requirements for food production. Offering a chatbot for fun and thereby taking the risk that children, for example, mix up a funny poison gas cocktail is irresponsible.

Another not-so-dangerous example I tweeted this morning: Google seriously generates an answer that a toddler would do better when asked about African countries beginning with K: There are no African countries beginning with the letter K and Kenya is the closest since it starts with a K begins, but does not begin with a K. The kicker: Google’s AI has this confusion learned from a dialog with OpenAI’s ChatGPTan example of the so-called Model Autophagy Disorder, according to which AI systems degrade the more AI output is included in their training data.

Such systems are simply not (yet) ready for use by the general public, and I haven’t even started here about the risks to privacy and information security due to prompt injection and other hacking methods. But these dangers, which emanate from the premature use of AI systems, are by no means the only ones.

See also  Hitachi Energy Relion: New Vulnerability! BIOS/firmware affected

The text I piqed in Rolling Stone Magazine presents five women who have been warning of inherent biases in algorithmic systems and their impact on disadvantaged population groups for years. Joy Buolamwini, Timnit Gebru, Safiya Noble, Rumman Chowdhury and Seeta Peña Gangadharan have been ridiculed, fired and marginalized from the usual Silicon Valley tech circles for their warnings about a technology that is not yet really ready for widespread, high-profile use.

Her work goes beyond the much-discussed doomsday scenarios, which are often just as abstract as they are nonsensical, and years ago she shed light on the effects of algorithms on bureaucratic mechanisms that are already effective today. And that’s exactly why they are the “Big Five warning of AI-risk before it was cool” for me.

You may also like

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More

Privacy & Cookies Policy