
In the crisis labyrinth: five challenges that we must address

by admin

The year 2024 is still young, but it already casts long shadows: more than two and a half billion people will vote this year on the future course of their countries and the world. The US election draws the most attention, since Donald Trump could become president again. But Russia and Ukraine are also electing presidents; the EU, India and Indonesia new parliaments; and Saxony, Thuringia and Brandenburg new state parliaments.


What happens politically this year could significantly shape technology development: the outcome of these elections will determine which technologies are promoted, regulated or banned in the future, and how. Conversely, technology – through social media and artificial intelligence – also strongly influences how people vote. This feedback loop makes the situation particularly explosive: it provides great leverage to change things for better or worse.

The great strength of a technical development is often also its weakness. To exaggerate somewhat: the more successful, the riskier. Only something sufficiently effective can become existentially dangerous. And the more closely the advantages and disadvantages are intertwined, the harder it becomes to address the risks without sacrificing the opportunities.

A prime example of this is social media. Its glory and its misery stem from the same source: an insistence on freedom. More specifically, the freedom of its early pioneers – wealthy white men from California who rarely experience the dark side of unbridled platforms themselves. The freedom of some thus becomes the lack of freedom of others. This effect has already driven millions of former Twitter users into digital exile. If newer, better platforms benefit from the Twitter exodus, that is good for them but relatively irrelevant for society: the digital landscape remains fragmented, and it is questionable whether it will ever grow back together.


Artificial intelligence faces a similar dilemma. It appears to be dangerous under two conditions: when it works particularly well and when it works particularly poorly. Democratic voters in New Hampshire experienced the first case firsthand. In mid-January they received an automated phone call asking them not to vote in the primary. The voice sounded like President Joe Biden – but it was generated by an AI, and someone was trying to manipulate the election with it. A radio presenter experienced the second case: ChatGPT hallucinated a connection between him and sexual abuse, and the news service MSN spread the report unchecked. Such examples show that combating AI hallucinations is easier said than done.

Now one could argue that this is a human problem, not a technical one. After all, it is still people who decide whether or not to follow the suggestions of machines. But it is not that easy.

The crises of our time are converging: war, global warming, environmental damage and technological upheaval. It feels like a labyrinth in which the way out never quite comes into view. The current issue at least tries to bring some order to it.

The dilemma is particularly evident in war, where soldiers must make life-and-death decisions in unclear situations under great time pressure. Under these circumstances, can they even afford to forgo the help of an AI? After all, aviation offers numerous cases in which pilots' refusal to listen to a machine ended in catastrophe.


Things will hardly work without AI in the future, but they won't quite work with it either – because such "hallucinations" may be impossible to prevent with the current architecture of large language models.

The situation is comparable with PFAS (per- and polyfluoroalkyl substances), which include the widely used fluoroplastics. Much like AI, they are successful for the same reason they are dangerous: they hardly react with other substances. This inertness makes them practically indispensable in rain jackets, frying pans and cable insulation, but also indestructible in nature. Released largely unregulated for around 70 years, they eventually seep into the groundwater, from which they can hardly be removed. A year ago, the EU – as a global pioneer – presented a proposal to ban the entire PFAS family. What comes of it will likely depend largely on the upcoming EU parliamentary elections.

Politics is also the decisive factor in another major challenge, even if it initially sounds like a mere calculation exercise for engineers: how much electricity storage will we need in the future to bridge winter "dark doldrums", long stretches with little wind or sun? There are enough scenarios for technical solutions. They assume, however, that all decisions are made rationally. "We know from our experience that this is not the case," says Michael Sterner, professor at OTH Regensburg.

But people, it is often said, grow with their tasks. It is high time for that. That is why we point to the new issue of MIT Technology Review, which takes on the crises listed here along with other challenges.


(grh)
