
What does this have to do with effective altruism?


Behind Silicon Valley’s favorite hyper-rational philosophy lies a totalitarian worldview that contradicts democracy and human dignity.

What do the now-convicted crypto billionaire Sam Bankman-Fried, a large part of the workforce at the ChatGPT maker OpenAI and the initiators of the open letter calling for a six-month pause in the development of artificial intelligence (AI) have in common?

Like many Silicon Valley giants, they adhere to effective altruism: a philosophical movement that appeals to reason and ethics. It is gaining influence in the USA and is specifically targeting young talent in Switzerland as well. It is high time to take a closer look at its ideas, because what sounds innocent or even reasonable at first glance is in fact a totalitarian worldview that undermines democracy.

Intuition and empathy often contradict reason

Let’s start at the beginning. In 1972, the philosopher Peter Singer proposed a thought experiment: Imagine you are walking past a pond and see a child in danger of drowning. Do you save the child, or do you hesitate because you might ruin your clothes?

Of course, no one doubts that the child must be saved. Singer then asks: What about all the children who die of hunger or malaria? He argues that buying expensive shoes instead of donating the money to help them is just as bad as letting the child drown. Why, after all, should proximity or distance play any role in morality?

Singer is a radical representative of the ethical school of utilitarianism. Its basic idea: whether an action is good or bad depends solely on its consequences. Whether buying shoes or lying – what matters is not the act itself but its effects, more precisely, whether it increases or reduces the total happiness in the world.

His thought experiment is meant to show that intuition and empathy are not always reliable guides in ethics. He calls for more reflection, more reason.

In the 2000s, young philosophers took up the idea and popularized it. The school of thought particularly appeals to young people who love mathematics and facts. They begin debating in online forums: How should one live in order to do the maximum amount of good? Whom should you give your donations to in order to achieve the maximum impact? Effective altruism is born.

It’s better to get rich and donate than to save lives with your own hands

What began as an attempt to think about good and bad without bias, and to question gut feelings, soon turns into a contest of hair-splitting in which the main point seems to be demonstrating the superiority of one’s own arguments.


Supporters love to point out that people often donate to the “wrong” causes: anyone who gives to research into rare diseases, for example, is being irrational, because there the money has little chance of making a difference. Mosquito nets for people in malaria regions are often cited as the best investment: nowhere else, the argument goes, are so many lives saved for so little money.

Online discussions give rise to local groups and spin-off organizations. Take GiveWell, a Silicon Valley nonprofit that publishes lists of the most cost-effective charities and estimates that it has directed more than $1 billion in donations.

With stands at universities, online seminars and multi-day retreats, the movement specifically targets young people who want to do good but don’t quite know how. Advice on the most charitable career choice is particularly popular. The point is not to work for Doctors Without Borders or to invent new medicines: it is more effective to earn a lot of money and donate a large share of it, because that saves more lives than any single doctor or researcher can.

Artificial intelligence beats climate change and hunger

Megalomania, charity and reducing questions to a mathematical equation: effective altruism is tailor-made for Silicon Valley. No wonder it is thriving there.

However, the fact that more and more tech nerds are having a say is changing the discourse. A new current is spreading that takes this calculus to the point of absurdity: “longtermism” – moral reasoning about the very long-term future.

Another thought experiment shows what this means. Compare three scenarios: in the first, 100 percent of humanity dies in a nuclear war; in the second, 99 percent; in the third, there is eternal peace. Which difference is greater – the one between the first and second scenarios, or the one between the second and third?

Longtermists would say: the difference between 100 and 99 percent extinction is the greater one, because the surviving one percent can revive humanity – and all those future lives outweigh the well-being of people in the present.

Taken seriously, this means you should invest in bunkers for one percent of humanity rather than in mosquito nets, let alone in diplomacy and peace efforts. And effective altruists actually draw such conclusions.


Except that in their horror scenario they have replaced nuclear war with an uprising of autonomous AI. Preventing it is now the movement’s top priority. On its current list of the most charitable professions, number one is “researching AI safety” and number two is “governing AI to minimize catastrophic impacts”.

Climate change, world hunger, rare diseases and malaria: dangers that “only” cause great suffering without wiping out all of humanity in one fell swoop cannot compete with the risk of a murderous superintelligence in the effective altruists’ cost-benefit calculation. And this must be emphasized these days: it is not at all clear at the moment whether such a superintelligence can even be built, let alone that it will happen in the next few years.

None of this would be a problem if effective altruists were not gradually becoming the dominant group in the tech scene. Until recently, the ChatGPT maker OpenAI had effective altruists on its board of directors to monitor the company’s ethical direction. Influential research institutes are closely linked to the movement. The Future of Life Institute in Cambridge, Massachusetts, was behind the prominent letter that called for a pause in AI research in the spring.

This has consequences: when it comes to the risks of AI, the debate is no longer about whether the programs work reliably – for example, whether their assessments of job candidates are actually correct – or about how AI can be used for surveillance, or whether it is trained on stolen data. Instead, we talk about existential risks that might emanate from an imaginary machine.

There is no algorithm for what is best for everyone

Can the utilitarian Peter Singer be blamed for all this? No. He has even explicitly distanced himself from the longtermist faction of the movement. And yet the calculation of long-term risks only makes visible the fundamental problems of this ethic.

The most fundamental problem is the overconfidence of its followers. They consider themselves the only rational thinkers, superior to everyone who merely follows conventional morality.

Let’s compare the different causes one can donate to: basic research versus mosquito nets. One could add: wells, schooling, aid supplies for Ukraine. Different people will choose to donate to different causes. Not everyone will calculate how their money can most effectively save lives. People have different motivations.

That people make decisions based on what affects them personally is a matter of human dignity and democracy. This right lies at the core of liberalism.


For the effective altruists, however, this is irrationality that must be overcome. They are technocrats who believe the fate of the world is best placed in their hands. After all, they rely on calculations and probabilities more radically than anyone else. What they overlook is the complexity of the world.

The fact that those affected have a say is a strength of democracy

To come back to the mosquito net: it has by no means been proven that mosquito nets save the most lives of all humanitarian interventions. They are simply the intervention that economists have scrutinized most closely.

Mosquito nets and deaths from malaria are easy to quantify. But what about schooling? What about freedom? Is there value in making decisions about one’s own life? Do people in malaria regions have a right to a say in whether they want to buy a net or something else?

If you leave the bird’s-eye view and talk about individual lives, it becomes clear: there is no magic formula that optimizes the well-being of everyone. Effective altruists would do well to be humble enough to see this.

Wanting to help others is good. And questioning your intuitions while doing so is a good idea: that is the great achievement of thinkers like Peter Singer. But if you fail to recognize that you know only a fraction of the world, if you turn thought experiments into moral rules, you drift into a technocratic-totalitarian worldview.

What effective altruists don’t understand: it is not a tragic flaw of democracy that those affected have a say and contribute their “unobjective” perspectives. It is its strength.

When it comes to the future of AI, too, those who will live with it must have a say. The ideas of the bright minds at OpenAI and elsewhere are not enough.
