
“We humans like to follow orders, even those from a machine.”


Ruth Chang, a philosopher at the University of Oxford, studies how to make the right decisions in life, big and small – and what happens when we start outsourcing them to machines.

Illustration Simon Tanner / NZZ

Professor Chang, as a philosopher you research decisions. Algorithms are increasingly taking this away from us. They show us the way using Google Maps, suggest partners, and formulate texts. Is this a problem?

Sometimes algorithms are very helpful. If ten companies need to be sorted by their profitability, it would be great if an AI could do it. It can process all the information and create an accurate list, and I don’t have to waste my time doing it. Things are different when it comes to dropping a bomb or not. With a decision like that, you can’t simply calculate the better alternative. And I believe a lot of decisions in our lives work like this, whether it’s about careers, or whom to date or spend your life with.

You can’t calculate the result because you don’t know what consequences the bomb and your choice of partner will have?

Not just because of that. My thesis is that we often think about such comparisons incorrectly. Most people think of decisions as if they could take a small scale out of their pocket and put all the pros and cons on it. Then there are three options: one side goes up, or the other side goes up, or the scale remains in balance, in which case the alternatives are interchangeable and you could just flip a coin. But there is a fourth option.

And that would be?

Sometimes the scales stay in motion: sometimes one side is at the bottom, then the other. The alternatives are not equivalent. Even if you slightly improve one option, it is still not clear what to choose. Nor could you just flip a coin, because the alternatives are not interchangeable, but qualitatively different. I say they are “on a par”.

Can you give an example?

Let’s take romantic relationships. It’s simply not true that there is one person who is best for you as a life partner. Many possible partners are equal. Adam may have some advantages, Bob may have others. But depending on what you choose, you will lead a qualitatively different life. Not better or worse, but different. And that makes the decision difficult.

And how do you make such decisions, then?

The solution is to stand behind a decision. Let’s say you choose Adam and forget about Bob. By making a commitment to Adam, you are making him the right choice. Life is not just about following orders from the world. Sometimes the world tells you that avocado toast is a better breakfast than leftover pizza. But sometimes the world tells us that the options are “on a par” – that we are faced with a hard choice. Through these hard choices we become the authors of our lives.

About the person

Ruth Chang – Philosopher and Professor of Law at Oxford University

Ruth Chang grew up in the USA and studied law and philosophy. After working as a lawyer, she decided to pursue a life as a philosopher. Among other things, she addresses the question of how values and norms arise and takes a philosophical look at ethics, love and commitment. Her TED talk “How to make hard choices” has been viewed more than 9 million times.


This commitment is complicated today by endless options, whether shopping online or on dating apps. That overwhelms many people …

You are describing the paradox of choice: when there are too many alternatives, we feel there is too much information and we can’t make a decision. We can respond by reducing the amount of information so that the selection becomes manageable. Let’s say you want to buy snacks for your children. An American supermarket might have 500 types of cookies, 38 types of Oreo alone. You could now weigh up price, sugar content, or which snacks children think are cool. That is overwhelming. But when you think about what it’s actually about, it becomes easier. You want a snack that your kids will enjoy and that won’t kill them. That leaves 20 acceptably healthy snacks. It doesn’t matter which you choose; you can flip a coin.

But making a commitment is about something different.

Then it’s not about flipping a coin. Getting behind a choice changes the way you see the world. You always wanted a Lamborghini. Then you get married and have children. Your commitment to your family life makes the Lamborghini look different – not as attractive as it once was.

However, if there was an algorithm for cookie and partner decision-making, it would certainly be popular. Do we long to outsource responsibility, like a company that calls in consultants when it needs to fire people?

You can certainly use algorithms as supposed experts who are put forward if you don’t want to make yourself unpopular with a delicate decision. That is the obvious case. But there may also be a deeper problem with responsibility behind this, namely when it comes to the kind of difficult decision I was talking about: what kind of person, what kind of company you want to be. Such questions cannot be outsourced to people or technology, especially when it is not clear which values should underlie the decision.


Only when AI leaves difficult decisions to humans will it preserve our values, says Ruth Chang with conviction.

Maurice Weiss

AI systems are already being used in application processes. In the past, people were hired based on gut feeling. A case of technology making things better?

When a system is well made, we can let it make decisions. But we need to correct the mistaken assumption that the only possible verdicts are “better”, “worse” and “equal”. The “on a par” case is also needed. If two candidates are good, but in completely different ways, the machine would have to stop and say: “This is a hard case. A value judgment is needed that a person has to make.” One often speaks of the “human in the loop”. It is the most promising way to align AI with our values.
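Chang’s four-way comparison can be sketched in code. The following is a minimal illustration of the idea only, not her or her husband’s actual model: a Pareto-style comparison over several scoring dimensions (the dimensions, the scores, and the `screen` helper are all invented for the example) that returns a fourth verdict, “on a par”, and refuses to decide in that case.

```python
from enum import Enum


class Verdict(Enum):
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equal"
    ON_A_PAR = "on a par"  # qualitatively different: hand the choice back to a human


def compare(a, b):
    """Pareto-style comparison of two candidates scored on several dimensions.

    a, b: sequences of per-dimension scores (same length, same order).
    One candidate wins only if it is at least as good everywhere and
    strictly better somewhere; if the dimensions pull in opposite
    directions, the options are on a par and the machine must not decide.
    """
    if all(x == y for x, y in zip(a, b)):
        return Verdict.EQUAL
    if all(x >= y for x, y in zip(a, b)):
        return Verdict.BETTER
    if all(x <= y for x, y in zip(a, b)):
        return Verdict.WORSE
    return Verdict.ON_A_PAR


def screen(a, b):
    """Let the machine settle clear cases; escalate hard cases to a human."""
    verdict = compare(a, b)
    if verdict is Verdict.ON_A_PAR:
        return "escalate to human"
    return {
        Verdict.BETTER: "choose A",
        Verdict.WORSE: "choose B",
        Verdict.EQUAL: "either (flip a coin)",
    }[verdict]
```

For instance, a candidate scored (3, 1) on experience and creativity and one scored (1, 3) are good in completely different ways, so `screen` escalates instead of ranking them.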

Experiments show that people tend to confirm decisions made by machines, even against their better judgment.

We’ve all heard about the tourists who drove into a lake because the GPS told them to. We humans are herd animals. We are pretty lazy and like to follow orders, even those from a machine. A doctor who has to make decisions under pressure would probably be grateful for a machine to help her. But the world isn’t structured that way – there isn’t always a right answer to the question of which patient should receive the precious kidney. In the most interesting cases of human life, the options are “on a par”. Therefore, AI systems should be designed so that they require an active human decision in hard cases.

How can this work in practice? Israel, for example, uses an AI system that selects suitable targets for bombs. People have to confirm that. It’s hard to imagine weapon manufacturers building in a feature that asks soldiers whether the decision fits their values.

In the USA there is a rule according to which war machines are allowed to make suggestions, but a human should make the decision. My model goes even further. In difficult cases, the machine should be able to say: “There is no objectively better option here. Human lives are at stake. You have to think about the decision and take responsibility.”

Is it even possible to program an algorithm that doesn’t simply optimize, but takes such complex value questions into account?

You need this fourth option, “on a par”. I’ve been waiting a long time for someone to program something like this. Now someone is finally taking it on: my husband. He is a world-renowned philosophical logician. Our model combines machine learning with classic rule-based programming.

We talked about how difficult decisions are. How realistic is it that people would even want a tool like this?


It will be hard work. First, you need the mathematical model. Then you have to convince people that it’s worth the effort, that it’s worth the cost to develop AI that consults humans in hard cases. Because that will slow things down a lot. But I think it is the only path to an AI that is consistent with our ethical values, to an AI that is safe and where humans remain in control. We are fooling ourselves if we think we can have our cake and eat it too.

At what point does it actually become a problem to let machines make decisions? Weapons are one thing. But when I write a text with AI, it makes lots of small decisions for me and perhaps changes the framing …

You are a journalist. It can be a hard choice for you whether to describe a person as “aggressive” or “angry”. If you use ChatGPT, it will force an adjective on you even though it is a difficult decision. For me the alternatives are perhaps interchangeable. But you are a writer. It is absolutely crucial whether you choose one or the other. It creates your identity: are you someone who writes “aggressive” or “angry” in this context? Likewise, the decision between two haircuts may be irrelevant to me; I could leave it to chance. But a fashion model creates her identity by choosing a haircut.

So you can’t tell our readers which AI products can make decisions for us and which can’t?

No individual can decide that on their own anyway. AI companies build products that are popular and that inevitably become part of our everyday lives. It was the same with social networks. When I first tried Facebook, I thought: what a waste of time! But now I miss a crucial part of life if I don’t use any of these platforms. AI will change our lives, and regulation is lagging behind. If we don’t want AI to undermine human values and capabilities, we must devote everything we have to AI’s most important open problem: how can we reconcile AI’s values with our own? So far we haven’t cracked this nut. Designing AI to account for hard choices – often the most important decisions in human lives – is a necessary and fundamental first step.
