
Pragmatic reasons to believe in formal ethics

Updated Mar 2021

Here’s a “low-brow” take on ethics that’s worth taking seriously:

Ethics isn’t going to save the world. We don’t need more “calculations” about the right thing to do. We need people to stop doing obviously wrong stuff. Ethics is boring and irrelevant to everyday life. Stop the obsessive navel-gazing and go engage with the real world.

There’s a lot that’s right about this: in practice, our decisions usually aren’t driven by ethics but by habits and incentives.

Say you’re walking across a park and you consider a shortcut. You ask: Will you hurt the grass? Are there insects? Will you hurt them? What’s the moral weight of an insect’s life? How does it compare against a small convenience for you? Will other people follow you? Are you responsible if they do?

Living like this would be paralyzing. Almost all the time, we use habits or heuristics to make decisions, not ethics.

Even when people do use ethics, we often spend it on problems that just aren’t that important. I mourn the hours I’ve spent puzzling over which plastics are recyclable. (It turns out: none!) It’s easy to get sucked into an argument about how some corporation named a product.

And of course, lots of people are jerks and just don’t care about ethics. Most of the time, ethics don’t much influence behavior.

What matters is incentives. We are bad at reasoning, but good at taking care of ourselves; getting ethics wrong, by contrast, doesn’t hurt. If you want to solve climate change or animal welfare or whatever, don’t preach at people. Make it so no one needs to think about ethics: apply a tax, put up a fence, create legal penalties. Let everyone follow the scent of what’s good for them instead of futilely hoping people will both figure out what’s best and then actually do it.

If you have to choose between living in

  1. a society with a mediocre theory of ethics but well-crafted incentives, or
  2. a society with an enlightened theory of ethics but poor incentives,

then I suggest you choose society #1.

Ethics are sometimes the cause of disagreements

Everyone knows that flights emit a lot of carbon. It’s also obvious that business seats take up more space than economy seats. A study took the total emissions of an Airbus A380 flight from Abu Dhabi to London and assigned them to passengers in proportion to the area of their seats, getting these numbers:

Mode of travel   Carbon emitted per person
Business class   2,760 lbs CO₂
Economy class      520 lbs CO₂
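These numbers make the arithmetic behind the conversation below easy to check. Here’s a back-of-the-envelope sketch using the figures from the table and the rough 10,000 lbs CO₂ per year for an average US driver; the allocation-by-seat-area itself is done by the study, so the code only verifies the downstream comparison:

```python
# Per-person emissions for one long-haul flight, from the table above.
business_lbs = 2760  # lbs CO2, business class
economy_lbs = 520    # lbs CO2, economy class

# Savings from flying economy instead of business, per one-way flight.
savings_per_flight = business_lbs - economy_lbs  # 2,240 lbs

# Two round trips = four one-way flights.
flights = 2 * 2
total_savings = flights * savings_per_flight  # 8,960 lbs

# Rough annual emissions of an average US driver.
driving_per_year = 10_000  # lbs CO2

print(total_savings)                     # 8960
print(total_savings / driving_per_year)  # 0.896, i.e. ~90% of a year of driving
```

So two business-to-economy round-trip switches save roughly 90% of a year of average US driving, which is where the “almost as much carbon as not driving for a year” claim comes from.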

I’ve sometimes suggested that if we’re worried about CO₂ emissions, maybe we should avoid business class. Most people resist this argument. These conversations go like this:

Dynomight: Switching two long-haul round-trip flights from business to economy saves almost as much carbon as not driving for a year (10,000 lbs CO₂ in the US on average).

Other Person: That doesn’t make sense. The planes are already flying and already have a fixed configuration of seats.

Dynomight: Well… true… but business-class seats exist only because people buy them.

Other Person: If I don’t buy a business-class seat, someone else will anyway. Or they’ll upgrade someone from economy.

Dynomight: But surely, on the margin, buying business-class seats creates an incentive for airlines to make more of them?

Other Person: Suppose you’re right. Even if I, as an individual, refuse to buy these seats, that won’t be enough to change the way airlines configure planes.

Forget who is right. Why do these conversations reach a standstill? The facts aren’t in dispute. The values usually aren’t either: both people care about climate change.

I think the conflict is due to different ethical systems. There are subtle philosophical issues here. It’s true that the planes are already flying, and it’s true that one person won’t change the way airlines do business. Maybe just not flying business is pointless, and you should instead boycott airlines that sell business seats at all. Maybe individual action on these issues is pointless, and you should spend your effort lobbying for a carbon tax or something.

As long as we just keep talking about planes and seat sizes and carbon, these conversations will never converge. The only way out is to step back and state how you define “right” and how that definition leads to your conclusions. Ethical reasoning is a third opportunity for disagreement, even when people agree on facts and values.

Where do habits come from?

You can’t live your life constantly thinking about ethics. But you can step back once a year and consider: Do you want to change how you interact with loved ones? Volunteer? Donate money? Recycle? Participate in political action? Ethics are important when designing habits.

Is it better to spend more time reading to your kids or to help a campaign to improve soil quality? Commonsense ethics simply has no answer, because these choices make the world better in such different ways.

But these choices matter. History has shown over and over that if you want to improve something, you should first measure it. Your life and time are finite, and different choices really do have enormously different impacts. But there’s no way to compare them without something close to a fully-realized ethical theory.

Where do incentives come from?

Even more importantly, we need ethics when creating incentives. Lower speed limits save lives but cost time. Regulating pesticides makes produce more expensive, which might decrease how much of it people eat. Aggressively approving medical treatments makes them available earlier but carries higher risk. Closing schools during a pandemic saves lives but hurts children’s future prospects. Ugly tradeoffs are everywhere, and we can’t hide from them.

Policy choices are implicitly choices among ethical theories.

Say you want to participate in democratic government, but you only believe in commonsense ethics. The problem is that there are many complex issues. You can’t examine more than a small fraction in detail. And even on the issues you do examine, you’ll often conflict with others, since different people have different intuitions about these kinds of difficult tradeoffs.

But ethics scale. You can participate in a single conversation about what ethical system society should adopt. If you have a formula to calculate “how good” a given world-state is and trust that policymakers are always applying that formula, then you don’t need to inspect every random policy. And if you don’t trust policymakers, it’s still much easier to check whether The Formula is being applied correctly than to start every issue from a blank slate.

This is why public health has invented concepts like disability-adjusted life years (DALYs) and quality-adjusted life years (QALYs). These aren’t egghead concepts designed to complicate things. It’s simply impossible to make policy decisions in most real cases without some theoretical foundation.
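To make the “Formula” idea concrete, here’s a toy sketch of what scoring policies by a shared metric might look like. All the numbers and policy names are invented for illustration; real QALY-based analyses are far more involved:

```python
# Toy sketch: score policies by quality-adjusted life years (QALYs)
# gained per dollar spent. Every figure below is made up.

def qalys_per_dollar(qalys_gained: float, cost_dollars: float) -> float:
    """Cost-effectiveness: QALYs gained per dollar spent."""
    return qalys_gained / cost_dollars

# Hypothetical policies with hypothetical costs and benefits.
policies = {
    "lower speed limits": qalys_per_dollar(5_000, 200e6),
    "faster drug approvals": qalys_per_dollar(12_000, 300e6),
}

# With a shared metric, comparing policies is a one-liner.
best = max(policies, key=policies.get)
print(best)  # faster drug approvals
```

The point isn’t that this particular formula is right; it’s that once a society agrees on *some* formula, every new tradeoff becomes a calculation to audit rather than a debate to restart from scratch.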

Commonsense ethics have a bad track record

Not that long ago, most people believed that women shouldn’t vote, homosexuality should be illegal, and cannabis users should be in jail. In 1959, 96% of Americans disapproved of interracial marriage. Not long before that, many people believed slavery was acceptable. It’s easy for us to believe monstrous things.

Given this, you have to wonder: Won’t people in the future be horrified by some of our beliefs? Unless this is the moment when we finally got everything right, the answer is yes. You have to worry about which beliefs those are.

But people in the past weren’t all the same. There are lots of examples of people questioning the beliefs we now find so appalling. What set these people apart? I suspect it wasn’t so much that they tried harder to be good as that they tried harder to think about goodness systematically.

Calibration

Putting numerical values on lives feels a little cold-blooded. In movies you often see a character yell “You can’t put a number on human life!” as music swells in the background. This reflects an understandable concern that a formal ethical system might lead to terrible conclusions in real life.

While this is understandable, I think it’s backwards. In my fantasies, these conversations would go like this:

Egghead: If we blow up the dam, that will save around 1,000 quality-adjusted life years. Let’s do it.

Superhero: Damn you, egghead! Who are you to decide what a human life is worth?

Egghead: I’m not!

Superhero: You just said…

Egghead: Everyone puts numbers on human lives all the time. Every time you pay more for a safer car or take a more dangerous but better-paying job, you are doing that. I’m just averaging the choices real people make all the time.

Superhero: Oh. Well… Screw it, blow up the dam, I guess.

This is the right way to think about ethics. We don’t come up with a formal ethical system and then derive the consequences for everyday life. Instead, we look at the everyday choices we think are clear and derive an ethical system that generalizes them.

Now, there’s a tension between (a) deriving an ethical system that generalizes your commonsense ethics and (b) hoping that system will reveal flaws in your commonsense ethics. This is a real problem that I don’t know how to fully resolve. On the other hand, a child might have this set of commonsense ethics:

  • It’s bad to hit my parents.
  • It’s bad to hit my friends.
  • It’s bad to hit my teacher.
  • It’s good to hit Kevin, I hate that guy.

Generalizing from the first three rules would expose the fourth as the flaw it is.

Summary

While formal ethics have limited use for everyday decisions, they still have several practical uses:

  • Conflicts: They can help explain or resolve certain conflicting beliefs.
  • Personal choices: They can help when choosing priorities and habits.
  • Scalability: Ethics scale. Many related tradeoffs show up repeatedly when choosing policies, so it’s worth trying to resolve them once and for all.
  • Time robustness: They make us more future-proof against beliefs that future generations will see as wrong.
