
Shorts for June

Jun 2022

Here’s a collection of a few disconnected follow-ups plus some questions thrown into the void.

Contra me on teaching

A couple of months back, I took issue with Parrhesia’s proposal to make final exams worth 100% of the final grade, on the grounds that it wouldn’t work in practice.

You might think that you can drop homework scores from the course grade. But just try it. Here’s what will happen:

  1. Like most other humans, your students will be lazy and fallible.
  2. So many of them will procrastinate and not do the homework.
  3. So they won’t learn anything.
  4. So they will get a terrible grade on the final.
  5. And then they will blame you for not forcing them to do the homework.

In a thoughtful post, Geoffrey Challen agrees that using only a final wouldn’t work, but goes further:

At this point many educators will resort to the argument that, well, students just need to learn how to self-regulate! It’s this kind of exasperation that seems to underlie the tone of responses like Dynomight’s. A single 100% final exam would be great. But these lousy students can’t self-regulate, and so we can’t have nice things.

Amusingly, the same educators making this argument are frequently the same ones you find scrambling at the last minute to meet paper and grant deadlines, complete required university trainings, submit promotional materials, prepare for meetings, and so on. Procrastination is our human reality, and pretty much every functional workplace finds structural approaches to help people work steadily, incrementally, and in healthy ways toward long-term goals—weekly check-ins, daily stand-up meetings, milestones and sprints, and all kinds of other workspace-specific variants. No sane organization would give a junior employee a big project and say: “Good luck, see you in four months!” So why do we expect this from students?

This is a strong argument. But then, there’s another argument I saw several times:

You claim it would never work to make finals worth 100% of the grade. But here in country X that is exactly what we do, and it works fine.

I sense an implied, “If the theory you followed brought you to this, of what use was the theory?”

This argument is also very strong. I’m not sure how to reconcile the two.

Contra me on hot in-laws

In another post, I suggested that it was weird that you value hotness in the person you marry so much more than your parents value hotness (in the person you marry). After all, you like hotness because evolution made you that way to help you reproduce your genes. But don’t your parents have the same priorities?

I considered two possibilities:

  1. It might be strategic conflict deriving from the fact that you share twice as many genes with your children as with your nieces and nephews, while your parents value all their grandchildren equally. This could lead your parents to place more value on status relative to hotness.
  2. It could just be the boring and obvious answer that your parents are older and wiser, and that as people age, they give different advice about everything.

My conclusion was: ¯\_(ツ)_/¯

Once you remember that evolution is tuning our behaviors for their effects on the margin, everything gets very complicated and it’s impossible to link our preferences and behaviors to the strengths of evolutionary pressure.

Despite this non-conclusion, there have been some rebuttals. Sluug argues “It’s just because I’m me and they’re someone else”:

I think it’s pretty simple: evolution has given me an adaptation that delivers a direct hedonic reward for performing certain behaviors. This is because that behavior was a good proxy for reproductive fitness in the evolutionary environment: eating sugar, looking cool, and mating with someone hot were almost always good ways to maximize my reproductive success. My relatives, in contrast, receive no hedonic reward when I do these things, and so they advise me against doing them under the assumption that they are not in fact in my long term best interest.

The argument is that people systematically give this sort of advice to everyone about everything, and the phenomenon is too general to be explained by anything specific about parents and hot in-laws.

Scott Alexander gives a related explanation, with more emphasis on the dynamics of evolution:

Here’s a paper on mate choice in nonhuman primates. Suitors place surprisingly little emphasis on mates’ appearance, but there is some evidence that they do consider body size. They also consider a potential mate’s position in the dominance hierarchy. So even here, we have our two categories of positive traits: attractiveness and status. On the other hand, there’s no evidence at all that these animals’ parents play much of a role.

So: suitors’ mate choice depends on innate, evolutionarily well-established software. Parents’ mate choice depends on—well, it’s not clear. I tend to think that a few million years between primates without parental mate choice and the current day might not be enough time to give people really good innate parental mate-choice instincts.

So suitors’ mate-choice instincts are probably very finely-honed, specific drives and instincts. There’s some deep animal if-then statement saying that if someone has a youthful-looking face, they’re probably healthy and fertile and you should be more willing to mate with them.

Parents are probably going off of something like a vague desire that their children and grandchildren do well, without any supporting software. That means they have to use their reason to figure out how this cashes out in the real world.

You disagree with your parents because neither you nor your parents are pursuing the goal of “maximize reproductive fitness”. What you’re actually doing is following a big bag of heuristics that evolution cobbled together because they’re vaguely correlated with reproductive fitness.

Under this explanation, when there’s a divergence between what you and your parents do, you yawn, because why would you be surprised that some highly-imperfect heuristics are in conflict?

I’m not convinced this explanation is right, but it’s every bit as plausible as the two explanations I gave.

Also—and bearing in mind that evolutionary psychology is the mind-killer—I wanted to respond to a comment I’ve seen many times around all these posts. Something like this:

Uh, I want to marry someone hot because then I get to have sex with them? And my parents don’t care because they don’t get to have sex with them? Hello?

Of course, that explanation is correct! But it’s also missing the point. Why do you want to have sex with someone hot? Presumably, because that serves evolution’s aims. So, imagine a world where your parents were equally enthusiastic about you having sex with hot people—they were constantly giving you tips on seduction and pressuring you to work out more and go on more dates. After all, that would appear to serve evolution’s aims just as well. So why isn’t that our world? That’s the question that’s being addressed here.

Models of medical diagnostics

Last month I complained about some of the logic being used to reduce diagnostic testing. It’s totally legit to avoid a test because it’s expensive or dangerous or painful, or even (let’s say) just because you’re worried it will stress out a patient. But it’s crazy to skip a test because you’re worried that a “false positive” could lead to harmful downstream procedures.

My argument is very simple: With optimal decisions, having more information can only help on average.
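
Here’s a minimal sketch of that argument in Python. Every number in it (the prior, the scan likelihoods, the utilities) is invented purely for illustration; the point is the structure: for each possible scan result, compute the posterior with Bayes’ rule, take the better action, and average.

    # A minimal sketch of "more information can only help on average."
    # Every number here is hypothetical, chosen only for illustration.

    PRIOR = 0.01  # assumed prior probability that the patient has cancer

    # Assumed P(scan result | disease state)
    LIK = {
        "cancer":    {"large mass": 0.80, "small mass": 0.15, "clear": 0.05},
        "no cancer": {"large mass": 0.01, "small mass": 0.20, "clear": 0.79},
    }

    # Assumed utilities for each (action, state) pair
    U = {
        ("biopsy", "cancer"):   -15,  # invasive, but the cancer gets caught
        ("biopsy", "no cancer"): -5,  # unnecessary invasive procedure
        ("skip",   "cancer"):  -100,  # missed cancer
        ("skip",   "no cancer"):  0,  # nothing happens
    }

    def eu(p_cancer, action):
        """Expected utility of an action, given P(cancer)."""
        return (p_cancer * U[(action, "cancer")]
                + (1 - p_cancer) * U[(action, "no cancer")])

    def best(p_cancer):
        """Expected utility if we take whichever action is better."""
        return max(eu(p_cancer, a) for a in ("biopsy", "skip"))

    # Option 1: no scan; decide using the prior alone.
    eu_no_scan = best(PRIOR)

    # Option 2: scan, then act optimally on whatever the scan shows.
    eu_scan = 0.0
    for result in ("large mass", "small mass", "clear"):
        p_result = (PRIOR * LIK["cancer"][result]
                    + (1 - PRIOR) * LIK["no cancer"][result])
        posterior = PRIOR * LIK["cancer"][result] / p_result  # Bayes' rule
        eu_scan += p_result * best(posterior)

    print(f"no scan: {eu_no_scan:.3f}")  # -1.000 with these numbers
    print(f"scan:    {eu_scan:.3f}")     # about -0.370, i.e. better

With these particular numbers, the optimal policy turns out to be “biopsy after a large mass, skip otherwise”, and scanning wins. But the inequality holds no matter what numbers you plug in: you can always ignore the scan result and act exactly as you would have without it, so scan-then-act-optimally is at least as good by construction.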

I’ve seen three different models for why optimal decisions might not be in the cards.

  1. Maybe doctors are bad at Bayesian reasoning.

    This is possible.

  2. Maybe patients are bad at Bayesian reasoning.

    Say you’re a doctor, and a healthy patient asks you to do a CT scan for lung cancer. You know that the patient is at low risk, but there’s a decent chance that the scan will show a small mass. In theory, the correct thing might be to do the scan, and only biopsy if a large mass appears. But maybe the patient will insist on a biopsy even for a small mass. However, they’ll be happy to take your advice if you tell them not to do the scan at all. So that’s what you do. (The sketch after this list makes this concrete.)

  3. Maybe doctors and patients have different interests.

    Usually this involves lawsuits and malpractice insurance.
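
To make the second model concrete, here’s the same invented setup as the sketch above, except that the downstream decision is no longer optimal: the patient insists on a biopsy after any mass, large or small.

    # Same invented numbers as above, but the patient biopsies after ANY mass.
    PRIOR = 0.01
    LIK = {
        "cancer":    {"large mass": 0.80, "small mass": 0.15, "clear": 0.05},
        "no cancer": {"large mass": 0.01, "small mass": 0.20, "clear": 0.79},
    }
    U = {
        ("biopsy", "cancer"):   -15, ("biopsy", "no cancer"): -5,
        ("skip",   "cancer"):  -100, ("skip",   "no cancer"):  0,
    }

    def eu(p, action):
        return p * U[(action, "cancer")] + (1 - p) * U[(action, "no cancer")]

    # The patient's rule: biopsy on any mass, skip only if the scan is clear.
    patient_rule = {"large mass": "biopsy", "small mass": "biopsy", "clear": "skip"}

    eu_scan_forced = 0.0
    for result, action in patient_rule.items():
        p_result = (PRIOR * LIK["cancer"][result]
                    + (1 - PRIOR) * LIK["no cancer"][result])
        posterior = PRIOR * LIK["cancer"][result] / p_result
        eu_scan_forced += p_result * eu(posterior, action)

    print(f"no scan, no biopsy:    {eu(PRIOR, 'skip'):.3f}")  # -1.000
    print(f"scan + patient's rule: {eu_scan_forced:.3f}")     # about -1.232

Given that rule, scanning is now worse in expectation than not scanning, so “skip the scan” really is the doctor’s best available advice. The information didn’t become harmful; the suboptimal response to it did.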

The last explanation is the most common. It might be right, but hold on a second:

Suppose you do a CT scan, it shows a small mass, you advise against a biopsy, and it later turns out that it was actually cancer. Then maybe the patient will sue. Makes sense. That’s a risk that would push you to go ahead with the biopsy.

But why isn’t the risk two-sided? Say a patient asks you for a CT scan, you advise them not to do one, then it turns out they had lung cancer, and that a scan would have shown a large mass. Why don’t they sue you in that case, too?

The best explanation seems to be that you’re much more open to lawsuits when you depart from “standard practices”. That is, if you’re expected to do a biopsy but don’t, then you risk being held negligent. But if you advise against a CT scan that wasn’t expected in the first place, no one will sue you even if your advice was bad.

So, essentially, when doctors think about how to optimize the system, they naturally think about the care points where there’s more freedom to change the causal path without exposing themselves to legal risk.

It’s a plausible story, but… disquieting.

Some things I’ve been wondering about recently

Here are a few questions for you, you gorgeous and brilliant person.

Did Mark Twain ever eat pizza?

These seem to be the facts:

  • Wheat and cows are Old World species; both had been present in Europe for millennia.
  • Tomatoes are indigenous to the Americas. They were first introduced to Italy around 1550 and slowly grew in popularity over the following centuries. By the late 1700s, peasants in Naples were putting them on top of their flatbreads.
  • The margherita was created in Naples sometime between 1796 and 1810. It was (re)named in honor of Queen Margherita in 1889.
  • Pizza was a common attraction for elites on Grand Tours of Italy.
  • Mark Twain was born in 1835 in Missouri. He mostly lived in Europe from 1891 to 1900, including stints in Berlin, Paris, Florence, London, and Vienna. He died in 1910.

How much did pizza in 1891-1900 resemble what we think of as pizza today? How widespread was it? Would it have been notable enough for Twain to write about it if he had tried it? Did he write about it?

(Why does this matter? I’m not sure, but it does.)

What’s the shape of the (hardness, intelligence) curve?

To me, the strongest argument that we should be worried about rogue AI is food.

We evolved under very strong selection pressure to consume as few calories as possible—it’s better to have a smaller brain if that stops you from starving to death. That’s worrying because AIs face no such constraint: we will provide them with vast amounts of energy, so they might easily flash past us.

From a conversation with SMTM, here’s my question: Flies seem pretty dumb, mice are smarter, dogs are even smarter, and people are smarter still. Is there any way to quantify “how hard” it is to create each of these intelligences, as well as “how intelligent” they are?

Just to be clear, neither of these measures should involve counting neurons or dendritic connections. When I talk about “how hard”, I want a measure of how hard it was for evolution to make them, not how hard it is for them to exist now. And when I’m talking about intelligence, it should be some measure of capabilities, not “hardware”.

(I am aware of Ajeya Cotra’s excellent report on biological anchors which goes some way in this direction, but I’m wondering if there’s anything else in this direction.)

Theories of the industrial revolution.

Is there a comprehensive list of theories for the industrial revolution somewhere? Why did it happen in England/Scotland when it did, rather than hundreds of years earlier/later in some other part of the world? I’m not looking to resolve which theory is right, just looking for a list of all the standard hypotheses.

A game to make a game.

Checkers, chess, go, sudoku, shogi, poker. Computers seem to be better than humans at almost everything.

So: Is there anything left? Is it possible to deliberately design a game to maximize the performance of humans relative to computers?

It would be tedious to exactly define this, but I want a “simple” “symbolic” game. In particular, we shouldn’t beat the computers using our amazing eyes/ears or our even more amazing hands. We should beat the computers only by “thinking”.

Comments at reddit, substack.
