
Historical analogies for large language models

Dec 2022

How will large language models (LLMs) change the world?

No one knows. With such uncertainty, a good exercise is to look for historical analogies—to think about other technologies and ask what would happen if LLMs played out the same way.

I like to keep things concrete, so I’ll discuss the impact of LLMs on writing. But most of this would also apply to the impact of LLMs on other fields, as well as other AI technologies like AI art/music/video/code.

1. The ice trade and freezers

We used to harvest huge amounts of natural ice and ship it long distances. The first ice-making machines were dangerous and expensive and made lousy ice. Then the machines got good, and now nobody harvests natural ice anymore.

In this analogy, LLMs are bad at first and don’t have much impact. Then they improve to match and then exceed human performance and human writing mostly disappears.

2. Chess humans and chess AIs

Chess AIs far exceed humans, yet human chess is more popular than ever.

In this analogy, LLMs become better than humans at writing. But so what? We like human writing. Text isn’t interesting if it’s not part of a (para)social relationship. Browser makers build in LLM detectors to warn people if something wasn’t written by a human. Authors do use LLMs to train at writing, and sometimes there’s lots of intrigue when authors accuse each other of cheating using LLMs.

(By the way: In case you’re worried, there is no “written using LLMs lol” rug-pull coming at the end of this post. I wouldn’t endanger our parasocial relationship like that.)

3. Farmers and tractors

In 1800, half of the people in rich countries were farmers. After technology made farmers more productive, we weren’t interested in eating 50× more food per capita (and we weren’t that interested in expensive ingredients and eating white truffle saffron wagyu caviar burgers for every meal) so today only 1-4% of people are farmers.

In this analogy, LLMs act as a tool to allow one “writer” to produce vastly more output. But we don’t want to read 50× more stuff and don’t particularly care about “better” writing, so most writers have to shift to other jobs to make supply match demand.

4. Horses and railroads

At first, trains increased demand for horses, because vastly more stuff was moving around over land, and horses were still needed to get stuff to and from train stations.

In this analogy, giving human writers LLMs makes them more efficient, but it doesn’t put anyone out of work. Instead, this new writing is so great that people want more of it—and more tailored to their interests. Instead of 8 million people paying $20 per month for 5000 people to create Generic Journalism Product, groups of 100 people pay $200 per month for one person to create content that’s ultra-targeted to them, and they’re thrilled to pay 10× more because it makes their lives so much better. Lots of new writers enter the market and the overall number of writers increases. Then LLMs get even better and everyone is fired.

5. Swords and guns

First, guns replaced bows because guns need less training. Then guns became better than skilled archers. Then they replaced spears. Then infantry switched to guns with mini-swords on the ends. Then they dropped the mini-swords. These shifts weren’t driven by “productivity” so much as the fact that you had to switch to guns since you knew your adversaries were going to.

In this analogy, LLMs first replace humans for low-skill writing and then gradually take over more domains. For a while, the best writing uses a mixture of LLMs and human writing, but eventually humans stop being useful. Anyone who tries to resist this shift is outcompeted by better content from others who embrace LLMs.

6. Swordfighting and fencing

Swordfighting has an incredible depth of skill—you need talent, discipline, fitness, years of training, and maybe even philosophy. Many people think it’s worth mastering all this even though swords are now irrelevant to combat, so they practice it as a sport.

In this analogy, LLMs become better than humans at writing. But it’s still widely understood that learning to write is good for you (maybe writing is “the best way to think”) so people send their kids to writing camp and humble-brag about writing in their free time. But in the end, most writing was done for the mundane purpose of making text exist and when this is no longer valuable, most people stop doing it.

Intermission

I was informed that this post was kind of depressing but I couldn’t think of any good jokes so I’m resorting to brute force and deploying this picture of DOG WITH BENEDICT CUMBERBATCH:

[Picture: dog with Benedict Cumberbatch]

Thus fortified, let’s continue.

7. Artisanal goods and mass production

Mass production made suits/cars/teapots cheaper and more plentiful. But Bentleys are still made by hand—artisanal goods are still seen as higher quality and higher status.

In this analogy, LLMs make writing vastly cheaper and more plentiful. But they never quite reach the quality of the best human writers. Most writing was always emails and TPS reports, and these all shift to LLMs. But the New Yorker sticks to human writing, and because copying information is free, most of what people read still comes from humans.

8. Site-built homes and pre-manufactured homes

We can build homes in factories, with all the benefits of mass production. But this is only used for the lowest end of the market. Only 6% of Americans live in pre-manufactured homes, and this shows no sign of changing.

In this analogy, LLMs make text cheaper. But for some reason (social? technical? regulatory?) AI writing is seen as vastly inferior and doesn’t capture a significant part of the market.

9. Painting and photography

While cameras are better than painters at reproducing real-world scenes, they didn’t make painters obsolete, because paintings remain a status symbol and painters shifted to non-representational art.

In this analogy, LLMs replace humans for much of what we currently think of as “writing”, except for people who want to flaunt that they can afford hand-made writing. But then human writers figure out that they can do certain things better than LLMs. Those things become high status and we all convince ourselves that we totally coincidentally happen to prefer the high-status things and can’t stand the low-status things and so human writers do OK.

10. Feet and Segways

First, there was walking. Then the Segway came to CHANGE THE NATURE OF HUMAN TRANSPORT. Twenty years later, there is still walking, plus occasionally low-key alternatives like electric scooters.

In this analogy, LLMs work fine but just aren’t worth the trouble in most cases and society doesn’t evolve to integrate them. Domain-specific LLMs are used for some applications, but we start to associate “general” LLMs with tourists and mall cops. George W. Bush falls off an LLM on vacation and everyone loses their minds.

11. Gull-wing and scissor doors

In the 1950s and 1960s, automakers introduced doors that open vertically. These are better at getting out of the way, make it easier to park in tight spaces, and are less hazardous to cyclists. But they need more vertical clearance and, if you’re in an accident and your car flips over, they can literally kill you.

In this analogy, LLMs literally kill some people, and then we stop using them.

12. Sex and pornography

[Description of current situation redacted.]

In this analogy, people consume a ton of AI writing. But it doesn’t seem like a “real” substitute for human writing, so while human writing becomes a bit less popular it stabilizes at a high level.

13. Human calculators and electronic calculators

Originally a “computer” was a human who did calculations.

In this analogy, LLMs are an obvious win and everyone uses them. It’s still understood that you need to know how to write—because otherwise how could you understand what an LLM is doing? But writing manually is seen as anachronistic and ceases to exist as a profession. Still, only a tiny fraction of writing is done by “writers”, so everyone else adopts LLMs as another productivity tool, and soon we’ve forgotten that we ever needed humans to do these things.

Thoughts

This exercise made me even less sure about what’s going to happen. But it helped me clarify the reasons for uncertainty. There is of course the obvious question of how good LLMs will get, and how fast. But to predict the impact of LLMs we also need to understand:

  • Will LLMs act more as competitors or complements to human writing?
  • How will people react to LLMs? Maybe LLMs will write amazing novels and people will love them. Or, maybe, people just can’t stand the idea of reading something written by an AI.
  • If people decide they don’t like LLMs, to what degree are countermeasures possible? Can we build machine learning models to detect LLM-generated text? Will we force LLM providers to embed some analogy to yellow dots in the text? Can we create a certification process to prove that text was created by a human? (You could record a video of yourself writing the entire book, but how do you certify the video?)

Beyond all that, I wonder to what degree these analogies are useful. One big difference between writing and these other domains is that once writing is created, it can be copied at near-zero cost. The closest historical analogy for this seems to be the printing press disrupting the hand copying of books, or maybe computers disrupting paper books. But it’s also possible that this shift is something fundamentally new and won’t play out like any of these analogies suggest.

Comments at substack.
