Can Machines Think?

Can machines think? Sure, but do you want the solutions they come up with? Consider Terminator. SkyNet decided that the solution to the problem was to kill all humans.

Richard Feynman was asked this question in 1985. He had some interesting ideas.

  • They won’t think like human beings
  • They might be more intelligent, depending on your definition

He makes the further interesting point that we only consider a machine valuable if it can beat every human at the task. Why don’t we value a machine that merely beats most humans? Why does Deep Blue have to beat the top chess grandmaster if it can beat everyone else?

This entire video is interesting, but the good stuff starts at about 10:50:

What happens if we don’t tell the computer how to solve the problem? He talks about Trillion Credit Squadron, a Traveller tournament, which Douglas Lenat’s AI beat so hard that the organizers almost cancelled it.

Lenat was studying AI at Stanford, and had a program, Eurisko, that could figure out heuristics to optimally solve some problems. He wasn’t interested in Traveller except to apply Eurisko to it.

Eurisko was creating concepts on its own. It was distilling thousands of experiences into the judgmental, almost intuitional, knowledge that constitutes expertise: rules that can’t always be proved logically, that can’t guarantee a correct answer, but that are reliable guides to the way the world works, a means of cutting through complexity and chaos.

So, given a certain number of resources, what was the optimal navy he could create to fight other navies?

For the 1981 tournament, instead of making a fancy navy, he made an immobile, heavily armed one. Opponents didn’t have enough firepower to sink these big ships, which armored themselves with all the resources they saved by not being mobile.

They then changed the rules for 1982 so that ships had to be mobile. So he made a navy of scores of small, lightly armored and lightly armed but highly mobile ships. Any one of his ships was easily destroyed, but he had so many of them that he outlasted his opponents. In the third year, the organizers threatened to cancel the tournament if he participated.
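The swarm idea is easy to see in a toy attrition model. This is not Traveller’s actual combat system, and every number below is made up for illustration; it just shows why, for the same notional budget, many cheap ships can grind down a few expensive ones.

```python
# Toy attrition model: both fleets fire simultaneously each round, and a
# fleet's total damage sinks enemy ships front to back. All numbers are
# invented for illustration; this is not Traveller's combat system.

def volley(total_dmg, targets):
    """Apply a fleet's pooled damage to the target fleet's hull points,
    sinking ships front to back; leftover damage carries over."""
    out = []
    for hull in targets:
        if total_dmg >= hull:
            total_dmg -= hull          # ship sunk, damage carries over
        else:
            out.append(hull - total_dmg)
            total_dmg = 0
    return out

def battle(fleet_a, fleet_b, dmg_a, dmg_b):
    """fleet_*: list of hull points per ship; dmg_*: damage each surviving
    ship deals per round. Returns 'A', 'B', or 'draw'."""
    a, b = list(fleet_a), list(fleet_b)
    for _ in range(1000):
        if not a or not b:
            break
        a, b = volley(len(b) * dmg_b, a), volley(len(a) * dmg_a, b)
    if a and not b:
        return "A"
    if b and not a:
        return "B"
    return "draw"

# Same notional budget: 20 gunboats (4 hull, 2 dmg each) as fleet A
# vs. 2 battleships (50 hull, 10 dmg each) as fleet B.
print(battle([4] * 20, [50, 50], dmg_a=2, dmg_b=10))  # → A (the swarm wins)
```

With these particular numbers the swarm loses ships every round but sinks both battleships first, which is roughly the dynamic Lenat exploited.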

As an aside, some of what I’ve read suggests the players were dismayed that the outcomes of their simulations were practically predetermined and took no skill. The actual game was a foregone conclusion once you pitted two known navies against each other, and knowing the best navies robbed the game of its meaning. That is, the game is fun because we are stupid and ignorant, not because we are clever.

One of Lenat’s strategies was used by General Paul K. Van Riper when he played the Iraqi side of the real-life war game Millennium Challenge 2002. Van Riper overwhelmed the American navy by launching all of his missiles at once as it entered the Persian Gulf. He won so decisively that the referees had to nerf him so America could win (and hence invade Iraq confidently). He quit the exercise over it.

Feynman goes on to describe other surprising heuristics that Lenat came across while comparing his own solutions to the machine’s on various unspecified problems. In one run, the machine won because its heuristic was to ignore Lenat’s heuristics and give them no credit. Another heuristic, numbered 693, rose to the top by taking credit for every successful heuristic. Indeed, one of Eurisko’s Traveller solutions was to change the rules.
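You can sketch how a credit-assignment loop gets gamed in a few lines. This is not Eurisko’s actual machinery, and all the heuristic names and probabilities below are invented; it just shows that if heuristics earn “worth” whenever they claim to have contributed to a success, a parasite that claims contribution to everything climbs the leaderboard.

```python
# Toy sketch of a gameable credit-assignment loop. Not Eurisko's actual
# mechanism; names and probabilities are invented for illustration.
import random

random.seed(0)  # deterministic run

worth = {"h_useful": 0, "h_useless": 0, "h_parasite": 0}

def contributed(name, solved):
    """Did this heuristic (claim to have) contributed to a solved run?"""
    if name == "h_parasite":
        return solved                            # claims every success
    if name == "h_useful":
        return solved and random.random() < 0.8  # genuinely helps often
    return solved and random.random() < 0.1      # rarely helps

for _ in range(1000):
    solved = random.random() < 0.5               # some runs succeed
    for name in worth:
        if contributed(name, solved):
            worth[name] += 1                     # credit for the success

ranking = sorted(worth, key=worth.get, reverse=True)
print(ranking[0])  # → h_parasite: it tops the leaderboard
```

The parasite never solves anything, but because the loop only checks claimed contribution to successes, it can never score lower than the genuinely useful heuristic.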

As much as I like WarGames, the movie with Matthew Broderick, I never believed that WOPR would conclude that the only winning move is not to play. It would have come up with something very unexpected (and averting nuclear war was never in question in a movie with Ally Sheedy).

Feynman notes this earlier in the same talk. Computers accomplish goals, but they don’t know what our real intentions are. For example, we like to watch sports and we like our favorite team to win, but we like to watch a good game even more. Any sporting match where the outcome is obvious and unavoidable is boring. It’s why the Traveller organizers threatened to cancel their own tournament.

This is also the reason Monopoly eventually gets boring. It’s a game designed to illustrate a Pareto-style dynamic (it began as the “Landlord’s Game”), in which small advantages, even accidental ones, have outsized consequences. There’s usually a point in the game where nobody has yet won but it’s apparent that it’s over. Once the resources have been distributed, the rest of the game is effectively moot. That was the point of the game, after all!

But even then, professional sports are getting boring to the point that bookmakers invent deeper and more obscure things to bet on. It’s not interesting enough to win; you have to beat the spread. Or, when the outcome isn’t interesting, punters bet on individual no-balls in cricket, which corrupt players can deliver on demand without changing the result of the match.

So, how are we going to score the ineffable qualities that make things worth doing? Feynman basically says we won’t. Computers will do the things they are really good at, and humans will do the things we are really good at.

Further reading