Thursday, October 8, 2009

Who's afraid of AI?

"Judgement Day is inevitable," said the terminator with its robotic Austrian accent. The machines are destined to take over the world. Why? A cynic would say that's it's all plot device to justify making yet another sequel to the "Terminator" films. But I think there's more to it.

Modern science fiction is replete with stories of machines becoming "self-aware," "intelligent," or whatever passes for "alive." Why is this theme so popular? It's not peculiar to our time: other eras and cultures told their own stories of creatures made by humans that come alive, turn on their creators, and wreak destruction; think of the golem of Jewish legend or Frankenstein's monster.

Is this truly possible? Should we be afraid? Is there a danger that machines will take over? I don't think so.

We have all experienced programs with bugs, but that's not what I'm talking about. Efforts to simulate the subtler human skills have run into huge obstacles. "Strong AI" dreams of building a system that can match or exceed the intellectual abilities of a human being. We're not there yet. For example, anyone who has used Google's translation service can see the limitations of that tool. Even simply opening our eyes and recognizing objects, something we take for granted, remains extremely difficult for a computer: to get reasonable results, you have to simplify the scene, either by controlling the lighting or by vastly reducing the kinds of objects shown to the computer's camera. Reading handwritten text remains "an active area of research," which means we can't do it.
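To make that brittleness concrete, here is a deliberately naive sketch in Python. The "images" are made-up grids of brightness values, not output from any real camera or vision library, and real recognition systems are far more sophisticated, but the sensitivity to lighting is the same kind of problem they struggle with:

```python
# A toy "object recognizer": it knows exactly one object, a 3x3 bright
# cross, and recognizes it only by exact pixel match against a stored
# template. (The pixel data is invented for illustration.)

TEMPLATE = [
    [0, 9, 0],
    [9, 9, 9],
    [0, 9, 0],
]

def recognize(image):
    """Return True if the image exactly matches the stored template."""
    return image == TEMPLATE

# Under the one lighting condition the programmer anticipated,
# recognition "works":
studio_shot = [
    [0, 9, 0],
    [9, 9, 9],
    [0, 9, 0],
]
print(recognize(studio_shot))   # True

# Brighten every pixel by one unit (the same object under slightly
# different lighting) and the recognizer fails completely:
sunny_shot = [[pixel + 1 for pixel in row] for row in studio_shot]
print(recognize(sunny_shot))    # False
```

The recognizer succeeds in exactly the situation its programmer imagined, and in no other.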

I teach computer programming, and I could say that computer programs carry a certain amount of "intelligence." However, that intelligence comes entirely from the deliberate efforts of the programmer: every bit of "smarts" has to be put into the program by explicit design. In fact, nothing breeds more fear in a programmer's heart than distributing a carefully crafted artifice to users, because they will quickly run into all the real-world situations the programmer forgot to account for. Sometimes this fear hides behind a veil of contempt: "it's hard to make a program idiot-proof, because idiots are so inventive." The contempt obscures the fact that one person's "inventiveness" is simply another person's different set of assumptions. We programmers can't anticipate all the ways users will interact with our software, yet we tend to blame our own lack of imagination on the user. For this reason, the best way to create a user interface is to watch actual users interacting with your program, ask them questions, take some of their answers seriously (but not all of them), and modify the program to incorporate what you learned.
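Here is a small sketch of what I mean by explicit design. The function name and the accepted inputs are my own invention, but the pattern is universal: the program "understands" only the cases its author thought to write down.

```python
# A sketch of "explicit design": this little parser is only as smart
# as the list of cases its programmer anticipated. (Hypothetical
# example; the recognized answers are arbitrary.)

def parse_yes_no(answer):
    """Interpret a user's reply to a yes/no question."""
    normalized = answer.strip().lower()
    if normalized in ("yes", "y"):
        return True
    if normalized in ("no", "n"):
        return False
    # Anything the programmer didn't anticipate falls through here.
    raise ValueError(f"I don't understand {answer!r}")

print(parse_yes_no("Yes"))   # True: anticipated
print(parse_yes_no(" n "))   # False: anticipated

try:
    print(parse_yes_no("sure!"))
except ValueError as error:
    # A perfectly reasonable human reply the program knows nothing about.
    print(error)
```

The program looks "smart" exactly as far as its author's list of anticipated answers extends, and not one reply further.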

However, I'm not simply trying to prove that there are many ill-conceived user interfaces in existence. Surely that needs no proof. My point is that software is fragile. It's very hard to write a program that works reliably for many kinds of users. Making a program that many people can use with ease is like creating a person: an entity aware of the other creatures that will interact with it, and able to adjust to its environment. All the computer programs I've seen are singularly "stupid" in this sense. Any apparent "intelligence" is the result of careful effort by programmers, and every responsive behavior has to be put into the software with deliberate, conscious thought.

For this reason I have no fear that my computer will suddenly come alive and become "sentient," "artificially intelligent," or achieve any of those subtle qualities which we take so much for granted. If creating something that is easy to use is so difficult, how much harder must it be to make something that learns, hates, and eventually comes to wish me harm?