by Mark Bernstein
Unnerved by an IBM publicity stunt pitting a computer program against a human "champion" in a television game show, the New York Times ran John Markoff's coverage of the contest beneath the headline, "A Fight to Win the Future: Computers vs. Humans." Markoff is worried that natural language understanding will throw legions of workers out of work. Agricultural and industrial automation were one thing, but won't this new technology be the last straw?
Of course it won't. We've experienced this anxiety before -- not just in the industrial revolution, but when we invented books and signage. We could employ people to stand on street corners and direct strangers to Aunt Martha's house. We don't do that anymore; we put up signs and we teach everyone to read them. The Romans put up signs too, though sometimes they made things tricky for outsiders, who had to identify themselves and ask for help. Today, we have street names and systematic addresses and postal districts and zip codes -- Anthony Trollope did a lot of the original work for this -- so now you can find Aunt Martha yourself, and so can her letter carrier.
When books were invented, people worried a lot about the philosophers and tutors who would be out of work. Who needs a teacher when you can run down to the library and look things up in Old Pliny? Medieval scholars worried that, if students were permitted to use books, they would confuse themselves and their classmates. They did; progress has its costs.
Today, much thinking about thinking machines is muddled. We're accustomed to assuming that anything that talks back to us is partly human. If a program asks its users whether it has done a good job, even computer scientists tend to give it higher scores than they give when someone else asks; we know the computer has no feelings to be hurt, but everyone wants to be polite. Chemists once thought there were two entirely different kinds of matter -- stuff from living things, which was "organic," and everything else, which was "inorganic." That idea was called vitalism, and nobody believes it anymore -- except when it comes to machine thinking.
Markoff does make a nice point about two strands of computer research: artificial intelligence, epitomized by John McCarthy, and tool-making, epitomized by Doug Engelbart. But Engelbart's goal of "designing a computing system that would instead 'bootstrap' the human intelligence of small groups of scientists and engineers" was always a personal vision, and I think most of Engelbart's followers and supporters have edited out that embarrassing word "small." Why not help everyone?
For a very long time, we've made machines that answer questions. We need better ways to get answers, and better answers. When we decided that it was self-evident that all men are created equal, lots of people were worried: Won't this be the end of civilized life? Who will make dinner and clean up the mess?
I think we can manage.
Mark Bernstein is chief scientist at Eastgate Systems, where he crafts software for new ways of reading and writing.