When one uses a sophisticated computer program such as a tax preparation package, one is
bound to attribute some intelligence to the computer. The computer asks sensible questions
and makes computations that we find a mental challenge. After all, if doing one’s taxes were
easy, we wouldn’t need a computer to do it for us.
As programmers, however, we know that all this apparent intelligence is an illusion.
Human programmers have carefully “coached” the software in all possible scenarios, and it
simply replays the actions and decisions that were programmed into it.
Would it be possible to write computer programs that are genuinely intelligent in some
sense? From the earliest days of computing, there was a sense that the human brain might be
nothing but an immense computer, and that it might well be feasible to program computers
to imitate some processes of human thought. Serious research into artificial intelligence
began in the mid-1950s, and the first twenty years brought some impressive successes. Programs
that play chess—surely an activity that appears to require remarkable intellectual
powers—have become so good that they now routinely beat all but the best human players.
As far back as 1975, an expert-system program called Mycin gained fame for being better at
diagnosing meningitis in patients than the average physician.
However, there were serious setbacks as well. From 1982 to 1992, the Japanese government
embarked on a massive research project, funded at over 40 billion Japanese yen. It was
known as the Fifth-Generation Project. Its goal was to develop new hardware and software
to greatly improve the performance of expert-system software. At its outset, the project created
great fear in other countries that the Japanese computer industry was about to become
the undisputed leader in the field. However, the end results were disappointing and did little
to bring artificial intelligence applications to market.
From the very outset, one of the stated goals of the AI community was to produce software
that could translate text from one language to another, for example from English to
Russian. That undertaking proved to be enormously complicated. Human language appears
to be much more subtle and interwoven with the human experience than had originally been
thought. Even the grammar-checking tools that come with word-processing programs today
are more of a gimmick than a useful tool, and analyzing grammar is just the first step in
translating sentences.
The CYC (from encyclopedia) project, started by Douglas Lenat in 1984, tries to codify
the implicit assumptions that underlie human speech and writing. The team members started
out analyzing news articles and asked themselves what unmentioned facts were necessary to
actually understand the sentences. For example, consider the sentence “Last fall she enrolled
in Michigan State”. The reader automatically realizes that “fall” is not related to falling down
in this context, but refers to the season. While there is a state of Michigan, here Michigan
State denotes the university. A priori, a computer program has none of this knowledge. The
goal of the CYC project is to extract and store the requisite facts—that is, (1) people enroll in
universities; (2) Michigan is a state; (3) many states have universities named X State University,
often abbreviated as X State; (4) most people enroll in a university in the fall. By 1995,
the project had codified about 100,000 common-sense concepts and about a million facts of
knowledge relating them. Even this massive amount of data has not proven sufficient for
useful applications.
In recent years, artificial intelligence technology has seen substantial advances. One of the
most astounding examples is the outcome of a series of “grand challenges” for autonomous
vehicles by the Defense Advanced Research Projects Agency (DARPA). Competitors were
invited to submit computer-controlled vehicles which had to complete obstacle courses,
without a human driver or remote control. The first event, in 2004, was a disappointment,
with none of the entrants finishing the route. In 2005, five vehicles completed a grueling 212
km course in the Mojave Desert. Stanford’s Stanley came in first, with an average speed of 30
km/h. In 2007, DARPA moved the competition to an “urban” environment, an abandoned
air force base. Vehicles had to be able to interact with each other, following California traffic
laws. As Stanford’s Sebastian Thrun explained: “In the last Grand Challenge, it didn’t really
matter whether an obstacle was a rock or a bush, because either way you’d just drive around
it. The current challenge is to move from just sensing the environment to understanding the
environment.”