In 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organized the Dartmouth Summer Research Project on Artificial Intelligence. The conference proposal said:
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
Fifty-one years later, if we go by Jonathan Schaeffer and his team of researchers at the University of Alberta (www.cs.ualberta.ca/~chinook), we know which player, man or machine, has a win in checkers.
Okay, that wasn't fair. It took more than a summer, but significant advances have been made on all of the targeted problems. Despite a seemingly genetic propensity to overpromise, the field of artificial intelligence has accomplished a lot in the past five decades. On the occasion of the 22nd annual AAAI conference this past July, we thought it appropriate to reflect on AI's 51-year history and check in with some experts about the state of AI in 2007.
Hype and History
When, in 1981, Avron Barr, Edward Feigenbaum, and Paul Cohen published their multivolume The Handbook of Artificial Intelligence, they defined the field as "the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior." Dennis Merritt's definition, "the art and science of making computers do interesting things that are not in their nature," articulated in Dr. Dobb's AI Expert Newsletter, is more or less the same idea minus some of the anthropocentrism. John McCarthy, who has creator's rights to the field, defined it as "the science and engineering of making intelligent machines."
Judging by the kinds of research presented this July, any of these definitions would still capture the field today.
Back in 1981, Feigenbaum et al. reckoned that AI was already 25 years old, dating it to that Dartmouth conference. By age 25, AI was a gangly and arrogant youth, yearning for a maturity that was nowhere evident. If in 1956 the themes were natural-language processing, abstractions/concepts, problem solving, and machine learning, by 1981 the focus wasn't that different: natural-language processing, cognitive models/logic, planning and problem solving, vision (in robotics), and machine learning, plus the core methods of search and knowledge representation. Feigenbaum et al. showcased applications in medicine, chemistry, education, and other sciences.
That was the heyday of AI. "The early 1980s were... the last opportunity to survey the whole field.... AI was already growing rapidly, like our ubiquitous search trees..." (Barr, Cohen, and Feigenbaum, The Handbook of Artificial Intelligence, Volume 4, 1989.) And largely that meant it was the heyday of the symbolist approach.