Individual People, Places, and Programs

Names Associated with Significant Milestones in AI



See AITopics section on Aaron


Augusta Ada Byron, Lady Lovelace (1815-1852) → Ada Programming Language

  • Ada Byron, Lady Lovelace, An Analyst and Metaphysician (abstract). Betty Alexandra Toole. IEEE Annals of the History of Computing (Fall 1996) Vol. 18, No. 3; pp. 4-12. "The computer revolution also began with a woman, Augusta Ada Byron, Lady Lovelace, who wrote an article in 1843 that not only gave us descriptive, analytical, contextual, and metaphysical information about the Analytical Engine but also the first 'program.'"
  • Ada Byron, Lady Lovelace. A biography by Dr. Betty Toole. From the Biographies of Women Mathematicians Web Site at Agnes Scott College.
  • Ada Home: The Web Site for Ada. "Since March 1994 this server provides a home to users and potential users of Ada, a modern programming language designed to support sound software engineering principles and practices."
  • "Charles Babbage & Ada Byron (Lady Lovelace) worked on programmable mechanical calculating machines." - Brief History of AI.
  • August 24, 2003: The curious afterlife of Ada Lovelace. By Victoria James. The Japan Times. "Recent years, a century and a half after her death in November 1852 at the age of 36, have witnessed a fierce (and often mudslinging) battle over Ada Lovelace's reputation. ... Now, just as the fuss is dying down in the United States and Britain, the movie that set it all off has come to Japan. 'Conceiving Ada,' directed by Lynne Hershmann Leeson, a professor at the University of California, Davis...."
    • Author's postscript: "Those intrigued by Ada should read 'The Difference Engine' (1990) by William Gibson and Bruce Sterling. This 'what if' account of Victorian England explores what would have happened if Babbage's engines had met with acceptance in his lifetime, ushering in the computer age a century early. Ada, allowed to cheat her early death, becomes the first prophet of artificial intelligence."


The Isaac Asimov Home Page contains a comprehensive collection of resources pertaining to Isaac Asimov (1920-1992), the quintessential author, who in his lifetime wrote over 500 books that enlightened, entertained, and spanned the realm of human knowledge.

Asimov's Three Laws of Robotics -- separate page with discussions of the famous "three laws of robotics".


Biographical Information for Charles Babbage from the Science Museum, London. The Museum houses the world's first complete Difference Engine No. 2, built in 1991 from plans drawn up by Charles Babbage in the 19th century. In 2000 the machine's printer was built and added to the Engine. The Difference Engine No. 2 is surrounded by other Babbage displays, including a portion of the mill of the unfinished Analytical Engine and half of the great man's brain!

"The Charles Babbage Institute is an historical archives and research center of the University of Minnesota. CBI is dedicated to promoting study of the history of information technology and information processing and their impact on society." Don't miss their collection of oral histories.


Thomas Bayes (1702-1761) → Bayes' Theorem

  • 18th-century theory is new force in computing. By Michael Kanellos. CNET (February 18, 2003). "Thomas Bayes, one of the leading mathematical lights in computing today, differs from most of his colleagues: He has argued that the existence of God can be derived from equations. His most important paper was published by someone else. And he's been dead for 241 years. Yet the 18th-century clergyman's theories on probability have become a major part of the mathematical foundations of application development. Search giant Google and Autonomy, a company that sells information retrieval tools, both employ Bayesian principles to provide likely (but technically never exact) results to data searches. Researchers are also using Bayesian models to determine correlations between specific symptoms and diseases, create personal robots, and develop artificially intelligent devices that 'think' by doing what data and experience tell them to do."
  • A biography by J. J. O'Connor and E. F. Robertson of the School of Mathematics and Statistics, University of St Andrews, Scotland.
  • Bayesian logic, a whatis definition from TechTarget. "Bayes first proposed his theorem in his 1763 work (published two years after his death in 1761), An Essay Towards Solving a Problem in the Doctrine of Chances. Bayes' theorem provided, for the first time, a mathematical method that could be used to calculate, given occurrences in prior trials, the likelihood of a target occurrence in future trials. According to Bayesian logic, the only way to quantify a situation with an uncertain outcome is through determining its probability."
  • Bayesian Inference - computer applications. From Wikipedia, the free encyclopedia.
  • See our Uncertainty / Probability and Machine Learning pages.
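The theorem the articles above describe can be stated in a few lines of Python; the spam-filter numbers below are invented purely for illustration:

```python
def bayes(prior, likelihood, evidence):
    """P(H|E) = P(E|H) * P(H) / P(E)  (Bayes' theorem)."""
    return likelihood * prior / evidence

# Hypothetical example: 1% of messages are spam (prior),
# 90% of spam contains the word "offer" (likelihood),
# and "offer" appears in 5% of all messages (evidence).
p_spam_given_offer = bayes(prior=0.01, likelihood=0.9, evidence=0.05)
print(round(p_spam_given_offer, 3))  # 0.18
```

This is the "likelihood of a target occurrence given prior trials" calculation that the TechTarget definition above describes, reduced to its simplest form.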

Bletchley Park

Site of important code-breaking work by the British in World War II, where Alan Turing did classified work on the Enigma machine.


George Boole (1815-1864) → Boolean Algebra, Boolean Logic

  • George Boole. By J. J. O'Connor and E. F. Robertson, School of Mathematics and Statistics University of St. Andrews, Scotland. "Boole approached logic in a new way reducing it to a simple algebra, incorporating logic into mathematics. He pointed out the analogy between algebraic symbols and those that represent logical forms. It began the algebra of logic called Boolean algebra which now finds application in computer construction, switching circuits etc."
  • The Isaac Newton of logic - It was 150 years ago that George Boole published his classic The Laws of Thought, in which he outlined concepts that form the underpinnings of the modern high-speed computer. By Siobhan Roberts. The Globe and Mail (March 27, 2004; page F9).
  • The Calculus of Logic. By George Boole. Cambridge and Dublin Mathematical Journal Vol. III (1848), pp. 183-98. (Transcribed by D.R. Wilkins, School of Mathematics Trinity College, Dublin.)
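Boole's reduction of logic to a "simple algebra," described above, is exactly what lets logical identities be checked mechanically today; a minimal sketch verifying one Boolean identity over all truth assignments:

```python
from itertools import product

# Verify the distributive law of Boolean algebra,
# x AND (y OR z) == (x AND y) OR (x AND z), over all eight assignments.
assert all(
    (x and (y or z)) == ((x and y) or (x and z))
    for x, y, z in product([False, True], repeat=3)
)
print("distributive law holds")
```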

Carnegie-Mellon University (CMU)

Newell, Allen. 1984. Introduction to the COMTEX Microfiche Edition of Reports on Artificial Intelligence from Carnegie-Mellon University. AI Magazine 5(3): Fall 1984, 35-39.

Carnegie Mellon University Libraries Computer Science Archives. "Artificial intelligence and computer science are two of the strongest research areas within the Carnegie Mellon University Archives. Collections include university records pertaining to integration of computers into the academic sector and papers of internationally influential faculty members. The Archives is the repository for the papers of Professors Allen Newell and Herbert A. Simon." The archive also contains the Pamela McCorduck Collection and the Arie Nicolaas (Nico) Habermann Collection.

  • Also see their online exhibit, Mind Models: Artificial Intelligence Discovery At Carnegie Mellon. "For a half century, Carnegie Mellon University has been a leader in the research and design of artificial intelligence (AI), the creation of 'thinking machines'. Many of Carnegie Mellon's achievements came from pioneering work by professors Herbert A. Simon and Allen Newell. This on-line exhibit explores their work using digital surrogates of reports, papers, and downloadable video clips."

Celebrities of Cognitive Science

Links to biographical information on many individuals in AI.

Chess-Playing Programs

Mastering the Game: A History of Computer Chess by the Computer History Museum. Explanations of the history of computer chess from the museum's exhibit.

Also see AITopics Chess section and section on The Turk.

The Computer Conservation Society (UK)

The Computer Conservation Society (UK). "The Society was formed in 1989 as an initiative between the British Computer Society and the Science Museum of London. It was a time when the computer industry had existed for about half a century, and when many people had spent a professional lifetime in the industry. The industry had matured, but was still poised for ever greater technological and social changes as it had been from its beginnings in the 1940s. It was time to take stock and reflect on the extraordinary developments to date, and in particular, to be concerned that many of the pioneering people and hardware and software were fast disappearing."

The Computer History Museum (US)

Computer History Museum, "the world's largest and most significant history museum for preserving and presenting the computing revolution and its impact on the human experience. It allows you to discover how computing became the amplifier for our minds and changed the way we work, live and play."


Connectionism "is a theory of information processing ... [that relies] on parallel processing of sub-symbols, using statistical properties instead of logical rules to transform information. Connectionists base their models upon the known neurophysiology of the brain..." From A Brief History of Connectionism by David Medler (downloadable PDF file).

The CYC Project

The CYC Project is a long-term effort to represent common-sense knowledge and build a very large knowledge base of facts and relations.

Deep Blue

See our Chess page.

De Morgan

"Augustus De Morgan ... was arguably the premier English mathematician of the early Victorian period. As the first professor of mathematics at the London University (later University College London [UCL]), he was by all accounts an inspired mathematical teacher; his students included many of the younger generation's most powerful intellects, among them J. J. Sylvester, Stanley Jevons, Isaac Todhunter, and, outside of the classroom, Ada Lovelace. ... Mathematicians still remember him for his contributions to the foundations of algebra, for recognizing the power of mathematical induction, for De Morgan's laws, for the logic of relations, for posing the four-color problem, and, always, for the wonderful Victorian wit that permeated absolutely all of his writings." - from Joan L. Richards, "This Compendious Language": Mathematics in the World of Augustus De Morgan, Isis, Vol. 102, No. 3 (September 2011), pp. 506-510.
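De Morgan's laws, mentioned above, can be verified exhaustively in the same mechanical style; a minimal sketch:

```python
from itertools import product

# De Morgan's laws: NOT (x AND y) == (NOT x) OR (NOT y), and its dual.
for x, y in product([False, True], repeat=2):
    assert (not (x and y)) == ((not x) or (not y))
    assert (not (x or y)) == ((not x) and (not y))
print("De Morgan's laws hold")
```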


Lederberg's Dendritic Algorithm for generating chemical graphs. Also the name of the first expert system, which was developed at Stanford by Lederberg, Feigenbaum, Buchanan, Sutherland, and others to solve chemical structure identification problems in organic chemistry. See Joshua Lederberg's 1987 paper "How Dendral was Conceived and Born".


The History of Artificial Intelligence at Edinburgh University: a Perspective. By Jim Howe (June 2007 revision). "The Department of Artificial Intelligence can trace its origins to a small research group established in a flat at 4 Hope Park Square in 1963 by Donald Michie, then Reader in Surgical Science. During the Second World War, through his membership of Max Newman's code-breaking group at Bletchley Park, Michie had been introduced to computing and had come to believe in the possibility of building machines that could think and learn. By the early 1960s, the time appeared to be ripe to embark on this endeavour."



Eliza Doolittle → ELIZA, the chatterbot.

  • "ELIZA was developed between 1964 and 1966 by Joseph Weizenbaum at MIT as part of the MAC timesharing project. Weizenbaum chose the name ELIZA after Eliza Doolittle, the protagonist of G.B. Shaw’s play Pygmalion. Like this character, the program is to some extent able to improve its apparent communicative skill by recycling a user’s responses, though without actually developing a deeper knowledge of meaning and reality."
  • Talk to her - Artificial intelligence vs. human stupidity. By Victoria James. The Japan Times (November 23, 2003). "The earliest chatterbot programs ever written say more about the human condition than they do about the nature of computer intelligence. The first, ELIZA -- or Dr. Eliza, as 'she' was known -- had the persona of a Rogerian psychotherapist. Her successor, perhaps the inspiration for Marvin, the 'paranoid android' of Douglas Adams' anarchic 'The Hitchhiker's Guide to the Galaxy' novels, was named PARRY and was programmed to display the behavioral hallmarks of a paranoid schizophrenic. ... [Joseph] Weizenbaum recognized that Alan Turing's 'Imitation Game' test of computer intelligence required merely that the computer simulate intelligence, so he used some simple semantic tricks to create the desired effect. (It's no coincidence that his program shares the name of Eliza Doolittle, the erstwhile heroine of George Bernard Shaw's 'Pygmalion,' a flower girl trained up to act like a lady in a perfect example of an 'imitation game.') ... In 1994, the term 'chatterbot' was established in the AI lexicon by Michael Mauldin of Carnegie Mellon University, in his account of entering the Loebner contest."
  • Visit our collection of chatterbots . . . and find out more about Alan Turing.
  • And see Joseph Weizenbaum's obituaries.
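The response-recycling trick described above can be sketched in a few lines; the single pattern and the pronoun table here are invented placeholders, far simpler than Weizenbaum's actual script:

```python
import re

# Minimal ELIZA-style response: reflect the user's own words back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return "Why do you feel " + reflect(m.group(1)) + "?"
    return "Please tell me more."

print(respond("I feel my work is ignored"))
# Why do you feel your work is ignored?
```

As the articles note, nothing here develops "a deeper knowledge of meaning and reality"; the apparent understanding is pure pattern matching.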


"Lightning Strikes Mathematics" by Alan Rose, Popular Science (April, 1946), pp. 83-85. Early detailed description, with photos, of the ENIAC, "the world's first all-electronic computer".


Kurt Gödel, mathematician, perhaps best known for Gödel's Theorem, which is explained in a review of Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter, Basic Books (1979), from the NY Review of Books: "The Dream of Mind and Machine," by Edward Rothstein (December 6, 1979). "[It] has become almost essential to gain some understanding of Gödel's theorem; it is not only called upon in discussions of philosophy of mathematics, but also in essays on science, music, and literature."


Information Processing Language, IPL. IPL-V was widely used at CMU (then Carnegie Institute of Technology) for early work in AI.


Sir James Lighthill chaired a commission in the UK that panned AI and created a funding crisis for several years. A debate with noted AI researchers in 1973 did not change his mind. See the Lighthill Controversy Debate at the Royal Institution with Professor Sir James Lighthill, Professor Donald Michie, Professor Richard Gregory and Professor John McCarthy. BBC TV (June 1973) / video available in several formats from AIAI, The University of Edinburgh's Artificial Intelligence Applications Institute.


History of Lisp: Web site devoted to the history of the language.

McCarthy, John. 1978. History of LISP. In History of Programming Languages: Proceedings of the ACM SIGPLAN Conference, 1978, ed. Wexenblatt, R. L., 173-197. New York: Academic Press, 1981.

Logic Programming

Middle History of Logic Programming, by Carl Hewitt.


In Britain in the early 1800s, automation in the textile mills put many people out of work. Gangs of angry men started destroying the machinery. One of the leaders of the rampaging mobs was said to be Ned Ludd, which is why opponents of technological change are referred to as "Luddites".

For a scholarly treatment of the Luddites, see Sale, Kirkpatrick. 1996. Rebels Against the Future: The Luddites and their War on the Industrial Revolution. Reading, MA: Addison-Wesley Publishing Co. Providing first a fascinating history of the early-1800s Luddite movement, the author describes transformative changes wrought by technology and computerization in our time, and claims that, contrary to popular belief, technology is neither neutral nor subservient to humankind.


John McCarthy

  • Watch John McCarthy explain how AI got its name in the video: Lighthill Controversy Debate at the Royal Institution.
    • Getting machines to think like us. Newsmaker interview with John McCarthy. By Jonathan Skillings. CNET (July 3, 2006). "In 1956, a group of computer scientists gathered at Dartmouth College to delve into a brand-new topic: artificial intelligence. ... It was [John] McCarthy who put the name 'artificial intelligence' to the field of study, just ahead of the conference. With Dartmouth hosting a 50th anniversary conference this month, McCarthy--now a professor emeritus at Stanford University--spoke with CNET about the early expectations for AI, the accomplishments since then and what remains to be done.
    • Resources from the The Charles Babbage Institute (CBI), University of Minnesota:
    • ...and check out:
      • AI@50, the Dartmouth Artificial Intelligence Conference: The Next Fifty Years (AI@50) to honor the fiftieth anniversary of the 1956 Dartmouth Summer Research Project.
        • Related podcast: Golden Anniversary For AI. Dartmouth News: Views from the Green (May 5, 2006). "The field of artificial intelligence was officially named 50 years ago by Dartmouth Professor John McCarthy when he convened the 1956 Dartmouth Summer Research Project on Artificial Intelligence. In this podcast, philosophy professor Jim Moor discusses the history of AI and some of the philosophical questions he's been thinking about. He also talks about this summer's AI@50 conference, which will be held July 13-15 at Dartmouth."
      • 50 Years - Artificial Intelligence Symposium at KI 2006 (17 June 2006, Bremen). "This year, the Artificial Intelligence community celebrates the golden anniversary of the 1956 Dartmouth Conference that marks the beginning of AI as a research field. This symposium will take stock of the promises and achievements of AI and looks ahead to the next 50 years. The meeting will be moderated by Wilfried Brauer (TU München / U Bremen)." Talks available for download include:
        • Marvin Minsky, "Father of AI" and organizer of the 1956 Dartmouth conference: 1956-1966 How did it all begin? - Issues then and now;
        • Hiroshi Ishiguro (University of Osaka), who constructs the most humanlike robots around: 2006-2056 Projects and Vision in Robotics;
        • Bernd Reuse, Welcome Address by the Federal Ministry of Education and Research: 30 Years of Funding for Research into Artificial Intelligence in Germany;
        • Aaron Sloman, Professor of Artificial Intelligence and Cognitive Science, The University of Birmingham (UK): 1966-1976 Fundamental questions;
        • Wolfgang Wahlster, AI pioneer in natural language interaction, winner of many awards, including the innovation prize 2001 of the President of the Federal Republic of Germany: 1986-1996 Three Decades of Human Language Technology in Germany.


Donald Michie

The very early days. An interview (available in PDF, Quicktime, and Realmedia) with Donald Michie, Professor Emeritus at the University of Edinburgh, and currently a visitor at NSW University of Technology. "Interested in AI from 1942, Donald Michie conceived, founded and directed the UK's first AI laboratory at Edinburgh, and has since been active in AI projects around the World. ... His talk will cover the period from 1942, when Alan Turing was a colleague at Bletchley Park, up to 1965, when the Edinburgh AI laboratory was truly launched. He will cover the theories, the practice, the personalities and the politics, and on past form may be expected to do so without pulling any punches." This is just one of the 4 presentations given at the October 2002 seminar, Artificial Intelligence - Recollections of the Pioneers. The four are:

  • Donald Michie, Professor Emeritus at the University of Edinburgh, and currently a visitor at NSW University of Technology: The very early days.
  • Jim Doran, Essex University: Edinburgh & Essex - the past & the future for AI.
  • Aaron Sloman, Birmingham University: AI and the study of mind.
  • Austin Tate, AIAI, University of Edinburgh: Putting AI to Use.


Marvin Minsky

Interview with Marvin Minsky (November 3, 2010). PBS NOVA interview touching on Minsky's views on AI.

In Honor of Marvin Minsky’s Contributions on his 80th Birthday. Downloadable PDF from AI Magazine Volume 28 Number 4 (2007). Tributes by numerous AI scientists recalling Minsky's influential work.

Discover Interview: Marvin Minsky. By Susan Kruglinski. Discover Magazine (January 2007). "The legendary pioneer of artificial intelligence ponders the brain, bashes neuroscience, and lays out a plan for superhuman robot servants."


Minsky, Marvin. 1983. Introduction to the COMTEX Microfiche Edition of the Early MIT Artificial Intelligence Memos. AI Magazine 4(1): Spring 1983, 19-22.

MIT Laboratory for Computer Science's timeline of major milestones.

Recovering MIT's AI Film History - Early Artificial Intelligence Research Caught on Film. "Here you will find a chronology of some of AI's most influential projects and how they worked. It is intended for both non-scientists and those ready to continue experimentation and research tomorrow. Included is a taste of who the main players have been, concepts they and their projects have explored and how the goals of AI have evolved and changed over time. Many will be surprised that some of what we now consider standard tools like search engines, spell check and spam filters are all outcroppings of AI research."

  • In addition to the "treasure trove of ninety-six film reels found in the old MIT TechSquare," be sure to also check out:
    • Oral Histories (broken link) - informal film interviews were conducted during the AAAI 50th Anniversary Celebration (Fellows Symposium) and during the summer of 2006. Participants: Ron Brachman, Danny Bobrow, Jim Hendler, Nils Nilsson, Ben Kuipers, Manuela Veloso, Ed Feigenbaum, Randall Davis, Harry Barrow, Bruce Buchanan, Alan Bundy, Jon Doyle, Drew McDermott, Ryszard Michalski, Chuck Rich, Edwina Rissland, Candace Sidner, Reid Simmons, Gerry Sussman, Beverly Woolf, Peter Szolovits, Henry Kautz, Bart Selman, Bill Swartout, Pat Winston, Rod Brooks, and Marvin Minsky.


"Mycin" is the suffix of a common class of antimicrobial drugs, e.g., vancomycin, used to treat bacterial infections. The computer program MYCIN deals with the problem of diagnosing and recommending therapy for severe infections. The name 'EMYCIN' was given to the Essential-MYCIN "shell", i.e., the inference engine, explanation system, representation framework, and other programming tools without any of MYCIN's medicine-specific knowledge. See the book Rule-Based Expert Systems for more.
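The EMYCIN idea of a domain-independent "shell" can be sketched in miniature: the engine below knows nothing about medicine, and all domain knowledge lives in a separate rule list. The toy rules are invented for illustration (real MYCIN rules also carried certainty factors and ran backward, not forward):

```python
# A toy rule "shell": (conditions, conclusion) pairs hold the domain
# knowledge; the forward_chain engine is entirely domain-independent.
RULES = [
    ({"gram_negative", "rod_shaped"}, "enterobacteriaceae"),
    ({"enterobacteriaceae", "lactose_fermenter"}, "e_coli_suspected"),
]

def forward_chain(facts, rules):
    """Apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# 'e_coli_suspected' is inferred via the intermediate conclusion.
print(forward_chain({"gram_negative", "rod_shaped", "lactose_fermenter"}, RULES))
```

Swapping RULES for a different domain's rules, without touching the engine, is precisely the "shell" idea.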


Allen Newell

In Pursuit of Mind: The Research of Allen Newell. By John E. Laird and Paul S. Rosenbloom. AI Magazine 13(4): Winter 1992, 17-45. "Allen Newell was one of the founders and truly great scientists of AI. His contributions included foundational concepts and ground-breaking systems. His career was defined by the pursuit of a single, fundamental issue: the nature of the human mind. This article traces his pursuit from his early work on search and list processing in systems such as the LOGIC THEORIST and the GENERAL PROBLEM SOLVER; through his work on problem spaces, human problem solving, and production systems; through his final work on unified theories of cognition and SOAR." Included within the article is a remembrance of Allen Newell written by Herbert Simon.

Allen Newell (March 19, 1927 - July 19, 1992): A Biographical Memoir by Herbert A. Simon (1997). From the National Academy of Sciences' collection of Biographical Memoirs. "Work was pursued simultaneously on a programming language that would be adequate for implementing the design, leading to the invention of the Information Processing Languages (IPLs), the first list-processing languages for computers. ... To achieve this flexibility and generality the IPLs introduced many ideas that have become fundamental for computer science in general, including lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, and generators (streams). The IPL-V Manual (Newell, 1961), exploiting the closed subroutine structure of the language, advocated a programming strategy that years later would be reinvented independently as structured programming--mainly top-down programming that avoided go-to's. LISP, developed by John McCarthy in 1958, which embedded these list-processing ideas in the lambda calculus, improved their syntax and incorporated a 'garbage collector' to recover unused memory, soon became the standard programming language for artificial intelligence (AI)."

• Allen Newell & Herb Simon

Over the holidays 50 years ago, two scientists hatched artificial intelligence. By Byron Spice. (January 2, 2006). "Fifty years ago, Herbert A. Simon and Allen Newell had a Christmas break story that would top them all. 'Over the Christmas holiday,' Dr. Simon famously blurted to one of his classes at Carnegie Institute of Technology, 'Al Newell and I invented a thinking machine.' It was another way of saying that they had invented artificial intelligence -- in fact, the only way of saying it in the winter of 1955-56 because no one had gotten around to inventing the term 'artificial intelligence.'"


Pandemonium, the capital of Hell in John Milton's Paradise Lost   →  Oliver Selfridge's classic paper, Pandaemonium (a/k/a Pandemonium)

  • "The word Pandemonium can be either upper or lower case. The uncapitalized term names 'a tumult or wild uproar,' while the capitalized version refers to 'the infernal regions, or to the capital of Hell in John Milton's Paradise Lost.' When Milton coined Pandemonium for his epic poem, he combined the Greek pan meaning 'all, or every,' with the Latin daemonium, or 'evil spirit.'" - from Merriam-Webster's Word for the Wise, broadcast of September 5, 2001: "We recently heard from a fellow interested in the story behind the word pandemonium...."
  • "Walt Bunch believes the term [demon / daemon] comes from the demons in Oliver Selfridge's paper 'Pandemonium', MIT 1958, which was named after the capital of Hell in Milton's 'Paradise Lost'. Selfridge likened neural cells firing in response to input patterns to the chaos of millions of demons shrieking in Pandemonium." - from the definition of "demon" in FOLDOC.
    • "Demons (parts of programs) are particularly common in AI programs. For example, a knowledge-manipulation program might implement inference rules as demons." - from the definition of "demon" in FOLDOC.
  • "Pandemonium - Model of feature detection developed by Selfridge (1959): originally to recognise Morse code patterns, but later developed by Lindsay and Norman (1972) into a bottom-up theory of letter recognition. Quite apart from its usual meaning, the word 'pandemonium' refers to the dwelling-place of all the demons. Selfridge made use of both meanings of the word in his model, in which neurons or neural clusters 'shriek' to indicate the presence of particular features of the perceived stimulus." - from the Psybox Online Dictionary's definition of "Pandemonium".
  • Agents: from Pandemonium to ... whither? - Oliver Selfridge. "His pandemonium paper of 1958 is recognized as the beginning of breakthroughs in several fields." - Fifth Annual New Paradigms for Using Computers Workshop, IBM Almaden Research Center.
  • "Pandemonium consists of four separate layers: each layer is composed of 'demons' specialized for specific tasks. The bottom layer consists of data or image demons that store and pass on the data. The third layer is composed of computational demons that perform complicated computations on the data and then pass the results up to the next level. The second layer is composed of cognitive demons who weight the evidence from the computational demons and 'shriek' the amount of evidence up to the top layer of the network. The more evidence that is accumulated, the louder the shriek. At the top layer of the network is the decision demon, who simply listens for the loudest 'shriek' from the cognitive demons, and then decides what was presented to the network." - from A Brief History of Connectionism, by David A. Medler (downloadable PDF file).
  • Also see: Agents (including Agent Architecture), Cognitive Science, Vision, Neural Networks and Multi-Agent Systems
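The layered architecture quoted above can be sketched in miniature; the letter features and the evidence-counting "cognitive demons" here are invented placeholders for Selfridge's learned feature weights:

```python
# Toy Pandemonium: computational demons report features, cognitive demons
# "shriek" in proportion to matched evidence, and a decision demon simply
# picks the loudest shriek.
FEATURES = {"A": {"apex", "crossbar"}, "L": {"vertical", "baseline"}}

def decision_demon(observed_features):
    shrieks = {
        letter: len(observed_features & feats)  # cognitive demons' volume
        for letter, feats in FEATURES.items()
    }
    return max(shrieks, key=shrieks.get)        # loudest shriek wins

print(decision_demon({"apex", "crossbar"}))  # A
```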


Seymour Papert

Works by Seymour Papert, Ph.D. Bibliography and online articles by the influential educational psychologist Seymour Papert. In the early 1960s, Papert came to MIT where, with Marvin Minsky, he founded the Artificial Intelligence Laboratory and co-authored their seminal work, Perceptrons (1969). Papert's Principle was enshrined in The Society of Mind by Marvin Minsky (Touchstone, 1988), p. 102: "Some of the most crucial steps in mental growth are based not simply on acquiring new skills, but on acquiring new administrative ways to use what one already knows."


Charles Sanders Peirce

Charles S. Peirce Studies (Home page for biographical information as well as many online papers). Peirce was a 19th-century polymath who influenced the development of logic, probability, epistemology, and many other fields that touch on AI. "Who is the most original and the most versatile intellect that the Americas have so far produced? The answer 'Charles S. Peirce' is uncontested, because any second would be so far behind as not to be worth nominating."


How Portugal Celebrated AI's 50th Anniversary. By Carlos Ramos. IEEE Intelligent Systems 21(4): July/August 2006, 86-88. This article describes the celebration and related museum exhibition, and also provides a brief history of AI in Portugal: "Luís Moniz Pereira is considered the father of AI in Portugal. In 1966, as an undergraduate student, he created the Center for Cybernetics Studies. ..."


The Robocup Home Page provides details and videos on Robocup Soccer and Robocup Rescue, plus information about the organization responsible for robot soccer tournaments.



Oliver Selfridge

Selfridge, Oliver G. (1993). The Gardens of Learning: A Vision for AI. AI Magazine 14(2): Summer 1993, 36-48. "I have watched AI since its beginnings ... In 1943, I was an undergraduate at the Massachusetts Institute of Technology (MIT) and met a man whom I was soon to be a roommate with. He was but three years older than I, and he was writing what I deem to be the first directed and solid piece of work in AI. His name was Walter Pitts, and he had teamed up with a neurophysiologist named Warren McCulloch, who was busy finding out how neurons worked (McCulloch and Pitts 1943)."

Semantic Web

"Tim Berners-Lee coined the vision of a 'semantic web' in which background knowledge is stored on the meaning or content of web resources through the use of machine-processable metadata. The semantic web should be able to support automated services based on these descriptions of semantics. The semantic or 'knowledge' web is seen as a key factor in finding a way out of the growing problems of traversing the expanding web space, where currently most web resources can only be found through syntactic matches (e.g., keyword search)." [Taken from The ETAI definition of the semantic web]
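The contrast with purely syntactic keyword search can be illustrated with a toy triple store; the RDF-style triples and the `query` helper below are invented for illustration:

```python
# A few RDF-style (subject, predicate, object) triples -- toy metadata.
TRIPLES = [
    ("Turing", "bornIn", "London"),
    ("Turing", "workedAt", "Bletchley Park"),
    ("McCarthy", "coined", "artificial intelligence"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

print(query(subject="Turing"))  # both Turing triples
```

Because the relations are explicit, a machine can answer "what did Turing do?" rather than merely finding pages containing the string "Turing".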


Claude Shannon

Claude E. Shannon: Founder of Information Theory. By Graham P. Collins. Scientific American Explore (October 14, 2002). "Shannon's M.I.T. master's thesis in electrical engineering has been called the most important of the 20th century: in it the 22-year-old Shannon showed how the logical algebra of 19th-century mathematician George Boole could be implemented using electronic circuits of relays and switches. This most fundamental feature of digital computers' design -- the representation of 'true' and 'false' and '0' and '1' as open or closed switches, and the use of electronic logic gates to make decisions and to carry out arithmetic -- can be traced back to the insights in Shannon's thesis."
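Shannon's thesis insight, that Boole's logical algebra can be realized as switching circuits, can be sketched by composing gate functions into a half-adder, the smallest circuit that "carries out arithmetic":

```python
# Logic gates as Boolean functions (Shannon: relays and switches realize
# exactly these operations).
def AND(a, b): return a and b
def XOR(a, b): return a != b

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

# Truth table: a b sum carry
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(bool(a), bool(b))
        print(a, b, int(s), int(c))
```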



  • "SHRDLU is a program for understanding natural language, written by Terry Winograd at the M.I.T. Artificial Intelligence Laboratory in 1968-70."


Herbert Simon

Short Autobiography of Herb Simon from the Nobel Prize biographies.

Stewart, Doug. Interview with Herbert Simon (June 1994). Omni Magazine archives.

Software History Center

Software History Center. "The Software History Center within the Computer History Museum is dedicated to preserving the history of the software industry, one of the largest and most influential industries in the world today. The industry originated with the entrepreneurial computer software and services companies of the 1950s and 1960s, grew dramatically through the 1970s and 1980s to become a market force rivaling that of the computer hardware companies, and by the 1990s had become the supplier of technical know-how that transformed the way people worked, played and communicated every day of their lives. The SHC is working to preserve for future generations information about the companies, people, products, and events that shaped the evolution of this vital industry."

Stanford Artificial Intelligence Lab (SAIL)

Buchanan, Bruce G. (1983). Introduction to the COMTEX Microfiche Edition of Memos from the Stanford University Artificial Intelligence Laboratory. AI Magazine 4(4): Winter 1983, 37-41.

Reddy, Raj. (2006). PowerPoint slides from a talk by Raj Reddy emphasizing the importance of the research at SAIL in the 1960s.

Stanford Research Institute (SRI International)

Nilsson, Nils J. (1984). Introduction to the COMTEX Microfiche Edition of the SRI Artificial Intelligence Center: Technical Notes. AI Magazine 5(1): Spring 1984, 41-52.

SRI-AIC Timeline: "The Artificial Intelligence Center at SRI International has been a center of excellence and innovation for over thirty-five years. The Center has made many important contributions to the AI field over the years, and new advances continue to be made. This timeline shows just a few of the AI Center's major achievements and milestones."


A.M. Turing

Turing's Enduring Importance. By Simson L. Garfinkel, MIT Technology Review (March/April 2012). Very short description of why Turing is such an important figure in the history of computing. "The path computing has taken wasn't inevitable. Even today's machines rely on a seminal insight from the scientist who cracked Nazi Germany's codes."

Turing Centenary Year Celebration 2012. Web site links to many relevant publications and external sites, as well as to descriptions of events around the world celebrating Turing's life.

Turing at 100. Special issue of Nature (February 2012), one of the world's leading science journals, devoted to Turing's contributions.

The Turing Scrapbook contains many links to information about Turing.

Alan Turing and his machines - fresh insights into the enigma. By Matilda Battersby, The Independent (UK), (June 14, 2012). Article on the personal side of Turing, with interviews with his nephew and his assistant at Bletchley Park.

Turing's classic article (1950), "Computing Machinery and Intelligence," was reprinted in Computers and Thought.

Also see section on Turing's Test.

The Turk

Monster in a Box - The inside story of an ingenious chess-playing machine that thrilled crowds, terrified opponents, and won like clockwork. By Tom Standage. Wired (March 2002; Issue 10.03).

  • Also see his book, The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine.

The Grandmaster Hoax. By Lincoln Michel. The Paris Review Daily (March 28, 2012). "Although the Turk lacked a mechanical mind, he is in one way the wooden grandfather of all our digital world. Charles Babbage, often known as the father of the computer, was defeated twice by the Turk. He knew it was a trick, but it also sparked the idea that machines could think intelligently."


Unimate: The Story of George Devol and the First Robotic Arm. By Rebecca J. Rosen, The Atlantic (August 16, 2011). Shows diagram from 1961 patent application; links to 1966 video of Devol demonstrating the Unimate arm on TV.


John Venn

Survey of Venn Diagrams

Vision Systems

Some Famous Vision Systems maintained by CVOnline.

Page last modified on June 26, 2012, at 12:59 PM