Philosophical Implications of AI


Artificial Intelligence cannot avoid philosophy.

If a computer program is to behave intelligently in the real world, it must be provided with some kind of framework into which to fit particular facts it is told or discovers. This amounts to at least a fragment of some kind of philosophy, however naive.

- John McCarthy
Mathematical Logic in Artificial Intelligence. Daedalus 117(1): 297-310, Winter 1988


Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programming? Do human beings occupy a privileged place in the universe? "Is it reasonable to ascribe consciousness to a droll and well-mannered aunt, yet deny it in a robot that behaves like one?" How do we acquire knowledge of the world? What do our languages tell us about our minds or the world? What is knowledge? What is a proof? What is art? Most of the sources listed below discuss issues in the philosophy of mind, epistemology, and the philosophy of language. Social and ethical considerations are certainly related, but these are listed separately, on the Social and Ethical Implications page. Similarly, many works of science fiction deal with philosophical and social issues, and these are listed separately, on the Science Fiction page.

Good Starting Places

What has AI in Common with Philosophy? John McCarthy, February 29, 1996. "AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human level intelligence and ability to learn from its experience, needs a general world view in which to organize facts. It turns out that many philosophical problems take new forms when thought about in terms of how to design a robot. Some approaches to philosophy are helpful and others are not."


AI’s Half-Century. By Margaret A. Boden. AI Magazine 16(4): Winter 1995, 96-99. "Part of what it means to say that something is philosophically interesting is that it is highly controversial --- and AI is. ... What of consciousness? ... As though all these philosophical disputes weren’t enough, neo-Heideggerian murmurings are afoot. They threaten the fundamental assumptions of AI, for they reject the subject-object distinction presupposed by materialists and idealists alike, and deny the epistemological primacy of science."

This Week on Philosophy Talk - Artificial Intelligence (May 20, 2007 radio broadcast; audio available online). With Ken Taylor and John Perry of Stanford University. KALW, 91.7 FM, San Francisco. "At least some versions of artificial intelligence are attempts not merely to model human intelligence, but to make computers and robots that exhibit it: that have thoughts, use language, and even have free will. Does this make sense? What would it show us about human thinking and consciousness? Join John and Ken [and guest, Marvin Minsky] as they uncover the philosophical issues raised by artificial intelligence."

General Readings

The Real Transformers - Researchers are programming robots to learn in humanlike ways and show humanlike traits. Could this be the beginning of robot consciousness -- and of a better understanding of ourselves? By Robin Marantz Henig. The New York Times Sunday Magazine (July 29, 2007; cover story). "Robot consciousness is a tricky thing, according to Daniel Dennett, a Tufts philosopher and author of 'Consciousness Explained,' who was part of a team of experts that Rodney Brooks assembled in the early 1990s to consult on the Cog project. In a 1994 article in The Philosophical Transactions of the Royal Society of London, Dennett posed questions about whether it would ever be possible to build a conscious robot. His conclusion: 'Unlikely,' at least as long as we are talking about a robot that is 'conscious in just the way we human beings are.' But Dennett was willing to credit Cog with one piece of consciousness: the ability to be aware of its own internal states. ... Robot consciousness, it would seem, is related to two areas: robot learning (the ability to think, to reason, to create, to generalize, to improvise) and robot emotion (the ability to feel). Robot learning has already occurred, with baby steps, in robots like Cog and Leonardo, able to learn new skills that go beyond their initial capabilities. But what of emotion? ... "

The Turing Test. A Computerworld TechCast (April 5, 2007). Topics covered in this podcast include The Turing Test, consciousness, and Searle's Chinese Room.

Q&A With Zain Verjee. Transcript of show that aired February 4, 2002 on CNN International with participants Rodney Brooks, Rolf Pfeifer, John Searle, Doug Lenat, and Dick Stottler. "Searle: ... And I have no objections to artificial intelligence technology. Where I draw the line is when they say, well, now we have created a thinking machine, or we've created a conscious machine. Now, I'm glad to see Rolf doesn't say that, but an awful lot of people in AI do."

Creativity: The Mind, Machines, and Mathematics: Public Debate [video: approx. 1 hour]. Rodney Brooks moderates this November 30, 2006 debate between Ray Kurzweil and David Gelernter. From MIT World. "About the lecture: Two of the sharpest minds in the computing arena spar gamely, but neither scores a knockdown in one of the oldest debates around: whether machines may someday achieve consciousness. (NB: Viewers may wish to brush up on the work of computer pioneer Alan Turing and philosopher John Searle in preparation for this video.)"

Machines with Minds [video: approx. 30 minutes]. An episode of The Next Big Thing Series, available from The Vega Science Trust. First aired on BBC in March 2002. The panel [Professor Aaron Sloman (University of Birmingham), Dr Amanda Sharkey (University of Sheffield), and Professor Igor Aleksander (Imperial College)] addresses questions such as: Can a machine be conscious?

Baroness Greenfield on artificial intelligence: Conscious Computers - Will computers ever be able to generate human consciousness? Video clip from Human v2.0 - Will the rise in computer intelligence change humanity forever? Horizon (television programme series). BBC Two (October 24, 2006).

The Ethics of Creating Consciousness. The Connection radio program hosted by Dick Gordon, with guests: Marvin Minsky, Brian Cantwell Smith, and Paul Davies. From WBUR Boston and NPR. June 13, 2005. "Next month, IBM is set to activate the most ambitious simulation of a human brain yet conceived. It's a model they say is accurate down to the molecule. No one claims the 'Blue Brain' project will be self-aware. But this project, and others like it, use electrical patterns in a silicon brain to simulate the electrical patterns in the human brain -- patterns which are intimately linked to thought. But if computer programs start generating these patterns -- these electrical 'thoughts' -- then what separates us from them? Traditionally human beings have reserved words like 'reasoning,' 'self-awareness,' and 'soul' as their exclusive property. But with the stirring of something akin to electronic consciousness -- some argue that human beings need to give up the ghost, and embrace the machine in all of us." Links to the broadcast are provided.

"It's a three-part question. What is consciousness? Can you put it in a machine? And if you did, how could you ever know for sure?"
- from Kenneth Chang's Can Robots Become Conscious?

Development of a Robot with a Sense of Self, by K. Kawamura, W. Dodd, P. Ratanaswasd and R. A. Gutierrez; Center for Intelligent Systems, Vanderbilt University. Presented at the 6th IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Espoo, Finland, June 27-30, 2005.

Philosophical Encounter. A symposium organized by Aaron Sloman at IJCAI-95, with speakers John McCarthy and Marvin Minsky. Two papers are available online:

  • A Philosophical Encounter. By Aaron Sloman, School of Computer Science and Cognitive Science Research Centre, University of Birmingham, UK. In Proceedings of the 14th International Joint Conference on AI, Montreal, August 1995.
  • Artificial Intelligence and Philosophy. By John McCarthy, Computer Science Department, Stanford University, Stanford, CA. ("The present version is somewhat improved. I would like to give better references to work by philosophers that I consider to have positively influenced AI research, but it may take some time to formulate this.")
  • Also see: Aaron Sloman interviewed by Patrice Terrier for EACE Quarterly. August 1999; updated 11 July 2002. "Those who are ignorant of philosophy are doomed to reinvent it badly."

Spiritual Robots Symposium: Will Spiritual Robots Replace Humanity by 2100? A series in eleven parts. Made available online by Stanford's Symbolic Systems Program and TechNetCast. "In 1999, two distinguished computer scientists, Ray Kurzweil and Hans Moravec, came out independently with serious books that proclaimed that in the coming century, our own computational technology, marching to the exponential drum of Moore's Law and more general laws of bootstrapping, leapfrogging, positive-feedback progress, will outstrip us intellectually and spiritually, becoming not only deeply creative but deeply emotive, thus usurping from us humans our self-appointed position as 'the highest product of evolution'. Reasonable fact or complete fiction? Expert panel assembled by Doug Hofstadter explores the issue. With presentations by Frank Drake, Doug Hofstadter, John Holland, Bill Joy, Kevin Kelly, John Koza, Ray Kurzweil, Ralph Merkle and Hans Moravec." See/hear/read what they have to say via video, audio and text.

  • Also see our Ethics page for the articles related to the Bill Joy - Ray Kurzweil "dialogue".

"Growing impatient with me as I pressed [Cynthia Breazeal] for a definition of 'alive,' she said: 'Do you have to go to the bathroom and eat to be alive?'"
- from Programming the Post-Human [p.67]

Humans and their Machines. NPR Science Friday (April 26, 2002). "Researchers at the MIT Artificial Intelligence Lab are working to create robots as intelligent and sociable as humans. At the same time, medical advances are making humans more robot-like, with mechanical hearts and working artificial limbs. In this hour, we'll talk with the participants of the First Utah Symposium in Science and Literature about the relationship between humans and machines - and just what it means to be human." Listen to Ira Flatow, anchor of Talk Of The Nation: Science Friday, interview Rodney Brooks, Anne Foerst, and Richard Powers.

Sentience: The next moral dilemma. By Richard Barry. ZDNet UK (January 24, 2001). "If they are right, one day man will give life to a new race of intelligent sentient beings powered by artificial means. If we can, for argument's sake, agree that this is possible we should consider how a sentient artificial being would be received by man and by society. Would it be forced to exist like its automaton predecessors who have effectively been our slaves, or would it enjoy the same rights as the humans who created it, simply because of its intellect?"

AI and Philosophy. From Chapter One (available online) of George F. Luger's textbook, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th Edition (Addison-Wesley; 2005). "In Section 1.1 we presented the philosophical, mathematical, and sociological roots of artificial intelligence. It is important to realize that modern AI is not just a product of this rich intellectual tradition but also contributes to it. For example, the questions that Turing posed about intelligent programs reflect back on our understanding of intelligence itself. What is intelligence, and how is it described? What is the nature of knowledge? Can knowledge be represented? How does knowledge in an application area relate to problem-solving skill in that domain? How does knowing what is true, Aristotle's theoria, relate to knowing how to perform, his praxis ? Answers proposed to these questions make up an important part of what AI researchers and designers do."

"I think many passionate researchers in artificial intelligence are fundamentally interested in the question of Who am I? Who are people? What are we? There's a sense of almost astonishment at the prospect that information processing or computation, if you take that perspective, could lead to this. Coupled with that is the possibility of the prospect of creating consciousnesses with computer programs, computing systems some day. ... Is it possible - - - is it possible that parts turning upon parts could generate this?" - Eric Horvitz on The Charlie Rose Show

What is Consciousness? This video program is part of the USC Presents...Closer To Truth series available from the ResearchChannel ("a non-profit organization founded in 1996 by a consortium of leading research universities, institutions and corporate research centers dedicated to creating a widely accessible voice for research through video and Internet channels"). Panelists for this August 8, 2004 program include "David Chalmers, professor of philosophy, co-head, Center for Consciousness, University of Arizona, [and] John Searle, professor of philosophy, University of California, Berkeley."

Proceedings of the Symposium on Next Generation Approaches to Machine Consciousness: Imagination, Development, Intersubjectivity, and Embodiment. AISB 2005. One of the many convention proceedings available from The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB).

The Charlie Rose Show: A panel discussion about Artificial Intelligence (December 21, 2004), with Rodney Brooks (Director, MIT Artificial Intelligence Laboratory & Fujitsu Professor of Computer Science & Engineering, MIT), Eric Horvitz (Senior Researcher and Group Manager, Adaptive Systems & Interaction Group, Microsoft Research), and Ron Brachman (Director, Information Processing Technology Office, Defense Advanced Research Project Agency, and President, American Association for Artificial Intelligence). "Rose: What do you think has been the most important advance so far? Brachman: A lot of people will vary on that and I'm sure we all have different opinions. In some respects one of the - - - I think the elemental insights that was had at the very beginning of the field still holds up very strongly which is that you can take a computing machine that normally, you know, back in the old days we think of as crunching numbers, and put inside it a set of symbols that stand in representation for things out in the world, as if we were doing sort of mental images in our own heads, and actually with computation, starting with something that's very much like formal logic, you know, if-then-else kinds of things, but ultimately getting to be softer and fuzzier kinds of rules, and actually do computation inside, if you will, the mind of the machine, that begins to allow intelligent behavior. I think that crucial insight, which is pretty old in the field, is really in some respects one of the linchpins to where we've gotten. ... Horvitz: I think many passionate researchers in artificial intelligence are fundamentally interested in the question of Who am I? Who are people? What are we? There's a sense of almost astonishment at the prospect that information processing or computation, if you take that perspective, could lead to this. Coupled with that is the possibility of the prospect of creating consciousnesses with computer programs, computing systems some day.
It's not talked about very much at formal AI conferences, but it's something that drives some of us in terms of our curiosity and intrigue. I know personally speaking, this has been a core question in the back of my mind, if not the foreground, not on my lips typically, since I've been very young. This is this question about who am I. Rose: ... can we create it? Horvitz: Is it possible - - - is it possible that parts turning upon parts could generate this?"

Daniel Dennett. Interviewed by Harvey Blume. The Atlantic Unbound (December 9, 1998). "As posed by Alan Turing, the question of machine intelligence has become a central theme of our time -- and here, as elsewhere, Dennett brings analytic rigor to bear. To the question of whether machines can attain high-order intelligence, Dennett makes this provocative answer: 'The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves.' This is part of Dennett's campaign to overcome the mind-body split bequeathed to us by Descartes, who identified his existence with his self-consciousness (his Cogito) and believed that the thinking portion of the self was attached almost accidentally to the body. Like many in cognitive science, Dennett wants to show that mind and matter are not necessarily opposed."

  • Be sure to see our collection of Interviews for what others have to say about this.
  • Also see: The semantic engineer - Profile: Daniel Dennett. By Andrew Brown. The Guardian (April 17, 2004). "It was at Oxford, too, that he first became interested in computers and the brain. The Oxford philosopher John Lucas had published a paper - still famous - arguing that Gödel's theorem disproved any theory that humans must be machines, and that human thought could be completely simulated on a computer. This is the position Dennett became famous for attacking. ... He's famous among philosophers as an extreme proponent of robot consciousness, who will argue that even thermostats have beliefs about the world. ... 'Conscious robot is not an oxymoron - or maybe it was, but it's not going to be for much longer. How much longer? I don't know. Turing [50 years ago] said 50 years, and he was slightly wrong, but the popular imagination is already full with conscious robots.'"

It's the thought that counts. By Dylan Evans. Guardian (October 6, 2001). "Will machines ever be able to think for themselves?" "There are those, however, who argue that the Turing test is, in fact, too difficult: not only does a machine have to be able to think, they say, but it also has to be able to think like a human. Unless we assume, chauvinistically, that human thought is the only kind there is, we shall have to admit that a machine might be able to think and yet still fail the test - it might simply be thinking in a non-human-like way. To illustrate this point, the philosopher Robert French tells the following story. ..."

Consciousness, Agents and the Knowledge Game. By Luciano Floridi. Minds and Machines 15(3-4): 415-444 (2005). The full text preprint of this article can be accessed from the author's University of Oxford homepage. Abstract: "This paper has three goals. The first is to introduce the 'knowledge game', a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question 'how do you know you are not a zombie?'."

Robot: Child of God. By Anne Foerst. "Sometimes computers act as if they are possessed -- does that mean they may have souls? Probably not right now, but Anne Foerst explores the possibility of soulful robots. Originally published March 2000 as a chapter in the book "God for the 21st Century." Published on May 9, 2001." Excerpt: "In the light of this understanding of human specialness, I would have a hard time not to assign personhood to a creature possessing the appropriate degree of complexity. If a being is understood as a partner and friend, it seems hard to take this attribute of value, assigned to it by its friends, away."

Constructions of the Mind--Artificial Intelligence and the Humanities. A special issue of the Stanford Humanities Review 4(2): Spring 1995. Stefano Franchi and Guven Guzeldere, editors. From the Table of Contents, you may link to several full-text articles.

Review of The Philosophy of Artificial Intelligence, edited by Margaret A. Boden (1990). Reviewed by Lee A. Gladwin. AI Magazine 14(2): Summer 1993, 67-68.

On what GEB is really all about (twenty years later). By Douglas Hofstadter. Available from the resource collection for his February 2006 Stanford Presidential Lecture, Analogy as Core, Core as Analogy: "In a word, GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle? What is an 'I' and why are such things found (at least so far) only in association with, as poet Russell Edson once wonderfully phrased it, 'teetering bulbs of dread and dream' -- that is, only in association with certain kinds of gooey lumps encased in hard protective shells mounted atop mobile pedestals that roam the world on pairs of slightly fuzzy, jointed stilts?"

  • GEB = Hofstadter, Douglas R., Gödel, Escher, Bach: an Eternal Golden Braid, NY: Basic Books, 1979. [Also: GEB, 20th Anniversary Edition: With a New Preface by the Author, NY: Basic Books, 1999.]

Kiss me, you human. Robot Kismet can walk, talk, and make logical decisions. What's the next step in the quest for artificial intelligence? By Stephen Humphries. The Christian Science Monitor (June 28, 2001). "It's the astonishing growth in real-world artificial-intelligence technology that is forcing thinkers, theologians, philosophers, and the public to reexamine some age-old fundamental philosophical questions with a new vigor and urgency. Is it possible to replicate human consciousness in machines? If so, then what does that tell us about consciousness? What does it mean to be human?"

God Is the Machine. In the beginning there was 0. And then there was 1. A mind-bending meditation on the transcendent power of digital computation. By Kevin Kelly. Wired Magazine (December 2002).

Philosophical Roots. By Raymond Kurzweil (1990). Chapter Two of the book: The Age of Intelligent Machines, ed. Kurzweil, Raymond, 23-100. Cambridge, MA: The MIT Press.

  • Also see the Will Machines Become Conscious? collection of articles: "'Suppose we scan someone's brain and reinstate the resulting 'mind file' into a suitable computing medium,' asks Raymond Kurzweil. 'Will the entity that emerges from such an operation be conscious?' Asking that question is a good way to start an argument, which is exactly what we intend to do right here."

The Soul of the Ultimate Machine. By John Markoff. The New York Times, December 10, 2000: Section 3, Page 1. "The astrophysicist Larry Smarr talks about what he calls 'the emerging planetary supercomputer.' The Internet, he explains, is evolving into a single vast computer. The big question is 'Will it become self-aware?'"

A Conversation between a Human Computer and a Materialist Philosopher. By Blaine Mathieu. From Ray Kurzweil's book, The Age of Intelligent Machines, published in 1990. "There are few questions more mysterious and thought provoking than whether a nonhuman machine could ever be considered truly human in any important sense of the word. Let us jump ahead a few decades and imagine, for a moment, that all the problems of creating a truly intelligent machine have been solved. How would two 'people,' a philosopher and a computer, handle some of the physical, emotional, and moral issues of such a creation?"

Some Philosophical Problems from the Standpoint of Artificial Intelligence. By John McCarthy and Patrick J. Hayes. 1969. In Machine Intelligence 4, ed. Meltzer, B., D. Michie and M. Swann, 463-502. Edinburgh, Scotland: Edinburgh University Press. An online version is available at John McCarthy's web site.

Programs of the Mind. Review by Gary Marcus. Science Magazine (June 4, 2004; subscription required). "Eric Baum's What Is Thought? [MIT Press, Cambridge, MA, 2004], consciously patterned after [Erwin] Schrödinger's book [What Is Life?], represents a computer scientist's look at the mind. Baum is an unrepentant physicalist. He announces from the outset that he believes that the mind can be understood as a computer program. Much as Schrödinger aimed to ground the understanding of life in well-understood principles of physics, Baum aims to ground the understanding of thought in well-understood principles of computation. In a book that is admirable as much for its candor as its ambition, Baum lays out much of what is special about the mind by taking readers on a guided tour of the successes and failures in the two fields closest to his own research: artificial intelligence and neural networks. ... Advocates of what the philosopher John Haugeland famously characterized as GOFAI (good old-fashioned artificial intelligence) create hand-crafted intricate models that are often powerful yet too brittle to be used in the real world. ... At the opposite extreme are researchers working within the field of neural networks, most of whom eschew built-in structure almost entirely and rely instead on statistical techniques that extract regularities from the world on the basis of massive experience."

The rise of 'Digital People' - Tales about artificial beings have sparked fascination and fear for centuries; now the tales are turning into reality. Excerpt from "Digital People: From Bionic Humans to Androids" by Sidney Perkowitz, the Charles Howard Candler professor of physics at Emory University. MSNBC Science News (July 13, 2004). "There is, however, considerable debate about the possibility of achieving the centerpiece of a complete artificial being, artificial intelligence arising from a humanly constructed brain that functions like a natural human one. Could such a creation operate intelligently in the real world? Could it be truly self-directed? And could it be consciously aware of its own internal state, as we are? These deep questions might never be entirely settled. We hardly know ourselves if we are creatures of free will, and consciousness remains a complex phenomenon, remarkably resistant to scientific definition and analysis. One attraction of the study of artificial creatures is the light it focuses on us: To create artificial minds and bodies, we must first better understand ourselves. While consciousness in a robot is intriguing to discuss, many researchers believe it is not a prerequisite for an effective artificial being. In his 'Behavior-Based Robotics,' roboticist Ronald Arkin of the Georgia Institute of Technology argues that 'consciousness may be overrated,' and notes that 'most roboticists are more than happy to leave these debates on consciousness to those with more philosophical leanings.' For many applications, it is enough that the being seems alive or seems human, and irrelevant whether it feels so. ... And yet ... there is the dream and the breathtaking possibility that humanity can actually develop the technology to create qualitatively new kinds of beings. These might take the form of fully artificial, yet fully living, intelligent, and conscious creatures -- perhaps humanlike, perhaps not. 
Or they might take the form of a race of 'new humans'; that is, bionic or cyborgian people who have been enormously augmented and extended physically, mentally, and emotionally."

Jeff Hawkins: Q&A. Interviewed by Jason Pontin. Technology Review (October 13, 2005). "Jeff Hawkins, the chief technology officer of Palm, was the founder of Palm Computing, where he invented the PalmPilot, and also the founder of HandSpring, where he invented the Treo. But Palm and creating mobile devices are only a part-time job for Hawkins. His true passion is neuroscience. Now, after many years of research and meditation, he has proposed an all-encompassing theory of the mammalian neocortex. 'Hierarchical Temporal Memory' (HTM) claims to explain how our brains discover, infer, and predict patterns in the phenomenal world. JP: Is the higher consciousness -- what philosophers sometimes call 'self-consciousness' -- a byproduct of HTM? JH: Yes. I think I understand what consciousness is now. There are two elements to consciousness. First, there is the element of consciousness where we can say, 'I am here now.' This is akin to a declarative memory where you can actively recall doing something. Riding a bike cannot be recalled by declarative memory, because I can't remember how I balanced on a bike. But if I ask, 'Am I talking to Jason?' I can answer 'Yes.' So I like to propose a thought experiment: if I erase declarative memory, what happens to consciousness? I think it vanishes. But there is another element to consciousness: what philosophers and neuroscientists call 'qualia:' the feeling of being alive. ..."

Artificial Intelligence and Philosophy. From Aaron Sloman. "This was a lecture to first year AI students at Birmingham, Dec 11th 2001, on AI and Philosophy, explaining how AI relates to philosophy and in some ways improves on philosophy. It was repeated December 2002, December 2003, October 2004, each time changing a little. It introduces ideas about ontology, architectures, virtual machines and how these can help transform some old philosophical debates."

The Computer Revolution in Philosophy. Philosophy, science and models of mind. By Aaron Sloman. [Originally published in 1978 by Harvester Press and Humanities Press. Though the book is now out of print, it has been made available online by the author.] From the Preface: "And computing is more important than computers: programming languages, computational theories and concepts -- these are what computing is about, not transistors, logic gates or flashing lights. Computers are pieces of machinery which permit the development of computing as pencil and paper permit the development of writing. In both cases the physical form of the medium used is not very important, provided that it can perform the required functions. Computing can change our ways of thinking about many things, mathematics, biology, engineering, administrative procedures, and many more. But my main concern is that it can change our thinking about ourselves: giving us new models, metaphors, and other thinking tools to aid our efforts to fathom the mysteries of the human mind and heart. The new discipline of Artificial Intelligence is the branch of computing most directly concerned with this revolution. By giving us new, deeper, insights into some of our inner processes, it changes our thinking about ourselves. It therefore changes some of our inner processes, and so changes what we are, like all social, technological and intellectual revolutions."

At one with the universe - Do androids dream of electric sheep? Colin Tudge in London examines definitions of consciousness and artificial intelligence. The Age (February 10, 2003). "There are three points of view. The first, which can be traced back to the founder of modern computing, Alan Turing, and is embraced by the Oxford physiologist Colin Blakemore, is pragmatic. Turing pointed out that it is impossible to know whether other human beings are conscious. Because we feel conscious, we assume other people must be like us. But this can only be an inference. But suppose we made a computer - a robot - that could make whimsical jokes and pass the sandwiches without being asked.... [U]ntil now, three main views have prevailed. One is the 'dualism' of Rene Descartes, which says the universe has two components - matter and mind. The second is the modern orthodox idea - that only matter 'exists', and that mind (including consciousness) is just an 'epiphenomenon'; something that seems to emerge when matter is suitably organised. The third is reflected most starkly in the idealist philosophy of Bishop Berkeley; that only thought is real, and matter is an illusion. But the emerging modern view says that matter and consciousness are not separate entities, as Descartes supposed, but complementary aspects of the universe. Both exist, but neither is primary. Each is the obverse of the other, like two sides of a coin." (It is also this article from which the question toward the top of this page was excerpted.)

Growing Up in the Age of Intelligent Machines: Reconstructions of the Psychological and Reconsiderations of the Human. By Sherry Turkle. From Ray Kurzweil's book, The Age of Intelligent Machines (1990). "Thus, the presence of intelligent machines in the culture provokes a new philosophy in everyday life. Its questions are not so different than the ones posed by professionals: If the mind is (at least in some ways) a machine, who is the actor? Where is intention when there is program? Where is responsibility, spirit, soul? In my research on popular attitudes toward artificial intelligence I have found that the answers being proposed are not very different either. Faced with smart objects, both professional and lay philosophers are moved to catalog principles of human uniqueness."

The Age of Intelligent Machines: Can Computers Think? By Mitchell Waldrop. From Ray Kurzweil's book, The Age of Intelligent Machines (1990). "The complexities of the mind mirror the challenges of Artificial Intelligence. This article discusses the nature of thought itself -- can it be replicated in a machine?" Among the topics covered are: Can a Machine Be Aware?, The Chinese Room, and, Science as a Message of Hope.

Edison's Eve - A Magical History of the Quest for Mechanical Life. By Gaby Wood. Anchor Trade Paperback (July 2003). Author Q & A: "Q: You begin your 'Magical History of the Quest for Mechanical Life' at a very specific place and time: with the story of the philosopher Rene Descartes sailing to Sweden in the mid-17th-century, in the company of an android. Why this moment? A: Although people have tried to construct mechanical simulations of human and animal life for millennia (from Plato’s contemporary, Archytas of Tarentum, to Albertus Magnus, a 13th-century Dominican monk), I wanted to show that it was only really during the Enlightenment that these attempts became more than practical enterprises: they were philosophical experiments as well. Descartes was an immediate precursor to the philosophers of the 18th century who were preoccupied with the question of whether humans were born with a soul, or were merely very complex machines. In their quest for an answer to this question, they built machines in the image of men and women, thinking: if men are just machines, then does a mechanically-constructed man amount to a human being? Rather than being a craft, in other words, the art of mechanics became, in that period, a way of thought. The objects made by the mechanicians of the Enlightenment were puzzles, riddles, concrete attempts to answer conceptual problems: Who are we? What are we made of? What makes us human? Can we be replicated artificially? These questions, which we are still trying to answer today -- at MIT’s Artificial Intelligence lab, at the cloning clinic of Severino Antinori -- were first crystallized by Descartes and his followers."

AI and Philosophy: How Can You Know the Dancer from the Dance? By Linda World. IEEE Intelligent Systems (July/August 2005; Vol. 20 (4): 84 - 85). Excerpt from the abstract: "Aaron Sloman was teaching philosophy at the University of Sussex in 1969, when he met Max Clowes. Clowes had done pioneering work in computer image interpretation. Now, he was asking Sloman to drop the way he learned to do philosophy at Oxford and to start studying artificial intelligence instead. Nine years later, Sloman published The Computer Revolution in Philosophy...."

  • The full text of the article is available for a limited promotional period. Here's an excerpt: "Sloman sees 'a deep continuity' between AI and very old problems in philosophy. Philosophy needs AI to progress in its study of difficult questions about the nature of mind. AI needs philosophy to clarify its requirements analyses."

Mind and Body: Rene Descartes to William James. By Robert H. Wozniak. "The common sense view of mind and body is that they interact. Our perceptions, thoughts, intentions, volitions, and anxieties directly affect our bodies and our actions. States of the brain and nervous system, in turn, generate our states of mind. Unfortunately, the common sense notion appears to involve a contradiction."

Tutorial Notes & Slides

Philosophical Foundations: Some Key Questions. 17th International Joint Conference on Artificial Intelligence (4th - 10th August, 2001; Seattle, Washington, USA). Presented by Aaron Sloman and Matthias Scheutz, The University of Birmingham and Notre Dame University. "This tutorial, presented by tutors with deep knowledge and experience both in philosophy and in AI, will introduce a range of philosophical questions relevant to the goals and methodology of AI as science and AI as engineering, including the contribution of AI to the study of mind, and some unresolved questions about the nature of computational systems and how they relate to physical systems. ...The questions to be discussed include a selection from the following:

  • What is computation? How does computation extend our ontology?
  • What are virtual machines and how are they related to physical machines?
  • What is a virtual machine architecture?
  • Can the relations between virtual and physical machines as understood by computer scientists and software engineers shed light on philosophical questions about the relations between minds and brains?
  • What sorts of virtual machines can biological evolution produce?
  • What sorts of architectures can support our concepts of mentality?
  • Under what conditions might it be reasonable to describe a machine as conscious, or as having emotions?
  • What are representations, and how many varieties are there?
  • What sorts of ontologies are required by different organisms or machines?"

Related Resources

AI on the Web: Philosophy and the Future. A resource companion to Stuart Russell and Peter Norvig's "Artificial Intelligence: A Modern Approach."

International Association of Computing and Philosophy. The IACAP exists to promote scholarly dialogue and research on all aspects of the computational and informational turn, and on the use of information and communication technologies in the service of philosophy. Keynote speakers from past conferences include many distinguished philosophers and scientists, many of whom have websites of their own.

Machine Consciousness Website. Maintained by Owen Holland and Magdalena Kogutowska. "The last decade has been marked by a rapid increase in the number of people interested in the scientific study of consciousness. Most such activity has been directed towards the understanding of the processes underlying consciousness in humans, and most research has taken place within psychology and neuroscience. However, in the last few years a third strand has emerged: the study, mainly by engineers and computer scientists, of how it might be possible to build conscious machines. We believe that the best name for this new enterprise is 'machine consciousness', and this web site is intended to serve as a focus and information source for anyone involved in the area or wishing to find out more about it."

  • Also listen to this interview with Owen Holland from Talking Robots (January 19, 2007): Owen Holland - Robot Consciousness: "In this episode we interview Owen Holland about his rediscovery of the first autonomous robot ever built, his research in artificial consciousness and his life-size 'anthropomimetic' humanoid robot which closely copies human muscular and skeletal structure. Owen Holland is a professor at the University of Essex...."

North American Computing and Philosophy [CAP] Conference: 2006 | 2005 | 2003 | 2002 | 2001

Online Papers on Consciousness. Compiled by David Chalmers, Professor of Philosophy and Associate Director of the Center for Consciousness Studies at the University of Arizona, this well organized site offers links to 698 online papers. WOW!

The Blurring Test [In Development]. "The Blurring Test playfully explores the increasingly blurred lines between humans and machines. For decades, the Turing test for Artificial Intelligence has forced computers to mimic humans. But why let humans off the hook? This project turns the test on its head by creating various challenges for humans to prove their humanity... to computers and to themselves." Visit Web Lab's site and converse with the chatterbot, MR MIND.

  • "Can you claim that your 'human' attributes will forever be exclusively human? MR MIND asks you to take a close look at the changing boundaries between humans and machines; his cause is your understanding. The Blurring Test is about human progress: Someday it might be important to convince our computers (and each other) that we are human." -from the Introduction
  • Also check out the article Being Real, by Judith Donath.

Other References Offline

Articles from Newspapers, Journals and Magazines

Abrahamson, Joseph R. 1994. Mind, Evolution and Computers. AI Magazine 15(1): Spring 1994, 19-22. "Science deals with knowledge of the material world based on objective reality. It is under constant attack by those who need magic, that is, concepts based on imagination and desire, with no basis in objective reality. A convenient target for such people is speculation on the machinery and method of operation of the human mind, questions that are still obscure in 1994. In The Emperor's New Mind, Roger Penrose attempts to look beyond objective reality for possible answers, using, in his argument, the theory that computers will never be able to duplicate the human experience. This article attempts to show where Penrose is in error by reviewing the evolution of men and computers and, based on this review, speculates about where computers might and might not imitate human perception. It then warns against the dangers of passive acceptance when respected scientists venture into the occult."

Aleksander, Igor. 2003. I, computer. New Scientist (July 19, 2003). "Will there come a day when a machine declares itself to be conscious? An increasing number of laboratories around the world are trying to design such a machine. Their efforts are not only revealing how to build artificial beings, they are also illuminating how consciousness arises in living beings too. At least, that's how those of us doing this research see it. Others are not convinced. Generally speaking, people believe that consciousness has to do with life, evolution and humanity, whereas a machine is a lifeless thing designed by a limited mind and has no inherent feeling or humanity. So it is hardly surprising that the idea of a conscious machine strikes some people as an oxymoron." At the end of the article, he lists his five axioms of consciousness: a sense of place, imagination, directed attention, planning, and decision/emotion.

Brean, Joseph. Scientist says you can be a person without being human - Sussing out a 'partner species.' National Post (October 11, 2002). "Watching this scene on video in a conference hall at the University of Waterloo, Canada's top engineering school, it is easy to believe robots are the way of the future. It involves a far greater leap of faith to believe Anne Foerst, who is trying to convince the audience that robots are the people of the future. Dr. Foerst, a Lutheran minister and computer scientist who helped build Kismet, believes it is only a matter of time before robots have souls. ... In developing a theory of personhood that includes robots, Dr. Foerst is slowly reconciling her religious beliefs with her scientific theories, and teasing out the religious implications of playing God with science. She believes building robots in our image will transfer to them the gift we received by being built in God's image. They won't be human, she says, but they will be persons. After all, she says, 'God was not intending to build gods.' ... Among the computer scientists and religious scholars who came to hear Dr. Foerst's talks at the University of Waterloo, there was a clear consensus that what sets us apart from robots is the nature of our intelligence. Whereas today's robots run through their 'mental' operations with brute force, the human brain is more intuitive and adept at taking logical shortcuts. This supposed difference clouds a key similarity, Dr. Foerst says, and this similarity is at the heart of her work. She argues that intelligence depends on the body; the mind does not exist, nor did it evolve, separately from the limbs and muscles it controls. This kind of thinking puts her in a camp that broke away from the Cartesian idea that we are minds that have bodies, and replaced it with the notion that we are simply thinking bodies. The insight had a profound effect on robotics."

Chang, Kenneth. Can Robots Become Conscious? #14 of the 25 of the most provocative questions facing science. The New York Times (November 11, 2003; no fee reg. req'd.). "It's a three-part question. What is consciousness? Can you put it in a machine? And if you did, how could you ever know for sure? ... The field of artificial intelligence started out with dreams of making thinking -- and possibly conscious -- machines, but to date, its achievements have been modest. No one has yet produced a computer program that can pass the Turing test. ... But with the continuing gains in computing power, many believe that the original goals of artificial intelligence will be attainable within a few decades. ... To Dr. [Hans] Moravec, if it acts conscious, it is. To ask more is pointless. Dr. [David] Chalmers regards consciousness as an ineffable trait, and it may be useless to try to pin it down."

Dennett, Daniel C. 1988. When Philosophers Encounter Artificial Intelligence. Daedalus 117 (1): 283-296. *NOTE: All articles in this section listed from the journal Daedalus 117(1) are reprinted in the book The Artificial Intelligence Debate: False Starts, Real Foundations, ed. Stephen R. Graubard. Cambridge, MA: MIT Press, 1990.

Doyle, Jon. 1983. What is Rational Psychology? Toward a Modern Mental Philosophy. AI Magazine 4(3): Fall 1983, 50-53. "Rational psychology is the conceptual investigation of psychology by means of the most fit mathematical concepts. Several practical benefits should accrue from its recognition."

Dreifus, Claudia. 2000. A Conversation with Anne Foerst [Director of MIT's God and Computers project]. The New York Times, Science, page D3, November 7, 2000.

Gelernter, David. 1997. How Hard is Chess? Time Magazine (May 19, 1997): 72.

Kirsh, D. 1991a. Foundations of AI: The Big Issues. Artificial Intelligence 47: 3-30.

Kirsh, D. 1991b. Today the Earwig, Tomorrow Man? Artificial Intelligence 47: 161-184.

LaForte, Geoffrey, Patrick J. Hayes, and Kenneth M. Ford. 1998. Why Gödel's Theorem Cannot Refute Computationalism. Artificial Intelligence 104 (1/2): 211-264. The authors find flaws in Roger Penrose's claim that Gödel's theorem implies that human thought cannot be mechanized.

McCorduck, Pamela. 1988. Artificial Intelligence: An Aperçu. Daedalus 117 (1): 65-84.

Papert, Seymour. 1988. One AI or Many? Daedalus 117 (1): 1-14.

Putnam, Hilary. 1988. Much Ado About Not Very Much. Daedalus 117 (1): 269-282.

Sokolowski, Robert. 1988. Natural and Artificial Intelligence. Daedalus 117 (1): 45-64.

Ullman, Ellen. 2002. Programming the Post-Human: Computer science redefines "life." Harper's, Vol. 305, No. 1829: 60-70. "Growing impatient with me as I pressed [Cynthia Breazeal] for a definition of 'alive,' she said: 'Do you have to go to the bathroom and eat to be alive?'" [p. 67]

Readings and Chapters from Books

Glymour, Clark, Kenneth Ford, and Patrick Hayes. 1995. The Prehistory of Android Epistemology. In Computation and Intelligence: Collected Readings, ed. Luger, George F., 3-21. Menlo Park/Cambridge/London: AAAI Press/The MIT Press. Going back to the ancient Greeks, the authors put the philosophical questions posed by AI into the context of Western philosophical tradition.

McCarthy, John. 1977. Epistemological Problems in Artificial Intelligence. In Readings in Artificial Intelligence, ed. Webber, Bonnie Lynn and Nils J. Nilsson, 459-465. Palo Alto, CA: Tioga Publishing Co., 1977. (Originally published in Proceedings of the Fifth International Joint Conference on Artificial Intelligence [IJCAI-77].)

Minsky, Marvin. 1961. Steps Toward Artificial Intelligence. In Computers and Thought, ed. Feigenbaum, Edward A. and Julian Feldman, Cambridge, MA: MIT Press, 1995. (Originally published in Proceedings of the IRE, January 1961.)

Russell, Stuart, and Peter Norvig. 1995. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall. Chapter 26 (pages 817-841) takes an accessible approach to the question "Can machines think?" by clearly analyzing the question, describing positions taken by various contributors to the discussion, and simply defining much of the jargon related to the philosophical issues.

Searle, John R. 1992. The Rediscovery of the Mind. Cambridge, MA: MIT Press.

Waltz, David L. 1988. The Prospects for Building Truly Intelligent Machines. In The Artificial Intelligence Debate, ed. Graubard, Stephen R., Cambridge, MA: The MIT Press.

Winograd, Terry. 1990. Thinking Machines: Can there be? Are we? In Foundations of Artificial Intelligence: A Sourcebook, ed. Partridge, D. and Y. Wilks, 167-189. Cambridge, England: Cambridge University Press.


Journals

Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science. Editor: J.H. Moor.

Books

Anderson, Alan R., editor. 1964. Minds and Machines. Englewood Cliffs, NJ: Prentice-Hall.

Boden, Margaret, editor. 1990. The Philosophy of Artificial Intelligence. Oxford: Oxford University Press.

Bynum, Terrell Ward, and James H. Moor, editors. 1998. The Digital Phoenix: How Computers are Changing Philosophy. Cambridge, MA: Blackwell Publishers. A collection of readings.

Churchland, Paul M. 1992. Matter and Consciousness. Cambridge and London: MIT Press. Written expressly for readers who are not professionals in philosophy or artificial intelligence, this book examines the nature of conscious intelligence with an eye toward the progress science is making in understanding it.

Churchland, Paul M. 1995. The Engine of Reason, the Seat of the Soul. Cambridge, MA: MIT Press/Bradford Books. Explanations of recent scientific discoveries about the mind by a philosopher who examines not only the science, but also social and ethical implications of ascribing consciousness to all but the simplest of animal life.

Churchland, P. S. 1986. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press.

Clark, Andy. 1997. Being There: Putting Brain, Body, and World Together Again. Cambridge, MA and London: The MIT Press.

Copeland, Jack. 1993. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell.

Crane, Tim. 1991. The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation. New York and London: Penguin Books.

Cummins, Robert, and John Pollock, editors. 1991. Philosophy and AI: Essays at the Interface. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press/Bradford Books. A multidisciplinary look at the mind -- biological, social, philosophical. Reprinted from scholarly journal articles appearing 1984-1996.

Dennett, Daniel. 1978. Brainstorms: Philosophical Essays on Mind and Psychology. Montgomery, VT: Bradford Books.

Denning, Peter, and Bob Metcalfe, editors. 1997. Beyond Calculation: The Next 50 Years of Computing. New York: Springer Verlag. Essays by Terry Winograd, Sherry Turkle, Donald Norman and many others.

Gershenfeld, Neil. 1998. When Things Start to Think. New York: Henry Holt and Co. Philosophical discussion and lots of information about new inventions at MIT's Media Lab.

Gelernter, David. 1994. The Muse in the Machine: Computerizing the Poetry of Human Thought. New York: Free Press of Macmillan, Inc.

Graubard, Stephen, editor. 1988. The Artificial Intelligence Debate: False Starts, Real Foundations. Cambridge, MA: MIT Press. Reprinted 1990. Essays that examine fundamental conceptual issues in AI. This book reprints a collection of articles from the journal Daedalus 117(1). Contributors include Dennett, Dreyfus, McCarthy, McCorduck, Papert, Waltz, and others. For individual annotations, see the "Articles" section, above.

Haugeland, John, editor. 1997. Mind Design II: Philosophy, Psychology, Artificial Intelligence. 2nd edition. Cambridge, MA: MIT Press. With contributions from both scientists and philosophers, this book retains a few classic essays from the first edition and expands with articles on connectionism, dynamical systems, and symbolic versus nonsymbolic models.

Hofstadter, Douglas R., and Daniel C. Dennett. 1981. The Mind's I: Fantasies and Reflections on Self and Soul. New York: Basic Books. Philosophical essays on the self, the intellect, and consciousness.

Kurzweil, Ray. 1998. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking. Speculations on how society will be influenced and affected as intelligent machines become more powerful and prevalent. "This is a book for computer enthusiasts, science fiction writers in search of cutting-edge themes and anyone who wonders where technology is going next." (New York Times Book Review, Jan. 3, 1999.)

Ringle, M. 1979. Philosophical Perspectives in Artificial Intelligence. Atlantic Highlands, NJ: Humanities Press.

Sloman, Aaron. 1978. The Computer Revolution in Philosophy. Hassocks, Sussex, UK: Harvester Press. [Out of print, but available online from the author.]

Smith, Brian Cantwell. 1996. On the Origin of Objects. Cambridge, MA: MIT Press/Bradford Books. The author offers his conclusions about the philosophical and metaphysical underpinnings of artificial intelligence, cognitive science, and computation.

Thagard, Paul. 1993. Computational Philosophy of Science. Cambridge, MA: MIT Press.

Tags: Philosophy
Page last modified on July 25, 2012, at 10:58 AM