How AI Programs Solve Problems
The pages for this topic are maintained by the AITopics Editorial Board.
Many problems of practical importance are problems of reasoning about actions. In these problems, a course of action has to be found that satisfies a number of specified conditions. Everyday examples include planning an airplane trip, organizing a dinner party, etc. ...
A problem of reasoning about actions is given in terms of an initial situation, a terminal situation, a set of feasible actions, and a set of constraints. ... The task of a problem solver is to find the best sequence of permissible actions that can transform the initial situation into the terminal situation.
- Saul Amarel, On Representations of Problems of Reasoning About Actions.
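Amarel's formulation maps directly onto state-space search. As an illustrative sketch (the toy problem, actions, and constraint below are invented for this example, not taken from Amarel's paper), a breadth-first search over situations finds a shortest sequence of permissible actions from the initial situation to the terminal one:

```python
from collections import deque

def solve(initial, goal, actions, allowed):
    """Breadth-first search for a shortest sequence of permissible
    actions transforming `initial` into `goal`.
    `actions` maps an action name to a function state -> state;
    `allowed` is a predicate rejecting states that violate constraints."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for name, act in actions.items():
            nxt = act(state)
            if nxt not in seen and allowed(nxt):
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # no permissible action sequence exists

# Toy instance: move a token from position 0 to position 3,
# with the constraint that it must stay in the range 0..4.
actions = {"left": lambda s: s - 1, "right": lambda s: s + 1}
print(solve(0, 3, actions, lambda s: 0 <= s <= 4))
# → ['right', 'right', 'right']
```

Breadth-first search returns a shortest plan; "best" under other cost measures would call for a cost-aware search instead.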
"When the system is required to do something that it has not been explicitly told how to do, it must reason - it must figure out what it needs to know from what it already knows. For instance, suppose an information retrieval program 'knows' only that Robins are birds and that All birds have wings. Keep in mind that for a system to know these facts means only that it contains data structures and procedures that would allow it to answer questions about them. If we then ask it, Do robins have wings? the program must reason to answer the query. In problems of any complexity, the ability to do this becomes increasingly important. The system must be able to deduce and verify a multitude of new facts beyond those it has been told explicitly."
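The robins-and-wings deduction can be sketched as one step of forward chaining (the data structures below are our own illustration, not taken from any particular system):

```python
# Known facts, stored as (predicate, argument) tuples.
facts = {("bird", "robin")}
# Each rule: if the premise predicate holds of X, conclude the
# consequent predicate of X.  Here: birds have wings.
rules = [("bird", "has_wings")]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

kb = forward_chain(facts, rules)
print(("has_wings", "robin") in kb)  # → True
```

The answer to "Do robins have wings?" is never stored explicitly; it is deduced from the two facts the program was told.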
And speaking of birds, please check out our Bird Watcher's Field Guide to Logic. It's a work in progress and your input would be most appreciated.
Reasoning at The Computational Intelligence Research Laboratory of the University of Oregon. "Complementing general questions of how to represent knowledge is the need to understand how knowledge can be used. In general, realistic problems have enormous associated spaces of possible solutions which must be explored (searched) to find an actual solution that meets the requirements of the problem. These spaces are much too large to be searched in their entirety, and ways must be found to focus or short-circuit the search for solutions if systems are to have any practical utility."
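One common way to "short-circuit" a search is to prune branches that provably cannot lead to a solution. The following sketch (the subset-sum instance and pruning test are our own illustration, not drawn from the Oregon lab's work) abandons any branch whose partial sum already exceeds the target:

```python
def subset_sum(nums, target):
    """Search subsets of positive `nums` summing to `target`,
    pruning branches whose partial sum already exceeds the target."""
    stats = {"visited": 0}
    def search(i, total, chosen):
        stats["visited"] += 1
        if total == target:
            return chosen
        if i == len(nums) or total > target:  # prune: cannot recover
            return None
        return (search(i + 1, total + nums[i], chosen + [nums[i]])
                or search(i + 1, total, chosen))
    return search(0, 0, []), stats["visited"]

solution, visited = subset_sum([8, 6, 7, 5, 3], 10)
print(solution)  # → [7, 3]
```

Without the pruning test the search would enumerate all 2^n subsets; with it, whole subtrees are skipped as soon as a partial assignment is infeasible.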
Automated Reasoning. Entry by Frederic Portoraro. The Stanford Encyclopedia of Philosophy (Fall 2001 Edition); Edward N. Zalta, editor. "Reasoning is the ability to make inferences, and automated reasoning is concerned with the building of computing systems that automate this process."
Logical Agents. Chapter 7 of the textbook, Artificial Intelligence: A Modern Approach (Second Edition), by Stuart Russell and Peter Norvig. "This chapter introduces knowledge-based agents. The concepts that we discuss -- the representation of knowledge and the reasoning processes that bring knowledge to life -- are central to the entire field of artificial intelligence. ... We begin in Section 7.1 with the overall agent design. Section 7.2 introduces a simple new environment, and illustrates the operation of a knowledge-based agent without going into any technical detail. Then, in Section 7.3, we explain the general principles of logic. Logic will be the primary vehicle for representing knowledge throughout Part III of the book."
Computational Intelligence - A Logical Approach. By David Poole, Alan Mackworth and Randy Goebel. 1998. Oxford University Press, New York. "In order to use knowledge and reason with it, you need what we call a representation and reasoning system (RRS). A representation and reasoning system is composed of a language to communicate with a computer, a way to assign meaning to the language, and procedures to compute answers given input in the language. Intuitively, an RRS lets you tell the computer something in a language where you have some meaning associated with the sentences in the language, you can ask the computer questions, and the computer will produce answers that you can interpret according to the meaning associated with the language. ... One simple example of a representation and reasoning system ... is a database system. In a database system, you can tell the computer facts about a domain and then ask queries to retrieve these facts. What makes a database system into a representation and reasoning system is the notion of semantics. Semantics allows us to debate the truth of information in a knowledge base and makes such information knowledge rather than just data." - excerpt from Chapter 1 (pages 9 - 10).
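Poole, Mackworth, and Goebel's database example can be made concrete with a minimal tell/ask store (a toy sketch of our own, not an actual database system): the representation is ground predicate-argument tuples, and asking retrieves what was told.

```python
class FactBase:
    """A minimal representation and reasoning system: tell the
    computer facts about a domain, then ask queries against them."""
    def __init__(self):
        self.facts = set()

    def tell(self, pred, *args):
        self.facts.add((pred, args))

    def ask(self, pred, *args):
        return (pred, args) in self.facts

kb = FactBase()
kb.tell("enrolled", "alice", "cs101")
print(kb.ask("enrolled", "alice", "cs101"))  # → True
print(kb.ask("enrolled", "bob", "cs101"))    # → False
```

What the semantics adds, in the authors' sense, is that each stored tuple is taken to assert something true of the domain, so answers can be judged right or wrong rather than merely retrieved.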
Computers versus Common Sense. Video (May 30, 2006; 1 hour, 15 minutes) from Google TechTalks. Dr. Douglas Lenat, President and CEO of Cycorp, talks about common sense: "It's way past 2001 now, where the heck is HAL? ... What's been holding AI up? The short answer is that while computers make fine idiot savants, they lack common sense: the millions of pieces of general knowledge we all share, and fall back on as needed, to cope with the rough edges of the real world. I will talk about how that situation is changing, finally, and what the timetable -- and the path -- realistically are on achieving Artificial Intelligence."
Computing's Killer Problem. By Lee Gomes, Forbes Magazine (March 29, 2010). "...much of what we do with computers--the fundamental security of the Internet, for example--is based not on anything we know for sure, but on what's essentially just a good guess. ... Computer science theorists regard P=NP as the central theoretical question of the day ... Here's what's at stake: P represents the collection of mathematical problems that a computer can solve in a reasonable amount of time. Mathematicians being mathematicians, they define reasonableness in their own special way. P stands for polynomial. A problem that gets only somewhat more difficult as the numbers get bigger is said to be solvable in polynomial time. The opposite of polynomial is exponential, where the time needed to solve a problem quickly grows unreasonably large. The other side of the equation, NP, represents problems whose answers can be verified in a reasonable amount of time. To ask if P=NP is to ask if those two are the same."
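The asymmetry Gomes describes, checking an answer versus finding one, shows up in a toy graph-coloring sketch (the graph below is invented for illustration): verifying a proposed coloring touches each edge once, while the brute-force solver may examine up to k^n candidate colorings.

```python
from itertools import product

def valid_coloring(edges, coloring):
    """Verification is cheap: check each edge exactly once."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def find_coloring(nodes, edges, k=3):
    """Solving by brute force: up to k**len(nodes) candidates."""
    for colors in product(range(k), repeat=len(nodes)):
        coloring = dict(zip(nodes, colors))
        if valid_coloring(edges, coloring):
            return coloring
    return None

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
certificate = find_coloring(nodes, edges)
print(valid_coloring(edges, certificate))  # → True
```

Graph coloring is NP-complete, so whether some clever algorithm can always avoid the exponential enumeration is exactly the open P=NP question.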
Marvin Minsky on Common Sense and Computers That Emote - As artificial intelligence research celebrates its 50th birthday, the MIT icon asks what makes the minds of three-year-olds tick. By Wade Roush. Technology Review (July 13, 2006). (Also be sure to see Marvin Minsky's papers elsewhere on this page.)
Artificial Intelligence. [Radio broadcast; audio available.] Reported by Shay Zeller for The Front Porch. New Hampshire Public Radio (July 12, 2006). "Dartmouth College is celebrating 50 years of Artificial Intelligence this week with a special conference that takes a look forward and a look back at the field. We'll find out how AI has evolved since its inception and how close scientists have come to creating the technological brain that's been depicted in science fiction for decades. We'll also look at the philosophical and ethical questions that go along with creating machines that emulate the human mind. Our guests are: Eugene Charniak, professor of Computer Science at Brown University. Charniak's expertise is in language development, and he's presenting a speech at the conference entitled 'Why Natural Language Processing is Now Statistical Natural Language Processing.' James H. Moor, professor of Philosophy at Dartmouth. He's the conference's main organizer."
Art and Science of Cause and Effect. By Judea Pearl, UCLA. With illustrations and clear explanation, this marvelous lecture takes you across centuries of human thought on cause and effect, and presents the issues involved in translating messy human ideas into precise language that computers can use.
AI Bite: A Short Review of Theorem Proving by Simon Colton. AISB Quarterly (No. 126, Spring 2008), Sponsored by, and available from, The Society for the Study of Artificial Intelligence and Simulation of Behaviour. Downloadable PDF.
AI: History and Applications. Chapter One of George F. Luger's textbook, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th Edition, is available online. 2002. Addison-Wesley. "Once thinking had come to be regarded as a form of computation, its formalization and eventual mechanization were obvious next steps. In the seventeenth century, Gottfried Wilhelm von Leibniz, with his Calculus Philosophicus, introduced the first system of formal logic as well as constructed a machine for automating that calculation (Leibniz 1887)."
The St. Thomas Common Sense Symposium: Designing Architectures for Human-Level Intelligence. By Marvin Minsky, Push Singh, and Aaron Sloman. AI Magazine 25(2): Summer 2004, 113-124. Abstract: "To build a machine that has 'common sense' was once a principal goal in the field of artificial intelligence. But most researchers in recent years have retreated from that ambitious aim. Instead, each developed some special technique that could deal with some class of problem well, but does poorly at almost everything else. We are convinced, however, that no one such method will ever turn out to be 'best,' and that instead, the powerful AI systems of the future will use a diverse array of resources that, together, will deal with a great range of problems. To build a machine that's resourceful enough to have humanlike common sense, we must develop ways to combine the advantages of multiple methods to represent knowledge, multiple ways to make inferences, and multiple ways to learn. We held a two-day symposium in St. Thomas, U.S. Virgin Islands, to discuss such a project --- to develop new architectural schemes that can bridge between different strategies and representations. This article reports on the events and ideas developed at this meeting and subsequent thoughts by the authors on how to make progress."
Scientist on the Set: An Interview with Marvin Minsky. A chapter from Hal's Legacy (Edited by David G. Stork. 1996. MIT Press). "Stork: Could you give a very broad overview of the techniques in AI? Minsky: There are three basic approaches to AI: Case-based, rule-based, and connectionist reasoning. The basic idea in case-based, or CBR, is that the program has stored problems and solutions. Then, when a new problem comes up, the program tries to find a similar problem in its database by finding analogous aspects between the problems. The problem is that it is very difficult to know which aspects from one problem should match which ones in any candidate problem in the other, especially if some of the features are absent. In rule-based, or expert systems, the programmer enters a large number of rules. The problem here is that you cannot anticipate every possible input. It is extremely tricky to be sure you have rules that will cover everything. Thus these systems often break down when some problems are presented; they are very 'brittle.' Connectionists use learning rules in big networks of simple components -- loosely inspired by nerves in a brain. Connectionists take pride in not understanding how a network solves a problem."
Stuart Russell on the Future of Artificial Intelligence. Ubiquity, Volume 4, Issue 43 (December 24 - January 6, 2004). "UBIQUITY: Will they be based on a probability theory? RUSSELL: Yes. Speech has already gone this route. Speech recognition is a giant calculation of posterior probabilities from evidence. ... At the same time, logical AI tradition has broadened to include probability theory. A lot of high-level representation, reasoning and planning can go on in a probabilistic formalism."
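Russell's point about posterior probabilities can be illustrated with a one-step Bayesian update (the candidate words, prior, and likelihoods below are made up for the example, not from any real speech system):

```python
def posterior(prior, likelihood, evidence):
    """Posterior over hypotheses given one piece of evidence:
    P(h | e) is proportional to P(e | h) * P(h),
    normalized over all hypotheses."""
    unnorm = {h: prior[h] * likelihood[h][evidence] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical acoustic model: two candidate words and how likely
# each is to produce the observed sound feature "f1".
prior = {"two": 0.6, "too": 0.4}
likelihood = {"two": {"f1": 0.2}, "too": {"f1": 0.5}}
post = posterior(prior, likelihood, "f1")
print(round(post["too"], 3))  # → 0.625
```

Even though "two" is the more probable word a priori, the acoustic evidence favors "too", and the posterior combines both sources exactly as Bayes' rule prescribes.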
Primer on Logic and Logical Concepts. By Erwin M. Segal, Department of Psychology and Center for Cognitive Science at the State University of New York at Buffalo. "Logic was an attempt to describe normative or correct reasoning. A widespread belief in AI, Cognitive Psychology, and Cognitive Science is that there are procedures representing the way people think which can be implemented on computers using logic type rules directly (algorithms) or by finding other ways to implement logical principles. The study of reasoning in this case is the study of 'natural' logic."
Ninth Workshop on Automated Reasoning. AISB 2002. Toby Walsh, editor. One of the many convention proceedings available from The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB).
AI on the Web: Logic and Knowledge Representation. A resource companion to Stuart Russell and Peter Norvig's "Artificial Intelligence: A Modern Approach" with links to reference material, people, research groups, books, companies and much more.
Argonne National Laboratory, Mathematics and Computer Science Division: Automated Reasoning: "Our goal is to develop techniques and build practical programs to help mathematicians, logicians, scientists, engineers, and others with some of the deductive aspects of their work. Most of our work applies to problems that can be stated in the language of first-order logic with equality. Our programs have been applied to many real problems, mostly in abstract algebra and logic, and many new results have been obtained through their use."
"Association for Automated Reasoning (AAR) is a not-for-profit corporation intended for educational and scientific purposes. The objective of AAR is to advance the field of automated reasoning by disseminating and exchanging information among its international members...."
Course materials prepared for college and university classes are a great place to find both current and core readings and other resources. This course is representative of what you can find online.
Journal of Automated Reasoning. Kluwer Academic Publishers. Aims and Scope: "The Journal of Automated Reasoning is an interdisciplinary journal that maintains a balance between theory, implementation and application. The spectrum of material published ranges from the presentation of a new inference rule with proof of its logical properties to a detailed account of a computer program designed to solve various problems in industry. The main fields covered are automated theorem proving, logic programming, expert systems, program synthesis and validation, artificial intelligence, computational logic, robotics, and various industrial applications. The papers share the common feature of focusing on several aspects of automated reasoning, a field whose objective is the design and implementation of a computer program that serves as an assistant in solving problems and in answering questions that require reasoning."
SRI International's Artificial Intelligence Center's Representation and Reasoning Program. Be sure to check out their representative sampling of programs.
UCLA Automated Reasoning Group: "The group focuses on research in the areas of probabilistic and logical reasoning and their applications to problems in science and engineering disciplines. On the theoretical side, the research involves formulation of various tasks such as diagnosis, belief revision, planning and verification as reasoning problems. On the practical side, the group focuses on the development of efficient and embeddable reasoning algorithms that can scale to real-world problems, and software environments that can be used to construct and validate large-scale models."
Other References Offline
Allen, J. 1984. Towards a General Theory of Action and Time. Artificial Intelligence 23: 123-154.
Brachman, Ronald, and Hector Levesque. 2004. Knowledge Representation and Reasoning. Morgan Kaufmann (part of Elsevier’s Science and Technology Division). Excerpt from the publisher's description: "Knowledge representation is at the very core of a radical idea for understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top down, putting the focus on what an agent needs to know in order to behave intelligently, how this knowledge can be represented symbolically, and how automated reasoning procedures can make this knowledge available as needed. This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way. Each of the various styles of representation is presented in a simple and intuitive form, and the basics of reasoning with that representation are explained in detail."
Forbus, Kenneth D., and Johan de Kleer. 1993. Building Problem Solvers. Cambridge, MA: MIT Press.
Heckerman, D. 1986. Probabilistic Interpretation for MYCIN's Certainty Factors. In Uncertainty in Artificial Intelligence, ed. Kanal, L. N. and J. F. Lemmer, 167-196. Amsterdam, London, New York: Elsevier/North Holland.
Nii, H. Penny. 1995. Blackboard Systems: The Blackboard Model of Problem Solving and the Evolution of Blackboard Architectures. In Computation and Intelligence, ed. Luger, George F., Menlo Park, CA: AAAI Press.
Post, Stephen, and Andrew P. Sage. 1990. An Overview of Automated Reasoning. IEEE Transactions on Systems, Man and Cybernetics, 20 (1): 202-224. "[T]his paper provides an overview of information processing methods that are most directly based on default reasoning, a term that has come to be used for patterns of inference that permit drawing conclusions suggested but not entailed by their premises, given that the conclusions are consistent with the rest of what is known."