Without John McCarthy (1927-2011), who coined the term “artificial intelligence” at the Dartmouth Conference in 1956, there would be no textbook on artificial intelligence. McCarthy held positions at the Massachusetts Institute of Technology, Dartmouth College, Princeton University, and Stanford University, where he was a professor emeritus.
He is credited with inventing the LISP programming language, which over the years, especially in the United States, became the standard language for developing artificial intelligence programs. McCarthy was gifted in mathematics: he received a bachelor’s degree in mathematics from the California Institute of Technology in 1948 and, in 1951, a doctorate in mathematics from Princeton University under the supervision of Solomon Lefschetz.
Professor McCarthy had a wide range of interests, and his contributions span many areas of artificial intelligence. He published on logic, natural language processing, computer chess, cognition, counterfactuals, and common sense, and he raised philosophical issues from an artificial intelligence standpoint. As a founding father of artificial intelligence, McCarthy often commented, in papers such as Some Expert Systems Need Common Sense (1984) and Free Will Even for Robots, on what an AI system needs in order to be useful and effective.
McCarthy received the Turing Award in 1971 for his contributions to artificial intelligence. His other honors include the National Medal of Science in Mathematical, Statistical, and Computational Sciences and the Benjamin Franklin Medal in Computer and Cognitive Science.
The ability of a computer program to display any kind of intelligence requires that it be able to reason. The English mathematician George Boole (1815-1864) established a mathematical framework for representing the laws of human logic. His publications include some 50 papers and two major treatises: one on differential equations, which appeared in 1859, and its sequel on the calculus of finite differences, published in 1860. Perhaps his greatest achievement, however, was the general approach to symbolic reasoning he gave in Laws of Thought: given logical premises with arbitrary terms, Boole treated those premises purely symbolically and showed how to draw valid logical inferences from them.
In the second part of Laws of Thought, Boole attempted to devise a general method for determining, from the given prior probabilities of a system of events, the posterior probability of any other event logically connected with those events.
The algebraic language (or notation) he established allowed variables to interact (or establish relationships) based on only two states: true and false. In the form now known as Boolean algebra, it has three logical operators: AND, OR, and NOT. Combining Boolean algebra with rules of logic allows us to prove things “automatically”, so a machine that can do this is, in a sense, capable of reasoning.
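This idea of “automatic” proof can be illustrated with a minimal sketch (in Python, with hypothetical helper names such as `is_tautology`): because Boolean variables take only two values, a machine can verify a logical law simply by checking every truth assignment.

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is equivalent to (not p) or q.
    return (not p) or q

def is_tautology(formula, num_vars):
    # Brute-force check over all 2**num_vars truth assignments:
    # the formula is a law of logic only if it holds for every one.
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# Modus ponens: from p and (p -> q), we may infer q.
modus_ponens = lambda p, q: implies(p and implies(p, q), q)

# One of De Morgan's laws: not(p and q) is equivalent to (not p) or (not q).
de_morgan = lambda p, q: (not (p and q)) == ((not p) or (not q))

print(is_tautology(modus_ponens, 2))  # True
print(is_tautology(de_morgan, 2))     # True
```

Exhaustive truth-table checking like this only scales to a handful of variables, but it captures the essential point: once logic is expressed in Boole’s two-valued algebra, inference becomes a mechanical procedure.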
More than two centuries later, Kurt Gödel (1931) demonstrated that Leibniz’s goal was overly optimistic. He showed that any branch of mathematics built only on its own rules and axioms, even if it is consistent, always contains propositions that cannot be proved true or false within that branch.

The great French philosopher René Descartes, in his Meditations, addressed the problems of physical reality through cognitive introspection. He argued for his own existence from the reality of his mind, eventually arriving at the famous conclusion “Cogito ergo sum” (“I think, therefore I am”). In this way, Descartes and the philosophers who followed him established separate worlds of mind and matter; only much later did this dualism give way to the contemporary view that mind and body are essentially one.