There are three main schools of artificial intelligence: symbolism, connectionism, and behaviorism.
Symbolism is an intelligent simulation method based on logical reasoning. The symbolism school believes that AI originates from mathematical logic and is dedicated to simulating human cognitive processes with symbolic operations of computers, which essentially simulates the left-brain abstract logical thinking of human beings.
Connectionism is an intelligent simulation method based on neural networks, their connection mechanisms, and learning algorithms. Drawing on results from neurophysiology and cognitive science, it attributes human intelligence to the higher-level activities of the brain and holds that intelligent behavior emerges from large numbers of simple units running in parallel through complex interconnections. It can be summarized as “simulating the operational structure of the brain” and is represented by neural networks and deep learning.
Behaviorism originated from a school of psychology in the early 20th century that linked the working principles of the nervous system with information theory, control theory, logic, and computers; in the 1980s it gave birth to intelligent control and intelligent robotic systems. It can be simply understood as “simulating human behavior” and is represented by reinforcement learning and genetic algorithms.
The history of AI development has not been smooth: it went through two rises and two falls before entering today’s era of rapid development.
The first rise (1956~1974): After the Dartmouth conference, AI development entered the fast track. In 1957, the symbolist school introduced syntactic structures, soon followed by the LISP language, which deeply influenced almost all programming languages that came after it. In the same period the Markov decision process was formalized (by Richard Bellman, building on Andrey Markov’s earlier work on stochastic processes), later becoming the mathematical basis of reinforcement learning; and Frank Rosenblatt, a representative of the connectionist school, introduced the perceptron, which can be understood as an early neural network without hidden layers. In this period AI made major progress, from algorithms to programming languages.
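To make the link between MDPs and reinforcement learning concrete, here is a minimal sketch of value iteration on a made-up two-state MDP. The states, actions, transition probabilities, and rewards below are invented for illustration; only the Bellman-backup structure is the point.

```python
# Toy value iteration on a 2-state MDP (all numbers here are illustrative).
# transition[s][a] = list of (probability, next_state, reward) outcomes.
transition = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.9, "s1", 1.0), (0.1, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in transition}
for _ in range(100):  # repeated Bellman backups until values settle
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in transition[s].values()
        )
        for s in transition
    }

# s1 lets the agent keep collecting reward, so its value ends up higher.
print(V["s1"] > V["s0"] > 0)  # True
```

Reinforcement-learning algorithms such as Q-learning estimate these same values from sampled experience instead of a known transition model.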
The first decline (1974~1980): In 1969, Marvin Minsky, a representative of the symbolist school, published the book “Perceptrons” (with Seymour Papert), which bluntly showed that the single-layer perceptron proposed by the connectionists could not compute even the basic XOR (exclusive-or) operation. The deeper problem was the low computing power of the time, which could not support complex computation, common sense, and reasoning; connectionist research consequently stagnated for about ten years. Shortly after this “victory” of the symbolist school, AI entered its first winter.
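The XOR limitation is easy to demonstrate. The sketch below (illustrative code, not from any historical source) trains a single-layer perceptron with the classic perceptron learning rule: it learns AND perfectly, because AND is linearly separable, but can never reach full accuracy on XOR, since no single line separates XOR’s classes.

```python
# A single-layer perceptron learns AND but cannot learn XOR --
# the limitation Minsky and Papert highlighted in "Perceptrons".

def train_perceptron(samples, epochs=50, lr=0.1):
    """Classic perceptron rule on 2-input data; returns (w1, w2, bias)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred          # update only on mistakes
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def accuracy(weights, samples):
    w1, w2, b = weights
    hits = sum((1 if w1 * x1 + w2 * x2 + b > 0 else 0) == t
               for (x1, x2), t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(accuracy(train_perceptron(AND), AND))  # 1.0 -- linearly separable
print(accuracy(train_perceptron(XOR), XOR))  # < 1.0 -- no line separates XOR
```

Adding a single hidden layer removes the limitation, which is exactly what later multilayer networks did.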
The second rise (1980~1987): In the 1980s, the emergence of expert systems brought AI’s second golden age. An expert system is computer software that can answer questions or solve problems in a particular field based on a set of logical rules derived from expert knowledge. In 1982, the connectionist John Hopfield proposed the Hopfield network, an associative neural network with learning capability.
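The key idea of Hopfield’s associative network is content-addressable memory: a stored pattern can be recovered from a corrupted copy of itself. Here is a minimal sketch under simplifying assumptions (bipolar ±1 units, Hebbian weights, a single stored pattern, synchronous updates); the stored pattern is made up for the example.

```python
# Hopfield-style associative recall: Hebbian weights, then iterate
# sign(W @ state) so a noisy pattern relaxes back to the stored one.

def hopfield_weights(patterns):
    """Hebbian learning: W[i][j] = sum over patterns of x_i * x_j, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Synchronously update every unit a few times and return the result."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, -1, -1, 1, -1]            # arbitrary example pattern
W = hopfield_weights([stored])
noisy = [1, -1, -1, -1, 1, -1]            # stored pattern with one bit flipped
print(recall(W, noisy) == stored)         # True: the corrupted bit is repaired
```

This “pattern completion” behavior is why Hopfield networks are described as associative memories rather than classifiers.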
The second decline (1987~1993): In this period, IBM and Apple launched general-purpose PCs with ever-better price–performance, and market demand for dedicated expert-system machines fell off a cliff. On the algorithm side, the newly born convolutional neural networks required computing many layers of neurons, making their computational cost extremely high, while AI hardware had not yet broken through its computing-power bottleneck. With “the fall of symbolism”, AI entered its second winter.
In 1995, support vector machines emerged and came to dominate machine learning: they consumed fewer computational resources, used a simpler algorithm, and were widely applied. In 2006, Geoffrey Hinton, the “father of deep learning”, proposed the restricted Boltzmann machine and the deep belief network, successfully trained multilayer neural networks, and named the approach “deep learning”, demonstrating for the first time the feasibility of training large-scale deep neural networks. Aided by the parallel development of AI hardware, AI re-entered a stage of rapid, comprehensive development.
In summary, every generation of AI has been shaped by two key elements: algorithms and computing power. On the computing-power side, in 2008 Nvidia launched the Tegra series of chips, among the earliest GPUs applied in the field of artificial intelligence; Tegra has since become one of Nvidia’s important AI chip lines, used mainly in intelligent driving. In 2010, IBM released its first brain-inspired chip, a prototype that simulates brain structure and offers cognitive and perceptual capabilities with massively parallel computing. Since then, FPGA acceleration of DNNs, Google’s TPU (used in AlphaGo), Huawei’s Kirin 970, and other AI hardware technologies have been widely applied across fields.