Defining AI isn't just difficult; it's impossible, not least because we don't really understand human intelligence. Paradoxically, advances in artificial intelligence may do more to help us define what human intelligence is not than what artificial intelligence is.
But whatever artificial intelligence is, we have made a great deal of progress in many fields over the past few years, from machine vision to game playing. AI is moving from a research topic to early enterprise adoption. Companies such as Google and Facebook have made huge bets on artificial intelligence and have already applied the technology in their products.
But Google and Facebook are just the beginning: over the next decade, we will watch AI spread into one product after another. We will communicate with bots that are not scripted robo-dialers, and we may not even realize they aren't human. We will rely on cars to plan routes and respond to road hazards.
It is no exaggeration to predict that in the coming decades, every application we touch will incorporate some AI capability, and we won't be able to do anything without touching an application.
Given that our future will inevitably be bound up with artificial intelligence, we must ask: how are we doing now? What is the current state of AI, and where are we going?
The capabilities and limitations of artificial intelligence today
Descriptions of AI tend to revolve around five axes: strength (how intelligent it is), breadth (whether it solves a narrow range of problems or a broad one), training (how it learns), capability (what problems it can solve), and autonomy (whether it is an assistive technology or acts on its own). Each axis is a spectrum, and each point in this multidimensional space represents a different way of understanding the goals and capabilities of an AI system.
On the strength axis, it is easy to look at the results of the past 20 years and see that we have built some extremely powerful programs. Deep Blue beat Garry Kasparov at chess; Watson beat Jeopardy's reigning champions; AlphaGo defeated Lee Sedol, arguably the best Go player in the world.
But all of these successes are limited. Deep Blue, Watson, and AlphaGo are highly specialized, single-purpose machines that do exactly one thing well. Deep Blue and Watson can't play Go; AlphaGo can't play chess or Jeopardy, even at the most basic level. Their intelligence is very narrow and doesn't generalize.
Watson has been adapted to medical diagnosis and other applications, but it is still fundamentally a question-and-answer machine that must be tuned for each specific domain. Deep Blue embodied a great deal of specialized knowledge about chess strategy and an encyclopedic knowledge of openings. AlphaGo was built with a more general architecture, but a lot of hand-coded knowledge still lives in its code. None of this belittles or underestimates these achievements, but it is important to recognize what these systems haven't done yet.
We have not yet created an artificial general intelligence that can solve many different kinds of problems. We don't yet have a machine that can listen to recordings of human conversation for a year or two and then start speaking. And while AlphaGo "learned" to play Go by analyzing thousands of games and then playing many more against itself, the same program can't be used to master chess.
Could the same general approach? Perhaps. But our best achievements are still a long way from true general intelligence: intelligence flexible enough to learn without supervision, or flexible enough to choose what it wants to learn, whether that is playing board games or designing PC boards.
Towards general artificial intelligence
How do we get from narrow, domain-specific intelligence to something more general? "General intelligence" here does not necessarily mean human intelligence, but we do want machines that can solve many different kinds of problems without being hand-coded with domain-specific knowledge. We want machines that can make the kinds of judgments and decisions humans make.
This does not necessarily mean that machines will implement concepts that may have no digital analog, such as creativity, intuition, or instinct. A general intelligence would need the ability to handle many kinds of tasks and to adapt to unexpected situations. And a general intelligence would almost certainly need to implement concepts like "justice" and "fairness": we are already talking about the impact of artificial intelligence on the legal system.
Self-driving cars illustrate the problems we face. To drive autonomously, a car needs to combine pattern recognition with other capabilities, including reasoning, planning, and memory. It needs pattern recognition to respond to obstacles and street signs; it needs reasoning to understand traffic rules and work out how to avoid obstacles; and it needs planning to derive a route from its current location to its destination, taking into account conditions such as traffic.
It needs to do all of these things repeatedly, constantly updating its solution. But even a self-driving car that integrated all of these AI capabilities would not have the flexibility we expect of general intelligence. You would not expect a self-driving car to chat with you or tend your garden. Transfer learning, applying knowledge learned in one domain to another, remains very difficult.
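The planning piece, at least, is easy to sketch in miniature. Below is a minimal route planner over a toy occupancy grid, using breadth-first search; the grid, the blocked cells, and the coordinates are all invented for illustration, and a real car plans over far richer maps and cost models:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a toy occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the path by walking back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # no route exists

# A row of hazards blocks the direct path; the planner routes around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = plan_route(grid, (0, 0), (2, 0))
```

When a new hazard appears, the car would mark the affected cells as blocked and call the planner again, which is the "repeatedly updating its solution" loop in miniature.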
You might be able to repurpose many of the underlying software components, but that only points out what is missing: our current AIs offer narrow solutions to specific problems; they are not general problem solvers. You can stack narrow AIs together (a car could carry a bot that discusses where to go, recommends restaurants, and plays chess with you so you won't get bored), but a pile of narrow AIs will never add up to a general AI. The key to general AI is not how many capabilities there are, but the integration of those capabilities.
Although neural networks were originally developed to model processes in the human brain, many AI projects have abandoned the idea of imitating a biological brain. We don't know how the brain works; neural networks are computationally very useful, but they do not simulate human thought.
In their book Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig write: "The quest for 'artificial flight' succeeded when the Wright brothers and others stopped imitating birds and started learning about aerodynamics."
Similarly, to succeed, AI does not need to imitate the brain's biological processes; it should instead try to understand the problems the brain handles. It is reasonable to expect that humans use any number of techniques to learn, regardless of what happens at the biological level. The same is likely true of general AI: it will use pattern matching (like AlphaGo), rule-based systems (like Watson), and exhaustive search trees (like Deep Blue).
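The last of those three, exhaustive tree search, fits in a few lines. Here is a minimal minimax over a hand-built game tree; the tree and its leaf scores are invented for illustration, and Deep Blue layered enormous chess knowledge, pruning, and custom hardware on top of the same basic idea:

```python
def minimax(node, is_max):
    """Exhaustively search a game tree: score the leaves, then back
    values up, with maximizing and minimizing players alternating."""
    if isinstance(node, (int, float)):      # a leaf holds its score
        return node
    children = [minimax(child, not is_max) for child in node]
    return max(children) if is_max else min(children)

# A two-ply tree: three moves for us, two replies each (scores invented).
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree, True)                  # the maximizer moves first
```

The maximizer must assume the opponent replies with the worst case for it in each branch (3, 2, and 0), so the best guaranteed outcome here is 3.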
None of these techniques maps directly onto human intelligence. What humans do better than any computer is build models of their world and act on those models.
The next step beyond general intelligence is superintelligence. At present it is not clear how to distinguish general AI from superintelligence. Would we expect a superintelligent system to be creative and intuitive? Since we don't understand human creativity, thinking about machine creativity is even harder.
Go experts have called some of AlphaGo's moves "creative," but those moves came from exactly the same processes and models as all the others, not from looking at the game in a new way. Repeated application of the same algorithm may produce surprising or unexpected results, but mere surprise is not what we mean by "creativity."
It is easier to think of superintelligence as a question of scale. If we can create "general intelligence," it is easy to expect that it could quickly become thousands of times more powerful than a human. Or, more precisely: either general AI will be significantly slower than human thought and hard to speed up through hardware or software, or it will speed up dramatically through massive parallelism and hardware improvements.
We will go from thousand-core GPUs to trillions of cores spread across thousands of chips, fed by data streaming from billions of sensors. In the first case, when speedups stall, general intelligence may not be that interesting (though it will have been a great journey for researchers). In the second case, its growth curve will be very steep, and very fast.
To train, or not to train
AlphaGo's developers claim to have trained their AI with algorithms far more general than Deep Blue's: they built a system with only minimal knowledge of Go strategy, and it learned largely by observing Go games. This points toward the next general direction: can we move from supervised learning on labeled data to unsupervised learning, in which systems organize and structure data on their own?
As Yann LeCun put it in a Facebook post: "We need to solve the unsupervised learning problem before we can even think of getting to true AI."
To classify photos, an AI system is first given millions of photos that have already been correctly classified; after it learns those classifications, it is tested against a further set of labeled photos to see whether it labels them correctly. But what can a machine do without labels? Without metadata telling it "this is a bird, this is a plane, this is a flower," can it still find what matters in a photo? Can machines, like humans and animals, discover patterns from far less data?
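One concrete, if tiny, version of "finding patterns without labels" is clustering. The sketch below runs plain k-means on six unlabeled 2-D points and recovers the two obvious groups; the points and the cluster count are invented for illustration, and clustering images meaningfully requires learned features rather than raw coordinates:

```python
def kmeans(points, k, iters=20):
    """Plain k-means: group unlabeled points purely by distance."""
    centers = list(points[:k])            # naive initialization
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return clusters

# Six unlabeled points forming two blobs; no metadata says which is which.
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
clusters = kmeans(points, 2)
```

No label ever tells the algorithm what the groups mean; the structure is discovered from the data alone, which is the narrowest possible instance of what unsupervised learning is after.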
Humans and animals can build models and abstractions from relatively little data: we don't need millions of images to recognize a new bird or to find our way around a new city. One problem researchers are studying is predicting future frames of a video, which requires an AI system to build an understanding of how the world works.
Can we develop systems that cope with new environments? On ice, for example, cars slide unpredictably. Humans can handle these situations, though not necessarily well. Unsupervised learning suggests that the problem cannot be solved simply with better, faster hardware, or by developers who build only on today's libraries.
Between supervised and unsupervised learning lie several intermediate approaches. In reinforcement learning, the system is given a value representing a reward. Can a robot cross a patch of ground without falling? Can a robot drive a car through the city center without a map? Rewards are fed back into the system, which learns to maximize its probability of success. (OpenAI Gym is one promising framework for reinforcement learning.)
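The reward-feedback loop itself is simple enough to sketch. Below is tabular Q-learning on a toy one-dimensional world: the agent starts at position 0, steps left or right, and earns a reward of 1 for reaching position 4. The environment, rewards, and hyperparameters are all invented for illustration; a framework like OpenAI Gym wraps far richer environments behind a similar step-and-reward interface:

```python
import random

random.seed(0)

# Toy world: positions 0..4 on a line; reaching position 4 pays reward 1.
N_STATES = 5
ACTIONS = (-1, +1)                        # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3     # learning rate, discount, exploration

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Feed the reward back: nudge the value estimate toward
        # "reward now plus the best value reachable afterwards".
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: the best action in each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

No one ever labels the "correct" action; the reward signal alone shapes the value table, and after training the greedy policy steps right from every state.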
At one extreme, supervised learning means reproducing a set of labels, which is essentially pattern recognition and prone to overfitting. At the other extreme, fully unsupervised learning means learning to reason inductively about a situation, which will require breakthroughs in algorithms. Semi-supervised learning (using minimal labeling) and reinforcement learning (learning through continual decision-making) sit between these extremes. We will see how far they can take us.