2021 is shaping up to be a banner year for artificial intelligence (AI).
In 2020, the global AI industry bucked the downward trend brought on by the pandemic. In China, investment and financing in the field hit a new high of 174.8 billion yuan, a year-on-year increase of 73.8% over 2019.
That upward trend has continued this year, with large technology companies pouring ever more resources into large-scale AI projects. This has convinced a growing number of people that the end point, artificial general intelligence (AGI), is coming soon.
According to McKinsey, many scholars and researchers maintain that there is at least a chance of reaching human-level AI within the next decade. As one view has it: "AGI is not a distant fantasy. It will come to the world faster than most of us imagine."
Recently, DeepMind, the well-known AI research laboratory under Alphabet, published a compelling peer-reviewed paper entitled "Reward Is Enough" in the journal Artificial Intelligence. Its authors argue that reinforcement learning will one day replicate human cognitive abilities and achieve AGI.
Although this essay-like position paper has a strong public-relations flavor, it has once again set off discussion of, and even concern about, AGI across the industry.
▍Are we ready for AGI?
Supporters of AGI invariably emphasize that it will benefit all of humanity. But given how AI has actually been developed and deployed so far, that conclusion seems hasty.
Harvard Business Review offers a typical example: left unchecked, predictive-policing and automated credit-scoring algorithms pose a serious threat to our society.
Pew Research recently published a survey of technology innovators, developers, business and policy leaders, researchers, and activists, most of whom were skeptical that ethical AI principles will be widely implemented by 2030.
In the final analysis, this skepticism rests on two common beliefs: that companies will always put profit first, and that the technology will be monopolized by an ever-smaller group of oligarchs. And if it is this difficult to ensure that even narrow AI conforms to ethical standards, the consequences of AGI getting out of control are harder still to imagine.
▍The technology may mature faster than expected
Beyond DeepMind, consider another example: OpenAI's GPT-3.
GPT-3 arguably sits in a transitional phase between narrow AI and AGI. Without additional training, it can complete many different tasks: generating convincing text narratives, writing computer code, auto-completing images, translating between languages, performing mathematical calculations, and other feats its creators never explicitly planned for.
But even so, today's deep learning systems, GPT-3 included, cannot adapt to a constantly changing environment, and that is a fundamental difference between today's AI and AGI. One step toward greater versatility is multimodal AI, which combines GPT-3-style language processing with other capabilities, such as visual processing.
For example, building on GPT-3, OpenAI launched DALL-E, which generates images from the concepts it has learned. Given a simple text prompt, DALL-E can produce "a picture of a capybara sitting in a field at sunrise."
Although it may never have "seen" such a picture before, it can combine what it has learned about painting, capybaras, fields, and sunrises to produce dozens of candidate images. It is therefore multimodal: still not AGI, but markedly more capable and versatile.
Earlier this month, the Beijing Zhiyuan AI Research Institute released the ultra-large-scale model Enlightenment 2.0 (Wu Dao 2.0) at the 2021 Beijing Zhiyuan Conference. Its parameter count is reported to be 1.75 trillion, ten times that of GPT-3, surpassing the 1.6-trillion-parameter record previously set by Google's Switch Transformer pre-trained model and making it China's first, and currently the world's largest, trillion-parameter model.
Like GPT-3, the multimodal Enlightenment 2.0 can perform natural language processing, text generation, image recognition, and image generation. But it reportedly does these tasks faster and arguably better, and it can even sing.
The traditional view holds that achieving AGI is not simply a matter of scaling up the computing power and parameter counts of deep learning systems. Another view, however, is that sufficient complexity will give rise to intelligence.
Last year, deep learning pioneer, Turing Award winner, and University of Toronto professor Geoffrey Hinton pointed out: "There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general artificial intelligence, the system would probably need one trillion synapses." Synapses are the biological equivalent of a deep learning model's parameters.
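The scale comparisons running through this section reduce to simple arithmetic. As a quick sketch, using only the round figures cited in this article (not exact engineering numbers):

```python
# Parameter counts as cited in this article (rough public figures).
GPT3_PARAMS = 175e9                  # GPT-3: 175 billion parameters
SWITCH_TRANSFORMER_PARAMS = 1.6e12   # Google Switch Transformer record
ENLIGHTENMENT_2_PARAMS = 1.75e12     # Enlightenment 2.0 (Wu Dao 2.0)
HINTON_SYNAPSE_ESTIMATE = 1e12       # Hinton's one-trillion-synapse figure

# Enlightenment 2.0 is ten times GPT-3's size...
print(ENLIGHTENMENT_2_PARAMS / GPT3_PARAMS)                # 10.0
# ...and exceeds both the previous record and Hinton's threshold.
print(ENLIGHTENMENT_2_PARAMS > SWITCH_TRANSFORMER_PARAMS)  # True
print(ENLIGHTENMENT_2_PARAMS > HINTON_SYNAPSE_ESTIMATE)    # True
```

Parameter count is of course only a crude proxy for capability, but it is the yardstick the scaling argument in this section relies on.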
Enlightenment 2.0 has clearly reached that number. Zhang Hongjiang, chairman of the Zhiyuan Research Institute, has said that "large model + large computing power" is, at present, a feasible path toward AGI, and that large models are of great significance to the development of AI.

In his view, a transformative AI industrial infrastructure, analogous to the power grid, will in the future be built on top of large models. A large AI model is the "power plant" that converts data, the "fuel," into the intelligent capability that drives AI applications. If large models are connected to all AI applications to provide users with a unified intelligent capability, society as a whole will form a network for producing and consuming intelligence: an "intelligent network." On this view, the large model is the next basic platform for AI and the strategic infrastructure for its future development.
Just a few weeks after the release of Enlightenment 2.0, Google Brain announced a deep learning computer-vision model with two billion parameters. Although there is no guarantee that progress in these fields will keep up its rapid pace, some projections suggest that by 2025 computers may match the capabilities of the human brain.
▍"Large model + large computing power" opens up a path
Deep learning is already widely deployed, and new applications keep appearing. Self-driving companies such as Waymo use reinforcement learning to develop control systems for their cars, and militaries are actively using reinforcement learning to develop collaborative agent systems, such as teams of robots that could fight alongside future soldiers.
Google recently applied reinforcement learning, trained on a dataset of 10,000 chip designs, to develop its next-generation TPU, a chip built specifically to accelerate AI workloads. Work that once took a team of human design engineers many months can now be completed by AI in six hours. Google is thus using AI to design the chips it uses to build more complex AI systems, further accelerating already exponential performance gains through a virtuous circle of innovation.
In this context, as models like Enlightenment 2.0 and the computing power behind them multiply, will reinforcement learning eventually lead to AGI, as DeepMind believes?
Frankly, convincing as the examples above are, they remain narrow-AI use cases. So where is AGI? The DeepMind paper claims that "reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization, and imitation." The implication is that as models mature and computing power scales up, AGI will emerge naturally from reinforcement learning.
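To make the reward-driven learning the paper describes concrete, here is a minimal sketch: tabular Q-learning on a toy corridor world. The environment, hyperparameters, and reward scheme are invented for illustration and come from the standard Q-learning recipe, not from DeepMind's paper; the point is only that useful behavior emerges from nothing but a reward signal.

```python
import random

random.seed(0)

# Toy corridor: states 0..4; reaching state 4 yields reward 1, otherwise 0.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or move right
GOAL = N_STATES - 1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # illustrative hyperparameters

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge Q toward reward + discounted future value.
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right in every state: reward was enough.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

DeepMind's thesis is that this same principle, scaled up to rich environments and vastly larger models, could be sufficient to produce the full range of intelligent abilities; whether that extrapolation holds is exactly what is in dispute.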
If DeepMind is right, then the responsible-AI practices and norms being promoted by industry and government become all the more important. With AI progress accelerating this rapidly, we clearly cannot afford the risk of AGI getting out of control.