The development prospects of artificial intelligence


liu, tempo Date: 2021-09-01 09:26:13 From:ozmca.com

The meaning of intelligence

 

What we mean by “intelligence” is a fundamental question. In a 2014 Radar article, Beau Cronin did an excellent job of surveying the many definitions of artificial intelligence. Our expectations of AI depend heavily on what we want it to do, and discussions of artificial intelligence almost always begin with the Turing test.

 

Turing assumed that people would interact with the computer by chatting: he assumed a particular way of communicating with the machine. That assumption limits what we expect the computer to do; we cannot, for example, expect it to drive a car or assemble circuits. The test is also deliberately ambiguous. The computer’s answers may be evasive or even flatly wrong; correctness is not the point. Humans can be evasive and wrong too, and we would be unlikely to mistake a flawlessly correct AI for a human.

 

If we assume instead that AI must be embedded in hardware that moves, such as a robot or a self-driving car, we get a different set of criteria. We ask the computer to carry out a loosely specified task (such as driving to a store) under its own control. We have already built AI systems that are better than most humans at route planning and driving.

 

The accident for which Google’s self-driving car was held responsible happened because the algorithm had been modified to drive more like a human, which introduced risks that AI systems do not normally take.

 

Self-driving still faces difficult, unsolved problems: a mountain road in a blizzard, for example. Whether the AI system is embedded in a car, a drone, or a humanoid robot, the problems it faces are essentially the same: performing in a safe, comfortable environment is easy; performing in high-risk, dangerous situations is much harder.

 

Humans are not good at these tasks either. And although Turing’s test allows the AI to be evasive or even wrong in conversation, vague or incorrect solutions are unacceptable when driving on the highway.

 

Artificial intelligence that can act physically forces us to think about the behavior of robots. What kind of ethics should govern autonomous robots? Asimov’s laws of robotics? If we believe robots should never kill or harm humans, weaponized drones have already crossed that line. The stock question, “if an accident is unavoidable, should a self-driving car hit a baby or a grandmother?”, is a false dilemma, but there are more serious versions of the problem.

 

To avoid an accident that would kill its own passengers, should a self-driving car plow into a crowd? The question is easy to answer in the abstract, but it is hard to imagine people willingly buying a car that would sacrifice them rather than harm bystanders. I doubt the robots themselves will be answering this question any time soon, but it will certainly be discussed in the boardrooms of Ford, GM, Toyota, and Tesla.

 

We could set the bar lower than the complexity of a conversational system or an autonomous robot, and say that AI is simply about building systems that can answer questions and solve problems. Systems that answer questions and reason over complex logic are the “expert systems” we have been building for years, most recently embodied in Watson. (AlphaGo solves a different kind of problem.)
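For a sense of what “answering questions and reasoning over complex logic” looks like in practice, here is a minimal sketch of the forward-chaining, rule-based style of inference that classic expert systems used. The facts and rules are invented for illustration; they are not taken from Watson or any real system.

```python
# Minimal forward-chaining sketch of rule-based reasoning, the style behind
# classic expert systems. The facts and rules below are invented examples.

# Each rule: if all premise facts hold, conclude a new fact.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# -> the derived set includes 'possible_flu' and 'refer_to_doctor'
```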

 

However, as Beau Cronin points out, solving problems that are intellectually challenging for humans is relatively easy; what is much harder is solving problems that are trivially simple for humans. Few three-year-olds can play Go, but every three-year-old can recognize its parents, and without a large training set of labeled images.

 


 

What we call “intelligence” depends heavily on what we want that intelligence to do; no single definition satisfies all of our goals. Without a well-defined goal that describes what we want to achieve, and that lets us measure whether we have achieved it, the transition from narrow AI to general AI will not be easy.

 

Assistant or protagonist?

 

News coverage of AI focuses on autonomous systems, machines that act on their own. There is a good reason for this: it is fun, sexy, and a little scary. Watching AlphaGo play Go with a human assistant placing its stones, it is easy to imagine a future dominated by machines. But there is more to AI than autonomous devices. Where is the real value: artificial intelligence or intelligence augmentation?

 

This question has been with us since the earliest attempts at artificial intelligence, and it is discussed at length by John Markoff in Machines of Loving Grace.

 

We may not want an AI system making decisions for us; we may want to reserve those decisions for ourselves. We may want AI to augment our intelligence by providing information, predicting the consequences of possible courses of action, and making recommendations, while leaving the decisions to humans. That may sound less dramatic than The Matrix, but a future in which AI serves us by augmenting our intelligence is far more likely than one in which we are overthrown by, or serve, a runaway AI.

 

GPS navigation is an excellent example of an AI system used to augment human intelligence. Given a decent map, most people can navigate from point A to point B, though it demands a fair amount of our attention, especially in unfamiliar areas. Plotting the best route between two locations is a genuinely tricky problem, especially once you account for traffic and road conditions.

 

But outside of autonomous vehicles, we have never connected the routing engine to the steering wheel. GPS is an assistive technology in the strict sense: it gives advice, not commands. When a person decides (or blunders into) ignoring its recommendation, you hear the GPS say “recalculating route” as it adapts to the new situation.
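As a rough illustration of why routing is tricky and where “recalculating” fits in, here is a minimal sketch of a traffic-weighted shortest-path search. The road graph, travel times, and congestion factors are invented placeholders; a real navigation engine works over enormously richer data.

```python
import heapq

# Sketch of traffic-aware route planning: edge cost = base minutes * congestion
# factor. The graph and all numbers are invented placeholders.
ROADS = {
    "home":    {"main_st": (5, 1.0), "back_rd": (7, 1.0)},
    "main_st": {"store":   (4, 2.5)},   # heavy traffic on main_st
    "back_rd": {"store":   (6, 1.0)},
    "store":   {},
}

def best_route(start, goal):
    """Dijkstra's algorithm over traffic-adjusted travel times."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, (minutes, congestion) in ROADS[node].items():
            heapq.heappush(queue, (cost + minutes * congestion, nxt, path + [nxt]))
    return float("inf"), []

print(best_route("home", "store"))     # (13.0, ['home', 'back_rd', 'store'])

# "Recalculating route": if the driver ignores the advice and turns onto
# main_st anyway, simply re-run the search from the new position.
print(best_route("main_st", "store"))  # (10.0, ['main_st', 'store'])
```

The advisory nature of GPS is visible in this structure: the search produces a recommendation, and when the driver deviates, the system just runs the same search again from wherever the car happens to be.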

 

Over the past few years we have seen many applications that qualify as artificial intelligence in one sense or another. Almost anything that falls under the umbrella of “machine learning” qualifies: indeed, “machine learning” was the name given to the more successful parts of AI back when the AI discipline had fallen into disrepute. An AI does not have to have a human voice, like Amazon’s Alexa; Amazon’s recommendation engine is certainly AI.

 

Web applications like Stitch Fix are also AI: Stitch Fix uses a recommendation engine to augment the choices made by its fashion experts. We have grown used to (and are often annoyed by) the chatbots that handle customer-service calls, with varying accuracy. You may end up talking to a human anyway; the trick is to use the chatbot to clear away all the routine questions. There is no point in having a person transcribe your address, policy number, and other standard information: a computer can do it at least as accurately.
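That “let the bot clear away the routine questions” pattern can be sketched very simply. The fields, prompts, and validation rules below are invented examples for illustration, not any real vendor’s API.

```python
import re

# Sketch of a customer-service bot that collects routine fields before handing
# the caller to a human agent. Field names and patterns are invented examples.
ROUTINE_FIELDS = [
    ("address",       "What is your street address?", r".+"),
    ("policy_number", "What is your policy number?",  r"\d{8}"),
]

def collect_routine_info():
    """Ask for each routine field, re-prompting until the answer looks valid."""
    answers = {}
    for field, prompt, pattern in ROUTINE_FIELDS:
        while True:
            reply = input(prompt + " ").strip()
            if re.fullmatch(pattern, reply):
                answers[field] = reply
                break
            print("Sorry, that doesn't look right. Let's try again.")
    return answers

if __name__ == "__main__":
    info = collect_routine_info()
    print("Handing off to a human agent with:", info)
```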

 

The next generation of assistants will be (and already are) semi-autonomous. A few years ago, Larry Page said that the Star Trek computer was the ideal search engine: a computer that understands humans, digests all available information, and gives answers before it is even asked. If you use Google Now, you may have been surprised the first time it told you to leave early for an appointment because of heavy traffic.

 

Doing that requires looking at several different data sets: your current location, the location of your appointment (perhaps from your calendar or contact list), Google Maps data, current traffic conditions, and even time-series data about expected traffic patterns. The point is not to answer a question; it is to help users before they even realize they need help.
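A bare-bones version of that “leave early” calculation might look like the sketch below. The appointment time, travel estimate, and traffic delay are hard-coded placeholders standing in for the calendar, routing, and traffic data sources described above.

```python
from datetime import datetime, timedelta

def leave_by(appointment, base_travel, traffic_delay, buffer=timedelta(minutes=5)):
    """Latest departure time that still makes the appointment on time."""
    return appointment - (base_travel + traffic_delay + buffer)

# Placeholder inputs standing in for real calendar, routing, and traffic data.
appointment   = datetime(2021, 9, 1, 10, 0)   # from the user's calendar
base_travel   = timedelta(minutes=25)         # from the routing engine
traffic_delay = timedelta(minutes=15)         # from live traffic conditions

departure = leave_by(appointment, base_travel, traffic_delay)
if datetime.now() >= departure - timedelta(minutes=10):
    print(f"Heavy traffic: leave by {departure:%H:%M} to make your appointment.")
else:
    print(f"No rush yet; plan to leave by {departure:%H:%M}.")
```

The interesting part is not the arithmetic but the proactive trigger: the assistant runs this check on its own schedule, so the notification arrives before the user thinks to ask.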
