There is a classic "trolley problem" in the field of autonomous driving, and the reason it is classic is that it involves not a simple algorithmic question but a deeper moral one. The debate is so intense precisely because everyone's moral intuitions differ.
In a car accident, when someone must be sacrificed, each person relies on their own morality to decide. Across society, therefore, we see a variety of choices: some would protect the passengers first, some the pedestrians, some the elderly, some children and women, and so on.
When artificial intelligence enters the picture, it turns what was once a scattered, random question falling on each individual into a fixed one, settled in advance by the algorithm. That is, human-designed AI uniformly fixes a moral choice in one place, in batches, turning the dilemma into the question of "whom to systematically sacrifice." Systematically protecting some and sacrificing others produces enormous moral and social problems that fuel the debate.
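The shift from individual judgment to a single fixed rule can be illustrated with a minimal, purely hypothetical sketch. The categories and priority values below are invented for illustration and do not reflect any real vehicle's policy; they only show how one hard-coded table makes every vehicle running it choose identically:

```python
# Hypothetical sketch: a moral choice that once varied person to person,
# hard-coded as a single fixed rule. All categories and priorities here
# are illustrative assumptions, not any vendor's actual policy.

PRIORITY = {"child": 0, "pedestrian": 1, "passenger": 2}  # lower = protect first

def choose_to_protect(parties):
    """Given the parties at risk, return the one the fixed rule protects."""
    return min(parties, key=lambda p: PRIORITY.get(p, len(PRIORITY)))

# Every system running this code makes the identical, predetermined choice:
print(choose_to_protect(["passenger", "pedestrian"]))  # -> pedestrian
```

The point is not the specific ordering but that some ordering is now baked in: what was a scattered, individual decision becomes a systematic one applied at scale.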
In fact, although the theory and algorithms of artificial intelligence are maturing, AI remains a young field, and its impact on society is still taking shape. In this process, if AI is to deliver its full benefit to society, the construction of technical values is particularly important.
Generally speaking, the development of artificial intelligence should remain centered on scientific and technological progress. While continuing to release the dividends that AI brings, we should also precisely prevent and actively address the risks it may pose, and balance innovative development against effective governance. This means continuously improving governance capabilities around algorithm rules, data-use security, and related matters, so as to create a standardized and orderly environment for AI's development.
Clearly, artificial intelligence will not only bring greater convenience and efficiency to human society; it will also further blur the boundary between the machine world and the human world, producing risks such as algorithmic discrimination, privacy violations, and rights infringement, and even severe challenges such as mass unemployment and threats to national security.
On the one hand, overly strict governance will limit the innovation and progress of AI technology, making any technological breakthrough difficult. On the other hand, AI without any supervision or regulation easily "goes astray," bringing risk and harm to human society and deviating from the goal of AI for good.
Therefore, we should find a balance between innovative development and effective governance: adhere to a safe and controllable governance mechanism, give equal weight to open innovation, and allow appropriate room for trial, error, and adjustment in technological progress and market innovation. We should neither crudely stifle the development of artificial intelligence nor allow it to spread unchecked.
Instead, we should make full use of collaborative governance among multiple actors, with each party performing its role and contributing what it can. By holding to sound governance principles and a firm governance bottom line, we can preserve the innovative vitality and momentum of the AI industry while enhancing the public's sense of benefit and security in using AI technologies and products.