WHO outlines principles for ethics in health AI


By liu, tempo | 2021-07-08 | ozmca.com

According to The Verge, the World Health Organization (WHO) recently released a guidance document outlining six key principles for the ethical use of artificial intelligence in health. Twenty experts spent two years developing the guidelines, the first consensus report on the ethics of artificial intelligence in medical settings.


The report highlights the promise of AI in health and its potential to help doctors treat patients, especially in areas with scarce resources. But it also stresses that the technology is not a quick fix for health challenges, particularly in low- and middle-income countries, and that governments and regulators should carefully scrutinize where and how AI is used in medicine.

The WHO said it hopes these six principles will become the foundation for how governments, developers, and regulators approach the technology. The six principles the experts propose are: protect autonomy; promote human safety and well-being; ensure transparency; promote accountability; ensure fairness; and promote tools that are responsive and sustainable.

There are dozens of potential ways to apply AI in healthcare. Applications under development include tools that use AI to screen medical images such as mammograms; tools that scan patient health records to predict whether someone is likely to get sick; devices that help people monitor their own health; and systems that help track disease outbreaks. In areas where people lack access to specialist doctors, such tools can help assess symptoms. But when they are not carefully developed and implemented, they may, at best, fail to deliver on their promise; at worst, they can cause harm.
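To make the record-screening idea concrete, here is a minimal, purely illustrative sketch (not from the WHO report or The Verge): a toy scikit-learn classifier over synthetic “health record” features that flags high-risk patients for clinician review. Every feature, label, and threshold below is invented.

```python
# Hypothetical sketch: a record-screening model that flags patients for
# follow-up. Features, labels, and the risk threshold are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "health record" features (e.g., age, blood pressure, a lab value).
X = rng.normal(size=(500, 3))
# Toy outcome: 1 = later developed the condition, 0 = did not.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag patients whose predicted risk exceeds a review threshold. A clinician,
# not the model, makes the final call (see "protect autonomy" below).
risk = model.predict_proba(X_test)[:, 1]
flagged = (risk > 0.7).sum()
print(f"{flagged} of {len(X_test)} patients flagged for clinician review")
```

Even this toy pipeline bakes in choices (what data the model is trained on, where the threshold sits) that the WHO principles below ask developers to document and justify.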

Over the past year, some of these hidden dangers have become apparent. In the fight against the COVID-19 pandemic, medical institutions and governments turned to AI tools in search of solutions, and many of those tools exhibited exactly the characteristics the WHO report warns about. The Singapore government, for example, admitted that data collected by its contact-tracing application could also be used in criminal investigations, an instance of “function creep,” in which health data is repurposed beyond its original goal. Most AI programs designed to detect COVID-19 from chest scans were built on flawed data and proved useless in practice. And hospitals in the United States deployed an algorithm designed to predict which COVID-19 patients might need intensive care before the program had been tested.

“Emergency situations do not justify the deployment of unproven technology,” the report said.

The report also acknowledges that many AI tools are developed by large private technology companies (such as Google) or through public-private partnerships. These companies have the resources and data to build such tools, but may have little incentive to adopt the proposed ethical framework for their products; their focus may be profit rather than the public interest. “Although these companies may provide innovative methods, there is concern that they may eventually exercise too much power over governments, providers and patients,” the report reads.

AI technologies in medicine are still new, and many governments, regulators, and health systems are still working out how to evaluate and manage them. The WHO report argues that a careful, measured approach will help avoid potential harm: “The attractiveness of technological solutions and the promise of technology may lead to an overestimation of the benefits and a dismissal of the challenges and problems that new technologies such as AI may introduce.”

Below is a breakdown of the six ethical principles in the WHO guidelines and why they are important.

Protect autonomy: Humans should oversee all health decisions and retain final decision-making authority; these decisions should never be made entirely by machines, and doctors should be able to override them at any time. AI should not be used to guide a person’s medical care without their consent, and their data should be protected.

Promote human safety: Developers should continuously monitor any AI tool to ensure it works as intended and does not cause harm.
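A minimal sketch of what such monitoring could look like in practice, assuming the deployed tool’s predictions can later be compared against clinician-confirmed outcomes; the class name, window size, and accuracy bar are all hypothetical:

```python
# Hypothetical sketch: ongoing safety monitoring for a deployed model.
# The rolling window and minimum-accuracy bar are invented for illustration.
from collections import deque

class SafetyMonitor:
    """Tracks rolling agreement between model predictions and
    clinician-confirmed outcomes, and alerts on degradation."""

    def __init__(self, window: int = 200, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction: int, confirmed_label: int) -> None:
        self.outcomes.append(prediction == confirmed_label)

    def check(self) -> bool:
        """Return True if the tool still meets its safety bar."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return True  # not enough confirmed cases yet to judge
        accuracy = sum(self.outcomes) / len(self.outcomes)
        if accuracy < self.min_accuracy:
            print(f"ALERT: rolling accuracy {accuracy:.2%} is below the bar; "
                  "pause the tool and escalate for review")
            return False
        return True
```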

Ensure transparency: Developers should publish information about how their AI tools are designed. A frequent criticism of these systems is that they are “black boxes”: it is difficult for researchers and doctors to know how they reach their decisions. The WHO wants to see enough transparency that users and regulators can fully audit and understand the tools.
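One purely illustrative form this transparency could take is a machine-readable “model card” published alongside a tool, describing what it was trained on and where it should not be used. The fields and values below are assumptions for the sake of example, not a WHO specification:

```python
# Hypothetical sketch: a minimal machine-readable "model card" a developer
# could publish with a tool. All field names and values are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str           # provenance of the training set
    evaluation_populations: str  # who the tool was validated on
    known_limitations: str
    contact: str

card = ModelCard(
    name="chest-scan-screener v1.2",
    intended_use="Triage support only; not a diagnostic decision-maker.",
    training_data="De-identified scans from three hospital systems, 2018-2020.",
    evaluation_populations="Adults 18-80; not validated for pediatric use.",
    known_limitations="Lower sensitivity on portable X-ray images.",
    contact="safety@example.org",
)

print(json.dumps(asdict(card), indent=2))
```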

Promote accountability: When something goes wrong with an AI technology, for example when a tool’s decision leads to patient harm, there should be mechanisms for determining who is responsible (such as the manufacturer or the clinical user).

Ensure fairness: This means making sure tools are available in multiple languages and are trained on diverse data sets. In recent years, close scrutiny of widely used health algorithms has found that some of them encode racial bias.
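A hedged sketch of the kind of audit this principle implies: evaluate one trained model separately on each demographic subgroup and compare error rates. The data, groups, and model here are entirely synthetic:

```python
# Hypothetical sketch: a per-subgroup audit of a trained classifier.
# Groups, data, and the model are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))
y = (X[:, 0] > 0).astype(int)
group = rng.choice(["A", "B"], size=600)  # stand-in demographic attribute

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# Compare sensitivity (recall) across groups; a large gap suggests the
# tool may under-serve one population and needs retraining or review.
for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: recall = {recall_score(y[mask], preds[mask]):.2%}")
```

A large gap in per-group sensitivity is exactly the sort of disparity that scrutiny of deployed health algorithms has uncovered.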

Promote responsive and sustainable tools: Developers should be able to update their tools regularly, and institutions should have ways to adjust a tool if it proves ineffective. Institutions and companies should also introduce only tools that can be maintained, even in under-resourced health systems.
