If a Medical AI Makes a Mistake, Who's to Blame? WHO Publishes Guidelines

By liu, tempo | Date: 2021-08-12 09:47:10 | From: ozmca.com

The World Health Organization (WHO) has issued guidelines on the ethics and governance of the use of AI in healthcare. The guide runs to more than 160 pages and comprises nine chapters covering AI applications in healthcare, applicable laws and policies, key ethical principles and the corresponding ethical challenges, accountability mechanisms, and governance frameworks.


The WHO said it was the first comprehensive international guide to medical AI based on ethical and human rights standards. The guidelines were led by the WHO Digital Health and Innovation and Research teams and took two years to complete. WHO also worked with a leading group of 20 experts to identify six core principles for the ethical use of AI in healthcare, the first consensus principles in the field.


The six core principles are: 1. Protect human autonomy; 2. Promote human well-being, human safety and the public interest; 3. Ensure transparency, explainability and intelligibility; 4. Foster responsibility and accountability; 5. Ensure inclusiveness and equity; 6. Promote AI that is responsive and sustainable.


WHO pointed out that the application of AI technology in healthcare has great prospects, including medical care, health research and drug development, health system management and planning, and public health and surveillance. Drawing on the full guide, "The Medical Community" summarizes some of the ethical issues that may arise from the use of AI in healthcare and how to address them.


Will AI cause health care workers to lose their jobs?


"If we get it right, in 30 years' time doctors will not be able to find jobs, there will be fewer and fewer hospitals, and there will be a lot fewer pharmaceutical companies." At the World Internet Conference in November 2014, Jack Ma, then Alibaba's executive chairman, announced that health would be one of Alibaba's two future industries.




In the guidelines, the WHO repeatedly mentions Google, Facebook and Amazon in the United States, as well as Internet technology companies such as Tencent, Alibaba and Baidu in China. The Chinese platforms provide users with online medical information, benefiting millions of people in China.


Describing trends in the use of AI in clinical care, the WHO cited China as an example, "where the number of telemedicine providers increased nearly fourfold during the COVID-19 pandemic." This can ease the current shortage of medical resources and medical staff. At the same time, patients with chronic and other diseases can manage themselves better with the help of AI, reducing the demand on healthcare human resources.


Will doctors be unable to find jobs in the future, as Jack Ma said? The optimistic view in the guidelines is that AI will reduce the daily workload of clinicians, allowing them to spend more time with patients and focus on more challenging tasks, and that healthcare workers will take on other roles, such as labeling data or designing and evaluating AI technology, so they will not lose their jobs.


The pessimistic view: AI will automate many of the jobs and tasks of healthcare workers. Automating large numbers of jobs will cause instability in the short term. Even if new jobs are created and overall employment rises, many jobs in certain areas will be lost, and those not qualified for the new jobs will be left unemployed.




Almost all health jobs require a minimum level of digital and technical proficiency. It has been suggested that within 20 years, 90 per cent of NHS jobs will require digital skills. Doctors must improve their skills in this area and communicate more with patients about the use of AI: making predictions, discussing trade-offs, and explaining the ethical and legal risks of the technology.


It is also important to note that reliance on AI systems can erode independent human judgment. In the worst case, if AI systems fail or are compromised, healthcare workers and patients may be unable to act. Therefore, a robust contingency plan should be in place to support operations if technical systems fail or are disrupted.


In fact, many high-, middle- and low-income countries currently face a shortage of health workers, and the WHO estimates that by 2030 there will be a shortfall of 18 million, mainly in low- and middle-income countries. Moreover, clinical experience and knowledge of the patient remain critical. Therefore, AI will not replace clinicians in the foreseeable future.


What if there is a “peer disagreement” between AI and doctors? How can doctor autonomy be guaranteed?


In diagnosis, AI is widely used in radiology and medical imaging, but it is still relatively new and not yet routinely used in clinical decision-making. Currently, AI is being evaluated for diagnosing tumors and ophthalmological and pulmonary diseases, and may enable timely detection of stroke, pneumonia, cervical cancer and other conditions through imaging, echocardiography and similar means. It may also be used to predict diseases such as cardiovascular disease and diabetes, or major health events. In clinical care, AI can predict disease progression and drug resistance.


In some aspects of clinical care, there are benefits to AI replacing human judgment: humans can make decisions that are more unfair, more biased and simply worse than a machine's. When machines can execute decisions more quickly, accurately and sensitively, leaving those decisions to humans could mean that some patients suffer avoidable illness and death.


At present, decision-making power in healthcare has not been fully transferred from people to machines. Paradoxically, in a case of "peer disagreement" between AI and doctors, the AI has little value if the doctor simply ignores the machine; but if the doctor fully accepts the AI's decision, this could undermine the doctor's own authority and responsibility.


Something beyond peer disagreement may be emerging: AI systems are replacing humans as cognitive authorities, and routine medical functions may be handed over entirely to AI. If appropriate measures are not taken, this will undermine human autonomy: humans may neither understand how the AI decides nor be able to negotiate with it to reach a shared decision.




In the context of AI in healthcare, autonomy means that humans should retain full control over healthcare systems and medical decisions. In practice, humans should be able to decide whether an AI system is used for a particular medical decision and, where appropriate, to defer to human judgment. This ensures that clinicians can override decisions made by the AI system, making them "essentially reversible."


In addition, AI should be transparent and explainable. Healthcare organizations, health systems and public health institutions should regularly publish information on why particular AI technologies are being used, and should evaluate the AI regularly, to avoid "algorithmic black boxes" that even developers cannot understand.


Who is responsible for mistakes made with AI?


According to a July 2018 report in the Nihon Keizai Shimbun (Nikkei), the Japanese government planned a series of rules for AI-based medical devices stipulating that ultimate responsibility for diagnosis rests with the doctor: because AI can lead to misdiagnosis, AI is positioned as auxiliary medical equipment, and under the Medical Practitioners Act "liability for the final diagnosis and treatment decisions shall be borne by the doctor." By clarifying the scope of responsibility, the rules are intended to encourage manufacturers to develop AI medical devices.


The news sparked huge controversy at the time. The WHO guidelines note that, overall, AI may reduce misdiagnosis rates if the underlying data are accurate and representative. But when misdiagnosis does occur, is it reasonable for doctors to bear the responsibility?


The WHO's answer is no. First, the clinician does not control the AI technology. Second, because AI technology is often opaque, doctors may not understand how the AI system translates data into decisions. Third, the use of AI may reflect the preference of the hospital system or other outside decision makers rather than the clinician's own choice.


The guidelines point out that certain characteristics of AI technology complicate the concepts of responsibility and accountability and may create a "responsibility gap": because AI systems develop on their own, and not every step involves human intervention, developers and designers may claim they are not responsible, shifting the entire risk of harm onto the healthcare workers closest to the AI, which is unreasonable.


The second challenge is the "traceability" of harm, which has long plagued healthcare decision-making systems. Since AI development involves contributions from many parties, it is difficult, legally and ethically, to apportion responsibility. Moreover, ethics guidelines are often issued by the technology companies themselves, and authoritative or legally binding international standards are lacking; monitoring of companies' compliance with their own guidelines is usually done in-house, with little transparency, no third-party enforcement, and no legal force.




If clinicians make mistakes when using AI technology, it should be examined whether anyone involved in their training bears responsibility. If a faulty algorithm or faulty data were used to train the AI technology, responsibility may fall on those who developed or tested it. However, clinicians should not be completely exempt: they cannot simply rubber-stamp the machine's advice and ignore their own expertise and judgment.


Accountability procedures should clarify the relative roles of manufacturers and clinicians when AI-enabled medical decisions harm individuals. Assigning responsibility to developers encourages them to minimize harm to patients. Other manufacturers, including drug and vaccine makers and medical device companies, also need their responsibilities made clear.


Developers, institutions and doctors may all play a role in medical harm when AI is deployed across a healthcare system, yet no single party is "fully responsible." In such cases, responsibility may lie not with the provider or developer of the technology but with the government agency that selected, validated and deployed it.


In its guidelines, the WHO also addresses ethical issues such as public privacy, the responsibilities of commercial technology companies, bias and errors in algorithmic resource allocation, and the climate impact of the carbon dioxide emissions generated by the machines. WHO calls for a governance framework for healthcare AI and makes recommendations on data governance, private- and public-sector governance, policy observatories and model legislation, and global governance.


"Our future is a race between the growing power of technology and the wisdom with which we use it." In the guide, WHO Chief Scientist Dr Soumya Swaminathan quotes Stephen Hawking.
