Experts are wondering whether the health care world is ready for the age of artificial intelligence (AI).

AI in health care has arrived, with machine learning and deep neural network tools assisting medical decision making and management.

The new technology has permeated at least three levels of practice: AI-assisted image interpretation, AI-assisted diagnosis, and AI-assisted prediction and prognostication.

“From diagnosing retinopathy to cardiac arrhythmias, from screening for skin cancer to breast cancer, from predicting outcome of stroke to self-management of chronic diseases, AI and machine learning devices can replace many time-consuming, labour-intensive, repetitive and mundane tasks of clinicians and give possible suggestions of management plans,” experts write in a new article for the Medical Journal of Australia.

But the quality of AI in health care depends on the quality of the data on which it is built.

“Algorithms are being developed and validated on data generated by health care systems where current practices may already be inequitable,” the experts write.

“A system built on poor-quality, biased data will reflect those problems (‘garbage in, garbage out’). If a health care system has excluded populations of patients, the structural inequalities of health care will be repeatedly reinforced by the AI.”

AI also requires access to big data.

“Big data in health care is primarily generated by public health systems, funded by the public for the public. Increasingly, claims over the health data generated by these public systems are being contested,” researchers say.

“Issues of data sovereignty threaten the existence of effective AI. Patient data should not be provided to technology giants without a good governance structure to protect data sovereignty.”

They say AI will almost certainly change standards of care.

“If AI keeps its promise of benefit and it is integrated more into practice, standards of care must require AI use, and traditional forms of therapeutics will be forced to change,” the article states.

“We will see a time when all medicine and allied health work as a team with AI. Those who refuse to partner with AI might be replaced by it.”

At some point, the industry may have to work out new lines of responsibility, such as who is liable when AI causes an injury.

“A doctor using AI should be responsible for AI decisions made in the course of treatment, especially if the doctor retains the power to make the final decision regarding treatment,” the experts say.

“But as AI takes on more autonomous decision making, it might be argued by some doctors that they should not be responsible for that which they cannot control. Similarly, it seems unfair for doctors to be held responsible for an AI decision when they are unable to deduce how and why that decision was made.

“A stepwise gradation model of shared responsibility between the human doctor and the machine in diagnosis and clinical management has been proposed.”

The article was written by Joseph Sung, the Mok Hing Yiu Professor of Medicine at the Chinese University of Hong Kong, Cameron Stewart, Professor of Health, Law and Ethics at the University of Sydney, and Professor Ben Freedman, the Deputy Director of Research Strategy at the Heart Research Institute and the University of Sydney’s Charles Perkins Centre and Concord Clinical School.

Sung and colleagues conclude that before AI tools can be put into daily use in medicine, “data quality and ownership, transparency in governance, trust-building in black box medicine, and legal responsibility for mishaps are some of the hurdles that need to be resolved”.