In a perspective piece published on 15 March in The New England Journal of Medicine, researchers at Stanford University School of Medicine in California warned of the ethical implications of using machine-learning tools to make healthcare decisions, particularly at large scale.
Among the concerns raised by the authors:
- Data used to create algorithms may contain biases that are reflected in the algorithms, and in the clinical recommendations they generate: “The algorithms being built into the healthcare system might be reflective of different, conflicting interests”, explains David Magnus, PhD, senior author of the piece and director of the Stanford Center for Biomedical Ethics. “What if the algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”
- Physicians must adequately understand how algorithms are created, critically assess the source of the data used to create the statistical models, understand how the models function to avoid becoming overly dependent on them, and guard against the misinterpretation of data that could adversely affect care decisions.
- Information generated by artificial intelligence tools “needs to be heavily weighed against what [physicians] know from their own clinical experience”. The human aspect of patient care must always be taken into account.
- Clinical guidance based on machine learning introduces a third party into the physician-patient relationship, turning it into a relationship between the patient and the healthcare system. This could alter the dynamics of responsibility within that relationship and raise new questions of confidentiality.
“We need to be cautious about caring for people based on what algorithms are showing us”, says Danton Char, assistant professor of anaesthesiology, perioperative and pain medicine, whose research on the ethical and social implications of expanded genetic testing of critically ill children is funded by the National Institutes of Health. “The one thing people can do that machines can’t do is step aside from our ideas and evaluate them critically.” He feels that “society has become very breathless in looking for quick answers”, and that caution and reflection are needed when developing artificial intelligence applications for health data.
Medical Press, Patricia Hannon (15/03/2018)