M. Scudellari, “AI Won’t Replace Us, Docs Say”, Dec. 20, 2018. [Online] IEEE Spectrum: Technology, Engineering, and Science News. Available at: https://spectrum.ieee.org/the-human-os/biomedical/ethics/ai-wont-replace-us-docs-say [Accessed 24 Jan. 2019].
About the article author
Megan Scudellari is an award-winning journalist who specializes in the life sciences and biotechnology. Beyond writing for the Boston Globe, Newsweek, Scientific American, and Nature, she has also co-authored a biology textbook titled “Biology Now”.
Can artificial intelligence ever replace doctors? This week we are profiling an article by Megan Scudellari, who interviewed Charlotte Blease, author of a recent study that asked doctors this very question. In the study, around 700 doctors were asked to rate the likelihood that “current and future innovations in artificial intelligence” would replace physicians in six key areas: diagnoses, prognoses, referrals, treatment plans, empathic care, and documentation. In four of these six areas – diagnoses, referrals, treatment plans, and empathic care – the majority of doctors rated it either “extremely unlikely” or “unlikely” that they could be replaced by machines. Of the remaining two areas, 51.8% of doctors thought machines could likely be trained to deliver better prognoses, while 80.2% believed machines could likely handle the paperwork. The study revealed physicians’ general skepticism that artificial intelligence could take over their jobs, a complacency that Blease finds concerning. She says, “We need a medical community that is fully engaged in critical debates about the ethics and regulation of AI in healthcare.”
While there is no denying the immense potential of artificial intelligence – in every profession, not just healthcare – it may be too soon to entertain the idea of machines supplanting physicians in the workplace. For instance, how would biases in the training data for these algorithms manifest themselves in a healthcare setting? How would these machine learning models handle potentially sensitive attributes, such as race, gender, or income? These pieces of information may prove statistically significant for diagnoses, but would it be ethical to let a model base its decisions on them?
What do you think? Would it be ethical for machines to take the lead in caring for a patient and prescribing treatment?