To Err is Both Human and AI

In a recent Outlook commentary in the journal Nature, Dr. Aaron Lee discusses the use of artificial intelligence (AI) in clinical medicine, covering both the significant potential of this technology and the limitations and ethical issues involved in its implementation. He specifically addresses AI screening systems for diabetic retinopathy, the first of which has already been approved for use by the FDA, but the issues raised by these systems apply to other areas of clinical medicine beyond ophthalmology. Dr. Lee makes the case that as this technology improves, and especially given the potential for deep learning algorithms to "learn" from their mistakes, the benefits of using AI as a tool for physicians may outweigh the risks.

Diabetic retinopathy screening is recommended for all patients with diabetes, and as the prevalence of diabetes has greatly increased, the demand for screening exams has risen rapidly. AI screening systems that are highly accurate at detecting diabetic retinopathy are already available and can be deployed in remote areas or in other settings where patients have difficulty accessing ophthalmologic care. The main concern with relying on these systems is that they might fail to identify disease in some patients, who would therefore not receive needed care. Yet requiring humans to verify every result of the AI system would eliminate the efficiency of using this technology in the first place. Other concerns relate to how the screening system is developed and "trained": bias in the training data, low-quality images, and the presence of other diseases can all cause problems. However, the number of patients who could gain access to much-needed diabetic retinopathy screening through this technology, and thereby avoid future blindness, greatly outweighs the very small number of patients who may be screened incorrectly.

Dr. Lee is hopeful that we can learn to live with these concerns, find solutions to them where possible, and accept some potential for error in exchange for the greater potential for good that artificial intelligence can bring to certain medical applications. Human physicians also make mistakes from time to time, and society has found ways to continue improving medical care despite human error. As Dr. Lee concludes: "A medical error rate of zero, although laudable as a goal, is unrealistic as a reference standard ... Society will need to accept an inescapable truth for the greater good: to err is both human and AI."