How would these algorithms impact the healthcare system?
Buyx: So far, autonomous algorithms have not seen extensive practical implementation because they are not yet good enough and still pose many challenges. If we manage to design ethical algorithms, they could trigger a positive transformation in medicine. But only if they can diagnose more accurately than a physician or make authoritative therapeutic recommendations, and we have enough insight into why they reach those conclusions and whether they rest on a sound medical foundation. That is when they could free up time for patients, avoid mistakes and reduce costs.
You briefly touched on this earlier: Which ethical concerns or problems do these algorithms raise?
Buyx: First, the algorithms must be backed by thorough evidence and perform reliably and accurately to avoid risks and harm. We simply must not fall prey to our obsession with technology. Second, this must not lead to the broad misconception that algorithms and AI will replace doctors or other healthcare professionals. Algorithms perform a specific, well-defined task and are unable to look for the other characteristics a doctor notices when examining a patient, meaning they cannot make a complete differential diagnosis.
Third, there must not be any algorithmic bias stemming from data sets or programming. We have all heard of facial recognition algorithms whose training data sets are not as diverse as the real world. That is why these algorithms are great at identifying the faces of white men but struggle to recognize the faces of women or people of color. We will have to correct that through the training data.
We also need to consider how and to what extent we educate patients about the role of algorithms, and how we ensure patient autonomy once such assistance systems attain the status of an actual medical consultation.
What do policymakers have to do to create the right framework?
Buyx: Policymakers must definitely provide a framework if these types of algorithms are to be approved as medical devices. Needless to say, the approval process differs from the procedure for an ultrasound machine. One of the major tasks here is to make these processes sustainable, ethical and socially responsible. The biggest challenge lies in the commercial realm and health-related apps, where the requirements are nowhere near as strict as those for medical devices.
A number of mental health apps use AI-based algorithms. We have to decide how to handle the situation when these apps engage consumers or patients directly, without physician supervision or involvement. If apps are meant to effectively provide clinical support that was previously within a physician's scope of responsibility (and rightfully so), we must ensure that they are classified and treated as potential medical devices.