Virtual assistants don't yet live up to their considerable potential when it comes to providing users with reliable and relevant information on medical emergencies, according to a new study from University of Alberta researchers.
"We were hoping to find that the devices would have a better response rate, especially to statements like 'someone is dying' and 'I want to die,' versus things like 'I have a sunburn or a sliver,'" said lead author Christopher Picard, a master's student in the Faculty of Nursing and a clinical educator at Edmonton's Misericordia Community Hospital emergency department. "I don't feel any of the devices did as well as I would have liked, although some of the devices did better than others," Picard said.
Google Home and Alexa more reliable than Siri and Cortana, U of A researchers find.
Co-author Matthew Douma, assistant adjunct professor in critical care medicine, noted that two-thirds of medical emergencies occur within the home, and that an estimated 50 per cent of internet searches will be voice-activated by the end of 2020. "Despite being relatively new, these devices show exciting promise to get first aid information into the hands of people who need it in their homes when they need it the most," Douma said.
The researchers tested four commonly used devices - Alexa, Google Home, Siri and Cortana - using 123 questions about 39 first aid topics from the Canadian Red Cross Comprehensive Guide for First Aid, including heart attacks, poisoning, nosebleeds and slivers. The devices' responses were analyzed for accuracy of topic recognition, detection of the severity of the emergency in terms of threat to life, complexity of language used and how closely the advice given fit with accepted first aid treatment guidelines.
Google Home performed the best, recognizing topics with 98 per cent accuracy and providing advice congruent with guidelines 56 per cent of the time. Google's response complexity was rated at Grade 8 level. Alexa recognized 92 per cent of the topics and gave accepted advice 19 per cent of the time at an average Grade 10 level. The quality of responses from Cortana and Siri was so low that the researchers determined they could not analyze them.
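The article does not name the readability metric behind the Grade 8 and Grade 10 ratings; a widely used choice for this kind of analysis is the Flesch-Kincaid grade-level formula. The sketch below illustrates how such a rating could be computed; the syllable-counting heuristic and the sample first aid text are illustrative assumptions, not part of the study.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels;
    # every word is assumed to have at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Hypothetical first aid advice a device might read out:
advice = ("Apply firm pressure to the wound. "
          "Call 911 if the bleeding does not stop.")
print(round(flesch_kincaid_grade(advice), 1))
```

Short, plainly worded advice scores at a low grade level under this formula; longer sentences with multisyllabic clinical terms push the score toward the Grade 8 to 10 range the researchers reported.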
Picard said he was inspired to do the study after colleagues gave him a virtual assistant as a gift. He uses it for fun with friends to settle questions such as 'What is absolute zero?', but as an emergency room nurse he wondered whether virtual assistants might also be useful during a medical emergency.
"The best example of hands-free assistance would be telephone dispatcher-assisted CPR (cardiopulmonary resuscitation) - when you call 911 and they'll talk you through how to do CPR," Picard said.
Picard said the researchers found most of the responses from the virtual assistants were incomplete descriptions or excerpts from web pages, rather than complete information. "In that sense, if I had a loved one who is facing an emergency situation, I would prefer them to ask the device than to do nothing at all," he said.
Picard foresees a time when the technology will improve to the point where, rather than waiting to be asked for help, devices could listen for symptoms such as the gasping breathing patterns associated with cardiac arrest and call 911 on their own. In the meantime, he hopes the makers of virtual assistants will partner with first aid organizations to develop more appropriate responses for the most serious situations, such as an immediate referral to 911 or a suicide support agency.
MEDICA-tradefair.com; Source: University of Alberta Faculty of Medicine & Dentistry