AI in Medicine: Establishing Trust Through Usability

Artificial Intelligence (AI) has made a substantial impact on healthcare in the past decade. For one timely example, look at the important role AI has played in the speedy development of COVID-19 vaccines.

AI has automated many administrative tasks in healthcare, such as online appointment scheduling, digitization of medical records, and reminder calls for follow-up appointments and immunization dates. Then there are more novel applications, such as Stanford’s Intelligent Hand Hygiene system, which uses depth sensors to detect “missed hand hygiene events” and thereby reduce hospital-acquired infections.

But in many areas of healthcare where AI has great potential, use of the technology is still in its infancy. In these realms where it’s slower to catch on, a common obstacle to adoption is lack of trust.

The Link Between Trust and Usability

From a general usability perspective, when you design a product, you want to make sure it can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. In healthcare and medical device design, usability encompasses all of the above, with an emphasis on efficacy and safety.

In “Relationship between trust and usability in virtual environments: An ongoing study,” a paper published in conjunction with the 2016 International Conference on Human-Computer Interaction, Davide Salanitri and his coauthors wrote that trust is indispensable in any situation involving an interaction between human and technology. And where technology is involved, perceived usability directly affects trust. “Low level usability,” they find, “could compromise an individual’s trust in use of the technology.”

If a product isn’t perceived as highly usable — and in medicine, this assessment includes an evaluation of safety and efficacy, and is tied to patient outcomes — it won’t be widely adopted. When it comes to AI and medicine, building trust in machines is key to a product’s success — and usability is key to gaining trust.

What is Artificial Intelligence?

What is “Artificial Intelligence” or “AI”? Often used as a catch-all term, AI refers to the span of technologies and techniques that enable machines to mimic human behavior. Machine Learning and Deep Learning are subsets of AI.

  • Artificial Intelligence is the broad field of designing machines to imitate human behavior by performing tasks that typically require human intelligence.
  • Machine Learning is a subset of AI that uses statistical methods to enable machines to improve and grow smarter with experience.
  • Deep Learning is a subset of Machine Learning that makes the computation of multilayer neural networks feasible. These networks evaluate factors in a way loosely analogous to the brain’s own neural networks, and they can learn and improve without supervision through exposure to large amounts of unstructured data (see the sketch after this list).
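
To make these distinctions concrete, here is a minimal Python sketch, using scikit-learn on purely synthetic data (nothing here is a medical model), that solves the same toy classification task with a classic Machine Learning method and with a small multilayer neural network:

```python
# A minimal, illustrative sketch (not a medical model): the same toy
# classification task solved with a classic Machine Learning method and
# with a small multilayer neural network. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic records: 20 numeric features, binary label.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Machine Learning: a statistical model that improves with more examples.
ml_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deep Learning: a multilayer neural network that learns intermediate
# representations of the data in its hidden layers.
dl_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

print("ML accuracy:", ml_model.score(X_test, y_test))
print("DL accuracy:", dl_model.score(X_test, y_test))
```

Both models improve with experience; the difference is that the neural network learns its own intermediate representations of the data rather than relying on a fixed statistical form.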

Looking across the wide expanse of healthcare, three medical specialties come to mind as areas that have not yet fully leveraged AI: Surgical Robotics, Radiology, and Internal Medicine. Surgical Robotics is best described as AI proper, since the products currently in use require a constant level of human interaction.

What’s being incorporated into Radiology and Internal Medicine is better described as, respectively, Machine Learning and Deep Learning. Each AI method and medical specialty has unique usability concerns, but all benefit from a human-centered perspective on AI and product development.

In the development of AI-assisted machines and systems in healthcare, assessing trust via usability testing, and doing user research to learn how artificial intelligence can aid clinicians and augment their work, is just as important as testing the functionalities of the machines’ systems.

AI-Driven Surgical Robots

In recent years, investment in AI-assisted robotic surgery has escalated, and the trend shows no sign of slowing down. Technological advancements in this area are predicted to expand the market to $24 billion by 2025.

If you’ve ever played a video game, you can understand how usability can increase trust in surgical robots. You might remember playing one of the first video games, Atari’s Pong, and pressing the controls to move the bar on the screen back and forth. You trusted those controls to move the bar as promised.

Now, thirty-five to forty years later, think about surgical robotics as a kind of glorified video game — but with higher stakes. From this standpoint, how does a surgeon know that if they move the joystick in a certain direction, they’ll be moving the scalpel one millimeter down? What if the surgeon makes a mistake? When a surgeon has their hands directly on a patient, they might feel a tear or some other kind of tactile feedback that they immediately recognize and act upon. This won’t happen when there’s a machine between them and their patient — at least, not yet.
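
One design technique that speaks directly to this question is motion scaling, in which large hand motions are mapped to proportionally smaller instrument motions, with hard safety limits. The Python sketch below illustrates the idea; the scale factor, clamp value, and function are invented for illustration, not taken from any actual surgical system:

```python
# An illustrative sketch of "motion scaling," one way teleoperated surgical
# systems map large hand motions to small instrument motions. The scale
# factor and safety clamp below are invented values, not from a real device.

SCALE = 0.2        # 5:1 scaling: 5 mm of hand motion -> 1 mm at the tip
MAX_STEP_MM = 1.0  # safety clamp: never move the tip more than 1 mm per update

def instrument_step(hand_delta_mm: float) -> float:
    """Translate a hand/joystick displacement into an instrument displacement."""
    step = hand_delta_mm * SCALE
    # Clamp so a sudden jerk of the hand cannot become a large cut.
    return max(-MAX_STEP_MM, min(MAX_STEP_MM, step))

print(instrument_step(5.0))   # 1.0 mm, exactly what the surgeon expects
print(instrument_step(40.0))  # clamped to 1.0 mm despite the large input
```

Predictable, bounded behavior like this is precisely what lets a user come to trust the controls, the same way Pong players trusted theirs.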

Setup is another usability sticking point in surgical robotics. Humans are required to set up the robot, position it on the patient, and potentially reposition it during the procedure. The setup process is highly detailed and can take a long time, in most cases longer than it takes to set up for “analog” surgery. Companies that design surgical robots are working on making this process easier and faster, which in itself introduces more potential layers of error.

Usability Challenges for AI-Driven Surgical Robots: For the foreseeable future, surgical robots will continue to require the direction of a surgeon, and help from technicians for setup and positioning. To win over these users, product designers need to consider usability challenges including accurate tactile feedback and ease of setup.

Machine Learning-Powered Radiology

Machine learning has become immediately relevant in the field of radiology. Recent research from Google Health and Imperial College London and from Stanford University has documented how AI algorithms can be more accurate than radiologists at diagnosing both breast cancer and pneumonia.

I recently heard a lecture in which the speaker suggested that the use of AI in radiology can increase diagnostic accuracy by two to three percent, with that figure trending upward. That may not sound like much, but when you consider that a single radiologist might read hundreds of images in any given week, a two to three percent increase in accuracy is significant.
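
Some quick, back-of-the-envelope Python arithmetic makes the point; the weekly reading volume here is a hypothetical illustration, not a figure from the lecture:

```python
# Back-of-the-envelope arithmetic with illustrative numbers: what a
# 2-3% accuracy gain means at the volume a radiologist actually reads.
images_per_week = 500  # hypothetical weekly reading volume

for gain in (0.02, 0.03):
    extra_correct = images_per_week * gain
    print(f"+{gain:.0%} accuracy -> roughly {extra_correct:.0f} "
          f"fewer misread images per week")
```

Ten to fifteen fewer misreads every week, per radiologist, compounds quickly across a department.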

Still, radiologists are skeptical that machine learning can interpret all the data with accuracy. A question we hear from radiologists is, “How can I trust that AI will catch the nuances I’ve spent decades learning how to detect?”

Usability Challenges for Machine Learning-Powered Radiology: The usability challenge here is about better education, and helping radiologists understand how machine learning works — and how algorithms reliably get from Point A to Point B to Point C. This education about the potential of AI and how to best leverage it can start in medical school. Once radiologists are savvy about data science, they’ll be able to harness machine learning as a valuable quantitative tool that increases accuracy, and improves workflow, communication, and patient safety.
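
One concrete teaching exercise along these lines is to start with models whose path from input to output can be inspected directly. The Python sketch below shows how a simple linear model’s prediction decomposes into per-feature contributions, one tangible way to demonstrate how an algorithm gets from Point A to Point B; the feature names, weights, and measurements are all invented for illustration:

```python
# A teaching sketch: a linear model's prediction can be broken down into
# per-feature contributions. Feature names, weights, and measurements are
# all invented for illustration.
import numpy as np

features = ["lesion_size_mm", "edge_irregularity", "density_score"]
weights = np.array([0.8, 1.4, 0.5])  # hypothetical learned coefficients
bias = -3.0
patient = np.array([2.1, 1.7, 0.9])  # hypothetical measurements

contributions = weights * patient
logit = contributions.sum() + bias
probability = 1 / (1 + np.exp(-logit))  # logistic function

for name, value in zip(features, contributions):
    print(f"{name:>20}: {value:+.2f}")
print(f"predicted probability: {probability:.2f}")
```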

Deep Learning-Informed Internal Medicine

Deep learning is being used in internal medicine to drive diagnostic decisions based on a patient’s history and symptoms as well as relevant cases. Physicians make diagnostic decisions based on the same factors, but deep neural networks can be exposed to far more cases within minutes than a clinician could possibly evaluate in days or weeks.
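
As a purely illustrative sketch of those mechanics, the following Python snippet pushes an encoded patient record through a tiny, randomly initialized neural network to produce probabilities over candidate diagnoses. The features, weights, and condition names are all invented; a real diagnostic system would be trained on large volumes of clinical cases and rigorously validated:

```python
# A purely illustrative forward pass: an encoded patient record flows
# through a tiny neural network to produce diagnosis probabilities.
# Weights are random and the condition names are invented; a real system
# would be trained on large volumes of clinical cases.
import numpy as np

rng = np.random.default_rng(0)

# Encoded patient record: e.g., scaled age, symptom flags, lab values.
patient = np.array([0.62, 1.0, 0.0, 1.0, 0.35])

# Two hidden layers, then three candidate diagnoses.
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(4, 3)), np.zeros(3)

h1 = np.maximum(patient @ W1 + b1, 0)  # ReLU activation
h2 = np.maximum(h1 @ W2 + b2, 0)
logits = h2 @ W3 + b3
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over diagnoses

for diagnosis, p in zip(["condition_a", "condition_b", "condition_c"], probs):
    print(f"{diagnosis}: {p:.2f}")
```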

If it’s applied wisely, this type of deep learning has the potential to free up doctors’ cognitive and emotional space. They’ll be able to enjoy a shift from transactional tasks to empathic patient care.

A 2016 time-and-motion study published in Annals of Internal Medicine found that physicians spent 27% of their office day on direct clinical face time with their patients and 49.2% of it on electronic health records (EHR) and desk work. When in the examination room with patients, physicians spent 52.9% of their time on EHR and other work. These numbers aren’t surprising to anyone who has visited a doctor lately. Increased AI usage would free up primary care physicians’ time and allow them to focus on patients.

Usability Challenges for Deep Learning-Informed Internal Medicine: One of the main usability challenges in internal medicine will be establishing trust by helping physicians understand how deep learning works and how it can benefit their practice and lead to better interactions with patients. That comes down to education and helping them recognize the advantages.

The Disadvantages and Advantages of AI in Medicine (source: Amisha et al., “Overview of artificial intelligence in medicine,” Journal of Family Medicine and Primary Care, vol. 8, no. 7, 2019)

Human-Centered Medical AI

The lack of trust I’ve framed above as a usability challenge is partly due to the subtext of fear running throughout these specialties that clinicians will one day be replaced by AI. But the opposite is true! If developed and applied wisely, AI will help clinicians become better doctors.

How can we promote trust in order to realize this future? Here are some ideas:

  • Emphasize the human. As we continue to bring AI into the medical field, we need to put just as much emphasis on the human interaction component as we do on the technology. As I wrote above, assessing trust via usability testing, and doing user research to learn how artificial intelligence can aid clinicians and augment their work, is as important as testing the functionalities of the machines’ systems.
  • Test early and often. For the best results, we encourage conducting usability testing early and often. Talk to physicians who’ll be using the technology to get an understanding of their concerns. What is their barrier to using it? What can we do to help them trust the technology? And consider ways to quell those concerns through education.
  • Make educational resources (such as Instructions for Use) universal. Remember that not everyone understands AI, machine learning, and deep learning. Create material that’s geared toward clinicians who have little to no understanding of how this technology works.
  • Advocate for education. If you’re in a position to do so, advocate for education about artificial intelligence to begin in medical school. It’s important to train the new generation of clinicians on the concepts and applicability of AI. People shouldn’t be expected to instinctively understand how to function efficiently in a workspace alongside machines.

Another way to look at the development of AI in medicine is to see it as an opportunity for the field to elevate the importance of so-called soft skills such as interpersonal and communication skills, emotional intelligence, and creativity, which can’t be learned by machines.

I hope this provides some food for thought about the role of usability in the adoption of AI by the wide network of medical specialties. Artificial Intelligence, Machine Learning, and Deep Learning present exciting opportunities to deliver better healthcare, and to make it accessible to all, and I look forward to seeing this realized.

This blog post was adapted from a presentation given during World Usability Day 2020, organized by PhillyCHI — watch a recording of the event.