
What’s New in Human Factors for Medical Devices

The 2015 International Symposium on Human Factors and Ergonomics in Healthcare, held in Baltimore in late April, had four program tracks going at once: Patient and Health-Care Provider Safety; Clinical and Consumer Health-Care IT; Medical and Drug-Delivery Devices; and Clinical Care Settings.

The attendees included a surprising number of health-care providers (physicians, nurses, and pharmacists). There were also a lot of familiar human factors professionals who work in the medical device industry or as consultants. It continues to surprise me that relatively few industrial designers and academic researchers participate in this conference.

With the caveat that it was impossible to catch everything — remember, there were four tracks going at once — a couple of topics were memorable for how frequently they came up.

I noticed more discussion in general this year of user interface (UI) design, often in the context of smart devices. I anticipate UI becoming a stronger focus in upcoming HFES conferences as it continues to become clear that ineffective UIs represent the highest risk to safety and efficacy.

ROI of UI

A related point of discussion was the return on investment of integrating user experience design and human factors early in the product development process. Robert Rauschenberger of Exponent, Inc. showed a graph in his presentation, “A Product Liability Perspective on Medical Device Development,” illustrating how a company that was proactive about quality function deployment (QFD) was able to make 90% of its major engineering changes long before product launch. (Source: Rezayat, M. (2000). Knowledge-based product development using XML and KCs. Computer-Aided Design.) Similar realizations are happening across industries, though within healthcare there is plenty of variation: on my Usability Maturity Ladder, I’d place some organizations at the “Burden” stage and others at “Table Stakes.”

The following were also popular topics:

FDA Draft Guidance

Presentations from FDA representatives (both CDRH and CDER) were a highlight. It is clear that the FDA folks (Ron Kaye, Shannon Hoste, Mary Brady, and Irene Chan) are working hard in a collaborative way to help ensure medical products are safe and effective for use. They continue to emphasize their desire for a dialogue with human factors professionals before premarket submissions (e.g., to review protocols, etc.).

The FDA draft guidance for applying human factors and usability engineering to optimize medical device design came out in 2011 — and at every conference we hold out hope that Ron will announce that the guidance has been finalized. (It hasn’t.) The big announcements at this conference were Ron Kaye and Mary Brady’s upcoming retirements.

Takeaways:
  • The draft guidance has been effective at getting companies to integrate human factors and usability practices into their product development cycle — which may be why the FDA doesn’t feel rushed to get it finalized.
  • Build four to six weeks into your schedule if you’re going to do a pre-submission (Q-Sub) review of your protocol with the FDA.
  • The draft tends to be widely accepted and followed as good practice, but there are two areas — learning decay and usability testing of instructions for use (IFUs) — where it’s especially open to interpretation.


Learning Decay

Learning decay (or training decay) is the falling-off of knowledge that occurs between training and the first use of that training, such as the gap between being trained on a medical device and actually using it. If a doctor prescribes a combination product such as an EpiPen for bee stings, there will presumably be a significant delay between the time the patient (or a lay caregiver) is shown how to use the pen and the first time the patient is stung. A less extreme example is a medication that must be self-administered once a day or once a week: there might be a day’s or a week’s delay between when the doctor, nurse, or pharmacist trains the patient and when the patient self-administers the medication.

The draft guidance states: “Training should represent the actual user training experience taking into account … the fact that retention of training decays over time. For this reason, prior to testing, a period of time should elapse following training to provide an opportunity for training decay to occur.” So, we attempt to simulate learning decay in our usability study protocols.

The exact nature of that simulated learning decay period is still a topic of debate. There are many interpretations of what we should be doing.

Most agree that we can get by with a shortened decay period because learning decays exponentially (as Hermann Ebbinghaus first proposed in 1885), so most of the forgetting happens soon after training. We find ourselves asking: how short a decay period still serves as a realistic simulation? We have to balance this realism against the reality that potential participants may be less willing to join a study that requires multiple trips to the lab, or worse yet, may drop out before the second session.
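To make that trade-off concrete, here is a minimal sketch (my own illustration, not anything presented at the conference) that models retention with a simple exponential forgetting curve, R(t) = exp(-t/S). The stability constant S is a made-up value; real decay rates would need to come from the learning-decay literature or your own data.

```python
import math

def retention(t_hours: float, stability_hours: float) -> float:
    """Ebbinghaus-style forgetting curve: fraction retained after t_hours."""
    return math.exp(-t_hours / stability_hours)

def delay_for_retention(target: float, stability_hours: float) -> float:
    """Hours of delay needed for retention to fall to the target fraction."""
    return -stability_hours * math.log(target)

# Hypothetical stability constant of 48 hours, for illustration only.
S = 48.0
for hours in (1, 24, 72, 168):  # 1 hour, 1 day, 3 days, 1 week
    print(f"{hours:>4} h -> retention {retention(hours, S):.2f}")

# Because decay is exponential, much of a week's forgetting has already
# occurred after a day or two, which is one argument for a shortened
# decay period in usability study protocols.
print(f"Delay to reach 50% retention: {delay_for_retention(0.5, S):.1f} h")
```

Under these assumptions, a one- or two-day delay already captures a large share of the forgetting a full week would produce, which is the kind of rationale a study protocol might offer for a shortened decay period.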

Takeaways:
  • The learning decay discussion continues to evolve — the FDA has not provided clear-cut answers, and the industry still has considerable leeway in how it incorporates a decay period.
  • Don’t forget the possibility that your product might fall into the hands of an untrained user. The FDA wants to see some untrained participants in your validation study if this situation is possible.
  • To come up with your own best practice, you might start with a literature review of learning decay before designing your study.

Usability Testing of Instructions for Use (IFUs)

There were a number of talks and discussions related to usability testing of instructions for use (IFUs). This is another point where the guidance is open for interpretation.

How do you validate that your IFU is as good as it can be? What is the best method for evaluating it? Should you do a study where people are evaluating the IFU separately from the device, or do you combine testing of the IFU and device? Should you explicitly require people to use the instructions or do you leave it up to them?

I’ve used several variations. In one study, we quizzed people after they read the IFU, asking key questions that demonstrated their comprehension. I’ve also taken a more naturalistic approach: letting study participants decide whether to throw the IFU aside or read the instructions before attempting to use the product.

The best approach to testing really depends on how confident you are in the systems underlying your hardware and software. I’ve seen very successful programs where the instructions, labels, and quick reference cards have been refined to effectively increase the level of task success. On the other hand, I’ve seen teams struggle, trying to use instructions as a Band-Aid for poor product design — where even the best instructions would not fix basic usability problems.

Takeaways:
  • Human factors professionals are using a variety of methods to test their IFUs, and there’s no one-size-fits-all solution. There seems to be a general consensus that some testing focused on the IFU is good.
  • In general, the best practice is to test the IFU explicitly in a separate formative study early on. Use the feedback from this study to modify the IFU prior to subsequent formative studies or the summative study.


International Usability Standards – Updates

Ed Israelski, director of human factors at AbbVie, gave his annual update on standards and on the new version of IEC 62366, the international standard concerning the application of usability engineering to medical devices. The committee has now broken the standard into two parts: Part 1, a normative standard, and Part 2, an informative tutorial. Part 1 was revised and published in February, and Part 2 is undergoing review now.

One notable amendment deals with legacy devices: any device that was on the market before the original IEC 62366:2007 existed and that is now being changed in some way. This amendment (Annex K) describes how to bring these types of products into compliance when you’re changing some part but not all, and how to apply usability engineering. In the standard, such a product is said to have a “user interface of unknown provenance.” Annex K provides the process for dealing with these products, including documenting the intended use, identifying tasks that could lead to harm, reviewing existing post-market data to discover relevant complaints, performing a risk analysis such as a use FMEA, exploring opportunities to control risks, and evaluating residual risks.
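Annex K names a use FMEA as one possible risk analysis. As a rough illustration of what such an analysis might capture (my own hypothetical sketch, not text from the standard), here is how the classic risk priority number (RPN) heuristic could rank use-related tasks; the task names and rating scales are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class UseTask:
    task: str
    severity: int    # 1 (negligible) .. 5 (catastrophic)
    occurrence: int  # 1 (rare) .. 5 (frequent)
    detection: int   # 1 (easily detected) .. 5 (nearly undetectable)

    @property
    def rpn(self) -> int:
        """Classic FMEA risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

# Hypothetical tasks for a legacy infusion device under review.
tasks = [
    UseTask("Set infusion rate", severity=5, occurrence=3, detection=4),
    UseTask("Prime the tubing", severity=4, occurrence=2, detection=2),
    UseTask("Silence an alarm", severity=3, occurrence=4, detection=3),
]

# Rank tasks so risk-control efforts target the highest-priority use errors.
for t in sorted(tasks, key=lambda t: t.rpn, reverse=True):
    print(f"{t.task}: RPN = {t.rpn}")
```

The ranking is only a prioritization aid; Annex K still expects you to explore risk controls and evaluate residual risk for each identified task.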

Takeaways:
  • If your company is making changes to an existing product, review the process associated with such legacy devices in IEC 62366:2015 (Annex K).
  • Check out the new version of IEC 62366 for other changes that may be relevant to your products and processes.

The Good News

Human factors professionals continue to formulate their views about areas that are open to interpretation. The good news is that, as long as you provide a sound rationale for your study design decisions, your chances of success will be high. Keep in mind that the speakers from the FDA continued to emphasize their openness to reviewing study protocols in advance — especially for summative studies.

Learn more about our medical device design expertise.