HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave Bowman: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
– From 2001: A Space Odyssey
High-stakes industries, from nuclear power plants to the space shuttle, use automated systems to guard against human error, as HAL succinctly explains to astronaut Dave Bowman in 2001. But without a human check, an automated system can make a serious mistake. This is as true for a malfunctioning home thermostat as it is for a spaceship gone rogue.
We now see a push toward the inclusion of more automation in other industries, including automotive (think Google’s self-driving car) and medicine. This surge in the implementation of automation in product domains where we previously did not envision it is due to three main factors:
- We can build automated systems to be more reliable and less expensive than before. Technology is advancing – allowing us to automate things we could not practically automate in the past.
- We want to do it. People want technology that relieves them from some of life’s more mundane tasks such as perceiving, thinking, calculating, deciding, implementing, and monitoring.
- We trust automated systems to work. As a culture, we are starting to believe that an automated system may be more reliable than a manual system.
As automated systems become more mainstream, adopters face the same basic tension as HAL and Dave. This is especially true of the medical industry with its understandable emphasis on safety and effectiveness.
I first encountered it when I did some work related to the Juvenile Diabetes Research Foundation’s (JDRF) Artificial Pancreas Project. The artificial pancreas would marry an insulin pump with a continuous glucose monitor and help people with diabetes manage their blood glucose levels more effectively. But when I mentioned the idea to a close friend, who is both a physician and a Type 1 diabetic, he snickered. “That’ll never happen. Too risky,” he said. I politely disagreed. Other industries have successfully walked this tightrope, and medicine can, too.
Machine Deduction Befriends Human Intuition
Last spring I organized a panel of human factors experts from the healthcare, automotive, aviation, and nuclear power industries. The goal was to discuss the potential and challenges of automated systems in healthcare. These systems use feedback from sensors to guide the actions of a machine or device, without human intervention. The artificial pancreas would ideally be such a system, using continuous feedback from the glucose monitor and its record of a person’s daily habits to guide the timing and dosage of insulin.
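At its core, such a closed-loop system is a feedback controller: sense, compute, act, repeat. The toy sketch below illustrates only the shape of that idea, assuming a simple proportional rule. The target, gain, and function name are all invented for illustration and bear no relation to any clinically validated dosing algorithm.

```python
# Illustrative sketch only: a toy proportional controller in the
# spirit of a closed-loop insulin system. All names, numbers, and the
# dosing rule here are hypothetical, not clinical guidance.

TARGET_MG_DL = 110   # hypothetical target blood glucose (mg/dL)
GAIN = 0.02          # hypothetical insulin units per mg/dL above target

def suggest_correction(glucose_mg_dl: float) -> float:
    """Return a suggested correction dose (units) for the current reading."""
    error = glucose_mg_dl - TARGET_MG_DL
    # Only dose when glucose is above target; never suggest a negative dose.
    return max(0.0, GAIN * error)

print(suggest_correction(160))  # a reading above target yields a small dose
print(suggest_correction(100))  # a reading below target yields no dose
```

A real system would layer safety limits, trend prediction, and human confirmation on top of any such rule, which is exactly the teamwork the panelists describe below.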
Despite our diverse experiences, all the panelists agreed that to be successful and safe, automated systems need to work as a team with a human partner. There need to be regular touch points for the human and machine to check each other’s decisions. And the systems need to behave intelligently and predictably enough to foster a basic level of trust between the human and the automation.
First, the human and machine need to share the same goal.
In the case of a nuclear power plant, this goal might be to generate a certain number of megawatts in a specific time period while operating at safe temperatures. For the artificial pancreas and the person with diabetes, the goal should be keeping blood glucose within an ideal range. For a plane on autopilot, the goal is keeping the aircraft on its planned course and correcting deviations when they arise. While a shared goal may seem obvious for a nuclear power plant, an artificial pancreas, or an airplane, we shouldn’t take it for granted.
A recent article in Popular Science examines the question of whether your self-driving car should kill you if it would save the lives of two people in another vehicle. Do two lives count more than one? Or should a vehicle preserve the lives of its occupants no matter the cost to other people? Questions like these are worth pondering.
The human and the machine need to work as a team.
During the panel discussion, Steve Harris of Rational Healthcare reminded us that machines are very good at certain types of logic — they can be ‘aware of’ and take into account many more variables than the average human — but they’re terrible at induction. Kathryn Rieger, a consultant who has extensive experience in work related to the artificial pancreas, made the point that humans are able to be predictive.
The artificial pancreas may know that the human typically relaxes on the couch and eats a snack at 3 pm, and needs extra insulin then. But the human knows she is beginning triathlon training and will be going for a run in a couple of minutes. Not only will she not need extra insulin at 3 pm — she won’t need any insulin at all for a couple of hours. There needs to be a way for the human to inform the machine of events it would otherwise be unaware of, and for the machine to react appropriately.
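One way to picture that hand-off: the human reports an upcoming event, and the machine consults that report before acting on its learned schedule. Everything in this sketch (the class name, the event type, the two-hour suppression window) is a hypothetical illustration, not a description of any real device.

```python
# Hedged sketch: a human-supplied event overriding a learned dosing
# schedule. Names and the suppression window are invented for
# illustration only.

from datetime import datetime, timedelta

class DosingPlanner:
    def __init__(self):
        self.suppress_until = None  # no scheduled insulin before this time

    def report_exercise(self, start: datetime, hours: float = 2.0):
        """Human tells the machine: exercise starting now, hold insulin."""
        self.suppress_until = start + timedelta(hours=hours)

    def should_dose(self, now: datetime) -> bool:
        """Machine checks the human's input before acting on its schedule."""
        return self.suppress_until is None or now >= self.suppress_until

planner = DosingPlanner()
run_start = datetime(2024, 5, 1, 15, 0)  # the 3 pm run
planner.report_exercise(run_start)

print(planner.should_dose(datetime(2024, 5, 1, 15, 30)))  # mid-run: hold
print(planner.should_dose(datetime(2024, 5, 1, 17, 30)))  # window over: resume
```

The point is the interface, not the logic: the machine's default plan yields to explicit human knowledge it could not have sensed on its own.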
When we think about automation in the nuclear power industry, we think about systems that minimize operator error. Bruce Hallbert (Director of Nuclear Energy Enabling Technologies at the Idaho National Laboratory) reminded the panelists and the audience that some jobs are simply not suited to human operators. He pointed to operation and maintenance tasks inside a nuclear reactor that cannot be performed safely by a human. In such conditions, automated systems are not simply nice to have; they are essential.
Touch points are also hugely important.
Steve Harris also pointed out that one of the most common things heard on in-flight recordings is one pilot saying to the other, “What’s it doin’ now?” The implication is that at least one of the pilots doesn’t have a clue what the autopilot is doing or why. That’s scary. An artificial pancreas could avoid that by having regular check-ins with the human throughout the day, perhaps at breakfast, lunch, and dinner times. The pancreas could show its predicted insulin schedule and dosage, and the human could let it know about unusual events that will change the calculation: an ice cream social that afternoon, or a party that night, events that will change the insulin needed to stabilize blood sugar.
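A touch point like that might follow a propose-and-confirm pattern: the machine presents its plan, and any amendment the human supplies takes precedence. The meal slots and doses below are placeholders invented for this sketch.

```python
# Sketch of a propose-and-confirm touch point: the machine shows its
# predicted plan, the human amends it, and human input wins wherever
# it is provided. Doses are placeholder values, not clinical guidance.

def checkin(predicted_plan: dict, human_amendments: dict) -> dict:
    """Merge the machine's predictions with the human's corrections.

    A human-supplied value replaces the prediction for that slot;
    otherwise the machine's prediction stands.
    """
    return {**predicted_plan, **human_amendments}

plan = {"breakfast": 4.0, "lunch": 5.0, "dinner": 6.0}
# The human knows about an ice cream social this afternoon.
amended = checkin(plan, {"lunch": 7.5})
print(amended)  # {'breakfast': 4.0, 'lunch': 7.5, 'dinner': 6.0}
```

Crucially, the merge leaves the machine's original prediction intact, so the system can still log where and why the human overrode it, which is itself a useful trust-building record.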
Trust (But Verify)
Human partners need to understand how to operate and override their autonomous system. A pilot who engages an autopilot system still needs to know how to fly; a person with diabetes needs to know what his glucose level should be and how much insulin will get him there.
Eric Bergman (Director of Human Factors Engineering at Fresenius Medical Care North America) reinforced the notion that we don’t want to oversell the autonomy of an ‘automatic system,’ because complete system autonomy isn’t what we want. He pointed to the overreliance on automation (the aircraft autopilot system) in some recent aircraft accidents.
Panelist Andy Gellatly (General Motors Technical Fellow in Human-Machine Interaction and Human Factors) explained that the system and the human need to be collaborators. And that collaboration needs to evolve over time, like a friendship.
People don’t dive into friendships with an unknown person. Rather, they slowly spend time together; discover each other’s capabilities, likes, and dislikes; and gradually trust each other more and more. That’s the way it should be between a human and an automated system.
Whether a plane on autopilot or an artificial pancreas, an automated system needs to check in with its human partner regularly and (re)align its strategies to get to the shared goal, whether that’s slightly higher-than-normal blood glucose levels in anticipation of a long afternoon run or a safe airport landing.