Who Knows Best? Designing User Control in the Age of AI

Performing a blood test is a job for a professional technician — that’s why we send samples to the lab and wait patiently for a reply.

But what if we could encode some of that technician’s expertise into a device, so a pediatric nurse could get results in minutes, not days? The savings in time and money would be dramatic, and dangerous medical conditions could be spotted earlier, improving outcomes and perhaps saving lives.

Is this kind of automation worth it, though? In the case of FINDER (pictured below), a device Bresslergroup recently developed with Baebies, the answer was a resounding yes. Potential benefits were enormous, and the problem was neatly constrained.

But FINDER is very different from what you’d find in a medical lab. Instead of being drawn into pipettes, the blood sample goes into a cartridge; and instead of dials, switches, and keypads, a touchscreen walks you through the process. It’s a prime example of how smarter machines can assume some human agency — and we’re better off for it.

But how much agency should a machine be allowed? It’s a hot question now, as Artificial Intelligence slips from science fiction into reality, and self-driving cars begin to traverse our roads. For both medical devices and cars, the potential to save lives is very real. But the consequences of things going wrong could be catastrophic.

Solving the Agency Problem for Your Product

At its core, the agency problem is a question of who we trust more, a person or an algorithm. In some cases the answer is cut and dried: self-driving cars have shown themselves to be far safer than the average non-professional driver. In others it’s less clear — would you be comfortable letting a chatbot diagnose you with cancer?

For designers working with “smart” products, this is an unexplored landscape. Automated decision-making has never been so powerful, and consumers never so accepting of it. Fortunately, it’s a landscape that responds well to the modern tools of the design trade: user observation, journey mapping, prototyping, and iteration.

Over the course of several years, Bresslergroup has managed to boil down the question of agency to a few key considerations. Here are the four most crucial:

1. How Expert Is the User?

The self-driving car is the holy grail of automation right now, largely because of the safety benefit: almost 40,000 Americans die per year in (mostly preventable) car crashes. While we’ve learned through years of practice to handle common driving challenges, we’re not trained for tricky situations, and emotion can persuade us to speed, merge without looking, roll through stop signs, etc. We’re mostly inexpert drivers — a good argument for turning certain driving tasks over to a more predictable onboard computer.

A medical lab technician, on the other hand, has years of training and experience, and a clear set of best practices. She’s working with a complex suite of measurement tools, which she learned to use and interpret under close supervision. She’s also probably got numerous colleagues to consult for feedback should things get borderline. A pediatric nurse, while expert in many areas, lacks this specific training.

It makes sense, then, for both Tesla and Baebies to automate certain mundane but highly specific tasks. Both commuters and nurses can benefit from having automated skills made available to them, even if it means letting an algorithm call some of the shots. The advantage in safety and effectiveness is worth the small loss of agency.

This is why many current medical devices come with different control settings depending on who’s using them. A take-home respirator or infusion pump is useful because it frees the patient from being tied to a medical environment. And while it’s important for a trained technician to be able to adjust settings and check fine-grained recorded data, these capabilities are potentially dangerous in the hands of the patient. Dialing in exactly how much control each type of user gets, and how to handle the handoff between them, is of crucial importance.
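To make that concrete, here’s a minimal sketch (in Python, with hypothetical names, not FINDER’s or any real device’s software) of how a device might gate its controls by user role:

```python
from enum import Enum

class Role(Enum):
    PATIENT = 1      # take-home use: day-to-day operation only
    TECHNICIAN = 2   # trained user: full configuration and data access

# Hypothetical capability map: which controls each role may touch.
CAPABILITIES = {
    Role.PATIENT: {"start", "stop", "view_status"},
    Role.TECHNICIAN: {"start", "stop", "view_status",
                      "adjust_settings", "export_raw_data"},
}

def perform(role: Role, action: str) -> None:
    """Run an action only if the current role is allowed to."""
    if action not in CAPABILITIES[role]:
        raise PermissionError(f"{role.name} may not perform '{action}'")
    print(f"{role.name}: {action} OK")

perform(Role.TECHNICIAN, "adjust_settings")   # allowed
try:
    perform(Role.PATIENT, "adjust_settings")  # blocked for take-home use
except PermissionError as err:
    print(err)
```

The code is trivial; the design work is deciding what belongs in each set, and what the device says and does when it refuses.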

2. How Specific Is the Intended Outcome?

When you use TurboTax, Intuit’s powerful tax-preparation software, you turn a lot of detailed math and form-filling over to the computer, because you and it agree on the goal: a legal tax return with the maximum possible refund. The software requests numbers and facts, and you supply them. It’s a tremendous surrender of agency compared to sitting down and grinding through the forms yourself, yet millions of Americans happily do it every year.

But say we applied that approach to day-to-day banking. If Intuit released a package that requested full access to your bank account, so it could move funds around between checking, savings, and investments, and enforce budgets for all your spending, would you use it? Chances are low, I’m guessing, and not because of any concern that the software is incapable. The issue is that there’s a wide array of possible goals, from saving for college to starting a business, or going temporarily into debt to help with someone’s medical bills. The right balance of goals is subjective and changeable, leaving most of us uncomfortable with turning decision-making over to another person, much less an algorithm.

In fact, subjectivity is one of the hardest variables to solve for in smart devices. The Nest smart thermostat works to reduce your utility bills — which are easy to quantify — but it has to stay mindful of your “comfort,” and what exactly is that? The subjectivity of comfort is why Nest can’t just be a plug-and-play device. It has to be trained, given feedback, and sometimes taken over completely by a frustrated (perhaps shivering) owner. Again, deciding when and how this shift takes place is one of the key design challenges of the automated era.
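As a rough illustration of the feedback idea (a toy model, not Nest’s actual algorithm), a thermostat could nudge its learned setpoint toward whatever the user keeps overriding it to:

```python
class LearningThermostat:
    """Toy model: learns a comfort setpoint from manual overrides."""

    def __init__(self, setpoint: float = 20.0, learn_rate: float = 0.3):
        self.setpoint = setpoint
        self.learn_rate = learn_rate

    def user_override(self, desired: float) -> None:
        # Each override is feedback: move the learned setpoint
        # part of the way toward what the user actually wanted.
        self.setpoint += self.learn_rate * (desired - self.setpoint)

t = LearningThermostat()
for _ in range(5):
    t.user_override(22.5)      # a shivering owner keeps turning it up
print(round(t.setpoint, 1))    # converges toward 22.5 (about 22.1 here)
```

Every hard design question hides in the one tunable number: learn too fast and the device lurches after every whim; learn too slowly and the owner gives up and takes over for good.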

3. How Much Input Can the Device Gather?

Self-driving cars don’t look at the tail lights in front of them, because they know precisely how fast every other car is moving, every millisecond of the journey. With a suite of cameras, motion sensors, GPS locators, and hyper-detailed environment maps, your car is actually hundreds of times more aware of its surroundings than any driver, with just two eyes, two ears, and a sense of touch, could ever be.

This makes information-dense contexts ideal for automation. When our inability to process multiple input streams impedes safety or effectiveness, letting technology make some of the decisions makes a lot of sense. So in addition to intended outcome and user expertise, it’s also worth evaluating the number of variables being thrown around. This is one reason why automated search and recommendation algorithms (Google, Pandora, Amazon, Netflix, etc.) were among the first digital helpers to be embraced on a broad scale: there’s nothing like a computer for sorting through way too much information.
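A toy example of why machines win in these contexts (a sketch, not any vehicle’s real perception stack): fusing several noisy sensor estimates into one number is a line of arithmetic for a computer, repeated every few milliseconds across dozens of tracked objects, and hopeless for a human:

```python
# Hypothetical per-sensor speed estimates (m/s) for the car ahead,
# each paired with a confidence weight; stand-ins for radar,
# camera, and lidar readings.
readings = [
    ("radar", 27.2, 0.9),
    ("camera", 26.5, 0.6),
    ("lidar", 27.0, 0.8),
]

# Confidence-weighted average of all input streams.
fused = (sum(value * weight for _, value, weight in readings)
         / sum(weight for _, _, weight in readings))
print(f"fused speed estimate: {fused:.2f} m/s")
```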

4. What Are the Potential Ethical Concerns?

Unlike information sorting, the ethics of algorithmic decision-making is something we’re just digging into. There’ve already been a handful of crashes involving self-driving cars, and while the fault almost always rests with another (human) driver, and the overall safety record of autonomous cars is sterling, these incidents rattle us a lot more than those involving humans alone. This may not be logical, from a broader safety perspective. But it’s an inescapable fact of human nature that we’re far more frightened of the unfamiliar than of the often deadly behavior we grew up around.

For designers, this means treating any automation or robot-assistance with extreme care if it has the potential to harm. “We’re still safer in the long run” is little comfort when things go wrong — most of us still need a human to hold accountable. The da Vinci robotic surgery system, for example, has sidestepped the problem entirely by making it clear that a trained surgeon is always in control. In the end, most of us are far more comfortable with smart tools than smart replacements.

Know Who’s in Charge

At the lower-stakes end of the spectrum, Bresslergroup designed an automated coffee brewing system for a startup called Bruvelo a few years back. It worked beautifully — with one small hitch. The system could grind the right volume of beans, heat the water to the right temperature, and steep for the right amount of time, but it couldn’t empty the old grounds or grab a coffee cup from the cupboard. While testing with an early interactive prototype, we realized that users forgot to take care of these steps themselves, even when prompted at the beginning of the cycle, because the touchscreen interface created such a complete sense of automation. Once we’re in a state of passive interaction, it’s tempting to relinquish all control.

In the case of Bruvelo, we discovered the problem through testing and observation, and solved it through iterative prototyping — two of the oldest tricks in the designer’s book. The solution turned out to be one of timing and clarity. By shifting the notification to the exact moment it needed to be addressed, and making it more prominent (“There’s no cup in place!”), we were able to create a transition of agency, putting the user in charge for a moment, then asking them for permission to shift control back to the device.
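In interface terms, the fix was a blocking handoff: the machine pauses, hands control to the user, and proceeds only on explicit confirmation. Here’s a minimal sketch of that pattern (hypothetical names, not Bruvelo’s actual firmware):

```python
def brew_cycle(cup_in_place: bool) -> None:
    steps = ["grind beans", "heat water", "steep"]

    # Agency handoff: stop at the exact moment the user is needed,
    # rather than prompting once at the start of the cycle.
    while not cup_in_place:
        print("There's no cup in place!")   # prominent, blocking prompt
        answer = input("Place a cup, then press Enter (q to cancel): ")
        if answer.strip().lower() == "q":
            print("Brew cancelled.")
            return
        cup_in_place = True                 # user confirms; control returns

    for step in steps:                      # automation resumes
        print(f"auto: {step}")
    print("Brew complete.")

brew_cycle(cup_in_place=False)
```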

In self-driving cars, too, the greatest danger comes from situations of shared control. One of the few prominent crashes experienced by a Tesla in Autopilot mode last year happened when a driver failed to realize the car was no longer in control — a situation not that different from what we saw with Bruvelo. The lesson here is clear: automation is mostly an all-or-nothing proposition. While it’s possible to automate some activities and leave others to humans, most of us have trouble grasping the idea of partial control.

This makes automation, more than anything, an interface design problem. As we move into an increasingly automated, AI-directed, and potentially far safer and more efficient world, we have to take care with the transitions. Letting users know exactly who is in control at any given moment, and signaling those transfers of agency with unmistakable clarity, is going to become the defining safety challenge of the next decade. For those of us in the UI design business, it’s not a new challenge, but the stakes have never been higher.