Now that we’re used to devices with screen technology and user interfaces designed for touch, it’s easy to understand Steve Jobs’s famous pronouncement: “If you see a stylus, they blew it.”
Capacitive touchscreens are our new normal. But before touchscreens became popular, we interacted with products primarily using physical buttons, switches, knobs — and, yes, styli.
The widespread adoption of new technologies causes major shifts across industries as well as in our collective behavior. The ubiquity of touchscreens with multi-touch functionality expanded consumers’ understanding of user interfaces beyond websites and software to include devices and appliances. This has dovetailed with the advent of the Internet of Things and a desire for more devices to be connected and smart. Consumer expectations, user interfaces, and interaction design continue to evolve alongside leaps in technology.
Most people think of a smartphone when they think of a connected device, but capacitive touch is just one of many interactive technologies. Are there other turning points as significant as the touchscreen around the corner, ones that will shift consumer behavior and expectations and expand notions of interaction design? We’ve identified technologies that will cause shifts both big and small in the next few years and looked at how they are changing the practice of interaction design. First, the technologies:
Flexible Displays and Curved Screens
The world is no longer flat: Flexible displays and curved screens have gone from concept to reality in the last few years as companies such as LG, Samsung, Nokia, and Apple have introduced or filed patents for mobile phones and TVs featuring this technology. This wave has engendered a new type of user interface, called organic user interface (OUI), for physical interactions with non-flat displays.
Products and concepts with flexible screens and curved displays showcase the potential for new modes of interaction and new ways to exhibit information. In 2011 the Nokia Kinetic Device proposed flexing the screen itself as a new form of interactivity — use both hands to twist the device to scroll through a photo album, and bow it to zoom in or out of a photo.
The MorePhone, a smartphone prototype with a flexible E Ink display introduced in 2013, presented shape-shifting as a form of interaction. The screen’s corners curl up to notify the user of a new email or text. Curved displays wrap around devices’ edges, expanding the viewable area. Samsung’s Galaxy Note Edge uses this extra real estate along one side to display information ticker-style. None of these is about to sweep into everyday use, but they are early steps toward determining practical applications for technologies on the brink.
A related area of innovation is information-display technology that occupies formerly unused surfaces, such as Samsung’s Star Display. In the context of a refrigerator door, the display shows interior compartments’ temperatures via embedded LED lights shining through pinholes to create a subtle readout that resembles a projection. This display shows off the appliance’s modular control capability, which allows for temperature adjustment per section, including the option of flipping one section back and forth from freezer to refrigerator temps. Samsung employs this display technology very effectively, in a way that both increases the usability of a complex product and enhances the branding of its Chef Collection product line.
It’s possible that other predictive and intuitive modes of interactivity will take over before screen technology has a chance to advance further. The objective in interaction design is always the simplest interaction, the one requiring the least work from the user. Cutting out the screen can improve interaction: the less input required from the user, the smoother the experience.
Screens become redundant when you can activate a device by talking to it or waving at it, or when it’s equipped with sensors that adjust to your preferences over time. Speech recognition smart enough to engage in conversation, camera-based gesture recognition, and haptic technology that lets devices communicate via vibration patterns (an “alphabet of haptics,” as Wired magazine dubbed it) are all becoming more sophisticated and widespread.
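The “alphabet of haptics” idea can be sketched in a few lines: distinct events map to distinct vibration patterns the user learns to recognize. This is a minimal illustration, not any vendor’s actual API; the event names and timings are hypothetical.

```python
# Hypothetical sketch: encoding notification types as distinct vibration
# patterns, in the spirit of an "alphabet of haptics". Numbers are durations
# in milliseconds; odd positions are pulses, even positions are pauses.
HAPTIC_ALPHABET = {
    "new_message": [100, 50, 100],          # short, pause, short
    "calendar_alert": [300],                # one long pulse
    "low_battery": [100, 50, 100, 50, 100], # three short pulses
}

def pattern_for(event: str) -> list:
    """Return the vibration pattern for an event, or a default single pulse."""
    return HAPTIC_ALPHABET.get(event, [200])
```

A real device would hand these timing lists to its vibration motor driver; the point of the sketch is only that a small, consistent vocabulary of patterns can carry meaning without a screen.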
What This Means for IxD
A shifting technological landscape changes the way interaction designers approach product development. New tools and new contexts lead to new considerations. The goal is to incorporate new technology because it enhances usability and functionality; in other words, never do it solely for novelty’s sake. Here are some of the core interaction design issues that continue to evolve alongside technological evolution:
What’s the Input Versus the Output?
Interaction with a device can be broken down into two categories: input and output. “Input” describes how we tell the device what we want from it, or feed it the information it needs. “Output” is how the device tells us what it needs or gives us information. Choosing modes of input and output and determining the best pairing is a decision process we go through with every product.
The list of possible inputs and outputs is growing, and each presents an interesting way of interacting with the product. How is a product best “fed”: via gesture or voice control, or via a touch-button display? How should the user be notified: via an auditory cue, a light, or a vibration? Each element has its own associated cost and range of complexity.
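The pairing decision above can be framed as weighing candidate input/output combinations against their interaction costs. The sketch below is purely illustrative: the modality names and cost scores are assumptions, standing in for whatever criteria (effort, hardware cost, error rate) a real project would use.

```python
# Hypothetical sketch: scoring input/output pairings by a rough
# "interaction cost". All modalities and numbers are illustrative.
INPUT_COST = {"voice": 2, "gesture": 3, "touch_display": 1}
OUTPUT_COST = {"auditory_cue": 1, "screen": 2, "vibration": 1}

def cheapest_pairing():
    """Return the (input, output) pair with the lowest combined cost."""
    return min(
        ((i, o) for i in INPUT_COST for o in OUTPUT_COST),
        key=lambda pair: INPUT_COST[pair[0]] + OUTPUT_COST[pair[1]],
    )
```

In practice the scores would differ per product and context; the value of making them explicit is that it forces the trade-offs between modalities out into the open.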
Does It Need To Be Connected?
This has become a standard question for nearly every product we develop. Being connected makes sense for certain devices, such as thermostats and security systems, since temperature and alarms are easy to automate. But there are other product categories where smart capabilities might present more work for the user.
The more connected devices people own, the lengthier their list of technological tasks. This can quickly become overwhelming, turning tasks into chores. How many apps, devices, or features have you abandoned because you didn’t want to bother with the set-up, or because the workflow for inputting information was too laborious? A recent study conducted by researchers at the University of Pennsylvania found that fifty percent of participants had stopped using their wearable health trackers, most within six months of purchasing them. “Gadget fatigue,” the burden of the many steps required to operate the device, was a major reason.
How Much Functionality Will Be Automated?
As more products become connected, there is opportunity to design automated systems that allow products to function independently. Think of Google’s self-driving car. Some smart devices are also learning devices; Nest’s learning thermostat is the most well-known example. It gets to know users’ temperature preferences over time and adjusts itself accordingly. As people become more overloaded with products, automated and learning devices will become more attractive options.
Since devices like these are automatically receiving inputs that are often better and more accurate than the ones they might get from a human, they require less interaction. Better, more consistent input results in better performance, and reduces the burden on the user. These devices are “grown up,” and as such require less supervision. But the low-touch nature of these products comes with its own set of interaction design considerations.
For example, the more infrequent the use, the more intuitive the user experience needs to be. Appliances used every day have a learning curve at the start, but once a user is accustomed to the interface, further actions can become almost automatic. Learning devices that require interactions a week or a month apart can’t rely on recall. They need to be especially simple and intuitive to use.
Screen or No Screen?
When approaching the development of a product we ask ourselves, what’s the screen for? Is it a necessary element for interacting with the product? When designing an Internet of Things device, there is a choice of controlling it via embedded display or mobile app. As more devices become connected, it might make less sense for each device to have a screen and more sense for your mobile device to control everything from one screen.
There are other times when, depending on the interaction, it’s handy to have a screen on the product itself. At home, where people don’t carry their mobile devices from room to room, having screens in the context of use can make more sense. When we designed Bruvelo, an automated coffee maker, we knew that the typical user stumbling out of bed every morning would be happier hitting a “Make coffee” button on a touchscreen embedded in Bruvelo’s base than having to track down his phone, unlock it, and launch an app to get his caffeine fix. Frequency comes into play here, too. If an action in the context of home is performed only once every week or month, why not skip the screen and control it via mobile app?
Making the Complex Simple
The most successful products are those that work in the simplest, most intuitive way, not ones bloated with features and options. As capabilities and technologies multiply and devices become more connected and complex, the onus remains on designers to keep interactions simple. The more things change, the more that one goal remains the same.