The first IxDA London (@IxDALondon) event of the year focused on the user experience of in-car interactions. It offered a mix of research findings and concept designs that served as a good introduction to the topic and sparked some interesting discussions among the attendees.
Driver distraction and multitasking
The first talk was by Duncan Brumby, Senior Lecturer at UCL, who focused on the issue of in-car systems being a potential source of distractions for drivers.
Distractions are a matter of priorities: a driver has to allocate their cognitive resources to the tasks at hand
According to Duncan, distractions are a matter of priorities: a driver has to allocate their cognitive resources to the tasks at hand. Driving is an ongoing, attention-demanding task that requires constant coordination, but new tasks appear along the way, competing for those resources. Even though we are able to dual-task, efficiency falls sharply when we try to divide these cognitive resources.
Sight has been the primary focus of distraction research; Duncan showed a (quite funny) early experiment in which a mechanical contraption obstructed the driver's view every few seconds while they were actually driving on a public highway. During those moments, the driver had to steer using the mental image of the road held in memory, instead of being able to check the actual situation with their eyes. Most of these experiments are now conducted on driving simulators, where drivers can be subjected to different conditions without endangering them (or other road users).
There is a need to design for fragmented bursts of interaction
The key result of these studies is the need to design for fragmented bursts of interaction. The NHTSA guidelines state a maximum permitted total glance time of 15 seconds per task, with no glance longer than 2 seconds.
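As a rough illustration, those glance limits can be expressed as a simple check. The sketch below is hypothetical (the function and its inputs are invented for illustration; only the 2-second and 15-second thresholds come from the figures quoted above):

```python
# Hypothetical sketch: checking one task's off-road glance log against
# the limits quoted above (no single glance over 2 s, no more than
# 15 s of total glance time per task).
MAX_SINGLE_GLANCE_S = 2.0
MAX_TOTAL_GLANCE_S = 15.0

def task_is_acceptable(glance_durations_s):
    """glance_durations_s: duration in seconds of each off-road glance."""
    return (all(g <= MAX_SINGLE_GLANCE_S for g in glance_durations_s)
            and sum(glance_durations_s) <= MAX_TOTAL_GLANCE_S)

print(task_is_acceptable([1.5, 1.8, 1.2]))  # short bursts within both limits
print(task_is_acceptable([1.5, 2.6]))       # one glance exceeds the 2 s limit
print(task_is_acceptable([1.9] * 9))        # 17.1 s total exceeds the 15 s budget
```

The point of the two-part rule is exactly the "fragmented bursts" idea: a task can be long overall, but it must be divisible into short glances.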
Multimodal (usually voice-controlled) interfaces are sometimes seen as a potential solution, as they allow the eyes to stay on the road, but they face another problem: speed. Audio interfaces are very slow to use, with voice-to-text tasks taking longer than typing directly, for example.
Besides, audio interfaces free the eyes but not the mind. The cognitive load is still there, and it can be higher than that of talking with a passenger (who knows when to shut up based on the context).
This talk finished with the popular topic of autonomous cars. Although we hope they will liberate us from the mundane task of driving, the current situation presents a different challenge: the hand-over of control from the car to the user.
Even if the car can function autonomously most of the time (and particularly because of that), the situations where the software cannot make a good decision are especially dangerous and require help from a potentially distracted user. Until this issue is solved, riding in an autonomous car will be a stressful experience (as shown in a multitude of online videos).
Are we there yet?
Next on stage were Harsha Vardhan and Tim Smith (@mypoorbrain), who are part of ustwo's core automotive team and have worked for clients such as Ford and Toyota. Coming from a digital product studio, they are outsiders to the car industry, where they work with "splinter cells" from OEMs and Tier 1 car producers.
A brief look at the exterior and interior design of an expensive car like a Ferrari reveals that a lot of thought, experience, effort and money has been invested in creating a cohesive design.
But when we get to the onboard system, the consistency breaks down. There's a basic, unbranded interface, pretty similar to the one you can find in a much cheaper car… because it is, in fact, the one in cheaper cars. Car producers pay little attention to these parts, which are foreign to their traditional experience, and most of these systems come straight from OEMs with just some cosmetic customisation.
Ustwo's experience with more design-conscious OEMs allowed them to research and experiment with in-car systems, and that gave birth to a book, "Are we there yet? Thoughts on in-car HMI".
In their book Tim and Harsha describe a series of principles, namely:
– Limitations of technology
– Cognitive models
– Contextual empathy
– Insight not raw data
– Adaptable & accessible interfaces
– Considered design
This book attracted the attention of the industry and granted them contact with the automotive teams at Google and Apple, among others.
They set themselves a challenge: show their learnings and principles in a meaningful way, without delving into futuristic concepts removed from reality. For this purpose, they focused on the humble instrument cluster, which had not seen any tangible redesign in decades, even after the introduction of LCD panels.
Taking inspiration from the Citroën CX and its retro-futuristic design, they followed an "adaptive hierarchy" guided by a "right info at the right time" principle. The cluster shows the most important information for the context: the maximum range when the car is stopped, then the speed once it is moving. Likewise, the fuel indicator gets more real estate (and an alarming orange tint) when fuel runs low.
Other principles put into practice include "insight not raw data" (the car suggests an ideal speed based on road conditions and shows the distance to the destination on the fuel/range indicator, hinting at a possible refuel) and "macro/micro dwell", where interface elements become subtler and smaller as the user gains experience with the vehicle.
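A minimal way to picture the "adaptive hierarchy" is as a priority rule over the vehicle's state. The sketch below is speculative (the fuel threshold, field names and readout strings are invented for illustration, not taken from ustwo's actual design):

```python
# Speculative sketch of "the right info at the right time":
# choose the cluster's primary readout from the vehicle state.
# The 10% fuel threshold and the readout strings are invented.
def primary_readout(speed_kmh: float, fuel_fraction: float, range_km: int) -> str:
    if fuel_fraction < 0.10:
        # Low fuel: the fuel/range indicator takes over the cluster.
        return f"LOW FUEL: range {range_km} km"
    if speed_kmh == 0:
        # Stopped: maximum range is the most useful information.
        return f"Range {range_km} km"
    # Moving: speed takes priority.
    return f"{speed_kmh:.0f} km/h"

print(primary_readout(0, 0.8, 420))    # stopped
print(primary_readout(90, 0.8, 420))   # moving
print(primary_readout(90, 0.05, 30))   # running low on fuel
```

The interesting design decision is that the hierarchy is a function of context rather than a fixed layout: the same screen real estate is reassigned as conditions change.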
A lot of research and deliberation also went into the typography (ultimately Roboto), based on intrinsic and extrinsic factors, including experiments with font sizes that grow or shrink with the user's distance from the screen (which could get very annoying in practice: imagine leaning closer to the screen for a clearer view, only for the information to be displayed in a smaller font as a result).
Cognitive psychology was also applied to the design. Inspired by the work of Wolfgang Köhler on the Bouba/Kiki effect, they used curvy icons for information and more angular ones for warnings.
The result was a generic, unbranded archetype tested on simulators, later adapted to the branding of well-known cars like the Ferrari California and the Mercedes S-Class Saloon, although they felt proudest of the adaptation for the Citroën Cactus, sporting a unified design that considered the interior, exterior and interface of the car as a whole. The code and design assets for this interface have been shared and can be downloaded from a Git repository.
The combination of these two speakers was excellent at conveying a holistic view of the user experience of in-car interactions. And though this area has traditionally been neglected, the field is now ripe for intrepid designers, as a future of self-driving cars appears closer than ever.
If you have any thoughts on this article or would like to express your views on the subject, please get in touch at firstname.lastname@example.org and make sure you say hi at the next IxDA!