1. Project overview
What does it mean to trust an autonomous car? How can the interface of the car be designed in order to build trust?
- Nine weeks
- HCD course project
- Collaboration with Huawei
- Markus Bergland - UX research/design
- Sara Johansson - UX research/design
- Christoffer Mattson - UX design/research, video
- UX research
- User testing
- UX design
The project resulted in a concept for an autonomous car interface utilizing anthropomorphism. The final prototype can be seen in the video below.
Trust toward machines is an increasingly relevant issue in today's society, and the automotive industry is no exception. As autonomous cars become more prevalent, ensuring that passengers trust them becomes an ever more important topic of inquiry. This project set out to define the issues faced by designers of autonomous car interfaces and, where possible, to propose methods for overcoming them. The project was carried out in collaboration with Huawei, as part of a course on human-centered design at Chalmers University of Technology.
3. Design Process
The project followed a human-centered design approach throughout. It began with a literature study on the various aspects of trust, followed by ideation of concepts. Once an idea was chosen, an iterative process of prototyping and user testing followed. A video of the final concept can be seen above.
3.1. Literature Study
The project started off with a literature study on the subject of trust. The findings indicated three main considerations:
- A) Utilize anthropomorphism
- Anthropomorphism is the act of making something nonhuman seem human. Something as simple as giving a car a name, voice, and gender increases the passengers' trust in it.
- B) Present information on why rather than how
- When an autonomous car performs an action, it is more important that the interface shows why it is acting than exactly what action it is taking. Presenting only "how" information leads to dangerous driving performance from the humans in the car, while presenting both "why" and "how" information bothers drivers by creating a greater cognitive load.
- C) Provide adequate feedback
- Two different kinds of feedback should be given: action feedback (immediate feedback indicating whether an action was successful), and learning feedback (more detailed, leading to slower but more persistent learning). Vocal feedback is safer and more understandable than just a display and an acoustic beep.
Finally, two scenarios were identified that were considered to require high levels of trust in the autonomous car. Firstly, when the car is required to perform sudden evasive maneuvers in order to prevent an accident; secondly, when the car is driving in tight spaces such as in a middle lane or exiting a parking garage.
3.2. Prototyping & User Testing
The prototyping phase consisted of two design iterations, with user testing after the first iteration to evaluate it and provide data for the second. Unfortunately, due to time constraints, the final version was not evaluated.
3.2.1. First Iteration
Armed with knowledge from the literature study, an initial concept was developed (shown in figure 1). Three main features were implemented to address the aspects mentioned above: an abstract representation of the car's artificial intelligence, a heads-up display (HUD), and a screen located at the center of the dashboard.
The HUD was intended to display the planned route and indicate other cars, but was relatively quickly deemed infeasible and abandoned. The AI is represented by a round blue light which blinks in rhythm with the voice when the car is "speaking" and slowly pulsates otherwise, intended to evoke anthropomorphism by communicating that the car is a living, breathing entity. The center display's main feature is a set of indicators which appear when there are objects nearby, illustrating these objects' positions relative to the car. For further customization, the passengers can select profile pictures to represent individuals in the car and their seating arrangements (note the man and frog icons in the image).
3.2.2. User Testing
User testing consisted of placing participants in front of a display showing the interface over three different driving scenarios, created from dash cam footage of a car driving in Gothenburg (a frame from the displayed video is shown in figure 2). The participants were seated in alignment with the driver's seat to increase immersion and encouraged to think aloud during the test.
In total, five tests were performed, with participants aged 23 to 34. Three participants had a driver's license. The tests were performed in five stages, alternating calm sections with critical scenarios. After each stage, participants rated on a five-point Likert scale how much they felt they could trust the autonomous driver (dubbed Lisa), how relaxed they felt, and whether they would prefer to take over control from the autonomous driver. After completing all stages, participants were asked questions about their overall feeling of trust in Lisa, how safe they felt, whether Lisa knew how to drive, and how aware Lisa was of potential dangers in traffic.
The overall results were positive, with participants feeling trust and relaxation without wanting to take control. A noticeable trend was that participants felt less inclined to take control as the test progressed, suggesting an increase in trust over time. After all driving scenarios, only one participant did not trust Lisa. However, with regard to feeling safe, only two agreed, while two others disagreed and one was neutral. All participants considered Lisa a competent driver aware of potential dangers. There was some confusion regarding the blue light, with participants being unsure what it was supposed to represent. Furthermore, they wanted information on the distance to other objects, not simply their position relative to the car. Some also requested information on the car's destination and planned route.
3.2.3. Second iteration
Responding to data from the user test, a second prototype was designed. The center display was updated to show surrounding objects from a top-down view - meaning that speed and distance can easily be determined - as well as to include features similar to a GPS navigation device, such as the planned route. The pulse frequency of the blue light was adjusted to more closely mimic human breathing. The final design can be seen in figure 3, as well as in the concept video above.
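As a rough illustration of the breathing-like pulse, the light's brightness can be modeled as a slow sinusoid at a resting breathing rate. This is a hypothetical sketch, not the prototype's actual implementation; the rate of 14 breaths per minute and the `pulse_brightness` function are assumptions for illustration only.

```python
import math

# Assumed resting breathing rate; the prototype's actual value is not documented here.
BREATHS_PER_MINUTE = 14

def pulse_brightness(t_seconds: float) -> float:
    """Brightness of the blue AI light in [0, 1] at time t, pulsing like breathing."""
    freq_hz = BREATHS_PER_MINUTE / 60.0       # cycles per second (~0.23 Hz)
    phase = 2 * math.pi * freq_hz * t_seconds
    return 0.5 * (1.0 + math.sin(phase))      # map sine from [-1, 1] to [0, 1]
```

One full pulse then takes 60 / 14 ≈ 4.3 seconds, a rhythm slow enough to read as calm breathing rather than an alert blink.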
The project was a huge learning experience in many regards. Getting immersed in human-centered design and continuously applying its principles throughout the project really showcased the strength of the design philosophy. User testing was immensely fun and valuable, and conducting the tests firsthand was a great lesson for the future. It was also interesting to work with a real corporate stakeholder - based in another country! - gaining their perspective and learning about their goals and objectives. Finally, it was incredibly interesting to work with the subject of trust and how to design for it, something that will likely only become more important in the future.