By Dave Block

Mike Nees, associate professor of psychology, researches human factors and engineering psychology. He talks about research related to advanced driver assistance systems that he conducted with student assistants during his yearlong sabbatical.

What were you trying to achieve with this research project?

Advanced driver assistance systems like adaptive cruise control and lane-keeping assistance are part of steady efforts to automate driving. These systems involve complex arrays of sensors and computers that have safety benefits, but they also are limited in some circumstances (bad weather, poorly marked or curving roads, etc.). When people interact with these technologies, they build mental models—psychological descriptions, beliefs, expectations, and predictions about how the technology works. There is a real concern that drivers may have inaccurate mental models of driver assistance systems because they receive little instruction on how the systems work and tend to learn to use these systems by trial and error. This is especially problematic when people do not understand the circumstances in which these systems are limited because the driver may become overly reliant on the technology or use it in ways that compromise safety. This project sought to learn more about drivers’ mental models of advanced driver assistance systems.

What did the research involve?

For this project, we used semi-structured interviews. Drivers who use advanced driver assistance systems were recruited on campus. The research team included Nithya Sharma ’20, who worked on this project for Advanced Research course credit, and Karli Herwig ’20, who was a research volunteer. First, we studied owner’s manuals, videos, and all manner of information related to advanced driver assistance systems. Then we came up with a set of interview questions. The overarching research question was: How do drivers think these technologies work? Finally, Nithya and Karli conducted in-person interviews in the lab, and we coded the results into several categories of beliefs. We were hoping to identify potential weak points or inaccuracies in mental models—especially ones that might result in unsafe behaviors.

What were the results?

We saw a lot of variability in mental models of vehicle automation. Most drivers have a general impression that the systems use sensors and computers, but there is a big range of beliefs about how the sensors work. And some beliefs are inaccurate. One driver who insisted their car has no camera actually owned a car with eight cameras. Drivers tended to make inferences about the system’s functionality based on feedback from the interface (e.g., display icons, lights, sounds) and also from lack of feedback in certain driving scenarios. This suggests that engineers and designers need to be really careful when they design the interface because drivers may use even incidental information to inform their beliefs about how the system works.

Perhaps most concerning was our observation that some inaccurate beliefs could lead to unsafe driving behaviors. A few people indicated that they believed the systems were effective in scenarios in which the systems are less capable. For example, the sensor technology in some systems limits their functionality on curvy roads. Yet we heard from participants who believed that the systems work even better on curvy roads. They reasoned that since curvy roads are more demanding for drivers, the system must have been designed to perform even better there. These are the types of beliefs that are important to understand because they may lead to unsafe behaviors. Most participants also tended to believe the systems are highly reliable, which again raises concerns that drivers may be unaware of the limitations of these systems.

What’s the relevance of this in your field?

In my field, there is a consensus that mental models are important for understanding how people interact with automation in vehicles. But mental models are difficult to measure, and to date, most researchers have used quiz-like measures. These approaches assume that the researcher already knows the range of beliefs that drivers possess. The strength of our approach is that interviews allow participants to express their beliefs in their own words. Participants expressed some beliefs that hadn’t occurred to us before we began the research.

How does this research impact your teaching?

The topic and methods are most relevant to my PSYC 226: Human Factors and Engineering Psychology course. In that course, we talk about user-centered design—designing all parts of technology with a focus on what people actually think and do (as opposed to what the designer/engineer believed that people ought to think and do). Much of my research tends to be quantitative, so taking on a qualitative research project also has given me some new perspectives that will influence how I teach PSYC 203: Design and Analysis.

Where does the research go from here?

The results of this research are scheduled to be presented (virtually) at the Human Factors and Ergonomics Society Annual Meeting in October. In-person research was interrupted by the pandemic, so we can’t do interviews for the time being. I hope to continue the research online in the fall. The goal would be to use the information we’ve gained to develop a more standardized approach to measuring mental models of vehicle automation.
