TY - GEN
T1 - Multimodal Gaze-Based Interaction in Cars
T2 - International Conference on Human-Computer Interaction
AU - Spakov, Oleg
AU - Venesvirta, Hanna
AU - Lylykangas, Jani
AU - Farooq, Ahmed
AU - Raisamo, Roope
AU - Surakka, Veikko
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - We studied two interaction techniques for performing secondary tasks in a driving simulator environment, with a focus on driving safety. In both techniques, the participants (N = 20) used gaze pointing to select virtual task buttons. Toggling the controls was achieved either by mid-air gestures with haptic feedback or by physical buttons located on the steering wheel. To evaluate each technique, we compared several measures, such as mean task times, pedestrian detections, lane deviations, and task complexity ratings. The results showed that both techniques allowed operation without severely compromising driving safety. However, interaction using gestures was rated as more complex, caused more fatigue and frustration, and led to pedestrians being noticed with longer delays than when using physical buttons. The results suggest that gaze pointing accuracy was not always sufficient, and that mid-air gestures require more robust algorithms before they can offer functionality comparable to interaction with physical buttons.
AB - We studied two interaction techniques for performing secondary tasks in a driving simulator environment, with a focus on driving safety. In both techniques, the participants (N = 20) used gaze pointing to select virtual task buttons. Toggling the controls was achieved either by mid-air gestures with haptic feedback or by physical buttons located on the steering wheel. To evaluate each technique, we compared several measures, such as mean task times, pedestrian detections, lane deviations, and task complexity ratings. The results showed that both techniques allowed operation without severely compromising driving safety. However, interaction using gestures was rated as more complex, caused more fatigue and frustration, and led to pedestrians being noticed with longer delays than when using physical buttons. The results suggest that gaze pointing accuracy was not always sufficient, and that mid-air gestures require more robust algorithms before they can offer functionality comparable to interaction with physical buttons.
KW - in-vehicle interaction
KW - mid-air gestures
KW - multimodal gaze-based interaction
KW - ultrasonic haptic feedback
U2 - 10.1007/978-3-031-35702-2_24
DO - 10.1007/978-3-031-35702-2_24
M3 - Conference contribution
AN - SCOPUS:85169474084
SN - 9783031357015
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 333
EP - 352
BT - Design, User Experience, and Usability
A2 - Marcus, Aaron
A2 - Rosenzweig, Elizabeth
A2 - Soares, Marcelo M.
PB - Springer
Y2 - 23 July 2023 through 28 July 2023
ER -