Presentation Schedule



Effects of Auditory Cue Strategies in AR-HUD and HUD Navigation Warning Systems on Driving Performance (103256)

Session Information:

Tuesday, 24 March 2026 16:00
Session: Poster Session 3
Room: Orion Hall (5F)
Presentation Type: Poster Presentation

All presentation times are UTC + 9 (Asia/Tokyo)

Background and Motivation:
With the advancement of in-vehicle display technologies, Head-Up Displays (HUD) and Augmented Reality Head-Up Displays (AR-HUD) have become key interfaces in driver-assistance systems. These displays project navigation and warning information directly into the driver's field of view, reducing gaze shifts and cognitive load. This study therefore compares the effects of AR-HUD and HUD, combined with different auditory warning conditions, on driving performance.
Method:
Twenty participants drove randomized sudden-event scenarios in a STISIM Model 300 driving simulator equipped with a VOLVO 340DL vehicle body. A 2 (Display: HUD, AR-HUD) × 3 (Auditory Cue: none, action-first, reason-first) within-subject repeated-measures design was employed. Dependent variables included reaction time, minimum time-to-collision (TTC), collision rate, driving-control measures (lane deviation and steering-angle variation), and subjective ratings (DALI and event-cue preference).
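
Minimum TTC here refers to the smallest time-to-collision reached during each sudden-event window. The abstract does not detail how it was extracted, but a minimal sketch of the usual computation from a simulator time-series log, using hypothetical column names gap_m (distance to the hazard in metres) and closing_speed_mps (positive while the gap is shrinking), is:

import pandas as pd

def minimum_ttc(log: pd.DataFrame) -> float:
    # TTC is defined only while the vehicle is closing on the hazard.
    closing = log[log["closing_speed_mps"] > 0]
    ttc = closing["gap_m"] / closing["closing_speed_mps"]
    return float(ttc.min()) if not ttc.empty else float("inf")

# Made-up samples (not study data): the gap shrinks from 30 m while closing at up to 10 m/s.
log = pd.DataFrame({
    "gap_m": [30.0, 20.0, 12.0, 8.0],
    "closing_speed_mps": [10.0, 10.0, 8.0, 4.0],
})
print(minimum_ttc(log))  # 1.5 s, reached at the 12 m / 8 m/s sample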
Results:
A significant interaction was found between display type and auditory cue for reaction time (p = .024), steering-angle variation (p < .001), and collision rate (p < .001). The AR-HUD with the reason-first cue showed the best performance, whereas the HUD with no auditory cue performed the worst. AR-HUD also outperformed HUD in lane deviation (p < .001) and TTC (p = .008). Subjective evaluations indicated that AR-HUD was rated significantly higher in event-cue preference (p < .001).

Conclusion:
The findings confirm that integrating AR-HUD with auditory cues enhances driver responsiveness and safety. In particular, the reason-first auditory warning enabled drivers to recognize hazards faster and maintain steadier control, highlighting multimodal interface integration as a key direction for future intelligent vehicle human factors design.
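
The interaction and main effects reported in the Results are the kind of output a two-way repeated-measures ANOVA produces for a 2 × 3 within-subject design. The abstract does not name the analysis software; a minimal sketch in Python with statsmodels, assuming a long-format table with hypothetical columns participant, display, cue, and rt, is:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per participant x display x cue cell (e.g. mean reaction time per cell).
# The file name and column names are illustrative, not taken from the study.
data = pd.read_csv("reaction_times.csv")

anova = AnovaRM(
    data,
    depvar="rt",            # dependent variable, e.g. reaction time in seconds
    subject="participant",  # within-subject identifier
    within=["display", "cue"],  # Display (HUD, AR-HUD) x Cue (none, action-first, reason-first)
).fit()

print(anova.anova_table)    # F and p values, including the display x cue interaction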

Authors:
Wei-Lun Huang, National Yunlin University of Science and Technology, Taiwan
Hung-Lin Fu, National Yunlin University of Science and Technology, Taiwan
Yung-Ching Liu, National Yunlin University of Science and Technology, Taiwan


About the Presenter(s)
My name is Wei-Lun Huang, and I am currently pursuing a master's degree at National Yunlin University of Science and Technology.




