Auditory and audiovisual time-to-collision judgments for electric and conventional vehicles
Abstract:
To avoid collisions, pedestrians intending to cross a street need to judge the time-to-collision (TTC) of an approaching vehicle. We investigated how the loudness and the engine type (electric vs. conventional) of a vehicle influence TTC estimation. We developed an audiovisual virtual-reality setup, which will also be used and extended in our project within the AUDICTIVE program. The system simulated an urban street with a car approaching at constant velocity. Using acoustic recordings of real cars as source signals, we generated the dynamic spatial sound field of the approaching car with the acoustic virtual-reality software TASCAR and presented it via higher-order Ambisonics. The conventional and electric vehicles were loudness-matched, and their sound levels were varied by 10 dB. In the auditory-only condition, the cars were not visible, and lower loudness resulted in considerably longer estimated TTCs. Importantly, loudness also had a significant effect on the estimated TTCs in the audiovisual condition, in which the cars were additionally presented visually on a virtual-reality headset. Thus, pedestrians use auditory information when estimating TTC, even when full visual information is available. At equal loudness, the TTC judgments for electric and conventional vehicles were virtually identical.
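For context, a minimal sketch of the two quantities the abstract builds on: the ground-truth TTC of a constant-velocity approach, and the linear amplitude factor corresponding to the 10 dB level variation. The function and variable names, and the example distance and speed, are illustrative and not taken from the study.

```python
def time_to_collision(distance_m: float, speed_mps: float) -> float:
    """Ground-truth TTC in seconds for a vehicle approaching at constant velocity."""
    return distance_m / speed_mps

def level_gain(delta_db: float) -> float:
    """Linear amplitude factor corresponding to a level change of delta_db decibels."""
    return 10.0 ** (delta_db / 20.0)

# A car 50 m away at 50 km/h (~13.9 m/s) collides in roughly 3.6 s.
print(time_to_collision(50.0, 13.9))
# Attenuating the source signal by 10 dB scales its amplitude by ~0.316.
print(level_gain(-10.0))
```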