Modelling of speech recognition and benefit from hearing devices in realistic auditory scenes
Abstract:
Hearing aid research increasingly focuses on evaluating the benefit of hearing aid fittings in acoustically complex auditory scenes. To assist hearing aid fitting for the individual patient, models can be used to predict the speech recognition benefit of different hearing aid fittings. In this study, speech recognition was measured with the Oldenburg Sentence Test in virtual realistic scenes and in anechoic conditions with 13 hearing-impaired listeners and two hearing aid fittings, trueLOUDNESS and NAL-NL2. The anechoic test conditions used a speech-shaped stationary masker and a fluctuating masker (ICRA5-250) at 45 dB SPL and 70 dB SPL. The realistic scenes were created with the Toolbox for Acoustic Scene Creation and Rendering (TASCAR) and included a low-level "nature" scene and a high-level "cafeteria" environment at levels corresponding to the anechoic conditions. Two speech recognition models were used to predict and compare the performance with the two hearing aid fittings: the framework for auditory discrimination experiments (FADE) and the binaural speech intelligibility model (BSIM). Individual benefits were predicted well for the low-level conditions. For the high-level conditions, only the group medians were modelled with sufficient accuracy, but not the individual performance of hearing-impaired listeners.