On Least-squares-based Auditory Attention Decoding with Individual Neural Latency Compensation
Abstract:
Auditory attention decoding is a technique to distinguish between attended and unattended speakers in a cocktail-party scenario based on signals recorded from the brain. It has been demonstrated that a simple but effective decoder can be trained with least-squares (LS) regression using recorded electroencephalogram (EEG) signals and the corresponding envelopes of the speech stimuli (O'Sullivan et al., Cereb. Cortex 25, 2015). The decoder is typically implemented as a filter that estimates the speech envelopes by convolving the decoder weights with the EEG signals; the convolution length defines the filter order of this system. A crucial parameter for accurate decoding is the latency between stimulus onset and neural response. In this work, we investigate the performance of LS-based decoders with respect to subject-specific tuning of the latency and the convolution length. We train and evaluate several LS-based decoders on a publicly available dataset (Das et al., 2020, http://doi.org/10.5281/zenodo.3997352), using stimuli with a duration of 30 seconds.
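To make the described pipeline concrete, the following is a minimal sketch of an LS-based backward decoder of the kind referenced above. All data here are synthetic, and the sampling rate, channel count, latency, and filter order are illustrative assumptions, not values from the abstract or the dataset: the EEG is stacked at post-stimulus lags (offset by the latency, spanning the convolution length), decoder weights are fitted by least squares, and the decision compares the correlation of the reconstructed envelope with each speaker's envelope.

```python
# Sketch of an LS-based backward decoder for auditory attention decoding.
# Synthetic data; fs, n_channels, latency, and filter_order are assumptions.
import numpy as np

rng = np.random.default_rng(0)

fs = 64                        # assumed sampling rate (Hz) after downsampling
n_samples = 30 * fs            # one 30-second trial, as in the abstract
n_channels = 64                # assumed EEG montage size
latency = int(0.1 * fs)        # stimulus-to-response latency (samples), tunable
filter_order = int(0.25 * fs)  # convolution length / filter order (samples), tunable

# Synthetic attended envelope, plus EEG that weakly encodes it at the latency.
env_att = np.abs(rng.standard_normal(n_samples))
eeg = rng.standard_normal((n_samples, n_channels))
eeg[latency:, 0] += 0.5 * env_att[:-latency]

def lagged_features(eeg, latency, order):
    """Stack EEG channels at post-stimulus lags latency .. latency+order-1,
    so row t holds the EEG samples used to reconstruct the envelope at t."""
    n, c = eeg.shape
    X = np.zeros((n, c * order))
    for k in range(order):
        lag = latency + k
        X[:n - lag, k * c:(k + 1) * c] = eeg[lag:]
    return X

X = lagged_features(eeg, latency, filter_order)
w, *_ = np.linalg.lstsq(X, env_att, rcond=None)  # LS decoder weights
env_hat = X @ w                                  # reconstructed envelope

# Decision rule: attend to the speaker whose envelope correlates more
# strongly with the reconstruction.
env_unatt = np.abs(rng.standard_normal(n_samples))  # competing speaker
r_att = np.corrcoef(env_hat, env_att)[0, 1]
r_unatt = np.corrcoef(env_hat, env_unatt)[0, 1]
print(r_att > r_unatt)
```

In this toy setup, the subject-specific tuning studied in the abstract corresponds to sweeping `latency` and `filter_order` per subject and selecting the values that maximize decoding accuracy on held-out trials.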