
Article

Visualising and Explaining Deep Learning Models for Speech Quality Prediction

Day / Time: 17.08.2021, 05:40-06:00
Room: Schubert 3
Type: Regular talk
Article ID:
Abstract: Estimating the quality of transmitted speech is known to be a non-trivial task. Traditionally, test participants are asked to rate the quality of speech samples; nowadays, automated methods are available as well. These methods can be divided into 1) intrusive models, which use both the original and the degraded signal, and 2) non-intrusive models, which require only the degraded signal. Recently, non-intrusive models based on neural networks have been shown to outperform signal-processing-based models. However, the advantages of deep-learning-based models come at the cost of being more challenging to interpret. To gain more insight into such prediction models, in this paper the non-intrusive speech quality prediction model NISQA is examined. NISQA is composed of a convolutional neural network (CNN) and a recurrent neural network (RNN). The task of the CNN is to compute relevant features for speech quality prediction at the frame level, while the RNN models time dependencies between the individual speech frames. To understand the automatically learned features of the CNN, different explanation algorithms are applied. In this way, several interpretable features were identified, such as sensitivity to noise or to strong interruptions. On the other hand, it was found that multiple features carry redundant information.
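To make the CNN-plus-RNN pipeline described above concrete, the following is a minimal PyTorch sketch of that pattern: a CNN that produces one feature vector per spectrogram frame, followed by an LSTM over the frame sequence and a scalar quality output. All layer sizes, the class name FramewiseCnnRnn, and the mel-spectrogram input shape are illustrative assumptions and do not reproduce the actual NISQA implementation.

```python
# Minimal sketch of the CNN + RNN pattern described in the abstract.
# Layer sizes, names, and input shapes are illustrative assumptions,
# not the actual NISQA architecture.
import torch
import torch.nn as nn

class FramewiseCnnRnn(nn.Module):
    def __init__(self, n_mels: int = 48, cnn_out: int = 64, rnn_hidden: int = 128):
        super().__init__()
        # CNN: computes a feature vector for each spectrogram frame
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),            # pool over frequency only, keep time
            nn.Conv2d(16, cnn_out, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)), # collapse frequency axis -> (B, C, 1, T)
        )
        # RNN: models time dependencies between the per-frame features
        self.rnn = nn.LSTM(cnn_out, rnn_hidden, batch_first=True)
        self.head = nn.Linear(rnn_hidden, 1)  # scalar quality (MOS) estimate

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, n_frames)
        feats = self.cnn(spec).squeeze(2).transpose(1, 2)  # (batch, T, cnn_out)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])  # quality estimate from the last time step

model = FramewiseCnnRnn()
mos = model(torch.randn(2, 1, 48, 200))  # two utterances, 200 frames each
print(mos.shape)  # torch.Size([2, 1])
```

Pooling only over the frequency axis keeps the time resolution intact, so the RNN still receives one feature vector per frame, mirroring the frame-level division of labour between CNN and RNN described in the abstract.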
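One generic way to probe what an individual learned CNN feature responds to is gradient saliency: the magnitude of the gradient of a single per-frame feature with respect to the input spectrogram highlights the time-frequency regions that feature is sensitive to. The abstract does not name the explanation algorithms actually used, so the sketch below is only an illustrative stand-in; it reuses the hypothetical FramewiseCnnRnn model from the previous example.

```python
import torch

# Reuses the illustrative FramewiseCnnRnn class from the previous sketch.
def saliency_for_feature(model, spec, frame: int, feature: int):
    """Gradient of one per-frame CNN feature w.r.t. the input spectrogram."""
    spec = spec.clone().requires_grad_(True)
    feats = model.cnn(spec).squeeze(2).transpose(1, 2)  # (batch, frames, features)
    feats[0, frame, feature].backward()                 # scalar -> gradients
    return spec.grad[0, 0].abs()                        # (n_mels, n_frames) map

model = FramewiseCnnRnn()
sal = saliency_for_feature(model, torch.randn(1, 1, 48, 200), frame=100, feature=7)
print(sal.shape)  # torch.Size([48, 200]); large values mark sensitive regions
```

Comparing such maps across features would also make redundancy visible: features whose saliency maps are nearly identical carry overlapping information, matching the observation reported in the abstract.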