
Effect of Simple Visual Inputs on Syllable Parsing

Day / Time: 17.08.2021, 12:00-12:40
Room: Schubert 6
Download: 631.pdf
Type: Poster
Article ID:
Information: The posters can be viewed from Monday morning until Wednesday afternoon in the Mall, or here as a PDF within the respective poster contribution. The poster forum for these posters takes place on Tuesday from 16:00 to 16:40 in the room indicated here. For further discussion, please arrange to meet the respective author at the poster or use the chat function in the virtual poster exhibition room, which is available until approximately 18:30 on Tuesday.
Abstract: Visual signals, such as those arising from a talker's face, can aid speech comprehension. The neural mechanisms behind audiovisual integration remain, however, poorly understood. To probe the mechanisms involved, here we utilize a computational model of a cortical microcircuit for speech processing. The model generates oscillations in the theta frequency range through the coupling of an excitatory and an inhibitory neural population. The theta rhythm becomes entrained to the onsets of syllables in the presence of a speech input, thus enabling the deduction of syllable onsets from the network activity. We add visual stimuli to this model and investigate their effects on parsing scores. Specifically, the different visual input currents are related to the rate of syllables as well as to the mouth-opening area of the speakers. We find that adding visual currents to the excitatory neuronal population influences speech comprehension, either boosting it or impeding it, depending on the audiovisual time delay and on whether the currents occur in an excitatory or inhibitory manner. In contrast, adding visual input currents to the inhibitory population does not affect speech comprehension. Our results therefore suggest neural mechanisms for audiovisual integration and make predictions that can be experimentally tested.
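The abstract does not give the model equations, but the described mechanism (a theta-band oscillation arising from coupled excitatory and inhibitory populations, with an extra "visual" current injected into one population) can be sketched with a generic Wilson-Cowan-style rate model. Everything below is an illustrative assumption: the parameter values, the sigmoidal rate function, and the names `i_ext_e`/`i_ext_i` are not taken from the poster.

```python
import numpy as np

def sigmoid(x):
    """Generic sigmoidal firing-rate function (assumed, not the authors' choice)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ei_circuit(t_max=2.0, dt=1e-4, i_ext_e=None, i_ext_i=None):
    """Euler-integrate a coupled excitatory (E) / inhibitory (I) rate pair.

    i_ext_e, i_ext_i: optional arrays of external currents to each
    population, standing in for the speech-driven and visual inputs
    described in the abstract. All weights and time constants are
    placeholder values; the actual model would be tuned so that the
    limit cycle falls in the theta band (~4-8 Hz).
    """
    n = int(t_max / dt)
    i_ext_e = np.zeros(n) if i_ext_e is None else i_ext_e
    i_ext_i = np.zeros(n) if i_ext_i is None else i_ext_i

    tau_e, tau_i = 0.010, 0.020          # membrane time constants (s), assumed
    w_ee, w_ei = 16.0, 12.0              # E<-E and E<-I coupling, assumed
    w_ie, w_ii = 15.0, 3.0               # I<-E and I<-I coupling, assumed
    theta_e, theta_i = 4.0, 3.7          # activation thresholds, assumed

    e = np.zeros(n)
    i = np.zeros(n)
    for k in range(n - 1):
        de = (-e[k] + sigmoid(w_ee * e[k] - w_ei * i[k] - theta_e + i_ext_e[k])) / tau_e
        di = (-i[k] + sigmoid(w_ie * e[k] - w_ii * i[k] - theta_i + i_ext_i[k])) / tau_i
        e[k + 1] = e[k] + dt * de
        i[k + 1] = i[k] + dt * di
    return e, i

# A "visual" current can then be injected into the E population with a
# chosen audiovisual delay, e.g. a pulse train at an assumed syllable rate:
dt, t_max = 1e-4, 2.0
t = np.arange(int(t_max / dt)) * dt
visual_current = 2.0 * (np.sin(2 * np.pi * 5.0 * t) > 0.95)  # ~5 Hz pulses, assumed
e_rate, i_rate = simulate_ei_circuit(t_max, dt, i_ext_e=visual_current)
```

In the study, the analogous manipulation is comparing parsing scores with the visual current added to the E population versus the I population, and sweeping the audiovisual time delay; this sketch only shows where such a current would enter the dynamics.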