Benedikt Grothe and his research group study the neuronal processing mechanisms that enable the mammalian auditory system to localize sounds in space. In their latest study, the researchers take a closer look at the impact of context on sound localization and demonstrate that the human hearing system is capable of dynamically adjusting its response when stimuli are presented in sequences. The results call into question the conventional view that the system primarily serves to localize sound sources with very high precision, based on physical differences in how the same sound is perceived by the two ears. “Our study will lead to a paradigm change in the understanding of spatial hearing,” Grothe states. The findings appear in the online journal Scientific Reports.
We perceive sounds because the pressure changes they set up in the inner ear are transduced into electrical signals by sensory nerve cells; these signals are then relayed through several way stations and processed before reaching the auditory cortex. According to the generally accepted model, the processing system localizes sound sources in space by measuring the difference between the arrival times of the sound at the ipsilateral ear (the one closer to the source) and the contralateral ear. The mammalian auditory system can detect timing differences on the order of a few microseconds. This feat is made possible, in part, by a novel mechanism that involves precisely timed inhibition of neuronal firing during intermediate stages of processing, as a recent paper published in Nature Communications by a team led by Grothe and Michael Pecka has shown. However, the studies on which the timing-difference model is based used isolated single tones. When the stimulus consists of two sounds in succession, one observes ‘curious adaptation processes’, says Grothe. “In essence, we find it more difficult to accurately localize the second sound.”
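To give a rough sense of the scale involved (this illustration is not part of the study), the interaural time difference expected for a distant source can be approximated with the classic Woodworth spherical-head formula; the head radius and speed of sound used below are typical textbook values, not figures from the paper.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (ITD) for a distant source,
    using the Woodworth spherical-head formula. Parameter values are
    typical assumptions for illustration, not taken from the study."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees to one side yields an ITD of roughly 0.65 ms, while
# shifting a frontal source by a single degree changes the ITD by only
# about 9 microseconds -- the scale the auditory system has to resolve.
print(f"ITD at 90 deg: {itd_seconds(90) * 1e6:.0f} microseconds")
print(f"ITD change for 1 deg near midline: {itd_seconds(1) * 1e6:.1f} microseconds")
```

In other words, resolving a difference of a few microseconds corresponds to telling apart source directions that differ by only about one degree.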
In collaboration with Christian Leibold, Professor of Computational Neuroscience at LMU's Biocenter, Grothe and his colleagues have now worked out why this is so. Leibold has developed a theoretical model that predicts the neuronal processes triggered by a sequence of sounds. The model suggests that, when confronted with a series of tones, the system sacrifices precision in absolute localization in order to better assess the relative distances between successive sounds. “Our perception of auditory space is inherently dynamic. The processing system forgoes absolute localization accuracy in favor of sharpening the relative spatial resolution. This enhances our ability to distinguish a tone that emanates from a point further to the left from one that is closer to us,” Grothe explains. This is what enables us to orient ourselves in situations in which several sound sources compete for our attention – in an open-plan office or at a party, for instance.
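The article does not describe Leibold's model itself. Purely to illustrate the general logic of the trade-off, one can sketch a toy opponent-channel coder in which a preceding sound shifts the operating point of the azimuth-to-signal mapping; the function names, tuning width, size of the adaptive shift, and decoding rule below are all assumptions made for this sketch, not the authors' model.

```python
import math

# Toy opponent-channel sketch of azimuth coding. Adaptation is modelled as a
# shift of the operating point toward a preceding sound, re-centring the steep
# part of the response curve on it. All parameters are illustrative assumptions.

WIDTH = 40.0  # degrees; sets how broadly the difference signal saturates

def neural_signal(azimuth_deg, operating_point_deg=0.0):
    """Left-minus-right population difference signal for a source at azimuth_deg."""
    return math.tanh((azimuth_deg - operating_point_deg) / WIDTH)

def decode_azimuth(signal):
    """Read out azimuth assuming the unadapted mapping, as a listener with
    no knowledge of the adaptive shift would."""
    return WIDTH * math.atanh(signal)

adapter = 30.0                 # first sound, 30 degrees to the right
shift = 0.7 * adapter          # assumed adaptive re-centring toward the adapter
probe_a, probe_b = 28.0, 32.0  # two later sounds, 4 degrees apart

for label, op in (("fresh", 0.0), ("adapted", shift)):
    sig_a, sig_b = neural_signal(probe_a, op), neural_signal(probe_b, op)
    print(f"{label:8s} decoded positions: "
          f"{decode_azimuth(sig_a):5.1f}, {decode_azimuth(sig_b):5.1f} deg | "
          f"signal difference between probes: {sig_b - sig_a:.3f}")
```

In this toy example the decoded absolute positions of the two later probes are both biased by roughly 20 degrees, yet the neural signal separating them grows by about 60 percent, so with the same neural noise they become easier to tell apart. That is the trade-off the quote describes: absolute accuracy is given up in exchange for sharper relative resolution.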
The new study confirms the predictions derived from Leibold’s model. Experimental subjects were presented with tones over headphones. “They were able to locate the sources of single sounds with astonishing precision, but if the initial sound was followed by a second, they made localization errors of up to 40 degrees,” Grothe says. However, the ability to gauge the relative distance between the two sound sources was significantly enhanced. The LMU researchers therefore assume that evolution has enhanced the ability of the mammalian auditory system to estimate the relative distances between different sound sources rather than the absolute position of each individual source. Hence the auditory processing system has evolved to serve as a dynamic aural ‘rangefinder’. (Scientific Reports 2018)
For more information on LMU research in neurobiology, see:
Where microseconds matter
Where did that noise come from?
New model for the origin of grid cells