Echoes can make speech harder to understand, and removing echoes from an audio recording is a notoriously difficult engineering problem. The human brain, however, appears to solve it by separating the sound into direct speech and its echo, according to a study published February 15 in the open-access journal PLOS Biology by Jiaxin Gao of Zhejiang University, China, and colleagues.
Audio signals in online meetings, and in auditoriums that are not properly designed, often contain an echo lagging the original speech by 100 milliseconds or more. Such echoes heavily distort speech, interfering with the slowly varying sound features that matter most for comprehension, yet people still reliably understand echoic speech. To investigate how the brain manages this, the authors used magnetoencephalography (MEG) to record neural activity while human participants listened to a story with and without an echo. They compared the neural signals to two computational models: one simulating the brain adapting to the echo, and the other simulating the brain separating the echo from the original speech.
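The scale of that distortion is easy to see in a toy simulation. The sketch below is not the authors' code; every parameter (the 16 kHz sample rate, the 4 Hz syllable-rate modulation, the 0.8 echo attenuation) is an illustrative assumption. It adds a 100 ms delayed, attenuated copy to a modulated noise carrier, then compares the slow amplitude envelopes of the direct and echoic signals:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Illustrative sketch, not the study's model: show how a 100 ms echo
# distorts the slow amplitude envelope of a speech-like signal.
fs = 16_000                       # sample rate in Hz (assumed)
rng = np.random.default_rng(0)

# Stand-in "speech": a noise carrier with a slow (~4 Hz) amplitude
# modulation, mimicking the syllable-rate envelope that matters most
# for intelligibility.
t = np.arange(0, 3.0, 1 / fs)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
speech = envelope * rng.standard_normal(t.size)

# Echo: a delayed, attenuated copy of the direct sound.
delay_s, attenuation = 0.1, 0.8   # 100 ms lag, as in the study's scenario
delay_n = int(delay_s * fs)
echoic = speech.copy()
echoic[delay_n:] += attenuation * speech[:-delay_n]

def slow_envelope(x, cutoff=10.0):
    """Low-pass the Hilbert envelope to keep only slow modulations."""
    b, a = butter(4, cutoff / (fs / 2))
    return filtfilt(b, a, np.abs(hilbert(x)))

# The correlation between the two envelopes falls noticeably below 1.0,
# giving a rough sense of how the echo smears the slow modulations.
r = np.corrcoef(slow_envelope(speech), slow_envelope(echoic))[0, 1]
print(f"envelope correlation (direct vs. echoic): {r:.2f}")
```

A listener (or a model) that simply processed the echoic envelope as-is would be working from these smeared features, which is what makes the brain's reliable comprehension of echoic speech notable.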
Participants understood the story with over 95% accuracy, with or without the echo. The researchers observed that cortical activity tracked energy changes related to the direct speech despite strong interference from the echo. Simulating neural adaptation only partially captured the brain response they observed; neural activity was better explained by a model that split the original speech and its echo into separate processing streams. This held even when participants were told to direct their attention toward a silent film and ignore the story, suggesting that top-down attention is not required to mentally separate direct speech from its echo. The researchers suggest that auditory stream segregation may be important both for singling out a specific speaker in a crowded environment and for clearly understanding an individual speaker in a reverberant space.
The authors add, “Echoes strongly distort the sound features of speech and create a challenge for automatic speech recognition. The human brain, however, can segregate speech from its echo and achieve reliable recognition of echoic speech.”