
The brain processes speech and its echo separately


Echoes can make speech harder to understand, and tuning out echoes in an audio recording is a notoriously difficult engineering problem. The human brain, however, appears to solve the problem successfully by separating the sound into direct speech and its echo, according to a study published February 15 in the open-access journal PLOS Biology by Jiaxin Gao from Zhejiang University, China, and colleagues.

Audio signals in online meetings, and in auditoriums that are not properly designed, often carry an echo lagging the original speech by at least 100 milliseconds. These echoes heavily distort speech, interfering with the slowly varying sound features most important for understanding conversations, yet people still reliably understand echoic speech. To better understand how the brain enables this, the authors used magnetoencephalography (MEG) to record neural activity while human participants listened to a story with and without an echo. They compared the neural signals to two computational models: one simulating the brain adapting to the echo, and another simulating the brain separating the echo from the original speech.
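The kind of echo described above can be modeled as a delayed, attenuated copy of the original signal added back onto it. The sketch below is a minimal illustration of that idea (the function name, delay, and attenuation values are illustrative assumptions, not the study's actual stimulus-generation code):

```python
import numpy as np

def add_echo(signal, sr, delay_ms=100.0, attenuation=0.5):
    """Return the signal plus a delayed, attenuated copy of itself
    (a simple one-tap echo, as in a reverberant room or laggy call)."""
    delay_samples = int(sr * delay_ms / 1000.0)
    echoic = np.copy(signal)
    # Add the attenuated original, shifted forward by the delay
    echoic[delay_samples:] += attenuation * signal[:-delay_samples]
    return echoic

# Example: one second of a 440 Hz tone at 16 kHz, echoed at a 100 ms lag
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
echoic = add_echo(tone, sr, delay_ms=100.0, attenuation=0.5)
```

Because the echo overlaps the ongoing speech at this lag, it smears exactly the slow amplitude fluctuations that carry intelligibility, which is what makes removing it algorithmically so hard.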

Participants understood the story with over 95% accuracy, regardless of echo. The researchers observed that cortical activity tracks energy changes related to direct speech, despite the strong interference of the echo. Simulating neural adaptation only partially captured the brain response they observed — neural activity was better explained by a model that split original speech and its echo into separate processing streams. This remained true even when participants were told to direct their attention toward a silent film and ignore the story, suggesting that top-down attention isn’t required to mentally separate direct speech and its echo. The researchers state that auditory stream segregation may be important both for singling out a specific speaker in a crowded environment, and for clearly understanding an individual speaker in a reverberant space.

The authors add, “Echoes strongly distort the sound features of speech and create a challenge for automatic speech recognition. The human brain, however, can segregate speech from its echo and achieve reliable recognition of echoic speech.”


