
News Release 18-028

How does the brain learn to categorize sounds? The same way it does for images

A two-step process may explain how the brain quickly builds on past learning

Functional MRI response from a representative subject during a listening task.


April 18, 2018


Categorization, or the recognition that individual objects share similarities and can be grouped together, is fundamental to how we make sense of the world. Previous research has revealed how the brain categorizes images. Now, researchers funded by the National Science Foundation (NSF) have discovered that the brain categorizes sounds in much the same way.

The results are published today in the journal Neuron.

"Categorization involves applying a single label to a wide variety of sensory inputs," said Max Riesenhuber, professor of neuroscience at Georgetown University and lead co-author of the article. "For example, apples come in many colors, shapes and sizes, yet we label each as an apple. Children do this all the time as they learn language, but we actually know very little about how the brain categorizes the world."

The importance of this work was underlined by Uri Hasson, program director for NSF's Cognitive Neuroscience Program, which supported the work.

"These findings reveal what may not only be a general mechanism about how the brain learns, but also about how learning changes the brain and allows the brain to build on that learning," Hasson said. "The work has potential implications for understanding individual differences in language learning and can provide a foundation for understanding and treating people with learning disorders and other disabilities."

Riesenhuber's group at Georgetown had previously studied how the brain categorizes visual objects and found that at least two distinct regions of the brain were involved. One region, in the visual cortex, encoded images, while a region in the prefrontal cortex signaled their category membership. For their more recent research, Riesenhuber and lead author Xiong Jiang were interested in whether the same processes underlie categorization of auditory cues. They joined forces with co-author Josef Rauschecker, also a professor of neuroscience at Georgetown and an expert on the auditory cortex and neural plasticity.

To find out how the brain categorizes auditory input, the researchers created novel sounds with an acoustic blending tool that morphed two types of monkey calls, producing hundreds of new sounds that differed from the original calls.
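The release does not describe how the blending tool works; one simple way to interpolate between two recordings is a weighted average of their waveforms. The Python sketch below illustrates that idea with placeholder audio and hypothetical names (morph, call_a, call_b); a real morphing tool would more likely interpolate spectral or formant features rather than raw samples.

```python
import numpy as np

def morph(call_a: np.ndarray, call_b: np.ndarray, weight: float) -> np.ndarray:
    """Blend two equal-length waveforms: weight = 0.0 returns call_a,
    weight = 1.0 returns call_b, and intermediate values mix the two."""
    assert call_a.shape == call_b.shape, "waveforms must be the same length"
    return (1.0 - weight) * call_a + weight * call_b

# Placeholder 1-second "calls" at 16 kHz; real inputs would be recordings.
rng = np.random.default_rng(0)
call_a = rng.standard_normal(16_000)
call_b = rng.standard_normal(16_000)

# A continuum of hundreds of blended sounds spanning the two originals.
continuum = [morph(call_a, call_b, w) for w in np.linspace(0.0, 1.0, 200)]
```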

Subjects listened to several hundred calls and categorized them under two arbitrary labels created by the researchers. Before and after this training, the researchers used functional MRI to image subjects' brains while they listened to the sounds without labeling them. The results showed that learning to categorize the sounds had increased the brain's sensitivity to the acoustic features that distinguished one sound from another. This occurred in the lower-level auditory cortex, which is responsible for representing sound but does not appear to give it any meaning or significance.
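One common way to quantify such a change in sensitivity (not necessarily the exact analysis in the paper) is to measure how dissimilar the voxel activity patterns evoked by the two groups of sounds are before versus after training. A minimal sketch, with synthetic data standing in for real auditory-cortex patterns:

```python
import numpy as np

def pattern_dissimilarity(patterns_1: np.ndarray, patterns_2: np.ndarray) -> float:
    """Mean correlation distance (1 - Pearson r) between two sets of
    voxel activity patterns, one pattern per sound."""
    dists = [1.0 - np.corrcoef(p, q)[0, 1] for p in patterns_1 for q in patterns_2]
    return float(np.mean(dists))

rng = np.random.default_rng(1)
n_sounds, n_voxels = 50, 200  # hypothetical sizes

# Synthetic pre-training patterns for sounds on either side of the blend continuum.
pre_a, pre_b = rng.standard_normal((2, n_sounds, n_voxels))

# After training, the two groups of patterns move apart along a shared axis,
# modeling increased sensitivity to the distinguishing acoustic features.
signature = rng.standard_normal(n_voxels)
post_a = pre_a + 0.8 * signature
post_b = pre_b - 0.8 * signature

print("pre-training dissimilarity: ", pattern_dissimilarity(pre_a, pre_b))
print("post-training dissimilarity:", pattern_dissimilarity(post_a, post_b))
```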

The subjects' brains were then scanned while they judged which category the sounds belonged to. These scans showed that neural activity patterns in another brain region, the prefrontal cortex, distinguished between categories and that subjects used that information to make their judgments. The researchers also found that the category selectivity of neural activity patterns in the prefrontal cortex was task-dependent. When subjects were listening to the sounds but not judging which category they belonged to, the neural activity patterns in the prefrontal cortex region did not distinguish one category from another.
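Analyses like this are typically carried out with multivariate pattern decoding: a classifier is trained to predict a sound's category from voxel activity, and above-chance cross-validated accuracy indicates that the region carries category information. The sketch below, assuming scikit-learn and using synthetic data with hypothetical sizes, shows how task-dependent selectivity would appear in such an analysis: decoding succeeds on patterns from the categorization task but stays near chance for passive listening.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 200  # hypothetical experiment size
labels = rng.integers(0, 2, n_trials)  # category A = 0, category B = 1

# Synthetic prefrontal patterns: a category-specific signature is present
# during the categorization task but absent during passive listening.
signature = rng.standard_normal(n_voxels)
task = rng.standard_normal((n_trials, n_voxels)) + np.outer(2 * labels - 1, signature)
passive = rng.standard_normal((n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000)
print("task decoding accuracy:   ", cross_val_score(clf, task, labels, cv=5).mean())
print("passive decoding accuracy:", cross_val_score(clf, passive, labels, cv=5).mean())
```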

The discovery of similar processes for visual and auditory categorization promises important advances in our understanding of learning.

"Knowing how senses learn the world may help us devise workarounds in our very plastic brains," Riesenhuber said. "If a person can't process one sensory modality, say vision, because of blindness, there could be substitution devices that allow visual input to be transformed into sounds." Added Rauschecker, "One disabled sense would be processed by other sensory brain centers."

-NSF-

Media Contacts
Stanley Dambroski, NSF, (703) 292-7728, email: sdambros@nsf.gov

The U.S. National Science Foundation propels the nation forward by advancing fundamental research in all fields of science and engineering. NSF supports research and people by providing facilities, instruments and funding to support their ingenuity and sustain the U.S. as a global leader in research and innovation. With a fiscal year 2023 budget of $9.5 billion, NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and institutions. Each year, NSF receives more than 40,000 competitive proposals and makes about 11,000 new awards. Those awards include support for cooperative research with industry, Arctic and Antarctic research and operations, and U.S. participation in international scientific efforts.
