Biologists Locate Brain’s Processing Point for Acoustic Signals Essential to Human Communication

ScienceDaily (Mar. 8, 2012) — In both animals and humans, vocal signals used for communication contain a wide array of different sounds that are determined by the vibrational frequencies of the vocal cords. For example, the pitch of someone’s voice, and how it changes as they speak, depends on a complex series of varying frequencies. Knowing how the brain sorts out these changing frequencies — patterns called frequency-modulated (FM) sweeps — is believed to be essential to understanding many hearing-related behaviors, such as speech. Now, a pair of biologists at the California Institute of Technology (Caltech) has identified how and where the brain processes this type of sound signal.
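
Purely as an illustration (the code below is not from the study), an FM sweep can be synthesized by letting a tone’s instantaneous frequency rise or fall linearly over time. The short Python sketch below does exactly that; the sample rate, frequency range, and duration are arbitrary choices made for the example.

```python
import numpy as np

def fm_sweep(f_start, f_end, duration=0.5, sample_rate=44100):
    """Synthesize a linear FM sweep whose frequency moves from
    f_start to f_end (in Hz) over `duration` seconds."""
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    # The phase is the integral of the instantaneous frequency
    # f(t) = f_start + (f_end - f_start) * t / duration.
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase)

rising = fm_sweep(500, 2000)   # upward sweep
falling = fm_sweep(2000, 500)  # downward sweep
```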

Their findings are outlined in a paper published in the March 8 issue of the journal Neuron.

Knowing the direction of an FM sweep — whether it is rising or falling — and decoding its meaning is important in every language. The significance of sweep direction is most evident in tone languages such as Mandarin Chinese, in which a rising or dipping frequency within a single syllable can change the meaning of a word.

In their paper, the researchers pinpointed the brain region in rats where the task of sorting FM sweeps begins.

“This type of processing is very important for understanding language and speech in humans,” says Guangying Wu, principal investigator of the study and a Broad Senior Research Fellow in Brain Circuitry at Caltech. “There are some people who have deficits in processing this kind of changing frequency; they experience difficulty in reading and learning language, and in perceiving the emotional states of speakers. Our research might help us understand these types of disorders, and may give some clues for future therapeutic designs or designs for prostheses like hearing implants.”

The researchers — including co-author Richard I. Kuo, a research technician in Wu’s laboratory at the time of the study (now a graduate student at the University of Edinburgh) — found that the processing of FM sweeps begins in the midbrain, an area located below the cerebral cortex near the center of the brain. That location, Wu says, was actually a surprise.

“Some people thought this type of sorting happened in a different region, for example in the auditory nerve or in the brain stem,” says Wu. “Others argued that it might happen in the cortex or thalamus.”

To acquire high-quality in-vivo measurements in the midbrain, which is located deep within the brain, the team designed a novel technique using two paired — or co-axial — electrodes. Previously, it had been very difficult for scientists to acquire recordings in hard-to-access brain regions such as the midbrain, thalamus, and brain stem, says Wu, who believes the new method will be applicable to a wide range of deep-brain research studies.

In addition to finding the site where FM sweep selectivity begins, the researchers discovered how auditory neurons in the midbrain respond to these frequency changes. Combining physical measurements with computational models confirmed that the recorded neurons respond selectively to FM sweeps based on their direction: some neurons were more sensitive to upward sweeps, while others responded more strongly to downward sweeps.
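
As a loose signal-processing analogy to this kind of direction selectivity (a hypothetical sketch, not the researchers’ computational model of the midbrain circuit), the dominant frequency of a sound can be tracked across short time frames, with the sign of its trend used to label a sweep as upward or downward. All names and parameters here are illustrative.

```python
import numpy as np

def sweep_direction(signal, sample_rate=44100, frame=1024, hop=512):
    """Label a signal 'upward' or 'downward' from the trend of its
    per-frame dominant frequency (a toy classifier, not a neural model)."""
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    peaks = [freqs[np.argmax(np.abs(np.fft.rfft(signal[i:i + frame])))]
             for i in range(0, len(signal) - frame + 1, hop)]
    # The sign of the fitted slope of the peak-frequency track gives the direction.
    slope = np.polyfit(np.arange(len(peaks)), peaks, 1)[0]
    return "upward" if slope > 0 else "downward"

# A rising 500 -> 2000 Hz linear chirp over half a second.
sr = 44100
t = np.linspace(0.0, 0.5, int(sr * 0.5), endpoint=False)
rising = np.sin(2 * np.pi * (500 * t + 1500 * t**2 / (2 * 0.5)))

print(sweep_direction(rising, sr))        # upward
print(sweep_direction(rising[::-1], sr))  # reversed in time, so downward
```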

“Our findings suggest that neural networks in the midbrain can convert from non-selective neurons that process all sounds to direction-selective neurons that help us give meanings to words based on how they are spoken. That’s a very fundamental process,” says Wu.

Wu says he plans to continue this line of research, with an eye — or ear — toward helping people with hearing-related disorders. “We might be able to target this area of the midbrain for treatment in the near future,” he says.

Discovery of Hair-Cell Roots Suggests the Brain Modulates Sound Sensitivity

ScienceDaily (Mar. 8, 2012) — The hair cells of the inner ear have a previously unknown “root” extension that may allow them to communicate with nerve cells and the brain to regulate sensitivity to sound vibrations and head position, researchers at the University of Illinois at Chicago College of Medicine have discovered.

Their finding is reported online in advance of print in the Proceedings of the National Academy of Sciences.

The hair-like structures, called stereocilia, are fairly rigid and are interlinked at their tops by structures called tip-links.

When you move your head, or when a sound vibration enters your ear, the motion of fluid in the ear displaces and stretches the tip-links, opening ion channels and exciting the cell, which can then relay information to the brain, says Anna Lysakowski, professor of anatomy and cell biology at the UIC College of Medicine and principal investigator on the study.

The stereocilia are rooted in a gel-like cuticle on the top of the cell that is believed to act as a rigid platform, helping the hairs return to their resting position.

Lysakowski and her colleagues were interested in a part of the cell called the striated organelle, which lies underneath this cuticle plate and is believed to be responsible for its stability. Using a high-voltage electron microscope at the National Center for Microscopy and Imaging Research at the University of California, San Diego, Florin Vranceanu, a recent doctoral student in Lysakowski’s UIC lab and first author of the paper, was able to construct a composite picture of the entire top section of the hair cell.

“When I saw the pictures, I was amazed,” said Lysakowski.

Textbooks, she said, describe the roots of the stereocilia as ending in the cuticular plate. But the new pictures showed that the roots continue through, make an abrupt 110-degree bend, and extend all the way to the membrane at the opposite side of the cell, where they connect with the striated organelle.

For Lysakowski, this suggested a new way to envision how hair cells work. Just as the brain adjusts the sensitivity of retinal cells in the eye to light, it may also modulate the sensitivity of hair cells in the inner ear to sound and head position.

When the eye detects light, there is feedback from the brain to the eye. “If it’s too bright the brain can say, okay, I’ll detect less light — or, it’s not bright enough, let me detect more,” Lysakowski said.

Because the striated organelle connects the rootlets to the cell membrane, it creates the possibility of feedback from the cell to the very structures that detect motion. Feedback from the brain could alter the tension on the rootlets and thus their sensitivity to stimuli. The striated organelle may also tip the whole cuticular plate at once, modulating the entire process.

“This may revolutionize the way we think about the hair cells in the inner ear,” Lysakowski said.

The study was supported by grants from the National Institute on Deafness and Other Communication Disorders, the American Hearing Research Foundation, the National Center for Research Resources, and the 2008 Tallu Rosen Grant in Auditory Science from the National Organization for Hearing Research Foundation.

Graduate student Robstein Chidavaenzi and Steven Price, an electron microscope technologist, also contributed by identifying three of the proteins composing the striated organelle and demonstrating how they arise during development. Guy Perkins, Masako Terada and Mark Ellisman from the National Center for Microscopy and Imaging Research in Biological Systems, University of California, San Diego, also contributed to the study.