Who says what?

by Dráuzio Varella

You receive a phone call. Within seconds, you try to identify whether the voice is familiar, whether it belongs to a man, a woman or a child, and what state of mind the speaker is in: nervous, happy, rushed, unhappy, indifferent?

To gather all such information from a single auditory event, neural circuits are mobilized on either side of the brain. Those responsible for voice recognition occupy areas of the right hemisphere, while those in charge of deciphering the meaning of speech are concentrated on the left.

In the past, we thought that these two modules worked independently: one would handle the "who," the other the "what."

This year, Tyler Perrachione's group at MIT published a study in the journal Science, conducted with people with dyslexia, that contradicts this notion of independence between the "what" and the "who."

Dyslexia has been regarded as the result of a defect in the sensory or cognitive processing of speech sounds. The MIT neuroscientists have now shown for the first time that these phonological deficits also compromise the recognition of voices.

They compared people with a long history of dyslexia to non-dyslexic controls of similar age, educational level and intellectual ability. All participants had to learn to recognize voices they had never heard before.

When the unknown voices spoke in English, the participants' native language, the people with dyslexia performed 40% worse. When the voices spoke Mandarin, the difference between the dyslexics and the control group disappeared.

Unlike the non-dyslexics, those with dyslexia showed no advantage in recognizing voices in their native language over a foreign one. This difficulty in identifying voices is a new finding about dyslexia, one that cannot be explained by learning or hearing problems.

Why does our brain work that way?

For the authors, the explanation has historic and evolutionary roots. "The growing complexity of the social world created a selective pressure on brain mechanisms to integrate, rather than isolate, the information gathered in the environment. This functional integration of information (social objective) with the content of the message (linguistic objective) provides the maximum details of the social scene."

If they are right, do the minds of babies work the same way?

A seven-month-old baby can only recognize a change of speaker when the speech is in the language spoken at home. It cannot perceive the difference between people conversing in an unknown language.

Since babies of this age do not yet understand words, the difficulty with foreign speakers cannot be attributed to a lack of comprehension, but to an absence of stored memories. These findings suggest that the baby's brain stores detailed information about the succession of sounds it hears most often from the people closest to it.

When listening to a foreign language, seven-month-old babies behave like dyslexic adults listening to their native language.

At nine months, they learn foreign words and syllables only if they interact directly with the person speaking to them. In front of a TV, they cannot manage it. At this age, given a choice between two people speaking to them, one in the native language and the other in a foreign one, they prefer to look at the former.

These observations demonstrate that social interaction affects the processing of language in babies. Speech is an example of how linking the source of information (who) to its content (what) adds value. Babies seem to be born predisposed to learn by integrating social and linguistic information.

Now we need to figure out how brain centers located on opposite sides of the brain transfer information to one another, so that activating one sets the other in motion.



Translated from the Portuguese version by Lisa Karpova


Author's name: Oksana Orlovskaya