jueves, 29 de julio de 2010

Of two minds: Listener brain patterns mirror those of the speaker


By R. Douglas Fields

A new study from Princeton University reports that a female student of lead investigator Uri Hasson can project her own brain activity onto another person, forcing that person's neural activity to closely mirror the activity in her own brain. The process is otherwise known as speech.

There have been many functional brain-imaging studies involving language, but never before have researchers examined both the speaker's and the listener's brains while they communicate to see what is happening inside each brain. The researchers found that when the two people communicate, neural activity over wide regions of their brains becomes almost synchronous, with the listener's brain activity patterns mirroring those sweeping through the speaker's brain, albeit with a short lag of about one second. If the listener, however, fails to comprehend what the speaker is trying to communicate, their brain patterns decouple.

Previously, most brain-imaging studies of language used repetition of simple sounds to stimulate a listener's brain in order to locate the regions that mediate listening, or they had a speaker repeat simple words to examine the cerebral areas involved in speech production. This disjointed approach was necessary because analyzing fMRI (functional magnetic resonance imaging) data requires repeating a stimulus many times during successive brain scans to average the responses and find regions exhibiting heightened or depressed activity. Also, the imaging machines are noisy, which makes it difficult to have a normal conversation. These past approaches, however, are not adequate for studying communication, which requires that the recipient be attentive and comprehend what the speaker is saying. If, for example, a teacher is lecturing and a student who is listening intently has become lost, there is a failure of communication.

In order to find out what happens in the brain when a speaker and listener communicate or fail to connect, Hasson, an assistant professor in Princeton's Department of Psychology, and his team first had to overcome both technical problems, using new analytical methods as well as special nonmagnetic noise-canceling microphones. He asked his student to tell an unrehearsed, simple story while imaging her brain. Then they played back that story to several listeners and found that the listeners' brain patterns closely matched what was happening inside the speaker's head as she told the story.

The better matched the listener's brain patterns were with the speaker's, the better the listener's comprehension, as shown by a test given afterward. There was no mirroring of the speaker's brain activity patterns if the listeners instead heard a different story recorded previously by the same speaker and played to them as a control experiment. English speakers listening to a story told in Russian did not show higher-level brain coupling. In other words, there is no mirroring of brain activity between two people's brains when there is no effective communication, except in some regions where elementary aspects of sound are detected. When there is communication, large areas of brain activity become coupled between speaker and listener, including cortical areas involved in understanding the meaning and social aspects of the story.

Interestingly, in part of the prefrontal cortex in the listener's brain, the researchers found that neural activity preceded the activity that was about to occur in the speaker's brain. This happened only when the listener was fully comprehending the story and anticipating what the speaker would say next.

"Communication is a joint action, by which two brains become coupled," Hasson explained in an e-mail. "It tells us that such coupling is extensive, [a property of the network seen across many brain areas]."

The team is interested in determining whether nonverbal communication similarly causes mirrored brain activity in the recipient's brain, and whether communication in the animal world may have similar properties. "We are thinking about fly courtship song and bird songs. In a fly courtship song, only the male can sing. It was discovered, however, that females have the capacity to sing, but it is inhibited," Hasson says. This fits with the new findings, because if the female's brain could not mirror activity in the male fly's brain, the two would not be able to communicate. Language binds brains together, and in this melding of minds forms societies.

The results are detailed in the July 26 issue of Proceedings of the National Academy of Sciences.

miércoles, 28 de julio de 2010

Good conversation results in a 'mind meld'



July 27th, 2010 in Medicine & Health / Neuroscience

(PhysOrg.com) -- Researchers studying human conversation have discovered the brains of listeners and speakers become synchronized, and this "neural coupling" makes for effective communication. In essence, the participants’ brains connect in a kind of "mind meld."

Psychologist Uri Hasson from Princeton University wanted to find out which areas of the brain were active during speaking and listening to a conversation to test a hypothesis that there is more overlap between these brain areas than generally assumed. It has been noted, for example, that people taking part in conversations will often subconsciously imitate each other’s grammar, rates of speaking and even gestures and posture.

In the first part of the experiment, graduate student Lauren Silbert placed her head in a functional magnetic resonance imaging (fMRI) machine for fifteen minutes, while she recounted an unrehearsed story from her high-school years.

The research team recorded the story using a microphone capable of filtering out the noise of the fMRI machine, and then in the second part of the experiment, a volunteer had his or her head scanned by the fMRI machine while listening to the recording.

The team found a great deal of synchronization between the activity in Silbert’s brain and in those of the 11 volunteers, with the same regions of the brains lighting up at or near the same points in the story. This finding was surprising, given the long-held belief that speaking and listening use separate areas of the brain. The areas of the brain affected were linked to language, but their exact functions are as yet unknown.

In most areas of the brain the activation pattern appeared one to three seconds after it had appeared in Silbert’s brain, but in a few other areas, including an area in the frontal lobe, the activation pattern appeared in the listeners’ brains before it appeared in Silbert’s, which the researchers thought could represent the listeners anticipating what was coming next in the story.

The researchers then asked the subjects to re-tell the story they had heard, and found there was a positive correlation between the strength of the neural coupling and the volunteer’s ability to recall the story details. Hasson concluded that the “more similar our brain patterns during a conversation, the better we understand each other.”

A third stage in the experiment was designed to ensure the neural coupling was not an experimental artifact. In this stage 11 volunteers - all English speakers - were asked to listen to a story told in Russian, which none of them understood. In this experiment no neural coupling was seen. A final stage of the experiment was to have the graduate student tell a different story while having her brain scanned. The results were then compared to the brain patterns of the listeners of the original story. As with the Russian story, no coupling was seen.

Hasson said the next step in the research is to design an experimental set up in which two subjects can have their brains scanned by fMRI simultaneously while they are having a conversation. He predicted that this would produce especially strong synchronization, and also speculated that neural coupling would be stronger in people talking face-to-face than in conversations over the phone or by video conferencing.

The results are published in the Proceedings of the National Academy of Sciences journal and the paper is available online.

martes, 27 de julio de 2010

Social vocalizations can release oxytocin in humans



Leslie J. Seltzer 1, Toni E. Ziegler 2 and Seth D. Pollak 1

1Departments of Psychology, Anthropology, and Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA
2Wisconsin National Primate Research Center, Madison, WI 53705, USA
*Author for correspondence (lseltzer@wisc.edu).

Abstract
Vocalizations are important components of social behaviour in many vertebrate species, including our own. Less well-understood are the hormonal mechanisms involved in response to vocal cues, and how these systems may influence the course of behavioural evolution. The neurohormone oxytocin (OT) partly governs a number of biological and social processes critical to fitness, such as attachment between mothers and their young, and suppression of the stress response after contact with trusted conspecifics. Rodent studies suggest that OT's release is contingent upon direct tactile contact with such individuals, but we hypothesized that vocalizations might be capable of producing the same effect. To test our hypothesis, we chose human mother–daughter dyads and applied a social stressor to the children, following which we randomly assigned participants into complete contact, speech-only or no-contact conditions. Children receiving a full complement of comfort including physical, vocal and non-verbal contact showed the highest levels of OT and the swiftest return to baseline of a biological marker of stress (salivary cortisol), but a strikingly similar hormonal profile emerged in children comforted solely by their mother's voice. Our results suggest that vocalizations may be as important as touch to the neuroendocrine regulation of social bonding in our species.

jueves, 22 de julio de 2010

Every action has a beginning and an end (and it's all in your brain)


July 21st, 2010 in Medicine & Health / Neuroscience


Rui Costa, Principal Investigator of the Champalimaud Neuroscience Programme at the Instituto Gulbenkian de Ciencia (Portugal), and Xin Jin, of the National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health (USA), describe in the latest issue of the journal Nature how the activity of certain neurons in the brain can signal the initiation and termination of newly learnt behavioural sequences. Furthermore, they found that this brain activity is essential for learning and executing novel action sequences, abilities that are often compromised in patients suffering from disorders such as Parkinson's or Huntington's disease.

Animal behaviour, including our own, is very complex and is often seen as a sequence of particular actions or movements, each with a precise start and stop. This is evident in a wide range of abilities, from escaping a predator to playing the piano: in each there is a first step and one that signals the end. In this latest work, the researchers explored the role of certain brain circuits located in the basal ganglia in this process. They looked at the striatum, its dopaminergic input (dopamine-producing neurons that project into the striatum) and its output to the substantia nigra, another area in the basal ganglia, and found that both play an essential role in the initiation and termination of newly learnt behavioural sequences.

Rui Costa and Xin Jin show that when mice are learning to perform a particular behavioural sequence there is a specific neuronal activity that emerges in those brain circuits and signals the initiation and termination steps. Interestingly these are the circuits that degenerate in patients suffering from Parkinson's and Huntington's diseases, who also display impairments both in sequence learning, and in the initiation and termination of voluntary movements. Furthermore, the researchers were able to genetically manipulate those circuits in mice, and showed that this leads to deficits in sequence learning by the mice - again, a feature shared with human patients affected with basal ganglia disorders.

Rui Costa explains the implications of these results: "For the execution of learned skills, like playing a piano or driving a car, it is essential to know when to start and stop each particular sequence of movements, and we found the neuronal circuits that are involved in the initiation and termination of action sequences that are learnt. This can be of particular relevance for patients suffering from Huntington's and Parkinson's disease, but also for people suffering from other disorders like compulsivity".

Xin Jin adds: "This start/stop activity appears during learning, and disrupting it genetically severely impairs the learning of new action sequences. These findings may provide a possible insight into the mechanism underlying the sequence learning and execution impairments observed in Parkinson's and Huntington's patients who have lost basal ganglia neurons which may be important in generating initiation and termination activity in their brain".

More information: Xin Jin & Rui M. Costa; 'Start/stop signals emerge in nigrostriatal circuits during sequence learning'; Nature, volume 466, issue 7305, pp 457-462. DOI: 10.1038/nature09263


Provided by Instituto Gulbenkian de Ciencia

miércoles, 21 de julio de 2010

Taking music seriously: How music training primes nervous system and boosts learning



July 20th, 2010 in Medicine & Health / Neuroscience


Those ubiquitous wires connecting listeners to you-name-the-sounds from invisible MP3 players -- whether of Bach, Miles Davis or, more likely today, Lady Gaga -- only hint at music's effect on the soul throughout the ages.

Now a data-driven review by Northwestern University researchers that will be published July 20 in Nature Reviews Neuroscience pulls together converging research from the scientific literature linking musical training to learning that spills over to skills including language, speech, memory, attention and even vocal emotion. The science covered comes from labs all over the world, from scientists of varying scientific philosophies, using a wide range of research methods.

The explosion of research in recent years focusing on the effects of music training on the nervous system, including the studies in the review, has strong implications for education, said Nina Kraus, lead author of the review, the Hugh Knowles Professor of Communication Sciences and Neurobiology and director of Northwestern's Auditory Neuroscience Laboratory.

Scientists use the term neuroplasticity to describe the brain's ability to adapt and change as a result of training and experience over the course of a person's life. The studies covered in the Northwestern review offer a model of neuroplasticity, Kraus said. The research strongly suggests that the neural connections made during musical training also prime the brain for other aspects of human communication.

An active engagement with musical sounds not only enhances neuroplasticity, she said, but also enables the nervous system to provide the stable scaffolding of meaningful patterns so important to learning.

"The brain is unable to process all of the available sensory information from second to second, and thus must selectively enhance what is relevant," Kraus said. Playing an instrument primes the brain to choose what is relevant in a complex process that may involve reading or remembering a score, timing issues and coordination with other musicians.

"A musician's brain selectively enhances information-bearing elements in sound," Kraus said. "In a beautiful interrelationship between sensory and cognitive processes, the nervous system makes associations between complex sounds and what they mean." The efficient sound-to-meaning connections are important not only for music but for other aspects of communication, she said.

The Nature article reviews literature showing, for example, that musicians are more successful than non-musicians in learning to incorporate sound patterns for a new language into words. Children who are musically trained show stronger neural activation to pitch changes in speech and have a better vocabulary and reading ability than children who did not receive music training.

And musicians trained to hear sounds embedded in a rich network of melodies and harmonies are primed to understand speech in a noisy background. They exhibit both enhanced cognitive and sensory abilities that give them a distinct advantage for processing speech in challenging listening environments compared with non-musicians.

Children with learning disorders are particularly vulnerable to the deleterious effects of background noise, according to the article. "Music training seems to strengthen the same neural processes that often are deficient in individuals with developmental dyslexia or who have difficulty hearing speech in noise."

Currently what is known about the benefits of music training on sensory processing beyond that involved in musical performance is largely derived from studying those who are fortunate enough to afford such training, Kraus said.

The research review, the Northwestern researchers conclude, argues for serious investment of resources in music training in schools, accompanied by rigorous examinations of the effects of such instruction on listening, learning, memory, attention and literacy skills.

"The effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness and thus requires society to re-examine the role of music in shaping individual development," the researchers conclude.

More information: "Music training for the development of auditory skills," by Nina Kraus and Bharath Chandrasekaran, will be published July 20 in the journal Nature Reviews Neuroscience.


Provided by Northwestern University

martes, 13 de julio de 2010

Making the invisible visible: Verbal -- not visual -- cues enhance visual detection


July 12th, 2010 in Medicine & Health / Psychology & Psychiatry


Cognitive psychologists at the University of Pennsylvania and University of California have shown that an image displayed too quickly to be seen by an observer can be detected if the participant first hears the name of the object.

Through a series of experiments published in the journal PLoS ONE, researchers found that hearing the name of an object improved participants' ability to see it, even when the object was flashed onscreen in conditions and at speeds (50 milliseconds) that would render it invisible. Surprisingly, the effect seemed to be specific to language: a visual preview did not make the invisible target visible. Getting a good look at the object before the experiment did nothing to help participants see it when it flashed.

The study demonstrated that language can change what we see and can also enhance perceptual sensitivity. Verbal cues can influence even the most elementary visual processing and inform our understanding of how language affects perception.

Researchers led by psychologist Gary Lupyan, assistant professor in the Department of Psychology at Penn, had participants complete an object detection task in which they made an object-presence or -absence decision to briefly presented capital letters.
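In a presence/absence detection task like this, perceptual sensitivity is conventionally separated from response bias using signal detection theory's d' statistic. The article reports that verbal cues enhanced sensitivity; the computation below is a standard illustration of how such a comparison is made, with entirely made-up hit and false-alarm counts, not data from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a small log-linear
    correction to keep the rates away from exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for cued vs. uncued trials.
cued = d_prime(hits=40, misses=10, false_alarms=8, correct_rejections=42)
uncued = d_prime(hits=28, misses=22, false_alarms=12, correct_rejections=38)
print(cued > uncued)  # a valid verbal cue yields the higher sensitivity
```

The point of using d' rather than raw accuracy is that a cue could simply make participants more willing to say "present"; d' rises only if they genuinely discriminate present from absent trials better.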

Other experiments within the study further defined the relationship between auditory cues and identification of visual images. For example, researchers reasoned that if auditory cues help with object detection by encouraging participants to mentally picture the image, then the cuing effect might disappear when the target moved on screen. The study found that verbal cues still clued participants in: no matter where on screen the target showed up, the effect of the auditory cue was undiminished, an advantage over visual cues.

Researchers also found that the magnitude of the cuing effect correlated with each participant's own estimation of the vividness of their mental imagery. Using a common questionnaire, researchers found that those who consider their mental imagery particularly vivid scored higher when provided an auditory cue.

The team went on to determine that the auditory cue improved detection only when the cue was correct; that is, the target image and the verbal cue had to match. According to researchers, hearing the image labeled evokes an image of the object, strengthening its visual representation and thus making it visible.

"This research speaks to the idea that perception is shaped moment-by-moment by language," said Lupyan. "Although only English speakers were tested, the results suggest that because words in different languages pick out different things in the environment, learning different languages can shape perception in subtle, but pervasive ways."

The single study is part of a greater effort by Lupyan and other Penn psychologists to understand how high-level cognitive expectations, in this case verbal cues, can influence low-level sensory processing. For years, cognitive psychologists have known that directing participants' attention to a general location improves reaction times to target objects appearing in that location. More recently, experimental evidence has shown that semantic information can influence what one sees in surprising ways. For instance, hearing words that associate with directions of motion, such as a falling "bomb," can interfere with an observer's ability to quickly recognize the next movement they see. Moreover, hearing a word that labels a target improves the speed and efficiency of the search. For instance, when searching for the number 2 among 5's, participants are faster to find the target when they actually hear "find the two" immediately prior to the search - even when 2 has been the target all along.

Provided by University of Pennsylvania

martes, 6 de julio de 2010

Brain patterns could reveal if children will go on to develop mental illness



Many people who go on to develop mental health problems will have a history of behavioral problems going back to childhood. Now brain patterns could offer further insight.
British scientists have found specific patterns of brain activity in children that could be 'markers' of those who will develop mental illnesses such as schizophrenia.


Researchers from Nottingham University said it may be possible to one day use this information to find youngsters at risk of becoming ill before they develop symptoms.

'If we can identify people who are at particularly high risk of developing schizophrenia, perhaps using neurocognitive brain markers, then we might be able to reduce that risk and also help them to function better,' said study leader Dr Maddie Groom.


'If we give them a better start, they may encounter the illness in a more positive way and not get quite so ill.'


Hundreds of millions of people worldwide are affected by mental, behavioral and neurological illnesses such as schizophrenia, attention deficit hyperactivity disorder (ADHD), depression, epilepsy and dementia.


Many people who go on to develop diverse mental health problems will have a history of behavioral problems going back to childhood, but it is difficult to say early on which children will develop them.


In one study, Dr Groom and her colleagues looked at the healthy siblings of people with schizophrenia, who themselves have a slightly increased risk of developing schizophrenia compared with the general population.


Using brain imaging to read activity levels, the scientists asked the siblings to perform a task that involved playing an alien-zapping computer game in which they needed to respond quickly and, crucially, halt the urge to respond if the wrong kind of alien popped up. This is known as a 'go/no-go' task.
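A go/no-go task of this kind is typically scored by separating commission errors (responding when one should withhold, the measure of inhibition) from omission errors (missing a required response, a measure of attention). A minimal scoring sketch with made-up trial data, purely to illustrate the two measures:

```python
# Each trial is (trial_kind, responded): 'go' trials require a response,
# 'nogo' trials require withholding one.
trials = [
    ("go", True), ("go", True), ("nogo", False),
    ("go", False), ("nogo", True), ("go", True),
]

# Commission errors: responded on a 'nogo' trial (inhibition failure).
commission_errors = sum(1 for kind, resp in trials if kind == "nogo" and resp)
# Omission errors: failed to respond on a 'go' trial (attention lapse).
omission_errors = sum(1 for kind, resp in trials if kind == "go" and not resp)

n_go = sum(1 for kind, _ in trials if kind == "go")
n_nogo = len(trials) - n_go

print(commission_errors / n_nogo)  # inhibition-failure rate: 0.5
print(omission_errors / n_go)      # attention-lapse rate: 0.25
```

In the study described here, the interesting signal was not the behavioural scores themselves but the reduced brain activity of siblings at the moments when attention and inhibition were demanded.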


'When we measured the brain activity of the siblings of people with schizophrenia, their brain activity was reduced at the time when they needed to pay attention to the stimulus, and when they needed to inhibit their response,' Dr Groom explained.


She said this suggested the subtle differences in brain activity may act as a risk marker for the disorder.

Lucie Russell, Campaign Director for mental health charity Young Minds, said: 'The possibility of studying brain activity to predict the risk of young people developing mental illness could be really useful in preventing the suffering mental health problems cause to thousands of young people.


'However, as with the criticisms of gene testing we risk labelling children and young people.

'We also must not forget the vital importance of a stable and loving family background on a child’s wellbeing and how emotional and physical abuse can have a devastating effect on their mental health.’

In a second study, scientists compared brain activity of children with ADHD - a mental disorder that affects up to 12 per cent of children and four per cent of adults worldwide.


The researchers used the same 'go/no-go' task in various scenarios, including when the children were taking their medication, Ritalin, and when they were not, and also with and without an additional system of rewards and penalties.


Dr Groom's results showed that children who were taking medication, and children given an incentive, performed better than those who had neither medicines nor incentives.

This suggests, Groom said, that doctors may be able to find new ways to treat children with ADHD using a combination of behavioral strategies and drugs.

The team presented their study at the Forum for European Neuroscience in Amsterdam.

sábado, 3 de julio de 2010

Our brains are more like birds' than we thought

July 2nd, 2010 in Medicine & Health / Neuroscience


For more than a century, neuroscientists believed that the brains of humans and other mammals differed from the brains of other animals, such as birds (and so were presumably better). This belief was based, in part, upon the readily evident physical structure of the neocortex, the region of the brain responsible for complex cognitive behaviors.

A new study, however, by researchers at the University of California, San Diego School of Medicine finds that a comparable region in the brains of chickens concerned with analyzing auditory inputs is constructed similarly to that of mammals.

"And so ends, perhaps, this claim of mammalian uniqueness," said Harvey J. Karten, MD, professor in the Department of Neurosciences at UCSD's School of Medicine, and lead author of the study, published this week in the Proceedings of the National Academy of Sciences Online Early Edition.

Generally speaking, the brains of mammals have long been presumed to be more highly evolved and developed than the brains of other animals, in part based upon the distinctive structure of the mammalian forebrain and neocortex - a part of the brain's outer layer where complex cognitive functions are centered.

Specifically, the mammalian neocortex features layers of cells (lamination) connected by radially arrayed columns of other cells, forming functional modules characterized by neuronal types and specific connections. Early studies of homologous regions in nonmammalian brains had found no similar arrangement, leading to the presumption that neocortical cells and circuits in mammals were singular in nature.

For 40 years, Karten and colleagues have worked to upend this thinking. In the latest research, they used modern, sophisticated imaging technologies, including a highly sensitive tracer, to map a region of the chicken brain (part of the telencephalon) that is similar to the mammalian auditory cortex. Both regions handle listening duties. They discovered that the avian cortical region was also composed of laminated layers of cells linked by narrow, radial columns of different types of cells with extensive interconnections that form microcircuits that are virtually identical to those found in the mammalian cortex.

The findings indicate that laminar and columnar properties of the neocortex are not unique to mammals, and may in fact have evolved from cells and circuits in much more ancient vertebrates.

"The belief that cortical microcircuitry was a unique property of mammalian brains was largely based on the lack of apparent lamination in other species, and the widespread notion that non-mammalian vertebrates were not capable of performing complex cognitive and analytic processing of sensory information like that associated with the neocortex of mammals," said Karten.

"Animals like birds were viewed as lovely automata capable only of stereotyped activity."

But this kind of thinking presented a serious problem for neurobiologists trying to figure out the evolutionary origins of the mammalian cortex, he said. Namely, where did all of that complex circuitry come from and when did it first evolve?

Karten's research supplies the beginnings of an answer: From an ancestor common to both mammals and birds that dates back at least 300 million years.

The new research has contemporary, practical import as well, said Karten. The similarity between mammalian and avian cortices adds support to the utility of birds as suitable animal models in diverse brain studies.

"Studies indicate that the computational microcircuits underlying complex behaviors are common to many vertebrates," Karten said. "This work supports the growing recognition of the stability of circuits during evolution and the role of the genome in producing stable patterns. The question may now shift from the origins of the mammalian cortex to asking about the changes that occur in the final patterning of the cortex during development."

Provided by University of California - San Diego