Tuesday, September 28, 2010

Right or left? Brain stimulation can change which hand you favor

September 27th, 2010 in Medicine & Health / Neuroscience

Each time we perform a simple task, like pushing an elevator button or reaching for a cup of coffee, the brain races to decide whether the left or right hand will do the job. But the left hand is more likely to win if a certain region of the brain receives magnetic stimulation, according to new research from the University of California, Berkeley.

UC Berkeley researchers applied transcranial magnetic stimulation (TMS) to the posterior parietal cortex region of the brain in 33 right-handed volunteers and found that stimulating the left side spurred an increase in their use of the left hand.

The left hemisphere of the brain controls the motor skills of the right side of the body, and vice versa. Stimulating the parietal cortex, which plays a key role in processing spatial relationships and planning movement, disrupted the neurons that govern motor skills.

"You're handicapping the right hand in this competition, and giving the left hand a better chance of winning," said Flavio Oliveira, a UC Berkeley postdoctoral researcher in psychology and neuroscience and lead author of the study, published this week in the journal Proceedings of the National Academy of Sciences.

The study's findings challenge previous assumptions about how we make decisions, revealing a competitive process, at least in the case of manual tasks. Moreover, they show that TMS can manipulate the brain to change plans for which hand to use, paving the way for clinical advances in the rehabilitation of victims of stroke and other brain injuries.

"By understanding this process, we hope to be able to develop methods to overcome learned limb disuse," said Richard Ivry, UC Berkeley professor of psychology and neuroscience and co-author of the study.

At least 80 percent of the people in the world are right-handed, but most people are ambidextrous when it comes to performing one-handed tasks that do not require fine motor skills.

"Alien hand syndrome," a neurological disorder in which victims report the involuntary use of their hands, inspired researchers to investigate whether the brain initiates several action plans, setting in motion a competitive process before arriving at a decision.

While the study does not offer an explanation for why there is a competition involved in this type of decision making, researchers say it makes sense that we adjust which hand we use based on changing situations.
"In the middle of the decision process, things can change, so we need to change track," Oliveira said.

In TMS, magnetic pulses alter electrical activity in the brain, disrupting the neurons in the underlying brain tissue. While the current findings are limited to hand choice, TMS could, in theory, influence other decisions, such as whether to choose an apple or an orange, or even which movie to see, Ivry said.

With sensors on their fingertips, the study's participants were instructed to reach for various targets on a virtual tabletop while a 3-D motion-tracking system followed the movements of their hands. When the left posterior parietal cortex was stimulated, and the target was located in a spot where they could use either hand, there was a significant increase in the use of the left hand, Oliveira said.

Provided by University of California -- Berkeley

Saturday, September 25, 2010

Why science can't hold sway

Our biases are overpowering.

By Faye Flam

Inquirer Staff Writer
Why do so many Americans disagree with scientific consensus on issues such as global climate change and the safety of burying nuclear waste? Is it our poor education? Science illiteracy? Innumeracy?

None of the above, according to a new study published in the Journal of Risk Research. People's positions on these issues, and their willingness to believe or discount scientists, depend mostly on ideology, or what the study's authors call "cultural cognition."

After surveying 1,500 people, the researchers found that those who were "egalitarian and resentful of economic inequality" were more likely to assume that there was scientific consensus that human activity is contributing to climate change, but not that it's safe to dispose of nuclear waste underground. Those who were more "hierarchical, individualistic and connected to industry and commerce" were more likely to make the opposite assumptions.

According to reports from the National Academy of Sciences, human activity is contributing to climate change and nuclear waste can be buried safely in certain designated sites.

"It's not that one group is paying more attention to what scientific consensus is," said Dan Kahan, a law professor at Yale and author of the study. But there's a pervasive tendency to form perceptions of scientific consensus that reinforce people's values.

The researchers also presented subjects with fictional experts: Robert Linden, professor of meteorology at MIT; Oliver Roberts, professor of nuclear engineering at U.C. Berkeley; and James Williams, professor of criminology at Stanford. All had Ivy League Ph.D.s and membership in the National Academy of Sciences.

Subjects were asked whether they'd recommend a book by any of these authors to a friend.

The result: The experts could be seen as sages or stooges depending on whether they were said to agree with a subject's preexisting belief.

Sure, Professor Roberts might have a Ph.D. from Princeton, but if he's going to panic about nuclear waste he must be a girly man - or if he thinks it's safe to bury it, someone in the nuclear industry must be paying him.

It's not that people don't like science - it's that they selectively attend to evidence in a way that's gratifying to them, said Kahan. "People will do that with our article," he said. "They'll say that's why those people [who disagree with them] are so dumb."


Thursday, September 23, 2010

Vegetative state patients may soon be able to communicate

September 22nd, 2010 in Medicine & Health / Neuroscience
Communication in the vegetative state (Owen et al., Science, 2006; Monti et al., NEJM, 2010)

Researchers from Cambridge University in the UK have been able to communicate with brain-injured patients in "locked states" commonly referred to as persistent vegetative states (PVS). They predict such patients will soon be able to communicate and perhaps even move themselves around in motorized wheelchairs.

Neuroscientist Dr. Adrian Owen and colleagues used electroencephalography (EEG) monitors connected to 128 electrodes in a cap placed on the heads of brain-injured patients, and were able to understand responses from them. Owen thinks a similar system connected to a computer will be able to decode messages from their brains and allow them to communicate via a voice synthesizer and even control a motorized wheelchair. These systems could be available within a decade.

Dr. Owen used functional magnetic resonance imaging (fMRI) brain scans to prove that one PVS victim could understand queries and give “yes” or “no” answers to simple questions. The 29-year-old male patient had suffered brain damage in a car accident in 2003, and had been in a coma for two years before entering a persistent vegetative state. He appeared to be awake and blinked occasionally, but otherwise showed no signs of awareness.

The team used the fMRI scanner to measure the patient’s brain response while asking him questions. Brain signals associated with “yes” and “no” are complex and quite similar, and to overcome this problem the researchers asked the patient to imagine playing tennis for “yes” and walking through his home for “no”. Tennis movements activate regions at the top of the brain associated with spatial activities, while moving around the home is a navigational task that activates areas in the base of the brain. Using this technique the patient was able to correctly answer six test questions.

The team has now shown that similar responses could be achieved using EEG monitors, which measure electrical activity in the brain. EEG has the advantages of being much cheaper, smaller, and more portable than fMRI, which uses magnetic fields and radio waves to detect electrical pulses in the brain. EEG also gives results much more quickly than fMRI, making a conversation possible.

Dr. Owen said “we have seen something that is quite extraordinary” and that we now have a moral and ethical obligation to find ways to help patients in persistent vegetative states to communicate. The communication will be via yes/no questions, but he said “you can get a long way with yes/no questions.”

The research findings suggest that about 20 percent of PVS sufferers may be able to communicate, which may raise questions about switching off life support systems for such patients.

Dr. Owen and several members of his team will soon be transferring to research posts at the University of Western Ontario in Canada, where they will receive a grant worth around 20 million US dollars to continue the research.

More information: -- Monti MM, Vanhaudenhuyse A, Coleman MR, Boly M, Pickard JD, Tshibanda L, Owen AM, Laureys S (2010), “Willful modulation of brain activity in disorders of consciousness.” N Engl J Med 362(7):579-89
-- Owen AM, Coleman MR, Davis MH, Boly M, Laureys S, Pickard JD (2006), “Detecting awareness in the vegetative state” Science 313:1402

Tuesday, September 21, 2010

For neurons to work as a team, it helps to have a beat

September 20th, 2010 in Medicine & Health / Neuroscience

When it comes to conducting complex tasks, it turns out that the brain needs rhythm, according to researchers at the University of California, Berkeley.

Specifically, cortical rhythms, or oscillations, can effectively rally groups of neurons in widely dispersed regions of the brain to engage in coordinated activity, much like a conductor summoning various sections of an orchestra to play a symphony.

Even the simple act of catching a ball necessitates an impressive coordination of multiple groups of neurons to perceive the object, judge its speed and trajectory, decide when it's time to catch it and then direct the muscles in the body to grasp it before it whizzes by or drops to the ground.

Until now, neuroscientists had not fully understood how these neuron groups in widely dispersed regions of the brain first get linked together so they can work in concert for such complex tasks.

The UC Berkeley findings are to be published the week of Sept. 20 in the online early edition of the journal Proceedings of the National Academy of Sciences.

"One of the key problems in neuroscience right now is how you go from billions of diverse and independent neurons, on the one hand, to a unified brain able to act and survive in a complex world, on the other," said principal investigator Jose Carmena, UC Berkeley assistant professor at the Department of Electrical Engineering and Computer Sciences, the Program in Cognitive Science, and the Helen Wills Neuroscience Institute. "Evidence from this study supports the idea that neuronal oscillations are a critical mechanism for organizing the activity of individual neurons into larger functional groups."

The idea behind anatomically dispersed but functionally related groups of neurons is credited to neuroscientist Donald Hebb, who put forward the concept in his 1949 book "The Organization of Behavior."

"Hebb basically said that single neurons weren't the most important unit of brain operation, and that it's really the cell assembly that matters," said study lead author Ryan Canolty, a UC Berkeley postdoctoral fellow in the Carmena lab.

It took decades after Hebb's book for scientists to start unraveling how groups of neurons dynamically assemble. Not only do neuron groups need to work together for the task of perception - such as following the course of a baseball as it makes its way through the air - but they then need to join forces with groups of neurons in other parts of the brain, such as in regions responsible for cognition and body control.

At UC Berkeley, neuroscientists examined existing data recorded over the past four years from four macaque monkeys. Half of the subjects were engaged in brain-machine interface tasks, and the other half were participating in working memory tasks. The researchers looked at how the timing of electrical spikes - or action potentials - emitted by nerve cells was related to rhythms occurring in multiple areas across the brain.

Among the squiggly lines, patterns emerged that give literal meaning to the phrase "tuned in." The timing of when individual neurons spiked was synchronized with brain rhythms occurring in distinct frequency bands in other regions of the brain. For example, the high-beta band - 25 to 40 hertz (cycles per second) - was especially important for brain areas involved in motor control and planning.

"Many neurons are thought to respond to a receptive field, so that if I look at one motor neuron as I move my hand to the left, I'll see it fire more often, but if I move my hand to the right, the neuron fires less often," said Carmena. "What we've shown here is that, in addition to these traditional 'external' receptive fields, many neurons also respond to 'internal' receptive fields. Those internal fields focus on large-scale patterns of synchronization involving distinct cortical areas within a larger functional network."

The researchers expressed surprise that this spike dependence was not restricted to the neuron's local environment. It turns out that this local-to-global connection is vital for organizing spatially distributed neuronal groups.

"If neurons only cared about what was happening in their local environment, then it would be difficult to get neurons to work together if they happened to be in different cortical areas," said Canolty. "But when multiple neurons spread all over the brain are tuned in to a specific pattern of electrical activity at a specific frequency, then whenever that global activity pattern occurs, those neurons can act as a coordinated assembly."

The researchers pointed out that this mechanism of cell assembly formation via oscillatory phase coupling is selective. Two neurons that are sensitive to different frequencies or to different spatial coupling patterns will exhibit independent activity, no matter how close they are spatially, and will not be part of the same assembly. Conversely, two neurons that prefer a similar pattern of coupling will exhibit similar spiking activity over time, even if they are widely separated or in different brain areas.

"It is like the radio communication between emergency first responders at an earthquake," Canolty said. "You have many people spread out over a large area, and the police need to be able to talk to each other on the radio to coordinate their action without interfering with the firefighters, and the firefighters need to be able to communicate without disrupting the EMTs. So each group tunes into and uses a different radio frequency, providing each group with an independent channel of communication despite the fact that they are spatially spread out and overlapping."
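The frequency-selective coupling described above is commonly quantified as a phase-locking value: the length of the mean resultant vector of the oscillation's phase at each spike time. The sketch below is a hypothetical illustration of that measure, not the study's own analysis code; the neuron data are simulated.

```python
import numpy as np

def phase_locking(spike_phases: np.ndarray) -> float:
    """Phase-locking value of a set of phases (radians).
    1.0 = spikes always arrive at the same oscillation phase; ~0.0 = no preference."""
    return float(np.abs(np.mean(np.exp(1j * spike_phases))))

rng = np.random.default_rng(0)
locked = rng.vonmises(mu=0.0, kappa=4.0, size=500)   # spikes clustered near one phase
unlocked = rng.uniform(-np.pi, np.pi, size=500)      # spikes at random phases

print(f"phase-locked neuron: {phase_locking(locked):.2f}")
print(f"unlocked neuron:     {phase_locking(unlocked):.2f}")
```

A neuron "tuned in" to a given rhythm scores near 1; one that ignores it scores near 0, regardless of how far apart the neuron and the rhythm's source are.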

The authors noted that this local-to-global relationship in brain activity may prove useful for improving the performance of brain-machine interfaces, or lead to novel strategies for regulating dysfunctional brain networks through electrical stimulation. Treatment of movement disorders through deep brain stimulation, for example, usually targets a single area. This study suggests that gentler rhythmic stimulation in several areas at once may also prove effective, the authors said.

Provided by University of California -- Berkeley

Friday, September 17, 2010

How does Prozac act? By acting on the microRNA

September 16th, 2010 in Medicine & Health / Research


The response time to antidepressants such as Prozac is around three weeks. How can we explain this? The adaptation mechanisms of neurons to antidepressants have, until now, remained enigmatic. Research published this week by the teams of Odile Kellermann (Inserm Unit 747, Cellules souches, Signalisation et Prions, Université Paris-Descartes) and Jean-Marie Launay (Inserm Unit 942, Hôpital Lariboisière, Paris, and the mental health network Santé Mentale) sheds new light on the mechanisms of action of these drugs, which have been used for more than 30 years and are widely consumed around the world. In particular, the researchers have revealed, for the first time, a sequence of reactions caused by Prozac at the neuron level, which contributes to an increase in the amount of serotonin, a chemical "messenger" essential to the brain and deficient in depressive individuals.

Details of this work are published in the journal Science dated 17 September 2010.

Depressive states are associated with a deficit of serotonin (5-HT), one of the neurotransmitters essential for communication between neurons and particularly involved in eating and sexual behaviours, the sleep-wake cycle, pain, anxiety and mood problems.

Strategies employing class I antidepressant molecules, developed since the 1960s, are thus primarily aimed at increasing the quantity of serotonin released into the synaptic gap, the space between two neurons where communication takes place via neurotransmitters. Although it has been known for several years that antidepressants like Prozac increase the concentration of serotonin by blocking its reuptake by the serotonin transporter (SERT) in the synapses, we did not hitherto know how to explain the delay in their action (three weeks).

The teams of Odile Kellermann and Jean-Marie Launay, in close collaboration with Hoffmann-La Roche (Basel), have now characterised for the first time, in vitro and then in vivo, the various reactions and intermediate molecules produced in the presence of Prozac, which are eventually responsible for an increased release of serotonin. In particular, the researchers have identified the key role of one particular microRNA in the mechanism of action of antidepressants on the brain.

This microRNA, known as miR-16, controls synthesis of the serotonin transporter.

Under normal physiological conditions, this transporter is present in so-called "serotonergic" neurons, i.e. neurons specialised in the production of this neurotransmitter. However, expression of the transporter is reduced to zero by miR-16 in "noradrenergic" neurons, which produce noradrenaline, another neurotransmitter involved in attention, emotions, sleep, dreaming and learning.

In response to Prozac, the serotonergic neurons release a signal molecule that causes the quantity of miR-16 to drop, which unlocks expression of the serotonin transporter in the noradrenergic neurons.

These neurons become sensitive to Prozac. They continue to produce noradrenaline, but they become mixed: they also synthesise serotonin. Ultimately, the quantity of released serotonin is increased both in the serotonergic neurons, via the direct effect of Prozac, which prevents its reuptake, and in the noradrenergic neurons, through the reduction of miR-16.

Hence, "this work has revealed, for the first time, that antidepressants are able to activate a new 'source' of serotonin in the brain", explain the researchers. "Furthermore, our results demonstrate that the effectiveness of Prozac rests on the 'plastic' properties of the noradrenergic neurons, i.e. their capacity to acquire the functions of serotonergic neurons."

To elucidate the mode of action of Prozac, the researchers, from the Ile-de-France region, used neuronal stem cells that can differentiate into neurons manufacturing either serotonin or noradrenaline. These cells, isolated and characterised by the two research teams, allowed them to reveal, using pharmacological and molecular approaches, the functional links between Prozac, miR-16, the serotonin transporter and the trigger signal molecule, known as S100Beta. These links, observed in vitro, have been validated in vivo in mice, in the serotonergic neurons of the raphe and the noradrenergic neurons of the locus coeruleus. Dialogue between these two areas of the brain, situated under the cortex in the brainstem, is therefore one of the keys to Prozac's action.

Behavioural tests have moreover confirmed the importance of miR-16 as an intermediary in Prozac action.

These results open up new avenues of research for the treatment of depressive states. Each of the "actors" in the sequence of reactions initiated by Prozac constitutes a potential pharmacological target.

The pharmacological dynamics of antidepressants, i.e. the study of the speed of action of these molecules, should also be the subject of new investigations in light of these new ideas.

More information: "miR-16 Targets the Serotonin Transporter: A New Facet for Adaptive Responses to Antidepressants" Science, September 17th 2010, vol. 329, 5998.

Provided by INSERM

Thursday, September 16, 2010

Two or three is all we see

September 15th, 2010 in Medicine & Health / Research

The human brain can see only up to three moving objects at a given instant, new research has found.

The discovery has important implications for road design and safety, driver and pilot training, industrial safety, fast-moving sports and other areas of human visual activity affecting safety and performance.

“We have found that there is a limit to the maximum number of directions, and hence distinct objects, that the human brain can see at any given instant in time,” said Dr. Mark Edwards of The Vision Centre and The Australian National University.

“That limit is either two or three, depending upon how the directions are defined.”

“An example of this is when we’re at a road roundabout: although we can see cars coming and going in different directions, we can’t actually keep close tabs on more than three simultaneously. Rather, what the brain does is process them in series, like cars heading from the right, then in the roundabout, followed by cars on the left.”

The same theory also applies to multi-tasking, said Dr. Edwards: “We may assume that people who are good at multitasking can process lots of different things at once, but what may be closer to the reality is that they are actually able to switch their attention faster to the next item.”

Dr. Edwards and Dr. John Greenwood tested the brain's ability to detect signal directions using random-dot stimuli. These stimuli consisted of a field of scattered dots, with groups of intermingled dots moving in different directions. They found that people could not detect more than two signal directions at once if the signals differed only in direction; they could detect three when the signals also differed in speed or depth.

However, once these limits were exceeded, instead of seeing distinct signals, what they saw was only randomly-moving noise.

Dr. Edwards explained that the limits of two and three occur for different reasons.

“In order to see motion, the signal intensity - the proportion of dots moving in a given direction - needs to be of at least a certain value. For the transparent displays used here, that intensity level has to be over 40 per cent,” said Dr. Edwards. “What this means is there has to be at least 40 per cent of the dots moving in a given direction to stimulate the brain cells that are processing that motion.”

Given that dots moving in different directions act as noise for each other, the maximum signal level that can be obtained with three directions, if all of those signals drive the same motion cells, is 33 per cent - a level too low to be seen.
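The arithmetic behind that limit can be made explicit. A minimal back-of-the-envelope sketch (an illustration, not the study's own code), assuming N equally weighted transparent directions all driving the same motion cells and the 40 per cent threshold quoted above:

```python
# With N equally weighted transparent motion directions, every other direction's
# dots count as noise, so the best signal fraction any one direction can claim is 1/N.
THRESHOLD = 0.40  # minimum dot fraction needed to see a direction, per the article

def max_signal_fraction(n_directions: int) -> float:
    """Largest fraction of dots any single direction can contribute."""
    return 1.0 / n_directions

for n in (1, 2, 3, 4):
    frac = max_signal_fraction(n)
    status = "visible" if frac >= THRESHOLD else "below threshold"
    print(f"{n} direction(s): max signal fraction {frac:.0%} -> {status}")
```

Two directions leave each with 50 per cent of the dots (above threshold); three leave only 33 per cent each, which is why a third same-speed, same-depth direction collapses into noise.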

However, our brain’s motion cells are also sensitive to different speeds and depths, so if the various signals differ not only in direction but also in speed and depth, they are processed by different motion cells, making it possible to get the signal intensity above 40 per cent even when many different signal directions are used.

However, when this is done, the maximum number of signals that can be perceived only increases to three: far fewer than if the limit were due purely to signal-to-noise limitations.

“Clearly, there is another processing limit being imposed by the visual system, most likely an attentional bottleneck. To see more than three different directions of motion, and hence more than three moving objects, you have to selectively attend to them, one by one,” he explained.

Dr. Edwards said that the new insight into the ability of the average person to keep track of numerous moving objects has important health and safety implications for a whole range of areas such as road design and planning, operator training for vehicles, aircraft and for large equipment, sports training and other activities that demand a lot of our visual system.

Provided by Australian National University

Wednesday, September 15, 2010

Why some memories stick

Repetitive neural responses may enhance recall of faces and words.

Janelle Weaver

Faces that activate the same regions of the brain again and again are more likely to be remembered.
Pasieka / Science Photo Library

Practice makes perfect when it comes to remembering things, but exactly how that works has long been a mystery. A study published in Science this week [1] indicates that reactivating neural patterns over and over again may etch items into the memory.

People find it easier to recall things if material is presented repeatedly at well-spaced intervals rather than all at once. For example, you're more likely to remember a face that you've seen on multiple occasions over a few days than one that you've seen only once, even for a long period. One reason that a face linked to many different contexts — such as school, work and home — is easier to recognize than one associated with just one setting, such as a party, could be that there are multiple ways to access the memory. This idea, called the encoding variability hypothesis, was proposed by psychologists about 40 years ago [2].

Each different context or setting activates a distinct set of brain regions; the hypothesis suggests that it is these differing neural responses that improve the memory. But neuroimaging research led by Russell Poldrack, a cognitive neuroscientist at the University of Texas, Austin, now suggests that the opposite is true — items are better remembered when they activate the same neural patterns with each exposure.

Neural rehearsal
Poldrack's team measured brain activity in 24 people using functional magnetic resonance imaging (fMRI). The subjects saw 120 unfamiliar faces, each one repeated four times at varying intervals during the fMRI scan. One hour later, they were shown the faces again, mixed with 120 new ones, and asked to rate the familiarity of each.

The researchers then looked at the brain responses that had been recorded when the subjects were first shown the faces, focusing on 20 brain regions associated with visual perception and memory. Faces that were later recognized evoked similar activation patterns at each repetition in nine of the regions, particularly those associated with object and face perception; faces that were later forgotten did not evoke such patterns to the same extent.

In a separate experiment, subjects in the fMRI scanner were shown 180 words, each repeated three times. Six hours later, they performed two memory tests. The remembered words elicited similar patterns at each repetition in 15 of the 20 brain regions that the researchers examined.

Explaining the brain
But Marvin Chun, a cognitive neuroscientist at Yale University in New Haven, Connecticut, says that the results do not invalidate the encoding variability hypothesis, because Poldrack and his team were looking at a different type of situation. To directly test the hypothesis, the authors should have presented items in different contexts, he says.

What's more, attention-grabbing words or faces may elicit more reproducible patterns of activation when they are presented multiple times than do less striking items, says Rik Henson, a cognitive neuroscientist at the MRC Cognition and Brain Sciences Unit in Cambridge, UK. This effect could explain the results without refuting the encoding variability hypothesis, he adds.

"We can't rule that out," Poldrack says. To address this concern, he would have to further analyse subjects' brain responses to individual items. "It may well be the case that there is a version of the encoding variability hypothesis that is compatible with these data."

"If we push the theorists to think a little harder, and to try to incorporate neuroscience data into these theories, then I think that is a good thing, regardless of whether the encoding variability theory turns out to be right," he adds.

References
1. Xue, G. et al. Science doi:10.1126/science.1193125 (2010).
2. Martin, E. Psychol. Rev. 75, 421-441 (1968).
Source: Nature

Tuesday, September 14, 2010

Children and adults see the world differently

September 13th, 2010 in Medicine & Health / Research

Unlike adults, children are able to keep information from their senses separate and may therefore perceive the visual world differently, according to research published today.

Scientists at UCL (University College London) and Birkbeck, University of London, have found that children younger than 12 do not combine different sensory information to make sense of the world as adults do. This applies not only to combining different senses, such as vision and sound, but also to the different information the brain receives when looking at a scene with one eye compared with both eyes.

The results, published today in the Proceedings of the National Academy of Sciences, imply that children's experience of the visual world is very different to that of adults.

Dr Marko Nardini, of the UCL Institute of Ophthalmology and lead author of the study, said: "To make sense of the world we rely on many different kinds of information. A benefit of combining information across different senses is that we can determine what is out there more accurately than by using any single sense."

He added: "The same is true for different kinds of information within a single sense. Within vision there are several ways to perceive depth. In a normal film, depth is apparent from perspective, for example in an image of a long corridor. This kind of depth can be seen even with one eye shut. In a 3D film, and in real life, there is also binocular depth information given by differences between the two eyes."

The study looked at how children and adults combine perspective and binocular depth information. Results show that being able to use the two kinds of depth information together does not happen until very late in childhood - around the age of 12.

Scientists asked children and adults wearing 3D glasses to compare two slanted surfaces and judge which was the "flattest", given perspective and binocular information separately or both together. It was not until around age 12 that children combined perspective and binocular information to improve the accuracy of their judgements, as adults do. This implies that adults combine different kinds of visual information into a single unified estimate, whereas children do not.
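The accuracy benefit adults gain from combining cues is usually formalised as reliability-weighted averaging, the standard maximum-likelihood cue-combination model. The sketch below is an illustration of that textbook model with made-up numbers, not the paper's own analysis:

```python
# Each cue (e.g. perspective, binocular disparity) is treated as an unbiased
# estimate of slant with some variance; the optimal combination weights each
# cue by its reliability (inverse variance), and the combined variance is
# always lower than either cue's alone.
def combine(mu1: float, var1: float, mu2: float, var2: float):
    """Reliability-weighted average of two cues and its reduced variance."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    mu = w1 * mu1 + (1 - w1) * mu2
    var = 1.0 / (1 / var1 + 1 / var2)
    return mu, var

# Two equally reliable slant estimates: the combined estimate sits between
# them and its variance is halved.
mu, var = combine(10.0, 4.0, 12.0, 4.0)
print(f"combined estimate: {mu}, variance: {var}")
```

Children below about 12 behave, on this account, as if they report one cue at a time rather than computing this weighted average, which fits both their lower accuracy here and their immunity to fusion in the second study.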

However, combining sensory information can result in an inability to separate the individual pieces of information feeding into the overall percept. This is known as "sensory fusion", an effect that has been documented in adults.

In a second study scientists asked whether children might be able to avoid sensory fusion by keeping visual information separate. Researchers used special 3D discs in which perspective and binocular information sometimes disagreed. Because adults tended to take an average of the perspective and the binocular information, they were poor at determining whether the slant of a disc was the same as or different from that of a comparison disc. By contrast, 6-year-olds had no trouble spotting differences between discs of this kind. This shows that 6-year-olds can "see" separate kinds of visual information that adults cannot.

Professor Denis Mareschal, from the Centre for Brain and Cognitive Development at Birkbeck, who co-authored the study, explained: "Babies have to learn how different senses relate to each other and to the outside world. While children are still developing, the brain must determine the relationships between different kinds of sensory information to know which kinds go together and how. It may be adaptive for children not to integrate information while they are still learning such relationships - those between vision and sound, or between perspective and binocular visual cues."

A future aim is to use functional magnetic resonance imaging (fMRI) to determine the brain changes that underlie children's abilities to combine visual information in an adult-like way.

More information: The paper 'Fusion of visual cues is not mandatory in children' is published online in Proceedings of the National Academy of Sciences today.

Provided by University College London

Monday, September 13, 2010

Lecture: The Genesis of Neuroartes

Journal of Consciousness Exploration & Research

Journal of Consciousness Exploration & Research has just published its latest issue, V1(6), entitled "Various Aspects of Consciousness & Nature of Time Continued". We invite you to review the Table of Contents below and then visit the JCER web site to review articles and items of interest.

Thanks for the continuing interest in and support of JCER,

Huping Hu
JCER Editor
QuantumDream, Inc.

Journal of Consciousness Exploration & Research
Vol 1, No 6 (2010): Various Aspects of Consciousness & Nature of Time
Table of Contents

Consciousness, Mind and Matter in Indian Philosophy
Syamala Hari

Consciousness, Lack of Imagination & Samapatti
Alan J. Oliver

Interactions among Minds/Brains: Individual Consciousness and Inter-subjectivity in Dual-Aspect Framework
Ram L. Pandey Vimal

The Great Divide That Separates Humans from Animals
Roger Cook

'Conventional Time t' versus 'Rhythmic Time T' (Two Faces of One
Peter Beamish

Review Article
Eminent Entities: Short Accounts of Some Major Thinkers in Consciousness
Peter Hankins

Commentary on Nixon's Guest Editorial in JCER V1(5): Consciousness, Mind and Matter in Indian Philosophy
Syamala Hari

Response to Commentary
Response to the Commentary of Syamala Hari: 'Who Can Say Whence It All Came, and How Creation Happened?' ('Rig Veda', X, 129)
Gregory M. Nixon

Book Review
Review of Charles T. Tart's Book: The End of Materialism: How Evidence of the Paranormal Is Bringing Science and Spirit Together
Stephen P. Smith

Review of Gregg Braden's Book: The Spontaneous Healing of Belief: Shattering the Paradigm of False Limits
Stephen P. Smith

Review of B. Alan Wallace & Brian Hodel's Book: Embracing Mind: The Common Ground of Science and Spirituality
Stephen P. Smith

Review of David Skrbina's Book: Panpsychism in the West
Stephen P. Smith

Review of Manjir Samanta-Laughton's Book: Punk Science: Inside the Mind of
Stephen P. Smith


Thursday, September 9, 2010

Scientists identify new gene for memory

September 8th, 2010 in Medicine & Health / Neuroscience

A team led by a Scripps Research Institute scientist has for the first time identified a new gene that is required for memory formation in Drosophila, the common fruit fly. The gene may have similar functions in humans, shedding light on neurological disorders such as Alzheimer's disease or human learning disabilities.

The study was published in the September 9, 2010 edition (Vol. 67, No. 5) of the journal Neuron.

"This is the first time we have a new memory and learning gene that lies outside what has been considered the most fundamental signaling pathway that underlies learning in the fruit fly," said Ron Davis, chair of Scripps Research Department of Neuroscience who led the study. "Since many of the learning and memory genes originally identified in the fruit fly are clearly involved in human neurological or psychiatric diseases, this discovery may offer significant new insights into multiple neurological disorders. We're definitely in the right ballpark."

The study shows that mutant alleles of the gene known as gilgamesh (gish) impair short-term memory formation in Drosophila olfactory associative learning - learning that links a specific odor with a negative or positive reinforcer.

Because Drosophila learning genes are known to be conserved in higher organisms including humans, they often provide new insights into human brain disorders. For example, the Drosophila gene known as dunce, which Davis helped identify several years ago, provided clues to the genetics of the devastating psychiatric condition of schizophrenia. Recent studies have revealed that the human version of the dunce gene is a susceptibility determinant for schizophrenia. In a similar way, any new learning gene identified in Drosophila, including gilgamesh, may provide new clues to genes involved in human neurological or psychiatric disorders.

"We're still early in the process of making connections between Drosophila memory and learning genes and the pathology of human disease," Davis said, "but it's already clear that many of these genes will provide important conceptual information and potential insights into human brain disorders. In addition, there is every reason to believe that their gene products will one day become the target of new drugs to enhance cognition. Uncovering this new gene and its signaling pathway helps bring us that much closer to this goal."

New Gene, New Pathway

To identify the new gene, Davis and his colleagues used a novel screen for new memory mutants, looking for fly lines that showed abnormal learning when only one of the two copies of a gene was mutant.

"We used a dominant screen because we realized that behavior such as learning and memory are very sensitive to gene dosage," Davis said. "That is, the mutation of just one copy of a gene involved in behavior is often sufficient to produce an abnormality."

The formation of new memories occurs, in part, through the activation of molecular signaling pathways within neurons that comprise the neural circuitry for learning, and for storing and retrieving those memories.

One of the things that makes the function of gish so interesting, Davis noted, is that it is independent of the rutabaga gene, which is known to be essential for memory formation in Drosophila. The rutabaga gene encodes an enzyme that converts ATP, the energy currency of cells, into cyclic AMP (cAMP), which plays a critical role in olfactory learning in Drosophila.

"The cAMP pathway is the major signaling pathway used by Drosophila neurons to turn on other enzymes and genes that are necessary for memories to form," Davis said. "In fruit flies, memory and learning revolves around mutants of this pathway. It is fundamental to the process."

In the new study, gish provided an answer to a longstanding problem in Drosophila learning and memory research - the unexplained residual memory performance of flies carrying rutabaga mutations, which indicated the existence of an independent signaling pathway for memory formation. While other memory mutants have been identified, until the discovery of gish none had been shown to reduce the residual learning of mutant rutabaga flies.

Interestingly, the study found that the gish gene encodes a kind of casein kinase (enzymes that help regulate signaling pathways in cells) called Iγ (CKIγ). This is the first time this specific kinase has been implicated in memory formation.

Identifying all the signaling pathways that are engaged in specific neurons during memory formation, and how they interact with one another to encode memories, is an issue of great importance, Davis said - one that needs more exploration for a deeper understanding of memory formation and memory failure in humans.

"The truth is that we have an extremely sketchy understanding of what causes diseases like Alzheimer's," Davis said. "We need to understand a lot more than we do now about normal brain functions like memory and learning before we have a high probability of succeeding in the development of a cure."

More information: "Gilgamesh is required for Rutabaga-independent Olfactory Learning in Drosophila," Ying Tan et al. Neuron.

Provided by The Scripps Research Institute

Wednesday, September 8, 2010

The brain speaks

The brain speaks: Scientists decode words from brain signals
September 7th, 2010 in Medicine & Health / Neuroscience

This magnetic resonance image (MRI) of an epileptic patient's brain is superimposed with the locations of two kinds of electrodes: conventional ECoG electrodes (yellow) to help locate the source of his seizures so surgeons could operate to prevent them, and two grids (red) of 16 experimental microECoG electrodes used to read speech signals from the brain. University of Utah scientists used the microelectrodes to translate brain signals into words -- a step toward devices that would let severely paralyzed people speak. Credit: Kai Miller, University of Washington.

In an early step toward letting severely paralyzed people speak with their thoughts, University of Utah researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain.

"We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralyzed patients who cannot now speak," says Bradley Greger, an assistant professor of bioengineering.

Because the method needs much more improvement and involves placing electrodes on the brain, he expects it will be a few years before clinical trials on paralyzed people who cannot speak due to so-called "locked-in syndrome."

The Journal of Neural Engineering's September issue is publishing Greger's study showing the feasibility of translating brain signals into computer-spoken words.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy - temporary partial skull removal - so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals - such as those generated when the man said the words "yes" and "no" - they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time - better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer.
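The chance baselines quoted here follow from the number of alternatives: guessing between two words succeeds 50 percent of the time, and among ten words 10 percent of the time. The decoding step can be sketched as a nearest-prototype classifier over per-trial signal features. Everything below (the feature count, noise level and classifier) is an invented toy to illustrate the idea of above-chance decoding, not the study's actual analysis:

```python
import random

random.seed(0)

WORDS = ["yes", "no", "hot", "cold", "hungry",
         "thirsty", "hello", "goodbye", "more", "less"]
N_FEATURES = 32   # hypothetical features per trial, e.g. per-electrode band power
NOISE = 2.0       # trial-to-trial noise, in units of prototype spread

# Invented per-word "signal prototypes" standing in for each word's
# characteristic pattern across the microelectrodes.
prototypes = {w: [random.gauss(0, 1) for _ in range(N_FEATURES)] for w in WORDS}

def trial(word):
    """One noisy repetition of a word."""
    return [mu + random.gauss(0, NOISE) for mu in prototypes[word]]

def classify(signal):
    """Pick the word whose prototype is closest (squared Euclidean distance)."""
    def dist(w):
        return sum((s - p) ** 2 for s, p in zip(signal, prototypes[w]))
    return min(WORDS, key=dist)

# 10-way decoding over 50 repetitions per word; chance is 1/10 = 10%.
trials = [(w, trial(w)) for w in WORDS for _ in range(50)]
acc10 = sum(classify(sig) == w for w, sig in trials) / len(trials)
print(f"10-way accuracy: {acc10:.0%} (chance: 10%)")
```

The same logic explains why the real decoder's 28 to 48 percent on ten words is meaningful despite sounding low: any accuracy reliably above the 10 percent floor shows the signals carry word information.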

"This is proof of concept," Greger says, "We've proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful."

This photo shows two kinds of electrodes sitting atop a severely epileptic patient's brain after part of his skull was removed temporarily. The larger, numbered, button-like electrodes are ECoGs used by surgeons to locate and then remove brain areas responsible for severe epileptic seizures. While the patient had to undergo that procedure, he volunteered to let researchers place two small grids -- each with 16 tiny "microECoG" electrodes -- over two brain areas responsible for speech. These grids are at the end of the green and orange wire bundles, and the grids are represented by two sets of 16 white dots since the actual grids cannot be seen easily in the photo. University of Utah scientists used the microelectrodes to translate speech-related brain signals into actual words -- a step toward future machines to allow severely paralyzed people to speak. Credit: University of Utah Department of Neurosurgery.

People who eventually could benefit from a wireless device that converts thoughts into computer-spoken words include those paralyzed by stroke, Lou Gehrig's disease and trauma, Greger says. People who are now "locked in" often communicate with any movement they can make - blinking an eye or moving a hand slightly - to arduously pick letters or words from a list.

University of Utah colleagues who conducted the study with Greger included electrical engineers Spencer Kellis, a doctoral student, and Richard Brown, dean of the College of Engineering; and Paul House, an assistant professor of neurosurgery. Another coauthor was Kai Miller, a neuroscientist at the University of Washington in Seattle.

The research was funded by the National Institutes of Health, the Defense Advanced Research Projects Agency, the University of Utah Research Foundation and the National Science Foundation.

Nonpenetrating Microelectrodes Read Brain's Speech Signals

The study used a new kind of nonpenetrating microelectrode that sits on the brain without poking into it. These electrodes are known as microECoGs because they are a small version of the much larger electrodes used for electrocorticography, or ECoG, developed a half century ago.

For patients with severe epileptic seizures uncontrolled by medication, surgeons remove part of the skull and place a silicone mat containing ECoG electrodes over the brain for days to weeks while the cranium is held in place but not reattached. The button-sized ECoG electrodes don't penetrate the brain but detect abnormal electrical activity and allow surgeons to locate and remove a small portion of the brain causing the seizures.

Last year, Greger and colleagues published a study showing the much smaller microECoG electrodes could "read" brain signals controlling arm movements. One of the epileptic patients involved in that study also volunteered for the new study.

Because the microelectrodes do not penetrate brain matter, they are considered safe to place on speech areas of the brain - something that cannot be done with penetrating electrodes that have been used in experimental devices to help paralyzed people control a computer cursor or an artificial arm.

EEG electrodes used on the skull to record brain waves are too big and record too many brain signals to be used easily for decoding speech signals from paralyzed people.

Translating Nerve Signals into Words

In the new study, the microelectrodes were used to detect weak electrical signals from the brain generated by a few thousand neurons or nerve cells.

Each of the two grids of 16 microECoGs, spaced 1 millimeter (about one twenty-fifth of an inch) apart, was placed over one of two speech areas of the brain: first, the facial motor cortex, which controls movements of the mouth, lips, tongue and face - basically the muscles involved in speaking; and second, Wernicke's area, a little-understood part of the human brain tied to language comprehension.

The study was conducted during one-hour sessions on four consecutive days. Researchers told the epilepsy patient to repeat one of the 10 words each time they pointed at the patient. Brain signals were recorded via the two grids of microelectrodes. Each of the 10 words was repeated from 31 to 96 times, depending on how tired the patient was.

An array of 16 microelectrodes -- known as a microECoG grid -- is arranged in a four-by-four array and shown next to a US quarter-dollar coin with a Utah state design on its "tail" side. University of Utah researchers placed two such microelectrode grids over speech areas of a patient's brain and used them to decode brain signals into words. The technology someday might help severely paralyzed patients "speak" with their thoughts, which would be converted into a computerized voice. Credit: Spencer Kellis, University of Utah

Then the researchers "looked for patterns in the brain signals that correspond to the different words" by analyzing changes in strength of different frequencies within each nerve signal, says Greger.

The researchers found that each spoken word produced varying brain signals, and thus the pattern of electrodes that most accurately identified each word varied from word to word. They say that supports the theory that closely spaced microelectrodes can capture signals from single, column-shaped processing units of neurons in the brain.
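"Changes in strength of different frequencies" refers to spectral power: how much energy a recorded signal carries in a given frequency band. A self-contained sketch of that computation, using a naive discrete Fourier transform (real pipelines would use optimized FFTs, and the study's own band choices are not given here):

```python
import math

def band_power(signal, fs, lo, hi):
    """Power of `signal` (sampled at `fs` Hz) in the frequency band [lo, hi) Hz,
    computed with a naive discrete Fourier transform (fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):            # positive frequencies, skipping DC
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

# A 40 Hz test tone sampled at 1000 Hz: its power shows up in a
# 30-60 Hz band and not in a 60-200 Hz band.
fs = 1000
tone = [math.sin(2 * math.pi * 40 * i / fs) for i in range(fs)]
low_band = band_power(tone, fs, 30, 60)
high_band = band_power(tone, fs, 60, 200)
```

A decoder would compute such band powers per electrode per trial and use the resulting pattern as the feature vector for classification.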

One unexpected finding: When the patient repeated words, the facial motor cortex was most active and Wernicke's area was less active. Yet Wernicke's area "lit up" when the patient was thanked by researchers after repeating words. It shows Wernicke's area is more involved in high-level understanding of language, while the facial motor cortex controls facial muscles that help produce sounds, Greger says.

The researchers were most accurate - 85 percent - in distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex. They were less accurate - 76 percent - when using signals from Wernicke's area. Combining data from both areas didn't improve accuracy, showing that brain signals from Wernicke's area don't add much to those from the facial motor cortex.

When the scientists selected the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex, their accuracy in distinguishing one of two words from the other rose to almost 90 percent.

In the more difficult test of distinguishing brain signals for one word from signals for the other nine words, the researchers initially were accurate 28 percent of the time - not good, but better than the 10 percent random chance of accuracy. However, when they focused on signals from the five most accurate electrodes, they identified the correct word almost half (48 percent) of the time.
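Restricting the decoder to its five best electrodes is a simple feature-selection step: score each electrode by how well it alone separates the words, then keep the top five. A generic sketch with invented single-electrode accuracies (the study's actual per-electrode scores are not given):

```python
def top_electrodes(scores, k=5):
    """Return the ids of the k electrodes with the highest decoding scores."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Invented single-electrode accuracies for a 16-electrode grid.
scores = {i: 0.10 + 0.02 * (i % 7) for i in range(16)}
best = top_electrodes(scores, k=5)   # the five most informative electrodes
```

Dropping uninformative electrodes removes noise from the feature vector, which is consistent with the jump from 28 to 48 percent the researchers report.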

"It doesn't mean the problem is completely solved and we can all go home," Greger says. "It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate."

"The obvious next step - and this is what we are doing right now - is to do it with bigger microelectrode grids" with 121 micro electrodes in an 11-by-11 grid, he says. "We can make the grid bigger, have more electrodes and get a tremendous amount of data out of the brain, which probably means more words and better accuracy."

Provided by University of Utah

Thursday, September 2, 2010

The neural basis of the depressive self

August 31st, 2010 in Medicine & Health / Psychology & Psychiatry

Depression is currently defined by specific clinical symptoms, such as sadness, difficulty experiencing pleasure and sleep problems, present for at least two weeks with impairment of psychosocial functioning. These symptoms guide the physician in making a diagnosis and selecting antidepressant treatment such as drugs or psychotherapy.

Currently, at least 40% of depressed patients benefit from antidepressant treatment, whereas 20-30% of patients may suffer from chronic depression that negatively impacts their quality of life. To improve the efficacy of treatment and reduce the burden of depressive disorders, depression clearly needs to be defined at the neurobiological level.

Role of neurobiological markers in depression

Current research efforts are devoted to studying the neural bases of depression and treatment-induced changes using modern brain imaging techniques such as functional magnetic resonance imaging (fMRI). For many years it has been clear that depression is associated with dysfunction of specific brain regions involved in cognitive control and emotional response.

A recent fMRI study showed that depressed patients had abnormal activation of the medial prefrontal cortex (Figure 1; Lemogne et al. 2009). During this study, subjects had to judge whether a personality trait described them (e.g., 'Am I selfish?') or whether a trait was generally desirable (e.g., 'Is it good or bad to be greedy?'). Dysfunction of the medial prefrontal region may explain specific complaints of depressed patients such as self-blame, rumination and feelings of guilt.

It was observed that this activation pattern was maintained over the course of depression after 8 weeks of antidepressant treatment. These results are difficult to interpret but suggest that, after remission of depression, some patients show persistent abnormalities of specific brain regions. Such abnormalities may indicate the need for complementary treatment such as cognitive behavioural therapy in order to reduce the risk of depressive recurrence.

Overall, these findings suggest that brain imaging studies could provide diagnostic biomarkers and improve patients' chances of responding to specific treatment modalities. Such neurobiological markers of depression may help psychiatrists tailor antidepressant treatment to the brain and the biological needs of the patient.


In the general population, depression is still frequently attributed to lifestyle, lack of willpower and 'psychological weakness'. However, the results of brain imaging studies have clearly confirmed that depression is a true brain disease associated with dysfunction of specific brain regions involved in cognitive control and emotional response.

Depression needs to be defined at the neurobiological level in order to improve the efficacy of treatment and reduce the burden of depressive disorders.

Neurobiological markers of depression may help psychiatrists to target specific neural processes and regions involved in affective regulation and to tailor antidepressant treatment according to the biological needs of the patients. This could improve patients' chances of responding to specific treatment modalities.

More information: Lemogne C, le Bastard G, Mayberg H, et al. In search of the depressive self: extended medial prefrontal network during self-referential processing in major depression. Soc Cogn Affect Neurosci 2009;4:305-312

Provided by European College of Neuropsychopharmacology