Monday, July 18, 2011

Your brain on androids.

July 14th, 2011 in Neuroscience

Brain response to videos of a robot, android and human. The researchers say they see, in the android condition, evidence of a mismatch between the human-like appearance of the android and its robotic motion. Credit: Courtesy Ayse Saygin, UC San Diego

Ever get the heebie-jeebies at a wax museum? Feel uneasy with an anthropomorphic robot? What about playing a video game or watching an animated movie, where the human characters are pretty realistic but just not quite right and maybe a bit creepy? If yes, then you've probably been a visitor to what's called the "uncanny valley."

The phenomenon has been described anecdotally for years, but how and why this happens is still a subject of debate in robotics, computer graphics and neuroscience. Now an international team of researchers, led by Ayse Pinar Saygin of the University of California, San Diego, has taken a peek inside the brains of people viewing videos of an uncanny android (compared to videos of a human and a robot-looking robot).

Published in the Oxford University Press journal Social Cognitive and Affective Neuroscience, the functional MRI study suggests that the effect may be due to a perceptual mismatch between appearance and motion.

The term "uncanny valley" refers to an artificial agent's drop in likeability when it becomes too humanlike. People respond positively to an agent that shares some characteristics with humans – think dolls, cartoon animals, R2D2. As the agent becomes more human-like, it becomes more likeable. But at some point that upward trajectory stops and instead the agent is perceived as strange and disconcerting. Many viewers, for example, find the characters in the animated film "Polar Express" to be off-putting. And most modern androids, including the Japanese Repliee Q2 used in the study here, are also thought to fall into the uncanny valley.
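The article gives no formula for this curve; purely as an illustration, the nonmonotonic relationship can be sketched as a linear rise in affinity interrupted by a Gaussian "valley" just short of full human-likeness (all constants below are invented):

```python
import math

def affinity(human_likeness):
    """Illustrative uncanny-valley curve: likeability rises with
    human-likeness, then dips sharply near (but not at) full realism.
    The dip location (0.85), depth (1.6), and width (0.005) are invented."""
    valley = 1.6 * math.exp(-((human_likeness - 0.85) ** 2) / 0.005)
    return human_likeness - valley

# A robot-looking robot (0.5) rates higher than a too-humanlike
# android (0.85), while an actual human (1.0) rates highest of all.
```

This reproduces only the qualitative shape of the curve: the android condition falls into a region where affinity is lower than for either the plainly mechanical robot or the real human.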

Saygin and her colleagues set out to discover if what they call the "action perception system" in the human brain is tuned more to human appearance or human motion, with the general goal, they write, "of identifying the functional properties of brain systems that allow us to understand others' body movements and actions."

They tested 20 subjects, aged 20 to 36, who had no experience working with robots, had not spent time in Japan (where there is potentially more cultural exposure to, and acceptance of, androids), and did not have friends or family from Japan.

The subjects were shown 12 videos of Repliee Q2 performing such ordinary actions as waving, nodding, taking a drink of water and picking up a piece of paper from a table. They were also shown videos of the same actions performed by the human on whom the android was modeled and by a stripped version of the android – skinned to its underlying metal joints and wiring, revealing its mechanics until it could no longer be mistaken for a human. That is, they set up three conditions: a human with biological appearance and movement; a robot with mechanical appearance and mechanical motion; and a human-seeming agent with the exact same mechanical movement as the robot.

At the start of the experiment, the subjects were shown each of the videos outside the fMRI scanner and were informed about which was a robot and which human.

The biggest difference in brain response the researchers noticed was during the android condition – in the parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain's visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons (neurons also known as "monkey-see, monkey-do neurons" or "empathy neurons").

According to their interpretation of the fMRI results, the researchers say they saw, in essence, evidence of mismatch. The brain "lit up" when the human-like appearance of the android and its robotic motion "didn't compute."

"The brain doesn't seem tuned to care about either biological appearance or biological motion per se," said Saygin, an assistant professor of cognitive science at UC San Diego and alumna of the same department. "What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."

In other words, if it looks human and moves like a human, we are OK with that. If it looks like a robot and acts like a robot, we are OK with that, too; our brains have no difficulty processing the information. The trouble arises when – contrary to a lifetime of expectations – appearance and motion are at odds.

"As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners," the researchers write. "Or perhaps, we will decide it is not a good idea to make them so closely in our image after all."

Saygin thinks it's "not so crazy to suggest we brain-test-drive robots or animated characters before spending millions of dollars on their development."

It's not too practical, though, to do these test-drives in expensive and hard-to-come-by fMRI scanners. So Saygin and her students are currently on the hunt for an analogous EEG signal; EEG technology is cheap enough that electrode caps are being developed for home use.

Provided by University of California - San Diego

Risk factors predictive of psychiatric symptoms after traumatic brain injury


July 12th, 2011 in Psychology & Psychiatry

A history of psychiatric illness such as depression or anxiety before a traumatic brain injury (TBI), together with other risk factors, is strongly predictive of post-TBI psychiatric disorders, according to an article published in the Journal of Neurotrauma.

In addition to a pre-injury psychiatric disorder, two other factors are early indicators of an increased risk for psychiatric illness one year after a TBI: psychiatric symptoms during the acute post-injury period, and a concurrent limb injury. Kate Rachel Gould, DPsych, Jennie Louise Ponsford, PhD, Lisa Johnston, PhD, and Michael Schönberger, PhD, Epworth Hospital and Monash University, Melbourne, Australia, and University of Freiburg, Baden-Württemberg, Germany, also describe a link between risk of psychiatric symptoms and unemployment, pain, and poor quality of life during the 12-month post-TBI period.

In the presence of a limb injury, patients who suffered a TBI had a 6.4-fold greater risk of psychiatric disorders at one year, and a 4-fold greater risk of depression in particular, compared to patients without a limb injury. The authors report their findings in the article, "Predictive and Associated Factors of Psychiatric Disorders after Traumatic Brain Injury: A Prospective Study."
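The 6.4-fold figure is a relative risk: the rate of psychiatric disorders among TBI patients with a limb injury divided by the rate among those without. A minimal sketch of that arithmetic (the counts below are invented solely to yield a 6.4-fold ratio; they are not taken from the study):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome in the exposed group divided by
    risk in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical counts: 32 of 50 TBI patients with a limb injury developed
# a psychiatric disorder, vs. 10 of 100 without a limb injury.
# 0.64 / 0.10 gives roughly the reported 6.4-fold risk.
ratio = relative_risk(32, 50, 10, 100)
```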

More information: The article is available free online at

Sunday, July 17, 2011

Study demonstrates how memory can be preserved -- and forgetting prevented

July 8th, 2011 in Neuroscience

As any student who's had to study for multiple exams can tell you, trying to learn two different sets of facts one after another is challenging. As you study for the physics exam, almost inevitably some of the information for the history exam is forgotten. It's been widely believed that this interference between memories develops because the brain simply doesn't have the capacity necessary to process both memories in quick succession. But is this truly the case?

A new study by researchers at Beth Israel Deaconess Medical Center (BIDMC) suggests that specific brain areas actively orchestrate competition between memories, and that by disrupting targeted brain areas through transcranial magnetic stimulation (TMS), you can preserve memory -- and prevent forgetting.

The findings are described in the June 26 Advance On-line issue of Nature Neuroscience.

"For the last 100 years, it has been appreciated that trying to learn facts and skills in quick succession can be a frustrating exercise," explains Edwin Robertson, MD, DPhil, an Associate Professor of Neurology at Harvard Medical School and BIDMC. "Because no sooner has a new memory been acquired than its retention is jeopardized by learning another fact or skill."

Robertson, together with BIDMC neurologist and coauthor Daniel Cohen, MD, studied a group of 120 college-age students who performed two consecutive memory tests. The first involved a finger-tapping motor-skills task, the second a declarative memory task in which participants memorized a series of words. (Half of the group performed the tasks in this order, while the other half learned the same two tasks in reverse order.)

"The study subjects performed these back-to-back exercises in the morning," he explains. "They then returned 12 hours later and re-performed the tests. As predicted, their recall for either the word list or the motor-skill task had decreased when they were re-tested."

In the second part of the study, Robertson and Cohen administered TMS following the initial testing. TMS is a noninvasive technique that uses a magnetic stimulator to generate a magnetic field that can create a flow of current in the brain.

"Because brain cells communicate through a process of chemical and electrical signals, applying a mild electrical current to the brain can influence the signals," Robertson explains. In this case, the researchers targeted two specific brain regions, the dorsolateral prefrontal cortex and the primary motor cortex. They discovered that by applying TMS to these areas, they were able to reduce the interference and competition between the motor-skill and word-list tasks, and both memories remained intact.

"This elegant study provides fundamental new insights into the way our brain copes with the challenge of learning multiple skills and making multiple memories," says Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at BIDMC. "Specific brain structures seem to carefully balance how much we retain and how much we forget. Learning and remembering is a dynamic process and our brain devotes resources to keep the process flexible. By better understanding this process, we may be able to find novel approaches to help enhance learning and treat patients with memory problems and learning disabilities."

"Our observations suggest that distinct mechanisms support the communication between different types of memory processing," adds Robertson. "This provides a more dynamic and flexible account of memory organization than was previously believed. We've demonstrated that the interference between memories is actively mediated by brain areas and so may serve an important function that has previously been overlooked."

Provided by Beth Israel Deaconess Medical Center

New page of the Instituto de Neuroartes on Facebook

New gene for intellectual disability discovered

July 15th, 2011 in Genetics

A gene linked to intellectual disability was found in a study involving the Centre for Addiction and Mental Health (CAMH) – a discovery that was greatly accelerated by international collaboration and new genetic sequencing technology, which is now being used at CAMH.

CAMH Senior Scientist Dr. John Vincent and colleagues identified defects in the gene MAN1B1 among five families in which 12 children had intellectual disability. The results will be published in the July issue of the American Journal of Human Genetics.

Intellectual disability is a broad term describing individuals with limitations in mental abilities and in functioning in daily life. It affects one to three per cent of the population, and is often caused by genetic defects.

The individuals affected had similar physical features, and all had delays in walking and speaking. Some learned to care for themselves, while others needed help bathing and dressing. In addition, some had epilepsy or problems with overeating.

All were found to have two copies of a defective MAN1B1 gene, one inherited from each parent. These were different types of mutations in the same gene – yet the outcome, intellectual disability, was the same in different families – confirming that this gene was the cause of the disorder.
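The inheritance pattern described here is autosomal recessive: each unaffected carrier parent passes on the defective allele half the time, so on average one child in four inherits two defective copies. A minimal sketch of that arithmetic (the allele labels are invented for illustration):

```python
from itertools import product

def affected_fraction(parent1, parent2):
    """Fraction of equally likely offspring genotypes that carry two
    defective copies, given each parent's two alleles."""
    offspring = list(product(parent1, parent2))
    return sum(1 for child in offspring if child == ("m", "m")) / len(offspring)

# Two unaffected carrier parents ("m" = defective MAN1B1 allele,
# "+" = functional allele): 1 of the 4 combinations is ("m", "m").
carrier = ("m", "+")
print(affected_fraction(carrier, carrier))  # 0.25
```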

"This mutation was seen in five families, which is one of the highest numbers seen so far for genes causing this form of recessive intellectual disability," said Dr. Vincent, who last year made a breakthrough by identifying the PTCHD1 gene's link to autism.

MAN1B1 encodes an enzyme that has a quality-control function in cells. This enzyme is believed to have a role in "proofreading" specific proteins after they are created in cells, and then recycling faulty ones, rather than allowing them to be released from the cell into the body. With the defective gene, this does not occur.

"This is a process that occurs throughout a person's lifetime, and is probably involved in most tissues in the body, so it is surprising that the children affected didn't have more symptoms," said Dr. Vincent, who is also head of the Molecular Neuropsychiatry and Development Laboratory at CAMH.

The discovery benefited from collaboration and the availability of new technology. Initially, the CAMH-Pakistani research team identified four families in Pakistan with multiple affected family members. Because there had been intermarriage among cousins in these families, the researchers were able to begin mapping genes in particular regions of risk.

By teaming up with researchers from the Max Planck Institute in Berlin, Germany, who were conducting similar work on a family in Iran, they were able to focus on three genes of interest. These three genes were identified using next-generation sequencing, which sped up the identification of the MAN1B1 gene. In addition, a University of Georgia scientist, Dr. Kelley Moremen, recreated one of the MAN1B1 mutations in cells, which resulted in a 1,300-fold decrease in enzyme activity.

To date, MAN1B1 is the eighth known gene connected with recessive intellectual disability, but there are likely many more involved. "We would like to screen children with intellectual disability in a western population," said Dr. Vincent.

Provided by Centre for Addiction and Mental Health


Friday, July 15, 2011

When the brain remembers but the patient doesn't.

July 14th, 2011 in Neuroscience

Brain damage can cause significant changes in behaviour, such as loss of cognitive skills, but also reveals much about how the nervous system deals with consciousness. New findings reported in the July 2011 issue of Elsevier's Cortex demonstrate how the unconscious brain continues to process information even when the conscious brain is incapacitated.

Dr Stéphane Simon and collaborators in Professor Alan Pegna's laboratory at Geneva University Hospital studied a patient who had suffered brain damage in an accident and developed prosopagnosia, or face blindness. They measured her non-conscious responses to familiar faces using different physiological measures of brain activity, including fMRI and EEG. The patient was shown photographs of unknown and famous people, some of whom were famous before the onset of her prosopagnosia (and others who had become famous more recently). Despite the fact that the patient could not recognize any of the famous faces, her brain activity responded to the faces that she would have recognized before the onset of her condition.

"The results of this study demonstrate that implicit processing might continue to occur despite the presence of an apparent impairment in conscious processing," says Professor Pegna, "The study has also shed light on what is required for our brain to understand what we see around us. Together with other research findings, this study suggests that the collaboration of several cerebral structures in a specific temporal order is necessary for visual awareness to arise."

More information: "When the brain remembers, but the patient doesn't: Converging fMRI and EEG evidence for covert recognition in a case of prosopagnosia," Cortex, Volume 47, Issue 7 (July 2011).

Thursday, July 7, 2011

An account of the path to realizing tools for controlling brain circuits with light

The Birth of Optogenetics

An account of the path to realizing tools for controlling brain circuits with light

By Edward S. Boyden | July 1, 2011

Blue light hits a neuron engineered to express opsin molecules on its surface, opening a channel through which ions pass into the cell—activating the neuron. Credit: MIT McGovern Institute, Julie Pryor, Charles Jennings, Sputnik Animation, Ed Boyden

For a few years now, I’ve taught a course at MIT called “Principles of Neuroengineering.” The idea of the class is to get students thinking about how to create neurotechnology innovations—new inventions that can solve outstanding scientific questions or address unmet clinical needs. Designing neurotechnologies is difficult because of the complex properties of the brain: its inaccessibility, heterogeneity, fragility, anatomical richness, and high speed of operation. To illustrate the process, I decided to write a case study about the birth and development of an innovation with which I have been intimately involved: optogenetics—a toolset of genetically encoded molecules that, when targeted to specific neurons in the brain, allow the activity of those neurons to be driven or silenced by light.
A strategy: controlling the brain with light

As an undergraduate at MIT, I studied physics and electrical engineering and got a good deal of firsthand experience in designing methods to control complex systems. By the time I graduated, I had become quite interested in developing strategies for understanding and engineering the brain. After graduating in 1999, I traveled to Stanford to begin a PhD in neuroscience, setting up a home base in Richard Tsien’s lab. In my first year at Stanford I was fortunate enough to meet many nearby biologists willing to do collaborative experiments, ranging from attempting the assembly of complex neural circuits in vitro to behavioral experiments with rhesus macaques. For my thesis work, I joined the labs of Richard Tsien and of Jennifer Raymond in spring 2000, to study how neural circuits adapt in order to control movements of the body as the circumstances in the surrounding world change.

In parallel, I started thinking about new technologies for controlling the electrical activity of specific neuron types embedded within intact brain circuits. That spring, I discussed this problem—during brainstorming sessions that often ran late into the night—with Karl Deisseroth, then a Stanford MD-PhD student also doing research in Tsien’s lab. We started to think about delivering stretch-sensitive ion channels to specific neurons, and then tethering magnetic beads selectively to the channels, so that applying an appropriate magnetic field would result in the bead’s moving and opening the ion channel, thus activating the targeted neurons.

By late spring 2000, however, I had become fascinated by a simpler and potentially easier-to-implement approach: using naturally occurring microbial opsins, which would pump ions into or out of neurons in response to light. Opsins had been studied since the 1970s because of their fascinating biophysical properties, and for the evolutionary insights they offer into how life forms use light as an energy source or sensory cue.1 These membrane-spanning microbial molecules—proteins with seven helical domains—react to light by transporting ions across the lipid membranes of cells in which they are genetically expressed. (See the illustration above.) For this strategy to work, an opsin would have to be expressed in the neuron’s lipid membrane and, once in place, efficiently perform this ion-transport function. One reason for optimism was that bacteriorhodopsin had successfully been expressed in eukaryotic cell membranes—including those of yeast cells and frog oocytes—and had pumped ions in response to light in these heterologous expression systems. And in 1999, researchers had shown that, although many halorhodopsins might work best in the high salinity environments in which their host archaea naturally live (i.e., in very high chloride concentrations), a halorhodopsin from Natronomonas pharaonis (Halo/NpHR) functioned best at chloride levels comparable to those in the mammalian brain.

I was intrigued by this, and in May 2000 I e-mailed the opsin pioneer Janos Lanyi, asking for a clone of the N. pharaonis halorhodopsin, for the purpose of actively controlling neurons with light. Janos kindly asked his collaborator Richard Needleman to send it to me. But the reality of graduate school was setting in: unfortunately, I had already left Stanford for the summer to take a neuroscience class at the Marine Biology Laboratory in Woods Hole. I asked Richard to send the clone to Karl. When I returned to Stanford in the fall, I was so busy learning all the skills I would need for my thesis work on motor control that the opsin project took a backseat for a while.
The channelrhodopsin collaboration

In 2002 a pioneering paper from the lab of Gero Miesenböck showed that genetic expression of a three-gene Drosophila phototransduction cascade in neurons allowed the neurons to be excited by light, and suggested that the ability to activate specific neurons with light could serve as a tool for analyzing neural circuits.3 But the light-driven currents mediated by this system were slow, and this technical issue may have been a factor that limited adoption of the tool.

This paper was fresh in my mind when, in fall 2003, Karl e-mailed me to express interest in revisiting the magnetic-bead stimulation idea as a potential project that we could pursue together later—when he had his own lab, and I had finished my PhD and could join his lab as a postdoc. Karl was then a postdoctoral researcher in Robert Malenka’s lab (also at Stanford), and I was about halfway through my PhD. We explored the magnetic-bead idea between October 2003 and February 2004. Around that time I read a just-published paper by Georg Nagel, Ernst Bamberg, Peter Hegemann, and colleagues, announcing the discovery of channelrhodopsin-2 (ChR2), a light-gated cation channel and noting that the protein could be used as a tool to depolarize cultured mammalian cells in response to light.4

In February 2004, I proposed to Karl that we contact Georg to see if they had constructs they were willing to distribute. Karl got in touch with Georg in March, obtained the construct, and inserted the gene into a neural expression vector. Georg had made several further advances by then: he had created fusion proteins of ChR2 and yellow fluorescent protein, in order to monitor ChR2 expression, and had also found a ChR2 mutant with improved kinetics. Furthermore, Georg commented that in cell culture, ChR2 appeared to require little or no chemical supplementation in order to operate (in microbial opsins, the chemical chromophore all-trans-retinal must be attached to the protein to serve as the light absorber; it appeared to exist at sufficient levels in cell culture).

Finally, we were getting the ball rolling on targetable control of specific neural types. Karl optimized the gene expression conditions, and found that neurons could indeed tolerate ChR2 expression. Throughout July, working in off-hours, I debugged the optics of the Tsien-lab rig that I had often used in the past. Late at night, around 1 a.m. on August 4, 2004, I went into the lab, put a dish of cultured neurons expressing ChR2 into the microscope, patch-clamped a glowing neuron, and triggered the program that I had written to pulse blue light at the neurons. To my amazement, the very first neuron I patched fired precise action potentials in response to blue light. That night I collected data that demonstrated all the core principles we would publish a year later in Nature Neuroscience, announcing that ChR2 could be used to depolarize neurons.5 During that long, exciting first night of experimentation in 2004, I determined that ChR2 was safely expressed and physiologically functional in neurons. The neurons tolerated expression levels of the protein that were high enough to mediate strong neural depolarizations. Even with brief pulses of blue light, lasting just a few milliseconds, the magnitude of expressed-ChR2 photocurrents was large enough to mediate single action potentials in neurons, thus enabling temporally precise driving of spike trains. Serendipity had struck—the molecule was good enough in its wild-type form to be used in neurons right away. I e-mailed Karl, “Tired, but excited.” He shot back, “This is great!!!!!”

Transitions and optical neural silencers

In January 2005, Karl finished his postdoc and became an assistant professor of bioengineering and psychiatry at Stanford. Feng Zhang, then a first-year graduate student in chemistry (and now an assistant professor at MIT and at the Broad Institute), joined Karl’s new lab, where he cloned ChR2 into a lentiviral vector, and produced lentivirus that greatly increased the reliability of ChR2 expression in neurons. I was still working on my PhD, and continued to perform ChR2 experiments in the Tsien lab. Indeed, about half the ChR2 experiments in our first optogenetics paper were done in Richard Tsien’s lab, and I owe him a debt of gratitude for providing an environment in which new ideas could be pursued. I regret that, in our first optogenetics paper, we did not acknowledge that many of the key experiments had been done there. When I started working in Karl’s lab in late March 2005, we carried out experiments to flesh out all the figures for our paper, which appeared in Nature Neuroscience in August 2005, a year after that exhilarating first discovery that the technique worked.

Around that same time, Guoping Feng, then leading a lab at Duke University (and now a professor at MIT), began to make the first transgenic mice expressing ChR2 in neurons.6 Several other groups, including the Yawo, Herlitze, Landmesser, Nagel, Gottschalk, and Pan labs, rapidly published papers demonstrating the use of ChR2 in neurons in the months following.7,8,9,10 Clearly, the idea had been in the air, with many groups chasing the use of channelrhodopsin in neurons. These papers showed, among many other groundbreaking results, that no chemicals were needed to supplement ChR2 function in the living mammalian brain.

Almost immediately after I finished my PhD in October 2005, two months after our ChR2 paper came out, I began the faculty job search process. At the same time, I started a position as a postdoctoral researcher with Karl and with Mark Schnitzer at Stanford. The job-search process ended up consuming much of my time, and being on the road, I began doing bioengineering invention consulting in order to learn about other new technology areas that could be brought to bear on neuroscience. I accepted a faculty job offer from the MIT Media Lab in September 2006, and began the process of setting up a neuroengineering research group there.

Around that time, I began a collaboration with Xue Han, my then girlfriend (and a postdoctoral researcher in the lab of Richard Tsien), to revisit the original idea of using the N. pharaonis halorhodopsin to mediate optical neural silencing. Back in 2000, Karl and I had planned to pursue this jointly; there was now the potential for competition, since we were working separately. Xue and I ordered the gene to be synthesized in codon-optimized form by a DNA synthesis company, and, using the same Tsien-lab rig that had supported the channelrhodopsin paper, Xue acquired data showing that this halorhodopsin could indeed silence neural activity. Our paper11 appeared in the March 2007 issue of PLoS ONE; Karl’s group, working in parallel, published a paper in Nature a few weeks later, independently showing that this halorhodopsin could support light-driven silencing of neurons, and also including an impressive demonstration that it could be used to manipulate behavior in Caenorhabditis elegans.12 Later, both our groups teamed up to file a joint patent on the use of this halorhodopsin to silence neural activity. As a testament to the unanticipated side effects of following innovation where it leads you, Xue and I got married in 2009 (and she is now an assistant professor at Boston University).

I continued to survey a wide variety of microorganisms for better silencing opsins: the inexpensiveness of gene synthesis meant that it was possible to rapidly obtain genes codon-optimized for mammalian expression, and to screen them for new and interesting light-drivable neural functions. Brian Chow (now an assistant professor at the University of Pennsylvania) joined my lab at MIT as a postdoctoral researcher, and began collaborating with Xue. In 2008 they identified a new class of neural silencer, the archaerhodopsins, which were not only capable of high-amplitude neural silencing—the first such opsin that could support 100 percent shutdown of neurons in the awake, behaving animal—but also were capable of rapid recovery after having been illuminated for extended durations, unlike halorhodopsins, which took minutes to recover after long-duration illumination.13 Interestingly, the archaerhodopsins are light-driven outward pumps, similar to bacteriorhodopsin—they hyperpolarize neurons by pumping protons out of the cells. However, the resultant pH changes are as small as those produced by channelrhodopsins (which have proton conductances a million times greater than their sodium conductances), and well within the safe range of neuronal operation. Intriguingly, we discovered that the H. salinarum bacteriorhodopsin, the very first opsin characterized in the early 1970s, was able to mediate decent optical neural silencing, suggesting that perhaps opsins could have been applied to neuroscience decades ago.
Beyond luck: systematic discovery and engineering of optogenetic tools

An essential aspect of furthering this work is the free and open distribution of these optogenetic tools, even prior to publication. To facilitate teaching people how to use these tools, our lab regularly posts white papers on our website* with details on reagents and optical hardware (a complete optogenetics setup costs as little as a few thousand dollars for all required hardware and consumables), and we have also partnered with nonprofit organizations such as Addgene and the University of North Carolina Gene Therapy Center Vector Core to distribute DNA and viruses, respectively. We regularly host visitors to observe experiments being done in our lab, seeking to encourage the community building that has been central to the development of optogenetics from the beginning.

As a case study, the birth of optogenetics offers a number of interesting insights into the blend of factors that can lead to the creation of a neurotechnological innovation. The original optogenetic tools were identified partly through serendipity, guided by a multidisciplinary convergence and a neuroscience-driven knowledge of what might make a good tool. Clearly, the original serendipity that fostered the formation of this concept, and that accompanied the initial quick try to see if it would work in nerve cells, has now given way to the systematized luck of bioengineering, with its machines and algorithms designed to optimize the chances of finding something new. Many labs, driven by genomic mining and mutagenesis, are reporting the discovery of new opsins with improved light and color sensitivities and new ionic properties. It is to be hoped, of course, that as this systematized luck accelerates, we will stumble upon more innovations that can aid in dissecting the enormous complexity of the brain—beginning the cycle of invention again.
Putting the toolbox to work

These optogenetic tools are now in use by many hundreds of neuroscience and biology labs around the world. Opsins have been used to study how neurons contribute to information processing and behavior in organisms including C. elegans, Drosophila, zebrafish, mouse, rat, and nonhuman primate. Light sources such as conventional mercury and xenon lamps, light-emitting diodes, scanning lasers, femtosecond lasers, and other common microscopy equipment suffice for in vitro use.

In vivo mammalian use of these optogenetic reagents has been greatly facilitated by the availability of inexpensive lasers with optical-fiber outputs; the free end of the optical fiber is simply inserted into the brain of the live animal when needed,14 or coupled at the time of experimentation to an implanted optical fiber.

For mammalian systems, viruses bearing genes encoding for opsins have proven popular in experimental use, due to their ease of creation and use. These viruses achieve their specificity either by infecting only specific neurons, or by containing regulatory promoters that constrain opsin expression to certain kinds of neurons.

An increasing number of transgenic mouse lines are also now being created, in which an opsin is expressed in a given neuron type through transgenic methodologies. One popular hybrid strategy is to inject a virus containing a Cre-activated genetic cassette encoding the opsin into one of a burgeoning number of mouse lines that express Cre recombinase in specific neuron types, so that the opsin is produced only in the Cre-expressing neurons.15

In 2009, in collaboration with the labs of Robert Desimone and Ann Graybiel at MIT, we published the first use of channelrhodopsin-2 in the nonhuman primate brain, showing that it could safely and effectively mediate neuron type–specific activation in the rhesus macaque without provoking neuron death or functional immune reactions.16 This paper opened up the possibility of translating the technique of optical neural stimulation into the clinic as a treatment modality, although clearly much more work is required to understand this potential application of optogenetics.

Edward Boyden leads the Synthetic Neurobiology Group at MIT, where he is the Benesse Career Development Professor and associate professor of biological engineering and brain and cognitive science at the MIT Media Lab and the MIT McGovern Institute.

1. D. Oesterhelt, W. Stoeckenius, “Rhodopsin-like protein from the purple membrane of Halobacterium halobium,” Nat New Biol, 233:149-52, 1971.
2. D. Okuno et al., “Chloride concentration dependency of the electrogenic activity of halorhodopsin,” Biochemistry, 38:5422-29, 1999.
3. B.V. Zemelman et al., “Selective photostimulation of genetically chARGed neurons,” Neuron, 33:15-22, 2002.
4. G. Nagel et al., “Channelrhodopsin-2, a directly light-gated cation-selective membrane channel,” PNAS, 100:13940-45, 2003.
5. E.S. Boyden et al., “Millisecond-timescale, genetically targeted optical control of neural activity,” Nat Neurosci, 8:1263-68, 2005.
6. B.R. Arenkiel et al., “In vivo light-induced activation of neural circuitry in transgenic mice expressing channelrhodopsin-2,” Neuron, 54:205-18, 2007.
7. T. Ishizuka et al., “Kinetic evaluation of photosensitivity in genetically engineered neurons expressing green algae light-gated channels,” Neurosci Res, 54:85-94, 2006.
8. X. Li et al., “Fast noninvasive activation and inhibition of neural and network activity by vertebrate rhodopsin and green algae channelrhodopsin,” PNAS, 102:17816-21, 2005.
9. G. Nagel et al., “Light activation of channelrhodopsin-2 in excitable cells of Caenorhabditis elegans triggers rapid behavioral responses,” Curr Biol, 15:2279-84, 2005.
10. A. Bi et al., “Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration,” Neuron, 50:23-33, 2006.
11. X. Han, E.S. Boyden, “Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution,” PLoS ONE, 2:e299, 2007.
12. F. Zhang et al., “Multimodal fast optical interrogation of neural circuitry,” Nature, 446:633-39, 2007.
13. B.Y. Chow et al., “High-performance genetically targetable optical neural silencing by light-driven proton pumps,” Nature, 463:98-102, 2010.
14. A.M. Aravanis et al., “An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology,” J Neural Eng, 4:S143-56, 2007.
15. D. Atasoy et al., “A FLEX switch targets channelrhodopsin-2 to multiple cell types for imaging and long-range circuit mapping,” J Neurosci, 28:7025-30, 2008.
16. X. Han et al., “Millisecond-timescale optical control of neural dynamics in the nonhuman primate brain,” Neuron, 62:191-98, 2009.

domingo, 3 de julio de 2011

Chimps Are Good Listeners, Too.

Chimps Are Good Listeners, Too.
by Michael Balter on 1 July 2011.

I can talk, too! Panzee can communicate with humans using a board filled with symbols.

Most researchers regard language as unique to humans, something that makes our species special. But they fiercely debate how the ability to speak and listen evolved. Did speech require our species to evolve novel capabilities, or did we simply combine and enhance various abilities that other animals have, too? A new study with a language-trained chimp suggests that when it comes to understanding speech, the basic equipment might already have been present in our apelike ancestors.

The notion that language evolved only in the human lineage and has no parallels in other animals has long been attributed to the linguist Noam Chomsky, who argued beginning in the 1960s that humans had a special "language organ" unique to us. But more recent studies have shown that other species are surprisingly good at communication, and many researchers have abandoned this idea—even Chomsky himself no longer holds to it strictly.

However, some scientists continue to argue that humans have evolved unique ways to perceive and understand speech that allow us to use words as symbols for complex meanings. These contentions are based in part on a notable human talent: we can recognize words and understand entire sentences even when the sounds of the words have been dramatically altered, until they are a pale shadow of their linguistically meaningful selves.

So a team of researchers turned to Panzee, a 25-year-old chimpanzee, to test the assumption that only humans have this talent. Humans raised Panzee from the age of 8 days, and her caregivers exposed her to a rich diet of English language conversation about food, people, objects, and events. Panzee can't talk, so she communicates with those around her using a lexigram board of symbols corresponding to English words (see photo). She can point to 128 different lexigrams when she hears the corresponding spoken word.

A team led by Lisa Heimbauer, a cognitive psychologist at Georgia State University in Atlanta, set out to see how well Panzee could duplicate the human talent of understanding English words when their sounds are so badly distorted that they are difficult to recognize. The team used two electronic methods to distort the words: noise-vocoded (NV) synthesis, which makes words sound very raspy and breathy, and sine-wave (SW) synthesis, which reduces words to just three tones, something like converting a rich color photograph into a stripped-down black-and-white version. (The words included chimp-friendly terms such as banana, potato, tickle, and balloon.)

Panzee performed well above chance when she heard distorted versions of 48 words that she knew and had to choose among four lexigrams, the team reports this week in Current Biology. Thus, while a chance result would have been one out of four correct choices, or 25%, Panzee scored 55% with NV words and about 40% with SW words, which are particularly difficult to understand even for humans. This was almost as good as the performance of 32 human subjects using the same 48 words, who chose the correct NV word 70% of the time but, like Panzee, the correct SW word only 40% of the time.
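To get a feel for how far above chance Panzee's scores are, an exact binomial tail probability can be computed. This is an illustrative sketch only, not the paper's statistical analysis; the assumption of one trial per word (48 trials) is mine, and the study's actual trial counts may differ.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Four lexigrams per trial gives a 25% chance of guessing correctly.
# Assuming one trial per word (48 trials), 55% accuracy with NV words
# means roughly 26 of 48 correct.
n_trials, chance = 48, 0.25
nv_correct = round(0.55 * n_trials)
p_value = binom_tail(n_trials, nv_correct, chance)
print(f"chance of scoring >= {nv_correct}/{n_trials} by guessing: {p_value:.1e}")
```

Under these assumptions the probability of guessing that well is vanishingly small, which is why a 55% score against a 25% baseline counts as strong evidence.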

Heimbauer and her colleagues say that Panzee's strong performance argues against the idea that humans evolved highly attuned speech-recognition abilities only after they split from the chimp line some 5 million to 7 million years ago. The finding that Panzee passed a challenging test for speech recognition implies, the team writes, that "even quite sophisticated human speech perception phenomena may be within reach for some nonhumans." Still, the team says that its experiments don't rule out that humans have evolved additional speech-perception abilities that our ancestors and chimps lacked.

The authors have come up with a "nice result," says biologist Johan Bolhuis of Utrecht University in the Netherlands, but it shouldn't come as "a big surprise." For example, zebra finches have been shown to be able to distinguish very small sound differences in words spoken by humans, including ones that differ by only one vowel. That's a talent Bolhuis considers "even more remarkable" than Panzee's because it so closely parallels the way humans perceive speech.

J.D. Trout, a psychologist and philosopher at Loyola University Chicago in Illinois, thinks that the authors are far from proving their case. "These experiments don't bear on the question of whether speech is a special adaptation of humans," Trout insists, noting that the human subjects had to pull matching words out of their vocabularies of about 30,000 words, whereas Panzee had a much smaller vocabulary to search through. But Heimbauer points out that unlike the human subjects, Panzee had never been exposed to distorted speech before the experiment, making her performance all the more impressive.

jueves, 30 de junio de 2011

Researchers can predict future actions from human brain activity

Researchers can predict future actions from human brain activity

Bringing the real world into the brain scanner, researchers at The University of Western Ontario's Centre for Brain and Mind can now determine the action a person is planning, mere moments before that action is actually executed.

"Neuroimaging allows us to look at how action planning unfolds within human brain areas without having to insert electrodes directly into the human brain. This is obviously far less intrusive," explains Western Psychology professor Jody Culham, who was the paper's senior author. The findings were published this week in the prestigious Journal of Neuroscience, in the paper "Decoding Action Intentions from Preparatory Brain Activity in Human Parieto-Frontal Networks."

"This is a considerable step forward in our understanding of how the human brain plans actions," says Jason Gallivan, a Western Neuroscience PhD student, who was the first author on the paper.


University of Western Ontario researchers Jody Culham and Jason Gallivan describe how they can use fMRI to determine the action a person is planning, mere moments before that action is actually executed. Credit: The University of Western Ontario
Over the course of the one-year study, human subjects had their brain activity scanned using functional magnetic resonance imaging (fMRI) while they performed one of three hand movements: grasping the top of an object, grasping the bottom of the object, or simply reaching out and touching the object. The team found that by using the signals from many brain regions, they could predict, better than chance, which of the actions the volunteer was merely intending to do, seconds later.
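The logic of "predicting better than chance which of three actions is planned" can be sketched with a toy pattern classifier. Everything below is synthetic and illustrative: the voxel count, noise levels, trial numbers, and the nearest-centroid decoder are my assumptions, not the methods of the actual paper, which classified real preparatory fMRI activity.

```python
import random

random.seed(1)

# Toy setup: three planned actions, each with its own hypothetical
# mean activity pattern across 20 "voxels".
ACTIONS = ["grasp_top", "grasp_bottom", "touch"]
N_VOXELS = 20
mean_pattern = {a: [random.gauss(0, 1) for _ in range(N_VOXELS)]
                for a in ACTIONS}

def simulate_trial(action):
    """One noisy activity pattern recorded while planning an action."""
    return [x + random.gauss(0, 1.5) for x in mean_pattern[action]]

def centroid(trials):
    return [sum(t[i] for t in trials) / len(trials) for i in range(N_VOXELS)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Train" on 30 trials per action, then decode 30 held-out trials each
# by assigning every test pattern to its nearest class centroid.
centroids = {a: centroid([simulate_trial(a) for _ in range(30)])
             for a in ACTIONS}
test_trials = [(a, simulate_trial(a)) for a in ACTIONS for _ in range(30)]
n_correct = sum(
    1 for true_action, pattern in test_trials
    if min(ACTIONS, key=lambda a: sq_dist(centroids[a], pattern)) == true_action
)
accuracy = n_correct / len(test_trials)
print(f"decoding accuracy: {accuracy:.2f} (chance = 1/3)")
```

With three equally likely actions, any accuracy reliably above one-third is evidence that the activity patterns carry information about the upcoming movement, which is the core of the paper's claim.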

Gallivan says the new findings could also have important clinical implications: "Being able to predict a human's desired movements using brain signals takes us one step closer to using those signals to control prosthetic limbs in movement-impaired patient populations, like those who suffer from spinal cord injuries or locked-in syndrome."

Provided by University of Western Ontario

"Researchers can predict future actions from human brain activity." June 29th, 2011.

martes, 28 de junio de 2011

Exhumation of Shakespeare to determine cause of death and drug test

Director of the Institute for Human Evolution, anthropologist Francis Thackeray has formally petitioned the Church of England to allow him to exhume the body of William Shakespeare in order to determine the cause of his death.

Thackeray is best known for his controversial suggestion, made nearly a decade ago, that Shakespeare may have been a regular cannabis smoker. Using forensic techniques, Thackeray examined 24 pipes discovered in Shakespeare’s garden and determined that they had been used to smoke the drug.

Noting that, even after 400 years, Shakespeare remains one of the most famous people in history, Thackeray hopes to settle the question of how he died and establish a health history. With new state-of-the-art computer equipment, he hopes to create a three-dimensional reconstruction of Shakespeare. The aim is to determine the kind of life he led, any diseases or medical conditions he may have suffered from, and what ultimately caused his death.

The new technology, nondestructive analysis, will not require the remains to be moved but will instead scan the bones. The researchers also hope to collect DNA from Shakespeare, his wife, and his sister, all of whom are buried at Holy Trinity Church.

Thackeray also hopes to find evidence to support his earlier controversial claims about Shakespeare’s marijuana smoking. Examining the teeth could provide it: grooves between the incisor and canine teeth could indicate that he habitually chewed on a pipe.

This plan, however, goes against the final wishes of Shakespeare himself, who had the following words engraved on his tomb: “Good frend for Jesus sake forebeare, To dig the dust encloased heare, Bleste be the man that spares thes stones, And curst be he that moves my bones.”

The Church of England denies that any request has been made to exhume Shakespeare’s body, but Thackeray and his team hope to gain approval in time to complete the analysis before the 400th anniversary of his death in 2016.

A little practice can change the brain in a lasting way: study

A little practice can change the brain in a lasting way: study
June 27th, 2011 in Psychology & Psychiatry

A little practice goes a long way, according to researchers at McMaster University, who have found the effects of practice on the brain have remarkable staying power.

The study, published this month in the journal Psychological Science, found that when participants were shown visual patterns—faces, which are highly familiar objects, and abstract patterns, which are much less frequently encountered—they were able to retain very specific information about those patterns one to two years later.

"We found that this type of learning, called perceptual learning, was very precise and long-lasting," says Zahra Hussain, lead author of the study who is a former McMaster graduate student in the Department of Psychology, Neuroscience & Behaviour and now a Research Fellow at the University of Nottingham. "These long-lasting effects arose out of relatively brief experience with the patterns – about two hours, followed by nothing for several months, or years."

Over the course of two consecutive days, participants were asked to identify a specific face or pattern from a larger group of images. The task was challenging because images were degraded—faces were cropped, for example—and shown very briefly. Participants had difficulty identifying the correct images in the early stages, but accuracy rates steadily climbed with practice.

About one year later, a group of participants were called back and their performance on the task was re-measured, both with the same set of items they'd been exposed to earlier, and with a new set from the same class of images. Researchers found that when they showed participants the original images, accuracy rates were high. When they showed participants new images, accuracy rates plummeted, even though the new images closely resembled the learned ones, and they hadn't seen the original images for at least a year.

"During those months in between visits to our lab, our participants would have seen thousands of faces, and yet somehow maintained information about precisely which faces they had seen over a year ago," says Allison Sekuler, co-author of the study and professor and Canada Research Chair in Cognitive Neuroscience in the Department of Psychology, Neuroscience & Behaviour. "The brain really seems to hold onto specific information, which provides great promise for the development of brain training, but also raises questions about what happens as a function of development. How much information do we store as we grow older, and how does the type of information we store change across our lifetimes? And what is the impact of storing all that potentially irrelevant information on our ability to learn and remember more relevant information?"

She and her colleagues point to children today, who are growing up in a world in which they are bombarded with sensory information, and wonder what the consequences will be.

"We don't yet know the long-term implications of retaining all this information, which is why it is so important to understand the physiological underpinnings," says Patrick Bennett, co-author and professor and Canada Research Chair in Vision Science in the Department of Psychology, Neuroscience & Behaviour. "This result warrants further study on how we can optimize our ability to train the brain to preserve what would be considered the most valuable information."

More information: A pdf of the study can be found at: http://dailynews.m … SciFinal.pdf

Provided by McMaster University

sábado, 18 de junio de 2011

Restoring memory, repairing damaged brains

Restoring memory, repairing damaged brains

June 17th, 2011 in Neuroscience

In the experiment, the researchers had rats learn a task, pressing one lever rather than another to receive a reward. Using embedded electrical probes, the experimental research team recorded changes in the rats' brain activity between the two major internal divisions of the hippocampus, known as subregions CA3 and CA1. The experimenters then blocked the normal neural interactions between the two areas using pharmacological agents, and the previously trained rats no longer displayed the long-term learned behavior. But long-term memory capability returned to the pharmacologically blocked rats when the team activated the electronic device programmed to duplicate the memory-encoding function. Credit: USC Viterbi School of Engineering
Scientists have developed a way to turn memories on and off -- literally with the flip of a switch.

Using an electronic system that duplicates the neural signals associated with memory, they managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.

"Flip the switch on, and the rats remember. Flip it off, and the rats forget," said Theodore Berger of the USC Viterbi School of Engineering's Department of Biomedical Engineering.

Berger is the lead author of an article that will be published in the Journal of Neural Engineering. His team worked with scientists from Wake Forest University in the study, building on recent advances in our understanding of the brain area known as the hippocampus and its role in learning.

In the experiment, the researchers had rats learn a task, pressing one lever rather than another to receive a reward. Using embedded electrical probes, the experimental research team, led by Sam A. Deadwyler of the Wake Forest Department of Physiology and Pharmacology, recorded changes in the rats' brain activity between the two major internal divisions of the hippocampus, known as subregions CA3 and CA1. During the learning process, the hippocampus converts short-term memory into long-term memory, the researchers' prior work has shown.

"No hippocampus," says Berger, "no long-term memory, but still short-term memory." CA3 and CA1 interact to create long-term memory, prior research has shown.

In a dramatic demonstration, the experimenters blocked the normal neural interactions between the two areas using pharmacological agents. The previously trained rats then no longer displayed the long-term learned behavior.

"The rats still showed that they knew 'when you press left first, then press right next time, and vice-versa,'" Berger said. "And they still knew in general to press levers for water, but they could only remember whether they had pressed left or right for 5-10 seconds."

Using a model created by the prosthetics research team led by Berger, the teams then went further and developed an artificial hippocampal system that could duplicate the pattern of CA3-CA1 interactions.

Long-term memory capability returned to the pharmacologically blocked rats when the team activated the electronic device programmed to duplicate the memory-encoding function.

In addition, the researchers went on to show that if a prosthetic device and its associated electrodes were implanted in animals with a normal, functioning hippocampus, the device could actually strengthen the memory being generated internally in the brain and enhance the memory capability of normal rats.

"These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes," says the paper.

Next steps, according to Berger and Deadwyler, will be attempts to duplicate the rat results in primates (monkeys), with the aim of eventually creating prostheses that might help the human victims of Alzheimer's disease, stroke or injury recover function.

The paper is entitled "A Cortical Neural Prosthesis for Restoring and Enhancing Memory." Besides Deadwyler and Berger, the other authors are, from USC, BME Professor Vasilis Z. Marmarelis and Research Assistant Professor Dong Song, and from Wake Forest, Associate Professor Robert E. Hampson and Post-Doctoral Fellow Anushka Goonawardena.

Berger, who holds the David Packard Chair in Engineering, is the Director of the USC Center for Neural Engineering, Associate Director of the National Science Foundation Biomimetic MicroElectronic Systems Engineering Research Center, and a Fellow of the IEEE, the AAAS, and the AIMBE.

jueves, 16 de junio de 2011

A fossil of modern humans, dating back 160,000 years.

A fossil of modern humans, dating back 160,000 years.

At Britain's Royal Society, Dr. Marta Lahr from Cambridge University's Leverhulme Centre for Human Evolutionary Studies presented her findings that the height and brain size of modern-day humans are shrinking.

Looking at human fossil evidence from the past 200,000 years, Lahr examined the size and structure of bones and skulls found across Europe, Africa and Asia. She found that the largest Homo sapiens lived 20,000 to 30,000 years ago, with an average weight between 176 and 188 pounds and a brain size of 1,500 cubic centimeters.

Some 10,000 years ago, however, humans began getting smaller in both stature and brain size. The average human today weighs between 154 and 176 pounds and has a brain size of 1,350 cubic centimeters.

While body size remained fairly constant for close to 200,000 years, researchers believe the reduction in stature can be connected to the shift from a hunter-gatherer way of life to agriculture, which began some 9,000 years ago.

The fossilized skull of an adult male hominid unearthed in 1997 from a site near the village of Herto, Middle Awash, Ethiopia. The skull, reconstructed by UC Berkeley paleoanthropologist Tim White, is slightly larger than those of the most extreme adult male humans today, but in other ways is more similar to modern humans than to earlier hominids, such as the Neanderthals. White and his team concluded that the 160,000-year-old hominid is the oldest known modern human, which they named Homo sapiens idaltu. Image © J. Matternes
While the change to agriculture provided a plentiful supply of food, the narrower farming diet may have created vitamin and mineral deficiencies, resulting in stunted growth. Early Chinese farmers, for example, ate cereals such as rice, which lacks the B vitamin niacin, essential for growth.

Agriculture, however, does not explain the reduction in brain size. Lahr believes this may be a result of the energy required to maintain larger brains: the human brain accounts for one quarter of the energy the body uses. The reduction in brain size does not mean that modern humans are less intelligent; human brains have evolved to work more efficiently and use less energy.

martes, 14 de junio de 2011

Brain structure adapts to environmental change

Brain structure adapts to environmental change
June 13th, 2011 in Neuroscience

Scientists have known for years that neurogenesis takes place throughout adulthood in the hippocampus of the mammalian brain. Now Columbia researchers have found that under stressful conditions, neural stem cells in the adult hippocampus can produce not only neurons, but also new stem cells. The brain stockpiles the neural stem cells, which later may produce neurons when conditions become favorable. This response to environmental conditions represents a novel form of brain plasticity. The findings were published online in Neuron on June 9, 2011.

The hippocampus is involved in memory, learning, and emotion. A research team led by Alex Dranovsky, MD, PhD, assistant professor of clinical psychiatry at Columbia University Medical Center and research scientist in the Division of Integrative Neuroscience at the New York State Psychiatric Institute/Columbia Psychiatry, compared the generation of neural stem cells and neurons in mice housed in isolation and in mice housed in enriched environments. They then used lineage studies, a technique that traces stem cells from their formation to their eventual differentiation into specific cell types, to see what proportion of neural stem cells produced neurons.

Deprived and enriched environments had opposite effects. The brains of the socially isolated mice accumulated neural stem cells but not neurons. The brains of mice housed in enriched environments produced far more neurons, but not more stem cells. The average mouse dentate gyrus, the area of the hippocampus where neurogenesis takes place, has about 500,000 neurons; the enriched environment caused an increase of about 70,000 neurons.

"We already knew that enriching environments are neurogenic, but ours is the first report that neural stem cells, currently thought of as 'quiescent,' can accumulate in the live animal," said Dr. Dranovsky. "Since this was revealed simply by changing the animal's living conditions, we think that it is an adaptation to stressful environments. When conditions turn more favorable, the stockpiled stem cells have the opportunity to produce more neurons—a form of 'neurons on demand.'"

The researchers also looked at neuronal survival and found that social isolation did not decrease it. Scientists already knew that environmental enrichment increases neuronal survival, further boosting the neuron population.

To a lesser extent, location within the hippocampus affected whether stem cells became neurons. While the ratio of stem cells to neurons remained constant in the lower blade of the dentate gyrus, it varied in the upper blade.

Age also affected the results. After three months, the brains of the isolated mice stopped accumulating neural stem cells. But the mice in enriched environments continued to produce more neurons.

Dranovsky and his team now want to see whether this hippocampal response is specific to social isolation or is a more general response to stress. Another question is whether all neural stem cells have the same potential to produce neurons.

"The long-term goal," said Dr. Dranovsky, "is to figure out how to instruct neural stem cells to produce neurons or more stem cells. This could lead to the eventual use of stem cells in neuronal replacement therapy for neurodegenerative diseases and other central nervous system conditions."

Provided by Columbia University

lunes, 13 de junio de 2011

Can Brain Scans Predict Music Sales?

Can Brain Scans Predict Music Sales?
by Greg Miller on 10 June 2011, 11:35 AM

Rock my accumbens. A study inspired by a performance of OneRepublic's hit Apologize finds that activity in the nucleus accumbens correlates with music sales.
Credit: Kevin Winter/Tonight Show/Getty Images
Scientific inspiration sometimes comes from unlikely sources. Two years ago, Gregory Berns, a neuroeconomist at Emory University in Atlanta, was on the couch with his kids watching American Idol. One of the contestants sang the melancholy hit song Apologize by the alternative rock band OneRepublic, and something clicked in Berns's mind.

He'd used the song a few years earlier in a study on the neural mechanisms of peer pressure, in this case, how teenagers' perceptions of a song's popularity influence how they rate the song themselves. At the time, OneRepublic had yet to sign its first record deal. A student in Berns's lab had pulled a clip of Apologize from the band's MySpace page to use in the study. When Berns heard the song on American Idol, he wondered whether anything in the brain scan data his team had collected could have predicted it would become a hit. At the time, all 120 songs used in the experiment were by artists who were unsigned and not widely known. "The next day, in the lab, we talked about it."

To find out what had become of the songs, the lab bought a subscription to Nielsen SoundScan, a service that tracks music sales. The database contained sales data for 87 of the 120 songs (not surprisingly, many songs had languished in MySpace obscurity). Berns reexamined the functional magnetic resonance imaging scans his group had collected from 27 adolescents in 2007, looking for regions of the brain where neural activity during a 15-second clip of a song correlated with the subjects' likeability ratings. Two regions stood out: the orbitofrontal cortex and the nucleus accumbens. "That was a good check that we were on the right track, because we knew from a ton of other studies that those regions are heavily linked to reward and anticipation," Berns says.

Next, the researchers looked to see whether the activity in either of these two brain regions, averaged across subjects for each song, correlated with the song's sales through May 2010. It did, Berns and co-author Sara Moore report in a paper in press at the Journal of Consumer Psychology. The correlations were statistically significant but modest. Activity in the nucleus accumbens, the best predictor of song sales, accounts for about 10% of the variance in sales, Berns says. "It's not a hit maker," he cautions.
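As a quick sanity check on that figure: explaining about 10% of the variance corresponds to a correlation coefficient of roughly r = 0.32, a real but modest relationship, which is why Berns cautions it is no hit maker. A minimal worked example of that conversion:

```python
from math import sqrt

# Variance explained (r squared) -> implied correlation coefficient.
variance_explained = 0.10
r = sqrt(variance_explained)
print(f"implied correlation: r = {r:.2f}")
```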

Intriguingly, the brain scan data predicted commercial success better than the subjects' likeability ratings, which did not correlate with sales. "What is new and interesting about this study is that brain signals predict sales in a situation where the ratings of the participants don't," says John-Dylan Haynes of the Bernstein Center for Computational Neuroscience in Berlin. Although several recent studies have shown it's possible to predict consumer choices from brain activity, Haynes says, it hasn't been clear whether brain scans can reveal anything about people's product preferences that couldn't be gained by simply asking them. In this case, at least, it seems they can.

"This is a really cool result," says Brian Knutson, a cognitive neuroscientist at Stanford University in Palo Alto, California. Showing that brain activity in a small group of people can predict the buying behavior of a much larger group of people is a novel and provocative finding, he says. But how does it work, and why would brain activity be better than the subjects' ratings? Knutson suggests that activity in the nucleus accumbens may provide a more pure indication of how much people actually want something, unencumbered by economic and social considerations that might influence their ratings—for example, whether one's credibility as a hard-rocking heavy metal fan would be undermined by a fondness for, well, Apologize.

There have been many dubious claims about "neuromarketing" strategies for using brain activity to assess consumer sentiment, says Antonio Rangel, a neuroeconomist at the California Institute of Technology in Pasadena. He sees the new study as an exciting proof of principle that in some cases neuroimaging can provide useful information not picked up by traditional methods such as consumer surveys and focus groups. Still, Rangel says, it's a long way from being a viable marketing tool. "I would not invest in a company based on this."

Friday, June 10, 2011

Scientists find gene vital to nerve cell development

June 9th, 2011 in Medicine & Health / Genetics

In healthy mice, individual Schwann cells wrap their membranes around a nerve cell’s axon many times. A cross-section of the resulting myelin sheath is visible as a thick band surrounding the axon in the “Normal” image on the left. In mice with a mutation in Gpr126, Schwann cells cannot make myelin and no thick layer surrounding axons is visible in the “Mutant” image on the right. KELLY R. MONK

The body’s ability to perform simple tasks like flex muscles or feel heat, cold and pain depends, in large part, on myelin, an insulating layer of fats and proteins that speeds the propagation of nerve cell signals.

Now, scientists have identified a gene in mice that controls whether certain cells in the peripheral nervous system can make myelin. Called Gpr126, the gene encodes a cellular receptor that could play a role in diseases affecting peripheral nerves, says Kelly R. Monk, PhD, assistant professor of developmental biology at Washington University School of Medicine in St. Louis.

“Researchers knew Gpr126 existed in humans, but no one knew what it did,” says Monk, who did this work while a postdoctoral researcher at Stanford University. “For 30 years or so, scientists have been looking for a cell receptor that controls myelination by raising levels of an important chemical messenger. We found it in zebrafish. And now we’ve shown that it’s present in mammals. It’s the first known function for this receptor, and it solved a decades-old mystery, which is exciting.”

The work is currently available online and will be published in the July 1 issue of the journal Development.

In a paper published in Science in 2009, Monk and her colleagues first showed that zebrafish require Gpr126 to make myelin in their peripheral nerves, but not in the brain or spinal cord of the central nervous system.

When a gene works a certain way in zebrafish, it likely works that way in mammals, according to William S. Talbot, PhD, professor of developmental biology at Stanford University and Monk’s postdoctoral advisor.

“The brain and spinal cord are fine in mice without the Gpr126 gene,” Talbot says. “But there is no myelin in the peripheral nerves, very much like in zebrafish. This is evidence that Gpr126 probably has a general role in myelin formation and nerve development in all vertebrates, including humans.”

The missing gene appears to disrupt specialized cells in the peripheral nervous system called Schwann cells, stopping those cells from enveloping and providing nutrients to the axons of nerves. Healthy Schwann cells wrap their membranes around nerve cell axons many times to form the myelin sheath that speeds the transmission of nerve cell signals.

In zebrafish without Gpr126, Schwann cells appear to develop and arrange themselves with individual axons normally at first. But when it comes time to wrap around the axon and make myelin, they stop short.

“From zebrafish, we thought this gene controlled only one very specific step of Schwann cell development,” Monk says. “But in mice the story is more complex.”

In mice without the gene, problems begin much earlier. The Schwann cells take longer to associate with individual axons, and the mutant mice have far fewer axons than normal mice. Such evidence leads Monk to speculate that the delayed sorting and the failure of Schwann cells to wrap around axons cause the associated neurons to die. Because of these and other problems seen in mice without Gpr126 (including defects in the lungs, kidneys and cardiovascular system), Monk proposes that the gene plays more diverse roles in mice than in zebrafish. Although mice without Gpr126 never lived beyond two weeks, zebrafish with the same mutation survived to reproduce.

Because of its clear role in forming myelin, Gpr126 could be a possible target for therapies to treat peripheral neuropathies, common conditions where peripheral nerves are damaged. Such damage causes an array of problems including pain and numbness in the hands and feet, muscle weakness and even problems involving functions of internal organs such as digestion. Some peripheral neuropathies are genetic, but many result from diseases of aging and poor health, including complications from diabetes or side effects of chemotherapy.

With these conditions in mind, Monk and Talbot point out that Gpr126 belongs to a large family of cell-surface receptors, the G protein-coupled receptors, which are common targets for commercially available drugs treating conditions as diverse as allergies, ulcers and schizophrenia.

“We don’t know yet whether Gpr126 itself can be a drug target. But the fact that its relatives can,” Talbot says, “makes it especially interesting.”

Ongoing work in Monk’s lab seeks to further define the many roles of Gpr126 in mammals, including whether it could help direct Schwann cells to repair or regrow damaged myelin.

More information: Monk KR, Oshima K, Jors S, Heller S, Talbot WS. Gpr126 is essential for peripheral nerve development and myelination in mammals. Development. 138(13). July 2011.

Monk et al. A G protein-coupled receptor is essential for Schwann cells to initiate myelination. Science. 325. Sept. 2009.

Provided by Washington University School of Medicine in St. Louis

Tuesday, June 7, 2011

Attention and awareness aren't the same

June 6th, 2011 in Psychology & Psychiatry

Paying attention to something and being aware of it seem like the same thing - they both involve somehow knowing the thing is there. However, a new study, to be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, finds that the two are actually separate; your brain can pay attention to something without you being aware that it's there.

"We wanted to ask, can things attract your attention even when you don't see them at all?" says Po-Jang Hsieh, of Duke-NUS Graduate Medical School in Singapore and MIT. He co-wrote the study with Jaron T. Colas and Nancy Kanwisher of MIT. Usually, when people pay attention to something, they also become aware of it; in fact, many psychologists assume these two concepts are inextricably linked. But more evidence has suggested that's not the case.

To test this, Hsieh and his colleagues devised an experiment using the phenomenon called "visual pop-out." Each participant viewed a display that showed a different video to each eye. One eye was shown colorful, shifting patterns, which monopolized awareness, as vivid moving images tend to do. The other eye was shown a static pattern of shapes, most of them green but one red. Subjects were then tested to see which part of the screen their attention had gone to. The researchers found that attention was drawn to the red shape – even though participants had no idea they'd seen it at all.

In another experiment, the researchers found that if people were distracted with a demanding task, the red shape didn't attract attention unconsciously anymore. So people need a little brain power to pay attention to something even if they aren't aware of it, Hsieh and his colleagues concluded.

Hsieh suggests that this could have evolved as a survival mechanism. It might have been useful for an early human to be able to notice and process something unusual on the savanna without even being aware of it, for example. "We need to be able to direct attention to objects of potential interest even before we have become aware of those objects," he says.

Provided by Association for Psychological Science

Friday, June 3, 2011

Examining the brain as a neural information super-highway

June 2nd, 2011 in Neuroscience

An article demonstrating how tools for modeling traffic on the Internet and telephone systems can be used to study information flow in brain networks will be published in the open-access journal PLoS Computational Biology on 2nd June 2011.

The brain functions as a complex system of regions that must communicate with each other to enable everyday activities such as perception and cognition. This need for networked computation is a challenge common to multiple types of communication systems. Thus, important questions about how information is routed and emitted from individual brain regions may be addressed by drawing parallels with other well-known types of communication systems, such as the Internet.

The authors, from the Rotman Research Institute at Baycrest Centre, Toronto, Canada, showed that – similar to other communication networks – the timing pattern of information emission is highly indicative of information traffic flow through the network. In this study the output of information was sensitive to subtle differences between individual subjects, cognitive states and brain regions.

The researchers recorded electrical activity from the brain and used signal-processing techniques to determine precisely when units of information are emitted from different regions. They then showed that the times between successive departures follow a characteristic probability distribution. For instance, when study participants were asked to open their eyes to allow visual input, emission times became significantly more variable in parts of the brain responsible for visual processing, reflecting increased neural "traffic" through the underlying regions.
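As a toy illustration of that idea (assuming nothing about the authors' actual pipeline, and with invented timestamps), one can compute the intervals between successive emission times and summarize their variability with the coefficient of variation: evenly spaced departures score zero, while clustered, bursty departures score high.

```python
# Toy illustration of inter-departure time variability. The timestamps
# are invented; the measure (coefficient of variation of the gaps
# between successive departures) is a standard summary of timing
# variability, not necessarily the statistic used in the paper.
import statistics

def interdeparture_cv(departure_times):
    """Coefficient of variation (std/mean) of gaps between departures."""
    gaps = [b - a for a, b in zip(departure_times, departure_times[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

regular = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # evenly spaced departures
bursty = [0.0, 0.1, 0.2, 2.5, 2.6, 5.0]    # clustered, irregular departures

cv_regular = interdeparture_cv(regular)
cv_bursty = interdeparture_cv(bursty)
```

On this toy data the bursty sequence yields a much larger coefficient of variation than the regular one, mirroring the study's finding that emission timing becomes more variable as traffic through a region increases.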

This method can be broadly applied in neuroscience and may potentially be used to study the effects of neural development and aging, as well as neurodegenerative disease, where traffic flow would be compromised by the loss of certain nodes or disintegration of pathways.

More information: Mišić B, Vakorin VA, Kovačević N, Paus T, McIntosh AR (2011) Extracting Message Inter-Departure Time Distributions from the Human Electroencephalogram. PLoS Comput Biol 7(6): e1002065. doi:10.1371/journal.pcbi.1002065

Provided by Public Library of Science

Thursday, June 2, 2011

Researchers map, measure brain's neural connections

Researchers at Brown University have created a computer program to advance analysis of the neural connections in the human brain. The program's special features include a linked view for users to view both the 3-D image (top) and 2-D closeups of the neural bundles. Credit: Radu Jianu, Brown University

Medical imaging systems allow neurologists to summon 3-D color renditions of the brain at a moment's notice, yielding valuable insights. But sometimes there can be too much detail; important elements can go unnoticed.

The bundles of individual nerves that transmit information from one part of the brain to the other, like fiber-optic cables, are so intricate and so interwoven that they can be difficult to trace through standard imaging techniques. To help, computer science researchers at Brown University have produced 2-D maps of the neural circuitry in the human brain.

The goal is simplicity. The planar maps extract the neural bundles from the imaging data and present them in 2-D – a format familiar to medical professionals working with brain models. The Brown researchers also provide a web interface by integrating the neural maps into a geographical digital maps framework that professionals can use seamlessly to explore the data.

"In short, we have developed a new way to make 2-D diagrams that illustrate 3-D connectivity in human brains," said David Laidlaw, professor of computer science at Brown and corresponding author on the paper published in IEEE Transactions on Visualization and Computer Graphics. "You can see everything here that you can't really see with the bigger (3-D) images."

The 2-D neural maps are simplified representations of neural pathways in the brain, created using a medical imaging protocol that measures the diffusion of water within and around the brain's nerve fibers and their sheathing. That sheathing is composed of myelin, a fatty membrane that wraps around axons, the threadlike extensions of neurons that make up nerve fibers.

Medical investigators can use the 2-D neural maps to pinpoint spots where the myelin may be compromised, which could affect the vitality of the neural circuits. That can help identify pathologies, such as autism, that brain scientists increasingly believe manifest themselves in myelinated axons. Diseases associated with the loss of myelin affect more than 2 million people worldwide, according to the Myelin Project, an organization dedicated to advancing myelin-related research.

Researchers can use the 2-D neural maps to help identify whether the structure or the size of neural bundles differs among individuals and how any differences may relate to performance, skills or other traits. "It's an anatomical measure," Laidlaw said. "It's a tool that we hope will help the field."

While zeroing in on the brain's wiring, the team, including graduate students Radu Jianu and Çağatay Demiralp, added a "linked view" so users can toggle back and forth between the neural bundles in the 2-D image and the larger 3-D picture of the brain.

"What you see is what you operate," said Jianu, the paper's lead author. "There's no change in perspective with what you're working with on the screen."

Users can export the 2-D brain representations as images and display them in Web browsers using Google Maps. "The advantage of using this mode of distribution is that users don't have to download a large dataset, put it in the right format, and then use complicated software to try and look at it, but can simply load a webpage," Jianu explained.

The program is designed to share research. Scientists can use the Web to review brain research in other labs that may be useful to their own work.

Provided by Brown University

Tuesday, May 31, 2011

Woman can literally feel the noise.

May 30th, 2011 in Neuroscience

A case of a 36-year-old woman who began to literally 'feel' noise about a year and a half after suffering a stroke sparked a new research project by neuroscientist Tony Ro of the City College of New York and the Graduate Center of the City University of New York. Brain imaging revealed that a link had grown between the woman’s auditory region and her somatosensory region, essentially connecting her hearing to her sense of touch.

Ro and his team presented the findings at the Acoustical Society of America’s meeting on May 25. They pointed out that both hearing and touch rely on vibrations and that this connection may be found in the rest of us as well.

Neuroscientist Elizabeth Courtenay Wilson of Beth Israel Deaconess Medical Center in Boston agrees that there is a strong connection between the two senses. Her team believes the ear evolved from skin to allow a more finely tuned frequency analysis. She earned her PhD from MIT with a study of whether vibrations could improve hearing-aid performance, showing that individuals with normal hearing were better able to detect a weak sound when it was accompanied by a weak vibration on the skin.

Ro himself published a paper in Experimental Brain Research in 2009 on what he calls the mosquito effect: the buzzing of those pesky little insects makes our skin prickle, and he believes that for this to happen the frequency of the sound must match the frequency of the vibrations we feel.

Functional MRI scans of the brain have revealed that the auditory region of the brain can become activated by a touch. It is believed by some researchers that areas of the brain that are designed to understand frequency may be responsible for this wire crossing, though they are not yet sure exactly where the two senses come together.

Saturday, May 28, 2011

How our focus can silence the noisy world around us

May 27th, 2011 in Psychology & Psychiatry

How can someone with perfectly normal hearing become deaf to the world around them when their mind is on something else? New research funded by the Wellcome Trust suggests that focusing heavily on a task results in the experience of deafness to perfectly audible sounds.

In a study published in the journal 'Attention, Perception, & Psychophysics', researchers at UCL (University College London) demonstrate for the first time this phenomenon, which they term 'inattentional deafness'.

"Inattentional deafness is a common everyday experience," explains Professor Nilli Lavie from the Institute of Cognitive Neuroscience at UCL. "For example, when engrossed in a good book or even a captivating newspaper article we may fail to hear the train driver's announcement and miss our stop, or if we're texting whilst walking, we may fail to hear a car approaching and attempt to cross the road without looking."

Professor Lavie and her PhD student James Macdonald devised a series of experiments designed to test for inattentional deafness. In these experiments, over a hundred participants performed tasks on a computer involving a series of cross shapes. Some tasks were easy, asking the participants to distinguish a clear colour difference between the cross arms. Others were much more difficult, involving distinguishing subtle length differences between the cross arms.

Participants wore headphones whilst carrying out the tasks and were told these were to aid their concentration. At some point during task performance a tone was played unexpectedly through the headphones. At this point, immediately after the sound was played, the experiment was stopped and the participants asked if they had heard this sound.

When judging the respective colours of the arms - an easy task that takes relatively little concentration - around two in ten participants missed the tone. However, when focusing on the more difficult task - identifying which of the two arms was longer - eight out of ten participants failed to notice the tone.

The researchers believe this deafness when attention is fully taken by a purely visual task is the result of our senses of seeing and hearing sharing a limited processing capacity. It is already known that people similarly experience 'inattentional blindness' when engrossed in a task that takes up all of their attentional capacity - for example, the famous Invisible Gorilla Test, where observers engrossed in a basketball game fail to observe a man in a gorilla suit walk past. The new research now shows that being engrossed in a difficult task makes us blind and deaf to other sources of information.

"Hearing is often thought to have evolved as an early warning system that does not depend on attention, yet our work shows that if our attention is taken elsewhere, we can be effectively deaf to the world around us," explains Professor Lavie. "In our task, most people noticed the sound if the task being performed was easy and did not demand their full concentration. However, when the task was harder they experienced deafness to the very same sound."

Other examples of real-world situations include inattentional deafness whilst driving. It is well documented that a large number of accidents are caused by a driver's inattention, and this new research suggests inattentional deafness is yet another contributing factor. For example, although emergency vehicle sirens are designed to be too loud to ignore, other sounds - such as a lorry beeping while reversing, a cyclist's bell or a scooter horn - may be missed by a driver focusing intently on some interesting visual information such as a roadside billboard, the advert content on the back of the bus in front or the map on a sat nav.

Friday, May 27, 2011

New imaging method identifies specific mental states

May 26th, 2011 in Neuroscience

New clues to the mystery of brain function, obtained through research by scientists at the Stanford University School of Medicine, suggest that distinct mental states can be distinguished based on unique patterns of activity in coordinated "networks" within the brain. These networks consist of brain regions that are synchronously communicating with one another. The Stanford team is using this network approach to develop diagnostic tests in Alzheimer's disease and other brain disorders in which network function is disrupted.

In a novel set of experiments, a team of researchers led by Michael Greicius, MD, assistant professor of neurology and neurological sciences, was able to determine from brain-imaging data whether experimental subjects were recalling events of the day, singing silently to themselves, performing mental arithmetic or merely relaxing. In the study, subjects engaged in these mental activities at their own natural pace, rather than in a controlled, precisely timed fashion as is typically required in experiments involving the brain-imaging technique called functional magnetic resonance imaging. This suggests that the new method — a variation on the fMRI procedure — could help scientists learn more about what the brain is doing during the free-flowing mental states through which individuals move, minute-to-minute, in the real world.

FMRI can pinpoint active brain regions in which nerve cells are firing rapidly. In standard fMRI studies, subjects perform assigned mental tasks on cue in a highly controlled environment. The researcher typically divides the scan into task periods and non-task periods with strict start and stop points for each. Researchers can detect brain regions activated by the task by subtracting signals obtained during non-task periods from those obtained during the task. To identify which part of the brain is involved in, for example, a memory task, traditional fMRI studies require experimenters to control the timing of each recalled event.

"With standard fMRI, you need to know just when your subjects start focusing on a mental task and just when they stop," said Greicius. "But that isn't how real people in the day-to-day world think."

In their analysis, the Stanford team broke free of this scripted approach by looking not for brain regions that showed heightened activity during one mental state versus another, but for coordinated activity between brain regions, defining distinct brain states. This let subjects think in a self-paced manner more closely resembling the way they think in the world outside the MRI scanner. Instead of breaking up a cognitive state into short blocks of task and non-task, Greicius and his team used uninterrupted scan periods ranging from 30 seconds to 10 minutes in length, allowing subjects to follow their own thought cues at their own pace. The scientists were able to accurately capture subjects' mental states even when the duration of the scans was reduced to as little as one minute or less — all the more reflective of real-world cognition.

Greicius is senior author of the new study, to be published online May 26 in Cerebral Cortex. His team obtained images from a group of 14 young men and women who underwent four 10-minute fMRI scans apiece. Importantly, during each of the four scans, the investigators didn't tell subjects exactly when to start doing something — recall events, sing to themselves silently, count back from 5,000 by threes, or just rest — or when to switch to something else, as is typical with standard fMRI research. "We just told them to go at their own pace," Greicius said.

Greicius's team assembled images from each separate scan. Instead of comparing "on-task" images with "off-task" images to see which regions were active during a distinct brain state compared with when the brain wasn't in that state, the researchers focused on which collections, or networks, of brain regions were active in concert with one another throughout a given state.

Greicius and his colleagues have previously shown that the brain operates, at least to some extent, as a composite of separate networks, each composed of a number of distinct but simultaneously active brain regions. They have identified approximately 15 such networks. Different networks are associated with vision, hearing, language, memory, decision-making, emotion and so forth.

From the scans of those 14 healthy volunteers, the Stanford investigators were able to construct maps of coordinated activity in the brain during each of the four mental activities. In particular, they looked at 90 brain regions distributed across multiple networks, accounting for most of the brain's gray matter.

In their analysis, the Stanford team identified groups of regions throughout the brain whose activity was correlated to form functional networks. The new fMRI method let them view such networks within a single scan, without having to compare it to another scan via subtraction. In the scanning images, different thought processes showed up as different networks or regions communicating with one another. For example, subjects' recollection of the day's events was characterized by synchronous firing of two brain regions called the retrosplenial cortex, or RSC, and medial temporal lobe, or MTL. Standard fMRI, in which the brain's activity during a recall exercise was compared to its activity in the resting state, has already shown that the RSC and MTL are each active during memory-related tasks. But the new study showed that coordinated activity between these two regions indicates that subjects were engaged in recall.

Once they had completed their mapping of the four mental states to specific patterns of connectivity across the 90 brain regions, Greicius and his colleagues tested their ability to determine which state a subject was in by asking a second group of 10 subjects to undergo scanning during the same four mental activities. By comparing the pattern of a subject's image to the patterns assigned to each of the four states from the 14-subject data set, the researchers' analytical tools were, with 85 percent accuracy, able to correctly determine which mental state a particular scanning image corresponded to. The team's ability to correctly determine which of those four mental tasks a subject was performing remained at the 80 percent accuracy level even when scanning sessions were reduced to one minute apiece — a length of time more reflective of real-life mental behavior than the customary 10-minute scanning time.
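The template-matching step described above can be sketched in miniature. The sketch below is a hypothetical illustration with invented numbers: each state is summarized as a short vector of region-pair correlations (the real study used far richer patterns across 90 regions), and a new scan is labeled with whichever state's template it most closely resembles; the study's actual classifier may well differ.

```python
# Hypothetical sketch of matching a new scan's connectivity pattern to
# per-state templates. Templates and the test pattern are invented;
# only the nearest-template idea mirrors the description in the text.

def nearest_state(templates, observed):
    """Return the state label whose template vector is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda label: dist(templates[label], observed))

# Invented connectivity templates: correlations among three region pairs.
templates = {
    "recall":     [0.8, 0.1, 0.2],
    "singing":    [0.1, 0.7, 0.3],
    "arithmetic": [0.2, 0.3, 0.9],
    "rest":       [0.1, 0.1, 0.1],
}

new_scan = [0.75, 0.15, 0.25]   # a pattern resembling the "recall" template
predicted = nearest_state(templates, new_scan)
```

With this toy data the new scan is classified as "recall", the state whose coordinated-activity pattern it most resembles.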

As an additional test, Greicius's team asked the second participant group to engage in a fifth cognitive activity, spatial navigation, in which subjects were asked to imagine walking through the rooms of their home. The team's analytical tools readily rejected the connectivity pattern reflecting this mental activity as not indicative of one of the four states in question.

The ability to use fMRI in a more casual, true-to-life manner for capturing the mental states of normal volunteers bodes well for assessing patients with cognitive disorders, such as people with Alzheimer's disease or other dementias, who are often unable to follow the precise instructions and timing demands required in traditional fMRI.

In fact, the technique has already begun proving its value in diagnosing brain disorders. In a 2009 study in Neuron, Greicius and his associates showed that different cognitive disorders show up in fMRI scans as having deficiencies specific to different networks. In Alzheimer's disease, for example, the network associated with memory is functionally impaired so that its component brain regions are no longer firing in a coordinated fashion. This network approach to brain function and dysfunction is now being widely applied to the study of numerous neurological and psychiatric conditions.

Provided by Stanford University Medical Center