Monday, July 18, 2011

Your brain on androids.

July 14th, 2011 in Neuroscience



Brain response to videos of a robot, android and human. The researchers say they see, in the android condition, evidence of a mismatch between the human-like appearance of the android and its robotic motion. Credit: Courtesy Ayse Saygin, UC San Diego

Ever get the heebie-jeebies at a wax museum? Feel uneasy with an anthropomorphic robot? What about playing a video game or watching an animated movie, where the human characters are pretty realistic but just not quite right and maybe a bit creepy? If yes, then you've probably been a visitor to what's called the "uncanny valley."

The phenomenon has been described anecdotally for years, but how and why this happens is still a subject of debate in robotics, computer graphics and neuroscience. Now an international team of researchers, led by Ayse Pinar Saygin of the University of California, San Diego, has taken a peek inside the brains of people viewing videos of an uncanny android (compared to videos of a human and a robot-looking robot).

Published in the Oxford University Press journal Social Cognitive and Affective Neuroscience, the functional MRI study suggests that the effect may arise from a perceptual mismatch between appearance and motion.

The term "uncanny valley" refers to an artificial agent's drop in likeability when it becomes too humanlike. People respond positively to an agent that shares some characteristics with humans – think dolls, cartoon animals, R2D2. As the agent becomes more human-like, it becomes more likeable. But at some point that upward trajectory stops and instead the agent is perceived as strange and disconcerting. Many viewers, for example, find the characters in the animated film "Polar Express" to be off-putting. And most modern androids, including the Japanese Repliee Q2 used in the study here, are also thought to fall into the uncanny valley.

Saygin and her colleagues set out to discover if what they call the "action perception system" in the human brain is tuned more to human appearance or human motion, with the general goal, they write, "of identifying the functional properties of brain systems that allow us to understand others' body movements and actions."

They tested 20 subjects, aged 20 to 36, who had no experience working with robots, had not spent time in Japan (where there is potentially more cultural exposure to and acceptance of androids), and had no friends or family from Japan.

The subjects were shown 12 videos of Repliee Q2 performing such ordinary actions as waving, nodding, taking a drink of water and picking up a piece of paper from a table. They were also shown videos of the same actions performed by the human on whom the android was modeled and by a stripped version of the android – skinned to its underlying metal joints and wiring, revealing its mechanics until it could no longer be mistaken for a human. That is, they set up three conditions: a human with biological appearance and movement; a robot with mechanical appearance and mechanical motion; and a human-seeming agent with the exact same mechanical movement as the robot.

At the start of the experiment, the subjects were shown each of the videos outside the fMRI scanner and were informed about which was a robot and which human.

The biggest difference in brain response the researchers noticed was during the android condition – in the parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain's visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons (neurons also known as "monkey-see, monkey-do neurons" or "empathy neurons").

According to their interpretation of the fMRI results, the researchers say they saw, in essence, evidence of mismatch. The brain "lit up" when the human-like appearance of the android and its robotic motion "didn't compute."

"The brain doesn't seem tuned to care about either biological appearance or biological motion per se," said Saygin, an assistant professor of cognitive science at UC San Diego and alumna of the same department. "What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."

In other words, if it looks human and moves like a human, we are OK with that. If it looks like a robot and acts like a robot, we are OK with that, too; our brains have no difficulty processing the information. The trouble arises when – contrary to a lifetime of expectations – appearance and motion are at odds.

"As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners," the researchers write. "Or perhaps, we will decide it is not a good idea to make them so closely in our image after all."

Saygin thinks it's "not so crazy to suggest we brain-test-drive robots or animated characters before spending millions of dollars on their development."

It's not too practical, though, to do these test-drives in expensive and hard-to-come-by fMRI scanners. So Saygin and her students are currently on the hunt for an analogous EEG signal. EEG technology is cheap enough that the electrode caps are being developed for home use.

Provided by University of California - San Diego

Risk factors predictive of psychiatric symptoms after traumatic brain injury


July 12th, 2011 in Psychology & Psychiatry

A history of psychiatric illness such as depression or anxiety before a traumatic brain injury (TBI), together with other risk factors, is strongly predictive of post-TBI psychiatric disorders, according to an article published in the Journal of Neurotrauma.

In addition to a pre-injury psychiatric disorder, two other factors are early indicators of an increased risk for psychiatric illness one year after a TBI: psychiatric symptoms during the acute post-injury period, and a concurrent limb injury. Kate Rachel Gould, DPsych, Jennie Louise Ponsford, PhD, Lisa Johnston, PhD, and Michael Schönberger, PhD, Epworth Hospital and Monash University, Melbourne, Australia, and University of Freiburg, Baden-Württemberg, Germany, also describe a link between risk of psychiatric symptoms and unemployment, pain, and poor quality of life during the 12-month post-TBI period.

In the presence of a limb injury, patients who suffered a TBI had a 6.4-fold greater risk of psychiatric disorders at 1 year, and a 4-fold greater risk of depression in particular, compared to patients without a limb injury. The authors report their findings in the article, "Predictive and Associated Factors of Psychiatric Disorders after Traumatic Brain Injury: A Prospective Study."

More information: The article is available free online at www.liebertpub.com/neu

Sunday, July 17, 2011

Study demonstrates how memory can be preserved -- and forgetting prevented


July 8th, 2011 in Neuroscience

As any student who's had to study for multiple exams can tell you, trying to learn two different sets of facts one after another is challenging. As you study for the physics exam, almost inevitably some of the information for the history exam is forgotten. It's been widely believed that this interference between memories develops because the brain simply doesn't have the capacity necessary to process both memories in quick succession. But is this truly the case?

A new study by researchers at Beth Israel Deaconess Medical Center (BIDMC) suggests that specific brain areas actively orchestrate competition between memories, and that by disrupting targeted brain areas through transcranial magnetic stimulation (TMS), you can preserve memory -- and prevent forgetting.

The findings are described in the June 26 Advance Online issue of Nature Neuroscience.

"For the last 100 years, it has been appreciated that trying to learn facts and skills in quick succession can be a frustrating exercise," explains Edwin Robertson, MD, DPhil, an Associate Professor of Neurology at Harvard Medical School and BIDMC. "Because no sooner has a new memory been acquired than its retention is jeopardized by learning another fact or skill."

Robertson, together with BIDMC neurologist and coauthor Daniel Cohen, MD, studied a group of 120 college-age students who performed two memory tasks back to back. The first was a finger-tapping motor-skill task; the second was a declarative memory task in which participants memorized a series of words. (Half of the group performed the tasks in this order, while a second group learned these same two tasks in reverse order.)

"The study subjects performed these back-to-back exercises in the morning," he explains. "They then returned 12 hours later and re-performed the tests. As predicted, their recall for either the word list or the motor-skill task had decreased when they were re-tested."

In the second part of the study, Robertson and Cohen administered TMS following the initial testing. TMS is a noninvasive technique that uses a magnetic stimulator to generate a magnetic field that can create a flow of current in the brain.

"Because brain cells communicate through a process of chemical and electrical signals, applying a mild electrical current to the brain can influence the signals," Robertson explains. In this case, the researchers targeted two specific brain regions, the dorsolateral prefrontal cortex and the primary motor cortex. They discovered that by applying TMS to specific brain areas, they were able to reduce the interference and competition between the motor skill and word-list tasks and both memories remained intact.

"This elegant study provides fundamental new insights into the way our brain copes with the challenge of learning multiple skills and making multiple memories," says Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at BIDMC. "Specific brain structures seem to carefully balance how much we retain and how much we forget. Learning and remembering is a dynamic process and our brain devotes resources to keep the process flexible. By better understanding this process, we may be able to find novel approaches to help enhance learning and treat patients with memory problems and learning disabilities."

"Our observations suggest that distinct mechanisms support the communication between different types of memory processing," adds Robertson. "This provides a more dynamic and flexible account of memory organization than was previously believed. We've demonstrated that the interference between memories is actively mediated by brain areas and so may serve an important function that has previously been overlooked."

Provided by Beth Israel Deaconess Medical Center

New Instituto de Neuroartes page on Facebook

New gene for intellectual disability discovered


July 15th, 2011 in Genetics

A gene linked to intellectual disability was found in a study involving the Centre for Addiction and Mental Health (CAMH) – a discovery that was greatly accelerated by international collaboration and new genetic sequencing technology, which is now being used at CAMH.

CAMH Senior Scientist Dr. John Vincent and colleagues identified defects on the gene, MAN1B1, among five families in which 12 children had intellectual disability. The results will be published in the July issue of the American Journal of Human Genetics.

Intellectual disability is a broad term describing individuals with limitations in mental abilities and in functioning in daily life. It affects one to three per cent of the population, and is often caused by genetic defects.

The individuals affected had similar physical features, and all had delays in walking and speaking. Some learned to care for themselves, while others needed help bathing and dressing. In addition, some had epilepsy or problems with overeating.

All were found to have two copies of a defective MAN1B1 gene, one inherited from each parent. These were different types of mutations on the same gene – yet the outcome, intellectual disability, was the same in different families – confirming that this gene was the cause of the disorder.

"This mutation was seen in five families, which is one of the most seen so far for genes causing this form of recessive intellectual disability," said Dr. Vincent, who last year made a breakthrough by identifying the PTCHD1 gene responsible for autism.

MAN1B1 encodes an enzyme that has a quality-control function in cells. This enzyme is believed to help "proofread" specific proteins after they are created in cells, recycling faulty ones rather than allowing them to be released from the cell into the body. With the defective gene, this quality control does not occur.

"This is a process that occurs throughout a person's lifetime, and is probably involved in most tissues in the body, so it is surprising that the children affected didn't have more symptoms," said Dr. Vincent, who is also head of the Molecular Neuropsychiatry and Development Laboratory at CAMH.

The discovery benefited from collaboration and the availability of new technology. Initially, the CAMH-Pakistani research team identified four families in Pakistan with multiple affected family members. Because there had been intermarriage among cousins in these families, the researchers were able to begin mapping genes in particular regions of risk.

By teaming up with researchers from the Max Planck Institute in Berlin, Germany, who were conducting similar work on a family in Iran, they were able to focus on three genes of interest. These three genes were identified using next-generation sequencing, which sped up the identification of the MAN1B1 gene. In addition, a University of Georgia scientist, Dr. Kelley Moremen, recreated one of the mutations in MAN1B1 in cells, which resulted in a 1,300-fold decrease in enzyme activity.

To date, MAN1B1 is the eighth known gene connected with recessive intellectual disability, but there are likely many more involved. "We would like to screen children with intellectual disability in a western population," said Dr. Vincent.

Provided by Centre for Addiction and Mental Health

"New gene for intellectual disability discovered." July 15th, 2011. http://medicalxpress.com/news/2011-07-gene-intellectual-disability.html

Friday, July 15, 2011

When the brain remembers but the patient doesn't.



July 14th, 2011 in Neuroscience

Brain damage can cause significant changes in behaviour, such as loss of cognitive skills, but also reveals much about how the nervous system deals with consciousness. New findings reported in the July 2011 issue of Elsevier's Cortex demonstrate how the unconscious brain continues to process information even when the conscious brain is incapacitated.

Dr Stéphane Simon and collaborators in Professor Alan Pegna's laboratory at Geneva University Hospital studied a patient who had suffered brain damage in an accident and developed prosopagnosia, or face blindness. They measured her non-conscious responses to familiar faces using several physiological measures of brain activity, including fMRI and EEG. The patient was shown photographs of unknown and famous people, some of whom were famous before the onset of her prosopagnosia (and others who had become famous more recently). Despite the fact that the patient could not recognize any of the famous faces, her brain activity responded to the faces that she would have recognized before the onset of her condition.

"The results of this study demonstrate that implicit processing might continue to occur despite the presence of an apparent impairment in conscious processing," says Professor Pegna, "The study has also shed light on what is required for our brain to understand what we see around us. Together with other research findings, this study suggests that the collaboration of several cerebral structures in a specific temporal order is necessary for visual awareness to arise."

More information: "When the brain remembers, but the patient doesn't: Converging fMRI and EEG evidence for covert recognition in a case of prosopagnosia," Cortex, Volume 47, Issue 7 (July 2011).

Thursday, July 7, 2011

An account of the path to realizing tools for controlling brain circuits with light


The Birth of Optogenetics


By Edward S. Boyden | July 1, 2011

Blue light hits a neuron engineered to express opsin molecules on its surface, opening a channel through which ions pass into the cell—activating the neuron. Credit: MIT McGovern Institute, Julie Pryor, Charles Jennings, Sputnik Animation, Ed Boyden

For a few years now, I’ve taught a course at MIT called “Principles of Neuroengineering.” The idea of the class is to get students thinking about how to create neurotechnology innovations—new inventions that can solve outstanding scientific questions or address unmet clinical needs. Designing neurotechnologies is difficult because of the complex properties of the brain: its inaccessibility, heterogeneity, fragility, anatomical richness, and high speed of operation. To illustrate the process, I decided to write a case study about the birth and development of an innovation with which I have been intimately involved: optogenetics—a toolset of genetically encoded molecules that, when targeted to specific neurons in the brain, allow the activity of those neurons to be driven or silenced by light.
A strategy: controlling the brain with light

As an undergraduate at MIT, I studied physics and electrical engineering and got a good deal of firsthand experience in designing methods to control complex systems. By the time I graduated, I had become quite interested in developing strategies for understanding and engineering the brain. After graduating in 1999, I traveled to Stanford to begin a PhD in neuroscience, setting up a home base in Richard Tsien’s lab. In my first year at Stanford I was fortunate enough to meet many nearby biologists willing to do collaborative experiments, ranging from attempting the assembly of complex neural circuits in vitro to behavioral experiments with rhesus macaques. For my thesis work, I joined the labs of Richard Tsien and of Jennifer Raymond in spring 2000, to study how neural circuits adapt in order to control movements of the body as the circumstances in the surrounding world change.

In parallel, I started thinking about new technologies for controlling the electrical activity of specific neuron types embedded within intact brain circuits. That spring, I discussed this problem—during brainstorming sessions that often ran late into the night—with Karl Deisseroth, then a Stanford MD-PhD student also doing research in Tsien’s lab. We started to think about delivering stretch-sensitive ion channels to specific neurons, and then tethering magnetic beads selectively to the channels, so that applying an appropriate magnetic field would result in the bead’s moving and opening the ion channel, thus activating the targeted neurons.

By late spring 2000, however, I had become fascinated by a simpler and potentially easier-to-implement approach: using naturally occurring microbial opsins, which would pump ions into or out of neurons in response to light. Opsins had been studied since the 1970s because of their fascinating biophysical properties, and for the evolutionary insights they offer into how life forms use light as an energy source or sensory cue.1 These membrane-spanning microbial molecules—proteins with seven helical domains—react to light by transporting ions across the lipid membranes of cells in which they are genetically expressed. (See the illustration above.) For this strategy to work, an opsin would have to be expressed in the neuron’s lipid membrane and, once in place, efficiently perform this ion-transport function. One reason for optimism was that bacteriorhodopsin had successfully been expressed in eukaryotic cell membranes—including those of yeast cells and frog oocytes—and had pumped ions in response to light in these heterologous expression systems. And in 1999, researchers had shown that, although many halorhodopsins might work best in the high salinity environments in which their host archaea naturally live (i.e., in very high chloride concentrations), a halorhodopsin from Natronomonas pharaonis (Halo/NpHR) functioned best at chloride levels comparable to those in the mammalian brain.

I was intrigued by this, and in May 2000 I e-mailed the opsin pioneer Janos Lanyi, asking for a clone of the N. pharaonis halorhodopsin, for the purpose of actively controlling neurons with light. Janos kindly asked his collaborator Richard Needleman to send it to me. But the reality of graduate school was setting in: unfortunately, I had already left Stanford for the summer to take a neuroscience class at the Marine Biology Laboratory in Woods Hole. I asked Richard to send the clone to Karl. When I returned to Stanford in the fall, I was so busy learning all the skills I would need for my thesis work on motor control that the opsin project took a backseat for a while.
The channelrhodopsin collaboration

In 2002 a pioneering paper from the lab of Gero Miesenböck showed that genetic expression of a three-gene Drosophila phototransduction cascade in neurons allowed the neurons to be excited by light, and suggested that the ability to activate specific neurons with light could serve as a tool for analyzing neural circuits.3 But the light-driven currents mediated by this system were slow, and this technical issue may have been a factor that limited adoption of the tool.

This paper was fresh in my mind when, in fall 2003, Karl e-mailed me to express interest in revisiting the magnetic-bead stimulation idea as a potential project that we could pursue together later—when he had his own lab, and I had finished my PhD and could join his lab as a postdoc. Karl was then a postdoctoral researcher in Robert Malenka’s lab (also at Stanford), and I was about halfway through my PhD. We explored the magnetic-bead idea between October 2003 and February 2004. Around that time I read a just-published paper by Georg Nagel, Ernst Bamberg, Peter Hegemann, and colleagues, announcing the discovery of channelrhodopsin-2 (ChR2), a light-gated cation channel and noting that the protein could be used as a tool to depolarize cultured mammalian cells in response to light.4

In February 2004, I proposed to Karl that we contact Georg to see if they had constructs they were willing to distribute. Karl got in touch with Georg in March, obtained the construct, and inserted the gene into a neural expression vector. Georg had made several further advances by then: he had created fusion proteins of ChR2 and yellow fluorescent protein, in order to monitor ChR2 expression, and had also found a ChR2 mutant with improved kinetics. Furthermore, Georg commented that in cell culture, ChR2 appeared to require little or no chemical supplementation in order to operate (in microbial opsins, the chemical chromophore all-trans-retinal must be attached to the protein to serve as the light absorber; it appeared to exist at sufficient levels in cell culture).

Finally, we were getting the ball rolling on targetable control of specific neural types. Karl optimized the gene expression conditions, and found that neurons could indeed tolerate ChR2 expression. Throughout July, working in off-hours, I debugged the optics of the Tsien-lab rig that I had often used in the past. Late at night, around 1 a.m. on August 4, 2004, I went into the lab, put a dish of cultured neurons expressing ChR2 into the microscope, patch-clamped a glowing neuron, and triggered the program that I had written to pulse blue light at the neurons. To my amazement, the very first neuron I patched fired precise action potentials in response to blue light. That night I collected data that demonstrated all the core principles we would publish a year later in Nature Neuroscience, announcing that ChR2 could be used to depolarize neurons.5 During that long, exciting first night of experimentation in 2004, I determined that ChR2 was safely expressed and physiologically functional in neurons. The neurons tolerated expression levels of the protein that were high enough to mediate strong neural depolarizations. Even with brief pulses of blue light, lasting just a few milliseconds, the magnitude of expressed-ChR2 photocurrents was large enough to mediate single action potentials in neurons, thus enabling temporally precise driving of spike trains. Serendipity had struck—the molecule was good enough in its wild-type form to be used in neurons right away. I e-mailed Karl, “Tired, but excited.” He shot back, “This is great!!!!!”
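For readers who want a concrete picture of the stimulus being described, the sketch below builds the kind of millisecond-scale light pulse train mentioned here. The pulse width, rate, duration, and command-clock rate are assumed values for illustration only; they are not the settings used in these experiments.

```python
# Illustrative light-pulse-train generator for a stimulus command signal.
# Pulse width, rate, duration, and sampling rate are assumed values,
# not the actual parameters used in the original experiments.
import numpy as np

def light_pulse_train(rate_hz=20.0, pulse_ms=5.0, duration_s=1.0, fs=1000):
    """Return (t, command): a time axis in seconds and a 0/1 trace that is
    1 whenever the light source should be on."""
    t = np.arange(0, duration_s, 1.0 / fs)
    command = np.zeros_like(t)
    onsets = np.arange(0, duration_s, 1.0 / rate_hz)  # pulse start times
    for onset in onsets:
        command[(t >= onset) & (t < onset + pulse_ms / 1000.0)] = 1.0
    return t, command

# Usage (hypothetical): feed `command` to a shutter or LED driver so that
# each brief blue-light pulse can trigger a single, precisely timed spike.
t, command = light_pulse_train()
```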

Transitions and optical neural silencers

In January 2005, Karl finished his postdoc and became an assistant professor of bioengineering and psychiatry at Stanford. Feng Zhang, then a first-year graduate student in chemistry (and now an assistant professor at MIT and at the Broad Institute), joined Karl’s new lab, where he cloned ChR2 into a lentiviral vector, and produced lentivirus that greatly increased the reliability of ChR2 expression in neurons. I was still working on my PhD, and continued to perform ChR2 experiments in the Tsien lab. Indeed, about half the ChR2 experiments in our first optogenetics paper were done in Richard Tsien’s lab, and I owe him a debt of gratitude for providing an environment in which new ideas could be pursued. I regret that, in our first optogenetics paper, we did not acknowledge that many of the key experiments had been done there. When I started working in Karl’s lab in late March 2005, we carried out experiments to flesh out all the figures for our paper, which appeared in Nature Neuroscience in August 2005, a year after that exhilarating first discovery that the technique worked.

Around that same time, Guoping Feng, then leading a lab at Duke University (and now a professor at MIT), began to make the first transgenic mice expressing ChR2 in neurons.6 Several other groups, including the Yawo, Herlitze, Landmesser, Nagel, Gottschalk, and Pan labs, rapidly published papers demonstrating the use of ChR2 in neurons in the months following.7,8,9,10 Clearly, the idea had been in the air, with many groups chasing the use of channelrhodopsin in neurons. These papers showed, among many other groundbreaking results, that no chemicals were needed to supplement ChR2 function in the living mammalian brain.

Almost immediately after I finished my PhD in October 2005, two months after our ChR2 paper came out, I began the faculty job search process. At the same time, I started a position as a postdoctoral researcher with Karl and with Mark Schnitzer at Stanford. The job-search process ended up consuming much of my time, and being on the road, I began doing bioengineering invention consulting in order to learn about other new technology areas that could be brought to bear on neuroscience. I accepted a faculty job offer from the MIT Media Lab in September 2006, and began the process of setting up a neuroengineering research group there.

Around that time, I began a collaboration with Xue Han, my then girlfriend (and a postdoctoral researcher in the lab of Richard Tsien), to revisit the original idea of using the N. pharaonis halorhodopsin to mediate optical neural silencing. Back in 2000, Karl and I had planned to pursue this jointly; there was now the potential for competition, since we were working separately. Xue and I ordered the gene to be synthesized in codon-optimized form by a DNA synthesis company, and, using the same Tsien-lab rig that had supported the channelrhodopsin paper, Xue acquired data showing that this halorhodopsin could indeed silence neural activity. Our paper11 appeared in the March 2007 issue of PLoS ONE; Karl’s group, working in parallel, published a paper in Nature a few weeks later, independently showing that this halorhodopsin could support light-driven silencing of neurons, and also including an impressive demonstration that it could be used to manipulate behavior in Caenorhabditis elegans.12 Later, both our groups teamed up to file a joint patent on the use of this halorhodopsin to silence neural activity. As a testament to the unanticipated side effects of following innovation where it leads you, Xue and I got married in 2009 (and she is now an assistant professor at Boston University).

I continued to survey a wide variety of microorganisms for better silencing opsins: the inexpensiveness of gene synthesis meant that it was possible to rapidly obtain genes codon-optimized for mammalian expression, and to screen them for new and interesting light-drivable neural functions. Brian Chow (now an assistant professor at the University of Pennsylvania) joined my lab at MIT as a postdoctoral researcher, and began collaborating with Xue. In 2008 they identified a new class of neural silencer, the archaerhodopsins, which were not only capable of high-amplitude neural silencing—the first such opsin that could support 100 percent shutdown of neurons in the awake, behaving animal—but also were capable of rapid recovery after having been illuminated for extended durations, unlike halorhodopsins, which took minutes to recover after long-duration illumination.13 Interestingly, the archaerhodopsins are light-driven outward pumps, similar to bacteriorhodopsin—they hyperpolarize neurons by pumping protons out of the cells. However, the resultant pH changes are as small as those produced by channelrhodopsins (which have proton conductances a million times greater than their sodium conductances), and well within the safe range of neuronal operation. Intriguingly, we discovered that the H. salinarum bacteriorhodopsin, the very first opsin characterized in the early 1970s, was able to mediate decent optical neural silencing, suggesting that perhaps opsins could have been applied to neuroscience decades ago.
Beyond luck: systematic discovery and engineering of optogenetic tools

An essential aspect of furthering this work is the free and open distribution of these optogenetic tools, even prior to publication. To facilitate teaching people how to use these tools, our lab regularly posts white papers on our website* with details on reagents and optical hardware (a complete optogenetics setup costs as little as a few thousand dollars for all required hardware and consumables), and we have also partnered with nonprofit organizations such as Addgene and the University of North Carolina Gene Therapy Center Vector Core to distribute DNA and viruses, respectively. We regularly host visitors to observe experiments being done in our lab, seeking to encourage the community building that has been central to the development of optogenetics from the beginning.

As a case study, the birth of optogenetics offers a number of interesting insights into the blend of factors that can lead to the creation of a neurotechnological innovation. The original optogenetic tools were identified partly through serendipity, guided by a multidisciplinary convergence and a neuroscience-driven knowledge of what might make a good tool. Clearly, the original serendipity that fostered the formation of this concept, and that accompanied the initial quick try to see if it would work in nerve cells, has now given way to the systematized luck of bioengineering, with its machines and algorithms designed to optimize the chances of finding something new. Many labs, driven by genomic mining and mutagenesis, are reporting the discovery of new opsins with improved light and color sensitivities and new ionic properties. It is to be hoped, of course, that as this systematized luck accelerates, we will stumble upon more innovations that can aid in dissecting the enormous complexity of the brain—beginning the cycle of invention again.
Putting the toolbox to work

These optogenetic tools are now in use by many hundreds of neuroscience and biology labs around the world. Opsins have been used to study how neurons contribute to information processing and behavior in organisms including C. elegans, Drosophila, zebrafish, mouse, rat, and nonhuman primate. Light sources such as conventional mercury and xenon lamps, light-emitting diodes, scanning lasers, femtosecond lasers, and other common microscopy equipment suffice for in vitro use.

In vivo mammalian use of these optogenetic reagents has been greatly facilitated by the availability of inexpensive lasers with optical-fiber outputs; the free end of the optical fiber is simply inserted into the brain of the live animal when needed,14 or coupled at the time of experimentation to an implanted optical fiber.

For mammalian systems, viruses bearing genes encoding for opsins have proven popular in experimental use, due to their ease of creation and use. These viruses achieve their specificity either by infecting only specific neurons, or by containing regulatory promoters that constrain opsin expression to certain kinds of neurons.

An increasing number of transgenic mouse lines are also now being created, in which an opsin is expressed in a given neuron type through transgenic methodologies. One popular hybrid strategy is to inject a virus that contains a Cre-activated genetic cassette encoding for the opsin into one of the burgeoning number of mice that express Cre recombinase in specific neuron types, so that the opsin will only be produced in Cre recombinase-expressing neurons. 15

In 2009, in collaboration with the labs of Robert Desimone and Ann Graybiel at MIT, we published the first use of channelrhodopsin-2 in the nonhuman primate brain, showing that it could safely and effectively mediate neuron type–specific activation in the rhesus macaque without provoking neuron death or functional immune reactions. 16 This paper opened up a possibility of translating the technique of optical neural stimulation into the clinic as a treatment modality, although clearly much more work is required to understand this potential application of optogenetics.

Edward Boyden leads the Synthetic Neurobiology Group at MIT, where he is the Benesse Career Development Professor and associate professor of biological engineering and brain and cognitive science at the MIT Media Lab and the MIT McGovern Institute.
References

1. D. Oesterhelt, W. Stoeckenius, "Rhodopsin-like protein from the purple membrane of Halobacterium halobium," Nat New Biol, 233:149-52, 1971.
2. D. Okuno et al., "Chloride concentration dependency of the electrogenic activity of halorhodopsin," Biochemistry, 38:5422-29, 1999.
3. B.V. Zemelman et al., "Selective photostimulation of genetically chARGed neurons," Neuron, 33:15-22, 2002.
4. G. Nagel et al., "Channelrhodopsin-2, a directly light-gated cation-selective membrane channel," PNAS, 100:13940-45, 2003.
5. E.S. Boyden et al., "Millisecond-timescale, genetically targeted optical control of neural activity," Nat Neurosci, 8:1263-68, 2005.
6. B.R. Arenkiel et al., "In vivo light-induced activation of neural circuitry in transgenic mice expressing channelrhodopsin-2," Neuron, 54:205-18, 2007.
7. T. Ishizuka et al., "Kinetic evaluation of photosensitivity in genetically engineered neurons expressing green algae light-gated channels," Neurosci Res, 54:85-94, 2006.
8. X. Li et al., "Fast noninvasive activation and inhibition of neural and network activity by vertebrate rhodopsin and green algae channelrhodopsin," PNAS, 102:17816-21, 2005.
9. G. Nagel et al., "Light activation of channelrhodopsin-2 in excitable cells of Caenorhabditis elegans triggers rapid behavioral responses," Curr Biol, 15:2279-84, 2005.
10. A. Bi et al., "Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration," Neuron, 50:23-33, 2006.
11. X. Han, E.S. Boyden, "Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution," PLoS ONE, 2:e299, 2007.
12. F. Zhang et al., "Multimodal fast optical interrogation of neural circuitry," Nature, 446:633-39, 2007.
13. B.Y. Chow et al., "High-performance genetically targetable optical neural silencing by light-driven proton pumps," Nature, 463:98-102, 2010.
14. A.M. Aravanis et al., "An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology," J Neural Eng, 4:S143-56, 2007.
15. D. Atasoy et al., "A FLEX switch targets Channelrhodopsin-2 to multiple cell types for imaging and long-range circuit mapping," J Neurosci, 28:7025-30, 2008.
16. X. Han et al., "Millisecond-timescale optical control of neural dynamics in the nonhuman primate brain," Neuron, 62:191-98, 2009.

Sunday, July 3, 2011

Chimps Are Good Listeners, Too.



by Michael Balter on 1 July 2011.

I can talk, too! Panzee can communicate with humans using a board filled with symbols.

Most researchers regard language as unique to humans, something that makes our species special. But they fiercely debate how the ability to speak and listen evolved. Did speech require our species to evolve novel capabilities, or did we simply combine and enhance various abilities that other animals have, too? A new study with a language-trained chimp suggests that when it comes to understanding speech, the basic equipment might already have been present in our apelike ancestors.

The notion that language evolved only in the human lineage and has no parallels in other animals has long been attributed to the linguist Noam Chomsky, who argued beginning in the 1960s that humans had a special "language organ" unique to us. But more recent studies have shown that other species are surprisingly good at communication, and many researchers have abandoned this idea—even Chomsky himself no longer holds to it strictly.

However, some scientists continue to argue that humans have evolved unique ways to perceive and understand speech that allow us to use words as symbols for complex meanings. These contentions are based in part on a notable human talent: We can recognize words and understand entire sentences even if the sounds of the words have been dramatically altered until they are a pale shadow of their linguistically meaningful selves.

So a team of researchers turned to Panzee, a 25-year-old chimpanzee, to test the assumption that only humans have this talent. Humans raised Panzee from the age of 8 days, and her caregivers exposed her to a rich diet of English language conversation about food, people, objects, and events. Panzee can't talk, so she communicates with those around her using a lexigram board of symbols corresponding to English words (see photo). She can point to 128 different lexigrams when she hears the corresponding spoken word.

A team led by Lisa Heimbauer, a cognitive psychologist at Georgia State University in Atlanta, set out to see how well Panzee could duplicate the human talent of understanding English word sounds when they are so badly distorted that they are difficult to recognize. The team used two electronic methods to distort the words: noise-vocoded (NV) synthesis, which makes words sound very raspy and breathy, and sine-wave (SW) synthesis, which reduces words to just three tones, something like converting a rich color photograph into a stripped-down black-and-white version. (The words included chimp-friendly terms such as banana, potato, tickle, and balloon.)
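For readers curious how such distortions are produced, here is a minimal Python sketch of noise-vocoded synthesis: the speech signal is split into a few frequency bands, only the slow loudness envelope of each band is kept, and that envelope is used to modulate band-limited noise. The band count, filter design, and envelope cutoff are illustrative assumptions, not the parameters used in the study, and sine-wave synthesis (which tracks the speech formants with pure tones) is not shown.

```python
# Minimal noise-vocoding sketch. The band count, filter design, and
# envelope cutoff are illustrative assumptions, not the study's settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_bands=4, env_cutoff_hz=30.0):
    """Keep only the slow amplitude envelope in each frequency band of a
    speech signal and use it to modulate band-limited noise."""
    edges = np.geomspace(100.0, min(8000.0, 0.45 * fs), n_bands + 1)  # band edges (Hz)
    env_sos = butter(2, env_cutoff_hz, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)                     # speech in this band
        env = sosfiltfilt(env_sos, np.abs(band)).clip(0.0)  # slow loudness envelope
        noise = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += env * noise                                  # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)              # normalize to +/- 1

# Usage (hypothetical): x is a mono speech waveform, fs its sample rate.
# vocoded = noise_vocode(x, fs)
```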

Panzee performed well above chance when she heard distorted versions of 48 words that she knew and had to choose among four lexigrams, the team reports this week in Current Biology. Thus, while a chance result would have been one out of four correct choices, or 25%, Panzee scored 55% with NV words and about 40% with SW words, which are particularly difficult to understand even for humans. This was almost as good as the performance of 32 human subjects using the same 48 words, who chose the correct NV word 70% of the time but, like Panzee, the correct SW word only 40% of the time.
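As a rough check on "well above chance," the reported numbers can be plugged into a simple binomial tail calculation. The sketch below assumes each of the 48 words was tested once and that trials were independent, simplifications the article does not spell out; it merely illustrates why 55% and 40% correct are far beyond the 25% expected from guessing among four lexigrams.

```python
# Rough binomial check: how likely is a score this high if Panzee were
# guessing among four lexigrams (p = 0.25 per trial)? Assumes one trial
# per word and independent trials (simplifying assumptions).
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_words, chance = 48, 0.25
for label, pct in [("NV", 0.55), ("SW", 0.40)]:
    k = round(pct * n_words)  # approximate number of correct choices
    print(f"{label}: {k}/{n_words} correct, "
          f"P(>= {k} by guessing) = {binom_tail(k, n_words, chance):.1e}")
```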

Heimbauer and her colleagues say that Panzee's strong performance argues against the idea that humans evolved highly attuned speech-recognition abilities only after they split from the chimp line some 5 million to 7 million years ago. The finding that Panzee passed a challenging test for speech recognition implies, the team writes, that "even quite sophisticated human speech perception phenomena may be within reach for some nonhumans." Still, the team says that its experiments don't rule out that humans have evolved additional speech-perception abilities that our ancestors and chimps lacked.

The authors have come up with a "nice result," says biologist Johan Bolhuis of Utrecht University in the Netherlands, but it shouldn't come as "a big surprise." For example, zebra finches have been shown to be able to distinguish very small sound differences in words spoken by humans, including ones that differ by only one vowel. That's a talent Bolhuis considers "even more remarkable" than Panzee's because it so closely parallels the way humans perceive speech.

J.D. Trout, a psychologist and philosopher at Loyola University Chicago in Illinois, thinks that the authors are far from proving their case. "These experiments don't bear on the question of whether speech is a special adaptation of humans," Trout insists, noting that the human subjects had to pull matching words out of their vocabularies of about 30,000 words, whereas Panzee had a much smaller vocabulary to search through. But Heimbauer points out that unlike the human subjects, Panzee had never been exposed to distorted speech before the experiment, making her performance all the more impressive.