martes, 31 de agosto de 2010
Eye movements reveal readers' wandering minds
August 30th, 2010 in Medicine & Health / Psychology & Psychiatry
It's not just you... everybody zones out when they're reading. For a new study published in Psychological Science, a journal of the Association for Psychological Science, scientists recorded eye movements during reading and found that the eyes keep moving when the mind wanders -- but they don't move in the same way as they do when you're paying attention.
Erik Reichle, a psychological scientist at the University of Pittsburgh, is interested in how the brain controls eye movements. "The goal is to understand how things like word comprehension and visual attention control eye movements," he says.
Most people who study reading think that the eyes sample the information on the page and the reading mind essentially takes what it's given, without giving much direction back to the eyes.
Reichle suspected that was wrong, and thought looking at mindless reading would be an interesting way to illuminate what happens when the mind is engaged. He cowrote the study with Andrew E. Reineberg of the University of Pittsburgh and Jonathan W. Schooler of the University of California, Santa Barbara.
Four undergraduate students at the University of Pittsburgh volunteered for the project. Each one came to the lab for a dozen or more one-hour reading sessions of Jane Austen's Sense and Sensibility, chosen because it's "fairly easy but a little bit dry," says Reichle. "We started with Kafka's The Trial, but people found it too engaging." While the students read the book on a screen, a computer tracked their eye movements. They were asked to push a button marked "Z" when they noticed themselves "zoning out." The computer also asked every few minutes if they'd just been paying attention or zoning out.
The eyes behaved differently when a person was paying attention than when the mind was wandering. In normal reading, the eye fixates on a word, then zips to another word, and it spends longer on words that are less common. But when someone's mind was wandering, the eyes did not follow these patterns, and they fixated for longer on individual words.
"It was almost like they were just mechanically plodding along," Reichle says. This suggests that the prevailing belief in his field is wrong—in fact, when people are reading, eye movements are strongly linked to the language processing going on in the brain.
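The decoupling Reichle describes, fixation durations that stop tracking word properties such as frequency, suggests a simple way one might flag possible zoning out from eye-movement data alone. The sketch below is a toy illustration of that idea, not the authors' actual analysis; the window size and correlation threshold are arbitrary choices.

```python
import random
from statistics import fmean

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def wandering_flags(fix_ms, log_freq, win=25, r_thresh=-0.2):
    """Toy mind-wandering detector in the spirit of the study: during
    attentive reading, fixation duration is negatively coupled to word
    frequency; sliding windows where that coupling vanishes are flagged
    as possible zoning out. Window size and threshold are illustrative."""
    flags = []
    for start in range(len(fix_ms) - win + 1):
        r = pearson(fix_ms[start:start + win], log_freq[start:start + win])
        flags.append(r > r_thresh)  # coupling gone -> maybe zoned out
    return flags
```

Fed simulated data in which the first stretch of "reading" couples fixation time to word frequency and a later stretch does not, the detector flags mostly the decoupled stretch.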
Provided by Association for Psychological Science
sábado, 28 de agosto de 2010
Healthy ears hear the first sound, ignoring the echoes
August 26th, 2010 in Medicine & Health / Neuroscience
Voices carry, reflect off objects and create echoes. Most people rarely hear the echoes; instead they only process the first sound received. For the hard of hearing, though, being in an acoustically challenging room can be a problem. For them, echoes carry. Ever listen to a lecture recorded in a large room?
That most people only process the first-arriving sound is not new. Physicist Joseph Henry, the first secretary of the Smithsonian Institution, noted it in 1849, dubbing it the precedence effect. Since then, classrooms, lecture halls and public-gathering places have been designed to reduce reverberating sounds. And scientists have been trying to identify a precise neural mechanism that shuts down trailing echoes.
In a new paper published in the Aug. 26 issue of the journal Neuron, University of Oregon scientists Brian S. Nelson, a postdoctoral researcher, and Terry T. Takahashi, professor of biology and member of the UO Institute of Neuroscience, suggest that the filtering process is really simple.
When a sound reaching the ear is loud enough, auditory neurons simply accept that sound and ignore subsequent reverberations, Takahashi said. "If someone were to call out your name from behind you, that caller's voice would reach your ears directly from his or her mouth, but those sound waves will also bounce off your computer monitor and arrive at your ears a little later and get mixed in with the direct sound. You aren't even aware of the echo."
Takahashi studies hearing in barn owls with the goal of understanding the fundamentals of sound processing so that future hearing aids, for example, might be developed. In studying how his owls hear, he usually relies on clicking sounds one at a time.
For the new study, funded by the National Institute on Deafness and Other Communication Disorders, Nelson said: "We studied longer sounds, comparable in duration to many of the consonant sounds in human speech. As in previous studies, we showed that the sound that arrives first -- the direct sound -- evokes a neural and behavioral response that is similar to a single source. What makes our new study interesting is that the neural response to the reflection was not decreased in comparison to when two different sounds were presented."
The owls were subjected to two distinct sounds, direct and reflected, with the first-arriving sound causing neurons to discharge. "The owls' auditory neurons are very responsive to the leading edge of the peaks," said Takahashi, "and those leading edges in the echo are masked by the peak in the direct waveform that preceded it. The auditory cells therefore can't respond to the echo."
When the leading sound's modulation is not deep enough, or when more time passes between the sounds, the filtering disappears and the owls respond to the two sounds as coming from different locations, the researchers noted.
The significance, Takahashi said, is that for more than 60 years researchers have sought a physiological mechanism that actively suppresses echoes. "Our results suggest that you might not need such a sophisticated system."
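The masking account is easy to illustrate numerically: mix a sound with a delayed, attenuated copy of itself and measure how much the envelope actually jumps when the echo arrives. The sketch below is an illustrative simulation, not the stimuli or analysis from the study; the "onset salience" ratio is a crude stand-in for how strongly an onset-sensitive auditory neuron would respond.

```python
import random

FS = 20_000  # sample rate (Hz)

def noise_burst(dur_ms, seed=0):
    """A broadband noise burst, a stand-in for a brief natural sound."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(FS * dur_ms // 1000)]

def mix_with_echo(direct, delay_ms, gain=0.5):
    """Direct sound plus one delayed, attenuated reflection."""
    d = FS * delay_ms // 1000
    out = [0.0] * (len(direct) + d)
    for i, s in enumerate(direct):
        out[i] += s
        out[i + d] += gain * s
    return out

def envelope(x, win_ms=2):
    """Crude amplitude envelope: rectified signal, boxcar-smoothed."""
    w = FS * win_ms // 1000
    rect = [abs(s) for s in x]
    return [sum(rect[max(0, i - w):i + 1]) / w for i in range(len(x))]

def onset_salience(sound, at_ms, pre_ms=3):
    """Ratio of the envelope just after a moment to just before it:
    a rising edge out of silence scores high, a rise buried in an
    already-loud waveform scores near 1."""
    env = envelope(sound)
    at, pre = FS * at_ms // 1000, FS * pre_ms // 1000
    before = sum(env[at - pre:at]) / pre + 1e-9
    after = sum(env[at:at + pre]) / pre
    return after / before
```

When the echo arrives while the direct sound is still playing, its leading edge barely raises the envelope; when it arrives after the direct sound has ended, the same echo produces a sharp rise from silence.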
Provided by University of Oregon
viernes, 27 de agosto de 2010
Consider the BBC Domesday Project, undertaken in 1986 to mark the 900th anniversary of the original Domesday Book, a land-use survey of England commissioned by William the Conqueror in 1086. For the latter-day survey of the island, thousands of Britons contributed text, photos and video that were published on two custom laser disks.
But just 15 years later, it was impossible to access those disks without lots of custom hardware and extensive software emulation. Currently the Centre for Computing History in Haverhill, England, has a functional emulation and hopes to post the contents to the Web.
Meanwhile, the original Domesday Book, handwritten on sheepskin, remains in the British archives, usable after nine centuries by anyone literate in Latin.
Anyone with data stored on 5.25-inch floppies or text in WordStar format faces a problem similar to the one that befell the BBC Domesday Project. The digital data we are generating wholesale will very likely become unusable within our lifetimes unless we take steps to preserve it.
The situation cannot be blamed entirely on the computer industry's treadmill of planned obsolescence. In essence, digital storage technology has inherent drawbacks that make paper look immortal.
Source: ComputerWorld, August 2010.
miércoles, 25 de agosto de 2010
Surgically implanted biosynthetic corneas have partially restored the sight of some patients, according to a small two-year clinical trial conducted in Sweden whose results were published Wednesday in the United States. The study, which involved ten participants, showed that implantation of a biosynthetic cornea, made of recombinant human collagen produced by a process developed by the American biotechnology firm FibroGen, helped regenerate and repair damaged eye tissue.
Two years after implantation, the corneas remained fully functional and had helped regenerate, within the implant, cells originating from the subject's own cornea as well as nerves severed during the operation, report the researchers, whose study appears in the August 25 issue of the medical journal Science Translational Medicine, published by the journal Science.
In addition, the blink reflex and the tear film, the thin layer of liquid maintained on the surface of the cornea that protects the epithelium, were restored in the participants. Visual acuity improved in six patients, was unchanged in two and declined in two others.
None of the patients experienced a rejection reaction or required immunosuppressive therapy, which is common in patients receiving organ transplants, including corneas.
The ten patients in the study suffered from advanced keratoconus, a slow, progressive conical deformation of the center of the cornea. Each received a biosynthetic cornea implant in one eye only.
viernes, 20 de agosto de 2010
Brain network links cognition, motivation
August 19th, 2010 in Medicine & Health / Neuroscience
Whether it's sports, poker or the high-stakes world of business, there are those who always find a way to win when there's money on the table.
Now, for the first time, psychology researchers at Washington University in St. Louis are unraveling the workings of a novel brain network that may explain how these "money players" manage to keep their heads in the game.
Findings suggest that a specific brain area helps people use the prospect of success to better prepare their thoughts and actions, thus increasing the odds that a reward will be won.
The study, published Aug. 4 in the Journal of Neuroscience, identified a brain region about two inches above the left eyebrow that sprang into action whenever study participants were shown a dollar sign, a predetermined cue that a correct answer on the task at hand would result in a financial reward.
Using what researchers believe are short bursts of dopamine — the brain's chemical reward system — the brain region then began coordinating interactions between the brain's cognitive control and motivation networks, apparently priming the brain for a looming "show me the money" situation.
"The surprising thing we see is that motivation acts in a preparatory manner," says Adam C. Savine, lead author of the study and a doctoral candidate in psychology at Washington University. "This region gears up when the money cue is on."
Savine and colleague Todd S. Braver, PhD, professor of psychology in Arts & Sciences, tested 16 subjects in an experiment that required appropriate preparation for one of two possible tasks, based upon advance information provided at the same time as the money cue. Monetary rewards were offered on trials in which the money cue appeared (which happened randomly on half the trials), provided that the subjects answered accurately and within a specified timeframe. Obtaining the reward was most likely when subjects used the advance task information most effectively.
Using functional magnetic resonance imaging (fMRI), the researchers detected a network of eight different brain regions that responded to the multitasking challenge and two that responded to both the challenge and the motivational cue (a dollar sign, the monetary reward cue for a swift, correct answer).
In particular, Savine and Braver found that the left dorsolateral prefrontal cortex (DLPFC), located in the brain approximately two inches above the left eyebrow, is a key area that both predicts a win, or successful outcome, and prepares the motivational cognitive control network to win again.
Simply flashing the dollar-sign cue sparked immediate activation in the DLPFC region and it began interacting with other cognitive control and motivational functions in the brain, effectively putting these areas on alert that there was money to be won in the challenge ahead.
"In this region (left DLPFC), you can actually see the unique neural signature of the brain activity related to the reward outcome," Savine says. "It predicts a reward outcome and it's preparatory, in an integrative sort of way. The left DLPFC is the only region we found that seems to be primarily engaged when subjects get the motivational cue beforehand; it's the region that integrates that information with the task information and leads to the best task performance."
The researchers observed increased levels of oxygenated hemoglobin in the blood flowing through these regions.
The finding provides insight into the way people pursue goals and how motivation drives goal-oriented behavior. It also could provide clues to what might be happening with different populations of people with cognitive deficiencies in pursuing goals.
Savine and Braver sought to determine the way that motivation and cognitive control are represented in the brain. They found two brain networks -- one involved in reward processing, and one involved in the ability to flexibly shift mental goals (often referred to as "cognitive control") -- that were coactive on monetary reward trials. A key question that still needs to be answered is exactly how these two brain networks interact with each other.
Because the brain reward network appears to center on the brain chemical dopamine, the researchers speculate that the interactions between motivation and cognitive control depend upon "phasic bursts of dopamine."
They wanted to see how the brain works when motivation impacts task-switching, how it heightens the importance of one rewarding goal while inhibiting non-rewarding goals.
"We wanted to see what motivates us to pursue one goal in the world above all others," Savine says. "You might think that these mechanisms would have been addressed a long time ago in psychology and neuroscience, but it's not been until the advent of fMRI about 15-20 years ago that we've had the tools to address this question in humans, and any progress in this area has been very, very recent."
In this kind of test, as in the workplace, many distractions exist. In the midst of a deadline project with an "eye on the prize," the phone still rings, the background noise of printers and copying machines persists, an interesting world outside the window beckons and colleagues drop in to seek advice. A person's ability to control his or her cognition - all the things a brain takes in - is directly linked to motivation. Time also plays a big role. A project due in three weeks can be completed with some distraction; a project due tomorrow inhibits a person's response to interrupting friends and colleagues and allows clearer focus on the goal.
The researchers intend to explore the left DLPFC more as a "uniquely predictive measure of pursuing rewarded outcomes in motivated settings," Savine says. "Another key research effort will seek to more directly quantify the involvement of dopamine chemical release during these tasks."
And they may test other motivators besides money, such as social rewards, or hunger or thirst, to see "if different motivators are all part of the same reward currency, engaging the same brain network that we've shown to be activated by monetary rewards," Savine says.
Provided by Washington University in St. Louis
jueves, 19 de agosto de 2010
Brain connections break down as we age
August 18th, 2010 in Medicine & Health / Neuroscience
The circled portion of the older adult's brain on the left indicates the cross-talk between the two hemispheres that is not apparent in the younger brain on the right. Credit: Rachael Seidler
It's unavoidable: breakdowns in brain connections slow down our physical response times as we age, a new study suggests.
This slower reactivity is associated with an age-related breakdown in the corpus callosum, a part of the brain that acts as a dam during one-sided motor activities to prevent unwanted connectivity, or cross-talk, between the two halves of the brain, said Rachael Seidler, associate professor in the University of Michigan School of Kinesiology and Department of Psychology, and lead study author.
At other times the corpus callosum acts as a bridge and cross-talk is helpful, such as in certain cognitive functions or two-sided motor skills.
The U-M study is the first known to show that this cross-talk happens even while older adults are at rest, said Seidler, who also has appointments in the Institute of Gerontology and the Neuroscience Graduate Program. This resting cross-talk suggests that it is not helpful or compensatory for the two halves of the brain to communicate during one-sided motor movements because the opposite side of the brain controls the part of the body that is moving. So, when both sides of the brain talk simultaneously while one side of the body tries to move, confusion and slower responses result, Seidler said.
Previous studies have shown that cross-talk in the brain during certain motor tasks increases with age but it wasn't clear if that cross-talk helped or hindered brain function, said Seidler.
"Cross-talk is not a function of task difficulty, because we see these changes in the brain when people are not moving," Seidler said.
In some diseases where the corpus callosum is very deteriorated, such as in people with multiple sclerosis, you can see "mirror movements" during one-sided motor tasks, where both sides move in concert because there is so much communication between the two hemispheres of the brain, Seidler said. These mirror movements also happen normally in very young children before the corpus callosum is fully developed.
In the study, researchers gave joysticks to adults between the ages of 65 and 75 and measured their response times, comparing them with those of a group of adults approximately 20 to 25 years old.
Researchers then used a functional MRI to image the blood-oxygen levels in different parts of the brain, a measurement of brain activity.
"The more they recruited the other side of the brain, the slower they responded," Seidler said.
However, there is hope; just because we inevitably age doesn't mean it's our fate to react more slowly. Seidler's group is working on developing and piloting motor training studies that might rebuild or maintain the corpus callosum to limit overflow between hemispheres, she said.
A previous study done by another group showed that doing aerobic training for three months helped to rebuild the corpus callosum, she said, which suggests that physical activity can help to counteract the effects of the age-related degeneration.
Seidler's group also has a study in review that uses the same brain imaging techniques to examine disease related brain changes in Parkinson's patients.
More information: The study appeared in the journal Frontiers in Systems Neuroscience.
Provided by University of Michigan
martes, 17 de agosto de 2010
Language as a window into sociability
August 16th, 2010 in Medicine & Health / Neuroscience
People with Williams syndrome, known for their indiscriminate friendliness and ease with strangers, process spoken language differently from people with autism spectrum disorders, who are characterized by social withdrawal and isolation, researchers at the Salk Institute for Biological Studies have found.
Their findings, to be published in a forthcoming issue of Social Cognitive and Affective Neuroscience, will help to generate more specific hypotheses regarding language perception and processing in both Williams syndrome and autism spectrum disorders, as well as the core mechanisms involved in the development of communication and social skills.
"Spoken language is probably the most important form of social interaction between people and, maybe not surprisingly, we found that the way the brain processes language mirrors the contrasting social phenotypes of Williams syndrome and autism spectrum disorders," says lead author Inna Fishman, Ph.D., a neuropsychologist in the Laboratory of Cognitive Neuroscience at the Salk, who conceived the study together with Debra Mills, Ph.D., currently a reader at Bangor University in the UK.
Autism spectrum disorders and Williams syndrome are both neurodevelopmental disorders but their manifestations couldn't be more different: While autistic individuals live in a world where objects make much more sense than people do, people with Williams syndrome are social butterflies who bask in other people's attention.
Despite myriad health problems, generally low IQs and severe spatial problems, people with Williams syndrome are irresistibly drawn to strangers, look intently at people's faces, remember names and faces with ease, and are colorful and skillful storytellers.
"The discrepancy between their language ability and IQ is startling," says co-author Ursula Bellugi, professor and director of the Laboratory of Cognitive Neuroscience at the Salk Institute, who has been studying the behavioral aspects of Williams syndrome for more than 20 years. "Children with Williams syndrome have elaborate and rich vocabularies and use very descriptive, affect-rich expressive language, which makes their speech very engaging."
In contrast, many people with autism struggle to learn and use language effectively, especially when talking to other people. Chit-chat and gossip, the social glue that binds people together, mean nothing to them. Although there is considerable variation in linguistic ability, from the absence of functional speech to near-normal language skills, deficits in semantic processing, especially interpreting language in context, are common across the whole spectrum of autistic disorders, including Asperger syndrome.
"It is this divide in language skills and use, which mirrors the opposite social profiles, that led us to explore how brains of individuals with Williams syndrome and autistic spectrum disorders process language," says Fishman.
For their study, she and her colleagues compared brain response patterns linked to language processing in individuals with Williams syndrome, autism spectrum disorders and healthy controls. They focused on the so-called N400, a distinct pattern of electrical brain activity that can be measured by electrodes placed on the scalp. Known as ERP or event-related potential, the N400 is part of the normal brain response to words and other meaningful or potentially meaningful stimuli and peaks about 400 milliseconds after the stimulus.
When presented with a typical sentence that finished with an odd ending ("I take my coffee with sugar and shoes"), individuals with Williams syndrome exhibited an abnormally large N400 response indicating that they are particularly sensitive and attuned to semantic aspects of language. In contrast, individuals with ASD did not show this negativity, suggesting that the inability to integrate lexical information into the ongoing context may underlie their communicative and language impairments. Healthy people fell between those two extremes.
"The N400 reflects the cognitive demand incurred by the integration of a meaningful stimulus such as a word into a more general semantic context such as a sentence," explains Fishman. The smaller N400 effect found in the ASD group suggests that they make less use of contextual information, which makes it harder for them to grasp the meaning of words.
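Computationally, an ERP like the N400 is just an average of EEG epochs time-locked to the stimulus, with the component quantified as the mean voltage in a window (classically 300 to 500 ms). The sketch below simulates toy data to show that pipeline; the signal shape, noise level and sampling rate are illustrative assumptions, not the study's parameters.

```python
import math
import random

FS = 250         # EEG sample rate (Hz)
N_SAMPLES = 200  # 800 ms epoch after word onset

def simulate_epoch(n400_uv, rng):
    """One toy EEG epoch: background noise plus a negative deflection
    peaking ~400 ms after the word. `n400_uv` scales the deflection."""
    epoch = []
    for i in range(N_SAMPLES):
        t = i / FS
        n400 = -n400_uv * math.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
        epoch.append(n400 + rng.gauss(0, 5))  # 5 uV background noise
    return epoch

def erp(epochs):
    """Event-related potential: average of time-locked epochs.
    Averaging cancels the noise and leaves the stimulus-locked wave."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def n400_amplitude(wave, lo_s=0.3, hi_s=0.5):
    """Mean voltage in the classic 300-500 ms N400 window."""
    lo, hi = int(lo_s * FS), int(hi_s * FS)
    return sum(wave[lo:hi]) / (hi - lo)
```

With a larger simulated deflection for odd sentence endings, the averaged waveform shows a more negative N400 amplitude for "incongruent" than for "congruent" trials, mirroring the group comparison in the study.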
"Our results suggest that language skills, or their brain correlates, go hand-in-hand with the level of sociability, potentially mediating the likelihood of interaction and communication with others," she says. In fact, Fishman and her colleagues have preliminary data supporting this association between sociability and the magnitude of the N400 response among individuals with Williams syndrome.
To gain a better understanding of the neural and genetic correlates of social behavior in different social phenotypes, Bellugi's team is now integrating these findings with the exquisitely mapped genetic profile of Williams syndrome. They hypothesize that specific genes in the Williams syndrome region may be involved in the dysregulation of specific neuropeptide and hormonal systems, which could explain the observed hypersocial behavior.
Provided by Salk Institute
sábado, 14 de agosto de 2010
Busy Brains Make for Deeper Sleep
by Gisela Telis on August 9, 2010 12:01 PM
Sound sleepers share a surprising secret: a bustling brain. A new study reports that people who can sleep through anything show more frequent bursts of brain activity called sleep spindles than do their light-sleeping counterparts. Researchers say the discovery could lead to spindle-enhancing techniques that offer lighter sleepers a chance at dead-to-the-world rest.
Sleep spindles happen only during sleep, when brain waves slow. Scientists first spotted them in the 1930s but didn't suspect the spindles were involved in how deeply people sleep. For decades, researchers instead chalked up the vast variability between light and heavy sleepers to differences in sleep stage; sound sleepers were thought to spend more of their repose in the deeper stages of sleep.
Then in the 1990s, scientists tracked down the spindle's source: the thalamus, a brain region that regulates sleep and also processes and relays sensory information to the cerebral cortex. The spindle-thalamus link made it "logical that the sleep spindle would play a role in regulating sensory input while we sleep," says Jeffrey Ellenbogen, a sleep researcher at Harvard Medical School and Massachusetts General Hospital in Boston. "But no one had actually shown this."
So Ellenbogen and colleagues invited 12 people to spend 3 nights in his lab's cushy digs. Presented with comfy beds and soundproof rooms, the subjects slept peacefully through the first night while the researchers measured their baseline brain waves. During the next 2 nights, the team played an assortment of 14 different sounds, including flushing toilets, loud conversations, ringing phones, and car traffic, 40 to 50 times throughout the night, gradually raising the volume of each sound until each sleeper stirred.
When the researchers matched the sleepers' spindle production—which ranged from three to six spindles per minute and remained consistent for each sleeper across the nights—to the loudness required to rouse them, they found that sleepers with higher spindle rates were harder to wake up. The spindles seem to indicate when the thalamus is blocking noise from reaching the cortex and disrupting sleep, the team reports in the 10 August issue of Current Biology.
"This is a very elegant study," says Mathias Basner, a sleep researcher at the University of Pennsylvania School of Medicine. "We see huge variability in noise sensitivity, and this gives us a marker to predict that sensitivity." That marker could be used to gauge sleep quality in problem sleepers and assess how well sleep therapies are working for them, adds neuroscientist Matthew Walker of the University of California, Berkeley.
Knowing more about spindles could also help researchers design drugs or behavioral techniques that deepen sleep, says Ellenbogen. In the meantime, some questions linger. Researchers don't yet know why some people produce more spindles than others, or how exactly the thalamus shields stable sleepers from sound. Ellenbogen plans future studies to lay these mysteries to rest.
jueves, 12 de agosto de 2010
Synaesthetic Colour in the Brain: Beyond Colour Areas. A Functional Magnetic Resonance Imaging Study of Synaesthetes and Matched Controls
Tessa M. van Leeuwen (1), Karl Magnus Petersson (1,2), Peter Hagoort (1,2)
(1) Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
(2) Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
In synaesthesia, sensations in a particular modality cause additional experiences in a second, unstimulated modality (e.g., letters elicit colour). Understanding how synaesthesia is mediated in the brain can help to understand normal processes of perceptual awareness and multisensory integration. In several neuroimaging studies, enhanced brain activity for grapheme-colour synaesthesia has been found in ventral-occipital areas that are also involved in real colour processing. Our question was whether the neural correlates of synaesthetically induced colour and real colour experience are truly shared.
First, in a free viewing functional magnetic resonance imaging (fMRI) experiment, we located main effects of synaesthesia in left superior parietal lobule and in colour-related areas. In the left superior parietal lobe, individual differences between synaesthetes (projector-associator distinction) also influenced brain activity, confirming the importance of the left superior parietal lobe for synaesthesia. Next, we applied a repetition suppression paradigm in fMRI, in which a decrease in the BOLD (blood-oxygen-level-dependent) response is generally observed for repeated stimuli. We hypothesized that synaesthetically induced colours would lead to a reduction in BOLD response for subsequently presented real colours, if the neural correlates were overlapping. We did find BOLD suppression effects induced by synaesthesia, but not within the colour areas.
Because synaesthetically induced colours were not able to suppress BOLD effects for real colour, we conclude that the neural correlates of synaesthetic colour experience and real colour experience are not fully shared. We propose that synaesthetic colour experiences are mediated by higher-order visual pathways that lie beyond the scope of classical, ventral-occipital visual areas. Feedback from these areas, in which the left parietal cortex is likely to play an important role, may induce V4 activation and the percept of synaesthetic colour.
martes, 10 de agosto de 2010
Daniel M Wolpert1 and J Randall Flanagan2
1 Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK
2 Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada
BMC Biology 2010, 8:92 doi:10.1186/1741-7007-8-92
Received: 8 June 2010 Accepted: 28 June 2010 Published: 23 July 2010
© 2010 Wolpert and Flanagan; licensee BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
What type of robots are we talking about?
Although humanoid robots are often in the press, most robotic devices found in neuroscience labs around the world are specialized devices for controlling stimuli and creating virtual environments. Most robots consist of a series of links that allow the end of the robotic interface to move either in a two-dimensional plane or three-dimensional space, and look more like a fancy Anglepoise lamp than a human. The configuration of the robot is tracked with sensors at a high rate and computer-controlled motors can change the configuration of the robot. In this way the neuroscientist can control the position of the robot and the forces applied by the robotic interface.
Figure 1. A robot used in a recent experiment on motor control. The schematic shows a Wrist-bot being used to simulate a virtual hammer manipulated in the horizontal plane. The robotic interface consists of a linked structure actuated by two motors (not shown) that can translate the handle in the horizontal plane. In addition a third motor drives a cable system to rotate the handle. In this way both the forces and torques at the handle can be controlled depending on the handle's position and orientation (and higher time derivatives) to simulate arbitrary dynamics - in this case a virtual hammer is simulated. Modified from Current Biology, Vol. 20, Ingram et al., Multiple grasp-specific representations of tool dynamics mediate skillful manipulation, Copyright (2010), with permission from Elsevier.
What can these robots do?
Robots have been particularly important in areas of neuroscience that focus on physical interactions with the world, including haptics (the study of touch) and sensorimotor control (the study of movement). Indeed, robots have done for these areas what computer monitors have done for visual neuroscience. For decades, visual neuroscientists had a substantial advantage because generating visual stimuli is straightforward using computers and monitors. This allowed the precise experimental control over visual inputs necessary to test between hypotheses in visual neuroscience. However, when it came to haptics and sensorimotor control, it was far harder to control the stimuli. For example, to study haptics one might want to create arbitrary physical objects for tactile exploration, whereas to study motor learning one might want to generate physical objects that have novel dynamical properties and change these properties in real time. Robotic interfaces allow precisely this type of manipulation. In many motor control experiments, the participant holds and moves the end of a robotic interface (Figure 1) and the forces delivered by the robot to the participant's hand depend on the hand's position and velocity (the hand's state). The mapping between the hand's state and the forces applied by the robot is computer controlled and, within the capabilities of the robots, the type of mapping is only limited by the experimenter's imagination.
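A classic example of such a computer-controlled state-to-force mapping, widely used in motor-learning experiments, is a velocity-dependent "curl" field: the robot pushes on the hand perpendicular to its direction of movement, with a force proportional to speed. A minimal sketch (the gain value is an assumption chosen for illustration, not from any particular study):

```python
def curl_field_force(vx, vy, b=13.0):
    """Velocity-dependent 'curl' force field.

    vx, vy: hand velocity components in m/s (the relevant part of the hand's state).
    b: viscosity-like gain in N*s/m (illustrative value).
    Returns the (fx, fy) force the robot applies, always perpendicular
    to the hand's velocity, so it deflects movements sideways.
    """
    fx = b * vy
    fy = -b * vx
    return fx, fy
```

Because the mapping is just software, the experimenter can turn such a field on, off, or reverse it between trials, which is exactly the kind of real-time manipulation of an object's dynamics that the passage above describes.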
Saturday, August 7, 2010
Highlight: The brain seconds that emotion
August 6th, 2010 in Medicine & Health / Neuroscience
Smells from your childhood kitchen, the sight of friends and family in old photographs, the feel of a well-worn flannel shirt…all these sensory experiences can conjure up powerful memories.
This happens because sensory information is tightly bound with emotional information when the brain stores an emotional memory, as a new study shows.
The brain regions that receive signals from our eyes, nose and skin are divided into subsections that play different roles in processing this input.
By training rats to associate tonal sounds, flashing lights or the smell of vinegar with the experience of receiving an electric shock, Tiziana Sacco and Benedetto Sacchetti determined that Pavlovian fear memories are stored in the secondary auditory, visual and olfactory cortices, respectively.
Creating lesions in these brain regions disrupted already established memories but did not prevent the formation of new ones, suggesting that the secondary sensory cortices are essential for storing emotional memories, though not for initially acquiring them.
The authors propose that sights, sounds and smells associated with a highly charged emotional situation take on the affective qualities of that situation when sensory stimuli are woven into memories by the secondary sensory cortices.
The connections between these cortices may then provide an “integrated view of the whole emotional experience during memory recall,” the authors write.
The findings are published in today’s edition of the journal Science.
More information: "Role of Secondary Sensory Cortices in Emotional Memory Storage and Retrieval in Rats," by T. Sacco and B. Sacchetti of the University of Turin, Turin, Italy; B. Sacchetti is also affiliated with the National Institute of Neuroscience, Turin, Italy. Science, Aug. 6, 2010.
Tuesday, August 3, 2010
When memory-related region of brain is damaged, other areas compensate, study finds
August 2nd, 2010 in Medicine & Health / Neuroscience
Image source: University of Wisconsin and Michigan State Comparative Mammalian Brain Collections and the National Museum of Health and Medicine.
Many neuroscientists believe the loss of the brain region known as the amygdala would result in the brain's inability to form new memories with emotional content. New UCLA research indicates this is not so and suggests that when one brain region is damaged, other regions can compensate.
The research appears this week in the early online edition of the journal Proceedings of the National Academy of Sciences (PNAS).
"Our findings show that when the amygdala is not available, another brain region called the bed nuclei can compensate for the loss of the amygdala," said the study's senior author, Michael Fanselow, a UCLA professor of psychology and a member of the UCLA Brain Research Institute.
"The bed nuclei are much slower at learning, and form memories only when the amygdala is not learning," he said. "However, when you do not have an amygdala, if you have an emotional experience, it is as if neural plasticity (the memory-forming ability of brain cells) is unleashed and the bed nuclei spring into action. Normally, it is as if the amygdala says, 'I'm doing my job, so you shouldn't learn.' With the amygdala gone, the bed nuclei do not receive that signal and are freed to learn."
The amygdala is believed to be critical for learning about and storing the emotional aspects of experience, Fanselow said, and it also serves as an alarm to activate a cascade of biological systems to protect the body in times of danger. The bed nuclei are a set of forebrain gray matter surrounding the stria terminalis; neurons here receive information from the prefrontal cortex and hippocampus and communicate with several lower brain regions that control stress responses and defensive behaviors.
"Our results suggest some optimism that when a particular brain region that is thought to be essential for a function is lost, other brain regions suddenly are freed to take on the task," Fanselow said. "If we can find ways of promoting this compensation, then we may be in a better position to help patients who have lost memory function due to brain damage, such as those who have had a stroke or have Alzheimer's disease.
"Perhaps this research can eventually lead to new drugs and teaching regimens that facilitate plasticity in the regions that have the potential to compensate for the damaged areas," he said.
While the current study shows this relationship for emotional learning, additional research in Fanselow's laboratory is beginning to suggest this is a general property of memory.
Provided by University of California - Los Angeles