Tuesday, May 31, 2011

Woman can literally feel the noise.

May 30th, 2011 in Neuroscience -- A case of a 36-year-old woman who began to literally 'feel' noise about a year and a half after suffering a stroke sparked a new research project by neuroscientist Tony Ro from the City College of New York and the Graduate Center of the City University of New York. Brain imaging research revealed that a link had grown between the woman's auditory region and the somatosensory region, essentially connecting her hearing to her touch sensation.

Ro and his team presented the findings at the Acoustical Society of America’s meeting on May 25. They pointed out that both hearing and touch rely on vibrations and that this connection may be found in the rest of us as well.

Another researcher and neuroscientist Elizabeth Courtenay Wilson from Beth Israel Deaconess Medical Center in Boston agrees that there is a strong connection between the two. Her team believes that the ear evolved from skin in order to create a more finely tuned frequency analysis. She earned her PhD from MIT with a study on whether vibrations could help hearing aid performance. Her studies showed that individuals with normal hearing were better able to detect a weak sound when it was accompanied by a weak vibration to the skin.

Ro himself published another paper in Experimental Brain Research in 2009 focusing on what he calls the mosquito effect. The sound frequency of those pesky little bugs makes our skin prickle, and he believes that for this to work, the frequency of the sound must match the frequency of the vibrations we feel.

Functional MRI scans of the brain have revealed that the auditory region of the brain can become activated by a touch. It is believed by some researchers that areas of the brain that are designed to understand frequency may be responsible for this wire crossing, though they are not yet sure exactly where the two senses come together.

Saturday, May 28, 2011

How our focus can silence the noisy world around us

May 27th, 2011 in Psychology & Psychiatry

How can someone with perfectly normal hearing become deaf to the world around them when their mind is on something else? New research funded by the Wellcome Trust suggests that focusing heavily on a task results in the experience of deafness to perfectly audible sounds.

In a study published in the journal 'Attention, Perception, & Psychophysics', researchers at UCL (University College London) demonstrate this phenomenon, which they term 'inattentional deafness', for the first time.

"Inattentional deafness is a common everyday experience," explains Professor Nilli Lavie from the Institute of Cognitive Neuroscience at UCL. "For example, when engrossed in a good book or even a captivating newspaper article we may fail to hear the train driver's announcement and miss our stop, or if we're texting whilst walking, we may fail to hear a car approaching and attempt to cross the road without looking."

Professor Lavie and her PhD student James Macdonald devised a series of experiments designed to test for inattentional deafness. In these experiments, over a hundred participants performed tasks on a computer involving a series of cross shapes. Some tasks were easy, asking the participants to distinguish a clear colour difference between the cross arms. Others were much more difficult, involving distinguishing subtle length differences between the cross arms.

Participants wore headphones whilst carrying out the tasks and were told these were to aid their concentration. At some point during task performance a tone was played unexpectedly through the headphones. At this point, immediately after the sound was played, the experiment was stopped and the participants asked if they had heard this sound.

When judging the respective colours of the arms - an easy task that takes relatively little concentration - around two in ten participants missed the tone. However, when focusing on the more difficult task - identifying which of the two arms was longer - eight out of ten participants failed to notice the tone.

The researchers believe this deafness when attention is fully taken by a purely visual task is the result of our senses of seeing and hearing sharing a limited processing capacity. It is already known that people similarly experience 'inattentional blindness' when engrossed in a task that takes up all of their attentional capacity - for example, the famous Invisible Gorilla Test, where observers engrossed in a basketball game fail to observe a man in a gorilla suit walk past. The new research now shows that being engrossed in a difficult task makes us blind and deaf to other sources of information.

"Hearing is often thought to have evolved as an early warning system that does not depend on attention, yet our work shows that if our attention is taken elsewhere, we can be effectively deaf to the world around us," explains Professor Lavie. "In our task, most people noticed the sound if the task being performed was easy and did not demand their full concentration. However, when the task was harder they experienced deafness to the very same sound."

Other examples of real-world situations include inattentional deafness whilst driving. It is well documented that a large number of accidents are caused by a driver's inattention, and this new research suggests inattentional deafness is yet another contributing factor. For example, although emergency vehicle sirens are designed to be too loud to ignore, other sounds - such as a lorry beeping while reversing, a cyclist's bell or a scooter horn - may be missed by a driver focusing intently on some interesting visual information such as a roadside billboard, the advert content on the back of the bus in front or the map on a sat nav.

Friday, May 27, 2011

New imaging method identifies specific mental states

May 26th, 2011 in Neuroscience

New clues to the mystery of brain function, obtained through research by scientists at the Stanford University School of Medicine, suggest that distinct mental states can be distinguished based on unique patterns of activity in coordinated "networks" within the brain. These networks consist of brain regions that are synchronously communicating with one another. The Stanford team is using this network approach to develop diagnostic tests in Alzheimer's disease and other brain disorders in which network function is disrupted.

In a novel set of experiments, a team of researchers led by Michael Greicius, MD, assistant professor of neurology and neurological sciences, was able to determine from brain-imaging data whether experimental subjects were recalling events of the day, singing silently to themselves, performing mental arithmetic or merely relaxing. In the study, subjects engaged in these mental activities at their own natural pace, rather than in a controlled, precisely timed fashion as is typically required in experiments involving the brain-imaging technique called functional magnetic resonance imaging. This suggests that the new method — a variation on the fMRI procedure — could help scientists learn more about what the brain is doing during the free-flowing mental states through which individuals move, minute-to-minute, in the real world.

FMRI can pinpoint active brain regions in which nerve cells are firing rapidly. In standard fMRI studies, subjects perform assigned mental tasks on cue in a highly controlled environment. The researcher typically divides the scan into task periods and non-task periods with strict start and stop points for each. Researchers can detect brain regions activated by the task by subtracting signals obtained during non-task periods from those obtained during the task. To identify which part of the brain is involved in, for example, a memory task, traditional fMRI studies require experimenters to control the timing of each recalled event.
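The subtraction logic described above can be sketched with synthetic data. Everything here - array shapes, effect size, detection threshold - is an illustrative assumption, not a parameter from any actual fMRI study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 100, 200

# Strictly timed task blocks, as in a standard fMRI design
task = np.zeros(n_timepoints, dtype=bool)
task[50:100] = task[150:200] = True

# Synthetic signal: noise everywhere, plus extra activity in the first
# 10 voxels during the task blocks
signal = rng.normal(size=(n_voxels, n_timepoints))
signal[:10, task] += 2.0

# The subtraction: mean task signal minus mean non-task signal, per voxel
contrast = signal[:, task].mean(axis=1) - signal[:, ~task].mean(axis=1)

# Voxels with a clearly positive contrast are flagged as task-activated
detected = np.where(contrast > 1.0)[0]
print(detected)
```

The key limitation the article goes on to describe follows directly from this scheme: the boolean `task` vector must be known in advance, which is exactly what free-flowing thought does not provide.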

"With standard fMRI, you need to know just when your subjects start focusing on a mental task and just when they stop," said Greicius. "But that isn't how real people in the day-to-day world think."

In their analysis, the Stanford team broke free of this scripted approach by looking not for brain regions that showed heightened activity during one mental state versus another, but for coordinated activity between brain regions, defining distinct brain states. This let subjects think in a self-paced manner more closely resembling the way they think in the world outside the MRI scanner. Instead of breaking up a cognitive state into short blocks of task and non-task, Greicius and his team used uninterrupted scan periods ranging from 30 seconds to 10 minutes in length, allowing subjects to follow their own thought cues at their own pace. The scientists were able to accurately capture subjects' mental states even when the duration of the scans was reduced to as little as one minute or less — all the more reflective of real-world cognition.

Greicius is senior author of the new study, to be published online May 26 in Cerebral Cortex. His team obtained images from a group of 14 young men and women who underwent four 10-minute fMRI scans apiece. Importantly, during each of the four scans, the investigators didn't tell subjects exactly when to start doing something — recall events, sing to themselves silently, count back from 5,000 by threes, or just rest — or when to switch to something else, as is typical with standard fMRI research. "We just told them to go at their own pace," Greicius said.

Greicius's team assembled images from each separate scan. Instead of comparing "on-task" images with "off-task" images to see which regions were active during a distinct brain state compared with when the brain wasn't in that state, the researchers focused on which collections, or networks, of brain regions were active in concert with one another throughout a given state.

Greicius and his colleagues have previously shown that the brain operates, at least to some extent, as a composite of separate networks, each composed of a number of distinct but simultaneously active brain regions. They have identified approximately 15 such networks. Different networks are associated with vision, hearing, language, memory, decision-making, emotion and so forth.

From the scans of those 14 healthy volunteers, the Stanford investigators were able to construct maps of coordinated activity in the brain during each of the four mental activities. In particular, they looked at 90 brain regions distributed across multiple networks, accounting for most of the brain's gray matter.

In their analysis, the Stanford team identified groups of regions throughout the brain whose activity was correlated to form functional networks. The new fMRI method let them view such networks within a single scan, without having to compare it to another scan via subtraction. In the scanning images, different thought processes showed up as different networks or regions communicating with one another. For example, subjects' recollection of the day's events was characterized by synchronous firing of two brain regions called the retrosplenial cortex, or RSC, and medial temporal lobe, or MTL. Standard fMRI, in which the brain's activity during a recall exercise was compared to its activity in the resting state, has already shown that the RSC and MTL are each active during memory-related tasks. But the new study showed that coordinated activity between these two regions indicates that subjects were engaged in recall.

Once they had completed their mapping of the four mental states to specific patterns of connectivity across the 90 brain regions, Greicius and his colleagues tested their ability to determine which state a subject was in by asking a second group of 10 subjects to undergo scanning during the same four mental activities. By comparing the pattern of a subject's image to the patterns assigned to each of the four states from the 14-subject data set, the researchers' analytical tools were, with 85 percent accuracy, able to correctly determine which mental state a particular scanning image corresponded to. The team's ability to correctly determine which of those four mental tasks a subject was performing remained at the 80 percent accuracy level even when scanning sessions were reduced to one minute apiece — a length of time more reflective of real-life mental behavior than the customary 10-minute scanning time.
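The template-matching approach described above can be sketched in miniature: compute a connectivity pattern (the correlations between all region pairs) for each scan, build one template per mental state from a training group, then assign new scans to the best-matching template. The simulated data, region count and state structure below are invented for illustration; this is not the Stanford team's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
N_REGIONS, N_TIMEPOINTS, N_STATES = 20, 120, 4

def connectivity_pattern(timeseries):
    """Vectorize the upper triangle of the region-by-region correlation matrix."""
    corr = np.corrcoef(timeseries)
    rows, cols = np.triu_indices(N_REGIONS, k=1)
    return corr[rows, cols]

# Each simulated mental state mixes its own set of latent signals, giving it
# a characteristic connectivity structure (purely synthetic data)
mixings = [rng.normal(size=(N_REGIONS, 5)) for _ in range(N_STATES)]

def simulate_scan(state):
    latent = rng.normal(size=(5, N_TIMEPOINTS))
    noise = 0.5 * rng.normal(size=(N_REGIONS, N_TIMEPOINTS))
    return mixings[state] @ latent + noise

# "Training": one connectivity template per state
templates = [connectivity_pattern(simulate_scan(s)) for s in range(N_STATES)]

# "Testing": assign a new scan to the template its pattern correlates with most
def classify(timeseries):
    pattern = connectivity_pattern(timeseries)
    return int(np.argmax([np.corrcoef(pattern, t)[0, 1] for t in templates]))

n_correct = sum(classify(simulate_scan(s)) == s
                for s in range(N_STATES) for _ in range(10))
print(f"{n_correct} of {N_STATES * 10} test scans classified correctly")
```

Note that, unlike the subtraction method, nothing in `classify` needs to know when a mental activity started or stopped within the scan - only which regions rose and fell together.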

As an additional test, Greicius's team asked the second participant group to engage in a fifth cognitive activity, spatial navigation, in which subjects were asked to imagine walking through the rooms of their home. The team's analytical tools readily rejected the connectivity pattern reflecting this mental activity as not indicative of one of the four states in question.

The ability to use fMRI in a more casual, true-to-life manner for capturing the mental states of normal volunteers bodes well for assessing patients with cognitive disorders, such as people with Alzheimer's disease or other dementias, who are often unable to follow the precise instructions and timing demands required in traditional fMRI.

In fact, the technique has already begun proving its value in diagnosing brain disorders. In a 2009 study in Neuron, Greicius and his associates showed that different cognitive disorders show up in fMRI scans as having deficiencies specific to different networks. In Alzheimer's disease, for example, the network associated with memory is functionally impaired so that its component brain regions are no longer firing in a coordinated fashion. This network approach to brain function and dysfunction is now being widely applied to the study of numerous neurological and psychiatric conditions.

Provided by Stanford University Medical Center

Thursday, May 26, 2011

Brain cell networks recreated with new view of activity behind memory formation



A fluorescent image of the neural network model developed at Pitt reveals the interconnection (red) between individual brain cells (blue). Adhesive proteins (green) allow the network to be constructed on silicon discs for experimentation. Credit: U. Pittsburgh

University of Pittsburgh researchers have reproduced the brain's complex electrical impulses onto models made of living brain cells that provide an unprecedented view of the neuron activity behind memory formation.

The team fashioned ring-shaped networks of brain cells that were not only capable of transmitting an electrical impulse, but also remained in a state of persistent activity associated with memory formation, said lead researcher Henry Zeringue [zuh-rang], a bioengineering professor in Pitt's Swanson School of Engineering. Magnetic resonance images have suggested that working memories are formed when the cortex, or outer layer of the brain, launches into extended electrical activity after the initial stimulus, Zeringue explained. But the brain's complex structure and the diminutive scale of neural networks mean that observing this activity in real time can be nearly impossible, he added.

The Pitt team, however, was able to generate and prolong this excited state in groups of 40 to 60 brain cells harvested from the hippocampus of rats—the part of the brain associated with memory formation. In addition, the researchers produced the networks on glass slides that allowed them to observe the cells' interplay. The work was conducted in Zeringue's lab by Pitt bioengineering doctoral student Ashwin Vishwanathan, who most recently reported the work in the Royal Society of Chemistry (UK) journal, Lab on a Chip. Vishwanathan coauthored the paper with Zeringue and Guo-Qiang Bi, a neurobiology professor in Pitt's School of Medicine. The work was conducted through the Center for the Neural Basis of Cognition, which is jointly operated by Pitt and Carnegie Mellon University.

To produce the models, the Pitt team stamped adhesive proteins onto silicon discs. Once the proteins were cultured and dried, cultured hippocampus cells from embryonic rats were fused to the proteins and then given time to grow and connect to form a natural network. The researchers disabled the cells' inhibitory response and then excited the neurons with an electrical pulse.

Zeringue and his colleagues were able to sustain the resulting burst of network activity for up to 12 seconds, an eternity in neuronal time. Compared with the natural duration of 0.25 seconds at most, the model's 12 seconds permitted extensive observation of how the neurons transmitted and held the electrical charge, Zeringue said.

Unraveling the mechanics of this network communication is key to understanding the cellular and molecular basis of memory creation, Zeringue said. The format developed at Pitt makes neural networks more accessible for experimentation. For instance, the team found that when activity in one neuron is suppressed, the others respond with greater excitement.

"We can look at neurons as individuals, but that doesn't reveal a lot," Zeringue said. "Neurons are more connected and interdependent than any other cell in the body. Just because we know how one neuron reacts to something, a whole network can react not only differently, but sometimes in the complete opposite manner predicted."

Zeringue will next work to understand the underlying factors that govern network communication and stimulation, such as the various electrical pathways between cells and the genetic makeup of individual cells.

Provided by University of Pittsburgh

Tuesday, May 24, 2011

Eggs, butter, milk -- memory is not just a shopping list

May 23rd, 2011 in Psychology & Psychiatry


Often, the goal of science is to show that things are not what they seem to be. But now, in an article which will be published in an upcoming issue of Perspectives on Psychological Science, a journal of the Association for Psychological Science, a veteran cognitive psychologist exhorts his colleagues in memory research to consult the truth of their own experience.

"Cognitive psychologists are trying to be like physicists and chemists, which means doing controlled laboratory experiments, getting numbers out of them and explaining the numbers," says Douglas L. Hintzman, now retired from the University of Oregon. The lion's share of experiments, he says, involve giving people lists of words and asking them to remember the words.

"Researchers often completely forget that they have memories and they can see how their memories work from the inside," he continues, "—and that this may be very relevant to the theory they are developing."

Reviewing the literature in his field and the experimental models that have come in and gone out of fashion over the last half-century, Hintzman concludes that these simple experimental tasks, observed in isolation from one another, yield theories that are so oversimplified as to fundamentally misrepresent the nature of memory.

For instance, he says, these word-list tasks make it look as if we only remember when we intentionally put our minds to it, yet we all experience spontaneous memories, many times every day.

Also, because these experiments take place in short sessions, researchers ignore the obvious fact that memory is about personal history, and history is laid out in time. Memory, then, is basic to our understanding of time.

The preference for so-called theoretical parsimony—the idea that a theory should be no more complex than necessary—leads memory scientists up the wrong path, he writes: "The breadth of a theory is at least as important as its precision. Indeed, if we take the theory of evolution as our standard, breadth would appear to be far more important."

Contemplating evolution, Hintzman has come to believe that a crucial role is played by what he calls "involuntary reminding"—the process by which current experiences evoke memories of earlier experiences, creating a coherent record of our interactions with the environment.

"Animals—mammals in particular—evolved in a complex world in which patterns of related events are distributed over time. It's essential for survival that you learn about these patterns." Humans have developed the additional ability to learn and retrieve memories deliberately, he continues. But "the evolutionary purpose of memory is revealed" by these everyday remindings, "not by what typically goes on in the lab."

In this article, Hintzman does not outline a research program for the future, but urges memory researchers and theorists to consider the wide variety of things that memory does for us. "Our ancestors' survival," he writes, "did not hinge on their ability to remember shopping lists. Hunter-gatherers take what they can find."

Provided by Association for Psychological Science

Sunday, May 22, 2011

Scientists cultivate human brain's most ubiquitous cell in lab dish



May 22nd, 2011 in Biology / Biotechnology
Astrocytes are star-shaped cells that are the most common cell in the human brain and have now been grown from embryonic and induced stem cells in the laboratory of UW-Madison neuroscientist Su-Chun Zhang. Once considered mere putty or glue in the brain, astrocytes are of growing interest to biomedical research as they appear to play key roles in many of the brain's basic functions, as well as neurological disorders ranging from headaches to dementia. In this picture astrocyte progenitors and immature astrocytes cluster to form an "astrosphere." Photo provided by Robert Krencik/ UW-Madison
Pity the lowly astrocyte, the most common cell in the human nervous system.

Long considered to be little more than putty in the brain and spinal cord, the star-shaped astrocyte has found new respect among neuroscientists who have begun to recognize its many functions in the brain, not to mention its role in a range of disorders of the central nervous system.

Now, writing in the current (May 22) issue of the journal Nature Biotechnology, a group led by University of Wisconsin-Madison stem cell researcher Su-Chun Zhang reports it has been able to direct embryonic and induced human stem cells to become astrocytes in the lab dish.

The ability to make large, uniform batches of astrocytes, explains Zhang, opens a new avenue to more fully understanding the functional roles of the brain's most commonplace cell, as well as its involvement in a host of central nervous system disorders ranging from headaches to dementia. What's more, the ability to culture the cells gives researchers a powerful tool to devise new therapies and drugs for neurological disorders.

"Not a lot of attention has been paid to these cells because human astrocytes have been hard to get," says Zhang, a researcher at UW-Madison's Waisman Center and a professor of neuroscience in the UW-Madison School of Medicine and Public Health. "But we can make billions or trillions of them from a single stem cell."

Although astrocytes have gotten short shrift from science compared to neurons, the large filamentous cells that process and transmit information, scientists are turning their attention to the more common cells as their roles in the brain become better understood. There are a variety of astrocyte cell types and they perform such basic housekeeping tasks as helping to regulate blood flow, soaking up excess chemicals produced by interacting neurons and controlling the blood-brain barrier, a protective filter that keeps dangerous molecules from entering the brain.

Astrocytes, some studies suggest, may even play a role in human intelligence, given that their volume is much greater in the human brain than in the brain of any other animal species.

"Without the astrocyte, neurons can't function," Zhang notes. "Astrocytes wrap around nerve cells to protect them and keep them healthy. They participate in virtually every function or disorder of the brain."

The ability to forge astrocytes in the lab has several potential practical outcomes, according to Zhang. They could be used as screens to identify new drugs for treating diseases of the brain, they can be used to model disease in the lab dish and, in the more distant future, it may be possible to transplant the cells to treat a variety of neurological conditions, including brain trauma, Parkinson's disease and spinal cord injury. Astrocytes prepared for clinical use could be among the first cells transplanted to intervene in a neurological condition, because the motor neurons affected by amyotrophic lateral sclerosis, the fatal condition also known as Lou Gehrig's disease, are swathed in astrocytes.

"With an injury or neurological condition, neurons in the brain have to work harder, and in doing so they make more neurotransmitters," chemicals that in excess can be toxic to other cells in the brain, Zhang says.

"One idea is that it may be possible to rescue motor neurons by putting normal, healthy astrocytes in the brain," according to Zhang. "These cells are really useful as a therapeutic target."

The technology developed by the Wisconsin group lays a foundation for making all the different types of astrocytes. What's more, it is possible to genetically engineer them to mimic disease so that previously inaccessible neurological conditions can be studied in the lab.

Provided by University of Wisconsin-Madison

Friday, May 20, 2011

'Mind reading' brain scans reveal secrets of human vision


May 19th, 2011 in Neuroscience


Researchers were able to determine that study participants were looking at this street scene even when the participants were only looking at the outline. Credit: Fei-Fei Li

Researchers call it mind reading. One at a time, they show a volunteer – who's resting in an MRI scanner – a series of photos of beaches, city streets, forests, highways, mountains and offices. The subject looks at the photos, but says nothing.

The researchers, however, can usually tell which photo the volunteer is viewing at any given moment, aided by sophisticated software that interprets the signals coming from the scan. They glean clues not only by noting which part of the brain is especially active, but also by analyzing the patterns created by the firing neurons. They call it decoding.

Now, psychologists and computer scientists at Stanford, Ohio State University and the University of Illinois at Urbana–Champaign have taken mind reading a step further, with potential impact on how both computers and the visually impaired make sense of the world they see.

The researchers, including Stanford computer scientist Fei-Fei Li, removed almost all of the detail from the color photographs, leaving only sparse line drawings of the assorted scenes. When they ran the experiment again with just the outlines, the researchers were still able to read the minds of the participants – with as much accuracy as before.

The research was focused on the parahippocampal place area, a region of the brain that plays an important role in recognition of scenes such as rooms, landscapes and city streets.

The results demonstrate that outlines play a crucial role in how the human eye and mind interpret what is seen. The bare outlines of the photos shown to the participants seemingly offered the brain almost as many clues as the original photo. This "impoverished" signal sent to the brain was enough, Li said.

The significance of the work? "By noting what is driving the brain, you will be learning the way the brain works," Li said, "why certain cues are more important than other cues."

"Mind reading" could prove helpful in assessing patients in comas. "Inferring what people are seeing is clinically important," Li said.




The drawing of a dog as part of the Nazca Lines geoglyphs, Peru, ca. 700-200 B.C. suggests the power of outlines throughout time.

Credit: Steve Taylor / Creative Commons

The power of outlines seems backed up by history and common experience. As the authors wrote in their research paper, published in the Proceedings of the National Academy of Sciences, early cave dwellers drew outline figures on the walls of their homes; Chinese calligraphy revolves around lines and strokes; and children draw outlines as they attempt to describe the world unfolding before them.

"The representations in our brain for categorizing these scenes seem to be a bit more abstract than some may have thought – we don't need features such as texture and color to tell a beach from a street scene," said Dirk Bernhardt-Walther, a psychologist at Ohio State University who was a member of the research team.

Even when the software made errors reading the black-and-white line drawing of, for example, the beach, the mistakes closely resembled the mistakes made with the color photo of the beach, underscoring the conclusion that line drawings stimulate the mind in almost the same way as color photographs.
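One plausible way to quantify "the mistakes closely resembled" is to correlate the error cells (off-diagonal entries) of the decoder's two confusion matrices, one for photos and one for line drawings. The matrices below are invented stand-ins, not the study's data; a high correlation means the two stimulus types are being confused with one another in similar ways:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scenes = 6  # beach, city, forest, highway, mountain, office

# Invented confusion matrices: rows are the true scene category, columns the
# decoder's guess. The line-drawing matrix is built as the photo matrix plus
# small noise, mimicking "similar mistakes".
photo = rng.dirichlet(np.full(n_scenes, 0.5), size=n_scenes)
drawing = np.clip(photo + rng.normal(0.0, 0.02, (n_scenes, n_scenes)), 0, None)
drawing /= drawing.sum(axis=1, keepdims=True)  # rows sum to 1 again

# Correlate only the errors (off-diagonal cells) of the two matrices
off_diagonal = ~np.eye(n_scenes, dtype=bool)
r = np.corrcoef(photo[off_diagonal], drawing[off_diagonal])[0, 1]
print(f"error-pattern correlation: {r:.2f}")
```

With genuinely unrelated error patterns, this correlation would hover near zero rather than close to one.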

As researchers began removing parts of the line drawings piece by piece before showing them to the participants, they learned that the longer contours created by the lines, which formed the structure of the scene, were the most important.

"Lines capture really important structure, and you can find evidence of that in the brain," Li said.

Provided by Stanford University

Thursday, May 19, 2011

The odds are against extra-sensory perception


May 18th, 2011 in Psychology & Psychiatry

Can people truly feel the future? Researchers remain skeptical, according to a new study by Jeffrey Rouder and Richard Morey from the University of Missouri in the US, and the University of Groningen in the Netherlands, respectively. Their work appears online in the Psychonomic Bulletin & Review, published by Springer.

Although extra-sensory perception (ESP) seems impossible given our current scientific knowledge, and certainly runs counter to our everyday experience, a leading psychologist, Daryl Bem of Cornell University, is claiming evidence for ESP. Rouder and Morey look at the strength of the evidence in Dr. Bem's experiments.

Their application of a relatively new statistical method that quantifies how beliefs should change in light of data, suggests that there is only modest evidence behind Dr. Bem's findings (that people can feel, or sense, salient events in the future that could not otherwise be anticipated, and cannot be explained by chance alone), certainly not enough to sway the beliefs of a skeptic.

They highlight the limitations of conventional statistical significance testing (p values), and apply a new technique (meta-analytical Bayes factor) to Dr. Bem's data, which overcomes some of these limitations. According to Rouder and Morey, in order to accurately assess the total evidence in Bem's data, it is necessary to combine the evidence across several of his experiments, not look at each one in isolation, which is what researchers have done up till now. They find there is some evidence for ESP – people should update their beliefs by a factor of 40.

In other words, beliefs are odds. For example, a skeptic might hold odds that ESP is a long shot at a million-to-one, while a believer might believe it is as possible as not (one-to-one odds). Whatever one's beliefs, Rouder and Morey show that Bem's experiments indicate they should change by a factor of 40 in favor of ESP. The believer should now be 40-to-1 sure of ESP, while the skeptic should be 25000-to-1 sure against it.
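The odds arithmetic above can be made explicit: posterior odds are simply prior odds multiplied by the Bayes factor. A minimal check of the two figures quoted in the paragraph:

```python
from fractions import Fraction

BAYES_FACTOR = 40  # evidence from Bem's combined experiments, per Rouder & Morey

# Believer: even (1-to-1) prior odds in favor of ESP
believer_prior = Fraction(1, 1)
believer_posterior = believer_prior * BAYES_FACTOR

# Skeptic: million-to-one prior odds against, i.e. 1/1,000,000 in favor
skeptic_prior = Fraction(1, 1_000_000)
skeptic_posterior = skeptic_prior * BAYES_FACTOR

print(f"believer: {believer_posterior}-to-1 in favor")   # 40-to-1 in favor
print(f"skeptic: {1 / skeptic_posterior}-to-1 against")  # 25,000-to-1 against
```

Exact fractions are used so the 40 and 25,000 fall out without floating-point rounding.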

Rouder and Morey conclude that the skeptic's odds are appropriate: "We remain unconvinced of the viability of ESP. There is no plausible mechanism for it, and it seems contradicted by well-substantiated theories in both physics and biology. Against this background, a change in odds of 40 is negligible."

More information: Rouder JN & Morey RD (2011). A Bayes factor meta-analysis of Bem's ESP claim. Psychonomic Bulletin & Review. DOI: 10.3758/s13423-011-0088-7

Sunday, May 15, 2011

Tiny variation in one gene may have led to crucial changes in human brain


May 15th, 2011 in Genetics


On the left, the occipital region of a normal human brain is circled. On the right, the same area of the brain of a subject with mutation of LAMC3 gene is smooth, and lacks normal folds and convolutions. Credit: courtesy of Yale University

Scientists have yet to explain the origin of one of the human brain's defining features – the deep fissures and convolutions that increase its surface area and allow for rational and abstract thought.

An international collaboration of scientists from the Yale School of Medicine and Turkey may have discovered one of humanity's genetic benefactors – a tiny variation within a single gene that determines the formation of brain convolutions – they report online May 15 in the journal Nature Genetics.

A genetic analysis of a Turkish patient whose brain lacks the characteristic convolutions in part of his cerebral cortex revealed that the deformity was caused by the deletion of two genetic letters out of the 3 billion in the human genetic alphabet. Similar variations of the same gene, called laminin gamma3 (LAMC3), were discovered in two other patients with similar abnormalities.

"The demonstration of the fundamental role of this gene in human brain development brings us a step closer to solving the mystery of the crown jewel of creation, the cerebral cortex," said Murat Gunel, senior author of the paper and the Nixdorff-German Professor of Neurosurgery, co-director of the Neurogenetics Program and professor of genetics and neurobiology at Yale.

The folding of the brain is seen only in mammals with larger brains, such as dolphins and apes, and is most pronounced in humans. These fissures expand the surface area of the cerebral cortex and allow for complex thought and reasoning without taking up more space in the skull. No such folding is seen in smaller-brained mammals such as rodents. Despite the importance of these folds, no one has been able to explain how the brain manages to create them. The LAMC3 gene – involved in cell adhesion that plays a key role in embryonic development – may be crucial to the process.

An analysis of the gene shows that it is expressed during the embryonic period that is vital to the formation of dendrites, which form synapses or connections between brain cells. "Although the same gene is present in lower organisms with smooth brains such as mice, somehow over time, it has evolved to gain novel functions that are fundamental for human occipital cortex formation and its mutation leads to the loss of surface convolutions, a hallmark of the human brain," Gunel said.

Provided by Yale University