Tuesday, 10 August 2010

Robotics as a tool to understand the brain



Daniel M Wolpert1 and J Randall Flanagan2

1 Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK

2 Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada

BMC Biology 2010, 8:92. doi:10.1186/1741-7007-8-92



Received: 8 June 2010. Accepted: 28 June 2010. Published: 23 July 2010.

© 2010 Wolpert and Flanagan; licensee BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

What type of robots are we talking about?
Although humanoid robots often appear in the press, most robotic devices found in neuroscience labs around the world are specialized devices for controlling stimuli and creating virtual environments. Most consist of a series of links that allow the end of the robotic interface to move either in a two-dimensional plane or in three-dimensional space, and they look more like a fancy Anglepoise lamp than a human. The robot's configuration is tracked by sensors at a high rate, and computer-controlled motors can change that configuration. In this way the neuroscientist can control both the position of the robot and the forces applied by the robotic interface.
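To make that loop concrete, the following Python sketch shows, under assumed parameters, how such an interface might be controlled: joint sensors are read, the handle position is computed from the robot's link geometry, and motor torques are chosen to render a desired force at the handle. The link lengths, the spring-like force law, and the function names are illustrative assumptions, not the API of any particular device.

    # A minimal sketch (not any specific device's API) of the control loop
    # described above for a planar two-link robot. Link lengths and the
    # example force law are illustrative assumptions.
    import numpy as np

    L1, L2 = 0.35, 0.30   # assumed link lengths (m)

    def forward_kinematics(q):
        """Handle (end-effector) position for joint angles q = [q1, q2]."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def jacobian(q):
        """Maps joint velocities to handle velocities; its transpose maps
        desired handle forces to the joint torques the motors must produce."""
        j11 = -L1 * np.sin(q[0]) - L2 * np.sin(q[0] + q[1])
        j12 = -L2 * np.sin(q[0] + q[1])
        j21 =  L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        j22 =  L2 * np.cos(q[0] + q[1])
        return np.array([[j11, j12], [j21, j22]])

    def control_step(q, force_law):
        """One sensing/actuation cycle: read joint angles, compute the handle
        position, get the desired handle force from the experiment's force
        law, and return the motor torques that produce it."""
        handle_pos = forward_kinematics(q)
        f_desired = force_law(handle_pos)
        return jacobian(q).T @ f_desired

    # Example force law: a virtual spring pulling the handle toward (0.3, 0.3) m.
    home = np.array([0.3, 0.3])
    spring = lambda pos: 50.0 * (home - pos)   # 50 N/m, assumed stiffness

    q_measured = np.radians([40.0, 60.0])      # stand-in for encoder readings
    print(control_step(q_measured, spring))

In a real device this cycle runs many hundreds of times per second, which is what allows the rendered forces to feel smooth and stable to the participant.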


Figure 1. A robot used in a recent experiment on motor control. The schematic shows a Wrist-bot being used to simulate a virtual hammer manipulated in the horizontal plane. The robotic interface consists of a linked structure actuated by two motors (not shown) that can translate the handle in the horizontal plane. In addition, a third motor drives a cable system to rotate the handle. In this way both the forces and torques at the handle can be controlled, depending on the handle's position and orientation (and their higher time derivatives), to simulate arbitrary dynamics; in this case, a virtual hammer. Modified from Current Biology, Vol. 20, Ingram et al., Multiple grasp-specific representations of tool dynamics mediate skillful manipulation, Copyright (2010), with permission from Elsevier.
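As a rough illustration of how such arbitrary dynamics can be rendered, the sketch below models the virtual hammer as a point mass on a rigid, massless shaft and computes the force and torque the simulated object would exert back on the hand, given the measured motion of the handle. The mass, shaft length, and function names are assumptions for illustration and are not taken from the cited study.

    # A minimal sketch of rendering a virtual hammer: from the handle's
    # measured motion, compute the reaction force and torque the simulated
    # object would exert on the hand. Parameters are illustrative assumptions.
    import numpy as np

    HEAD_MASS = 0.5    # kg, assumed mass of the virtual hammer head
    SHAFT_LEN = 0.25   # m, assumed handle-to-head distance

    def hammer_wrench(acc, theta, omega, alpha):
        """Force (N) and torque (N·m) to render at the handle, given the
        handle's linear acceleration `acc` (2-vector), orientation `theta`
        (rad), angular velocity `omega` and angular acceleration `alpha`."""
        along  = np.array([np.cos(theta), np.sin(theta)])   # unit vector along shaft
        normal = np.array([-np.sin(theta), np.cos(theta)])  # perpendicular to shaft
        # Acceleration of the hammer head (handle acceleration + rotational terms).
        head_acc = acc + SHAFT_LEN * (alpha * normal - omega**2 * along)
        head_force = HEAD_MASS * head_acc   # force needed to accelerate the head
        # The hand feels the reaction: the opposite force, plus the torque that
        # force produces about the handle (2D cross product r x F).
        force_on_hand = -head_force
        torque_on_hand = -SHAFT_LEN * (along[0] * head_force[1] - along[1] * head_force[0])
        return force_on_hand, torque_on_hand

    # Example: handle accelerating at 1 m/s^2 in +x while rotating.
    print(hammer_wrench(np.array([1.0, 0.0]), theta=0.3, omega=2.0, alpha=5.0))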

What can these robots do?
Robots have been particularly important in areas of neuroscience that focus on physical interactions with the world, including haptics (the study of touch) and sensorimotor control (the study of movement). Indeed, robots have done for these areas what computer monitors have done for visual neuroscience. For decades, visual neuroscientists have had a substantial advantage because generating visual stimuli with computers and monitors is straightforward. This allowed the precise experimental control over visual inputs needed to test competing hypotheses in visual neuroscience. In haptics and sensorimotor control, however, it has been far harder to control the stimuli. For example, to study haptics one might want to create arbitrary physical objects for tactile exploration, whereas to study motor learning one might want to generate physical objects that have novel dynamical properties and change these properties in real time. Robotic interfaces allow precisely this type of manipulation. In many motor control experiments, the participant holds and moves the end of a robotic interface (Figure 1) and the forces delivered by the robot to the participant's hand depend on the hand's position and velocity (the hand's state). The mapping between the hand's state and the forces applied by the robot is computer controlled and, within the capabilities of the robot, the type of mapping is limited only by the experimenter's imagination.
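One widely used example of such a state-to-force mapping in motor-learning experiments is a velocity-dependent "curl" field, in which the robot pushes the hand perpendicular to its current velocity. The Python sketch below illustrates the idea; the gain values are illustrative assumptions rather than those of any particular study.

    # A minimal sketch of a velocity-dependent "curl" force field: the force
    # applied by the robot is a fixed linear function of the hand's velocity.
    import numpy as np

    CURL_GAIN = np.array([[0.0, 13.0],
                          [-13.0, 0.0]])   # N per (m/s), assumed values

    def force_field(hand_velocity):
        """Force (N) the robot applies for a given hand velocity (m/s)."""
        return CURL_GAIN @ hand_velocity

    # Example: a hand moving straight ahead at 0.3 m/s is pushed sideways.
    print(force_field(np.array([0.0, 0.3])))

Because the field depends only on the hand's state, the experimenter can switch it on, turn it off, or reverse it between trials simply by changing the gain matrix.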
