
    Brain-Machine Interactions for Assessing the Dynamics of Neural Systems

    A critical advance for brain–machine interfaces is the establishment of bi-directional communication between the nervous system and external devices. However, the signals generated by a population of neurons are expected to depend in a complex way upon poorly understood neural dynamics. We report a new technique for identifying the dynamics of a neural population engaged in a bi-directional interaction with an external device. We placed in vitro preparations of the lamprey brainstem in a closed-loop interaction with simulated dynamical devices having different numbers of degrees of freedom. We used the observed behaviors of this composite system to assess how many independent parameters, or state variables, determine the output of the neural system at each instant. This quantity, known as the dynamical dimension of a system, makes it possible to predict future behavior from the present state and future inputs. A relevant novelty of this approach is that it assesses a computational property, the dynamical dimension of a neuronal population, through a simple experimental technique based on bi-directional interaction with simulated dynamical devices. We present a set of results demonstrating that stable and reliable measures of the dynamical dimension of a neural preparation can be obtained.
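    The abstract does not spell out the identification procedure, but the core idea of a dynamical dimension can be illustrated with a toy sketch: fit input-driven autoregressive models of increasing order to the output of a simulated system and look for the order at which the residual error collapses. Everything below (the second-order test system, the fitting routine) is an illustrative assumption, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neural system": a 2nd-order linear recurrence driven by input u,
# so its true dynamical dimension (number of state variables) is 2.
T = 500
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + u[t]

def residual_error(order):
    """RMS error of a least-squares fit of y[t] from u[t] and the
    previous `order` outputs."""
    rows, targets = [], []
    for t in range(order, T):
        rows.append(np.concatenate(([u[t]], y[t - order:t][::-1])))
        targets.append(y[t])
    X, z = np.array(rows), np.array(targets)
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.sqrt(np.mean((X @ coef - z) ** 2))

errors = {p: residual_error(p) for p in range(1, 5)}
# The residual collapses once the model order reaches the true dimension (2).
```

    An order-1 model cannot absorb the dependence on y[t-2], so its residual stays large; from order 2 upward the fit is essentially exact.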

    Geometric Structure of the Adaptive Controller of the Human Arm

    The objects with which the hand interacts may significantly change the dynamics of the arm. How does the brain adapt the control of arm movements to these new dynamics? We show that adaptation proceeds through the composition of a model of the task's dynamics. By exploring the generalization capabilities of this adaptation, we infer some of the properties of the computational elements with which the brain formed this model: the elements have broad receptive fields and encode the learned dynamics as a map structured in an intrinsic coordinate system closely related to the geometry of the skeletomusculature. The low-level nature of these elements suggests that they may represent a set of primitives with which a movement is represented in the CNS.

    The separate neural control of hand movements and contact forces

    To manipulate an object, we must simultaneously control the contact forces exerted on the object and the movements of our hand. Two alternative views for manipulation have been proposed: one in which motions and contact forces are represented and controlled by separate neural processes, and one in which motions and forces are controlled jointly by a single process. To evaluate these alternatives, we designed three tasks in which subjects maintained a specified contact force while their hand was moved by a robotic manipulandum. The prescribed contact force and hand motions were selected in each task to induce the subject to attain one of three goals: (1) exerting a regulated contact force, (2) tracking the motion of the manipulandum, and (3) attaining both force and motion goals concurrently. By comparing subjects' performances in these three tasks, we found that behavior was captured by the summed actions of two independent control systems: one applying the desired force, and the other guiding the hand along the predicted path of the manipulandum. Furthermore, the application of transcranial magnetic stimulation impulses to the posterior parietal cortex selectively disrupted the control of motion but did not affect the regulation of static contact force. Together, these findings are consistent with the view that manipulation of objects is performed by independent brain control of hand motions and interaction forces.
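    The "summed actions of two independent control systems" can be sketched in a minimal linear form. This is not the study's fitted model; the controllers, gains, and signals below are illustrative assumptions chosen only to show how independence makes the combined command decompose additively, and how removing one system (as TMS to posterior parietal cortex did for motion) leaves the other's contribution untouched.

```python
import numpy as np

# Hypothetical linear controllers (illustrative gains, not fitted to data).
def force_controller(f_desired, f_measured, gain=2.0):
    """Command correcting the contact-force error."""
    return gain * (f_desired - f_measured)

def motion_controller(x_predicted, x_measured, gain=5.0):
    """Command guiding the hand along the predicted manipulandum path."""
    return gain * (x_predicted - x_measured)

t = np.linspace(0.0, 1.0, 100)
f_meas = 4.0 + 0.5 * np.sin(6 * t)            # fluctuating measured force
x_pred, x_meas = np.sin(t), np.sin(t) - 0.1 * t

u_force = force_controller(4.0, f_meas)
u_motion = motion_controller(x_pred, x_meas)

# Combined behavior is the sum of the two systems' actions; silencing the
# motion controller leaves the force command unchanged.
u_combined = u_force + u_motion
u_without_motion = u_force + 0.0
```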

    Learning to push and learning to move: The adaptive control of contact forces

    To be successful at manipulating objects one needs to apply simultaneously well controlled movements and contact forces. We present a computational theory of how the brain may successfully generate a vast spectrum of interactive behaviors by combining two independent processes. One process is competent to control movements in free space and the other is competent to control contact forces against rigid constraints. Free space and rigid constraints are singularities at the boundaries of a continuum of mechanical impedance. Within this continuum, forces and motions occur in "compatible pairs" connected by the equations of Newtonian dynamics. The force applied to an object determines its motion. Conversely, inverse dynamics determine a unique force trajectory from a movement trajectory. In this perspective, we describe motor learning as a process leading to the discovery of compatible force/motion pairs. The learned compatible pairs constitute a local representation of the environment's mechanics. Experiments on force field adaptation have already provided us with evidence that the brain is able to predict and compensate for the forces encountered when one is attempting to generate a motion. Here, we tested the theory in the dual case, i.e., when one attempts to apply a desired contact force against a simulated rigid surface. If the surface becomes unexpectedly compliant, the contact point moves as a function of the applied force, and this causes the applied force to deviate from its desired value. We found that, through repeated attempts at generating the desired contact force, subjects discovered the unique compatible hand motion. When, after learning, the rigid contact was unexpectedly restored, subjects displayed aftereffects of learning, consistent with the concurrent operation of a motion control system and a force control system. Together, theory and experiment support a new and broader view of modularity in the coordinated control of forces and motions.
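    The notion of a "compatible pair" connected by Newtonian dynamics can be made concrete with a toy point-mass model (mass, damping, and stiffness values below are illustrative, not parameters from the experiment): inverse dynamics extract the unique force trajectory compatible with a given motion, and applying that force in a forward simulation reproduces the motion.

```python
import numpy as np

# Point-mass limb model: m*a + b*v + k*x = F (illustrative parameters).
m, b, k = 1.0, 0.5, 2.0
dt = 0.01
t = np.arange(0, 1, dt)

# A smooth reaching trajectory (minimum-jerk profile, 20 cm amplitude).
s = t / t[-1]
x = 0.2 * (10 * s**3 - 15 * s**4 + 6 * s**5)

# Inverse dynamics: the unique force trajectory compatible with this motion,
# discretized the same way the forward simulation will be.
F = np.zeros_like(x)
for i in range(1, len(x) - 1):
    a = (x[i + 1] - 2 * x[i] + x[i - 1]) / dt**2
    v = (x[i] - x[i - 1]) / dt
    F[i] = m * a + b * v + k * x[i]

# Forward dynamics: applying that force reproduces the original motion.
x_sim = np.zeros_like(x)
x_sim[0], x_sim[1] = x[0], x[1]
for i in range(1, len(x) - 1):
    v = (x_sim[i] - x_sim[i - 1]) / dt
    x_sim[i + 1] = (2 * x_sim[i] - x_sim[i - 1]
                    + (dt**2 / m) * (F[i] - b * v - k * x_sim[i]))
```

    Because the same discretization is used in both directions, the recovered trajectory matches the original one, illustrating the one-to-one force/motion correspondence the theory builds on.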

    Learning Redundant Motor Tasks With and Without Overlapping Dimensions: Facilitation and Interference Effects

    Prior learning of a motor skill creates motor memories that can facilitate or interfere with learning of new, but related, motor skills. One hypothesis of motor learning posits that for a sensorimotor task with redundant degrees of freedom, the nervous system learns the geometric structure of the task and improves performance by selectively operating within that task space. We tested this hypothesis by examining whether transfer of learning between two tasks depends on shared dimensionality between their respective task spaces. Human participants wore a data glove and learned to manipulate a computer cursor by moving their fingers. Separate groups of participants learned two tasks: a prior task that was unique to each group and a criterion task that was common to all groups. We manipulated the mapping between finger motions and cursor positions in the prior task to define task spaces that either shared or did not share the task space dimensions (x-y axes) of the criterion task. We found that if the prior task shared task dimensions with the criterion task, there was an initial facilitation in criterion task performance. However, if the prior task did not share task dimensions with the criterion task, there was prolonged interference in learning the criterion task due to participants finding inefficient task solutions. These results show that the nervous system learns the task space through practice, and that the degree of shared task space dimensionality influences the extent to which prior experience transfers to subsequent learning of related motor skills.
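    Whether two glove-to-cursor mappings "share task dimensions" can be quantified as the overlap between the row spaces of their linear maps. The sketch below is a hypothetical construction (the glove dimensionality and the maps are invented for illustration): principal-angle cosines near 1 indicate shared dimensions, near 0 disjoint ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fingers = 10  # hypothetical number of data-glove signals

def subspace_overlap(A, B):
    """Cosines of the principal angles between the row spaces of two maps."""
    Qa, _ = np.linalg.qr(A.T)  # orthonormal basis of A's row space
    Qb, _ = np.linalg.qr(B.T)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

# Criterion task: a 2-D cursor driven by a linear glove-to-cursor map.
A = rng.standard_normal((2, n_fingers))

# Prior task sharing the criterion's dimensions (same row space, different
# coefficients) vs. one built on dimensions orthogonal to the criterion's.
B_shared = rng.standard_normal((2, 2)) @ A
Q, _ = np.linalg.qr(np.vstack([A, rng.standard_normal((2, n_fingers))]).T)
B_disjoint = Q[:, 2:4].T  # orthogonal to A's row space by construction

shared_cos = subspace_overlap(A, B_shared)      # all close to 1
disjoint_cos = subspace_overlap(A, B_disjoint)  # all close to 0
```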

    Sensory Motor Remapping of Space in Human-Machine Interfaces

    Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also pointed out that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigations in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the prospect of endowing human–machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices.

    The dynamics of motor learning through the formation of internal models

    A medical student learning to perform a laparoscopic procedure or a recently paralyzed user of a powered wheelchair must learn to operate machinery via interfaces that translate their actions into commands for an external device. Since the user's actions are selected from a number of alternatives that would result in the same effect in the control space of the external device, learning to use such interfaces involves dealing with redundancy. Subjects need to learn an externally chosen many-to-one map that transforms their actions into device commands. Mathematically, we describe this type of learning as a deterministic dynamical process, whose state is the evolving forward and inverse internal models of the interface. The forward model predicts the outcomes of actions, while the inverse model generates actions designed to attain desired outcomes. Both the mathematical analysis of the proposed model of learning dynamics and the learning performance observed in a group of subjects demonstrate a first-order exponential convergence of the learning process toward a particular state that depends only on the initial state of the inverse and forward models and on the sequence of targets supplied to the users. Noise is not only present but necessary for the convergence of learning through the minimization of the difference between actual and predicted outcomes.
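    The forward/inverse learning dynamics can be sketched with a pair of linear models trained by gradient-like updates. The interface map, learning rate, and update rules below are illustrative assumptions, not the paper's exact formulation; the point is only that the task error of such a coupled system decays as a first-order (exponential) process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interface: a many-to-one (redundant) map from three body
# signals to a single device command.
A = np.array([[1.0, 0.5, -0.5]])

F = np.zeros((1, 3))  # forward model: predicts the device outcome
G = np.zeros((3, 1))  # inverse model: proposes an action for a target

eta = 0.05
targets = rng.standard_normal(200)
errors = []
for y_star in targets:
    h = G[:, 0] * y_star        # action generated by the inverse model
    y = (A @ h)[0]              # actual device outcome
    y_hat = (F @ h)[0]          # forward model's prediction of that outcome
    F += eta * (y - y_hat) * h                      # shrink prediction error
    G[:, 0] += eta * (y_star - y) * A[0] * y_star   # shrink task error
    errors.append(abs(y_star - y))
# `errors` decays roughly exponentially toward zero across trials.
```

    In this sketch the randomly varying targets play the exploratory role that the abstract attributes to noise: without variation in the driving signal, the models would stop being corrected.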

    New Perspectives on the Dialogue between Brains and Machines

    Brain-machine interfaces (BMIs) are mostly investigated as a means to provide paralyzed people with new communication channels with the external world. However, the communication between brain and artificial devices also offers a unique opportunity to study the dynamical properties of neural systems. This review focuses on bidirectional interfaces, which operate in both directions, translating neural signals into input commands for the device and translating the device's output into neural stimuli. We discuss how bidirectional BMIs help investigate neural information processing and how neural dynamics may participate in the control of external devices. In this respect, a bidirectional BMI can be regarded as a combination of neural recording and stimulation apparatus, connected via an artificial body. The artificial body can be designed in virtually infinite ways in order to observe different aspects of neural dynamics and to approximate desired control policies.