196 research outputs found

    Statistics of Natural Movements Are Reflected in Motor Errors

    Humans use their arms to engage in a wide variety of motor tasks during everyday life. However, little is known about the statistics of these natural arm movements. Studies of the sensory system have shown that the statistics of sensory inputs are key to determining sensory processing. We hypothesized that the statistics of natural everyday movements may, in a similar way, influence motor performance as measured in laboratory-based tasks. We developed a portable motion-tracking system that could be worn by subjects as they went about their daily routine outside of a laboratory setting. We found that the well-documented symmetry bias is reflected in the relative incidence of movements made during everyday tasks. Specifically, symmetric and antisymmetric movements are predominant at low frequencies, whereas only symmetric movements are predominant at high frequencies. Moreover, the statistics of natural movements, that is, their relative incidence, correlated with subjects' performance on a laboratory-based phase-tracking task. These results provide a link between natural movement statistics and motor performance and confirm that the symmetry bias documented in laboratory studies is a natural feature of human movement.

    Pain: A Statistical Account

    Perception is seen as a process that utilises partial and noisy information to construct a coherent understanding of the world. Here we argue that the experience of pain is no different; it is based on incomplete, multimodal information, which is used to estimate potential bodily threat. We outline a Bayesian inference model, incorporating the key components of cue combination, causal inference, and temporal integration, which highlights the statistical problems in everyday perception. It is from this platform that we are able to review the pain literature, providing evidence from experimental, acute, and persistent phenomena to demonstrate the advantages of adopting a statistical account of pain. Our probabilistic conceptualisation suggests a principles-based view of pain, explaining a broad range of experimental and clinical findings and making testable predictions.
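The cue-combination component of such a statistical account has a standard closed form: two independent Gaussian cues about the same quantity are fused by weighting each in proportion to its reliability (inverse variance). A minimal sketch, with purely illustrative numbers rather than anything fitted in the pain literature:

```python
def combine_cues(m1, var1, m2, var2):
    """Reliability-weighted Bayesian fusion of two independent Gaussian
    cues: the combined estimate weights each cue by its inverse variance,
    and the combined variance is smaller than either cue's alone."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)  # weight on the first cue
    mean = w1 * m1 + (1 - w1) * m2
    var = 1 / (1 / var1 + 1 / var2)
    return mean, var

# Hypothetical example: a reliable noxious-heat cue (mean 6, variance 1)
# combined with a vaguer visual threat cue (mean 2, variance 4) about the
# same body site. The fused estimate sits nearer the reliable cue.
mean, var = combine_cues(m1=6.0, var1=1.0, m2=2.0, var2=4.0)
```

The combined variance (0.8) is below both input variances, which is the signature benefit of integration.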

    A Bayesian Model of Sensory Adaptation

    Recent studies reported two opposite types of adaptation in temporal perception. Here, we propose a Bayesian model of sensory adaptation that exhibits both types of adaptation. We regard adaptation as the adaptive updating of estimations of time-evolving variables, which determine the mean value of the likelihood function and that of the prior distribution in a Bayesian model of temporal perception. On the basis of certain assumptions, we can analytically determine the mean behavior in our model and identify the parameters that determine the type of adaptation that actually occurs. The results of our model suggest that we can control the type of adaptation by controlling the statistical properties of the stimuli presented.
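The "adaptive updating" described here can be illustrated with a one-dimensional Gaussian model: each percept is a precision-weighted compromise between the prior mean and the stimulus, and the prior mean then drifts toward recent percepts. All parameter values below are illustrative assumptions, not the paper's:

```python
def bayes_estimate(s, mu_prior, var_prior, var_like):
    """Posterior mean: precision-weighted compromise between the prior
    mean and the stimulus measurement s."""
    w = var_prior / (var_prior + var_like)  # weight on the measurement
    return mu_prior + w * (s - mu_prior)

def adapt(stimuli, mu0=0.5, var_prior=0.04, var_like=0.01, rate=0.2):
    """After each trial, shift the prior mean toward the latest estimate,
    so the prior tracks the statistics of recently presented stimuli."""
    mu = mu0
    estimates = []
    for s in stimuli:
        est = bayes_estimate(s, mu, var_prior, var_like)
        estimates.append(est)
        mu = mu + rate * (est - mu)  # the prior drifts toward experience
    return estimates, mu

# Repeatedly presenting a long interval (0.9 s) pulls the prior mean, and
# hence subsequent estimates, upward -- one form of adaptation.
ests, mu_final = adapt([0.9] * 20)
```

Changing which variable drifts (prior mean versus likelihood mean) changes the direction of the resulting aftereffect, which is the sense in which stimulus statistics select the adaptation type.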

    A deep active inference model of the rubber-hand illusion

    Understanding how perception and action deal with sensorimotor conflicts, such as the rubber-hand illusion (RHI), is essential to understand how the body adapts to uncertain situations. Recent results in humans have shown that the RHI not only produces a change in the perceived arm location, but also causes involuntary forces. Here, we describe a deep active inference agent in a virtual environment, which we subjected to the RHI, that is able to account for these results. We show that our model, which deals with high-dimensional visual inputs, produces perceptual and force patterns similar to those found in humans. (8 pages, 3 figures. Accepted at the 1st International Workshop on Active Inference, in conjunction with the European Conference on Machine Learning 2020; the final authenticated publication is available online at https://doi.org/10.1007/978-3-030-64919-7_1.)

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
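The two phases described here can be caricatured in a few lines: a pre-computed map scoring candidate behaviours in simulation, then map-guided trial and error on the damaged robot. Everything below (the one-dimensional behaviour space, both performance functions, the acceptance threshold) is a toy stand-in for the paper's much richer method, not its actual algorithm:

```python
def intact_performance(b):
    """Simulated pre-damage score for behaviour b (toy model)."""
    return 1.0 - abs(b - 0.3)

def damaged_performance(b):
    """True post-damage score -- unknown to the robot in advance."""
    return 1.0 - abs(b - 0.7)

# Phase 1: before deployment, map each candidate behaviour to its
# predicted (intact) value.
behaviour_map = {round(i / 10, 1): intact_performance(i / 10)
                 for i in range(11)}

# Phase 2: trial and error guided by the map's priors -- test behaviours
# in order of predicted value until one works well enough when damaged.
trials = 0
found = None
for b in sorted(behaviour_map, key=behaviour_map.get, reverse=True):
    trials += 1
    if damaged_performance(b) > 0.85:  # good-enough compensation
        found = b
        break
```

The point of the map is that the search tries plausible whole behaviours rather than diagnosing the damage, so adaptation needs only a handful of physical trials.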

    Optimal Compensation for Temporal Uncertainty in Movement Planning

    Motor control requires the generation of a precise temporal sequence of control signals sent to the skeletal musculature. We describe an experiment that, for good performance, requires human subjects to plan movements taking into account uncertainty in their movement duration and the increase in that uncertainty with increasing movement duration. We do this by rewarding movements performed within a specified time window, and penalizing slower movements in some conditions and faster movements in others. Our results indicate that subjects compensated for their natural duration-dependent temporal uncertainty as well as an overall increase in temporal uncertainty that was imposed experimentally. Their compensation for temporal uncertainty, both the natural duration-dependent and imposed overall components, was nearly optimal in the sense of maximizing expected gain in the task. The motor system is able to model its temporal uncertainty and compensate for that uncertainty so as to optimize the consequences of movement.
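Maximizing expected gain under duration-dependent timing noise can be made concrete: model the produced duration as Gaussian around the planned duration, with a standard deviation that grows linearly with the plan, and search over plans. The noise constant, reward window, and penalty below are invented for illustration:

```python
import math

def expected_gain(t_plan, k=0.1, window=(0.55, 0.65),
                  reward=1.0, penalty_slow=-1.0):
    """Expected gain for planned duration t_plan when the produced
    duration is Gaussian with sigma = k * t_plan (noise grows with
    duration). Movements slower than the window are penalized; faster
    ones merely miss the reward."""
    sigma = k * t_plan
    cdf = lambda x: 0.5 * (1 + math.erf((x - t_plan) / (sigma * math.sqrt(2))))
    lo, hi = window
    p_hit = cdf(hi) - cdf(lo)   # produced duration lands in the window
    p_slow = 1.0 - cdf(hi)      # too slow -> penalty
    return reward * p_hit + penalty_slow * p_slow

# Grid search over planned durations: the slow-movement penalty pushes
# the optimal plan earlier than the window centre (0.6 s).
ts = [0.40 + 0.001 * i for i in range(300)]
best_t = max(ts, key=expected_gain)
```

Aiming earlier than the window centre trades a little hit probability for a larger reduction in the penalized slow tail, which is the qualitative compensation the experiment tests for.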

    Bayesian Integration and Non-Linear Feedback Control in a Full-Body Motor Task

    A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well described by optimal control models in which a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a linear-quadratic regulator or a bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
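The Kalman-filter-plus-LQR combination is the classic linear-quadratic-Gaussian (LQG) loop: estimate the state from noisy observations, then apply linear feedback to the estimate rather than to the raw observation. A one-dimensional sketch with made-up dynamics and noise levels (and a hand-picked stabilizing gain in place of a solved Riccati equation), not the paper's fitted model:

```python
import numpy as np

A, B, C = 1.0, 0.1, 1.0      # x' = A*x + B*u ;  y = C*x + noise
Q_proc, R_obs = 0.01, 0.25   # process / observation noise variances
L_gain = 2.0                 # stabilizing feedback gain: |A - B*L| < 1

rng = np.random.default_rng(0)
x, x_hat, P = 2.0, 0.0, 1.0  # true state, estimate, estimate variance
for _ in range(200):
    u = -L_gain * x_hat                       # act on the *estimate*
    x = A * x + B * u + rng.normal(0, np.sqrt(Q_proc))
    # Kalman predict step
    x_hat = A * x_hat + B * u
    P = A * P * A + Q_proc
    # Kalman update step with a noisy (cloud-of-dots-like) observation
    y = C * x + rng.normal(0, np.sqrt(R_obs))
    K = P * C / (C * P * C + R_obs)
    x_hat = x_hat + K * (y - C * x_hat)
    P = (1 - K * C) * P
```

The separation into estimator and controller is what lets the model account for uncertainty-weighted integration over time while keeping the control law simple.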

    Grasping Objects with Environmentally Induced Position Uncertainty

    Due to noisy motor commands and imprecise, ambiguous sensory information, there is often substantial uncertainty about the relative location between our body and objects in the environment. Little is known about how well people manage and compensate for this uncertainty in purposive movement tasks like grasping. Grasping objects requires reach trajectories to generate object-finger contacts that permit stable lifting. For objects with position uncertainty, some trajectories are more efficient than others in terms of the probability of producing stable grasps. We hypothesize that people attempt to generate efficient grasp trajectories that produce stable grasps at first contact, without requiring post-contact adjustments. We tested this hypothesis by comparing human uncertainty compensation in grasping objects against optimal predictions. Participants grasped and lifted a cylindrical object with position uncertainty, introduced by moving the cylinder with a robotic arm over a sequence of 5 positions sampled from a strongly oriented 2D Gaussian distribution. Preceding each reach, vision of the object was removed for the remainder of the trial and the cylinder was moved one additional time. In accord with optimal predictions, we found that people compensate by aligning the approach direction with the covariance angle to maintain grasp efficiency. This compensation yields a higher probability of achieving stable grasps at first contact than non-compensating strategies when grasping objects with directional position uncertainty, and the results provide the first demonstration that humans compensate for uncertainty in a complex purposive task.

    Collective Animal Behavior from Bayesian Estimation and Probability Matching

    Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is based on empirical fits to observations, and we lack first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probability matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability-matching stage, each animal chooses a behavior with a probability given by the Bayesian estimation that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters: one that each animal assigns to the other animals, and another given by the quality of the non-social information. We test our model by obtaining theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
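For two options and independently observed conspecifics, the two-stage structure (a Bayesian estimate of which option is best, then probability matching on that estimate) reduces to a simple rule in the difference of counts. The reliability value below is an illustrative assumption, not the fitted stickleback parameter:

```python
def p_best_left(n_left, n_right, reliability=2.0, prior_odds=1.0):
    """Stage 1 -- Bayesian estimate that 'left' is the better option.
    Each observed animal multiplies the posterior odds by `reliability`
    (the likelihood ratio it contributes), so only the count difference
    matters."""
    odds = prior_odds * reliability ** (n_left - n_right)
    return odds / (1 + odds)

def p_choose_left(n_left, n_right):
    """Stage 2 -- probability matching: choose 'left' with probability
    equal to the estimated probability that it is best."""
    return p_best_left(n_left, n_right)
```

With three animals already on the left and none on the right, the rule gives a choice probability of 8/9; symmetric counts give exactly 1/2, reproducing the graded (rather than all-or-nothing) following seen in shoaling fish.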

    Inferring Visuomotor Priors for Sensorimotor Learning

    Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations – the mapping between actual and visual location of the hand – during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly, to a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task, and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.
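The inference step — combining a structured Gaussian prior over transformation parameters with the noisy evidence from a single reach — follows the standard Gaussian-product rule. The rotation-favouring prior covariance and noise level below are illustrative stand-ins, not the covariance actually estimated from subjects:

```python
import numpy as np

# Prior over two transformation parameters with strong negative
# correlation (a stand-in for "rotation-like" structure), plus isotropic
# sensory noise on the evidence from the first reach.
Sigma_prior = np.array([[1.0, -0.9],
                        [-0.9, 1.0]])
Sigma_obs = np.eye(2) * 0.5
mu_prior = np.zeros(2)
obs = np.array([1.0, 0.0])   # noisy evidence: only the first parameter

# Gaussian prior x Gaussian likelihood -> Gaussian posterior:
# precision adds; the posterior mean is the precision-weighted average.
Lam = np.linalg.inv(Sigma_prior) + np.linalg.inv(Sigma_obs)
Sigma_post = np.linalg.inv(Lam)
mu_post = Sigma_post @ (np.linalg.inv(Sigma_prior) @ mu_prior
                        + np.linalg.inv(Sigma_obs) @ obs)
```

Because of the prior's correlation structure, evidence about one parameter shifts the estimate of the other as well; that cross-parameter coupling in second reaches is what lets the observer model recover the prior's covariance.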