
    The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

    An important question in the motor control literature is which laws drive biological limb movements. This question has prompted numerous investigations of arm movements in both humans and monkeys. Many theories assume that, among all possible movements, the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits the experimental data. A second approach (generally considered the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting mechanical energy expenditure. Unlike most investigations of optimality principles for arm movements, this model uses a cost function that is not smooth. First, a mathematical theory covering both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle: minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, in an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. Accordingly, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such optimality criteria may apply to a large range of biological movements.
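    The inactivation phenomenon can be illustrated on a toy problem. The sketch below (not the paper's full biomechanical model) considers a point mass x'' = u with |u| <= u_max: minimizing an L1, "absolute-work"-like cost ∫|u| dt over a fixed horizon T yields a bang-off-bang control, i.e. accelerate, coast with u = 0, then decelerate. The coasting phase around peak velocity is the analogue of the predicted muscle inactivation. All parameter values here are illustrative.

```python
import numpy as np

def bang_off_bang(distance, T, u_max, n=1001):
    """Rest-to-rest move of `distance` in time T; returns (t, u, v).

    The burst duration tb solves distance = u_max * tb * (T - tb);
    the smaller root is taken so a coast phase exists (requires T to
    exceed the minimum feasible movement time).
    """
    tb = (T - np.sqrt(T**2 - 4.0 * distance / u_max)) / 2.0
    t = np.linspace(0.0, T, n)
    # bang-off-bang control: +u_max, then 0 (coast), then -u_max
    u = np.where(t < tb, u_max, np.where(t > T - tb, -u_max, 0.0))
    # closed-form velocity of the bang-off-bang trajectory
    v = u_max * np.minimum(np.minimum(t, tb), T - t)
    return t, u, v

t, u, v = bang_off_bang(distance=1.0, T=2.0, u_max=2.0)
i_peak = int(np.argmax(v))
print("control at peak velocity:", u[i_peak])  # 0.0 -> "inactivation"
```

    Note that a smooth (e.g. quadratic) effort cost would not produce this coasting phase; the kink of |u| at u = 0 is what makes the zero-control arc optimal, matching the paper's necessity result.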

    Optimality Principles and Motion Planning of Human-Like Reaching Movements

    The paper deals with modeling of human-like reaching movements. Several issues are under study. First, we consider a model of unconstrained reaching movements that corresponds to the minimization of control effort. It is shown that this model can be represented by the well-known Beta function. The representation can be used for the construction of fractional-order models and also for modeling asymmetric velocity profiles. Next, we address the formation of boundary conditions in a natural way. From the mathematical point of view, the structure of the optimal solution is defined not only by the form of the optimality criterion but also by the boundary conditions of the optimization task. The natural boundary conditions defined in this part of the paper can also be used in modeling asymmetric velocity profiles. Finally, addressing the modeling of reaching movements with bounded control actions, we consider the minimum-time formulation of the optimization problem and (for the n-th order integrator) find its analytical solution.
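    The Beta-function representation can be checked in the classic symmetric case: the normalized speed profile of a minimum-jerk rest-to-rest movement is 30 t² (1 − t)², which is exactly the Beta(3, 3) density on [0, 1]; unequal Beta parameters (a ≠ b) would then give asymmetric profiles. The sketch below verifies this identity numerically (function names are illustrative, not the paper's notation).

```python
import math

def beta_pdf(t, a, b):
    """Beta(a, b) density on [0, 1]."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return t**(a - 1) * (1 - t)**(b - 1) / B

def min_jerk_speed(t):
    """Normalized minimum-jerk speed profile: 30 t^2 (1 - t)^2."""
    return 30.0 * t**2 * (1 - t)**2

# compare the two curves on a grid over normalized time [0, 1]
ts = [i / 100 for i in range(101)]
max_err = max(abs(beta_pdf(t, 3, 3) - min_jerk_speed(t)) for t in ts)
print(f"max |Beta(3,3) - min-jerk speed| = {max_err:.2e}")
```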

    Review of real brain-controlled wheelchairs

    This paper presents a review of the state of the art regarding wheelchairs driven by a brain-computer interface (BCI). Using a brain-controlled wheelchair (BCW), disabled users could handle a wheelchair through their brain activity, granting them the autonomy to move through an experimental environment. A classification is established based on the characteristics of the BCW, such as the type of electroencephalographic (EEG) signal used, the navigation system employed by the wheelchair, the task the participants perform, or the metrics used to evaluate performance. Furthermore, these factors are compared according to the type of signal used, in order to clarify the differences among them. Finally, the trend of current research in this field is discussed, as well as the challenges that should be solved in the future.

    Information-theoretic Sensorimotor Foundations of Fitts' Law

    © 2019 ACM. This is the author's version of the work, posted here by permission of ACM for personal use, not for redistribution. The definitive version is accessible via https://doi.org/10.1145/3290607.3313053
    We propose a novel, biologically plausible cost/fitness function for sensorimotor control, formalized with the information-theoretic principle of empowerment, a task-independent universal utility. Empowerment captures uncertainties of different natures in the perception-action loop (e.g., noise, delays) in a single quantity. We present the formalism for a Fitts' law type goal-directed arm movement task and suggest that empowerment is one potential underlying determinant of movement trajectory planning in the presence of signal-dependent sensorimotor noise. Simulation results demonstrate the temporal relation between empowerment and various plausible control strategies for this specific task.
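    For context, Fitts' law predicts that movement time grows linearly with an index of difficulty ID = log2(D/W + 1) (the Shannon formulation, with target distance D and width W). The sketch below computes this relation; the intercept a and slope b are illustrative placeholders, not values fitted in the paper.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon-form index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s): MT = a + b * ID."""
    return a + b * index_of_difficulty(distance, width)

# A far, narrow target has a higher ID and thus a longer predicted MT
# than a near, wide one; each extra bit of ID adds ~b seconds.
id_easy = index_of_difficulty(0.10, 0.05)   # log2(3)  ~ 1.58 bits
id_hard = index_of_difficulty(0.40, 0.01)   # log2(41) ~ 5.36 bits
print(movement_time(0.10, 0.05), movement_time(0.40, 0.01))
```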

    The Volitive and the Executive Function of Intentions

    Many philosophers of action conceive of intentions functionally, as executive states: intentions are mental states that represent an action and tend to cause this action. In the philosophical tradition, another function of intentions, which may be called "volitive", played a much more prominent role: intentions are mental states that represent what kinds of actions we want and prefer to be realized, and thus synthesize, in a possibly rational way, our motivational, desiderative, and perhaps affective as well as cognitive attitudes towards this action. In the paper it is argued that intentions must fulfil both functions. A concept of 'intention' is then developed that integrates both functions.

    Inverse Reinforcement Learning in Large State Spaces via Function Approximation

    This paper introduces a new method for inverse reinforcement learning in large-scale and high-dimensional state spaces. To avoid solving computationally expensive reinforcement learning problems during reward learning, we propose a function approximation method that ensures the Bellman optimality equation always holds, and then estimate a function that maximizes the likelihood of the observed motion. The time complexity of the proposed method is linear in the cardinality of the action set, so it can handle large state spaces efficiently. We test the proposed method in a simulated environment and show that it is more accurate than existing methods and scales significantly better. We also show that the proposed method can extend many existing methods to high-dimensional state spaces. We then apply the method to evaluating the effect of rehabilitative stimulation on patients with spinal cord injuries, based on observed patient motions.
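    The likelihood step common to such inverse reinforcement learning methods can be sketched on a toy problem: given action values Q(s, a) that satisfy the Bellman optimality equation, observed state-action pairs are scored with a Boltzmann (softmax) likelihood, so reward hypotheses under which the demonstrations look near-optimal score higher. The chain MDP, reward, and demonstrations below are made up for illustration; the paper's function-approximation scheme for large state spaces is not reproduced here.

```python
import math

N, GAMMA = 5, 0.9
ACTIONS = (-1, +1)                      # step left / right on a chain
reward = [0.0, 0.0, 0.0, 0.0, 1.0]      # reward at the right end

# Value iteration: fixed point of the Bellman optimality equation
V = [0.0] * N
for _ in range(200):
    V = [max(reward[s] + GAMMA * V[min(max(s + a, 0), N - 1)]
             for a in ACTIONS) for s in range(N)]

def Q(s, a):
    """Action value consistent with the Bellman optimality equation."""
    return reward[s] + GAMMA * V[min(max(s + a, 0), N - 1)]

def log_likelihood(demos, beta=5.0):
    """Sum of log softmax probabilities of the demonstrated actions."""
    ll = 0.0
    for s, a in demos:
        z = sum(math.exp(beta * Q(s, b)) for b in ACTIONS)
        ll += beta * Q(s, a) - math.log(z)
    return ll

demos_good = [(0, +1), (1, +1), (2, +1), (3, +1)]   # head to the reward
demos_bad = [(0, -1), (1, -1), (2, -1), (3, -1)]    # walk away from it
print(log_likelihood(demos_good), log_likelihood(demos_bad))
```

    Demonstrations moving toward the rewarded state receive a much higher log-likelihood, which is the signal a reward learner would maximize.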

    Changing ideas about others' intentions: updating prior expectations tunes activity in the human motor system

    Predicting intentions from observing another agent's behaviours is often thought to depend on motor resonance, i.e., the motor system's response to a perceived movement through activation of its stored motor counterpart; but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task, using transcranial magnetic stimulation to probe corticospinal excitability (CSE), and report that experimentally induced updates in observers' prior expectations modulate CSE when predictions are made under perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information, and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that, when observers predict others' intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism sharing some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction.