
    New control strategies for neuroprosthetic systems

    The availability of techniques to artificially excite paralyzed muscles opens enormous potential for restoring both upper and lower extremity movements with neuroprostheses. Neuroprostheses must stimulate muscle, and control and regulate the artificial movements produced. Control methods to accomplish these tasks include feedforward (open-loop), feedback, and adaptive control. Feedforward control requires a great deal of information about the biomechanical behavior of the limb. For the upper extremity, an artificial motor program was developed to provide such movement program input to a neuroprosthesis. In lower extremity control, one group achieved its best results by attempting to meet naturally perceived gait objectives rather than to follow an exact joint angle trajectory. Adaptive feedforward control, as implemented in the cycle-to-cycle controller, gave good compensation for the gradual decrease in performance observed with open-loop control. A neural network controller was able to customize stimulation parameters to generate a desired output trajectory in a given individual and to maintain tracking performance in the presence of muscle fatigue. The authors believe that practical FNS (functional neuromuscular stimulation) control systems must exhibit many of these features of neurophysiological systems.
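
    A minimal sketch of the cycle-to-cycle idea described above, assuming a pulse-width stimulation parameter and a proportional per-cycle update; the function names, gain, units, and limits are illustrative assumptions, not the controller from the paper.

        def cycle_to_cycle_update(pulse_width, target_angle, measured_angle,
                                  gain=0.5, pw_min=0.0, pw_max=500.0):
            """Adjust the stimulation pulse width for the next cycle from the
            tracking error observed over the completed cycle."""
            error = target_angle - measured_angle          # degrees
            pulse_width += gain * error                    # integral-like, once per cycle
            return min(max(pulse_width, pw_min), pw_max)   # respect stimulator limits

        # As muscle fatigue lowers the achieved angle, the controller raises
        # the pulse width over successive cycles to hold the target.
        pw = 200.0
        for measured in [40.0, 38.5, 37.0, 36.0]:          # degrees, declining with fatigue
            pw = cycle_to_cycle_update(pw, target_angle=45.0, measured_angle=measured)
            print(f"next-cycle pulse width: {pw:.1f} us")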

    Phase correction for Learning Feedforward Control

    Intelligent mechatronics makes it possible to compensate for effects that are difficult to counter by construction or by linear control, by including some intelligence in the system. The compensation of state-dependent effects, e.g. friction, cogging, and mass deviation, can be realised by learning feedforward control. This method identifies these disturbing effects as a function of their states and compensates for them before they introduce an error. Because the effects are learnt as a function of their states, the method can also be used for non-repetitive motions. The learning of state-dependent effects relies on the update signal that is used. In previous work, the feedback control signal was used as an error measure between the approximation and the true state-dependent effect. If the effects introduce a signal that contains frequencies near the bandwidth, the phase shift between this signal and the feedback signal may seriously degrade the performance of the approximation. The use of phase correction overcomes this problem, as validated by a set of simulations and experiments that show the necessity of the phase-corrected scheme.
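
    To make the mechanism concrete, the sketch below learns a feedforward signal as a function of position, using the feedback controller's output as the update signal, with a fixed sample shift standing in for phase correction. The approximator (a simple lookup table), table size, learning rate, and shift value are all assumptions for illustration, not the paper's design.

        import numpy as np

        N_BINS = 64
        weights = np.zeros(N_BINS)      # feedforward stored as a function of state

        def bin_index(position, pos_min=0.0, pos_max=1.0):
            """Map a position onto one table entry (piecewise-constant map)."""
            idx = int((position - pos_min) / (pos_max - pos_min) * N_BINS)
            return min(max(idx, 0), N_BINS - 1)

        def feedforward(position):
            """Compensation applied before an error arises."""
            return weights[bin_index(position)]

        def learn(positions, feedback_signal, rate=0.1, phase_shift=3):
            """Update the stored feedforward with the feedback control signal.
            Shifting the signal by a few samples crudely realigns it with the
            state that caused it, the role phase correction plays above."""
            for k in range(len(positions) - phase_shift):
                weights[bin_index(positions[k])] += rate * feedback_signal[k + phase_shift]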

    Intelligent flight control systems

    The capabilities of flight control systems can be enhanced by designing them to emulate functions of natural intelligence. Intelligent control functions fall into three categories. Declarative actions involve decision-making, providing models for system monitoring, goal planning, and system/scenario identification. Procedural actions concern skilled behavior and have parallels in guidance, navigation, and adaptation. Reflexive actions are spontaneous, inner-loop responses for control and estimation. Intelligent flight control systems acquire knowledge of the aircraft and its mission and adapt to changes in the flight environment. Cognitive models form an efficient basis for integrating 'outer-loop/inner-loop' control functions and for developing robust parallel-processing algorithms.
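
    The three-layer split can be pictured as one loop running the categories at different rates, as in the sketch below; the layer names follow the abstract, but every function, gain, and rate choice here is an illustrative assumption, not the paper's architecture.

        def declarative_plan(mission):
            """Declarative layer: decision-making and goal planning."""
            return mission["waypoints"].pop(0) if mission["waypoints"] else None

        def procedural_guidance(goal, position, k_guid=0.2):
            """Procedural layer: skilled guidance toward the current goal."""
            return k_guid * (goal - position)              # commanded velocity

        def reflexive_control(vel_cmd, velocity, k_rate=2.0):
            """Reflexive layer: fast inner-loop stabilization."""
            return k_rate * (vel_cmd - velocity)           # actuator command

        mission = {"waypoints": [10.0, 25.0]}
        position, velocity, dt = 0.0, 0.0, 0.05
        goal = declarative_plan(mission)                   # infrequent, outer loop
        for _ in range(200):
            vel_cmd = procedural_guidance(goal, position)  # mid-rate guidance
            accel = reflexive_control(vel_cmd, velocity)   # every step, inner loop
            velocity += accel * dt
            position += velocity * dt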

    Sim-to-Real Transfer of Robotic Control with Dynamics Randomization

    Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. However, the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this "reality gap". By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.
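
    The core of the method can be sketched as re-sampling the simulator's dynamics parameters at the start of every training episode, as below; the parameter names and ranges and the env/policy interface are assumptions for illustration, not the paper's actual setup.

        import random

        def sample_dynamics():
            """Draw a fresh set of dynamics parameters for one episode."""
            return {
                "object_mass":  random.uniform(0.1, 1.0),   # kg
                "friction":     random.uniform(0.2, 1.2),
                "action_delay": random.randint(0, 3),       # control steps
            }

        def train(env, policy, episodes=1000):
            """Train across many randomized variants of the simulator so the
            policy learns to adapt rather than exploit one fixed model."""
            for _ in range(episodes):
                env.set_dynamics(**sample_dynamics())       # new dynamics each episode
                obs, done = env.reset(), False
                while not done:
                    obs, reward, done = env.step(policy.act(obs))
                    policy.update(obs, reward)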