95,623 research outputs found

    Stepping motor control circuit Patent

    Stepping motor control apparatus exciting windings in proper time sequence to cause motor to rotate in either direction.
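The patent snippet only says the windings are excited in a time sequence whose order sets the rotation direction. A minimal sketch of that idea, assuming a generic two-phase stepper driven full-step, one winding at a time (the sequence table and function are illustrative, not from the patent):

```python
# Full-step excitation sequence for a hypothetical 4-winding stepper.
# Each tuple is the on/off state of windings (A, B, A', B').
FULL_STEP = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

def winding_states(n_steps, direction=+1):
    """Yield one excitation pattern per step.

    direction=+1 walks the sequence forward; direction=-1 walks it
    backward, which reverses the motor's rotation.
    """
    for i in range(n_steps):
        yield FULL_STEP[(direction * i) % len(FULL_STEP)]
```

Stepping through `winding_states(4)` produces the forward sequence; flipping `direction` replays it in reverse order, which is what reverses rotation.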

    Electronic motor control system Patent

    Electronic circuit system for controlling electric motor speed.

    Motor Output Variability Impairs Driving Ability in Older Adults

    Background: The functional declines with aging relate to deficits in motor control and strength. In this study, we determine whether older adults exhibit impaired driving as a consequence of declines in motor control or strength. Methods: Young and older adults performed the following tasks: (i) maximum voluntary contractions of ankle dorsiflexion and plantarflexion; (ii) sinusoidal tracking with isolated ankle dorsiflexion; and (iii) a reactive driving task that required responding to unexpected brake lights of the car ahead. We quantified motor control with ankle force variability, gas pedal position variability, and brake force variability. We quantified reactive driving performance with a combination of gas pedal error, premotor and motor response times, and brake pedal error. Results: Reactive driving performance was ~30% more impaired (t = 3.38; p < .01) in older adults compared with young adults. Older adults exhibited greater motor output variability during both isolated ankle dorsiflexion contractions (t = 2.76; p < .05) and reactive driving (gas pedal variability: t = 1.87; p < .03; brake pedal variability: t = 4.55; p < .01). Deficits in reactive driving were strongly correlated with greater motor output variability (R² = .48; p < .01) but not strength (p > .05). Conclusions: This study provides novel evidence that age-related declines in motor control, but not strength, impair reactive driving. These findings have implications for rehabilitation and suggest that interventions should focus on improving motor control to enhance driving-related function in older adults.
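The abstract quantifies motor control through force and pedal-position variability without specifying the exact metric. One common choice for such measures is the coefficient of variation of the force signal; the helper below is an illustrative sketch of that convention, not the study's actual analysis code:

```python
import numpy as np

def force_variability(force):
    """Coefficient of variation (sample SD / mean) of a force signal.

    A common, but here assumed, way to quantify motor-output
    variability from a recorded force or pedal-position trace.
    """
    force = np.asarray(force, dtype=float)
    return force.std(ddof=1) / force.mean()
```

A flatter trace around the same mean yields a smaller value, so larger values correspond to the greater variability reported for the older adults.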

    Scaling Reinforcement Learning Paradigms for Motor Control

    Reinforcement learning offers a general framework to explain reward-related learning in artificial and biological motor control. However, current reinforcement learning methods rarely scale to high-dimensional movement systems and mainly operate in discrete, low-dimensional domains like game playing, artificial toy problems, etc. This drawback makes them unsuitable for application to human or bio-mimetic motor control. In this poster, we look at promising approaches that can potentially scale and suggest a novel formulation of the actor-critic algorithm which takes steps towards alleviating the current shortcomings. We argue that methods based on greedy policies are not likely to scale into high-dimensional domains as they are problematic when used with function approximation – a must when dealing with continuous domains. We adopt the path of direct policy-gradient-based policy improvements since they avoid the problems of destabilizing dynamics encountered in traditional value-iteration-based updates. While regular policy gradient methods have demonstrated promising results in the domain of humanoid motor control, we demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. Based on this, it is proved that Kakade’s ‘average natural policy gradient’ is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges with probability one to the nearest local minimum in Riemannian space of the cost function. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and offers a promising route for the development of reinforcement learning for truly high-dimensional continuous state-action systems. Keywords: reinforcement learning, neurodynamic programming, actor-critic methods, policy gradient methods, natural policy gradient.
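The abstract's core claim is that preconditioning the policy gradient with the inverse Fisher information matrix (the natural gradient) improves on the vanilla gradient. A minimal sketch of that idea on a hypothetical two-armed bandit with a softmax policy, using the exact gradient and Fisher matrix (the bandit, its rewards, the learning rate, and the ridge term are all illustrative assumptions, not the poster's cart-pole setup or Natural Actor-Critic estimator):

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

# Hypothetical expected rewards: action 0 pays 1.0, action 1 pays 0.2.
rewards = np.array([1.0, 0.2])

theta = np.zeros(2)
for step in range(200):
    pi = softmax(theta)
    # Exact gradient of J(theta) = sum_a pi(a) r(a), using
    # grad log pi(a) = e_a - pi for a softmax policy.
    g = np.zeros(2)
    F = np.zeros((2, 2))
    for a in range(2):
        glog = np.eye(2)[a] - pi
        g += pi[a] * rewards[a] * glog
        F += pi[a] * np.outer(glog, glog)  # Fisher information matrix
    # Natural gradient: precondition with the (ridge-regularized) inverse
    # Fisher matrix. The ridge handles the softmax Fisher's rank deficiency.
    nat_g = np.linalg.solve(F + 1e-6 * np.eye(2), g)
    theta += 0.1 * nat_g

pi = softmax(theta)
```

One property this sketch exposes: for the softmax policy the natural-gradient step size stays roughly constant as the policy becomes deterministic, whereas the vanilla gradient `g` shrinks with `pi(a)(1 - pi(a))` and stalls, which is one intuition for the scaling advantage the abstract reports.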