23 research outputs found

    Renee's Husband; The Shawl

    Get PDF
    Frederick Feirstein, a New Yorker, most recently published Manhattan Carnival: A Dramatic Monologue. The Shawl is from a new manuscript, Stubborn Spring; Renee's Husband, from a book-length poem in progress, The Psychiatrist at the Cocktail Party.

    Reinforcement learning of potential fields to achieve limit-cycle walking

    No full text
    Reinforcement learning is a powerful tool for deriving controllers for systems where no models are available. Policy search algorithms in particular are suitable for complex systems, keeping learning time manageable and accommodating continuous state and action spaces. However, these algorithms demand more insight into the system in order to choose a suitable controller parameterization. This paper investigates a type of policy parameterization for impedance control that allows the energy input to be implicitly bounded: potential fields. A methodology is presented for generating a potential field-constrained impedance controller via approximation of example trajectories, and for subsequently improving the control policy using reinforcement learning. The potential field-constrained approximation is used as a policy parameterization for policy search reinforcement learning and is compared to its unconstrained counterpart. Simulations on a simple biped walking model show that the learned controllers are able to overcome the potential field of gravity by generating a stable limit-cycle gait on flat ground for both parameterizations. The potential field-constrained controller provides safety with a known energy bound while performing as well as the unconstrained policy.
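    The abstract describes parameterizing the control policy as a potential field so that the energy the controller can inject into the walker is implicitly bounded, initializing that field by approximating example trajectories, and then refining it with policy search. The paper's actual parameterization is not reproduced here; the sketch below is a minimal illustration of the idea for a single joint, assuming a radial-basis-function potential. All names (PotentialFieldController, fit_weights) and gains are hypothetical, not taken from the paper.

    import numpy as np

    class PotentialFieldController:
        """Impedance-style controller whose feedback torque is the negative
        gradient of a learned scalar potential V(q), plus pure damping.
        Because the torque derives from a potential, the net energy the
        controller can add along any trajectory is bounded by the range of V."""

        def __init__(self, centers, width, weights, damping):
            self.centers = np.asarray(centers, dtype=float)   # RBF centers over the joint angle
            self.width = float(width)                          # shared RBF width
            self.weights = np.asarray(weights, dtype=float)    # learnable policy parameters
            self.damping = float(damping)                      # dissipative damping gain

        def potential(self, q):
            # V(q) = sum_i w_i * exp(-(q - c_i)^2 / (2 * width^2))
            diff = q - self.centers
            return float(np.sum(self.weights * np.exp(-diff**2 / (2.0 * self.width**2))))

        def torque(self, q, qdot):
            # tau = -dV/dq - damping * qdot
            diff = q - self.centers
            basis = np.exp(-diff**2 / (2.0 * self.width**2))
            dV_dq = np.sum(self.weights * basis * (-diff / self.width**2))
            return float(-dV_dq - self.damping * qdot)

        def energy_bound(self, q_grid):
            # The work done by a conservative torque between any two configurations
            # is V(q_start) - V(q_end), so max(V) - min(V) bounds the energy input.
            values = np.array([self.potential(q) for q in q_grid])
            return float(values.max() - values.min())

    def fit_weights(controller, q_demo, tau_demo):
        # Least-squares fit of the potential-field weights to demonstrated
        # (angle, torque) pairs; the potential-field torque is linear in the
        # weights, with feature_j(q) = exp(-(q - c_j)^2 / (2 * width^2)) * (q - c_j) / width^2.
        # Damping is assumed to be handled separately from tau_demo.
        q_demo = np.asarray(q_demo, dtype=float)
        tau_demo = np.asarray(tau_demo, dtype=float)
        diff = q_demo[:, None] - controller.centers[None, :]
        basis = np.exp(-diff**2 / (2.0 * controller.width**2))
        features = basis * diff / controller.width**2
        controller.weights, *_ = np.linalg.lstsq(features, tau_demo, rcond=None)
        return controller

    Such a fit would give the policy-search stage a reasonable starting point; the search itself (for example, episodic perturbation of the weights followed by roll-outs) then only moves within the potential-field class, so the energy bound reported by energy_bound holds throughout learning.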

    Effects of punishing elements of a simple instrumental-consummatory response chain

    No full text
    Rats were trained to press a lever, with every response reinforced with water. After responding was established, nine rats were administered a brief shock after each lever press, and nine others were shocked after drinking. The two procedures resulted in similar suppression of responding, and examination of the latency data when responding was partially suppressed indicated that under both conditions response suppression was due primarily to an increase in the latency of the instrumental response, rather than to pausing between the instrumental and consummatory responses. Thus, punishment following either the instrumental or the consummatory component of the simple response sequence reduced the number of sequences initiated, rather than selectively suppressing the punished behavior.