5,211 research outputs found

    Robot pain: a speculative review of its functions

    Given the scarce bibliography dealing explicitly with robot pain, this chapter enriches its review with related research on robot behaviours and capacities in which pain could play a role. It is shown that all such roles, ranging from punishment to intrinsic motivation and planning knowledge, can be formulated within the unified framework of reinforcement learning.
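    A minimal sketch of the punishment role under the RL framing described above; the state names, the pain_signal() stand-in, and all parameters are illustrative assumptions, not the chapter's model:

```python
# Hypothetical sketch: pain entering a tabular Q-learning update as
# punishment, i.e., a negative contribution to the reward signal.
ALPHA, GAMMA = 0.1, 0.95        # learning rate and discount factor (assumed)
q = {}                          # Q-table: (state, action) -> value estimate

def pain_signal(state):
    """Stand-in for a robot pain sensor; returns a cost in [0, 1]."""
    return 1.0 if state == "collision" else 0.0

def q_update(state, action, next_state, task_reward, actions):
    # Pain is subtracted from the task reward, acting as punishment.
    reward = task_reward - pain_signal(next_state)
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```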

    Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb

    In this work we explore the use of reinforcement learning (RL) to assist human decision making, combining state-of-the-art RL algorithms with an application to prosthetics. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to switch quickly between a number of control actions or modes of operation in order to operate their devices. We suggest that by learning to anticipate (predict) a user's behaviour, artificial limbs could take on an active role in a human's control decisions and so reduce the burden on their users. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user's control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm. Experiments were performed using a multi-function robot arm controlled by muscle signals from a user's body (similar to conventional artificial limb control). Our approach was able to acquire and maintain forecasts about a user's switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system. We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user to streamline control and potentially decrease the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision making systems.

    Comment: 5 pages, 4 figures. This version to appear at The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, NJ, USA, Oct. 25-27, 2013.
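    As a rough sketch of the technique named in the abstract (not the authors' implementation; the feature encoding and parameter values are assumptions), a switch-prediction GVF can be learned with linear TD(lambda), using a cumulant of 1 on time steps where the user switches motor functions:

```python
import numpy as np

# Sketch of a switch-prediction GVF learned by linear TD(lambda).
n_features = 64                     # assumed size of the state encoding
w = np.zeros(n_features)            # value-function weights
z = np.zeros(n_features)            # eligibility trace
alpha, gamma, lam = 0.1, 0.97, 0.9  # step size, discount, trace decay

def td_step(x, x_next, switched):
    """One TD(lambda) update. x, x_next: feature vectors of the myoelectric
    and robot-arm state; switched: True if the user just switched functions."""
    global w, z
    cumulant = 1.0 if switched else 0.0
    delta = cumulant + gamma * (w @ x_next) - (w @ x)  # TD error
    z = gamma * lam * z + x                            # accumulating trace
    w = w + alpha * delta * z
    return w @ x_next               # current forecast of an upcoming switch
```

    The returned value approximates the discounted expectation of a switch event, so a threshold on it could trigger the machine-initiated switching the abstract anticipates.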

    Neuro-mechanical entrainment in a bipedal robotic walking platform

    In this study, we investigated the use of van der Pol oscillators in a 4-DOF embodied bipedal robotic platform for planar walking. The oscillator controlled the hip and knee joints of the robot and was capable of generating waveforms with the correct frequency and phase so as to entrain with the mechanical system. Lowering its oscillation frequency resulted in an increase in the walking pace, indicating exploitation of the robot's global natural dynamics. This is verified by its operation in the absence of entrainment, where faster limb motion results in a slower overall walking pace.
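    A sketch of the oscillator side of such a controller, under assumed parameters (the gains and feedback coupling here are illustrative, not the platform's values): a van der Pol oscillator integrated with forward Euler, whose output could drive a hip or knee joint:

```python
import math

# Van der Pol oscillator, x'' = mu*(1 - x^2)*x' - omega^2*x, integrated
# with forward Euler. The feedback term lets the measured joint motion
# pull the oscillator toward the body's natural frequency (entrainment).
mu = 1.0                        # nonlinear damping strength (assumed)
omega = 2.0 * math.pi           # base angular frequency, rad/s (~1 Hz)
dt = 0.001                      # integration step, s

def vdp_step(x, v, joint_feedback=0.0):
    """Advance oscillator state (x, v) one Euler step; joint_feedback
    couples the mechanical system back into the oscillator."""
    a = mu * (1.0 - x * x) * v - omega * omega * x + joint_feedback
    return x + dt * v, v + dt * a

x, v = 0.1, 0.0
hip_commands = []
for _ in range(5000):           # 5 s of simulated oscillation
    x, v = vdp_step(x, v)
    hip_commands.append(0.3 * x)  # scale to a hip-joint angle command (rad)
```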

    Chaotic exploration and learning of locomotion behaviours

    We present a general and fully dynamic neural system, which exploits intrinsic chaotic dynamics, for the real-time goal-directed exploration and learning of the possible locomotion patterns of an articulated robot of arbitrary morphology in an unknown environment. The controller is modeled as a network of neural oscillators that are initially coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organized dynamics, each of which is a candidate for a locomotion behavior. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states, using its intrinsic chaotic dynamics as a driving force, and to stabilize onto one of the states matching the given goal criteria. To improve the sustainability of useful transient patterns, sensory homeostasis is introduced, which results in an increased diversity of motor outputs and thus achieves multiscale exploration. A rhythmic pattern discovered by this process is memorized and sustained by changing the wiring between the initially disconnected oscillators using an adaptive synchronization method. Our results show that this novel neurorobotic system is able to create and learn multiple locomotion behaviors for a wide range of body configurations and physical environments, and can readapt in real time after sustaining damage.
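    A toy illustration of the adaptive-bifurcation idea (not the paper's neural model; the coupled logistic maps, performance measure, and all parameters are stand-ins): poor performance pushes a shared bifurcation parameter into the chaotic regime for exploration, and success lowers it so the orbit stabilizes:

```python
import numpy as np

# Toy adaptive bifurcation: coupled logistic maps whose shared parameter
# `a` is raised into the chaotic regime when performance is poor
# (exploration) and lowered when the goal is met (stabilization).
n = 4                              # one unit per actuated joint (assumed)
x = np.random.rand(n)              # oscillator states in (0, 1)
a = 3.6                            # bifurcation parameter (chaotic region)
eps = 0.1                          # diffusive coupling ("embodiment")

def step(x, a):
    local = a * x * (1.0 - x)                    # logistic-map dynamics
    return (1.0 - eps) * local + eps * np.roll(local, 1)

def adapt(a, performance, target=0.9, rate=0.01):
    # Below-target performance raises `a` (more chaos); success lowers it.
    return np.clip(a + rate * (target - performance), 2.9, 4.0)

for t in range(10000):
    x = step(x, a)
    performance = 1.0 - np.std(x)  # stand-in goal: reward phase coordination
    a = adapt(a, performance)
```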