
    Learning Feedback Terms for Reactive Planning and Control

    With the advancement of robotics, machine learning, and machine perception, increasingly more robots will enter human environments to assist with daily tasks. However, dynamically changing human environments require reactive motion plans. Reactivity can be accomplished through replanning, e.g. model-predictive control, or through a reactive feedback policy that modifies ongoing behavior in response to sensory events. In this paper, we investigate how to use machine learning to add reactivity to a previously learned nominal skilled behavior. We approach this by learning a reactive modification term for movement plans represented by nonlinear differential equations. In particular, we use dynamic movement primitives (DMPs) to represent a skill and a neural network to learn a reactive policy from human demonstrations. We use the well-explored domain of obstacle avoidance for robot manipulation as a test bed. Our approach demonstrates how a neural network can be combined with physical insights to ensure robust behavior across different obstacle settings and movement durations. Evaluations on an anthropomorphic robotic system demonstrate the effectiveness of our work. Comment: 8 pages, accepted for publication at the ICRA 2017 conference.
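The core idea — a DMP whose transformation system is augmented by a learned coupling term — can be sketched in a few lines. This is a generic single-DOF DMP with an additive feedback term, not the authors' implementation; the `avoid` function below is a hypothetical stand-in for the trained neural-network policy, and all gains are illustrative.

```python
import numpy as np

def dmp_rollout(y0, goal, forcing, coupling=None, tau=1.0, dt=0.01,
                alpha=25.0, beta=6.25, alpha_x=8.0):
    """Integrate a single-DOF dynamic movement primitive (DMP).

    Transformation system: tau*dv = alpha*(beta*(g - y) - v) + f(x) + C,
    where f is the learned forcing term and C is an optional reactive
    coupling (feedback) term, e.g. the output of a neural network.
    """
    n_steps = int(1.0 / dt)
    y, v, x = y0, 0.0, 1.0
    traj = np.empty(n_steps)
    for t in range(n_steps):
        c = 0.0 if coupling is None else coupling(y, v, t)
        dv = alpha * (beta * (goal - y) - v) + forcing(x) + c
        v += dv * dt / tau
        y += v * dt / tau
        x += -alpha_x * x * dt / tau   # canonical system: phase decays from 1 toward 0
        traj[t] = y
    return traj

# nominal skill: with a zero forcing term the DMP converges to the goal
plain = dmp_rollout(y0=0.0, goal=1.0, forcing=lambda x: 0.0)

# reactive modification: a hand-made stand-in "policy" that pushes away
# from a point at y = 0.5 (a trained network would produce this term)
def avoid(y, v, t):
    return 20.0 * np.exp(-50.0 * (y - 0.5) ** 2)

reactive = dmp_rollout(y0=0.0, goal=1.0, forcing=lambda x: 0.0, coupling=avoid)
```

Because the coupling enters the same differential equation as the nominal plan, the modified trajectory deviates near the obstacle yet still converges to the goal — the property the paper exploits to keep behavior robust across obstacle settings.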

    Towards modeling complex robot training tasks through system identification

    Previous research has shown that sensor-motor tasks in mobile robotics applications can be modelled automatically, using NARMAX system identification, where the sensory perception of the robot is mapped to the desired motor commands using non-linear polynomial functions, resulting in a tight coupling between sensing and acting: the robot responds directly to the sensor stimuli without having internal states or memory. However, competences such as sequences of actions, where actions depend on each other, require memory and thus a representation of state. In these cases a simple direct link between sensory perception and motor commands may not be enough to accomplish the desired tasks. The contribution to knowledge of this paper is to show how fundamental, simple NARMAX models of behaviour can be used in a bootstrapping process to generate complex behaviours that were previously beyond reach. We argue that as the complexity of the task increases, it is important to estimate the current state of the robot and integrate this information into the system identification process. To achieve this, we propose a novel method which relates distinctive locations in the environment to the state of the robot, using an unsupervised clustering algorithm. Once we estimate the current state of the robot accurately, we combine the state information with the perception of the robot through a bootstrapping method to generate more complex robot tasks: we obtain a polynomial model which expresses the complex task as a function of predefined low-level sensor-motor controllers and raw sensory data. The proposed method has been used to teach Scitos G5 mobile robots a number of complex tasks, such as advanced obstacle avoidance and complex route learning.
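A minimal sketch of the NARMAX idea — fitting a polynomial in lagged inputs and outputs by least squares — is below. This omits the structure-selection step (e.g. orthogonal forward regression) used in real NARMAX identification, and the "sensor-motor" data is synthetic; lag counts and the polynomial degree are illustrative choices.

```python
import numpy as np

def narmax_features(u, y, nu=2, ny=2, degree=2):
    """Build a polynomial regressor matrix from lagged inputs u and outputs y:
    a constant, the lagged terms, and (for degree >= 2) their pairwise products.
    Returns the feature matrix and the aligned target vector."""
    n = len(u)
    lag = max(nu, ny)
    cols = [np.ones(n - lag)]
    lin = []
    for k in range(1, nu + 1):
        lin.append(u[lag - k:n - k])    # u[t-k]
    for k in range(1, ny + 1):
        lin.append(y[lag - k:n - k])    # y[t-k]
    cols += lin
    if degree >= 2:
        for i in range(len(lin)):
            for j in range(i, len(lin)):
                cols.append(lin[i] * lin[j])
    return np.column_stack(cols), y[lag:]

# toy "sensor-motor" data: the motor command is a fixed polynomial of the
# previous sensor reading and the previous command
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)            # sensor stimulus
y = np.zeros(500)                      # motor command
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * u[t - 1] - 0.3 * u[t - 1] ** 2

X, target = narmax_features(u, y)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ theta
```

Because the true relation lies inside the polynomial feature set, the least-squares fit recovers it almost exactly; the paper's bootstrapping step would then reuse such fitted low-level models as inputs to a higher-level polynomial.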

    Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

    In order to robustly execute a task under environmental uncertainty, a robot needs to be able to reactively adapt to changes arising in its environment. Environment changes are usually reflected as deviations from expected sensory traces. These deviations can be used to drive motion adaptation, and for this purpose a feedback model is required. The feedback model maps the deviations in sensory traces to the motion plan adaptation. In this paper, we develop a general data-driven framework for learning a feedback model from demonstrations. We utilize a variant of a radial basis function network structure -- with movement phases as kernel centers -- which can generally be applied to represent any feedback model for movement primitives. To demonstrate the effectiveness of our framework, we test it on the task of scraping on a tilt board. In this task, we learn a reactive policy in the form of orientation adaptation, based on deviations of tactile sensor traces. As a proof of concept of our method, we provide evaluations on an anthropomorphic robot. A video demonstrating our approach and its results can be seen at https://youtu.be/7Dx5imy1Kcw Comment: 8 pages, accepted for publication at the International Conference on Robotics and Automation (ICRA) 2018.
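The phase-modulated basis can be sketched as normalized Gaussian kernels placed at fixed movement phases, with per-kernel weights fitted to demonstrated adaptations. This is a minimal stand-in, not the paper's network: the "tactile deviation" and target adaptation below are synthetic, and the kernel count and width are arbitrary.

```python
import numpy as np

def phase_rbf(phase, centers, width):
    """Gaussian kernels at fixed movement phases, normalized so the
    activations sum to one at every phase (a standard DMP-style basis)."""
    psi = np.exp(-width * (phase[:, None] - centers[None, :]) ** 2)
    return psi / psi.sum(axis=1, keepdims=True)

# assumed setup: phase runs 0 -> 1 over the movement; the feedback model
# maps (phase, sensory deviation) to a motion adaptation signal
centers = np.linspace(0.0, 1.0, 10)
phase = np.linspace(0.0, 1.0, 200)
Psi = phase_rbf(phase, centers, width=100.0)

# synthetic demonstration: the target adaptation is a phase-dependent
# gain applied to the sensor-trace deviation
deviation = np.sin(4 * np.pi * phase)          # stand-in tactile deviation
target = (0.5 + phase) * deviation             # "demonstrated" adaptation

# learn per-kernel weights by least squares on phase-gated features
X = Psi * deviation[:, None]
w, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ w
```

Anchoring the kernels to phase rather than time is what lets the same learned mapping apply across movements of different durations.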

    Understanding the agility of running birds: Sensorimotor and mechanical factors in avian bipedal locomotion

    Birds are a diverse and agile lineage of vertebrates that all use bipedal locomotion for at least part of their life. Thus, birds provide a valuable opportunity to investigate how biomechanics and sensorimotor control are integrated for agile bipedal locomotion. This review summarizes recent work using terrain perturbations to reveal neuromechanical control strategies used by ground birds to achieve robust, stable, and agile running. Early experiments in running guinea fowl aimed to reveal the immediate intrinsic mechanical response to an unexpected drop ('pothole') in terrain. When navigating the pothole, guinea fowl experience large changes in leg posture in the perturbed step, which correlates strongly with leg loading and perturbation recovery. Analysis of simple theoretical models of running has further confirmed the crucial role of swing-leg trajectory control for regulating foot contact timing and leg loading in uneven terrain. Coupling between body and leg dynamics results in an inherent trade-off in swing-leg retraction rate for fall avoidance versus injury avoidance. Fast leg retraction minimizes injury risk, but slow leg retraction minimizes fall risk. Subsequent experiments have investigated how birds optimize their control strategies depending on the type of perturbation (pothole, step, obstacle), visibility of terrain, and with ample practice negotiating terrain features.
Birds use several control strategies consistently across terrain contexts: 1) independent control of leg angular cycling and leg length actuation, which facilitates dynamic stability through simple control mechanisms, 2) feedforward regulation of leg cycling rate, which tunes foot-contact timing to maintain consistent leg loading in uneven terrain (minimizing fall and injury risks), 3) load-dependent muscle actuation, which rapidly adjusts stance push-off and stabilizes body mechanical energy, and 4) multi-step recovery strategies that allow body dynamics to transiently vary while tightly regulating leg loading to minimize risks of fall and injury. In future work, it will be interesting to investigate the learning and adaptation processes that allow animals to adjust neuromechanical control mechanisms over short and long timescales.

    Physically Embedded Genetic Algorithm Learning in Multi-Robot Scenarios: The PEGA algorithm

    We present experiments in which a group of autonomous mobile robots learn to perform fundamental sensor-motor tasks through a collaborative learning process. Behavioural strategies, i.e. motor responses to sensory stimuli, are encoded by means of genetic strings stored on the individual robots, and adapted through a genetic algorithm (Mitchell, 1998) executed by the entire robot collective: robots communicate their own strings and corresponding fitness to each other, and then execute a genetic algorithm to improve their individual behavioural strategy. The robots acquired three different sensor-motor competences, as well as the ability to select one of two, or one of three, behaviours depending on context ("behaviour management"). Results show that fitness indeed increases with increasing learning time, and the analysis of the acquired behavioural strategies demonstrates that they are effective in accomplishing the desired task.
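The collective learning loop described above — each robot broadcasts its genome and fitness, then rebuilds its own genome from fitness-proportionally selected peers via crossover and mutation — can be sketched as follows. This is a generic distributed GA, not the PEGA implementation; the bit-counting fitness is a hypothetical stand-in for measured task performance, and the population size, genome length, and mutation rate are arbitrary.

```python
import random

def pega_step(genomes, fitness, rng):
    """One generation of a PEGA-style distributed GA: every robot shares
    its genome and fitness, then replaces its genome by recombining two
    fitness-proportionally selected genomes, with per-bit mutation."""
    scores = [fitness(g) for g in genomes]
    total = sum(scores) or 1
    def select():
        # roulette-wheel (fitness-proportional) selection
        r = rng.random() * total
        acc = 0
        for g, s in zip(genomes, scores):
            acc += s
            if acc >= r:
                return g
        return genomes[-1]
    new = []
    for _ in genomes:
        a, b = select(), select()
        cut = rng.randrange(1, len(a))
        child = a[:cut] + b[cut:]                               # one-point crossover
        child = [bit ^ (rng.random() < 0.02) for bit in child]  # bit-flip mutation
        new.append(child)
    return new

# stand-in fitness: count of 1-bits; a real robot would instead measure
# task performance, e.g. time spent moving without collisions
fitness = lambda g: sum(g)
rng = random.Random(1)
genomes = [[rng.randrange(2) for _ in range(20)] for _ in range(6)]
for _ in range(40):
    genomes = pega_step(genomes, fitness, rng)
best = max(map(fitness, genomes))
```

The key "physically embedded" property is that fitness evaluation happens on the real robot; only the selection/crossover/mutation bookkeeping shown here runs as computation.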

    Fuzzy logic applications to expert systems and control

    A considerable amount of work on the development of fuzzy logic algorithms and their application to space-related control problems has been done at the Johnson Space Center (JSC) over the past few years. In particular, guidance control systems for space vehicles during proximity operations, learning systems utilizing neural networks, control of data processing during rendezvous navigation, collision avoidance algorithms, camera tracking controllers, and tether controllers have been developed utilizing fuzzy logic technology. Several other areas in which fuzzy sets and related concepts are being considered at JSC are diagnostic systems, control of robot arms, pattern recognition, and image processing. It has become evident, based on the commercial applications of fuzzy technology in Japan and China during the last few years, that this technology should be exploited by the government as well as private industry for energy savings.
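As a flavor of what such a controller looks like, here is a minimal Mamdani-style fuzzy rule evaluation for a single camera-tracking axis. This is an illustrative sketch, not any JSC controller: the three rules, the triangular membership functions, and the error scaling are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_track(error):
    """Tiny fuzzy controller for one camera-tracking axis: maps a
    normalized pixel error in [-1, 1] to a pan command.
    Rules: error negative -> pan left (-1), zero -> hold (0),
           positive -> pan right (+1)."""
    # fuzzify: degree to which the error is negative / zero / positive
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # defuzzify: weighted average of the rule consequents
    num = mu_neg * (-1.0) + mu_zero * 0.0 + mu_pos * 1.0
    den = mu_neg + mu_zero + mu_pos
    return num / den if den else 0.0
```

Because adjacent membership functions overlap, the output interpolates smoothly between the rules (e.g. an error of 0.5 yields a half-speed pan), which is the practical appeal of fuzzy control over hard thresholds.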