2,912 research outputs found

    Online Learning of a Memory for Learning Rates

    The promise of learning to learn for robotics rests on the hope that, by extracting information about the learning process itself, we can speed up subsequent similar learning tasks. Here we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning-rate landscape from previously observed gradient behaviors. While performing task-specific optimization, this memory of learning rates predicts how to scale the currently observed gradients. After applying the gradient scaling, our meta-learner updates its internal memory based on the observed effect of its prediction. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly, and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, in both batch and online learning settings.
    Comment: accepted to ICRA 2018; code available: https://github.com/fmeier/online-meta-learning ; video pitch available: https://youtu.be/9PzQ25FPPO
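The scale-then-meta-update loop described above can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in for the paper's memory model, not the authors' implementation: here the "memory" is just a per-parameter scale adapted from the sign agreement of consecutive gradients, and all class names and constants are illustrative.

```python
import numpy as np

class LearningRateMemory:
    """Toy per-parameter learning-rate memory (illustrative, not the
    paper's algorithm): scales gradients, then meta-updates the scale
    based on the observed effect of its own prediction."""

    def __init__(self, dim, base_lr=0.1, meta_lr=0.05):
        self.scale = np.ones(dim)      # learned per-parameter scaling
        self.prev_grad = np.zeros(dim)
        self.base_lr = base_lr
        self.meta_lr = meta_lr

    def step(self, params, grad):
        # Predict the update by scaling the currently observed gradient.
        new_params = params - self.base_lr * self.scale * grad
        # Meta-update: if consecutive gradients agree in sign, the step
        # was conservative, so grow the scale; on disagreement, shrink.
        agreement = np.sign(grad) * np.sign(self.prev_grad)
        self.scale *= np.exp(self.meta_lr * agreement)
        self.prev_grad = grad
        return new_params

# Usage on a toy quadratic f(x) = 0.5 * x.T @ x, whose gradient is x.
opt = LearningRateMemory(dim=2)
x = np.array([5.0, -3.0])
for _ in range(50):
    x = opt.step(x, x)
```

Because the quadratic's gradient signs persist, the scale grows and convergence accelerates relative to a fixed learning rate.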

    Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

    In order to robustly execute a task under environmental uncertainty, a robot needs to be able to reactively adapt to changes arising in its environment. Environment changes are usually reflected in deviations from expected sensory traces, and these deviations can be used to drive motion adaptation; for this purpose, a feedback model is required. The feedback model maps deviations in sensory traces to adaptations of the motion plan. In this paper, we develop a general data-driven framework for learning a feedback model from demonstrations. We utilize a variant of a radial basis function network structure --with movement phases as kernel centers-- which can generally be applied to represent any feedback model for movement primitives. To demonstrate the effectiveness of our framework, we test it on the task of scraping on a tilt board. In this task, we learn a reactive policy in the form of orientation adaptation, based on deviations of tactile sensor traces. As a proof of concept of our method, we provide evaluations on an anthropomorphic robot. A video demonstrating our approach and its results can be seen at https://youtu.be/7Dx5imy1Kcw
    Comment: 8 pages, accepted to be published at the International Conference on Robotics and Automation (ICRA) 201
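A minimal sketch of the phase-modulated RBF idea, under the assumption of normalized Gaussian kernels centered on movement phase and a per-kernel linear map from sensor deviations to motion adaptations; the names, shapes, and kernel width below are illustrative, not the paper's architecture.

```python
import numpy as np

def phase_rbf_features(phase, n_kernels=10, width=50.0):
    """Normalized Gaussian kernels whose centers are spread over the
    movement phase in [0, 1] (assumed parameterization)."""
    centers = np.linspace(0.0, 1.0, n_kernels)
    psi = np.exp(-width * (phase - centers) ** 2)
    return psi / psi.sum()  # features sum to 1

def feedback_adaptation(phase, sensor_deviation, weights):
    """Map a sensory-trace deviation to a motion adaptation, with the
    linear mapping gated by the current movement phase.
    weights: (n_kernels, n_sensors, n_outputs), learned from demos."""
    psi = phase_rbf_features(phase, n_kernels=weights.shape[0])
    # Phase-modulated linear map: blend the per-kernel weight matrices.
    W = np.tensordot(psi, weights, axes=1)   # -> (n_sensors, n_outputs)
    return sensor_deviation @ W

# Usage with toy "learned" weights: 3 sensor channels -> 2 adaptations.
rng = np.random.default_rng(0)
weights = rng.normal(size=(10, 3, 2))
adapt = feedback_adaptation(0.5, np.array([0.1, -0.2, 0.05]), weights)
```

Gating by phase lets the same sensory deviation trigger different corrections early versus late in the movement.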

    A New Data Source for Inverse Dynamics Learning

    Modern robotics is gravitating toward increasingly collaborative human-robot interaction. Tools such as acceleration policies can naturally support the realization of reactive, adaptive, and compliant robots. These tools require us to model the system dynamics accurately -- a difficult task. The fundamental problem remains that simulation and reality diverge -- we do not know how to accurately change a robot's state. Thus, recent research on improving inverse dynamics models has focused on making use of machine learning techniques. Traditional learning techniques train on the actually realized accelerations instead of the policy's desired accelerations, which is an indirect data source. Here we show how an additional training signal -- measured at the desired accelerations -- can be derived from a feedback control signal. This effectively creates a second data source for learning inverse dynamics models. Furthermore, we show how both the traditional and this new data source can be used to train task-specific models of the inverse dynamics, whether used independently or combined. We analyze the use of both data sources in simulation and demonstrate their effectiveness on a real-world robotic platform. We show that our system incrementally improves the learned inverse dynamics model, and that when both data sources are combined it converges faster and more consistently.
    Comment: IROS 201
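The two data sources can be sketched as follows. This is a simplified reading of the idea, with all names illustrative: while tracking, the executed command (feedforward plus feedback) is taken as the training target, paired either with the realized acceleration (traditional source) or with the desired acceleration (the new, feedback-derived source).

```python
import numpy as np

def inverse_dynamics_samples(q, qd, qdd_desired, qdd_realized, u_ff, u_fb):
    """Two training pairs for an inverse-dynamics model tau = f(q, qd, qdd).
    Traditional source: the realized acceleration was produced by the full
    applied command. New source: under feedback tracking, the same command
    is (approximately) what the desired acceleration would have required,
    since the feedback term corrects the feedforward model's error."""
    u_total = u_ff + u_fb
    traditional = (np.concatenate([q, qd, qdd_realized]), u_total)
    feedback    = (np.concatenate([q, qd, qdd_desired]), u_total)
    return traditional, feedback

# Usage on a toy 2-DoF state (all values illustrative).
q, qd = np.zeros(2), np.zeros(2)
trad, fb = inverse_dynamics_samples(
    q, qd,
    qdd_desired=np.ones(2),
    qdd_realized=0.9 * np.ones(2),
    u_ff=np.array([1.0, 2.0]),
    u_fb=np.array([0.1, -0.1]),
)
```

Both pairs share the same target command; they differ only in which acceleration is used as the model input, which is what makes the second source "direct" with respect to the desired accelerations.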

    Learning Feedback Terms for Reactive Planning and Control

    With the advancement of robotics, machine learning, and machine perception, increasingly more robots will enter human environments to assist with daily tasks. However, dynamically changing human environments require reactive motion plans. Reactivity can be accomplished through replanning, e.g. model-predictive control, or through a reactive feedback policy that modifies ongoing behavior in response to sensory events. In this paper, we investigate how to use machine learning to add reactivity to a previously learned nominal skilled behavior. We approach this by learning a reactive modification term for movement plans represented by nonlinear differential equations. In particular, we use dynamic movement primitives (DMPs) to represent a skill and a neural network to learn a reactive policy from human demonstrations. We use the well-explored domain of obstacle avoidance for robot manipulation as a test bed. Our approach demonstrates how a neural network can be combined with physical insights to ensure robust behavior across different obstacle settings and movement durations. Evaluations on an anthropomorphic robotic system demonstrate the effectiveness of our work.
    Comment: 8 pages, accepted to be published at ICRA 2017 conference
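In a DMP, a reactive modification term enters the transformation system additively, alongside the goal-attraction dynamics. A minimal sketch (forcing term omitted for brevity; the gains and step size are illustrative, and `coupling` stands in for the learned network's output):

```python
import numpy as np

def dmp_step(y, yd, goal, coupling, dt=0.01, alpha=25.0, beta=6.25):
    """One Euler step of a DMP transformation system with an additive
    reactive coupling term (e.g. a learned neural network's output).
    With alpha = 4 * beta the unperturbed system is critically damped."""
    ydd = alpha * (beta * (goal - y) - yd) + coupling
    yd = yd + ydd * dt
    y = y + yd * dt
    return y, yd

# With zero coupling the system converges to the goal; a nonzero
# coupling (e.g. from obstacle-dependent features) deflects the motion.
y, yd = 0.0, 0.0
for _ in range(2000):
    y, yd = dmp_step(y, yd, goal=1.0, coupling=0.0)
```

Because the coupling term only perturbs accelerations, the goal-convergence guarantee of the underlying point attractor is preserved whenever the learned term vanishes near the goal.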

    Statutory Retirement Age and Lifelong Learning

    The employability of an aging population in a world of continuous technical change is at the top of the political agenda. Due to endogenous human capital depreciation, the effective retirement age often lies below the statutory retirement age, resulting in unemployment among older workers. We analyze this phenomenon in a putty-putty human capital vintage model and focus on education and the speed of human capital depreciation. Introducing a two-stage education system with initial schooling and lifelong learning, we find that even lifelong learning is incapable of fully aligning economic and statutory retirement. However, lifelong learning can increase the number of people reaching the statutory retirement age and hence reduce the problem of old-age unemployment in an aging society.
    Keywords: lifelong learning, retirement, unemployment, education system

    Which hedge fund indices suit best for investors?

    In contrast to most traditional assets, alternatives, especially hedge funds, do not have a distinct universe. This complicates proper performance measurement, since most benchmarks suffer from statistical biases, deceiving investors about the "true" return an average hedge fund would have achieved. We investigate these influences, present an index for Swiss-registered hedge funds which aims to avoid common biases, and conclude that the systematic underperformance of funds of hedge funds compared to single hedge funds is mostly a result of bias mitigation. This indicates that the returns of fund-of-hedge-funds indices are the most accurate for benchmarking both single hedge funds and funds of hedge funds.