134 research outputs found

    Machine learning for helicopter dynamics models

    Get PDF
    Abstract (M.S. thesis, Ali Punjani, University of California, Berkeley; Professor Pieter Abbeel, Chair): We consider the problem of system identification of helicopter dynamics. Helicopters are complex systems, coupling rigid body dynamics with aerodynamics, engine dynamics, vibration, and other phenomena. As a result, they pose a challenging system identification problem, especially when considering non-stationary flight regimes. We pose the dynamics modeling problem as direct high-dimensional regression, and take inspiration from recent results in Deep Learning to represent the helicopter dynamics with a Rectified Linear Unit (ReLU) Network Model, a hierarchical neural network model. We provide a simple method for initializing the parameters of the model, and optimization details for training. We describe three baseline models and show that they are significantly outperformed by the ReLU Network Model in experiments on real data, indicating the power of the model to capture useful structure in system dynamics across a rich array of aerobatic maneuvers. Specifically, the ReLU Network Model improves RMS acceleration prediction by 58% overall over state-of-the-art methods. Predicting acceleration along the helicopter's up-down axis is empirically found to be the most difficult, and here the ReLU Network Model improves on the prior state-of-the-art by 60%. We discuss explanations for these performance gains, and investigate the impact of hyperparameters in the novel model.
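
    A minimal sketch of the kind of model the abstract describes: accelerations regressed directly from a window of recent states and controls by a ReLU network. All dimensions, layer sizes, and training details below are illustrative assumptions, not the thesis's actual configuration.

```python
import torch
import torch.nn as nn

# Assumed sizes: history window length, state and control dimensions.
H, STATE_DIM, CTRL_DIM = 10, 9, 4
IN_DIM = H * (STATE_DIM + CTRL_DIM)   # flattened history of states and controls
OUT_DIM = 6                           # predicted linear + angular accelerations

# Hierarchical ReLU network mapping the history window to accelerations.
model = nn.Sequential(
    nn.Linear(IN_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, OUT_DIM),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                # squared error matches the reported RMS metric

def train_step(x, y):
    """x: (batch, IN_DIM) history windows; y: (batch, OUT_DIM) measured accelerations."""
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test on synthetic data.
print(train_step(torch.randn(32, IN_DIM), torch.randn(32, OUT_DIM)))
```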

    An Intelligent Autopilot System that learns piloting skills from human pilots by imitation

    Get PDF
    An Intelligent Autopilot System (IAS) that can learn piloting skills by observing and imitating expert human pilots is proposed. The IAS addresses two limitations of current Automatic Flight Control Systems: their inability to handle flight uncertainties, and the need to construct control models manually. A robust Learning by Imitation approach is proposed in which human pilots demonstrate the task to be learned in a flight simulator while training datasets are captured from these demonstrations. The datasets are then used by Artificial Neural Networks to generate control models automatically. The control models imitate the skills of the human pilot when performing piloting tasks, including handling flight uncertainties such as severe weather conditions. Experiments show that the IAS performs learned take-off, climb, and slow ascent tasks with high accuracy even after being presented with limited examples, as measured by Mean Absolute Error and Mean Absolute Deviation. The results demonstrate that the IAS is capable of imitating low-level sub-cognitive skills, such as rapid and continuous stabilization attempts in stormy weather conditions, and high-level strategic skills, such as the sequence of sub-tasks necessary to pilot an aircraft starting from a stationary position on the runway and ending with a steady cruise.
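
    The core learning step here is behavior cloning: fit a network to (state, pilot control) pairs logged in the simulator. The sketch below illustrates that step only; the feature set, control outputs, and network size are assumptions, not the IAS's actual variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Assumed demonstration log: per-timestep aircraft state from the simulator
# [pitch, roll, yaw, airspeed, altitude] and pilot controls
# [elevator, aileron, rudder, throttle]. Random data stands in for real logs.
states = rng.standard_normal((500, 5))
controls = rng.standard_normal((500, 4))

# Small feed-forward network fit to imitate the pilot's control outputs.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(states, controls)

# Mean Absolute Error, one of the accuracy measures cited in the abstract.
print(mean_absolute_error(controls, model.predict(states)))
```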

    Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation

    Full text link
    Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of observation-action tuples that could be provided, for instance, to a supervised learning algorithm. This stands in contrast to how humans and animals imitate: we observe another person performing some behavior and then figure out which actions will realize that behavior, compensating for changes in viewpoint, surroundings, object positions and types, and other factors. We term this kind of imitation learning "imitation-from-observation," and propose an imitation learning method based on video prediction with context translation and deep reinforcement learning. This lifts the assumption in imitation learning that the demonstration should consist of observations in the same environment configuration, and enables a variety of interesting applications, including learning robotic skills that involve tool use simply by observing videos of human tool use. Our experimental results show the effectiveness of our approach in learning a wide range of real-world robotic tasks modeled after common household chores from videos of a human demonstrator, including sweeping, ladling almonds, and pushing objects, as well as a number of tasks in simulation. Comment: Accepted at ICRA 2018, Brisbane. YuXuan Liu and Abhishek Gupta contributed equally.
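
    One way to read the method: a translation model re-renders a demonstration frame into the imitator's context, and the RL reward penalizes deviation from the translated demonstration. The sketch below is a minimal rendering of that idea; the architecture, feature dimension, and reward form are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class ContextTranslator(nn.Module):
    """Translates a demo frame into the agent's context (toy architecture)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.enc_demo = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                      nn.Flatten(), nn.LazyLinear(feat_dim))
        self.enc_ctx = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                     nn.Flatten(), nn.LazyLinear(feat_dim))
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, demo_frame, agent_first_frame):
        z = torch.cat([self.enc_demo(demo_frame),
                       self.enc_ctx(agent_first_frame)], dim=-1)
        # Feature of the demo as it would appear in the agent's context.
        return self.fuse(z)

def imitation_reward(translated_feat, agent_feat):
    # RL reward: track the translated demonstration in feature space.
    return -torch.norm(translated_feat - agent_feat, dim=-1)

# Smoke test with random 64x64 frames standing in for video.
translator = ContextTranslator()
feat = translator(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(imitation_reward(feat, torch.randn_like(feat)))
```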

    Adaptive fast open-loop maneuvers for quadrocopters

    Get PDF
    We present a conceptually and computationally lightweight method for the design and iterative learning of fast maneuvers for quadrocopters. We use first-principles, reduced-order models, and we neither require nor attempt to follow a specific state trajectory: only the initial and final states of the vehicle are taken into account. We evaluate the adaptation scheme through experiments on quadrocopters in the ETH Flying Machine Arena that perform multi-flips and other high-performance maneuvers.
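
    A toy version of the iterative correction the abstract implies, under heavy assumptions: maneuver parameters are nudged after each flight so the observed final state converges to the desired one, using a first-principles model of how parameters affect the final state. The dynamics and model Jacobian below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A_TRUE = np.array([[1.0, 0.3], [0.1, 0.8]])   # assumed true parameters-to-final-state map

def fly(params):
    """Stand-in for one real flight: returns the noisy final state reached."""
    return A_TRUE @ params + 0.01 * rng.standard_normal(2)

target = np.array([1.0, 0.0])                 # only the final state matters; no trajectory tracking
params = np.zeros(2)
J = np.array([[1.1, 0.25], [0.05, 0.9]])      # imperfect first-principles model of d(final)/d(params)

# Iterative learning: correct parameters using the final-state error after each flight.
for _ in range(20):
    error = target - fly(params)
    params += 0.5 * np.linalg.pinv(J) @ error
print(params, fly(params))
```

    Even with a deliberately imperfect model Jacobian, the correction converges, which is the appeal of this kind of scheme: the reduced-order model only needs to point the update in roughly the right direction.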

    Learning Unmanned Aerial Vehicle Control for Autonomous Target Following

    Full text link
    While deep reinforcement learning (RL) methods have achieved unprecedented successes in a range of challenging problems, their applicability has been mainly limited to simulation or game domains due to the high sample complexity of the trial-and-error learning process. However, real-world robotic applications often need a data-efficient learning process with safety-critical constraints. In this paper, we consider the challenging problem of learning unmanned aerial vehicle (UAV) control for tracking a moving target. To acquire a strategy that combines perception and control, we represent the policy by a convolutional neural network. We develop a hierarchical approach that combines a model-free policy gradient method with a conventional feedback proportional-integral-derivative (PID) controller to enable stable learning without catastrophic failure. The neural network is trained by a combination of supervised learning from raw images and reinforcement learning from games of self-play. We show that the proposed approach can learn a target following policy in a simulator efficiently, and that the learned behavior can be successfully transferred to the DJI quadrotor platform for real-world UAV control.
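
    A compact sketch of the hierarchy as described: a convolutional policy proposes a velocity setpoint from the camera image, and a conventional PID loop tracks that setpoint, so learning errors cannot directly destabilize the vehicle. Network shape, gains, and the setpoint interface are all assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Perception-to-setpoint network: image in, desired (vx, vy) toward the target out.
policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(2),
)

class PID:
    """Conventional inner-loop controller tracking the learned setpoint."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev) / self.dt
        self.prev = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One control tick: CNN proposes a setpoint, PID turns it into a low-level command.
pid_x = PID()
image = torch.randn(1, 3, 84, 84)               # stand-in camera frame
vx_sp, vy_sp = policy(image).squeeze(0).tolist()
print(pid_x.step(vx_sp, measured=0.0))
```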

    Learning to drive via Apprenticeship Learning and Deep Reinforcement Learning

    Full text link
    With the implementation of reinforcement learning (RL) algorithms, current state-of-the-art autonomous vehicle technology has the potential to get closer to full automation. However, most applications have been limited to game domains or discrete action spaces, which are far from real-world driving. Moreover, it is difficult to tune the parameters of the reward mechanism, since driving styles vary widely among users. For instance, an aggressive driver may prefer driving with high acceleration, whereas more conservative drivers prefer a safer driving style. Therefore, we propose an apprenticeship learning approach in combination with deep reinforcement learning that allows the agent to learn driving and stopping behaviors with continuous actions. We use the gradient inverse reinforcement learning (GIRL) algorithm to recover the unknown reward function, and employ REINFORCE as well as the Deep Deterministic Policy Gradient (DDPG) algorithm to learn the optimal policy. The performance of our method is evaluated in a simulation-based scenario, and the results demonstrate that the agent exhibits human-like driving, and even better in some aspects, after training. Comment: 7 pages, 11 figures, conference.
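
    To make the pipeline concrete, here is a self-contained toy in the same spirit: a reward linear in hand-picked driving features (standing in for the IRL-recovered reward, with weights fixed by hand) and a REINFORCE update on a Gaussian policy over a continuous acceleration action. Everything here, from the features to the one-dimensional "car," is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, 0.5])                         # "recovered" reward weights (assumed by hand)
theta = np.zeros(2)                              # policy: mean accel = theta @ [speed_err, 1]

def reward(speed_err, accel):
    # Linear reward on assumed features: track the target speed, avoid harsh acceleration.
    return w @ np.array([-abs(speed_err), -accel**2])

for episode in range(200):
    speed, target = 0.0, 10.0
    grads, rewards = [], []
    for t in range(50):
        x = np.array([target - speed, 1.0])
        mean = theta @ x
        accel = mean + rng.normal(0.0, 1.0)      # sample a continuous action
        grads.append((accel - mean) * x)         # grad of log N(accel; mean, 1) w.r.t. theta
        rewards.append(reward(target - speed, accel))
        speed += 0.1 * accel                     # toy longitudinal dynamics
    G = np.cumsum(rewards[::-1])[::-1]           # returns-to-go
    theta += 1e-3 * sum(g * r for g, r in zip(grads, G))  # REINFORCE update
print(theta)
```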