Learning Unmanned Aerial Vehicle Control for Autonomous Target Following
While deep reinforcement learning (RL) methods have achieved unprecedented
successes in a range of challenging problems, their applicability has been
mainly limited to simulation or game domains due to the high sample complexity
of the trial-and-error learning process. However, real-world robotic
applications often need a data-efficient learning process with safety-critical
constraints. In this paper, we consider the challenging problem of learning
unmanned aerial vehicle (UAV) control for tracking a moving target. To acquire
a strategy that combines perception and control, we represent the policy by a
convolutional neural network. We develop a hierarchical approach that combines
a model-free policy gradient method with a conventional feedback
proportional-integral-derivative (PID) controller to enable stable learning
without catastrophic failure. The neural network is trained by a combination of
supervised learning from raw images and reinforcement learning from games of
self-play. We show that the proposed approach can efficiently learn a target-following policy in a simulator, and that the learned behavior can be successfully transferred to the DJI quadrotor platform for real-world UAV control.
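As a rough illustration of the hierarchical scheme described above (a learned perception-driven policy layered on top of a conventional PID loop), the following sketch shows one plausible way such a hybrid controller could be wired together; the class names, state layout, and interfaces are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PID:
    """Conventional PID feedback controller for a single control axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def hybrid_control_step(policy, image, uav_xy, pid_x, pid_y):
    """One control step: the learned policy (e.g. a CNN) maps the raw image to
    a desired target offset, and the PID loops turn the resulting tracking
    error into bounded low-level commands. `policy(image) -> (dx, dy)` is an
    assumed interface."""
    dx, dy = policy(image)                      # perception + high-level decision
    cmd_x = pid_x.step(dx - uav_xy[0])          # stabilising low-level feedback
    cmd_y = pid_y.step(dy - uav_xy[1])
    return np.clip([cmd_x, cmd_y], -1.0, 1.0)   # clipped commands for safety
```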
Batch Reinforcement Learning on the Industrial Benchmark: First Experiences
The Particle Swarm Optimization Policy (PSO-P) was recently introduced and shown to produce remarkable results on academic reinforcement learning benchmarks in an off-policy, batch-based setting. To further investigate its properties and its feasibility for real-world applications,
this paper investigates PSO-P on the so-called Industrial Benchmark (IB), a
novel reinforcement learning (RL) benchmark that aims at being realistic by
including a variety of aspects found in industrial applications, like
continuous state and action spaces, a high dimensional, partially observable
state space, delayed effects, and complex stochasticity. The experimental
results of PSO-P on IB are compared to results of closed-form control policies
derived from the model-based Recurrent Control Neural Network (RCNN) and the
model-free Neural Fitted Q-Iteration (NFQ). Experiments show that PSO-P is not
only of interest for academic benchmarks, but also for real-world industrial
applications, since it also yielded the best performing policy in our IB
setting. Compared to other well-established RL techniques, PSO-P produced outstanding results in performance and robustness, while requiring relatively little effort to find adequate parameters or make complex design decisions.
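For readers unfamiliar with PSO-P, the sketch below shows the general receding-horizon idea: optimise a whole action sequence against a learned system model with particle swarm optimisation and execute only the first action. The model and reward interfaces and all hyperparameters are placeholder assumptions rather than the paper's setup.

```python
import numpy as np

def pso_policy(model, reward_fn, state, horizon=20, n_particles=30,
               n_iters=50, w=0.7, c1=1.4, c2=1.4, a_low=-1.0, a_high=1.0):
    """PSO-P-style action selection sketch. `model(state, action) -> next_state`
    and `reward_fn(state, action) -> float` are assumed interfaces; each
    particle encodes a scalar-action sequence of length `horizon`."""
    def rollout_return(actions):
        s, total = state, 0.0
        for a in actions:
            total += reward_fn(s, a)
            s = model(s, a)
        return total

    pos = np.random.uniform(a_low, a_high, (n_particles, horizon))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([rollout_return(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()

    for _ in range(n_iters):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, a_low, a_high)
        vals = np.array([rollout_return(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()

    return gbest[0]  # execute only the first action, then re-plan
```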
Model-assisted Reinforcement Learning of a Quadrotor
In recent times, reinforcement learning has produced striking results on control tasks with highly non-linear systems. These impressive results often overshadow the potential vulnerabilities and uncertainties associated with the agents when they are deployed in the real world. While the performance is remarkable compared to classical control algorithms, reinforcement learning-based methods lack two properties that are vital for contemporary real-world applications: robustness and interpretability. The
paper attempts to alleviate such problems with reinforcement learning and
proposes the concept of model-assisted reinforcement learning to induce a
notion of conservativeness in the agents. The control task considered for the
experiment involves navigating a CrazyFlie quadrotor. The paper also describes
a way of reformulating the task to have the flexibility of tuning the level of
conservativeness via multi-objective reinforcement learning. The results
include a comparison of the vanilla reinforcement learning approaches and the
proposed approach. The metrics are evaluated by systematically injecting
disturbances to classify the inherent robustness and conservativeness of the
agents. More concrete arguments are made by computing and comparing the backward reachability tubes of the RL policies, obtained by solving the Hamilton-Jacobi-Bellman partial differential equation (HJ PDE).
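A minimal sketch of how the level of conservativeness could be exposed as a tunable quantity via a scalarised multi-objective reward, in the spirit of the abstract; the reward terms and the weighting scheme are assumptions for illustration, not the paper's exact formulation.

```python
def scalarised_reward(task_reward, safety_reward, alpha):
    """Scalarised multi-objective reward: alpha in [0, 1] tunes conservativeness.
    alpha = 0 recovers the vanilla task objective; larger alpha trades tracking
    performance for conservative behaviour (e.g. penalising proximity to unsafe
    states). Both reward terms are assumed to be supplied by the environment."""
    return (1.0 - alpha) * task_reward + alpha * safety_reward

# Hypothetical usage for a quadrotor waypoint task:
#   task_reward   = -distance_to_waypoint
#   safety_reward = -max(0.0, safety_margin - distance_to_obstacle)
#   r = scalarised_reward(task_reward, safety_reward, alpha=0.3)
```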
A Benchmark Environment Motivated by Industrial Control Problems
In the research area of reinforcement learning (RL), novel and promising methods are frequently developed and introduced to the RL community. However, although many researchers are keen to apply their methods to real-world problems, implementing such methods in real industry environments is often a frustrating and tedious process. Generally, academic research groups have only
limited access to real industrial data and applications. For this reason, new
methods are usually developed, evaluated and compared by using artificial
software benchmarks. On one hand, these benchmarks are designed to provide
interpretable RL training scenarios and detailed insight into the learning process of the method at hand. On the other hand, they usually do not share
much similarity with industrial real-world applications. For this reason we
used our industry experience to design a benchmark which bridges the gap
between freely available, documented, and motivated artificial benchmarks and
properties of real industrial problems. The resulting industrial benchmark (IB)
has been made publicly available to the RL community by publishing its Java and Python code, including an OpenAI Gym wrapper, on GitHub. In this paper we motivate and describe the IB's dynamics in detail and identify prototypical experimental settings that capture common situations in real-world industry control problems.
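Since the IB ships with an OpenAI Gym wrapper, interacting with it should follow the standard Gym loop sketched below; note that the import and the environment id used here are placeholders, and the actual registered name should be taken from the IB's GitHub repository.

```python
import gym  # classic Gym API (reset/step returning a 4-tuple) is assumed

# "IndustrialBenchmark-v0" is a placeholder id, not necessarily the registered name.
env = gym.make("IndustrialBenchmark-v0")

obs = env.reset()
episode_return = 0.0
for t in range(1000):
    action = env.action_space.sample()          # random policy, just to exercise the loop
    obs, reward, done, info = env.step(action)
    episode_return += reward
    if done:
        obs = env.reset()
print("episode return:", episode_return)
```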
The Sample Complexity of Teaching-by-Reinforcement on Q-Learning
We study the sample complexity of teaching, termed the "teaching dimension" (TDim) in the literature, for the teaching-by-reinforcement paradigm, where the
teacher guides the student through rewards. This is distinct from the
teaching-by-demonstration paradigm motivated by robotics applications, where
the teacher teaches by providing demonstrations of state/action trajectories.
The teaching-by-reinforcement paradigm applies to a wider range of real-world
settings where a demonstration is inconvenient, but has not been studied
systematically. In this paper, we focus on a specific family of reinforcement
learning algorithms, Q-learning, characterize the TDim under teachers with varying control power over the environment, and present matching optimal teaching algorithms. Our TDim results provide the minimum number of
samples needed for reinforcement learning, and we discuss their connections to
standard PAC-style RL sample complexity and teaching-by-demonstration sample
complexity results. Our teaching algorithms have the potential to speed up RL
agent learning in applications where a helpful teacher is available.
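To make the teaching-by-reinforcement setting concrete, here is a caricature of a teacher who controls the reward channel and nudges a tabular Q-learner toward a target policy; this only illustrates the paradigm, is not the paper's (optimal) teaching algorithm, and assumes a Gym-like discrete environment interface.

```python
import numpy as np

def teach_by_reinforcement(env, target_policy, n_states, n_actions,
                           episodes=200, alpha=0.5, gamma=0.9, bonus=1.0):
    """The teacher adds a reward bonus whenever the student takes the desired
    action in a state (and a penalty otherwise), steering a greedy tabular
    Q-learner toward `target_policy`."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = int(np.argmax(Q[s]))                       # greedy student
            s_next, env_reward, done, _ = env.step(a)
            teacher_reward = env_reward + (bonus if a == target_policy[s] else -bonus)
            target = teacher_reward + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```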
DoShiCo Challenge: Domain Shift in Control Prediction
Training deep neural network policies end-to-end for real-world applications so far requires large demonstration datasets collected in the real world, or large sets containing a wide variety of realistic and closely related 3D CAD models. These real or virtual data should, moreover, have characteristics very similar to the conditions expected at test time. These stringent requirements, and the time-consuming data collection processes they entail, are currently the
most important impediment that keeps deep reinforcement learning from being
deployed in real-world applications. Therefore, in this work we advocate an
alternative approach, where instead of avoiding any domain shift by carefully
selecting the training data, the goal is to learn a policy that can cope with
it. To this end, we propose the DoShiCo challenge: to train a model in very basic synthetic environments, far from realistic, in such a way that it can be applied in more realistic environments and can make control decisions on real-world data. In particular, we focus on the task of collision avoidance for
drones. We created a set of simulated environments that can be used as a benchmark, and implemented a baseline method that exploits depth prediction as an
auxiliary task to help overcome the domain shift. Even though the policy is
trained in very basic environments, it can learn to fly without collisions in a
very different, realistic simulated environment. Of course, several benchmarks for reinforcement learning already exist, but they never include a large domain shift. On the other hand, several benchmarks in computer vision focus on domain shift, but they take the form of static datasets instead of simulated environments. In this work we claim that it is crucial to take the two challenges together in one benchmark.

Comment: Published at SIMPAR 2018. Please visit the paper webpage for more information, a movie, and code for reproducing results: https://kkelchte.github.io/doshic
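The baseline's use of depth prediction as an auxiliary task can be pictured with the following PyTorch-style sketch: a shared encoder feeds both a control head and a coarse depth head, and the auxiliary depth loss regularises the representation. The architecture, sizes, and loss weights are illustrative assumptions, not the published baseline.

```python
import torch
import torch.nn as nn

class AuxDepthPolicy(nn.Module):
    """Illustrative policy with a depth-prediction auxiliary head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared perception trunk
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten())
        self.control_head = nn.Linear(64 * 8 * 8, 1)  # e.g. a yaw command
        self.depth_head = nn.Linear(64 * 8 * 8, 64)   # coarse 8x8 depth map

    def forward(self, rgb):
        z = self.encoder(rgb)
        return self.control_head(z), self.depth_head(z).view(-1, 1, 8, 8)

def loss_fn(pred_ctrl, pred_depth, true_ctrl, true_depth, aux_weight=0.5):
    # control imitation loss plus the auxiliary depth regression loss
    return (nn.functional.mse_loss(pred_ctrl, true_ctrl)
            + aux_weight * nn.functional.mse_loss(pred_depth, true_depth))
```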
Control and Analysis for Sequential Information based on Machine Learning
Sequential information is crucial for time-related real-world applications, where time series are described by sequence data that follow a temporal order at regular intervals. In this thesis, we consider four major tasks involving sequential information: sequential trend prediction, control strategy optimisation, visual-temporal interpolation, and visual-semantic sequential alignment. We develop machine learning theories and provide state-of-the-art models for various real-world applications that involve sequential processes, including the industrial batch process, sequential video inpainting, and sequential visual-semantic image captioning. The ultimate goal is to design a hybrid framework that can unify diverse sequential information analysis and control systems.
For industrial processes, control algorithms rely on simulations to find the optimal control strategy. However, few machine learning techniques can control the process directly from raw data, although some works use ML to predict trends. Most control methods rely on large amounts of previous experience and cannot exploit future information to optimise the control strategy. To improve the effectiveness of the industrial process, we propose improved reinforcement learning approaches that can modify the control strategy. We also propose a hybrid reinforcement virtual learning approach to optimise the long-term control strategy. This approach creates a virtual space that interacts with reinforcement learning to predict a virtual strategy without conducting any real experiments, thereby improving and optimising control efficiency.
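To illustrate the virtual-space idea described above, the sketch below fits a simple "virtual space" (here just a linear dynamics model) from logged batch data and evaluates candidate control strategies entirely inside it, so no real experiments are needed; the linear model and the interfaces are assumptions for illustration, not the thesis's actual method.

```python
import numpy as np

def fit_virtual_model(transitions):
    """Fit a simple linear virtual space s' ~ [s; a] @ A from logged batch
    data (tuples of (state, action, reward, next_state)), so the agent can
    query virtual transitions instead of running real experiments."""
    X = np.array([np.concatenate([s, a]) for s, a, _, _ in transitions])
    Y = np.array([s_next for _, _, _, s_next in transitions])
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda s, a: np.concatenate([s, a]) @ A

def virtual_rollout(virtual_model, reward_fn, policy, s0, horizon=50):
    """Evaluate a candidate control strategy entirely inside the virtual
    space: no interaction with the real plant is required."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        total += reward_fn(s, a)
        s = virtual_model(s, a)
    return total
```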
For sequential visual information analysis, we propose a dual-fusion transformer model to tackle the sequential visual-temporal encoding in video inpainting tasks. Our framework includes a flow-guided transformer with dual attention fusion, and we observe that the sequential information is effectively processed, resulting in promising inpainted videos.
Finally, we propose a cycle-based captioning model for the analysis of sequential visual-semantic information. This model augments data from two views to optimise caption generation from an image, addressing new few-shot and zero-shot settings. The proposed model can generate more accurate and informative captions by leveraging sequential visual-semantic information.
Overall, the thesis contributes to analysing and manipulating sequential information in multi-modal real-world applications. Our flexible framework design provides a unified theoretical foundation for deploying sequential information systems in distinct application domains. Considering the diversity of challenges addressed in this thesis, we believe our technique paves the way towards versatile AI in the new era.