Decomposition of Nonlinear Dynamical Systems Using Koopman Gramians
In this paper we propose a new Koopman operator approach to the decomposition
of nonlinear dynamical systems using Koopman Gramians. We introduce the notion
of an input-Koopman operator, and show how input-Koopman operators can be used
to cast a nonlinear system into the classical state-space form, and identify
conditions under which input and state observable functions are well separated.
We then extend an existing method of dynamic mode decomposition for learning
Koopman operators from data known as deep dynamic mode decomposition to systems
with controls or disturbances. We illustrate the accuracy of the method in
learning an input-state separable Koopman operator for an example system, even
when the underlying system exhibits mixed state-input terms. We next introduce
a nonlinear decomposition algorithm, based on Koopman Gramians, that maximizes
internal subsystem observability and rejection of disturbances from other
subsystems. We derive a relaxation based on Koopman Gramians and
multi-way partitioning for the resulting NP-hard decomposition problem. We
lastly illustrate the proposed algorithm with the swing dynamics for an IEEE
39-bus system.
Comment: 8 pages, submitted to IEEE 2018 AC
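The extension of dynamic mode decomposition to systems with inputs can be sketched with a simple least-squares fit. The following is a minimal illustration (not the paper's deep, lifted-observable variant, which learns Koopman observables with a neural network): given snapshot matrices of states, inputs, and successor states, fit the operator pair (A, B) so that x' ≈ A x + B u. All system matrices and dimensions here are assumed for the demo.

```python
import numpy as np

# Minimal DMD-with-control sketch: recover (A, B) from snapshot data.
# A_true, B_true are a hypothetical ground-truth system used only to
# generate the data; the paper's method would lift x through learned
# Koopman observables first.

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

# Snapshot matrices: states X, inputs U, successor states Xp.
n_snap = 50
X = rng.standard_normal((2, n_snap))
U = rng.standard_normal((1, n_snap))
Xp = A_true @ X + B_true @ U

# Least-squares fit of the stacked operator: Xp ≈ [A B] @ [X; U].
Omega = np.vstack([X, U])
AB = Xp @ np.linalg.pinv(Omega)
A_est, B_est = AB[:, :2], AB[:, 2:]
```

With noise-free data and more snapshots than the stacked dimension, the pseudoinverse recovers the operators exactly; the separation of A (state) and B (input) columns mirrors the input-state separability condition the paper studies.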
Efficient Model Learning for Human-Robot Collaborative Tasks
We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. These demonstrated sequences are also used by
the robot to learn a reward function that is representative for each type,
through the employment of an inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user that was not included in the training set, and can compute a policy
for the robot that will be aligned to the preference of this new user and will
be robust to deviations of the human actions from prior demonstrations.
Finally, we validate the approach using data collected in human subject
experiments, and conduct proof-of-concept demonstrations in which a person
performs a collaborative task with a small industrial robot.
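The online type-inference step can be sketched as a Bayesian belief update over the discrete human types. The per-type action likelihoods below are invented for illustration; in the paper's framework they would follow from the reward functions learned via inverse reinforcement learning, with the belief serving as the partially observable component of the MOMDP.

```python
import numpy as np

# Hypothetical sketch: maintain a belief over discrete human types
# (learned offline by clustering demonstrations) and update it from
# observed actions via Bayes' rule. action_probs[t, a] is an assumed
# probability that a human of type t takes action a.

action_probs = np.array([
    [0.7, 0.2, 0.1],   # type 0: strongly prefers action 0
    [0.1, 0.2, 0.7],   # type 1: strongly prefers action 2
])

belief = np.array([0.5, 0.5])      # uniform prior over the two types

for a in [0, 0, 1, 0]:             # stream of observed human actions
    belief = belief * action_probs[:, a]   # likelihood weighting
    belief = belief / belief.sum()         # renormalize

# After mostly observing action 0, the belief concentrates on type 0.
```

The robot's policy can then be computed against this belief, which is what makes it robust to users whose behavior deviates from any single demonstrated type.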
emgr - The Empirical Gramian Framework
System Gramian matrices are a well-known encoding for properties of
input-output systems such as controllability, observability or minimality.
These so-called system Gramians were developed in linear system theory for
applications such as model order reduction of control systems. Empirical
Gramians are an extension of the system Gramians to parametric and nonlinear
systems as well as a data-driven method of computation. The empirical Gramian
framework - emgr - implements the empirical Gramians in a uniform and
configurable manner, with applications such as Gramian-based (nonlinear) model
reduction, decentralized control, sensitivity analysis, parameter
identification and combined state and parameter reduction.
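The data-driven idea behind empirical Gramians can be sketched as follows (this mirrors the spirit of emgr, not its actual API): simulate the system's response to perturbations on each input channel and accumulate the resulting state trajectories. For a stable linear system this recovers the classical controllability Gramian, which is what makes the empirical construction a consistent extension.

```python
import numpy as np

# Empirical controllability Gramian sketch for an assumed stable
# discrete-time LTI system x+ = A x + B u. For nonlinear systems the
# same recipe applies with the nonlinear simulator in place of `step`.

A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[1.0], [0.5]])

def step(x, u):
    return A @ x + B @ u

n, m, T = 2, 1, 200
Wc = np.zeros((n, n))
for j in range(m):                     # perturb each input channel
    u = np.zeros(m)
    u[j] = 1.0                         # unit impulse at t = 0
    x = step(np.zeros(n), u)
    for _ in range(T):                 # accumulate x(t) x(t)^T
        Wc += np.outer(x, x)
        x = step(x, np.zeros(m))
```

In the linear case the accumulated sum converges to the solution of the discrete Lyapunov equation A Wc Aᵀ + B Bᵀ = Wc, so the empirical and analytic Gramians agree.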
Moment Matching Based Model Reduction for LPV State-Space Models
We present a novel algorithm for reducing the state dimension, i.e. order, of
linear parameter varying (LPV) discrete-time state-space (SS) models with
affine dependence on the scheduling variable. The input-output behavior of the
reduced order model approximates that of the original model. In fact, for input
and scheduling sequences of a certain length, the input-output behaviors of the
reduced and original model coincide. The proposed method can also be
interpreted as a reachability and observability reduction (minimization)
procedure for LPV-SS representations with affine dependence.
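The underlying moment-matching idea can be illustrated on a plain LTI system (the paper's algorithm handles the harder LPV-SS case with affine scheduling dependence; this sketch shows only the basic mechanism). Projecting onto an orthonormal basis of the Krylov subspace span{B, AB, ..., A^(r-1)B} yields a reduced model whose first r Markov parameters C A^k B coincide with the original's, which is the finite-horizon input-output matching the abstract describes. All matrices below are randomly generated for illustration.

```python
import numpy as np

# Moment matching by one-sided Krylov projection (LTI sketch, not the
# paper's LPV algorithm). Reduce order n = 6 down to r = 3 while matching
# the first r Markov parameters.

rng = np.random.default_rng(1)
n, r = 6, 3
A = rng.standard_normal((n, n)) / n
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Orthonormal basis of the Krylov subspace span{B, AB, A^2 B}.
K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(r)])
V, _ = np.linalg.qr(K)

# Galerkin projection gives the reduced-order model.
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V

# The first r Markov parameters of full and reduced models coincide.
markov_full = [C @ np.linalg.matrix_power(A, k) @ B for k in range(r)]
markov_red = [Cr @ np.linalg.matrix_power(Ar, k) @ Br for k in range(r)]
```

For LPV-SS models the analogous construction uses scheduling-dependent reachability/observability matrices, which is why the method also acts as a minimization procedure.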
Asymmetric Actor Critic for Image-Based Robot Learning
Deep reinforcement learning (RL) has proven a powerful technique in many
sequential decision-making domains. However, robotics poses many challenges for
RL; most notably, training on a physical system can be expensive and dangerous,
which has sparked significant interest in learning control policies using a
physics simulator. While several recent works have shown promising results in
transferring policies trained in simulation to the real world, they often do
not fully utilize the advantage of working with a simulator. In this work, we
exploit the full state observability in the simulator to train better policies
which take as input only partial observations (RGBD images). We do this by
employing an actor-critic training algorithm in which the critic is trained on
full states while the actor (or policy) gets rendered images as input. We show
experimentally on a range of simulated tasks that using these asymmetric inputs
significantly improves performance. Finally, we combine this method with domain
randomization and show real robot experiments for several tasks like picking,
pushing, and moving a block. We achieve this simulation to real world transfer
without training on any real world data.
Comment: Videos of experiments can be found at http://www.goo.gl/b57WT
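The asymmetric input split at the heart of the method can be sketched structurally (this is a conceptual stub, not the paper's actual deep RL implementation): during simulated training the critic consumes the full simulator state, while the actor consumes only the partial observation the real robot would receive. All function bodies and shapes below are assumptions for illustration.

```python
import numpy as np

# Asymmetric actor-critic sketch: critic sees the full state, actor sees
# only a lossy observation. `render` stands in for the simulator's RGBD
# rendering; here it simply drops half of the state coordinates.

rng = np.random.default_rng(2)

def render(state):
    return state[:2]                 # partial observation of a 4-dim state

def actor(obs, W_pi):
    return np.tanh(W_pi @ obs)       # policy acts on the observation only

def critic(state, action, W_q):
    # Q-value computed from the FULL state plus the action; in simulation
    # this privileged input is available even though the deployed policy
    # never sees it.
    return float(W_q @ np.concatenate([state, action]))

state = rng.standard_normal(4)       # full simulator state (hypothetical)
obs = render(state)                  # what the deployed policy observes
W_pi = rng.standard_normal((1, 2))   # toy actor weights
W_q = rng.standard_normal(5)         # toy critic weights

action = actor(obs, W_pi)
q_value = critic(state, action, W_q)
```

At deployment only the actor is kept, so the policy transfers to the real robot, where full state is unavailable; the critic's privileged input exists only at training time.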