Decomposition of Nonlinear Dynamical Systems Using Koopman Gramians
In this paper we propose a new Koopman operator approach to the decomposition
of nonlinear dynamical systems using Koopman Gramians. We introduce the notion
of an input-Koopman operator, and show how input-Koopman operators can be used
to cast a nonlinear system into the classical state-space form, and identify
conditions under which input and state observable functions are well separated.
We then extend an existing method of dynamic mode decomposition for learning
Koopman operators from data known as deep dynamic mode decomposition to systems
with controls or disturbances. We illustrate the accuracy of the method in
learning an input-state separable Koopman operator for an example system, even
when the underlying system exhibits mixed state-input terms. We next introduce
a nonlinear decomposition algorithm, based on Koopman Gramians, that maximizes
internal subsystem observability and rejection of unwanted noise from other
subsystems. We derive a relaxation based on Koopman Gramians and
multi-way partitioning for the resulting NP-hard decomposition problem. We
lastly illustrate the proposed algorithm on the swing dynamics of an IEEE
39-bus system.
Comment: 8 pages, submitted to IEEE 2018 AC
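To make the input-Koopman idea concrete, here is a minimal sketch of fitting a linear operator on lifted observables from input-state data, in the spirit of extended DMD with control. The toy dynamics f, the hand-picked dictionary psi, and all variable names are illustrative assumptions; the paper's deep DMD approach learns the dictionary with a neural network rather than fixing it by hand.

```python
# Minimal sketch: least-squares fit of psi(x_{k+1}) ~ Kx psi(x_k) + Ku u_k.
import numpy as np

def f(x, u):
    # Toy nonlinear system with a mixed state-input term (assumed example).
    return np.array([0.9 * x[0] + 0.1 * x[1] ** 2,
                     0.8 * x[1] + 0.2 * x[0] * u[0]])

def psi(x):
    # Hand-picked observable dictionary: the states plus a quadratic term.
    return np.array([x[0], x[1], x[1] ** 2])

rng = np.random.default_rng(0)
X = [rng.standard_normal(2)]
U = [rng.standard_normal(1) for _ in range(200)]
for u in U:
    X.append(f(X[-1], u))

# Stack lifted snapshot pairs and the inputs as columns.
PsiX  = np.column_stack([psi(x) for x in X[:-1]])
PsiXp = np.column_stack([psi(x) for x in X[1:]])
Umat  = np.column_stack(U)

# Least-squares fit of the stacked operator [Kx | Ku].
K, *_ = np.linalg.lstsq(np.vstack([PsiX, Umat]).T, PsiXp.T, rcond=None)
Kx, Ku = K.T[:, :PsiX.shape[0]], K.T[:, PsiX.shape[0]:]
print("relative one-step lift error:",
      np.linalg.norm(PsiXp - Kx @ PsiX - Ku @ Umat) / np.linalg.norm(PsiXp))
```

With this fixed dictionary the mixed term x[0]*u is not exactly representable, so the residual stays nonzero; learning the dictionary, as in deep DMD, is what lets the method handle such terms.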
Convolutional Dictionary Learning: Acceleration and Convergence
Convolutional dictionary learning (CDL or sparsifying CDL) has many
applications in image processing and computer vision. There has been growing
interest in developing efficient algorithms for CDL, mostly relying on the
augmented Lagrangian (AL) method or the variant alternating direction method of
multipliers (ADMM). When their parameters are properly tuned, AL methods have
shown fast convergence in CDL. However, the parameter tuning process is not
trivial due to its data dependence and, in practice, the convergence of AL
methods depends on the AL parameters for nonconvex CDL problems. To mitigate
these problems, this paper proposes a new, practically feasible, and convergent
Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The
BPG-M-based CDL is investigated with different block updating schemes and
majorization matrix designs, and further accelerated by incorporating some
momentum coefficient formulas and restarting techniques. All of the methods
investigated incorporate a boundary artifacts removal (or, more generally,
sampling) operator in the learning model. Numerical experiments show that,
without needing any parameter tuning process, the proposed BPG-M approach
converges more stably to desirable solutions of lower objective values than the
existing state-of-the-art ADMM algorithm and its memory-efficient variant do.
Compared to the ADMM approaches, the BPG-M method using a multi-block updating
scheme is particularly useful for single-threaded CDL on large datasets, due
to its lower memory requirement and the absence of polynomial computational
complexity. Image denoising experiments show that, for relatively strong
additive white Gaussian noise, the filters learned by BPG-M-based CDL
outperform those trained by the ADMM approach.
Comment: 21 pages, 7 figures, submitted to IEEE Transactions on Image
Processing
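To illustrate the majorized proximal step at the heart of this approach, below is a minimal sketch of one BPG-M-style block update: the sparse codes of a circular convolutional model are updated by a gradient step under a diagonal (here scalar) majorizer, followed by soft-thresholding. The filter update, momentum coefficients, restarting, and the boundary-artifact operator from the paper are omitted; all names and the majorizer choice are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: one block (the sparse codes) of CDL updated by a
# majorized proximal gradient step, with FFT-based circular convolution.
import numpy as np

rng = np.random.default_rng(0)
n, K, lam = 64, 4, 0.1
y = rng.standard_normal(n)               # signal to be coded
D = rng.standard_normal((K, 8))          # K fixed filters of length 8
Dh = np.fft.fft(np.concatenate([D, np.zeros((K, n - 8))], axis=1), axis=1)

z = np.zeros((K, n))                     # sparse codes (circular model)

def residual(z):
    # y - sum_k d_k (*) z_k, computed in the frequency domain.
    return y - np.fft.ifft(np.sum(Dh * np.fft.fft(z, axis=1), axis=0)).real

# Scalar majorizer: per-frequency curvature bound max_f sum_k |Dh[k,f]|^2.
M = np.max(np.sum(np.abs(Dh) ** 2, axis=0))

for _ in range(100):
    r = residual(z)
    # Gradient of 0.5 * ||y - sum_k d_k (*) z_k||^2 w.r.t. each code z_k.
    g = -np.fft.ifft(np.conj(Dh) * np.fft.fft(r)[None, :], axis=1).real
    # Majorized proximal step: gradient descent + soft-thresholding.
    step = z - g / M
    z = np.sign(step) * np.maximum(np.abs(step) - lam / M, 0.0)

print("data fit:", np.linalg.norm(residual(z)),
      "nonzeros:", int(np.count_nonzero(z)))
```

The point of the majorizer is that it is computed directly from the filters, so no step-size or penalty parameter needs hand-tuning, in contrast to the AL parameters discussed above.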
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Separable Cosparse Analysis Operator Learning
The ability to sparsely represent a certain class of signals
has many applications in data analysis, image processing, and other research
fields. Among sparse representations, the cosparse analysis model has recently
gained increasing interest. Many signals exhibit a multidimensional structure,
e.g. images or three-dimensional MRI scans. Most data analysis and learning
algorithms use vectorized signals and thereby do not account for this
underlying structure. The drawback of not taking the inherent structure into
account is a dramatic increase in computational cost. We propose an algorithm
for learning a cosparse analysis operator that adheres to the preexisting
structure of the data, and thus allows for a very efficient implementation.
This is achieved by enforcing a separable structure on the learned operator.
Our learning algorithm is able to deal with multidimensional data of arbitrary
order. We evaluate our method on volumetric data, using three-dimensional MRI
scans as an example.
Comment: 5 pages, 3 figures, accepted at EUSIPCO 201
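To show why separability pays off computationally, here is a minimal sketch of applying a separable analysis operator to a 3D volume via mode-wise products instead of one large matrix acting on the vectorized signal. The operator sizes, the random volume, and the near-zero threshold are illustrative assumptions; the paper's learning procedure for fitting the factors is not reproduced here.

```python
# Minimal sketch: separable analysis of a 3D volume by three small factors.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 32, 32, 32
X = rng.standard_normal((n1, n2, n3))    # e.g. one 3D MRI patch volume

# Small per-mode analysis factors (here slightly overcomplete, random).
A1 = rng.standard_normal((40, n1))
A2 = rng.standard_normal((40, n2))
A3 = rng.standard_normal((40, n3))

# Separable analysis: (A1 kron A2 kron A3) vec(X), computed by three
# mode-n products, never forming the 64000 x 32768 Kronecker matrix.
C = np.einsum('ia,jb,kc,abc->ijk', A1, A2, A3, X)

# Cosparsity is measured on the analysis coefficients C, e.g. by counting
# near-zero entries (the threshold here is an arbitrary choice).
print("analysed shape:", C.shape,
      "near-zeros:", int(np.sum(np.abs(C) < 1e-1)))
```

The three small factors cost a few mode-wise matrix products per volume, which is exactly the efficiency the separable structure is meant to buy over a vectorized, unstructured operator.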