64 research outputs found
Sparse Automatic Differentiation for Large-Scale Computations Using Abstract Elementary Algebra
Most numerical solvers and libraries nowadays are implemented to use
mathematical models created with language-specific built-in data types (e.g.
real in Fortran or double in C) and their respective elementary algebra
implementations. However, built-in elementary algebra typically has limited
functionality and often restricts the flexibility of the mathematical models and
the types of analysis that can be applied to them. To overcome this
limitation, a number of domain-specific languages with more feature-rich
built-in data types have been proposed. In this paper, we argue that if
numerical libraries and solvers are designed to use abstract elementary algebra
rather than language-specific built-in algebra, modern mainstream languages can
be as effective as any domain-specific language. We illustrate our ideas using
the example of sparse Jacobian matrix computation. We implement an automatic
differentiation method that takes advantage of sparse system structures and is
straightforward to parallelize in an MPI setting. Furthermore, we show that the
computational cost scales linearly with the size of the system.
Comment: Submitted to ACM Transactions on Mathematical Software.
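The core idea, a solver written against an abstract scalar algebra rather than a built-in type, can be sketched with a minimal operator-overloading forward-mode AD class. This is a hypothetical illustration, not the paper's implementation: the residual function below is generic in its scalar type, and seeding dual numbers column by column exposes the tridiagonal Jacobian structure. A production version would exploit the known sparsity pattern (e.g. via graph coloring) instead of seeding all n columns.

```python
import numpy as np

class Dual:
    """Minimal forward-mode AD scalar: carries a value and a derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def F(x):
    """Tridiagonal residual F_i = 2*x_i - x_{i-1} - x_{i+1}; generic in the
    scalar type, so it works with floats and with Dual numbers alike."""
    n = len(x)
    return [2 * x[i] - (x[i-1] if i > 0 else 0.0) - (x[i+1] if i < n - 1 else 0.0)
            for i in range(n)]

def jacobian(F, x0):
    """One forward AD pass per seeded unit direction (naive, dense loop)."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        xs = [Dual(x0[i], 1.0 if i == j else 0.0) for i in range(n)]
        J[:, j] = [f.der for f in F(xs)]
    return J

J = jacobian(F, np.ones(4))
print(J)  # nonzeros only on the main, sub-, and super-diagonal
```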
Sensing and Control in Symmetric Networks
In engineering applications, one of the major challenges today is to develop
reliable and robust control algorithms for complex networked systems.
Controllability and observability of such systems play a crucial role in the
design process. The underlying network structure may contain symmetries --
caused for example by the coupling of identical building blocks -- and these
symmetries lead to repeated eigenvalues in a generic way. This complicates the
design of controllers since repeated eigenvalues might decrease the
controllability of the system. In this paper, we will analyze the relationship
between the controllability and observability of complex networked systems and
graph symmetries using results from representation theory. Furthermore, we will
propose an algorithm to compute sparse input and output matrices based on
projections onto the isotypic components. We will illustrate our results with
the aid of two guiding examples, a network with symmetry and the
Petersen graph.
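The effect of symmetry on controllability can be illustrated with a standard Kalman rank test on a small symmetric network. This is a generic sketch, not the authors' isotypic-projection algorithm: the star graph K_{1,3} is an assumed toy example whose leaf-permutation symmetry forces a repeated eigenvalue (0, with multiplicity 2), so no single input can make the system controllable, while actuating two of the three leaves breaks the symmetry.

```python
import numpy as np

# Star graph K_{1,3}: node 0 is the hub, nodes 1-3 are identical leaves.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(np.round(np.linalg.eigvalsh(A), 6))  # eigenvalue 0 appears twice

def ctrb_rank(A, B):
    """Rank of the Kalman controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(C)

b_hub = np.array([[1.0], [0.0], [0.0], [0.0]])
print(ctrb_rank(A, b_hub))        # 2: a symmetric input cannot steer the leaves apart
B_two = np.eye(4)[:, [1, 2]]      # actuate two of the three leaves
print(ctrb_rank(A, B_two))        # 4: symmetry broken, system controllable
```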
Koopman operator-based model reduction for switched-system control of PDEs
We present a new framework for optimal and feedback control of PDEs using
Koopman operator-based reduced order models (K-ROMs). The Koopman operator is a
linear but infinite-dimensional operator which describes the dynamics of
observables. A numerical approximation of the Koopman operator therefore yields
a linear system for the observation of an autonomous dynamical system. In our
approach, by introducing a finite number of constant controls, the dynamic
control system is transformed into a set of autonomous systems and the
corresponding optimal control problem into a switching time optimization
problem. This allows us to replace each of these systems by a K-ROM which can
be solved orders of magnitude faster. By this approach, a nonlinear
infinite-dimensional control problem is transformed into a low-dimensional
linear problem. In situations where the Koopman operator can be computed
exactly using Extended Dynamic Mode Decomposition (EDMD), the proposed approach
yields optimal control inputs. Furthermore, a recent convergence result for
EDMD suggests that the approach can be applied to more complex dynamics as
well. To illustrate the results, we consider the 1D Burgers equation and the 2D
Navier--Stokes equations. The numerical experiments show remarkable performance
concerning both solution times and accuracy.
Comment: arXiv admin note: text overlap with arXiv:1801.0641
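The building block of such a K-ROM can be sketched with plain EDMD applied to a single autonomous system, i.e. one frozen control value. The scalar map and the dictionary below are assumed toy choices, not the paper's PDE setting; because the dictionary is closed under the linear map, the Koopman matrix is recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
# Autonomous system obtained by freezing one constant control: x_{k+1} = 0.9 x_k.
x = rng.uniform(-1, 1, 200)
y = 0.9 * x

# Dictionary of observables {1, x, x^2}.
psi = lambda z: np.vstack([np.ones_like(z), z, z**2])
PX, PY = psi(x), psi(y)

# EDMD: least-squares fit of K such that K @ PX ≈ PY (observables evolve linearly).
K = PY @ np.linalg.pinv(PX)
print(np.round(K, 6))
print(np.sort(np.linalg.eigvals(K).real))  # Koopman eigenvalues ≈ 0.81, 0.9, 1
```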
Towards tensor-based methods for the numerical approximation of the Perron-Frobenius and Koopman operator
The global behavior of dynamical systems can be studied by analyzing the
eigenvalues and corresponding eigenfunctions of linear operators associated
with the system. Two important operators which are frequently used to gain
insight into the system's behavior are the Perron-Frobenius operator and the
Koopman operator. Due to the curse of dimensionality, computing the
eigenfunctions of high-dimensional systems is in general infeasible. We will
propose a tensor-based reformulation of two numerical methods for computing
finite-dimensional approximations of the aforementioned infinite-dimensional
operators, namely Ulam's method and Extended Dynamic Mode Decomposition (EDMD).
The aim of the tensor formulation is to approximate the eigenfunctions by
low-rank tensors, potentially resulting in a significant reduction of the time
and memory required to solve the resulting eigenvalue problems, provided that
such a low-rank tensor decomposition exists. Typically, not all variables of a
high-dimensional dynamical system contribute equally to the system's behavior;
often the dynamics can be decomposed into slow and fast processes, which is
also reflected in the eigenfunctions. Thus, the weak coupling between different
variables might be approximated by low-rank tensor cores. We will illustrate
the efficiency of the tensor-based formulation of Ulam's method and EDMD using
simple stochastic differential equations.
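The non-tensor baseline, Ulam's method itself, fits in a few lines: partition the state space into boxes, estimate transition probabilities from sample points, and read the invariant density off the leading eigenvector of the resulting transition matrix. The chaotic logistic map below is a hypothetical stand-in for the stochastic differential equations treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: 4 * x * (1 - x)      # logistic map on [0, 1]
n_boxes, n_test = 50, 500

# Ulam's method: P[j, i] ≈ fraction of test points in box i mapped into box j.
edges = np.linspace(0, 1, n_boxes + 1)
P = np.zeros((n_boxes, n_boxes))
for i in range(n_boxes):
    pts = rng.uniform(edges[i], edges[i + 1], n_test)
    idx = np.clip(np.searchsorted(edges, f(pts)) - 1, 0, n_boxes - 1)
    np.add.at(P[:, i], idx, 1.0 / n_test)

# Leading eigenvector (eigenvalue 1) approximates the invariant density,
# which for this map is the arcsine density 1/(pi*sqrt(x*(1-x))).
w, V = np.linalg.eig(P)
rho = np.abs(V[:, np.argmax(w.real)].real)
rho /= rho.sum()
print(rho[0], rho[n_boxes // 2])   # mass concentrates near the interval ends
```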
Tensor-Based Algorithms for Image Classification
Interest in machine learning with tensor networks has been growing rapidly in recent years. We show that tensor-based methods developed for learning the governing equations of dynamical systems from data can, in the same way, be used for supervised learning problems, and we propose two novel approaches for image classification. One is a kernel-based reformulation of the previously introduced multidimensional approximation of nonlinear dynamics (MANDy), the other an alternating ridge regression in the tensor train format. We apply both methods to the MNIST and Fashion-MNIST data sets and show that the approaches are competitive with state-of-the-art neural network-based classifiers.
Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces
Transfer operators such as the Perron--Frobenius or Koopman operator play an
important role in the global analysis of complex dynamical systems. The
eigenfunctions of these operators can be used to detect metastable sets, to
project the dynamics onto the dominant slow processes, or to separate
superimposed signals. We extend transfer operator theory to reproducing kernel
Hilbert spaces and show that these operators are related to Hilbert space
representations of conditional distributions, known as conditional mean
embeddings in the machine learning community. Moreover, numerical methods to
compute empirical estimates of these embeddings are akin to data-driven methods
for the approximation of transfer operators such as extended dynamic mode
decomposition and its variants. One main benefit of the presented kernel-based
approaches is that these methods can be applied to any domain where a
similarity measure given by a kernel is available. We illustrate the results
with the aid of guiding examples and highlight potential applications in
molecular dynamics as well as video and text data analysis.
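For a single fixed observable, the conditional mean embedding machinery reduces to kernel ridge regression, which gives a compact empirical sketch of such an embedding estimate. The toy data and Gaussian kernel bandwidth below are assumptions for illustration; this is not the paper's full eigendecomposition framework.

```python
import numpy as np

rng = np.random.default_rng(2)
# Data pairs from the map y = 0.9 x; for the observable g(y) = y, the empirical
# conditional mean embedding of P(Y | X = x) is a kernel ridge regressor.
X = rng.uniform(-1, 1, 300)
Y = 0.9 * X

# Gaussian kernel Gram matrix (bandwidth 0.3, an assumed choice).
k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * 0.3**2))
G = k(X, X)
eps = 1e-6
alpha = np.linalg.solve(G + len(X) * eps * np.eye(len(X)), Y)

x_star = np.array([0.5])
y_pred = k(x_star, X) @ alpha
print(y_pred)  # close to 0.9 * 0.5 = 0.45
```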
Nearest-Neighbor Interaction Systems in the Tensor-Train Format
Low-rank tensor approximation approaches have become an important tool in the
scientific computing community. The aim is to enable the simulation and
analysis of high-dimensional problems which can no longer be solved using
conventional methods due to the so-called curse of dimensionality. This requires
techniques to handle linear operators defined on extremely large state spaces
and to solve the resulting systems of linear equations or eigenvalue problems.
In this paper, we present a systematic tensor-train decomposition for
nearest-neighbor interaction systems which is applicable to a host of different
problems. With the aid of this decomposition, it is possible to reduce the
memory consumption as well as the computational costs significantly.
Furthermore, it can be shown that in some cases the rank of the tensor
decomposition does not depend on the network size. The format is thus feasible
even for high-dimensional systems. We will illustrate the results with several
guiding examples such as the Ising model, a system of coupled oscillators, and
a CO oxidation model.
Tensor-based dynamic mode decomposition
Dynamic mode decomposition (DMD) is a recently developed tool for the
analysis of the behavior of complex dynamical systems. In this paper, we will
propose an extension of DMD that exploits low-rank tensor decompositions of
potentially high-dimensional data sets to compute the corresponding DMD modes
and eigenvalues. The goal is to reduce the computational complexity and also
the amount of memory required to store the data in order to mitigate the curse
of dimensionality. The efficiency of these tensor-based methods will be
illustrated with the aid of several different fluid dynamics problems such as
the von Kármán vortex street and the simulation of two merging vortices.
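For reference, standard matrix-based exact DMD is a short computation on snapshot matrices; the tensor-based variant replaces these dense arrays with low-rank tensor decompositions. The 2x2 linear system below is an assumed toy example, not one of the paper's fluid dynamics problems.

```python
import numpy as np

rng = np.random.default_rng(3)
# Snapshot pairs y_k = A x_k from a known linear system.
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.5]])
X = rng.standard_normal((2, 100))
Y = A_true @ X

# Exact DMD: project onto the POD basis of X, then eigendecompose.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)   # reduced operator
eigvals, W = np.linalg.eig(A_tilde)
modes = Y @ Vh.T @ np.diag(1.0 / s) @ W       # exact DMD modes
print(np.sort(eigvals.real))  # recovers the eigenvalues 0.5 and 0.9 of A_true
```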
Multidimensional approximation of nonlinear dynamical systems
A key task in the field of modeling and analyzing nonlinear dynamical systems is the recovery of unknown governing equations from measurement data only. There is a wide range of application areas for this important instance of system identification, ranging from industrial engineering and acoustic signal processing to stock market models. In order to find appropriate representations of underlying dynamical systems, various data-driven methods have been proposed by different communities. However, if the given data sets are high-dimensional, then these methods typically suffer from the curse of dimensionality. To significantly reduce the computational costs and storage consumption, we propose the method multidimensional approximation of nonlinear dynamical systems (MANDy), which combines data-driven methods with tensor network decompositions. The efficiency of the introduced approach will be illustrated with the aid of several high-dimensional nonlinear dynamical systems.
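The flat, non-tensor analogue of such a governing-equation recovery is a plain library regression; MANDy's contribution is storing and solving this problem in the tensor-train format, which the one-dimensional sketch below (with an assumed toy system) does not show.

```python
import numpy as np

rng = np.random.default_rng(4)
# Noise-free samples of dx/dt = -0.5*x + 2*x^3.
x = rng.uniform(-1, 1, 200)
dx = -0.5 * x + 2 * x**3

# Library of candidate terms {1, x, x^2, x^3}; least squares picks coefficients.
Theta = np.vstack([np.ones_like(x), x, x**2, x**3]).T
xi, *_ = np.linalg.lstsq(Theta, dx, rcond=None)
print(np.round(xi, 6))  # ≈ [0, -0.5, 0, 2]: the governing equation is recovered
```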