A sparse decomposition of low rank symmetric positive semi-definite matrices
Suppose that A ∈ ℝ^{N×N} is symmetric positive
semidefinite with rank K ≤ N. Our goal is to decompose A into K
rank-one matrices, A = Σ_{k=1}^{K} g_k g_k^T, where the modes {g_k}
are required to be as sparse as possible. In contrast to eigen decomposition,
these sparse modes are not required to be orthogonal. Such a problem arises in
random field parametrization, where A is the covariance function and the
problem is intractable to solve in general. In this paper, we partition the
indices from 1 to N into several patches and propose to quantify the
sparseness of a vector by the number of patches on which it is nonzero, which
is called patch-wise sparseness. Our aim is to find the decomposition which
minimizes the total patch-wise sparseness of the decomposed modes. We propose a
domain-decomposition type method, called intrinsic sparse mode decomposition
(ISMD), which follows the "local-modes-construction + patching-up" procedure.
The key step in the ISMD is to construct local pieces of the intrinsic sparse
modes by a joint diagonalization problem. Thereafter a pivoted Cholesky
decomposition is utilized to glue these local pieces together. Optimal sparse
decomposition, consistency with different domain decomposition and robustness
to small perturbation are proved under the so called regular-sparse assumption
(see Definition 1.2). We provide simulation results to show the efficiency and
robustness of the ISMD. We also compare the ISMD to other existing methods,
e.g., eigen decomposition, pivoted Cholesky decomposition and convex relaxation
of sparse principal component analysis [25] and [40].
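The pivoted Cholesky step used to glue local pieces together can be illustrated with a generic diagonally pivoted Cholesky factorization of a low-rank PSD matrix. This is a standard textbook sketch, not the authors' ISMD implementation; the function name and tolerance are illustrative:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Diagonally pivoted Cholesky factorization of a symmetric PSD matrix.

    Stops once the largest remaining Schur-complement diagonal entry drops
    below tol, so a rank-K matrix yields a factor L with K columns and
    A ~= L @ L.T (rows kept in the original ordering).
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.diag(A).copy()          # diagonal of the running Schur complement
    piv = np.arange(n)             # pivot order
    L = np.zeros((n, n))
    rank = 0
    for k in range(n):
        # Pivot: bring the largest remaining diagonal entry to position k.
        j = k + int(np.argmax(d[piv[k:]]))
        piv[[k, j]] = piv[[j, k]]
        p = piv[k]
        if d[p] <= tol:            # numerical rank reached
            break
        L[p, k] = np.sqrt(d[p])
        for i in piv[k + 1:]:
            L[i, k] = (A[i, p] - L[i, :k] @ L[p, :k]) / L[p, k]
            d[i] -= L[i, k] ** 2
        rank += 1
    return L[:, :rank], rank
```

For a rank-K matrix the factorization touches only K columns, which is what makes pivoted Cholesky attractive as a cheap gluing step for low-rank covariance matrices.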
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
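The tensor train (TT) format emphasized above can be sketched via the standard TT-SVD construction, which splits a dense tensor into a chain of three-way cores by sequential SVDs. This is a generic illustration under our own naming and truncation rule, not code from the monograph:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores.

    Each core has shape (r_prev, n_k, r_next); singular values below
    eps * (largest singular value) are truncated at every unfolding.
    """
    dims = tensor.shape
    cores = []
    r_prev = 1
    M = tensor.reshape(dims[0], -1)            # first mode-unfolding
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remainder forward and unfold along the next mode.
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

The contraction cost scales with the TT ranks rather than with the full tensor size, which is the mechanism by which TT representations sidestep the curse of dimensionality for low-rank data.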
Frequency domain reduced order model of aligned-spin effective-one-body waveforms with generic mass-ratios and spins
I provide a frequency domain reduced order model (ROM) for the aligned-spin
effective-one-body (EOB) model "SEOBNRv2" for data analysis with second and
third generation ground based gravitational wave (GW) detectors. SEOBNRv2
models the dominant mode of the GWs emitted by the coalescence of black hole
(BH) binaries. The large physical parameter space (dimensionless spins
−1 ≤ χ_i ≤ 0.99 and symmetric mass-ratios 0.01 ≤ η ≤ 0.25)
requires sophisticated reduced order modeling techniques, including patching in
the parameter space and in frequency. I find that the time window over which
the inspiral-plunge and the merger-ringdown waveform in SEOBNRv2 are connected
is discontinuous at particular values of the spin of the deformed Kerr BH and
of the symmetric mass-ratio. This discontinuity increases resolution
requirements for the ROM. The ROM can be used for compact binary systems above
a minimum total mass for the advanced LIGO (aLIGO) design sensitivity and its
lower cutoff frequency. The worst mismatch of the ROM against SEOBNRv2 is
small, and mismatches are in general considerably better still. The ROM is
crucial for key data analysis applications for compact
binaries, such as GW searches and parameter estimation carried out within the
LIGO Scientific Collaboration (LSC).
Comment: 14 pages, 14 figures
Computing the Jacobian in spatial models: an applied survey.
Despite attempts to get around the Jacobian in fitting spatial econometric models by using GMM and other approximations, it remains a central problem for maximum likelihood estimation. In principle, and for smaller data sets, the use of the eigenvalues of the spatial weights matrix provides a very rapid and satisfactory resolution. For somewhat larger problems, including those induced in spatial panel and dyadic (network) problems, solving the eigenproblem is not as attractive, and a number of alternatives have been proposed. This paper surveys a selection of these alternatives and comments on their relative usefulness.
Keywords: Spatial autoregression; Maximum likelihood estimation; Jacobian computation; Econometric software.
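The eigenvalue approach can be sketched as follows: in the spatial autoregressive likelihood the Jacobian term log|I − ρW| reduces, given the eigenvalues λ_i of W, to Σ_i log(1 − ρλ_i), so one eigendecomposition serves every candidate ρ in the optimizer. The row-standardized ring-graph weights matrix here is a toy example, not from the paper:

```python
import numpy as np

# Row-standardized weights matrix W on a ring graph: each unit has two
# neighbours with weight 0.5, so rows sum to one.
n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

# Precompute the eigenvalues once (may be complex for non-symmetric W).
lam = np.linalg.eigvals(W)

def logdet_jacobian(rho):
    # log|I - rho*W| = sum_i log(1 - rho*lam_i); for real W the imaginary
    # parts cancel in the sum, so we keep the real part.
    return float(np.sum(np.log(1 - rho * lam)).real)

# Cross-check against a direct (slow, O(n^3) per rho) log-determinant.
rho = 0.4
sign, direct = np.linalg.slogdet(np.eye(n) - rho * W)
```

Each likelihood evaluation then costs O(n) instead of a fresh O(n³) determinant, which is exactly the trade-off the survey examines for larger weight matrices.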
Towards a Simplified Dynamic Wake Model using POD Analysis
We apply the proper orthogonal decomposition (POD) to large eddy simulation
data of a wind turbine wake in a turbulent atmospheric boundary layer. The
turbine is modeled as an actuator disk. Our analysis mainly focuses on the
question of whether POD could be a useful tool to develop a simplified dynamic
wake model. The extracted POD modes are used to obtain approximate descriptions
of the velocity field. To assess the quality of these POD reconstructions, we
define simple measures which are believed to be relevant for a sequential
turbine in the wake such as the energy flux through a disk in the wake. It is
shown that only a few modes are necessary to capture basic dynamical aspects of
these measures even though only a small part of the turbulent kinetic energy is
restored. Furthermore, we show that the importance of the individual modes
depends on the measure chosen. Therefore, the optimal choice of modes for a
possible model could in principle depend on the application of interest. We
additionally present a possible interpretation of the POD modes relating them
to specific properties of the wake. For example, the first mode is related to
the large-scale horizontal movement. Besides yielding a deeper understanding,
this also enables us to view our results in comparison to existing dynamic wake
models.
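The POD workflow described above (snapshot matrix, mode extraction, low-rank reconstruction, energy ranking) can be sketched with a synthetic snapshot matrix; the toy data below mimics a low-dimensional wake dynamic and is not the LES data used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 400, 60
x = np.linspace(0, 2 * np.pi, n_points)
t = np.linspace(0, 10, n_snapshots)

# Two coherent structures plus weak noise: each column is one "snapshot"
# of a 1-D velocity field.
U = (np.outer(np.sin(x), np.cos(t)) +
     0.3 * np.outer(np.sin(2 * x), np.sin(2 * t)) +
     0.01 * rng.standard_normal((n_points, n_snapshots)))

# POD acts on the fluctuations about the mean flow.
mean_flow = U.mean(axis=1, keepdims=True)
fluct = U - mean_flow

# POD modes = left singular vectors; squared singular values rank the
# modes by the kinetic energy they carry.
modes, sigma, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = sigma**2 / np.sum(sigma**2)

def reconstruct(n_modes):
    """Rank-n_modes POD reconstruction of the fluctuating field."""
    return modes[:, :n_modes] @ (sigma[:n_modes, None] * Vt[:n_modes])

# Fraction of the fluctuation energy captured by the first two modes.
captured = energy[:2].sum()
```

As in the paper's setting, a truncated reconstruction with a handful of modes can track the dominant dynamics even when higher modes still carry part of the turbulent kinetic energy.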