Shaped extensions of singular spectrum analysis
Extensions of singular spectrum analysis (SSA) for processing non-rectangular
images and time series with gaps are considered. A circular version is
suggested, which allows application of the method to data given on a circle or
on a cylinder, e.g., the cylindrical projection of a 3D ellipsoid.
The constructed Shaped SSA method with planar or circular topology is able to
produce low-rank approximations for images of complex shapes. Together with
Shaped SSA, a shaped version of the subspace-based ESPRIT method for frequency
estimation is developed. Examples of 2D circular SSA and 2D Shaped ESPRIT are
presented.
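As a point of reference for the decomposition these shaped extensions generalize, the snippet below is a minimal sketch of classical 1D SSA (embedding into a trajectory matrix, SVD, grouping, diagonal averaging). It is not the Shaped or circular variant described in the abstract, and the function name, parameters, and example data are illustrative assumptions.

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Minimal 1D SSA: embed, decompose, group, and diagonally average.

    x          : 1D time series of length N
    window     : embedding window length L, with 1 < L < N
    components : indices of elementary components to keep
    """
    x = np.asarray(x, dtype=float)
    N, L = len(x), window
    K = N - L + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of x.
    X = np.column_stack([x[i:i + L] for i in range(K)])
    # SVD yields the elementary rank-one components of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Group the selected components into a low-rank approximation.
    X_hat = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    # Diagonal averaging (Hankelization) maps X_hat back to a series.
    y = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            y[i + j] += X_hat[i, j]
            counts[i + j] += 1
    return y / counts

# Example: recover a smooth oscillation from a noisy sine wave.
t = np.linspace(0, 6 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(200)
smooth = ssa_reconstruct(noisy, window=40, components=[0, 1])
```

Per the abstract, the shaped and circular versions adapt the trajectory-matrix construction to non-rectangular supports and to circular or cylindrical topologies, while the decomposition and grouping steps play the same role.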
Video Compressive Sensing for Dynamic MRI
We present a video compressive sensing framework, termed kt-CSLDS, to
accelerate the image acquisition process of dynamic magnetic resonance imaging
(MRI). We are inspired by a state-of-the-art model for video compressive
sensing that utilizes a linear dynamical system (LDS) to model the motion
manifold. Given compressive measurements, the state sequence of an LDS can
first be estimated using system identification techniques. We then reconstruct the
observation matrix using a joint structured sparsity assumption. In particular,
we minimize an objective function with a mixture of wavelet sparsity and joint
sparsity within the observation matrix. We derive an efficient convex
optimization algorithm through alternating direction method of multipliers
(ADMM), and provide a theoretical guarantee for global convergence. We
demonstrate the performance of our approach for video compressive sensing, in
terms of reconstruction accuracy. We also investigate the impact of various
sampling strategies. We apply this framework to accelerate the acquisition
process of dynamic MRI and show that it achieves the best reconstruction
accuracy with the least computational time compared with existing algorithms in
the literature.
Comment: 30 pages, 9 figures
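The kt-CSLDS objective itself (wavelet sparsity plus joint sparsity on the observation matrix) is not reproduced here; as a hedged illustration of the ADMM structure the abstract relies on, the sketch below applies the standard ADMM splitting to a plain l1-regularized least-squares problem. The function names and the toy compressive-sensing example are assumptions for illustration only.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Shows the variable-splitting / alternating-update structure; kt-CSLDS
    instead uses mixed wavelet and joint (group) sparsity on the observation
    matrix, which is not reproduced here.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # cached for the x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)            # proximal step
        u = u + x - z                                   # dual update
    return z

# Example: a small synthetic compressive-sensing problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
x_hat = admm_lasso(A, A @ x_true, lam=0.1)
```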
Quantum field tomography
We introduce the concept of quantum field tomography, the efficient and
reliable reconstruction of unknown quantum fields based on data of correlation
functions. At the basis of the analysis is the concept of continuous matrix
product states, a complete class of variational states capturing states in
quantum field theory. We develop a practical method, making use of and
extending estimation-theoretic tools from the context of compressed sensing,
such as Prony methods and matrix pencils, allowing us to faithfully
reconstruct quantum
field states based on low-order correlation functions. In the absence of a
phase reference, we highlight how specific higher order correlation functions
can still be predicted. We exemplify the functioning of the approach by
reconstructing randomised continuous matrix product states from their
correlation data and study the robustness of the reconstruction for different
noise models. We also apply the method to data generated by simulations based
on continuous matrix product states and using the time-dependent variational
principle. The presented approach is expected to open up a new window into
experimentally studying continuous quantum systems, such as encountered in
experiments with ultra-cold atoms on top of atom chips. By virtue of the
analogy with the input-output formalism in quantum optics, it also allows for
studying open quantum systems.
Comment: 31 pages, 5 figures, minor changes
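As a hedged illustration of the estimation-theoretic tools named above, the sketch below implements the standard matrix pencil method for recovering the poles and amplitudes of a sum of complex exponentials from uniformly sampled data. It is not the quantum-field-tomography pipeline itself; the signal model, function names, and example are assumptions for illustration.

```python
import numpy as np

def matrix_pencil(y, num_poles, pencil_len=None):
    """Estimate poles z_j and amplitudes c_j of y[k] = sum_j c_j * z_j**k.

    y          : uniformly sampled 1D signal
    num_poles  : assumed model order (number of exponentials)
    pencil_len : pencil parameter L (defaults to roughly len(y) / 3)
    """
    y = np.asarray(y, dtype=complex)
    N = len(y)
    L = pencil_len if pencil_len is not None else N // 3
    # Hankel data matrix of size (N - L) x (L + 1).
    Y = np.column_stack([y[j:N - L + j] for j in range(L + 1)])
    # Rank truncation via SVD suppresses noise before forming the pencil.
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T[:, :num_poles]
    V1, V2 = V[:-1, :], V[1:, :]
    # Poles are the eigenvalues of the reduced shifted pencil.
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)
    # Amplitudes by least squares on the Vandermonde system.
    A = np.vander(z, N, increasing=True).T
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return z, c

# Example: two damped complex exponentials, recovered from 60 samples.
k = np.arange(60)
signal = 0.8 * (0.97 * np.exp(1j * 0.4)) ** k + 0.5 * (0.99 * np.exp(1j * 1.1)) ** k
poles, amps = matrix_pencil(signal, num_poles=2)
```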
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on $\ell_2$-type
regularization which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy
arguments, we derive a new $\ell_2$ penalty induced by a vector-valued
kernel; to do so we exploit the structure of the Hankel matrix, thus
simultaneously controlling complexity, measured by the McMillan degree,
stability, and smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps control the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be
significantly computationally cheaper than other first and second order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte-Carlo studies,
which confirms the effectiveness of our procedure.
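As a hedged sketch of the block Hankel structure exploited above, the snippet below assembles a block Hankel matrix from MIMO impulse-response (Markov) coefficients and evaluates its nuclear norm, a standard convex surrogate for the McMillan degree; the paper's special case involves the squared block Hankel matrix, and its Maximum Entropy kernel and SGP solver are not reproduced. All names and the toy example are illustrative assumptions.

```python
import numpy as np

def block_hankel(markov, rows, cols):
    """Block Hankel matrix H whose block (i, j) equals g_{i+j}.

    markov : array of shape (T, p, m) holding the p x m Markov parameters g_0..g_{T-1}
    """
    T, p, m = markov.shape
    assert rows + cols - 1 <= T, "not enough Markov parameters"
    blocks = [[markov[i + j] for j in range(cols)] for i in range(rows)]
    return np.block(blocks)

def hankel_nuclear_norm(markov, rows, cols):
    """Nuclear norm of the block Hankel matrix: a convex surrogate for its rank,
    and hence for the McMillan degree of the underlying system."""
    return np.linalg.norm(block_hankel(markov, rows, cols), ord="nuc")

# Example: Markov parameters of a random stable 2-input / 2-output system.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # force a stable spectral radius
B, C = rng.standard_normal((3, 2)), rng.standard_normal((2, 3))
g = np.stack([C @ np.linalg.matrix_power(A, k) @ B for k in range(20)])
penalty = hankel_nuclear_norm(g, rows=8, cols=8)
```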
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
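To make the tensor train (TT) format referenced above concrete, the following is a minimal TT-SVD sketch: sequential SVDs with rank truncation produce the TT cores, and a contraction routine rebuilds the tensor to check the approximation. It illustrates only the decomposition the monograph builds on, not the applications it surveys, and the names and example are assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank=None, tol=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores by sequential SVDs.

    Returns cores G_k of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1, whose
    contraction approximates the input tensor.
    """
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))          # numerical rank
        if max_rank is not None:
            r = min(r, max_rank)
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract TT cores back into the full tensor (to check the sketch)."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))

# Example: a 4-way tensor with additive structure compresses to small TT ranks.
x = np.linspace(0.0, 1.0, 8)
T = np.add.outer(np.add.outer(x, x), np.add.outer(x, x))  # T[i,j,k,l] = x_i + x_j + x_k + x_l
cores = tt_svd(T, max_rank=4)
rel_err = np.linalg.norm(tt_contract(cores) - T) / np.linalg.norm(T)
```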