Partial Trace Regression and Low-Rank Kraus Decomposition
The trace regression model, a direct extension of the well-studied linear
regression model, allows one to map matrices to real-valued outputs. We here
introduce an even more general model, namely the partial trace regression
model, a family of linear mappings from matrix-valued inputs to matrix-valued
outputs; this model subsumes the trace regression model and thus the linear
regression model. Borrowing tools from quantum information theory, where
partial trace operators have been extensively studied, we propose a framework
for learning partial trace regression models from data by taking advantage of
the so-called low-rank Kraus representation of completely positive maps. We
show the relevance of our framework with synthetic and real-world experiments
conducted for both (i) matrix-to-matrix regression and (ii) positive semidefinite
matrix completion, two tasks which can be formulated as partial trace
regression problems.
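To make the model above concrete, the following is a minimal numerical sketch of a partial trace regression map in the low-rank Kraus form referred to in the abstract, Phi(X) = sum_r A_r X A_r^T; the shapes, the Kraus rank, and the random choice of Kraus operators are illustrative assumptions, not the authors' learned model.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, rank = 4, 3, 2   # input size, output size, Kraus rank (all assumed for illustration)

# Kraus operators A_r : R^p -> R^q; in the learning setting these would be fitted to data.
A = rng.standard_normal((rank, q, p))

def partial_trace_regression(X, A):
    """Apply the completely positive map Phi(X) = sum_r A_r X A_r^T (real case)."""
    return sum(A_r @ X @ A_r.T for A_r in A)

# A PSD input is mapped to a PSD output, as expected for a completely positive map.
B = rng.standard_normal((p, p))
X = B @ B.T                                      # positive semidefinite input
Y = partial_trace_regression(X, A)
print(Y.shape)                                   # (3, 3): matrix-valued output
print(np.all(np.linalg.eigvalsh(Y) >= -1e-10))   # True: the output stays PSD
```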
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum, including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
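As a concrete instance of the ideas above, the sketch below runs the forward-backward proximal splitting scheme with an ℓ1 (sparsity) prior, i.e., iterative soft-thresholding for the problem min_x (1/2)||y - Phi x||^2 + lambda ||x||_1; the problem sizes, step size, and synthetic data are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 40, 100, 0.1                          # measurements, signal length, regularization weight (assumed)

Phi = rng.standard_normal((n, p)) / np.sqrt(n)    # measurement operator
x_true = np.zeros(p)
x_true[rng.choice(p, 5, replace=False)] = 1.0     # sparse ground-truth signal
y = Phi @ x_true + 0.01 * rng.standard_normal(n)  # partial, noisy measurements

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # step <= 1/L keeps forward-backward stable
x = np.zeros(p)
for _ in range(500):
    grad = Phi.T @ (Phi @ x - y)                        # forward (gradient) step on the data term
    x = soft_threshold(x - step * grad, step * lam)     # backward (proximal) step on the sparsity prior

print(np.linalg.norm(x - x_true))                 # small: the sparse signal is recovered up to lasso shrinkage bias
```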
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
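To illustrate the tensor train (TT) format emphasized above, here is a minimal TT-SVD sketch that factorizes a dense tensor into a chain of third-order cores via sequential truncated SVDs; the tensor, shapes, and truncation rank are assumed for illustration, and the code is not tied to the monograph's own software.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Decompose tensor T into TT cores G_k of shape (r_{k-1}, n_k, r_k) via sequential SVDs."""
    dims = T.shape
    cores, r_prev = [], 1
    C = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, S.size)                        # truncate to the target TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = S[:r, None] * Vt[:r]                         # carry the remainder on to the next core
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Recover the full tensor by contracting the TT cores (for checking only)."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6, 7))
cores = tt_svd(T, max_rank=20)                           # rank chosen large enough for exact recovery here
print([G.shape for G in cores])                          # chain of (r_{k-1}, n_k, r_k) cores
print(np.allclose(tt_contract(cores), T))                # True
```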
Recent Developments in Complex and Spatially Correlated Functional Data
As high-dimensional and high-frequency data are being collected on a large
scale, the development of new statistical models is being pushed forward.
Functional data analysis provides the required statistical methods to deal with
large-scale and complex data by assuming that data are continuous functions,
e.g., a realization of a continuous process (curves) or continuous random
fields (surfaces), and that each curve or surface is considered as a single
observation. Here, we provide an overview of functional data analysis when data
are complex and spatially correlated. We provide definitions and estimators of
the first and second moments of the corresponding functional random variable.
We present two main approaches: The first assumes that data are realizations of
a functional random field, i.e., each observation is a curve with a spatial
component. We call them 'spatial functional data'. The second approach assumes
that data are continuous deterministic fields observed over time. In this case,
each observation is a surface or manifold, and we call these 'surface time
series'. For both approaches, we describe the software available for
statistical analysis. We also present a data illustration, using a
high-resolution simulated wind speed dataset, as an example of the two
approaches. The functional data approach offers a new paradigm of data
analysis, where the continuous processes or random fields are considered as a
single entity. We consider this approach to be very valuable in the context of
big data.
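As a small illustration of the first- and second-moment estimators mentioned above, the sketch below computes the pointwise sample mean function and sample covariance surface from a set of discretized curves; the simulated curves, grid, and sample size are assumptions for illustration only (this is neither the wind speed dataset nor the software cited in the article).

```python
import numpy as np

rng = np.random.default_rng(0)
n_curves, n_grid = 50, 100
t = np.linspace(0.0, 1.0, n_grid)                 # common observation grid

# Simulated functional sample: a smooth signal with random amplitude and phase variation (assumed data).
X = np.array([np.sin(2 * np.pi * (t + rng.normal(0, 0.05))) * (1 + rng.normal(0, 0.1))
              for _ in range(n_curves)])

# First moment: pointwise sample mean function mu(t).
mu = X.mean(axis=0)

# Second moment: sample covariance surface C(s, t) = E[(X(s) - mu(s)) (X(t) - mu(t))].
Xc = X - mu
C = (Xc.T @ Xc) / (n_curves - 1)

print(mu.shape)   # (100,): mean curve on the grid
print(C.shape)    # (100, 100): covariance evaluated on grid x grid
```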