A Levinson-Galerkin algorithm for regularized trigonometric approximation
Trigonometric polynomials are widely used for the approximation of a smooth
function from a set of nonuniformly spaced samples. If the samples are
perturbed by noise, controlling
the smoothness of the trigonometric approximation becomes an essential issue to
avoid overfitting and underfitting of the data. Using the polynomial degree as
the regularization parameter, we derive a multilevel algorithm that iteratively
adapts to the least-squares solution of optimal smoothness. The proposed
algorithm computes the solution at a cost governed by the polynomial degree of
the approximation, by solving a family of nested Toeplitz systems. It is shown
how the presented method can be extended to
multivariate trigonometric approximation. We demonstrate the performance of the
algorithm by applying it in echocardiography to the recovery of the boundary of
the left ventricle.
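The idea of using the polynomial degree itself as the regularization parameter can be sketched in a few lines. The snippet below fits trigonometric polynomials of increasing degree to noisy nonuniform samples with a plain dense least-squares solve and picks the degree by holdout error; the paper's Levinson-Galerkin recursion instead exploits the nested Toeplitz structure of the normal equations, and the holdout split here is an illustrative stand-in for its adaptive stopping rule, not the authors' criterion.

```python
import numpy as np

def trig_fit(t, y, degree):
    """Least-squares trigonometric polynomial of the given degree on [0, 1).

    Columns of A are complex exponentials exp(2*pi*1j*k*t), k = -degree..degree.
    A dense solver stands in for the fast Toeplitz-based recursion.
    """
    k = np.arange(-degree, degree + 1)
    A = np.exp(2j * np.pi * np.outer(t, k))
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda s: (np.exp(2j * np.pi * np.outer(s, k)) @ c).real

# Noisy nonuniform samples of a smooth 1-periodic function
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t) \
    + 0.1 * rng.standard_normal(80)

# The degree acts as the regularization parameter: sweep it and keep the
# degree with the smallest error on held-out samples.
train, hold = t[::2], t[1::2]
ytr, yho = y[::2], y[1::2]
errs = {d: np.mean((trig_fit(train, ytr, d)(hold) - yho) ** 2)
        for d in range(1, 12)}
best = min(errs, key=errs.get)
```

Too small a degree underfits (the frequency-3 term is missed entirely), too large a degree chases the noise; the holdout error is smallest near the true bandwidth.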
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
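The tensor train (TT) format emphasized above is easy to demonstrate concretely. The sketch below implements the standard TT-SVD construction (sequential truncated SVDs along the modes) with a uniform rank cap; it is a bare-bones illustration of the format, not a tuned routine from any tensor-network library, and the rank-1 test tensor is an assumed example.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Tensor-train decomposition by sequential truncated SVDs (TT-SVD).

    Returns 3-way cores G[k] of shape (r_{k-1}, n_k, r_k) whose chain
    contraction approximates `tensor`; every TT rank is capped at max_rank.
    """
    dims = tensor.shape
    cores, r_prev, mat = [], 1, tensor
    for n in dims[:-1]:
        mat = np.reshape(mat, (r_prev * n, -1))
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * Vt[:r]      # carry the remainder to the next mode
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([out.ndim - 1], [0]))
    return np.squeeze(out, axis=(0, -1))  # drop boundary ranks of size 1

# A rank-1 4-way tensor is recovered essentially exactly with tiny TT ranks,
# while the full array would have 4*5*6*7 = 840 entries.
rng = np.random.default_rng(1)
a, b, c, d = (rng.standard_normal(n) for n in (4, 5, 6, 7))
T = np.einsum('i,j,k,l->ijkl', a, b, c, d)
cores = tt_svd(T, max_rank=2)
err = np.linalg.norm(tt_contract(cores) - T) / np.linalg.norm(T)
```

The storage of the cores grows linearly in the number of modes for fixed ranks, which is the sense in which TT "alleviates the curse of dimensionality" for data that admit low-rank structure.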
Numerical Analysis of the Non-uniform Sampling Problem
We give an overview of recent developments in the problem of reconstructing a
band-limited signal from non-uniform sampling from a numerical analysis view
point. It is shown that the appropriate design of the finite-dimensional model
plays a key role in the numerical solution of the non-uniform sampling problem.
In one approach (often proposed in the literature), the finite-dimensional
model leads to an ill-posed problem even in very simple situations. The other
approach that we consider leads to a well-posed problem that preserves
important structural properties of the original infinite-dimensional problem
and gives rise to efficient numerical algorithms. Furthermore, a fast multilevel
algorithm is presented that can reconstruct signals of unknown bandwidth from
noisy non-uniformly spaced samples. We also discuss the design of efficient
regularization methods for ill-conditioned reconstruction problems. Numerical
examples from spectroscopy and exploration geophysics demonstrate the
performance of the proposed methods.
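The well-posed approach described above can be made concrete: modeling the signal as a trigonometric polynomial of degree d makes the weighted normal equations Hermitian Toeplitz, T_{k,l} = sum_j w_j exp(-2*pi*1j*(k-l)*t_j), so one column determines the whole system and fast Toeplitz solvers apply. The sketch below uses gap-based quadrature weights and `scipy.linalg.solve_toeplitz`; the weights and test signal are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(3)
d, M = 6, 200
t = np.sort(rng.uniform(0.0, 1.0, M))            # nonuniform sample points
f = lambda s: np.sin(2 * np.pi * s) + 0.3 * np.cos(8 * np.pi * s)
y = f(t)                                          # band-limited, degree <= d
w = np.gradient(t)                                # crude gap-based weights

k = np.arange(-d, d + 1)
m = np.arange(0, 2 * d + 1)
# Toeplitz structure: T depends only on the index difference k - l,
# so its first column (differences 0..2d) determines the whole matrix.
col = np.exp(-2j * np.pi * np.outer(m, t)) @ w            # first column of T
b = np.exp(-2j * np.pi * np.outer(k, t)) @ (w * y)        # right-hand side
c = solve_toeplitz((col, col.conj()), b)                  # Hermitian solve

# Evaluate the reconstruction on a uniform grid and compare with the truth
s = np.linspace(0.0, 1.0, 100, endpoint=False)
recon = (np.exp(2j * np.pi * np.outer(s, k)) @ c).real
err = np.max(np.abs(recon - f(s)))
```

Because the trigonometric model matches the geometry of the sampling interval, the system stays well conditioned and the recovered coefficients are close to the true Fourier coefficients, up to quadrature error in the weights.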
Multilevel Approach for Signal Restoration Problems with Toeplitz Matrices
We present a multilevel method for discrete ill-posed problems arising from the discretization of Fredholm integral equations of the first kind. In this method, we use the Haar wavelet transform to define restriction and prolongation operators within a multigrid-type iteration. The choice of the Haar wavelet operator has the advantage of preserving matrix structure, such as Toeplitz structure, between grids, which can be exploited to obtain faster solvers on each level, where an edge-preserving Tikhonov regularization is applied. Finally, we present results that indicate the promise of this approach for the restoration of signals and images with edges.
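The structural claim in the abstract, that the Haar operator preserves Toeplitz structure between grids, can be checked in a few lines: the Galerkin coarse-grid operator R T R^T is again Toeplitz whenever T is. The kernel below is an arbitrary illustrative blur, not one from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

def haar_restriction(n):
    """Haar low-pass analysis operator: averages adjacent pairs (n -> n//2)."""
    R = np.zeros((n // 2, n))
    for i in range(n // 2):
        R[i, 2 * i] = R[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)
    return R

n = 8
T = toeplitz(0.5 ** np.arange(n))     # symmetric Toeplitz "blur" matrix
R = haar_restriction(n)
Tc = R @ T @ R.T                      # Galerkin coarse-grid operator

# Verify the coarse operator is constant along each diagonal (Toeplitz)
is_toeplitz = all(
    np.allclose(np.diag(Tc, off), np.diag(Tc, off)[0])
    for off in range(-(n // 2 - 1), n // 2)
)
```

Each coarse entry is a fixed linear combination of fine entries at a fixed index offset, so it depends only on i - j; this is what lets the same fast Toeplitz solvers be reused on every level of the hierarchy.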
Structural Variability from Noisy Tomographic Projections
In cryo-electron microscopy, the 3D electric potentials of an ensemble of
molecules are projected along arbitrary viewing directions to yield noisy 2D
images. The volume maps representing these potentials typically exhibit a great
deal of structural variability, which is described by their 3D covariance
matrix. Typically, this covariance matrix is approximately low-rank and can be
used to cluster the volumes or estimate the intrinsic geometry of the
conformation space. We formulate the estimation of this covariance matrix as a
linear inverse problem, yielding a consistent least-squares estimator. For
images of n-by-n pixels, we propose an algorithm for calculating this
covariance estimator whose computational complexity depends on the condition
number of the problem, which is empirically moderate. Its efficiency relies on
the
observation that the normal equations are equivalent to a deconvolution problem
in 6D. This is then solved by the conjugate gradient method with an appropriate
circulant preconditioner. The result is the first computationally efficient
algorithm for consistent estimation of 3D covariance from noisy projections. It
also compares favorably in runtime with respect to previously proposed
non-consistent estimators. Motivated by the recent success of eigenvalue
shrinkage procedures for high-dimensional covariance matrices, we introduce a
shrinkage procedure that improves accuracy at lower signal-to-noise ratios. We
evaluate our methods on simulated datasets and achieve classification results
comparable to state-of-the-art methods in shorter running time. We also present
results on clustering volumes in an experimental dataset, illustrating the
power of the proposed algorithm for practical determination of structural
variability.
Comment: 52 pages, 11 figures
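The core numerical device, conjugate gradients with a circulant preconditioner for a convolution-type (Toeplitz) system, is easy to illustrate in one dimension. The sketch below preconditions a symmetric positive-definite Toeplitz system with Strang's circulant approximation, inverted in O(n log n) via the FFT; the kernel and size are illustrative assumptions, not the paper's 6-D deconvolution operator.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

n = 128
first_col = 0.5 ** np.arange(n)       # decaying convolution kernel
T = toeplitz(first_col)               # symmetric positive-definite Toeplitz
b = np.ones(n)

# Strang's circulant approximation: keep the central diagonals of T and
# wrap them around periodically.
s = first_col.copy()
half = n // 2
s[half + 1:] = first_col[1:half][::-1]
eig = np.fft.fft(s).real              # eigenvalues of the circulant

def apply_precond(x):
    """Apply C^{-1} using the FFT diagonalization of the circulant C."""
    return np.real(np.fft.ifft(np.fft.fft(x) / eig))

M = LinearOperator((n, n), matvec=apply_precond)
x, info = cg(T, b, M=M)               # preconditioned conjugate gradients
residual = np.linalg.norm(T @ x - b) / np.linalg.norm(b)
```

The circulant shares the Toeplitz matrix's spectral behavior, so the preconditioned system has clustered eigenvalues and CG converges in a few iterations; each iteration costs only FFTs and a matrix-vector product, which is the same economy the abstract exploits in its 6-D setting.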