A discrete graph Laplacian for signal processing
In this thesis we exploit diffusion processes on graphs to address two fundamental problems of image processing: denoising and segmentation. We treat these two low-level vision problems at the pixel level under a unified framework: a graph embedding. This framework opens up the possibility of exploiting recently introduced algorithms from the semi-supervised machine learning literature.
We contribute two novel edge-preserving smoothing algorithms to the literature and apply them to computational photography tasks. Many recent computational photography tasks require decomposing an image into a smooth base layer, containing large-scale intensity variations, and a residual layer capturing fine details. Edge-preserving smoothing is the main computational mechanism for producing these multi-scale image representations. In effect, we introduce a new approach to edge-preserving multi-scale image decomposition. Whereas prior approaches such as the bilateral filter and weighted least squares methods require multiple parameters to tune the response of the filter, our method requires only one, which can be interpreted as a scale parameter. We demonstrate the utility of our approach by applying it to computational photography tasks that use multi-scale image decompositions.
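To make the mechanism concrete (a minimal sketch, not the thesis's exact formulation): edge-preserving smoothing on a pixel graph can be posed as a single sparse linear solve, (I + λL)u = f, where L = D − W is the combinatorial graph Laplacian, W holds edge weights that decay across intensity edges, and λ acts as the single scale parameter. The 4-connectivity, Gaussian weights, and all names below are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_smooth(img, lam=10.0, sigma=0.1):
    """Edge-preserving smoothing: solve (I + lam*L) u = f on a 4-connected pixel graph."""
    h, w = img.shape
    f = img.ravel()                                        # vectorised grayscale image in [0, 1]
    idx = np.arange(h * w).reshape(h, w)
    # 4-connectivity: horizontal and vertical neighbour pairs
    a = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    b = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wgt = np.exp(-(f[a] - f[b]) ** 2 / (2 * sigma ** 2))   # weights decay across intensity edges
    W = sp.coo_matrix((np.r_[wgt, wgt], (np.r_[a, b], np.r_[b, a])),
                      shape=(h * w, h * w)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W    # combinatorial Laplacian L = D - W
    u = spla.spsolve((sp.eye(h * w) + lam * L).tocsc(), f) # one sparse solve; lam is the scale
    return u.reshape(h, w)
```

Increasing lam diffuses more aggressively within smooth regions, while the data-dependent weights keep strong edges from being averaged across, which matches the single-parameter behaviour described above.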
With minimal modification to these edge-preserving smoothing algorithms, we show that we can extend them to perform interactive image segmentation. As a result, the operations of segmentation and denoising are conducted under a unified framework. Moreover, we discuss how our method is related to region-based active contours. We benchmark our proposed interactive segmentation algorithms against those based upon energy minimisation, specifically graph-cut methods, and demonstrate that we achieve competitive performance.
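Under the same illustrative assumptions, the segmentation extension can be sketched as clamping user-marked seed pixels and harmonically extending their labels through the graph (random-walker-style label propagation); this is a hedged sketch of the general idea, not necessarily the thesis's exact algorithm.

```python
import numpy as np
import scipy.sparse.linalg as spla

def seeded_segmentation(L, fg, bg):
    """Interactive segmentation sketch: L is the graph Laplacian built as above;
    fg/bg are index arrays of user-marked foreground/background seed pixels."""
    n = L.shape[0]
    seeds = np.concatenate([fg, bg])
    xs = np.concatenate([np.ones(len(fg)), np.zeros(len(bg))])  # clamped seed labels
    free = np.setdiff1d(np.arange(n), seeds)
    L = L.tocsr()
    Luu = L[free][:, free]                      # Laplacian restricted to unlabeled pixels
    Lus = L[free][:, seeds]                     # coupling between unlabeled pixels and seeds
    xu = spla.spsolve(Luu.tocsc(), -Lus @ xs)   # harmonic extension of the seed labels
    x = np.empty(n)
    x[seeds], x[free] = xs, xu
    return x > 0.5                              # threshold the soft labels into a mask
```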
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
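To make the TT format concrete, the following minimal sketch implements the standard TT-SVD construction (sequential truncated SVDs of successive matricizations). This is the generic textbook algorithm, not code from this monograph; the tolerance and names are illustrative.

```python
import numpy as np

def tt_svd(x, eps=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores via sequential SVDs."""
    dims, d = x.shape, x.ndim
    cores, r = [], 1
    c = x.reshape(dims[0], -1)
    for k in range(d - 1):
        c = c.reshape(r * dims[k], -1)                 # matricize: current core modes vs rest
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rk = max(1, int((s > eps * s[0]).sum()))       # TT rank after truncation
        cores.append(u[:, :rk].reshape(r, dims[k], rk))
        c = s[:rk, None] * vt[:rk]                     # carry the remainder forward
        r = rk
    cores.append(c.reshape(r, dims[-1], 1))
    return cores

# usage: contracting the cores along their rank legs reconstructs the tensor
x = np.random.rand(4, 5, 6, 7)
cores = tt_svd(x)
rec = cores[0]
for g in cores[1:]:
    rec = np.tensordot(rec, g, axes=(-1, 0))
assert np.allclose(rec.reshape(x.shape), x)
```

For a d-way tensor with mode sizes n and TT ranks r, storage drops from n^d entries to O(d n r^2), which is the super-compression the text refers to.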
Multivariate moment problems with applications to spectral estimation and physical layer security in wireless communications
This thesis focuses on generalized moment problems and their applications in the framework of information engineering. Its contribution is twofold.
The first part of this dissertation proposes two new techniques for tackling multivariate spectral estimation, which is a key topic in system identification: Relative entropy rate estimation and multivariate circulant rational covariance extension.
The former provides a very natural multivariate extension of a state-of-the-art approach to scalar parametric spectral estimation with a complexity bound, known as THREE (Tunable High-Resolution Estimator). It allows available a priori information on the spectral density to be taken into account, exhibits high-resolution features, and is robust in the case of short data records.
As for multivariate circulant rational covariance extension, it is a new convex optimization approach to spectral estimation for periodic multivariate processes, in which the solution can be computed efficiently by means of the Fast Fourier Transform. Numerical examples show that this procedure also provides an efficient tool for approximating regular covariance extension for multivariate processes.
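A toy sketch of why the FFT enters: a symmetric circulant covariance is diagonalized by the DFT, so its spectral content is the FFT of its first row. The thesis's multivariate setting involves block-circulant matrices; this scalar example (values illustrative) only shows the mechanism.

```python
import numpy as np

c = np.array([2.0, 0.8, 0.1, 0.8])               # first row of a symmetric circulant covariance
C = np.array([np.roll(c, k) for k in range(4)])  # the full circulant matrix
spectrum = np.fft.fft(c).real                    # DFT of the first row = eigenvalues of C
assert np.allclose(np.sort(np.linalg.eigvalsh(C)), np.sort(spectrum))
```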
The second part of this dissertation considers the problem of deriving a universal performance bound for a message source authentication scheme based on channel estimates in a wireless fading scenario, where an attacker may have correlated observations available and possibly unbounded computational power. Under the assumption that the channels are represented by multivariate complex Gaussian variables, it is proved that the tightest bound corresponds to a forging strategy that produces a zero-mean signal that is jointly Gaussian with the attacker's observations. A characterization of their joint covariance matrix is derived through the solution of a system of two nonlinear matrix equations. Based upon this characterization, the thesis proposes an efficient iterative algorithm for its computation: the solution of the matrix system appears as a fixed point of the iteration. Numerical examples suggest that this procedure is effective in assessing worst-case channel authentication performance.
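The coupled matrix equations themselves are specific to the Gaussian authentication model, but the computational pattern is a plain fixed-point iteration x ← F(x). A generic sketch follows, illustrated on a classic matrix fixed point (Newton's iteration for the SPD matrix square root); the names and the example map are illustrative, not the thesis's equations.

```python
import numpy as np

def fixed_point(f, x0, tol=1e-12, max_iter=200):
    """Iterate x <- f(x) until the update stalls; returns the approximate fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# illustration: X = (X + A X^{-1}) / 2 has the principal square root of SPD A as its fixed point
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
a = m @ m.T + 4 * np.eye(4)                                    # SPD test matrix
x = fixed_point(lambda x: 0.5 * (x + a @ np.linalg.inv(x)), np.eye(4))
assert np.allclose(x @ x, a)
```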
Structured Sub-Nyquist Sampling with Applications in Compressive Toeplitz Covariance Estimation, Super-Resolution and Phase Retrieval
Sub-Nyquist sampling has received a huge amount of interest in the past decade. In classical compressed sensing theory, if the measurement procedure satisfies a condition known as the Restricted Isometry Property (RIP), we can achieve stable recovery of signals with low-dimensional intrinsic structure using an order-wise optimal sample size. Such low-dimensional structures include sparsity and low rank, in both the vector and matrix cases. The main drawback of conventional compressed sensing theory is that random measurements are required to ensure the RIP. However, in many applications, such as imaging and array signal processing, applying independent random measurements may not be practical because the systems are deterministic. Moreover, compressed sensing based on random measurements relies on convex programs for signal recovery even in the noiseless case, and solving those programs is computationally intensive when the ambient dimension is large, especially in the matrix case.
The main contribution of this dissertation is a deterministic sub-Nyquist sampling framework for compressing structured signals, together with computationally efficient algorithms. Besides the widely studied sparse and low-rank structures, we focus in particular on cases where the signals of interest are stationary or the measurements are of Fourier type. The key difference between our work and classical compressed sensing theory is that we explicitly exploit the second-order statistics of the signals and study the equivalent quadratic measurement model in the correlation domain. The essential observation made in this dissertation is that a difference/sum coarray structure arises from the quadratic model when the measurements are of Fourier type. With these observations, we are able to achieve a better compression rate for covariance estimation, identify more sources in array signal processing, and recover signals of larger sparsity.
We first study the problem of Toeplitz covariance estimation; in particular, we show how to achieve an order-wise optimal compression rate using the idea of sparse arrays, in both the general and low-rank cases. Then, an analysis framework for super-resolution with a positivity constraint is established; we present fundamental robustness guarantees, efficient algorithms, and applications in practice. Next, we study the phase-retrieval problem, to which we successfully apply sparse array ideas by fully exploiting the quadratic measurement model; we achieve near-optimal sample complexity in both the sparse and general cases with practical Fourier measurements, and provide efficient, deterministic recovery algorithms. We then further elaborate on the essential role of the non-negativity constraint in underdetermined inverse problems: we analyse the nonlinear co-array interpolation problem and develop a universal upper bound on the interpolation error, and we consider the bilinear problem with a non-negativity constraint, establishing an exact characterization of the ambiguous solutions for the first time in the literature. Finally, we show how to apply the nested array idea to real problems such as Kriging; using spatial correlation information, we obtain a stable estimate of the field of interest with fewer sensors than classical methodologies require. Extensive numerical experiments demonstrate our theoretical claims.
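To make the coarray idea concrete: snapshots observed on a sparse array whose difference coarray covers lags 0..n−1 suffice to estimate an n × n Toeplitz covariance, by averaging sample-covariance entries that share a coarray lag. A minimal sketch follows (the positions and names are illustrative, and this plain lag-averaging estimator stands in for the dissertation's order-wise optimal one):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_sparse_array(snapshots, positions, n):
    """Estimate an n x n Hermitian Toeplitz covariance from snapshots observed
    only at a sparse set of integer sensor positions."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sparse-array sample covariance
    lags = np.zeros(n, dtype=complex)
    counts = np.zeros(n)
    for i, p in enumerate(positions):
        for j, q in enumerate(positions):
            if 0 <= p - q < n:
                lags[p - q] += R[i, j]        # pool entries sharing a coarray difference
                counts[p - q] += 1
    lags /= np.maximum(counts, 1)
    return toeplitz(lags, lags.conj())        # first column / first row of the estimate

# e.g. positions [0, 1, 2, 5, 8]: five sensors whose differences cover lags 0..8 (n = 9)
```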
An Examination of Some Significant Approaches to Statistical Deconvolution
We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals.
For BD, we review some major classical and new methods in a unified framework for non-Gaussian signals. The first class of algorithms we examine is that of Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between them despite their differing origins and motivations. We give new theoretical results concerning the behaviour and generality of these algorithms, and present evidence of scenarios where they may fail. In some cases, we propose new modifications to the algorithms to overcome these shortfalls.
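For orientation, the best-known member of the MED family is Wiggins' kurtosis-maximising iteration: filter the data, cube the output, and re-solve the normal equations. The sketch below follows that classic recipe; the filter length, initialisation, and iteration count are illustrative, and this is the textbook algorithm rather than the modified versions developed in the thesis.

```python
import numpy as np

def med_wiggins(x, L=32, n_iter=50):
    """Wiggins-style Minimum Entropy Deconvolution: find an FIR filter f whose
    output y = X f has maximal normalised kurtosis (a spiky, non-Gaussian output)."""
    X = np.lib.stride_tricks.sliding_window_view(x, L)   # rows are length-L input windows
    R = X.T @ X                                          # input autocorrelation matrix
    f = np.zeros(L)
    f[L // 2] = 1.0                                      # delayed-spike initialisation
    for _ in range(n_iter):
        y = X @ f                                        # current deconvolved output
        g = X.T @ y**3                                   # cross-moment with the cubed output
        f = np.linalg.solve(R, g)                        # re-solve the normal equations
        f /= np.linalg.norm(f)                           # fix the usual BD scale ambiguity
    return f
```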
Following our discussion of the MED algorithms, we next examine a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions. We compare its BD performance with that of the MED algorithms. We find that BD carried out via correntropy-matching cannot be straightforwardly interpreted as simultaneous moment-matching, owing to the breakdown of the correntropy expansion in terms of moments. Other issues, such as the maximum/minimum phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques.
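For reference, the (auto)correntropy under a Gaussian kernel is V(τ) = E[exp(−(x_t − x_{t+τ})² / 2σ²)]; its Taylor expansion mixes all even moments at once, which is the breakdown of a clean moment interpretation alluded to above. A minimal sample estimator (the kernel width σ is an illustrative choice):

```python
import numpy as np

def correntropy(x, max_lag, sigma=1.0):
    """Sample autocorrentropy with a Gaussian kernel, for lags 0..max_lag."""
    v = np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        d = x[: len(x) - tau] - x[tau:]              # pairs separated by lag tau
        v[tau] = np.mean(np.exp(-d**2 / (2 * sigma**2)))
    return v
```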
For the problem of RD, we give a categorisation of the different kinds of uncertainties encountered in estimation and discuss the techniques required to solve each individual case. Primarily, we tackle the overlooked cases of robustifying deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility and gives improved RD relative to the standard plug-in Wiener Filter (WF).
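For context, the plug-in Wiener filter referred to above deconvolves in the frequency domain using estimated spectra: W(ω) = H*(ω) S_x(ω) / (|H(ω)|² S_x(ω) + S_n(ω)). A minimal sketch follows (the MWF modifies this design for robustness; the names and PSD inputs here are illustrative):

```python
import numpy as np

def wiener_deconvolve(y, h, signal_psd, noise_psd):
    """Plug-in Wiener deconvolution: y is the blurred/noisy observation, h the
    (estimated) blur response, and the PSDs are length-len(y) spectral estimates."""
    n = len(y)
    H = np.fft.fft(h, n)
    W = np.conj(H) * signal_psd / (np.abs(H) ** 2 * signal_psd + noise_psd)  # Wiener gain
    return np.real(np.fft.ifft(W * np.fft.fft(y)))   # estimate of the unblurred signal
```

When the plugged-in h or S_x is itself an estimate, errors in it propagate into W, which is exactly the robustness gap the RD methods above address.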