Communications-Inspired Projection Design with Application to Compressive Sensing
We consider the recovery of an underlying signal x \in C^m based on
projection measurements of the form y = Mx + w, where y \in C^l and w is
measurement noise; we are interested in the case l < m. It is assumed that
the signal model p(x) is known and that w ~ CN(w; 0, S_w), for known S_w.
The objective is to design a projection matrix M \in C^(l x m) to maximize
key information-theoretic quantities with operational significance,
including the mutual information between the signal and the projections,
I(x;y), or the Renyi entropy of the projections, h_a(y) (Shannon entropy is
a special case). By capitalizing on explicit characterizations of the
gradients of the information measures with respect to the projection
matrix, where we also partially extend the well-known results of Palomar
and Verdu from the mutual information to the Renyi entropy domain, we
unveil the key operations carried out by the optimal projection designs:
mode exposure and mode alignment. Experiments are considered for the case
of compressive sensing (CS) applied to imagery. In this context, we
demonstrate the performance improvement made possible by the novel
projection designs relative to conventional ones, and we justify a fast
online projection design method with which state-of-the-art adaptive CS
signal recovery is achieved.
Comment: 25 pages, 7 figures; parts of the material published in IEEE
ICASSP 2012; submitted to SIIM
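For a Gaussian signal model and white noise, a special case of the setting above (the paper treats general p(x) and arbitrary S_w), the mutual information reduces to I(x;y) = 0.5 logdet(I + M S_x M^T / sigma^2) with a closed-form gradient in M. The sketch below, with illustrative names and a unit Frobenius-norm power constraint as simplifying assumptions, runs projected gradient ascent on this objective:

```python
import numpy as np

rng = np.random.default_rng(0)
m, l, sigma2 = 8, 3, 0.1

# Hypothetical signal covariance with a few dominant modes (for illustration).
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
Sx = U @ np.diag([5.0, 3.0, 1.0] + [0.05] * (m - 3)) @ U.T

def mutual_info(M):
    # I(x;y) = 0.5 logdet(I + M Sx M^T / sigma2) for Gaussian x, white noise.
    A = np.eye(l) + (M @ Sx @ M.T) / sigma2
    return 0.5 * np.linalg.slogdet(A)[1]

def grad_mi(M):
    # Closed-form gradient of the objective above: A^{-1} M Sx / sigma2.
    A = np.eye(l) + (M @ Sx @ M.T) / sigma2
    return np.linalg.solve(A, M @ Sx) / sigma2

M = rng.standard_normal((l, m))
M /= np.linalg.norm(M)              # unit Frobenius norm (power budget)
mi_start = mutual_info(M)
for _ in range(300):
    M = M + 0.1 * grad_mi(M)        # gradient ascent step
    M /= np.linalg.norm(M)          # re-project onto the power constraint
mi_end = mutual_info(M)
```

The optimized rows typically concentrate on the dominant eigenvectors of Sx, the "mode alignment" behaviour described in the abstract.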
Multi-scale Discriminant Saliency with Wavelet-based Hidden Markov Tree Modelling
Bottom-up saliency, an early stage of human visual attention, can be cast
as a binary classification problem between centre and surround classes.
The discriminant power of features for this classification is measured as
the mutual information between the distributions of image features and the
corresponding classes. Because the estimated discrepancy depends strongly
on the scale considered, multi-scale structure and discriminant power are
integrated by employing discrete wavelet features and a Hidden Markov Tree
(HMT). From the wavelet coefficients and HMT parameters, quad-tree-like
label structures are constructed and used for maximum a posteriori (MAP)
estimation of the hidden class variables at the corresponding dyadic
sub-squares. A saliency value for each square block at each scale level is
then computed with the discriminant power principle. Finally, the saliency
maps across multiple scales are integrated into the final saliency map by
an information maximization rule. Both standard quantitative tools, such
as NSS, LCC and AUC, and qualitative assessments are used to evaluate the
proposed multi-scale discriminant saliency (MDIS) method against the
well-known information-based approach AIM on its released image collection
with eye-tracking data. Simulation results are presented and analysed to
verify the validity of MDIS and to point out its limitations as directions
for further research.
Comment: arXiv admin note: substantial text overlap with arXiv:1301.396
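The discriminant power measure above, mutual information between a feature distribution and the centre/surround class label, can be estimated with a simple plug-in histogram estimator. The sketch below uses illustrative names and plain equal-width binning as a stand-in for the paper's wavelet/HMT machinery:

```python
import numpy as np

def discriminant_power(centre_feats, surround_feats, bins=16):
    """Plug-in estimate of I(F; C): mutual information between a quantized
    feature F and the centre/surround class label C, from histograms."""
    all_feats = np.concatenate([centre_feats, surround_feats])
    edges = np.histogram_bin_edges(all_feats, bins=bins)
    pc, _ = np.histogram(centre_feats, bins=edges)
    ps, _ = np.histogram(surround_feats, bins=edges)
    joint = np.stack([pc, ps], axis=0).astype(float)   # (class, bin) counts
    joint /= joint.sum()                               # joint pmf p(c, f)
    pf = joint.sum(axis=0, keepdims=True)              # feature marginal
    pcl = joint.sum(axis=1, keepdims=True)             # class marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (pf * pcl), 1.0)
    return float(np.sum(joint * np.log2(ratio)))       # MI in bits

# Synthetic demo: centre and surround features from shifted Gaussians.
rng = np.random.default_rng(0)
centre = rng.normal(3.0, 1.0, 5000)
surround = rng.normal(0.0, 1.0, 5000)
dp = discriminant_power(centre, surround)
```

Well-separated feature distributions yield high discriminant power; identically distributed ones yield values near zero (up to estimation bias).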
On the design of linear projections for compressive sensing with side information
In this paper, we study the problem of projection kernel design for the reconstruction of high-dimensional signals from low-dimensional measurements in the presence of side information, assuming that the signal of interest and the side information signal are described by a joint Gaussian mixture model (GMM). In particular, we consider the case where the projection kernel for the signal of interest is random, whereas the projection kernel associated with the side information is designed. We then derive sufficient conditions on the number of measurements needed to guarantee that the minimum mean-squared error (MMSE) tends to zero in the low-noise regime. Our results demonstrate that the use of a designed kernel to capture side information can lead to substantial gains over a random one, in terms of the number of linear projections required for reliable reconstruction.
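For a single Gaussian component (the GMM posterior in such models mixes terms of this form with posterior component weights), the MMSE reconstruction from y = Mx + w has a closed form. A minimal sketch under the simplifying assumptions of one component and white noise, with illustrative names, showing near-exact recovery in the low-noise regime:

```python
import numpy as np

def mmse_gaussian(y, M, mu, Sigma, sigma2):
    """Closed-form MMSE estimate of x from y = M x + w for a Gaussian prior
    x ~ N(mu, Sigma) and white noise w ~ N(0, sigma2 I)."""
    S = M @ Sigma @ M.T + sigma2 * np.eye(M.shape[0])   # Cov(y)
    K = Sigma @ M.T @ np.linalg.inv(S)                  # Wiener gain
    return mu + K @ (y - M @ mu)

# Low-noise demo: a rank-2 signal in R^6 observed via 3 random projections.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 2))
Sigma = B @ B.T                     # low-rank prior covariance (illustrative)
mu = np.zeros(6)
M = rng.standard_normal((3, 6))     # random projection kernel
x = B @ rng.standard_normal(2)      # a signal drawn from the model
sigma2 = 1e-10
y = M @ x + np.sqrt(sigma2) * rng.standard_normal(3)
xhat = mmse_gaussian(y, M, mu, Sigma, sigma2)
err = np.linalg.norm(xhat - x)
```

With more measurements than the effective signal rank and vanishing noise, the MMSE tends to zero, mirroring the abstract's low-noise analysis.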
EM Algorithms for Weighted-Data Clustering with Application to Audio-Visual Scene Analysis
Data clustering has received a lot of attention, and numerous methods,
algorithms and software packages are available. Among these techniques,
parametric finite-mixture models play a central role due to their
interesting mathematical properties and to the existence of
maximum-likelihood estimators based on expectation-maximization (EM). In
this paper we propose a new mixture model that associates a weight with
each observed point. We introduce the weighted-data Gaussian mixture and
derive two EM algorithms. The first one considers a fixed weight for each
observation. The second one treats each weight as a random variable
following a gamma distribution. We propose a model selection method based
on a minimum message length criterion, provide a weight initialization
strategy, and validate the proposed algorithms by comparing them with
several state-of-the-art parametric and non-parametric clustering
techniques. We also demonstrate the effectiveness and robustness of the
proposed clustering technique in the presence of heterogeneous data,
namely audio-visual scene analysis.
Comment: 14 pages, 4 figures, 4 tables
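The first of the two EM algorithms, with fixed per-observation weights, amounts to scaling every sufficient statistic in the M-step by the weight w_n. A minimal sketch with isotropic covariances for brevity (the paper uses full covariances, a gamma prior on the weights for its second algorithm, and MML-based model selection); names are illustrative:

```python
import numpy as np

def weighted_em_gmm(X, w, K, iters=100):
    """EM for a weighted-data Gaussian mixture with fixed per-point weights
    w_n: each point contributes w_n effective counts to the statistics."""
    n, d = X.shape
    mu = X[np.linspace(0, n - 1, K).astype(int)].copy()  # simple spread init
    var = np.full(K, X.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities r_{nk} under isotropic Gaussians.
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)          # (n, K)
        logp = np.log(pi) - 0.5 * d * np.log(var) - d2 / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)                 # stabilize
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights w_n scale every sufficient statistic.
        wr = w[:, None] * r                                     # (n, K)
        Nk = wr.sum(axis=0)
        mu = (wr.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (wr * d2).sum(axis=0) / (d * Nk) + 1e-9
        pi = Nk / Nk.sum()
    return mu, var, pi

# Demo: two well-separated clusters, uniform weights.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)), rng.normal(5.0, 0.5, (200, 2))])
mu, var, pi = weighted_em_gmm(X, np.ones(400), K=2)
```

Down-weighting a point (small w_n) shrinks its influence on the means and variances, which is what makes the model robust to outliers and heterogeneous data.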
Source Separation in the Presence of Side-information
The source separation problem involves the separation of unknown signals from their mixture. It is relevant in a wide range of applications, from audio signal processing, communications and biomedical signal processing to art investigation, to name a few. There is a vast literature on this problem, based either on strong assumptions about the source signals or on the availability of additional data. This thesis proposes new algorithms for source separation with side information, where one observes the linear superposition of two source signals plus two additional signals that are correlated with the mixed ones. The first algorithm is based on two ingredients: first, we learn a Gaussian mixture model (GMM) for the joint distribution of a source signal and the corresponding correlated side information signal; second, we separate the signals using standard, computationally efficient conditional mean estimators. We also put forth new recovery guarantees for this source separation algorithm. In particular, under the assumption that the signals are perfectly described by a GMM, we characterize necessary and sufficient conditions for reliable source separation in the asymptotic low-noise regime as a function of the geometry of the underlying signals and their interaction. It is shown that, provided we observe a sufficient number of linear measurements from the mixture, the sources can be reliably separated if and only if the subspaces spanned by the innovation components of the source signals with respect to the side information signals have zero intersection. The second algorithm is based on deep learning: we introduce a novel self-supervised algorithm for the source separation problem. Source separation is intrinsically unsupervised, and the lack of training data makes it a difficult task for artificial intelligence to solve.
The proposed framework takes advantage of the available data and delivers near-perfect separation results in real-data scenarios. Our proposed frameworks, which provide new ways to incorporate side information to aid the solution of the source separation problem, are also employed in a real-world art investigation application involving the separation of mixtures of X-ray images. The simulation results showcase the superiority of our algorithm over other state-of-the-art algorithms.
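The first algorithm's conditional mean estimator can be illustrated in the jointly Gaussian special case (a single GMM component): stack the mixture y = x1 + x2 with a side information channel s1 = x1 + v and apply standard Gaussian conditioning. The sketch below uses hypothetical diagonal priors and noise level, and shows empirically that side information reduces the separation error:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
S1 = np.diag([3.0, 2.0, 1.0, 0.5])   # hypothetical prior covariance of x1
S2 = np.diag([0.5, 1.0, 2.0, 3.0])   # hypothetical prior covariance of x2
sv = 0.1                             # side-information noise variance (assumed)

def mmse_x1(y, s1=None):
    """Conditional mean of x1 given the mixture y = x1 + x2 and, optionally,
    side information s1 = x1 + v with v ~ N(0, sv I)."""
    if s1 is None:
        obs, Coo, Cxo = y, S1 + S2, S1
    else:
        obs = np.concatenate([y, s1])
        # Cov of stacked [y; s1] and cross-covariance Cov(x1, [y; s1]).
        Coo = np.block([[S1 + S2, S1], [S1, S1 + sv * np.eye(m)]])
        Cxo = np.hstack([S1, S1])
    return Cxo @ np.linalg.solve(Coo, obs)

# Monte Carlo comparison: separation error with vs. without side information.
err_plain, err_side = 0.0, 0.0
for _ in range(2000):
    x1 = rng.multivariate_normal(np.zeros(m), S1)
    x2 = rng.multivariate_normal(np.zeros(m), S2)
    y = x1 + x2
    s1 = x1 + np.sqrt(sv) * rng.standard_normal(m)
    err_plain += np.sum((mmse_x1(y) - x1) ** 2)
    err_side += np.sum((mmse_x1(y, s1) - x1) ** 2)
```

A side channel strongly correlated with x1 sharply reduces the error; this is the Gaussian one-component analogue of the thesis's GMM-based estimator, not the deep-learning algorithm.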