Recovery of Missing Samples Using Sparse Approximation via a Convex Similarity Measure
In this paper, we study the missing sample recovery problem using methods
based on sparse approximation. In this regard, we investigate the algorithms
used for solving the inverse problem associated with the restoration of missing
samples of an image signal. This problem is also known as inpainting in the
context of image processing, and for this purpose we suggest an iterative
sparse recovery algorithm based on constrained l1-norm minimization with a
new fidelity metric. The proposed metric, called the Convex SIMilarity (CSIM)
index, is a simplified version of the Structural SIMilarity (SSIM) index that
is convex and error-sensitive. The optimization problem incorporating this
criterion is then solved via the Alternating Direction Method of Multipliers
(ADMM). Simulation results show the efficiency of the proposed method for
missing-sample recovery of 1D patch vectors and inpainting of 2D image signals.
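The recovery loop described above can be sketched in a few lines of numpy. This is a deliberately simplified stand-in, not the paper's method: it assumes sparsity in a DCT basis and alternates hard thresholding with a data-consistency projection, in place of the CSIM-regularized ADMM iteration; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 128, 5                      # signal length, assumed DCT sparsity

# Orthonormal DCT-II matrix (rows are basis vectors)
n = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
D[0, :] /= np.sqrt(2.0)

# Ground-truth signal: k significant DCT coefficients
c_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
c_true[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
x_true = D.T @ c_true

# Observe ~70% of the samples; the rest are missing
mask = rng.random(N) < 0.7
y = x_true * mask

# Alternating projections: DCT-domain hard thresholding + data consistency
x = y.copy()
for _ in range(300):
    c = D @ x
    keep = np.argsort(np.abs(c))[-k:]   # keep the k largest coefficients
    c_thr = np.zeros(N)
    c_thr[keep] = c[keep]
    x = D.T @ c_thr
    x[mask] = y[mask]                   # re-impose the observed samples

err_init = np.linalg.norm(y - x_true)   # error of zero-filled signal
err_final = np.linalg.norm(x - x_true)
```

With a signal this sparse relative to the number of observed samples, the missing entries are recovered almost exactly; the paper's CSIM fidelity would replace the plain data-consistency step.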
Sparse Probit Linear Mixed Model
Linear Mixed Models (LMMs) are important tools in statistical genetics. When
used for feature selection, they allow one to find a sparse set of genetic traits
that best predict a continuous phenotype of interest, while simultaneously
correcting for various confounding factors such as age, ethnicity and
population structure. Formulated as models for linear regression, LMMs have
been restricted to continuous phenotypes. We introduce the Sparse Probit Linear
Mixed Model (Probit-LMM), where we generalize the LMM modeling paradigm to
binary phenotypes. A technical challenge is that the model no longer possesses a
closed-form likelihood function. In this paper, we present a scalable
approximate inference algorithm that lets us fit the model to high-dimensional
data sets. We show on three real-world examples from different domains that in
the setup of binary labels, our algorithm leads to better prediction accuracies
and also selects features which show less correlation with the confounding
factors.

Comment: Published version, 21 pages, 6 figures
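To make concrete why the probit likelihood requires iterative fitting, here is a minimal numpy sketch of plain probit regression by gradient ascent on the log-likelihood. It omits the correlated random effect (the "mixed" part) and the sparsity penalty that are the paper's actual contributions, and all variable names are hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, d = 800, 5

# Synthetic data from a probit model: P(y = +1 | x) = Phi(x . w)
w_true = np.array([1.5, -2.0, 0.0, 1.0, 0.5])
X = rng.normal(size=(n, d))
phi_cdf = lambda z: 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
y = np.where(rng.random(n) < phi_cdf(X @ w_true), 1.0, -1.0)

# Gradient ascent on ll(w) = sum_i log Phi(y_i * x_i . w);
# there is no closed-form maximizer, unlike least squares
w = np.zeros(d)
for _ in range(300):
    z = y * (X @ w)
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    cdf = np.clip(phi_cdf(z), 1e-10, 1.0)
    grad = X.T @ (y * pdf / cdf)        # d ll / d w
    w += 0.5 / n * grad

cos_sim = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
```

The fitted direction closely aligns with the true weights; the Probit-LMM additionally has to integrate out the random effect, which is what forces the approximate inference scheme the abstract mentions.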
Scalable and Robust Community Detection with Randomized Sketching
This paper explores and analyzes the unsupervised clustering of large
partially observed graphs. We propose a scalable and provable randomized
framework for clustering graphs generated from the stochastic block model. The
clustering is first applied to a sub-matrix of the graph's adjacency matrix
associated with a reduced graph sketch constructed using random sampling. Then,
the clusters of the full graph are inferred based on the clusters extracted
from the sketch using a correlation-based retrieval step. Uniform random node
sampling is shown to improve the computational complexity over clustering of
the full graph when the cluster sizes are balanced. A new random degree-based
node sampling algorithm is presented which significantly improves upon the
performance of the clustering algorithm even when clusters are unbalanced. This
algorithm improves the phase transitions for matrix-decomposition-based
clustering with regard to computational complexity and minimum cluster size,
which are shown to be nearly dimension-free in the low inter-cluster
connectivity regime. A third sampling technique is shown to improve balance by
randomly sampling nodes based on spatial distribution. We provide analysis and
numerical results using a convex clustering algorithm based on matrix
completion.
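The sketch-then-retrieve pipeline can be illustrated on a toy stochastic block model. This is a simplified stand-in for the paper's framework: it uses plain spectral clustering on the sampled sub-adjacency and a normalized edge-count rule for the retrieval step, not the authors' matrix-completion-based clustering, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_in, p_out = 200, 0.5, 0.05
labels = np.repeat([0, 1], n // 2)       # two balanced planted clusters

# Adjacency matrix of a stochastic block model graph
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T

# Sketch: uniformly sample a subset of nodes
m = 60
S = rng.choice(n, m, replace=False)
A_S = A[np.ix_(S, S)]

# Cluster the sketch: sign of the second-leading eigenvector
vals, vecs = np.linalg.eigh(A_S)
sketch_labels = (vecs[:, -2] > 0).astype(int)

# Retrieval: assign every node of the full graph to the sketch
# cluster it connects to most, normalized by cluster size
sizes = np.array([max((sketch_labels == c).sum(), 1) for c in (0, 1)])
conn = np.stack(
    [A[:, S[sketch_labels == c]].sum(axis=1) for c in (0, 1)], axis=1)
pred = np.argmax(conn / sizes, axis=1)

# Accuracy up to label permutation
acc = max((pred == labels).mean(), (pred != labels).mean())
```

Only the m-by-m sketch is ever factorized; the remaining nodes are labeled by a linear-time pass over their edges to the sketch, which is the source of the computational savings the abstract describes.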
Fast Robust PCA on Graphs
Mining useful clusters from high-dimensional data has received significant
attention from the computer vision and pattern recognition community in
recent years. Linear and non-linear dimensionality reduction have played an
important role in overcoming the curse of dimensionality. However, such
methods are often accompanied by three different problems: high computational
complexity (usually associated with the nuclear norm minimization),
non-convexity (for matrix factorization methods) and susceptibility to gross
corruptions in the data. In this paper we propose a principal component
analysis (PCA) based solution that overcomes these three issues and
approximates a low-rank recovery method for high dimensional datasets. We
target the low-rank recovery by enforcing two types of graph smoothness
assumptions, one on the data samples and the other on the features by designing
a convex optimization problem. The resulting algorithm is fast, efficient and
scalable to huge datasets, with O(n log n) computational complexity in the
number of data samples. It is also robust to gross corruptions in the dataset
as well as to the model parameters. Clustering experiments on 7 benchmark
datasets with different types of corruptions and background separation
experiments on 3 video datasets show that our proposed model outperforms 10
state-of-the-art dimensionality reduction models. Our theoretical analysis
proves that the proposed model is able to recover approximate low-rank
representations with a bounded error for clusterable data.
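The sample-graph smoothness idea can be sketched with its squared-loss (Tikhonov) variant, which has a closed-form minimizer: argmin_U ||U - X||_F^2 + g tr(U L U^T) = X (I + g L)^{-1}. This shows only the graph-filtering core under assumed parameters; the paper's model uses an l1 fidelity for robustness to gross corruptions plus a second graph on the features, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_per = 10, 40
n = 2 * n_per

# Two well-separated clusters of samples, corrupted by noise (X is d x n)
centers = np.stack([np.full(d, 3.0), np.full(d, -3.0)])
clean = np.repeat(centers, n_per, axis=0).T
Xn = clean + 0.5 * rng.normal(size=(d, n))

# k-NN graph on the (noisy) samples
k = 5
D2 = ((Xn.T[:, None, :] - Xn.T[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(D2, np.inf)
W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(D2[i])[:k]] = 1.0
W = np.maximum(W, W.T)                           # symmetrize
L = np.diag(W.sum(1)) - W                        # combinatorial Laplacian

# Graph-smooth approximation: U = X (I + g L)^{-1}
g = 1.0
U = np.linalg.solve(np.eye(n) + g * L, Xn.T).T

err_noisy = np.linalg.norm(Xn - clean)
err_smooth = np.linalg.norm(U - clean)
```

Because k-NN edges stay within clusters when the clusters are well separated, the filter averages each sample with its neighbors and attenuates the noise while preserving the cluster structure, which is the intuition behind the graph-smoothness regularizers in the abstract.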