Compositional Model based Fisher Vector Coding for Image Classification
Deriving from the gradient vector of a generative model of local features,
Fisher vector coding (FVC) has been identified as an effective coding method
for image classification. Most, if not all, FVC implementations employ the
Gaussian mixture model (GMM) to depict the generation process of local
features. However, the representative power of the GMM could be limited because
it essentially assumes that local features can be characterized by a fixed
number of feature prototypes and the number of prototypes is usually small in
FVC. To handle this limitation, in this paper we break the convention which
assumes that a local feature is drawn from one of few Gaussian distributions.
Instead, we adopt a compositional mechanism which assumes that a local feature
is drawn from a Gaussian distribution whose mean vector is composed as the
linear combination of multiple key components and the combination weight is a
latent random variable. In this way, we can greatly enhance the representative
power of the generative model of FVC. To implement our idea, we designed two
particular generative models with such a compositional mechanism.
Comment: Fixed typos. 16 pages. Appearing in IEEE T. Pattern Analysis and
Machine Intelligence (TPAMI).
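The GMM-based coding this paper improves on can be sketched concretely. Below is a minimal mean-gradient Fisher vector, assuming scikit-learn's `GaussianMixture` as the fitted generative model and random toy features; the paper's compositional variant is not shown, only the standard baseline it contrasts with:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(local_feats, gmm):
    """Mean-gradient part of the Fisher vector for one image's local features."""
    gamma = gmm.predict_proba(local_feats)           # (N, K) posteriors
    mu = gmm.means_                                  # (K, D)
    sigma = np.sqrt(gmm.covariances_)                # (K, D), diagonal GMM
    w = gmm.weights_                                 # (K,)
    N = local_feats.shape[0]
    # Normalised gradient of the log-likelihood w.r.t. each component mean.
    diff = (local_feats[:, None, :] - mu[None, :, :]) / sigma[None, :, :]
    fv = (gamma[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    return fv.ravel()                                # (K * D,) image descriptor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # toy "local features"
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)
print(fisher_vector(X, gmm).shape)                   # (32,) = K * D
```

The fixed, small number of prototypes is visible here as `n_components`: the whole image is described only through gradients around K mean vectors, which is exactly the limited representative power the compositional model addresses.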
Relaxed Recovery Conditions for OMP/OLS by Exploiting both Coherence and Decay
We propose extended coherence-based conditions for exact sparse support
recovery using orthogonal matching pursuit (OMP) and orthogonal least squares
(OLS). Unlike standard uniform guarantees, we embed some information about the
decay of the sparse vector coefficients in our conditions. As a result, the
standard condition mu < 1/(2k-1) (where mu denotes the mutual coherence and k
the sparsity level) can be weakened as soon as the non-zero coefficients
obey some decay, both in the noiseless and the bounded-noise scenarios.
Furthermore, the resulting condition is approaching for strongly
decaying sparse signals. Finally, in the noiseless setting, we prove that the
proposed conditions, in particular the bound , are the tightest
achievable guarantees based on mutual coherence.
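The objects in this abstract are easy to make concrete. The sketch below, with an illustrative random dictionary (not from the paper), implements plain OMP and the mutual coherence mu in which the guarantees are stated:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct unit-norm atoms."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy atom selection + least-squares refit."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return sorted(support)

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x = np.zeros(128)
x[[3, 40, 99]] = [5.0, -3.0, 2.0]                    # decaying coefficients
y = A @ x

mu, k = mutual_coherence(A), 3
# The uniform guarantee mu < 1/(2k-1) rarely holds for a random dictionary,
# yet OMP typically still succeeds when the coefficients decay -- the gap
# between uniform and decay-aware conditions that the paper exploits.
print(mu < 1 / (2 * k - 1), omp(A, y, k))
```

This illustrates why embedding decay information matters: the worst-case bound is pessimistic relative to typical behaviour on signals with ordered coefficient magnitudes.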
Exact Recovery Conditions for Sparse Representations with Partial Support Information
We address the exact recovery of a k-sparse vector in the noiseless setting
when some partial information on the support is available. This partial
information takes the form of either a subset of the true support or an
approximate subset including wrong atoms as well. We derive a new sufficient
and worst-case necessary (in some sense) condition for the success of some
procedures based on lp-relaxation, Orthogonal Matching Pursuit (OMP) and
Orthogonal Least Squares (OLS). Our result is based on the coherence "mu" of
the dictionary and relaxes the well-known condition mu<1/(2k-1) ensuring the
recovery of any k-sparse vector in the non-informed setup. It reads
mu<1/(2k-g+b-1) when the informed support is composed of g good atoms and b
wrong atoms. We emphasize that our condition is complementary to some
restricted-isometry based conditions by showing that none of them implies the
other.
Because this mutual coherence condition is common to all procedures, we carry
out a finer analysis based on the Null Space Property (NSP) and the Exact
Recovery Condition (ERC). Connections are established regarding the
characterization of lp-relaxation procedures and OMP in the informed setup.
First, we emphasize that the truncated NSP enjoys an ordering property when p
is decreased. Second, the partial ERC for OMP (ERC-OMP) implies in turn the
truncated NSP for the informed l1 problem, and the truncated NSP for p<1.
Comment: arXiv admin note: substantial text overlap with arXiv:1211.728
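The relaxed coherence bound is simple enough to state in code. The sketch below evaluates mu < 1/(2k-g+b-1) from the abstract, and adds a hypothetical "informed" OMP warm-started with a partial support estimate; the warm-start scheme and the toy dictionary are illustrative assumptions, not the paper's exact procedures:

```python
import numpy as np

def informed_bound(k, g, b):
    """Relaxed coherence bound mu < 1/(2k - g + b - 1): g good informed
    atoms loosen the classical bound, b wrong ones tighten it."""
    return 1.0 / (2 * k - g + b - 1)

def informed_omp(A, y, k, known_atoms):
    """OMP warm-started with a partial support estimate (illustrative sketch)."""
    support = list(known_atoms)
    while True:
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        if len(support) >= k:
            return sorted(support)
        support.append(int(np.argmax(np.abs(A.T @ residual))))

print(informed_bound(5, 0, 0))   # 1/9: the classical non-informed mu < 1/(2k-1)
print(informed_bound(5, 3, 0))   # 1/6: three known-good atoms relax the bound

rng = np.random.default_rng(2)
A = rng.normal(size=(32, 64))
A /= np.linalg.norm(A, axis=0)
x = np.zeros(64)
x[[5, 20, 50]] = [4.0, -2.0, 3.0]
print(informed_omp(A, A @ x, 3, known_atoms=[5, 20]))  # completes the support
```

Note how setting g = k - 1 and b = 0 leaves only one atom to identify, which is why the informed bound can be much weaker than the uniform one.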
Graph learning under sparsity priors
Graph signals offer a very generic and natural representation for data that
lives on networks or irregular structures. The actual data structure is however
often unknown a priori but can sometimes be estimated from the knowledge of the
application domain. If this is not possible, the data structure has to be
inferred from the mere signal observations. This is exactly the problem that we
address in this paper, under the assumption that the graph signals can be
represented as a sparse linear combination of a few atoms of a structured graph
dictionary. The dictionary is constructed on polynomials of the graph
Laplacian, which can sparsely represent a general class of graph signals
composed of localized patterns on the graph. We formulate a graph learning
problem, whose solution provides an ideal fit between the signal observations
and the sparse graph signal model. As the problem is non-convex, we propose to
solve it by alternating between a signal sparse coding and a graph update step.
We provide experimental results that outline the good graph recovery
performance of our method, which generally compares favourably to other recent
network inference algorithms.
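The structured dictionary this abstract builds on can be sketched directly. Below, each subdictionary is a polynomial of the graph Laplacian; the ring graph, degree, and coefficient values are toy assumptions, and the alternating sparse-coding / graph-update loop described in the text is not reproduced here:

```python
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = Deg - Adj."""
    return np.diag(adj.sum(axis=1)) - adj

def polynomial_dictionary(L, coeffs):
    """Stack subdictionaries D_s = sum_k coeffs[s, k] * L^k, one per kernel.
    Each column is an atom localized around one node of the graph."""
    powers = [np.linalg.matrix_power(L, k) for k in range(coeffs.shape[1])]
    subdicts = [sum(c * P for c, P in zip(row, powers)) for row in coeffs]
    return np.hstack(subdicts)

# Toy ring graph on 6 nodes (assumed example, not the paper's data).
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
L = laplacian(adj)

rng = np.random.default_rng(3)
coeffs = rng.normal(size=(2, 3))       # 2 kernels, polynomial degree 2
D = polynomial_dictionary(L, coeffs)   # (6, 12): 2 atoms per node
print(D.shape)
```

In the learning problem of the paper, the observations fix the sparse codes at each step while the polynomial coefficients (and hence the graph) are re-estimated, alternating until the fit stabilises.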
Data-Driven Time-Frequency Analysis
In this paper, we introduce a new adaptive data analysis method to study
trend and instantaneous frequency of nonlinear and non-stationary data. This
method is inspired by the Empirical Mode Decomposition method (EMD) and the
recently developed compressed (compressive) sensing theory. The main idea is to
look for the sparsest representation of multiscale data within the largest
possible dictionary consisting of intrinsic mode functions of the form , where , consists of the
functions smoother than and . This problem can
be formulated as a nonlinear optimization problem. In order to solve this
optimization problem, we propose a nonlinear matching pursuit method by
generalizing the classical matching pursuit for the optimization problem.
One important advantage of this nonlinear matching pursuit method is that it can be
implemented very efficiently and is very stable to noise. Further, we provide a
convergence analysis of our nonlinear matching pursuit method under certain
scale separation assumptions. Extensive numerical examples will be given to
demonstrate the robustness of our method and comparison will be made with the
EMD/EEMD method. We also apply our method to study data without scale
separation, data with intra-wave frequency modulation, and incomplete or
under-sampled data.
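The classical matching pursuit that this method generalizes is short enough to show in full. The sketch below uses an assumed toy dictionary of sampled cosines (not the paper's adaptive intrinsic-mode dictionary, whose atoms are built during the pursuit):

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Classical matching pursuit: each step peels off the single atom
    most correlated with the current residual."""
    residual, coeffs = y.copy(), np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]             # valid update for unit-norm atoms
        residual -= corr[j] * D[:, j]
    return coeffs, residual

t = np.linspace(0, 1, 256, endpoint=False)
D = np.stack([np.cos(2 * np.pi * f * t) for f in range(1, 33)], axis=1)
D /= np.linalg.norm(D, axis=0)           # unit-norm cosine atoms
y = 2.0 * D[:, 4] + 0.5 * D[:, 19]       # two-tone test signal
c, r = matching_pursuit(D, y, 10)
print(np.flatnonzero(np.abs(c) > 1e-6))  # the two tone indices: [ 4 19]
```

The nonlinear generalization in the paper replaces this fixed dictionary with envelopes a(t) and phases theta(t) fitted per iteration, which is what makes the decomposition adaptive to non-stationary data.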