Sparse Estimation using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been
used to model sparsity-inducing priors that realize a class of concave penalty
functions for the regression task in real-valued signal models. Motivated by
the relative scarcity of formal tools for SBL in complex-valued models, this
paper proposes a GSM model - the Bessel K model - that induces concave penalty
functions for the estimation of complex sparse signals. The properties of the
Bessel K model are analyzed when it is applied to Type I and Type II
estimation. This analysis reveals that, by tuning the parameters of the mixing
pdf, different penalty functions are invoked depending on the estimation type
used, the value of the noise variance, and whether real or complex signals are
estimated. Using the Bessel K model, we derive a sparse estimator based on a
modification of the expectation-maximization algorithm formulated for Type II
estimation. The estimator includes as special instances the algorithms
proposed by Tipping and Faul [1] and by Babacan et al. [2]. Numerical results
show the superiority of the proposed estimator over these state-of-the-art
estimators in terms of convergence speed, sparseness, reconstruction error, and
robustness in low and medium signal-to-noise ratio regimes.

Comment: The paper provides a new comprehensive analysis of the theoretical
foundations of the proposed estimators. Minor modification of the title.
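As context for the estimator, the baseline it generalizes is the classical Type II EM recursion for SBL (as in Tipping and Faul [1]). The sketch below is a minimal NumPy rendition of that baseline for a complex linear model y = Phi x + n with independent priors x_i ~ CN(0, gamma_i); the function name and the numerical floor are illustrative assumptions, and the Bessel K estimator would replace the gamma update with one derived from its mixing pdf rather than this plain evidence-maximization step.

```python
import numpy as np

def sbl_type2_em(Phi, y, noise_var, n_iter=50, gamma_floor=1e-10):
    """Baseline Type II EM for sparse Bayesian learning.

    Model: y = Phi @ x + n with n ~ CN(0, noise_var * I) and
    independent priors x_i ~ CN(0, gamma_i); the hyperparameters
    gamma are learned by evidence maximization.
    """
    M, N = Phi.shape
    gamma = np.ones(N)
    mu = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        # E-step: Gaussian posterior of x given the current gammas.
        A = Phi.conj().T @ Phi / noise_var + np.diag(1.0 / gamma)
        Sigma = np.linalg.inv(A)
        mu = Sigma @ (Phi.conj().T @ y) / noise_var
        # M-step: per-coefficient variance update. The Bessel K
        # estimator would replace this line with an update derived
        # from the mixing pdf of its GSM prior.
        gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))
        gamma = np.maximum(gamma, gamma_floor)  # avoid division by zero
    return mu, gamma
```

Coefficients whose gamma_i collapse toward the floor are effectively pruned from the model, which is the mechanism that yields sparse estimates.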
Testing for Homogeneity in Mixture Models
Statistical models of unobserved heterogeneity are typically formalized as
mixtures of simple parametric models and interest naturally focuses on testing
for homogeneity versus general mixture alternatives. Many tests of this type
can be interpreted as C(alpha) tests, as in Neyman (1959), and shown to be
locally, asymptotically optimal. These tests will be contrasted
with a new approach to likelihood ratio testing for general mixture models. The
latter tests are based on estimation of a general nonparametric mixing
distribution with the Kiefer and Wolfowitz (1956) maximum likelihood estimator.
Recent developments in convex optimization have dramatically improved upon
earlier EM methods for computation of these estimators, and recent results on
the large sample behavior of likelihood ratios involving such estimators yield
a tractable form of asymptotic inference. Improvements in computational
efficiency also facilitate the use of bootstrap methods to determine critical
values, which are shown to work better than the asymptotic critical values in
finite samples. Consistency of the bootstrap procedure is also formally
established. We compare the performance of the two approaches, identifying
circumstances in which each is preferred.
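As a concrete illustration of the Kiefer and Wolfowitz (1956) estimator the abstract refers to, here is a minimal sketch for a Gaussian location mixture: the mixing distribution is discretized on a fixed grid and the (convex) log-likelihood is maximized over the probability-simplex weights. The grid size, the fixed scale sigma, and the function name are assumptions for illustration; production implementations use interior-point solvers rather than a general-purpose optimizer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def kw_npmle(x, grid_size=100, sigma=1.0):
    """Kiefer-Wolfowitz NPMLE of the mixing distribution in a
    Gaussian location mixture, discretized on a fixed grid."""
    grid = np.linspace(x.min(), x.max(), grid_size)
    # A[i, j] = likelihood of observation i under grid point j.
    A = norm.pdf(x[:, None], loc=grid[None, :], scale=sigma)

    def neg_loglik(w):
        # Convex in w: negative log of a nonnegative combination.
        return -np.sum(np.log(A @ w + 1e-300))

    w0 = np.full(grid_size, 1.0 / grid_size)
    res = minimize(
        neg_loglik, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * grid_size,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return grid, res.x  # support points and estimated mixing masses
```

A likelihood-ratio statistic for homogeneity then compares the mixture log-likelihood at this estimate against the best single-component fit, with bootstrap critical values as the abstract describes.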
A Comparison of Nature Inspired Algorithms for Multi-threshold Image Segmentation
In the field of image analysis, segmentation is one of the most important
preprocessing steps. One way to achieve segmentation is by means of threshold
selection, where each pixel that belongs to a given class is labeled
according to the selected threshold, producing groups of pixels that share
visual characteristics in the image. Several methods have been proposed to
solve threshold selection problems; in this work, we use a method based on a
mixture of Gaussian functions to approximate the 1D histogram of a
gray-level image, with parameters calculated using three nature
inspired algorithms (Particle Swarm Optimization, Artificial Bee Colony
Optimization and Differential Evolution). Each Gaussian function approximates
the histogram, representing a pixel class and therefore a threshold point.
Experimental results are shown, comparing the algorithms in quantitative and
qualitative fashion and discussing the main advantages and drawbacks of each
when applied to the multi-threshold problem.

Comment: 16 pages; this is a draft of the final version of the article sent to
the Journal
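To make the pipeline concrete, here is a minimal sketch of the histogram-approximation step using Differential Evolution (SciPy's implementation, standing in for the paper's variant). The bounds, the squared-error objective, and the midpoint threshold rule are illustrative assumptions; the paper derives thresholds from the fitted Gaussians themselves.

```python
import numpy as np
from scipy.optimize import differential_evolution

def thresholds_from_histogram(hist, n_classes=3):
    """Fit a mixture of Gaussians to a normalized 256-bin gray-level
    histogram with Differential Evolution, then derive thresholds."""
    levels = np.arange(256)

    def mixture(params):
        out = np.zeros(256)
        for k in range(n_classes):
            w, m, s = params[3 * k:3 * k + 3]
            out += w * np.exp(-0.5 * ((levels - m) / s) ** 2)
        return out

    def objective(params):
        # Squared error between the Gaussian mixture and the histogram.
        return np.sum((mixture(params) - hist) ** 2)

    # Per class: weight in [0, 1], mean in [0, 255], spread in [1, 80].
    bounds = [(0.0, 1.0), (0.0, 255.0), (1.0, 80.0)] * n_classes
    result = differential_evolution(objective, bounds, seed=0)
    means = sorted(result.x[3 * k + 1] for k in range(n_classes))
    # Midpoints between adjacent class means; the paper instead uses
    # the intersection points of adjacent fitted Gaussians.
    return [(a + b) / 2.0 for a, b in zip(means[:-1], means[1:])]
```

Feeding it hist = np.histogram(image, bins=256, range=(0, 256), density=True)[0] yields n_classes - 1 thresholds; swapping in PSO or ABC changes only how the objective is minimized, which is exactly the comparison the paper carries out.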