Model-based object recognition from a complex binary imagery using genetic algorithm
This paper describes a technique for model-based object recognition in a noisy and cluttered environment, extending the work presented in an earlier study by the authors. In order to accurately model small, irregularly shaped objects, the model and the image are represented by their binary edge maps, rather than approximating them with straight line segments. The problem is then formulated as finding the best match between a hypothesized object and the image. A special form of template matching is used to cope with the noisy environment, in which the templates are generated on-line by a Genetic Algorithm. In experiments, two complex test images are considered, and comparison of the results with standard techniques indicates scope for further research in this direction.
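The GA-driven template matching described above can be illustrated with a minimal sketch. This is not the authors' implementation: the chromosome here encodes only a translation (no rotation or scale), the fitness is simply the count of overlapping edge pixels, and all names (`ga_match`, `fitness`) and parameter values are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(image, template, pos):
    """Count overlapping edge pixels when the template is placed at pos."""
    y, x = pos
    h, w = template.shape
    H, W = image.shape
    if y < 0 or x < 0 or y + h > H or x + w > W:
        return 0
    return int(np.sum(image[y:y + h, x:x + w] & template))

def ga_match(image, template, pop_size=30, generations=40, mut=3):
    """Evolve a population of candidate placements of the template."""
    H, W = image.shape
    h, w = template.shape
    pop = np.column_stack([rng.integers(0, H - h, pop_size),
                           rng.integers(0, W - w, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(image, template, p) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]          # elitist selection
        children = parents.copy()
        children += rng.integers(-mut, mut + 1, children.shape)  # mutation
        children[:, 0] = np.clip(children[:, 0], 0, H - h)
        children[:, 1] = np.clip(children[:, 1], 0, W - w)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(image, template, p) for p in pop])
    best = pop[np.argmax(scores)]
    return tuple(best), int(scores.max())

# Toy example: hide a small cross-shaped edge pattern in a cluttered image.
template = np.zeros((5, 5), dtype=np.uint8)
template[2, :] = 1
template[:, 2] = 1
image = (rng.random((64, 64)) < 0.05).astype(np.uint8)  # binary clutter
image[30:35, 40:45] |= template                          # embed object
pos, score = ga_match(image, template)
```

A real system would enlarge the chromosome with rotation and scale genes and use a noise-tolerant fitness, but the select-mutate-replace loop is the same.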
Statistical M-Estimation and Consistency in Large Deformable Models for Image Warping
The problem of defining appropriate distances between shapes or images, and of modeling the variability of natural images by group transformations, is at the heart of modern image analysis. A current trend is the study of probabilistic and statistical aspects of deformation models, and the development of consistent statistical procedures for the estimation of template images. In this paper, we consider a set of images randomly warped from a mean template which has to be recovered. To this end, we define an appropriate statistical parametric model to generate random diffeomorphic deformations in two dimensions. We then focus on the problem of estimating the mean pattern when the images are observed with noise. This problem is challenging from both a theoretical and a practical point of view. M-estimation theory enables us to build an estimator defined as a minimizer of a well-tailored empirical criterion. We prove the convergence of this estimator and propose a gradient descent algorithm to compute this M-estimator in practice. Simulations of template extraction and an application to image clustering and classification are also provided.
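As a toy illustration of the M-estimation idea (one-dimensional signals and circular shifts in place of two-dimensional diffeomorphic warps; every name and setting below is an assumption of the sketch, not the paper's criterion), an empirical criterion of the form J(f, tau) = sum_i ||roll(Y_i, -tau_i) - f||^2 can be minimized by alternating alignment and averaging, a block-coordinate form of descent:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 64
grid = np.arange(T)
template = np.exp(-0.5 * ((grid - 32) / 5.0) ** 2)   # true mean pattern

# Observations: randomly (circularly) shifted copies plus Gaussian noise.
n = 40
true_shifts = rng.integers(-8, 9, n)
Y = np.array([np.roll(template, s) + 0.05 * rng.standard_normal(T)
              for s in true_shifts])

def estimate_template(Y, n_iter=10, max_shift=10):
    """Alternate two descent steps on J(f, tau):
    (1) align each observation to the current template f,
    (2) update f as the average of the aligned observations."""
    f = Y.mean(axis=0)
    for _ in range(n_iter):
        aligned = []
        for y in Y:
            cand = list(range(-max_shift, max_shift + 1))
            errs = [np.sum((np.roll(y, -s) - f) ** 2) for s in cand]
            aligned.append(np.roll(y, -cand[int(np.argmin(errs))]))
        f = np.mean(aligned, axis=0)
    return f

f_hat = estimate_template(Y)

# The estimator is defined up to a global shift; compare at the best one.
corrs = [np.dot(np.roll(f_hat, s), template) /
         (np.linalg.norm(f_hat) * np.linalg.norm(template))
         for s in range(-10, 11)]
best_corr = max(corrs)
```

The identifiability caveat in the last step is generic: any deformation model of this kind recovers the template only up to the action of the deformation group.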
A simple and objective method for reproducible resting state network (RSN) detection in fMRI
Spatial Independent Component Analysis (ICA) decomposes the time by space
functional MRI (fMRI) matrix into a set of 1-D basis time courses and their
associated 3-D spatial maps that are optimized for mutual independence. When
applied to resting state fMRI (rsfMRI), ICA produces several spatial
independent components (ICs) that seem to have biological relevance - the
so-called resting state networks (RSNs). The ICA problem is well posed when the
true data generating process follows a linear mixture of ICs model in terms of
the identifiability of the mixing matrix. However, the contrast function used
for promoting mutual independence in ICA is dependent on the finite amount of
observed data and is potentially non-convex with multiple local minima. Hence,
each run of ICA could produce potentially different IC estimates even for the
same data. One technique to deal with this run-to-run variability of ICA was
proposed by Yang et al. (2008) in their algorithm RAICAR which allows for the
selection of only those ICs that have a high run-to-run reproducibility. We
propose an enhancement to the original RAICAR algorithm that enables us to
assign reproducibility p-values to each IC and allows for an objective
assessment of both within subject and across subjects reproducibility. We call
the resulting algorithm RAICAR-N (N stands for null hypothesis test), and we
have applied it to publicly available human rsfMRI data (http://www.nitrc.org).
Our reproducibility analyses indicated that many of the published RSNs in
rsfMRI literature are highly reproducible. However, we found several other RSNs
that are highly reproducible but not frequently listed in the literature.
Comment: 54 pages, 13 figures
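The run-to-run matching step underlying RAICAR can be sketched as follows. This is a simplified stand-in, not the published algorithm: two ICA "runs" are simulated as permuted, sign-flipped, noisy copies of the same spatial maps (ICA recovers components only up to permutation and sign), and components are matched greedily by absolute spatial correlation. The function name `match_components` and all sizes are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def match_components(A, B):
    """Greedily pair rows of A with rows of B by absolute correlation.
    A, B: (k, n_voxels) spatial maps from two ICA runs.
    Returns (i, j, |corr|) triples, best match first."""
    Az = (A - A.mean(1, keepdims=True)) / A.std(1, keepdims=True)
    Bz = (B - B.mean(1, keepdims=True)) / B.std(1, keepdims=True)
    C = np.abs(Az @ Bz.T) / A.shape[1]          # |correlation| matrix
    pairs = []
    for _ in range(A.shape[0]):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        pairs.append((int(i), int(j), float(C[i, j])))
        C[i, :] = -1.0                           # retire matched row/column
        C[:, j] = -1.0
    return pairs

# Simulate: run 2 recovers the same 4 maps, permuted and sign-flipped.
k, v = 4, 500
run1 = rng.standard_normal((k, v))
perm = np.array([2, 0, 3, 1])
signs = np.array([1, -1, 1, -1])[:, None]
run2 = signs * run1[perm] + 0.1 * rng.standard_normal((k, v))

pairs = match_components(run1, run2)
```

Per the abstract, RAICAR-N's contribution is to go one step further and convert such reproducibility scores into p-values against a null distribution, so that component selection becomes an objective hypothesis test rather than a threshold choice.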
Darth Fader: Using wavelets to obtain accurate redshifts of spectra at very low signal-to-noise
We present the DARTH FADER algorithm, a new wavelet-based method for
estimating redshifts of galaxy spectra in spectral surveys that is particularly
adept in the very low SNR regime. We use a standard cross-correlation method to
estimate the redshifts of galaxies, using a template set built using a PCA
analysis on a set of simulated, noise-free spectra. Darth Fader employs wavelet
filtering both to estimate the continuum and to extract prominent line features
in each galaxy spectrum. A simple selection criterion based on the number of
features present in the spectrum is then used to clean the catalogue: galaxies
with fewer than six total features are removed as we are unlikely to obtain a
reliable redshift estimate. Applying our wavelet-based cleaning algorithm to a
simulated testing set, we successfully build a clean catalogue including
extremely low signal-to-noise data (SNR=2.0), for which we are able to obtain a
5.1% catastrophic failure rate in the redshift estimates (compared with 34.5%
prior to cleaning). We also show that for a catalogue with uniformly mixed SNRs
between 1.0 and 20.0, with realistic pixel-dependent noise, it is possible to
obtain redshifts with a catastrophic failure rate of 3.3% after cleaning (as
compared to 22.7% before cleaning). Whilst we do not test this algorithm
exhaustively on real data, we present a proof of concept of the applicability
of this method to real data, showing that the wavelet filtering techniques
perform well when applied to some typical spectra from the SDSS archive. The
Darth Fader algorithm provides a robust method for extracting spectral features
from very noisy spectra. The resulting clean catalogue gives an extremely low
rate of catastrophic failures, even when the spectra have a very low SNR. For
very large sky surveys, this technique may offer a significant boost in the
number of faint galaxies with accurately determined redshifts.
Comment: 22 pages, 15 figures. Accepted for publication in Astronomy & Astrophysics
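The core cross-correlation step can be sketched in isolation: on a log-wavelength grid, redshifting a spectrum by z is a rigid translation by log10(1+z), so the best integer shift of a rest-frame template against the observed spectrum yields z. The grid, emission-line list, and noise level below are illustrative assumptions, not Darth Fader's actual pipeline (which additionally applies wavelet filtering, PCA templates, and the feature-count cleaning cut).

```python
import numpy as np

rng = np.random.default_rng(3)

# Log10-wavelength grid: a redshift z shifts a spectrum by a constant
# log10(1 + z), so redshift estimation becomes a 1-D translation search.
step = 1e-4
n = 3000
loglam = 3.55 + step * np.arange(n)   # roughly 3548-7078 Angstroms

def make_template():
    """Toy rest-frame template: flat continuum plus a few emission lines."""
    lines = [3727.0, 4861.0, 5007.0]  # e.g. [OII], H-beta, [OIII]
    spec = np.ones(n)
    for l in lines:
        spec += 5.0 * np.exp(-0.5 * ((loglam - np.log10(l)) / (3 * step)) ** 2)
    return spec

def estimate_z(observed, template, z_max=0.5):
    """Cross-correlate in log-lambda; the best integer shift gives z."""
    t = template - template.mean()
    o = observed - observed.mean()
    shifts = np.arange(0, int(np.log10(1 + z_max) / step))
    cc = [np.dot(o[s:], t[:n - s]) for s in shifts]
    s_best = shifts[int(np.argmax(cc))]
    return 10 ** (s_best * step) - 1

template = make_template()
z_true = 0.123
shift = int(round(np.log10(1 + z_true) / step))
observed = np.roll(template, shift)       # redshifted copy
observed[:shift] = 1.0                    # pad the blue end with continuum
observed += 0.5 * rng.standard_normal(n)  # heavy noise per pixel

z_hat = estimate_z(observed, template)
```

In the low-SNR regime the abstract targets, this raw cross-correlation is exactly the step that benefits from the wavelet denoising and the six-feature cleaning criterion: spectra whose denoised feature count falls below the cut are discarded before any redshift is trusted.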