Stochastic Algorithm For Parameter Estimation For Dense Deformable Template Mixture Model
Estimating probabilistic deformable template models is a new approach in the
fields of computer vision and of probabilistic atlas building in computational anatomy. A
first coherent statistical framework modelling the variability as a hidden
random variable was given by Allassonni\`ere, Amit and Trouv\'e in [1], for
both single-component and mixture deformable template models. A consistent
stochastic algorithm was introduced in [2] to address the convergence problems
encountered in [1] for the one-component model in the presence of noise. Here
we pursue this direction, using an "SAEM-like" algorithm to approximate the
MAP estimator in the general Bayesian setting of mixtures of deformable
template models. We also prove the convergence of this algorithm toward a
critical point of the penalised likelihood of the observations, and
illustrate the method on handwritten digit images.
Construction of Bayesian Deformable Models via Stochastic Approximation Algorithm: A Convergence Study
The problem of the definition and the estimation of generative models based
on deformable templates from raw data is of particular importance for modelling
non-aligned data affected by various types of geometrical variability. This is
especially true in shape modelling in the computer vision community or in
probabilistic atlas building for Computational Anatomy (CA). A first coherent
statistical framework modelling the geometrical variability as hidden variables
has been given by Allassonni\`ere, Amit and Trouv\'e (JRSS 2006). Setting the
problem in a Bayesian context, they proved the consistency of the MAP estimator
and provided a simple iterative deterministic algorithm with an EM flavour
leading to some reasonable approximations of the MAP estimator under low noise
conditions. In this paper we present a stochastic algorithm for approximating
the MAP estimator in the spirit of the SAEM algorithm. We prove its convergence
to a critical point of the observed likelihood with an illustration on images
of handwritten digits.
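The SAEM recipe described above (simulate the hidden variables, average their sufficient statistics with decreasing Robbins-Monro step sizes, then maximise) can be sketched on a toy model. The model below, a scalar Gaussian observation with a hidden mean shift, and all parameter values are illustrative assumptions only, far simpler than the deformable-template setting of the paper:

```python
import random

def saem_gaussian_mean(y, sigma_z2=1.0, sigma2=0.25, n_iter=2000, seed=0):
    """SAEM for the toy model y_i = z_i + eps_i, with hidden z_i ~ N(mu, sigma_z2)
    and noise eps_i ~ N(0, sigma2); only mu is unknown.  (Hypothetical toy model,
    not the paper's deformable-template likelihood.)"""
    rng = random.Random(seed)
    mu, s = 0.0, 0.0
    # For this conjugate toy model the posterior of z_i is an explicit Gaussian.
    post_var = sigma2 * sigma_z2 / (sigma2 + sigma_z2)
    for k in range(1, n_iter + 1):
        # Simulation step: draw the hidden variables from their posterior.
        zs = []
        for yi in y:
            post_mean = (sigma2 * mu + sigma_z2 * yi) / (sigma2 + sigma_z2)
            zs.append(rng.gauss(post_mean, post_var ** 0.5))
        # Stochastic approximation of the sufficient statistic (Robbins-Monro
        # step sizes gamma_k = 1/k, which satisfy the usual summability conditions).
        gamma = 1.0 / k
        s += gamma * (sum(zs) / len(zs) - s)
        # Maximisation step: for this model the update is simply mu = s.
        mu = s
    return mu
```

For this model the fixed point of the iteration is the marginal maximum-likelihood estimate, i.e. the sample mean of the observations.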
Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption
In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD), i.e. the number of bit errors, between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the tradeoff between the false rejection rate (FRR) and the false acceptance rate (FAR). A method to improve the FRR consists of using multiple biometric samples in either the enrollment or verification phase: the noise is suppressed, reducing the number of bit errors and hence the HD. In practice, the number of samples is empirically chosen without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples used in the enrollment and verification phases. The detection error tradeoff (DET) curve, which combines the false acceptance and false rejection rates, is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
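The Hamming-distance decision rule described above can be sketched under the simplifying assumption of i.i.d. bit errors, so that the HD is binomially distributed rather than derived from the paper's Gaussian framework; the bit count, threshold, and error probabilities below are illustrative values:

```python
from math import comb

def binom_tail(n, p, k_min):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

def error_rates(n_bits, t, p_genuine, p_impostor=0.5):
    """FRR and FAR of a Hamming-distance classifier with decision threshold t
    (accept iff HD <= t), assuming i.i.d. bit errors with per-bit error
    probability p_genuine for genuine comparisons and p_impostor for
    impostors.  A simplification of the paper's analytic framework."""
    frr = binom_tail(n_bits, p_genuine, t + 1)          # genuine rejected: HD > t
    far = 1.0 - binom_tail(n_bits, p_impostor, t + 1)   # impostor accepted: HD <= t
    return frr, far
```

Sweeping the threshold `t` and plotting FAR against FRR traces out the tradeoff curve: raising `t` lowers the FRR but raises the FAR, which is the tension the ECC's correcting capacity constrains.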
Correlator Bank Detection of GW chirps. False-Alarm Probability, Template Density and Thresholds: Behind and Beyond the Minimal-Match Issue
The general problem of computing the false-alarm rate vs. detection-threshold
relationship for a bank of correlators is addressed, in the context of
maximum-likelihood detection of gravitational waves, with specific reference to
chirps from coalescing binary systems. Accurate (lower-bound) approximants for
the cumulative distribution of the whole-bank supremum are deduced from a class
of Bonferroni-type inequalities. The asymptotic properties of the cumulative
distribution are obtained, in the limit where the number of correlators goes to
infinity. The validity of numerical simulations made on small-size banks is
extended to banks of any size, via a Gaussian-correlation inequality. The
result is used to estimate the optimum template density, yielding the best
tradeoff between computational cost and detection efficiency, in terms of
undetected potentially observable sources at a prescribed false-alarm level,
for the simplest case of Newtonian chirps.
Comment: submitted to Phys. Rev.
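The relationship between the whole-bank supremum distribution and simple Bonferroni-type bounds can be illustrated by Monte Carlo on a toy bank of equicorrelated Gaussian correlator outputs. The equicorrelated, shared-factor covariance and all numerical values are assumptions chosen for simplicity, not the chirp-bank covariance structure analysed in the paper:

```python
import math
import random

def q(t):
    """Gaussian tail probability P(N(0,1) > t)."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def false_alarm_bounds(n_templates=20, rho=0.5, t=2.5, n_trials=20000, seed=1):
    """Monte Carlo false-alarm probability P(max_i X_i > t) for a bank of
    equicorrelated unit Gaussians X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i,
    compared with the first-order Bonferroni (union) upper bound and the
    trivial single-correlator lower bound."""
    rng = random.Random(seed)
    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    hits = 0
    for _ in range(n_trials):
        z0 = rng.gauss(0, 1)  # shared factor inducing correlation rho
        if max(a * z0 + b * rng.gauss(0, 1) for _ in range(n_templates)) > t:
            hits += 1
    p_bank = hits / n_trials
    union_upper = min(1.0, n_templates * q(t))  # Boole/Bonferroni, order 1
    single_lower = q(t)                          # one correlator alone
    return single_lower, p_bank, union_upper
```

The gap between the union bound and the simulated bank probability grows with the correlation between templates, which is why correlation-aware (e.g. second-order Bonferroni) approximants matter when choosing detection thresholds.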
The outer halo globular cluster system of M31 - II. Kinematics
We present a detailed kinematic analysis of the outer halo globular cluster
(GC) system of M31. Our basis for this is a set of new spectroscopic
observations for 78 clusters lying at projected distances between Rproj ~20-140
kpc from the M31 centre. These are largely drawn from the recent PAndAS
globular cluster catalogue; 63 of our targets have no previous velocity data.
Via a Bayesian maximum likelihood analysis we find that GCs with Rproj > 30 kpc
exhibit coherent rotation around the minor optical axis of M31, in the same
direction as more centrally-located GCs, but with a smaller amplitude of
86+/-17 km s-1. There is also evidence that the velocity dispersion of the
outer halo GC system decreases as a function of projected distance from the M31
centre, and that this relation can be well described by a power law of index ~
-0.5. The velocity dispersion profile of the outer halo GCs is quite similar to
that of the halo stars, at least out to the radius up to which there is
available information on the stellar kinematics. We detect and discuss various
velocity correlations amongst subgroups of GCs that lie on stellar debris
streams in the M31 halo. Many of these subgroups are dynamically cold,
exhibiting internal velocity dispersions consistent with zero. Simple Monte
Carlo experiments imply that such configurations are unlikely to form by
chance, adding weight to the notion that a significant fraction of the outer
halo GCs in M31 have been accreted alongside their parent dwarf galaxies. We
also estimate the M31 mass within 200 kpc via the Tracer Mass Estimator,
finding (1.2-1.6) +/- 0.2 x 10^{12} M_sun. This quantity is subject to
additional systematic effects due to various limitations of the data, and
assumptions built into the TME. Finally, we discuss our results in the
context of formation scenarios for the M31 halo.
Comment: 24 pages, 12 figures, 7 tables; Accepted for publication in MNRA
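A minimal sketch of a Tracer Mass Estimator of the form M = (C / (G N)) * sum v_i^2 R_i, applied to line-of-sight velocities and projected radii. The prefactor C, which in practice encodes the assumed potential slope, tracer density profile, and velocity anisotropy, is a placeholder value here, not the calibration used in the paper:

```python
# Gravitational constant in units of kpc * (km/s)^2 / M_sun.
G = 4.301e-6

def tracer_mass(v_los, r_proj, C=5.0):
    """Tracer Mass Estimator M = (C / (G * N)) * sum_i v_i^2 * R_i for
    line-of-sight velocities v_los (km/s) and projected radii r_proj (kpc).
    C = 5.0 is an illustrative placeholder, not a calibrated value."""
    n = len(v_los)
    return C / (G * n) * sum(v * v * r for v, r in zip(v_los, r_proj))
```

With a few tens of tracers at ~100 km/s dispersion and tens of kpc in radius, this form yields masses of order 10^{11}-10^{12} M_sun, the regime quoted in the abstract; the systematic sensitivity to C is precisely the TME limitation the authors flag.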
Technique(s) for Spike-Sorting
Spike-sorting techniques attempt to classify a series of noisy electrical
waveforms according to the identity of the neurons that generated them.
Existing techniques perform this classification ignoring several properties of
actual neurons that can ultimately improve classification performance. In this
chapter, after illustrating the spike-sorting problem with real data, we
propose a more realistic spike train generation model. It incorporates both a
description of "non-trivial" (i.e., non-Poisson) neuronal discharge statistics
and a description of spike waveform dynamics (e.g., the event's amplitude decays
for short inter-spike intervals). We show that this spike train generation
model is analogous to a one-dimensional Potts spin glass model. We can
therefore use the computational methods which have been developed in fields
where Potts models are extensively used. These methods are based on the
construction of a Markov Chain in the space of model parameters and spike train
configurations, where a configuration is defined by specifying a neuron of
origin for each spike. This Markov Chain is built such that its unique
stationary density is the posterior density of model parameters and
configurations given the observed data. A Monte Carlo simulation of the Markov
Chain is then used to estimate the posterior density. The theoretical
background on Markov chains is provided and the way to build the transition
matrix of the Markov Chain is illustrated with a simple, but realistic, model
for data generation. Simulated data are used to illustrate the performance of
the method and to show that it can easily cope with neurons generating spikes
with highly dynamic waveforms and/or generating strongly overlapping clusters
on Wilson plots.
Comment: 40 pages, 18 figures. LaTeX source file prepared with LyX. To be
published as a chapter of the book "Models and Methods in Neurophysics"
edited by D. Hansel and C. Meunie
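The label-sampling step of the Markov chain described above, choosing a neuron of origin for each spike from its full conditional, can be sketched in a drastically simplified setting where each neuron is characterised only by a mean spike amplitude, with no waveform-dynamics or discharge-statistics terms from the Potts analogy; all names and values below are illustrative:

```python
import math
import random

def gibbs_relabel(amps, mus, sigma=0.2, n_sweeps=50, seed=0):
    """Gibbs sampling of a neuron-of-origin label for each spike, given
    spike peak amplitudes `amps` and known per-neuron mean amplitudes
    `mus` with common noise s.d. `sigma`.  A toy configuration sampler;
    the full model also conditions on inter-spike intervals and
    amplitude dynamics."""
    rng = random.Random(seed)
    labels = [0] * len(amps)
    for _ in range(n_sweeps):
        for i, a in enumerate(amps):
            # Full conditional over labels: Gaussian likelihood per neuron.
            w = [math.exp(-((a - mu) ** 2) / (2 * sigma ** 2)) for mu in mus]
            total = sum(w)
            u, c = rng.random() * total, 0.0
            for k, wk in enumerate(w):
                c += wk
                if u <= c:
                    labels[i] = k
                    break
            else:
                labels[i] = len(mus) - 1  # guard against float rounding
    return labels
```

In the full method the per-neuron parameters (`mus` here) are themselves resampled in alternation with the labels, so the chain's stationary density is the joint posterior over parameters and configurations.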