Spline histogram method for reconstruction of probability density function of clusters of galaxies
We describe the spline histogram algorithm, which is useful for visualizing
the probability density function when setting up a statistical hypothesis for a
test. The spline histogram is constructed from discrete data measurements using
tensioned cubic spline interpolation of the cumulative distribution function,
which is then differentiated and smoothed using the Savitzky-Golay filter. The
optimal width of the filter is determined by minimizing the Integrated Square
Error function.
The current distribution of the TCSplin algorithm written in f77 with IDL and
Gnuplot visualization scripts is available from
http://www.virac.lv/en/soft.html
Comment: 8 pages, 3 figures; to be published in "Galaxies and Chaos: Theory and Observations", eds. N. Voglis, G. Contopoulos, conference proceedings (CD version); uses the Springer Verlag svmult.cls style file
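A minimal sketch of this pipeline in Python, assuming an ordinary cubic spline in place of the tensioned spline and a hand-picked Savitzky-Golay window instead of the ISE-minimizing width (the TCSplin distribution itself is the code linked above):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter

# Toy data set; the paper applies the method to cluster-of-galaxies samples.
rng = np.random.default_rng(0)
sample = np.sort(rng.normal(size=500))

# Empirical CDF interpolated with a cubic spline (a stand-in for the
# tensioned cubic spline used by TCSplin).
ecdf = np.arange(1, sample.size + 1) / sample.size
cdf_spline = CubicSpline(sample, ecdf)

# Differentiate the interpolated CDF to obtain a raw density estimate.
grid = np.linspace(sample[0], sample[-1], 1000)
raw_pdf = cdf_spline(grid, 1)

# Smooth with a Savitzky-Golay filter; the window is fixed by hand here,
# whereas the paper chooses it by minimizing the Integrated Square Error.
pdf = np.clip(savgol_filter(raw_pdf, window_length=101, polyorder=3), 0.0, None)
```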
Extended Target Shape Estimation by Fitting B-Spline Curve
To address the difficulty of shape estimation for extended targets, a novel algorithm based on fitting a B-spline curve is proposed. For single extended target tracking, a multiple-frame statistics technique is introduced to construct pseudo-measurement sets, and control points are selected to form the B-spline curve. The shapes of the extended targets are then extracted within a Bayesian framework. Furthermore, the proposed shape estimation algorithm is suitably modified and combined with the probability hypothesis density (PHD) filter for multiple extended target tracking. Simulations show that the proposed algorithm performs well in estimating the shape of arbitrary extended targets.
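As a rough illustration of the curve-fitting step only, the sketch below fits a closed smoothing B-spline to noisy boundary points of a toy star-shaped target using SciPy; the target shape, noise level, and smoothing factor are assumptions, and the Bayesian/PHD filtering machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Toy pseudo-measurements scattered around the boundary of an extended target.
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 60))
radius = 1.0 + 0.3 * np.cos(3 * theta)
x = radius * np.cos(theta) + 0.05 * rng.normal(size=theta.size)
y = radius * np.sin(theta) + 0.05 * rng.normal(size=theta.size)

# Periodic (closed) smoothing B-spline fitted through the measurements;
# the spline coefficients in tck play the role of the control points.
tck, u = splprep([x, y], s=0.5, per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck)  # evaluated shape estimate
```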
The XENON100 exclusion limit without considering Leff as a nuisance parameter
In 2011, the XENON100 experiment set unprecedented constraints on dark
matter-nucleon interactions, excluding dark matter candidates with masses down
to 6 GeV if the corresponding cross section is larger than 10^{-39} cm^2. The
dependence of the exclusion limit on the scintillation efficiency
(Leff) has been debated at length. To overcome possible criticisms, XENON100
performed an analysis in which Leff was considered as a nuisance parameter and
its uncertainties were profiled out by using a Gaussian likelihood in which the
mean value corresponds to the best fit Leff value smoothly extrapolated to zero
below 3 keVnr. Although such a method seems fairly robust, it does not account
for more extreme types of extrapolation, nor does it make it possible to
anticipate how much the exclusion limit would vary if, for example, new data
were to support a flat behaviour for Leff below 3 keVnr. Yet, such a question is crucial
for light dark matter models which are close to the published XENON100 limit.
To address this question, we use a maximum likelihood ratio analysis, as done by
the XENON100 collaboration, but do not consider Leff as a nuisance parameter.
Instead, Leff is obtained directly from the fits to the data. This enables us
to define frequentist confidence intervals by marginalising over Leff.
Comment: 10 pages, 9 figures; references added
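To make the statistical contrast concrete, here is a toy sketch (not the XENON100 likelihood; the counts, background, efficiency model, and parameter ranges are all invented) in which the expected signal depends on both the cross section and Leff, and Leff is profiled freely rather than constrained by a Gaussian nuisance term:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

n_obs, background = 3, 1.8                  # invented counts and expected background

def expected_signal(sigma, leff):
    return sigma * 50.0 * leff              # invented linear efficiency model

def nll(params):                            # Poisson negative log-likelihood
    sigma, leff = params
    return -poisson.logpmf(n_obs, background + expected_signal(sigma, leff))

best = minimize(nll, x0=[0.05, 0.1], bounds=[(0.0, None), (0.01, 0.3)])

def q(sigma):
    # Profile over Leff at fixed sigma, with no Gaussian constraint on Leff.
    prof = minimize(lambda l: nll([sigma, l[0]]), x0=[0.1], bounds=[(0.01, 0.3)])
    return 2.0 * (prof.fun - best.fun)

# In the one-sided asymptotic approximation, a 90% CL upper limit on sigma
# sits roughly where q(sigma) crosses ~1.64.
print([round(q(s), 2) for s in (0.02, 0.1, 0.3)])
```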
Image Reconstruction from Undersampled Confocal Microscopy Data using Multiresolution Based Maximum Entropy Regularization
We consider the problem of reconstructing 2D images from randomly
under-sampled confocal microscopy samples. The well known and widely celebrated
total variation regularization, which is the L1 norm of derivatives, turns out
to be unsuitable for this problem; it is unable to handle both noise and
under-sampling together. This issue is linked to the phase-transition
phenomenon observed in compressive sensing research, which is essentially the
breakdown of total variation methods when the sampling density falls below a
certain threshold. The severity of this breakdown is determined
by the so-called mutual incoherence between the derivative operators and
measurement operator. In our problem, the mutual incoherence is low, and hence
the total variation regularization gives serious artifacts in the presence of
noise even when the sampling density is not very low. There have been very few
attempts to develop regularization methods that perform better than total
variation regularization for this problem. We develop a multi-resolution based
regularization method that is adaptive to image structure. In our approach, the
desired reconstruction is formulated as a series of coarse-to-fine
multi-resolution reconstructions; for reconstruction at each level, the
regularization is constructed to be adaptive to the image structure, where the
information for adaptation is obtained from the reconstruction at the coarser
resolution level. This adaptation is achieved using the maximum entropy
principle: the required adaptive regularization is determined as the maximizer
of entropy subject to constraints given by the information extracted from the
coarse reconstruction. We demonstrate the superiority of the proposed
regularization method over existing ones on several reconstruction examples.
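For concreteness, the sketch below sets up the baseline formulation described above: a least-squares fidelity term on randomly observed pixels plus a (smoothed) total-variation penalty, i.e. the L1 norm of finite-difference derivatives. The toy image, sampling rate, and weights are assumptions, and the paper's adaptive multi-resolution maximum-entropy regularizer is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0          # toy piecewise-constant image
mask = rng.random(img.shape) < 0.3                        # ~30% random sampling
data = mask * (img + 0.05 * rng.normal(size=img.shape))   # noisy, under-sampled data

def objective(xflat, lam=0.1, eps=1e-3):
    x = xflat.reshape(img.shape)
    fidelity = 0.5 * np.sum((mask * (x - data)) ** 2)      # fit only observed pixels
    dx, dy = np.diff(x, axis=0), np.diff(x, axis=1)
    tv = np.sum(np.sqrt(dx ** 2 + eps)) + np.sum(np.sqrt(dy ** 2 + eps))
    return fidelity + lam * tv

res = minimize(objective, np.zeros(img.size), method="L-BFGS-B")
recon = res.x.reshape(img.shape)                           # TV-regularized reconstruction
```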
Probabilistic Numerics and Uncertainty in Computations
We deliver a call to arms for probabilistic numerical methods: algorithms for
numerical tasks, including linear algebra, integration, optimization and
solving differential equations, that return uncertainties in their
calculations. Such uncertainties, arising from the loss of precision induced by
numerical calculation with limited time or hardware, are important for much
contemporary science and industry. Within applications such as climate science
and astrophysics, the need to make decisions on the basis of computations with
large and complex data has led to a renewed focus on the management of
numerical uncertainty. We describe how several seminal classic numerical
methods can be interpreted naturally as probabilistic inference. We then show
that the probabilistic view suggests new algorithms that can flexibly be
adapted to suit application specifics, while delivering improved empirical
performance. We provide concrete illustrations of the benefits of probabilistic
numeric algorithms on real scientific problems from astrometry and astronomical
imaging, while highlighting open problems with these new algorithms. Finally,
we describe how probabilistic numerical methods provide a coherent framework
for identifying the uncertainty in calculations performed with a combination of
numerical algorithms (e.g. both numerical optimisers and differential equation
solvers), potentially allowing the diagnosis (and control) of error sources in
computations.
Comment: Author Generated Postprint. 17 pages, 4 Figures, 1 Table
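As one concrete example of a numerical routine that reports its own uncertainty, the sketch below implements a simple Bayesian quadrature rule: a Gaussian-process prior with an RBF kernel is conditioned on a few function evaluations, and the posterior mean and variance of the integral are read off. The kernel, its hyperparameters, and the node placement are assumptions made for illustration.

```python
import numpy as np
from scipy.special import erf

def bq_integrate(f, a, b, n=8, ell=0.5, sf=1.0, jitter=1e-10):
    x = np.linspace(a, b, n)                               # evaluation nodes
    y = f(x)
    K = sf**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2)
    K += jitter * np.eye(n)
    # Kernel mean z_i = integral of k(t, x_i) over [a, b] (closed form for RBF).
    z = sf**2 * ell * np.sqrt(np.pi / 2) * (
        erf((b - x) / (np.sqrt(2) * ell)) - erf((a - x) / (np.sqrt(2) * ell)))
    mean = z @ np.linalg.solve(K, y)                       # posterior mean of the integral
    # Prior variance of the integral, approximated here on a fine grid.
    t = np.linspace(a, b, 400)
    Kt = sf**2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)
    dt = t[1] - t[0]
    var0 = Kt.sum() * dt * dt
    var = var0 - z @ np.linalg.solve(K, z)                 # posterior variance
    return mean, np.sqrt(max(var, 0.0))

est, sd = bq_integrate(np.sin, 0.0, np.pi)                 # true value is 2
print(f"integral estimate {est:.4f} +/- {sd:.4f}")
```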
The Harmonic Analysis of Kernel Functions
Kernel-based methods have been recently introduced for linear system
identification as an alternative to parametric prediction error methods.
Adopting the Bayesian perspective, the impulse response is modeled as a
non-stationary Gaussian process with zero mean and with a certain kernel (i.e.
covariance) function. Choosing the kernel is one of the most challenging and
important issues. In the present paper we introduce the harmonic analysis of
this non-stationary process, and argue that it is an important tool for
designing such kernels. Furthermore, this analysis also suggests an effective
way to approximate the kernel, which allows the computational burden of the
identification procedure to be reduced.
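A minimal sketch of this setting, assuming the first-order stable-spline (TC) kernel, which is one common choice in this literature, a toy first-order system, and hyperparameters fixed by hand rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 400                                  # FIR length, number of data points
g_true = 0.8 ** np.arange(n)                    # toy impulse response
u = rng.normal(size=N)                          # input signal
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])             # regression (convolution) matrix
y = Phi @ g_true + 0.1 * rng.normal(size=N)     # noisy output

lam, c, sigma2 = 0.9, 1.0, 0.01                 # kernel hyperparameters (fixed by hand)
i = np.arange(n)
K = c * lam ** np.maximum(i[:, None], i[None, :])   # TC kernel (prior covariance)

# Posterior mean of the impulse response under the zero-mean GP prior g ~ N(0, K).
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
```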
A maximum likelihood based technique for validating detrended fluctuation analysis (ML-DFA)
Detrended Fluctuation Analysis (DFA) is widely used to assess the presence of
long-range temporal correlations in time series. Signals with long-range
temporal correlations are typically defined as having a power law decay in
their autocorrelation function. The output of DFA is an exponent, which is the
slope obtained by linear regression of a log-log fluctuation plot against
window size. However, if this fluctuation plot is not linear, then the
underlying signal is not self-similar, and the exponent has no meaning. There
is currently no method for assessing the linearity of a DFA fluctuation plot.
Here we present such a technique, called ML-DFA. We scale the DFA fluctuation
plot to construct a likelihood function for a set of alternative models
including polynomial, root, exponential, logarithmic and spline functions. We
use this likelihood function to determine the maximum likelihood and thus to
calculate values of the Akaike and Bayesian information criteria, which
identify the best fit model when the number of parameters involved is taken
into account and over-fitting is penalised. This ensures that, of the models
that fit well, the least complicated is selected as the best fit. We apply
ML-DFA to synthetic data from FARIMA processes and sine curves with DFA
fluctuation plots whose form has been analytically determined, and to
experimentally collected neurophysiological data. ML-DFA assesses whether the
hypothesis of a linear fluctuation plot should be rejected, and thus whether
the exponent can be considered meaningful. We argue that ML-DFA is essential to
obtaining trustworthy results from DFA.
Comment: 22 pages, 7 figures
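A minimal sketch of the idea, assuming first-order detrending and only two candidate models for the fluctuation plot (a straight line and a quadratic) compared by AIC, rather than the full model set used by ML-DFA:

```python
import numpy as np

def dfa(signal, windows):
    """Log window sizes and log fluctuations from first-order DFA."""
    profile = np.cumsum(signal - np.mean(signal))
    F = []
    for n in windows:
        m = len(profile) // n
        segs = profile[:m * n].reshape(m, n)
        t = np.arange(n)
        # Detrend each window with a first-order polynomial fit.
        resid = [s - np.polyval(np.polyfit(t, s, 1), t) for s in segs]
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.log(windows), np.log(F)

def aic(logn, logF, order):
    """Gaussian-error AIC of a polynomial fit to the log-log fluctuation plot."""
    coeffs = np.polyfit(logn, logF, order)
    rss = np.sum((logF - np.polyval(coeffs, logn)) ** 2)
    return len(logF) * np.log(rss / len(logF)) + 2 * (order + 1), coeffs

x = np.cumsum(np.random.default_rng(1).normal(size=2 ** 14))  # toy Brownian motion
logn, logF = dfa(x, np.unique(np.logspace(2, 3.5, 20).astype(int)))
aic_lin, lin_fit = aic(logn, logF, 1)
aic_quad, _ = aic(logn, logF, 2)
print("DFA exponent:", lin_fit[0], "| linear model preferred:", aic_lin <= aic_quad)
```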