406 research outputs found
Measurement Invariance, Entropy, and Probability
We show that the natural scaling of measurement for a particular problem
defines the most likely probability distribution of observations taken from
that measurement scale. Our approach extends the method of maximum entropy to
use measurement scale as a type of information constraint. We argue that a very
common measurement scale is linear at small magnitudes grading into logarithmic
at large magnitudes, leading to observations that often follow Student's
probability distribution, which has a Gaussian shape for small fluctuations from
the mean and a power law shape for large fluctuations from the mean. An inverse
scaling often arises in which measures naturally grade from logarithmic to
linear as one moves from small to large magnitudes, leading to observations
that often follow a gamma probability distribution. A gamma distribution has a
power law shape for small magnitudes and an exponential shape for large
magnitudes. The two measurement scales are natural inverses connected by the
Laplace integral transform. This inversion connects the two major scaling
patterns commonly found in nature. We also show that superstatistics is a
special case of an integral transform, and thus can be understood as a
particular way in which to change the scale of measurement. Incorporating
information about measurement scale into maximum entropy provides a general
approach to the relations between measurement, information, and probability.
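A compressed version of the underlying construction, in our notation rather than the authors': maximum entropy subject to a constraint on the mean of a scale function $T(y)$ gives

    p(y) \propto e^{-\lambda T(y)} .

A linear-to-logarithmic scale such as $T(y) = \log(1 + y^2/k)$ then yields

    p(y) \propto (1 + y^2/k)^{-\lambda} ,

which is Gaussian near $y = 0$ (where $\log(1 + y^2/k) \approx y^2/k$) with a power-law tail: the Student form. The inverse, logarithmic-to-linear scale $T(y) = a \log y + b y$ yields

    p(y) \propto y^{-\lambda a}\, e^{-\lambda b y} ,

a gamma form: power law at small $y$, exponential at large $y$.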
General heatbath algorithm for pure lattice gauge theory
A heatbath algorithm is proposed for pure SU(N) lattice gauge theory, based on
the Manton action of the plaquette element for general N.
Comparison is made to the Metropolis thermalization algorithm using both the
Wilson and Manton actions. The heatbath algorithm is found to outperform the
Metropolis algorithm in both execution speed and decorrelation rate. Results,
mostly in D=3, for N=2 through 5 at several values of the inverse coupling, are
presented.
Comment: 9 pages, 10 figures, 1 table; major revision, final version, to appear in PR
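The practical difference between the two updates can be seen in a toy model (a single Gaussian degree of freedom, not the SU(N) Manton action): a heatbath move draws from the local conditional distribution exactly, while Metropolis takes correlated random-walk steps. A minimal Python sketch with illustrative step sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy action S(x) = x^2 / 2, so the exact conditional is a unit Gaussian.

    # Metropolis: random-walk proposals, accept with min(1, exp(-dS)).
    x, chain_m = 0.0, np.empty(n)
    for i in range(n):
        prop = x + rng.uniform(-0.5, 0.5)
        if rng.random() < np.exp(0.5 * (x**2 - prop**2)):
            x = prop
        chain_m[i] = x

    # Heatbath: draw directly from the conditional distribution.
    chain_h = rng.standard_normal(n)

    def autocorr(c, lag=1):
        # Lag-1 autocorrelation of a chain.
        c = c - c.mean()
        return (c[:-lag] * c[lag:]).mean() / (c * c).mean()

    print("Metropolis lag-1 autocorr:", autocorr(chain_m))
    print("Heatbath   lag-1 autocorr:", autocorr(chain_h))

The heatbath chain is uncorrelated by construction; the Metropolis chain decorrelates only over many steps, which is the decorrelation-rate advantage the abstract reports.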
Differential cross section analysis in kaon photoproduction using associated Legendre polynomials
Angular distributions of differential cross sections from the latest CLAS
data sets \cite{bradford} for the kaon photoproduction reaction have been analyzed using associated Legendre polynomials. This
analysis is based upon theoretical calculations in Ref. \cite{fasano} where all
sixteen observables in kaon photoproduction can be classified into four
Legendre classes. Each observable can be described by an expansion of
associated Legendre polynomial functions. One of the questions to be addressed
is how many associated Legendre polynomials are required to describe the data.
In this preliminary analysis, we used data models with different numbers of
associated Legendre polynomials. We then compared these models by calculating
posterior probabilities of the models. We found that the CLAS data set needs no
more than four associated Legendre polynomials to describe the differential
cross section data. In addition, we also show the extracted coefficients of the
best model.
Comment: Talk given at APFB08, Depok, Indonesia, August 19-23, 2008
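A sketch of the model-comparison step on synthetic data, assuming for simplicity ordinary Legendre polynomials (the m = 0 associated functions) and a BIC approximation to -2 log(evidence); the analysis in the talk uses full posterior model probabilities:

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)

    # Synthetic angular distribution: truth uses 4 Legendre terms.
    x = np.linspace(-0.95, 0.95, 40)                # cos(theta) bins
    y_true = legendre.legval(x, [1.0, 0.4, -0.3, 0.15])
    sigma = 0.02                                    # assumed uniform error
    y = y_true + rng.normal(0.0, sigma, x.size)

    def bic(order):
        # Least-squares Legendre fit; BIC approximates -2 log(evidence).
        coef = legendre.legfit(x, y, order)
        chi2 = np.sum(((y - legendre.legval(x, coef)) / sigma) ** 2)
        return chi2 + (order + 1) * np.log(x.size)

    for order in range(1, 8):
        print(f"{order + 1} terms: BIC = {bic(order):.1f}")

The BIC penalty stops the expansion growing once extra terms only fit noise, which is the same Occam mechanism that selects a four-term model in the analysis.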
Application of Bryan's algorithm to the mobility spectrum analysis of semiconductor devices
A powerful method for mobility spectrum analysis is presented, based on Bryan's maximum entropy algorithm. The Bayesian analysis central to Bryan's algorithm ensures that we avoid overfitting of data, resulting in a physically reasonable solution. The algorithm is fast, and allows the analysis of large quantities of data, removing the bias of data selection inherent in all previous techniques. Existing mobility spectrum analysis systems are reviewed, and the performance of the Bryan's algorithm mobility spectrum (BAMS) approach is demonstrated using synthetic data sets. Analysis of experimental data is briefly discussed. We find that BAMS performs well compared to existing mobility spectrum methods.
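A compressed illustration of the entropic-regularization step, with an assumed single-component sigma_xx kernel and a fixed regularization weight alpha; Bryan's algorithm goes further and marginalizes over alpha, which is what suppresses overfitting:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    mu = np.linspace(0.05, 5.0, 60)         # mobility grid (arbitrary units)
    B = np.linspace(0.0, 10.0, 25)          # magnetic fields (arbitrary units)
    K = 1.0 / (1.0 + np.outer(B, mu) ** 2)  # sigma_xx kernel: s(mu)/(1+(mu B)^2)

    # Synthetic two-carrier spectrum and noisy "measured" conductivity.
    s_true = np.exp(-(mu - 0.8) ** 2 / 0.02) + 0.5 * np.exp(-(mu - 3.0) ** 2 / 0.1)
    err = 0.01
    data = K @ s_true + rng.normal(0.0, err, B.size)

    m0 = np.full(mu.size, s_true.sum() / mu.size)   # flat default model

    def objective(s, alpha=1.0):
        chi2 = np.sum((K @ s - data) ** 2) / err ** 2
        entropy = np.sum(s - m0 - s * np.log(s / m0))  # relative to default model
        return 0.5 * chi2 - alpha * entropy

    res = minimize(objective, m0, method="L-BFGS-B",
                   bounds=[(1e-8, None)] * mu.size)
    print("dominant mobility recovered near:", mu[np.argmax(res.x)])

The entropy term pulls the spectrum toward the default model wherever the data do not demand structure, which is why the solution stays physically reasonable.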
A Bayesian approach to the follow-up of candidate gravitational wave signals
Ground-based gravitational wave laser interferometers (LIGO, GEO-600, Virgo
and Tama-300) have now reached high sensitivity and duty cycle. We present a
Bayesian evidence-based approach to the search for gravitational waves, in
particular aimed at the followup of candidate events generated by the analysis
pipeline. We introduce and demonstrate an efficient method to compute the
evidence and odds ratio between different models, and illustrate this approach
using the specific case of the gravitational wave signal generated during the
inspiral phase of binary systems, modelled at the leading quadrupole Newtonian
order, in synthetic noise. We show that the method is effective in detecting
signals at the detection threshold and is robust against (some types of)
instrumental artefacts. The computational efficiency of this method makes it
scalable to the analysis of all the triggers generated by the analysis
pipelines to search for coalescing binaries in surveys with ground-based
interferometers, and to a whole variety of signal waveforms, characterised by a
larger number of parameters.
Comment: 9 pages
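The evidence computation can be illustrated in one dimension, with an assumed toy chirp template of unknown amplitude in white Gaussian noise and a flat amplitude prior (the paper's models are quadrupole Newtonian inspirals with many more parameters):

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy data: a known template h(t) with unknown amplitude A in white noise.
    t = np.linspace(0.0, 1.0, 512)
    template = np.sin(2 * np.pi * 50 * t * (1 + t))  # stand-in "chirp"
    sigma = 1.0
    A_true = 0.4
    d = A_true * template + rng.normal(0.0, sigma, t.size)

    def loglike(A):
        r = d - A * template
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    # Signal-model evidence: marginalize the likelihood over a flat amplitude
    # prior on [0, 2]; the noise-model evidence is the likelihood at A = 0.
    A_grid = np.linspace(0.0, 2.0, 400)
    logL = np.array([loglike(A) for A in A_grid])
    m = logL.max()
    log_bayes = m - loglike(0.0) + np.log(
        np.trapz(np.exp(logL - m), A_grid) / (A_grid[-1] - A_grid[0]))
    print("log odds (signal vs noise):", round(log_bayes, 2))

The odds ratio automatically penalizes the signal model for its extra parameter volume, which is what makes the method robust near threshold.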
Symmetrization and enhancement of the continuous Morlet transform
The forward and inverse wavelet transform using the continuous Morlet basis
may be symmetrized by using an appropriate normalization factor. The loss of
response due to wavelet truncation is addressed through a renormalization of
the wavelet based on power. The spectral density has physical units which may
be related to the squared amplitude of the signal, as do its margins, the mean
wavelet power and the integrated instant power, giving a quantitative estimate
of the power density with temporal resolution. Deconvolution with the wavelet
response matrix reduces the spectral leakage and produces an enhanced wavelet
spectrum providing maximum resolution of the harmonic content of a signal.
Applications to data analysis are discussed.
Comment: 12 pages, 8 figures, 2 tables; minor revision, final version
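A minimal FFT-based version of the forward transform, using the common L2 ("energy") normalization as a stand-in for the symmetrized factor in the paper; the truncation renormalization and the response-matrix deconvolution are omitted:

    import numpy as np

    def morlet_cwt(x, dt, scales, w0=6.0):
        # Continuous Morlet transform via the FFT; one output row per scale.
        n = x.size
        w = 2 * np.pi * np.fft.fftfreq(n, dt)       # angular frequencies
        xh = np.fft.fft(x)
        out = np.empty((len(scales), n), dtype=complex)
        for i, s in enumerate(scales):
            # Analytic Morlet in the frequency domain, L2-normalized.
            psih = (np.pi ** -0.25) * np.sqrt(2 * np.pi * s / dt) \
                   * np.exp(-0.5 * (s * w - w0) ** 2) * (w > 0)
            out[i] = np.fft.ifft(xh * psih)
        return out

    # Example: two tones at 20 Hz and 70 Hz.
    dt = 1e-3
    t = np.arange(0.0, 2.0, dt)
    x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)
    freqs = np.logspace(1, 2, 40)                   # 10..100 Hz
    scales = 6.0 / (2 * np.pi * freqs)              # scale <-> frequency map
    W = morlet_cwt(x, dt, scales)
    power = np.abs(W) ** 2
    print("dominant frequency ~", freqs[np.argmax(power.mean(axis=1))], "Hz")

Squaring the transform gives the scalogram whose margins (mean wavelet power and integrated instant power) carry the physical power units the abstract describes.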
Results for the response function determination of the Compact Neutron Spectrometer
The Compact Neutron Spectrometer (CNS) is a Joint European Torus (JET)
Enhancement Project, designed for fusion diagnostics in different plasma
scenarios. The CNS is based on a liquid scintillator (BC501A) which allows good
discrimination between neutron and gamma radiation. Neutron spectrometry with a
BC501A spectrometer requires the use of a reliable, fully characterized
detector. The determination of the response matrix was carried out at the Ion
Accelerator Facility (PIAF) of the Physikalisch-Technische Bundesanstalt (PTB).
This facility provides several monoenergetic beams (2.5, 8, 10, 12 and 14 MeV)
and a 'white field' (Emax ~17 MeV), which allows for a full characterization of
the spectrometer in the region of interest (from ~1.5 MeV to ~17 MeV). The
energy of the incoming neutrons was determined by the time-of-flight (TOF)
method, with a time resolution of the order of 1 ns. To check the response matrix,
the measured pulse height spectra were unfolded with the code MAXED and the
resulting energy distributions were compared with those obtained from TOF. The
CNS project required modification of the PTB BC501A spectrometer design, to
replace an analog data acquisition system (NIM modules) with a digital system
developed by the 'Ente per le Nuove tecnologie, l'Energia e l'Ambiente' (ENEA).
Results for the new digital system were evaluated using new software developed
specifically for this project.
Comment: Proceedings of FNDA 201
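The TOF step is simple enough to state explicitly; a sketch using the relativistic relation between flight time and kinetic energy, with an assumed (purely illustrative) flight path rather than the PTB geometry:

    import numpy as np

    M_N = 939.565      # neutron rest mass [MeV]
    C = 299.792458     # speed of light [mm/ns]

    def tof_energy(t_ns, path_mm):
        # Kinetic energy [MeV] for flight time t_ns over path_mm.
        beta = path_mm / (t_ns * C)
        gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
        return M_N * (gamma - 1.0)

    # Example with an assumed 12 m flight path.
    for t in (450.0, 600.0, 900.0):
        print(f"t = {t} ns  ->  E = {tof_energy(t, 12_000.0):.2f} MeV")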
Fitting the Phenomenological MSSM
We perform a global Bayesian fit of the phenomenological minimal
supersymmetric standard model (pMSSM) to current indirect collider and dark
matter data. The pMSSM contains the most relevant 25 weak-scale MSSM
parameters, which are simultaneously fit using `nested sampling' Monte Carlo
techniques in more than 15 years of CPU time. We calculate the Bayesian
evidence for the pMSSM and constrain its parameters and observables in the
context of two widely different, but reasonable, priors to determine which
inferences are robust. We make inferences about sparticle masses, the sign of
the $\mu$ parameter, the amount of fine-tuning, dark matter properties and the
prospects for direct dark matter detection without assuming a restrictive
high-scale supersymmetry breaking model. We find that the inferred lightest
CP-even Higgs boson mass is an example of an approximately prior-independent
observable. This analysis constitutes the first statistically convergent pMSSM
global fit to all current data.
Comment: Added references, paragraph on fine-tuning
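The evidence-by-nested-sampling idea in miniature, on an assumed 2D toy likelihood with a unit-box prior; simple prior draws stand in for the constrained sampling that real implementations (and 25 dimensions) require:

    import numpy as np

    rng = np.random.default_rng(4)

    def loglike(theta):
        # Toy Gaussian likelihood (width 0.1) centred in a unit-box prior.
        return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2)

    n_live, n_iter = 200, 1600
    live = rng.random((n_live, 2))           # live points drawn from the prior
    live_logL = np.array([loglike(p) for p in live])

    logZ = -np.inf
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logL)
        # Prior-volume shell for the discarded point: X_i = exp(-i / n_live).
        log_w = -(i - 1) / n_live + np.log1p(-np.exp(-1.0 / n_live))
        logZ = np.logaddexp(logZ, live_logL[worst] + log_w)
        while True:  # replace it with a prior draw above the likelihood floor
            cand = rng.random(2)
            cand_logL = loglike(cand)
            if cand_logL > live_logL[worst]:
                live[worst], live_logL[worst] = cand, cand_logL
                break

    # Final live-point correction omitted; negligible here by construction.
    print("log Z ~", round(logZ, 2),
          "; analytic:", round(np.log(2 * np.pi * 0.01), 2))

The same machinery returns posterior samples as a by-product, which is how a single nested-sampling run yields both the evidence and the parameter constraints.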
Bayesian feedback control of a two-atom spin-state in an atom-cavity system
We experimentally demonstrate real-time feedback control of the joint
spin-state of two neutral Caesium atoms inside a high finesse optical cavity.
The quantum states are discriminated by their different cavity transmission
levels. A Bayesian update formalism is used to estimate state occupation
probabilities as well as transition rates. We stabilize the balanced two-atom
mixed state, which is deterministically inaccessible, via feedback control and
find very good agreement with Monte-Carlo simulations. On average, the feedback
loop achieves near-optimal conditions, steering the system to the target state
in a time only marginally exceeding that needed to retrieve information about its state.
Comment: 4 pages, 4 figures
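The estimation step can be sketched as a discrete Bayesian filter over the joint states, with assumed (illustrative) transmission count rates; in the experiment the resulting posterior is what drives the feedback toward the target state:

    import numpy as np

    rng = np.random.default_rng(5)

    # Illustrative transmission count rates per time bin for the three
    # distinguishable joint states (not the experimental values).
    rates = np.array([2.0, 8.0, 20.0])
    p = np.ones(3) / 3                  # prior over states

    true_state = 1
    for _ in range(50):
        counts = rng.poisson(rates[true_state])
        # Poisson likelihood; the counts! term cancels in the normalization.
        likelihood = rates ** counts * np.exp(-rates)
        p = likelihood * p
        p /= p.sum()

    print("posterior:", np.round(p, 3), "-> MAP state:", np.argmax(p))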
A Parameterization Invariant Approach to the Statistical Estimation of the CKM Phase
In contrast to previous analyses, we demonstrate a Bayesian approach to the
estimation of the CKM phase that is invariant to parameterization. We
also show that in addition to {\em computing} the marginal posterior in a
Bayesian manner, the distribution must also be {\em interpreted} from a
subjective Bayesian viewpoint. Doing so gives a very natural interpretation to
the distribution. We also comment on the effect of removing information about .
Comment: 14 pages, 3 figures, 1 table, minor revision; to appear in JHEP
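The transformation rule at stake, stated generically in our notation (not necessarily the paper's construction): under a reparameterization $\phi = g(\theta)$, a posterior density picks up a Jacobian,

    p_\phi(\phi \mid d) = p_\theta\big(g^{-1}(\phi) \mid d\big)\,\left| \frac{d\theta}{d\phi} \right| ,

so a prior flat in $\theta$ is not flat in $\phi$, and summaries that ignore the Jacobian depend on the parameterization. One standard invariant choice is the Jeffreys prior $p(\theta) \propto \sqrt{I(\theta)}$, with $I$ the Fisher information, which transforms with exactly this Jacobian.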