A population-based approach to background discrimination in particle physics
Background properties in experimental particle physics are typically
estimated using control samples corresponding to large numbers of events. This
can provide precise knowledge of average background distributions, but
typically does not consider the effect of fluctuations in a data set of
interest. A novel approach based on mixture model decomposition is presented as
a way to estimate the effect of fluctuations on the shapes of probability
distributions in a given data set, with a view to improving on the knowledge of
background distributions obtained from control samples. Events are treated as
heterogeneous populations comprising particles originating from different
processes, and individual particles are mapped to a process of interest on a
probabilistic basis. The proposed approach makes it possible to extract from
the data information about the effect of fluctuations that would otherwise be
lost using traditional methods based on high-statistics control samples. A
feasibility study on Monte Carlo simulation is presented, together with a comparison with
existing techniques. Finally, the prospects for the development of tools for
intensive offline analysis of individual events at the Large Hadron Collider
are discussed.
Comment: Updated according to the version published in J. Phys.: Conf. Ser.
Minor changes have been made to the text with respect to the published
article with a view to improving readability.
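As a rough illustration of the per-particle probabilistic mapping described
above, the sketch below computes mixture-model responsibilities for a
one-dimensional observable. It is a minimal sketch, not the paper's method:
the component weights, means, and widths are hypothetical stand-ins for
distributions that would in practice come from fits (e.g. by EM) or from
control samples.

```python
import numpy as np
from scipy.stats import norm

def responsibilities(x, weights, means, sigmas):
    """Posterior probability that each particle (observable value x)
    originates from each process in a 1-D Gaussian mixture.
    weights/means/sigmas are per-process parameters (assumed known)."""
    dens = np.array([w * norm.pdf(x, m, s)
                     for w, m, s in zip(weights, means, sigmas)])  # (k, n)
    return dens / dens.sum(axis=0)  # normalize over processes, per particle

# e.g. two processes: a 'signal-like' and a 'background-like' component
x = np.array([0.2, 1.5, 3.0])
print(responsibilities(x, weights=[0.3, 0.7], means=[0.0, 2.0], sigmas=[0.5, 1.0]))
```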
Identifying dynamical systems with bifurcations from noisy partial observation
Dynamical systems are used to model a variety of phenomena in which the
bifurcation structure is a fundamental characteristic. Here we propose a
statistical machine-learning approach to derive low-dimensional models that
automatically integrate information in noisy time-series data from partial
observations. The method is tested using artificial data generated from two
cell-cycle control system models that exhibit different bifurcations, and the
learned systems are shown to robustly inherit the bifurcation structure.
Comment: 16 pages, 6 figures.
Strain control of superlattice implies weak charge-lattice coupling in La0.5Ca0.5MnO3
We have recently argued that manganites do not possess stripes of charge
order, implying that the electron-lattice coupling is weak [Phys. Rev. Lett.
94 (2005) 097202]. Here we independently argue the same conclusion
based on transmission electron microscopy measurements of a nanopatterned
epitaxial film of La0.5Ca0.5MnO3. In strain-relaxed regions, the
superlattice period is modified by 2-3% with respect to the parent lattice,
suggesting that the two are not strongly tied.
Comment: 4 pages, 4 figures. It is now explained why the work provides
evidence to support weak coupling and to rule out charge order.
Host Galaxy Evolution in Radio-Loud AGN
We investigate the luminosity evolution of the host galaxies of radio-loud
AGN through Hubble Space Telescope imaging of 72 BL Lac objects, including new
STIS imaging of nine z > 0.6 BL Lacs. With their intrinsically low accretion
rates and their strongly beamed jets, BL Lacs provide a unique opportunity to
probe host galaxy evolution independent of the biases and ambiguities implicit
in quasar studies. We find that the host galaxies of BL Lacs evolve strongly,
consistent with passive evolution from a period of active star formation in the
range 0.5 <~ z <~ 2.5, and inconsistent with either passive evolution from a
high formation redshift or a non-evolving population. This evolution is broadly
consistent with that observed in the hosts of other radio-loud AGN, and
inconsistent with the flatter luminosity evolution of quiescent early types and
radio-quiet hosts. This indicates that active star formation, and hence galaxy
interactions, are associated with the formation of radio-loud AGN, and that
these host galaxies preferentially accrete less material after their formation
epoch than galaxies without powerful radio jets. We discuss possible
explanations for the link between merger history and the incidence of a radio
jet.
Comment: 37 pages, 8 figures, accepted for publication in ApJ; for the full
PDF including figures see
http://www.ph.unimelb.edu.au/~modowd/papers/odowdurry2005.pdf
On regularization methods of EM-Kaczmarz type
We consider regularization methods of Kaczmarz type in connection with the
expectation-maximization (EM) algorithm for solving ill-posed equations. For
noisy data, our methods are stabilized extensions of the well-established
ordered-subsets expectation-maximization iteration (OS-EM). We show
monotonicity properties of the methods and present a numerical experiment which
indicates that the extended OS-EM methods we propose are much faster than the
standard EM algorithm.
Comment: 18 pages, 6 figures; On regularization methods of EM-Kaczmarz type.
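As a point of reference for the iteration being stabilized, here is a
minimal OS-EM sketch, assuming a nonnegative linear system A x = y with
Poisson-type data; the names are illustrative, and the regularized
Kaczmarz-type extensions proposed in the paper are not shown.

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_sweeps=20, eps=1e-12):
    """Ordered-subsets EM: cycle Kaczmarz-style over row blocks of A,
    applying one multiplicative EM update per block."""
    m, n = A.shape
    x = np.ones(n)                                   # strictly positive start
    blocks = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_sweeps):
        for rows in blocks:
            Ab, yb = A[rows], y[rows]
            ratio = yb / np.maximum(Ab @ x, eps)     # data / forward projection
            x *= (Ab.T @ ratio) / np.maximum(Ab.T @ np.ones(len(rows)), eps)
    return x
```

For noisy data the sweep count effectively acts as the regularization
parameter, which is where the stabilized extensions discussed in the paper
come in.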
Deformation of the N=Z nucleus 76Sr using beta-decay studies
A novel method of deducing the deformation of the N=Z nucleus 76Sr is
presented. It is based on the comparison of the experimental Gamow-Teller
strength distribution B(GT) from its beta decay with the results of QRPA
calculations. This method confirms previous indications of the strong prolate
deformation of this nucleus in a totally independent way. The measurement has
been carried out with a large Total Absorption gamma Spectrometer, "Lucrecia",
newly installed at CERN-ISOLDE.
Comment: Accepted in Phys. Rev. Letters.
Model error estimation using the expectation maximization algorithm and a particle flow filter
Model error covariances play a central role in the performance of data assimilation methods applied
to nonlinear state-space models. However, these covariances are largely
unknown in most applications. A misspecification of the model error
covariance has a strong impact on the computation of the posterior
probability density function, leading to unreliable estimates and even
to a total failure of the assimilation procedure. In this work, we propose the combination of the
expectation maximization (EM) algorithm with an efficient particle filter to estimate the model error
covariance using a batch of observations. Following the principles of the EM
algorithm, the proposed method alternates two stages: an expectation stage,
in which a particle filter is run with the current estimate of the model
error covariance to approximate the posterior probability density function,
and a maximization stage, in which the expected log-likelihood under that
density is maximized as a function of the elements of the model error
covariance. The algorithm presented here combines the EM algorithm with a
fixed-point iteration and does not require a particle smoother to approximate
the posterior densities. We demonstrate that the new method accurately and
efficiently solves the linear model problem. Furthermore, for the chaotic
nonlinear Lorenz-96 model, the method remains stable even when the
observation error covariance is 10 times larger than the estimated model
error covariance matrix, and it succeeds in moderately high-dimensional
settings where the estimated matrix is 40 x 40.
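A minimal sketch of the EM loop described above, with several simplifications
flagged loudly: a bootstrap particle filter stands in for the paper's particle
flow filter, the M-step re-estimates the covariance from filtered (not
smoothed) residuals rather than through the paper's fixed-point scheme, and
f, h, R, and Q0 are hypothetical user-supplied model pieces.

```python
import numpy as np

def em_model_error_cov(y, f, h, R, Q0, n_iter=10, n_particles=500, seed=0):
    """EM estimate of the model error covariance Q.
    E-step: bootstrap particle filter run with the current Q.
    M-step: Q <- empirical covariance of resampled model-noise draws.
    f and h are assumed vectorized over the particle axis."""
    rng = np.random.default_rng(seed)
    T, _ = y.shape
    d_x = Q0.shape[0]
    R_inv = np.linalg.inv(R)
    Q = Q0.copy()
    for _ in range(n_iter):
        x = rng.normal(size=(n_particles, d_x))      # initial ensemble
        noise_draws = []
        for t in range(T):
            eta = rng.multivariate_normal(np.zeros(d_x), Q, n_particles)
            x_pred = f(x) + eta                      # propagate with model noise
            innov = y[t] - h(x_pred)                 # (n_particles, d_y)
            logw = -0.5 * np.einsum('ij,jk,ik->i', innov, R_inv, innov)
            w = np.exp(logw - logw.max()); w /= w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)  # resample
            noise_draws.append(eta[idx])
            x = x_pred[idx]
        res = np.concatenate(noise_draws)            # pooled posterior noise
        Q = res.T @ res / len(res)                   # zero-mean M-step update
    return Q
```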
Continuous-variable optical quantum state tomography
This review covers the latest developments in continuous-variable quantum-state
tomography of optical fields and photons, placing a special accent on its
practical aspects and applications in quantum information technology. Optical
homodyne tomography is reviewed as a method of reconstructing the state of
light in a given optical mode. A range of relevant practical topics are
discussed, such as state-reconstruction algorithms (with emphasis on the
maximum-likelihood technique), the technology of time-domain homodyne
detection, mode matching issues, and engineering of complex quantum states of
light. The paper also surveys quantum-state tomography for the transverse
spatial state (spatial mode) of the field in the special case of fields
containing precisely one photon.
Comment: Finally, a revision! Comments to lvov(at)ucalgary.ca and
raymer(at)uoregon.edu are welcome.
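The maximum-likelihood technique the review emphasizes is, in its simplest
iterative form, easy to sketch. The following is a toy version of the
well-known R*rho*R iteration, assuming the caller supplies POVM elements
(e.g. projectors for binned quadrature outcomes) and the corresponding
counts; it omits the refinements a real homodyne reconstruction needs.

```python
import numpy as np

def maxlik_state(povm, counts, dim, n_iter=200):
    """Iterative maximum-likelihood reconstruction (R*rho*R):
    povm   - list of Hermitian measurement operators (dim x dim)
    counts - observed counts (array), one entry per POVM element"""
    rho = np.eye(dim, dtype=complex) / dim          # maximally mixed start
    freqs = counts / counts.sum()
    for _ in range(n_iter):
        probs = np.array([np.trace(rho @ E).real for E in povm])
        R = sum((fj / max(pj, 1e-12)) * E for fj, pj in zip(freqs, probs))
        rho = R @ rho @ R
        rho /= np.trace(rho).real                   # keep unit trace
    return rho
```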
Data Mining and Machine Learning in Astronomy
We review the current state of data mining and machine learning in astronomy.
'Data Mining' can have a somewhat mixed connotation from the point of view of a
researcher in this field. If used correctly, it can be a powerful approach,
holding the potential to fully exploit the exponentially increasing amount of
available data, promising great scientific advance. However, if misused, it can
be little more than the black-box application of complex computing algorithms
that may give little physical insight, and provide questionable results. Here,
we give an overview of the entire data mining process, from data collection
through to the interpretation of results. We cover common machine learning
algorithms, such as artificial neural networks and support vector machines,
applications from a broad range of astronomy, emphasizing those where data
mining techniques directly resulted in improved science, and important current
and future directions, including probability density functions, parallel
algorithms, petascale computing, and the time domain. We conclude that, so
long as one carefully selects an appropriate algorithm and is guided by the
astronomical problem at hand, data mining can indeed be a powerful tool, and
not a questionable black box.
Comment: Published in IJMPD. 61 pages, uses ws-ijmpd.cls. Several extra
figures, some minor additions to the text.
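As a toy illustration of one of the algorithms surveyed, the snippet below
trains a support vector machine on simulated two-colour photometry; the
class centres and spreads are invented for the example and stand in for
real survey features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# toy colour-colour features for two simulated source classes
stars    = rng.normal([0.3, 0.1], 0.20, size=(500, 2))
galaxies = rng.normal([0.8, 0.6], 0.25, size=(500, 2))
X = np.vstack([stars, galaxies])
y = np.repeat([0, 1], 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf', C=1.0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```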
Intelligence as inference or forcing Occam on the world
We propose to perform the optimization task of Universal Artificial
Intelligence (UAI) by learning a reference machine on which good programs are
short. Further, we acknowledge that the choice of reference machine on which
the UAI objective is based is arbitrary, and we therefore learn a machine
suited to the environment we are in. This rests on viewing Occam's razor as
an imperative rather than as a proposition about the world. Since the
principle cannot be true for all reference machines, we need to find a
machine that makes it true: we want both good policies and the environment
itself to have short implementations on the machine. Such a machine is learnt
iteratively through a procedure that generalizes the principle underlying the
Expectation-Maximization algorithm.