Studies of Boosted Decision Trees for MiniBooNE Particle Identification
Boosted decision trees are applied to particle identification in the
MiniBooNE experiment, operated at Fermi National Accelerator Laboratory
(Fermilab) to search for neutrino oscillations. Numerous attempts are made to
tune the boosted decision trees, to compare the performance of various boosting
algorithms, and to select input variables for optimal performance.
Comment: 28 pages, 22 figures, submitted to Nucl. Inst. & Meth.
Boosted Decision Trees as an Alternative to Artificial Neural Networks for Particle Identification
The efficacy of particle identification is compared using artificial neural
networks and boosted decision trees. The comparison is performed in the context
of MiniBooNE, an experiment at Fermilab searching for neutrino
oscillations. Based on studies of Monte Carlo samples of simulated data,
particle identification with boosting algorithms has better performance than
that with artificial neural networks for the MiniBooNE experiment. Although the
tests in this paper were for one experiment, it is expected that boosting
algorithms will find wide application in physics.
Comment: 6 pages, 5 figures; accepted for publication in Nucl. Inst. & Meth.
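The boosting procedure these abstracts compare can be illustrated with a minimal, self-contained AdaBoost over decision stumps. This is a generic sketch on an invented two-blob toy dataset, not the MiniBooNE analysis code; all names and parameters here are assumptions for illustration only:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=30):
    """AdaBoost with axis-aligned decision stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                   # per-event weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # exhaustive stump search over features, thresholds, polarities
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this stump
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # boost misclassified events
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)

rng = np.random.default_rng(0)
# toy "signal" vs "background": two overlapping Gaussian blobs
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(1.5, 1.0, (200, 2))])
y = np.r_[np.full(200, -1), np.full(200, 1)]
model = train_adaboost_stumps(X, y)
acc = (predict(model, X) == y).mean()
```

The reweighting step is the essence of boosting: events the current ensemble misclassifies gain weight, so the next stump concentrates on them.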
Studies of Stability and Robustness for Artificial Neural Networks and Boosted Decision Trees
In this paper, we compare the performance, stability and robustness of
Artificial Neural Networks (ANN) and Boosted Decision Trees (BDT) using
MiniBooNE Monte Carlo samples. These methods attempt to classify events given a
number of identification variables. The BDT algorithm has been discussed by us
in previous publications. Testing is done in this paper by smearing and
shifting the input variables of the testing samples. Based on these studies,
BDT has better particle identification performance than ANN. The degradation
in classification performance caused by shifting or smearing the variables of
the testing samples is smaller for BDT than for ANN.
Comment: 23 pages, 13 figures
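The smearing-and-shifting robustness test described in this abstract can be sketched generically: train a classifier on clean data, then perturb the test inputs and measure the accuracy drop. Here a simple nearest-centroid classifier stands in for the ANN/BDT, and all data and perturbation scales are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy two-class sample standing in for particle-identification variables
X = np.vstack([rng.normal(0.0, 1.0, (500, 3)), rng.normal(2.0, 1.0, (500, 3))])
y = np.r_[np.zeros(500), np.ones(500)]

# nearest-centroid classifier as a stand-in for a trained ANN or BDT
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def classify(Z):
    return (np.linalg.norm(Z - c1, axis=1)
            < np.linalg.norm(Z - c0, axis=1)).astype(float)

def accuracy(Z):
    return (classify(Z) == y).mean()

base = accuracy(X)
# smearing: add Gaussian noise to each input variable of the test sample
smeared = accuracy(X + rng.normal(0.0, 0.3, X.shape))
# shifting: bias every input variable by a fixed offset
shifted = accuracy(X + 0.3)
degradation = base - min(smeared, shifted)
```

Comparing `degradation` across classifiers is the paper's stability criterion: the more robust method loses less accuracy under the same perturbation.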
Fingerprinting Hysteresis
We test the predictive power of first-order reversal curve (FORC) diagrams
using simulations of random magnets. In particular, we compute a histogram of
the switching fields of the underlying microscopic switching units along the
major hysteresis loop, and compare to the corresponding FORC diagram. We find
qualitative agreement between the switching-field histogram and the FORC
diagram, yet differences are noticeable. We discuss possible sources for these
differences and present results for frustrated systems where the discrepancies
are more pronounced.
Comment: 4 pages, 5 figures
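The switching-field histogram this abstract compares against the FORC diagram can be sketched with a minimal ensemble of independent hysterons (bistable switching units). This is a toy illustration with an assumed Gaussian switching-field distribution, not the paper's random-magnet simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
# independent hysterons: unit i flips up at +h_i on the ascending branch
h_switch = rng.normal(1.0, 0.3, 5000)
fields = np.linspace(-2.5, 2.5, 501)         # ascending branch of the major loop

state = -np.ones_like(h_switch)              # start fully magnetized down
magnetization, events = [], []
for H in fields:
    flipped = (state < 0) & (H >= h_switch)  # units switching at this step
    events.append(int(flipped.sum()))
    state[flipped] = 1.0
    magnetization.append(state.mean())

# histogram of switching events along the ascending branch of the loop
hist = np.array(events, dtype=float)
peak_field = fields[int(np.argmax(hist))]
```

For non-interacting units like these, the histogram simply recovers the input switching-field distribution; interactions and frustration are what make the real FORC comparison nontrivial.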
Optimizing Beam Transport in Rapidly Compressing Beams on the Neutralized Drift Compression Experiment - II
The Neutralized Drift Compression Experiment-II (NDCX-II) is an induction
linac that generates intense pulses of 1.2 MeV helium ions for heating matter
to extreme conditions. Here, we present recent results on optimizing beam
transport. The NDCX-II beamline includes a 1-meter-long drift section
downstream of the last transport solenoid, which is filled with
charge-neutralizing plasma that enables rapid longitudinal compression of an
intense ion beam against space-charge forces. The transport section on NDCX-II
consists of 28 solenoids. Finding optimal field settings for a group of
solenoids requires knowledge of the beam's envelope parameters. Imaging
the beam on a scintillator gives the beam radius, but the envelope angle
dr/dz is not measured directly. We demonstrate how the parameters of the beam
envelope (r, dr/dz, and emittance) can be reconstructed from a series of images
taken at varying B-field strengths of a solenoid upstream of the scintillator.
We use this technique to evaluate emittance at several points in the NDCX-II
beamline and to optimize the trajectory of the beam at the entry of the
plasma-filled drift section.
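The reconstruction described above is a solenoid-scan fit: the measured spot size squared is quadratic in the lens strength, and the fit coefficients determine the beam's second moments and emittance. The sketch below uses a thin-lens model with noiseless, purely illustrative numbers (drift length and moments are assumptions, not NDCX-II parameters):

```python
import numpy as np

# thin-lens solenoid-scan sketch: beam radius on a screen a drift L
# downstream of a lens of varying focusing strength x = 1/f
L = 1.0                                      # drift length [m] (assumed)
sig11, sig12, sig22 = 4e-6, -2e-6, 2e-6      # "true" moments <r^2>, <r r'>, <r'^2>
eps_true = np.sqrt(sig11 * sig22 - sig12**2)

x = np.linspace(0.0, 3.0, 12)                # scan of lens strengths 1/f [1/m]
# transfer through lens + drift: (m11, m12) = (1 - L*x, L), so
# r^2(screen) = m11^2 sig11 + 2 m11 m12 sig12 + m12^2 sig22  (quadratic in x)
r2 = (1 - L*x)**2 * sig11 + 2*(1 - L*x)*L*sig12 + L**2 * sig22

A, B, C = np.polyfit(x, r2, 2)               # fit r^2 = A x^2 + B x + C
s11 = A / L**2                               # invert fit for the moments
s12 = -(B + 2*L*s11) / (2*L**2)
s22 = (C - s11 - 2*L*s12) / L**2
eps_fit = np.sqrt(s11 * s22 - s12**2)
```

In a real scan `r2` comes from scintillator images at each solenoid setting and carries measurement noise, so the fit is a least-squares estimate rather than exact.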
Short-Pulse, Compressed Ion Beams at the Neutralized Drift Compression Experiment
We have commenced experiments with intense short pulses of ion beams on the
Neutralized Drift Compression Experiment (NDCX-II) at Lawrence Berkeley
National Laboratory, with 1-mm beam spot size within 2.5 ns full-width at half
maximum. The ion kinetic energy is 1.2 MeV. To enable the short pulse duration
and mm-scale focal spot radius, the beam is neutralized in a 1.5-meter-long
drift compression section following the last accelerator cell. A
short-focal-length solenoid focuses the beam in the presence of the volumetric
plasma that is near the target. In the accelerator, the line-charge density
increases due to the velocity ramp imparted on the beam bunch. The scientific
topics to be explored are warm dense matter, the dynamics of radiation damage
in materials, and intense beam and beam-plasma physics including select topics
of relevance to the development of heavy-ion drivers for inertial fusion
energy. Below the transition to melting, the short beam pulses offer an
opportunity to study the multi-scale dynamics of radiation-induced damage in
materials with pump-probe experiments, and to stabilize novel metastable phases
of materials when short-pulse heating is followed by rapid quenching. First
experiments used a lithium ion source; a new plasma-based helium ion source
shows much greater charge delivered to the target.
Comment: 4 pages, 2 figures, 1 table. Submitted to the proceedings for the
Ninth International Conference on Inertial Fusion Sciences and Applications,
IFSA 201
Irradiation of Materials with Short, Intense Ion pulses at NDCX-II
We present an overview of the performance of the Neutralized Drift
Compression Experiment-II (NDCX-II) accelerator at Berkeley Lab, and report on
recent target experiments on beam driven melting and transmission ion energy
loss measurements with nanosecond and millimeter-scale ion beam pulses and thin
tin foils. Bunches with around 10^11 ions, 1-mm radius, and 2-30 ns FWHM
duration have been created with corresponding fluences in the range of 0.1 to
0.7 J/cm^2. To achieve these short pulse durations and mm-scale focal spot
radii, the 1.1 MeV He+ ion beam is neutralized in a drift compression section,
which removes the space charge defocusing effect during final compression and
focusing. The beam space charge and drift compression techniques resemble the
beam conditions and manipulations necessary in heavy-ion inertial fusion
accelerators. Quantitative comparison of detailed particle-in-cell simulations
with the experiment plays an important role in optimizing accelerator
performance.
Comment: 15 pages, 7 figures. Revised manuscript submitted to Laser and
Particle Beam
Population dynamical behavior of Lotka-Volterra system under regime switching
In this paper, we investigate a Lotka-Volterra system under regime switching,
dx(t) = diag(x_1(t), ..., x_n(t)) [(b(r(t)) + A(r(t)) x(t)) dt + sigma(r(t)) dB(t)],
where B(t) is a standard Brownian motion. The aim here is to find out what happens under regime switching. We first obtain sufficient conditions for the existence of global positive solutions, stochastic permanence and extinction. We find that both stochastic permanence and extinction have close relationships with the stationary probability distribution of the Markov chain. The limit of the time average of the sample path of the solution is then estimated by two constants related to the stationary distribution and the coefficients. Finally, the main results are illustrated by several examples.
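A regime-switching diffusion of this form can be simulated with an Euler-Maruyama scheme in which a continuous-time Markov chain r(t) selects the active coefficient set. The one-dimensional sketch below (logistic case, two regimes) uses invented coefficients purely to illustrate the dynamics, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(3)
# two regimes for a 1-D Lotka-Volterra (logistic) SDE:
#   dx = x * [(b_r + a_r * x) dt + sigma_r dB]
b = {1: 1.0, 2: 0.5}          # regime-dependent growth rates (assumed)
a = {1: -1.0, 2: -0.8}        # self-limitation coefficients (negative)
sigma = {1: 0.2, 2: 0.3}      # regime-dependent noise intensities
q12, q21 = 1.0, 1.0           # Markov-chain switching rates

dt, T = 1e-3, 50.0
steps = int(T / dt)
x, r = 0.5, 1
path = np.empty(steps)
for k in range(steps):
    # switch regime with probability q * dt in this step
    if rng.random() < (q12 if r == 1 else q21) * dt:
        r = 3 - r
    dB = rng.normal(0.0, np.sqrt(dt))
    x += x * ((b[r] + a[r] * x) * dt + sigma[r] * dB)
    x = max(x, 1e-12)         # numerical guard; the true solution stays positive
    path[k] = x

time_average = path.mean()    # the quantity bounded by the paper's two constants
```

The trajectory wanders between the regimes' deterministic equilibria (here 1.0 and 0.625), and its time average is the object the abstract's stationary-distribution estimates control.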
The Sample Complexity of Dictionary Learning
A large set of signals can sometimes be described sparsely using a
dictionary, that is, every element can be represented as a linear combination
of few elements from the dictionary. Algorithms for various signal processing
applications, including classification, denoising and signal separation, learn
a dictionary from a set of signals to be represented. Can we expect that the
representation found by such a dictionary for a previously unseen example from
the same source will have L_2 error of the same magnitude as that for the
given examples? We assume signals are generated from a fixed distribution, and
study this question from a statistical learning theory perspective.
We develop generalization bounds on the quality of the learned dictionary for
two types of constraints on the coefficient selection, as measured by the
expected L_2 error in representation when the dictionary is used. For the case
of l_1 regularized coefficient selection we provide a generalization bound of
the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the
number of elements in the dictionary, lambda is a bound on the l_1 norm of the
coefficient vector and m is the number of samples, which complements existing
results. For the case of representing a new signal as a combination of at most
k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m))
under an assumption on the level of orthogonality of the dictionary (low Babel
function). We further show that this assumption holds for most dictionaries in
high dimensions in a strong probabilistic sense. Our results further yield fast
rates of order 1/m as opposed to 1/sqrt(m) using localized Rademacher
complexity. We provide similar results in a general setting using kernels with
weak smoothness requirements.
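The stated l_1 bound can be evaluated numerically to see its O(1/sqrt(m)) decay in the sample size; the dimension, dictionary size, and norm bound below are arbitrary illustrative values, not figures from the paper:

```python
import math

def l1_bound(n, p, lam, m):
    """Order of the l_1-regularized bound: sqrt(n p log(m lam) / m).
    n: signal dimension, p: dictionary size,
    lam: bound on the l_1 norm of coefficients, m: number of samples."""
    return math.sqrt(n * p * math.log(m * lam) / m)

n, p, lam = 64, 256, 10.0            # illustrative problem sizes (assumed)
bounds = [l1_bound(n, p, lam, m) for m in (10**4, 10**5, 10**6)]
```

Each tenfold increase in m shrinks the bound by roughly sqrt(10), up to the slowly growing log factor, which is the usual slow-rate behavior the localized Rademacher analysis improves to 1/m.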
Clinical narrative analytics challenges
Precision medicine, or evidence-based medicine, is based on
the extraction of knowledge from medical records to provide individuals
with the appropriate treatment at the appropriate moment according to
the patient's features. Despite the efforts to use clinical narratives for
clinical decision support, many challenges still have to be faced today,
such as multilinguality, diversity of terms and formats across services,
acronyms, and negation, to name but a few. The same problems exist
when one wants to analyze narratives in the literature, whose analysis would
provide physicians and researchers with highlights. In this talk we will
analyze challenges, solutions and open problems, and will examine several
frameworks and tools that are able to perform NLP over free text to
extract medical entities by means of a Named Entity Recognition process.
We will also present a framework we have developed to extract and validate
medical terms. In particular, we present two use cases: (i) extraction of
medical entities from a set of infectious disease description texts provided
by MedlinePlus, and (ii) identification of stroke scales in clinical
narratives written in Spanish.
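The simplest form of the entity-extraction step this abstract describes is dictionary (gazetteer) matching against a validated term lexicon. The sketch below is a minimal illustration with an invented four-entry lexicon and an invented sample sentence, not the talk's framework or MedlinePlus data:

```python
import re

# toy gazetteer standing in for a validated medical-term lexicon
# (illustrative entries only)
lexicon = {
    "influenza": "DISEASE",
    "meningitis": "DISEASE",
    "amoxicillin": "DRUG",
    "fever": "SYMPTOM",
}

def extract_entities(text):
    """Whole-word dictionary NER over lowercased text.

    Returns (start_offset, term, label) tuples sorted by position."""
    found = []
    lowered = text.lower()
    for term, label in lexicon.items():
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            found.append((m.start(), term, label))
    return sorted(found)

sample = "Patient with fever and suspected meningitis; started amoxicillin."
entities = extract_entities(sample)
```

Real clinical NER must additionally handle the problems the abstract lists, such as acronyms, negation ("no fever"), and multilingual variants, which is why learned models are layered on top of gazetteers in practice.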