Variational Autoencoders for New Physics Mining at the Large Hadron Collider
Using variational autoencoders trained on known physics processes, we develop
a one-sided threshold test to isolate previously unseen processes as outlier
events. Since the autoencoder training does not depend on any specific new
physics signature, the proposed procedure makes no assumptions about the
nature of new physics. An event selection based on this algorithm would be
complementary to classic LHC searches, typically based on model-dependent
hypothesis testing. Such an algorithm would deliver a list of anomalous events
that the experimental collaborations could further scrutinize and even release
as a catalog, similar to what is typically done in other scientific domains.
Event topologies repeating in this dataset could inspire new-physics model
building and new experimental searches. Running in the trigger system of the
LHC experiments, such an application could identify anomalous events that would
be otherwise lost, extending the scientific reach of the LHC.
Comment: 29 pages, 12 figures, 5 tables
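The one-sided threshold test described above can be sketched as follows (a minimal illustration with invented numbers: the loss values and the 99.9% working point are assumptions for the example, not taken from the paper). An event is flagged as anomalous when its reconstruction loss exceeds a high quantile of the loss distribution measured on known-physics events:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in reconstruction losses (hypothetical): a trained autoencoder
# gives low loss on known-physics events, higher loss on unseen processes.
known_losses = rng.exponential(scale=1.0, size=10_000)  # validation set
candidate_losses = np.array([0.5, 2.0, 15.0, 40.0])     # incoming events

# One-sided threshold: accept a fixed, small false-positive rate on known
# physics by cutting at a high quantile of the known-physics losses.
threshold = np.quantile(known_losses, 0.999)

# Events above the threshold are kept as anomaly candidates.
anomalous = candidate_losses > threshold
```

The key property is that the threshold is calibrated only on known processes, so the selection stays agnostic to what the new physics looks like.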
Weakly-Supervised Temporal Localization via Occurrence Count Learning
We propose a novel model for temporal detection and localization which allows
the training of deep neural networks using only counts of event occurrences as
training labels. This powerful weakly-supervised framework alleviates the
burden of the imprecise and time-consuming process of annotating event
locations in temporal data. Unlike existing methods, in which localization is
explicitly achieved by design, our model learns localization implicitly as a
byproduct of learning to count instances. This unique feature is a direct
consequence of the model's theoretical properties. We validate the
effectiveness of our approach in a number of experiments (drum hit and piano
onset detection in audio, digit detection in images) and demonstrate
performance comparable to that of fully-supervised state-of-the-art methods,
despite much weaker training requirements.
Comment: Accepted at ICML 201
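The core idea of supervising with counts alone can be sketched in a few lines (a hedged toy with invented probabilities; the paper's actual loss and network differ): sum per-frame event probabilities over time and penalize the difference to the count label, so that localization can emerge as peaks in the per-frame probabilities.

```python
import numpy as np

def count_loss(frame_probs, true_count):
    """Squared error between the summed per-frame event probabilities
    and the weak count label -- no event locations are needed."""
    predicted_count = np.sum(frame_probs)
    return (predicted_count - true_count) ** 2

# Hypothetical per-frame probabilities from a network over 6 time steps;
# the clip is labeled only with "two events" (e.g. two drum hits).
probs = np.array([0.05, 0.9, 0.1, 0.05, 0.85, 0.05])
loss = count_loss(probs, true_count=2)

# Localization falls out implicitly: the high-probability frames
# (indices 1 and 4 here) mark where the events occur.
```
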
An Efficient Automatic Mass Classification Method In Digitized Mammograms Using Artificial Neural Network
In this paper we present an efficient computer aided mass classification
method in digitized mammograms using Artificial Neural Network (ANN), which
performs benign-malignant classification on region of interest (ROI) that
contains mass. One of the major mammographic characteristics for mass
classification is texture. ANN exploits this important factor to classify the
mass into benign or malignant. The statistical textural features used in
characterizing the masses are mean, standard deviation, entropy, skewness,
kurtosis and uniformity. The main aim of the method is to increase the
effectiveness and efficiency of the classification process in an objective
manner and to reduce the number of false-positive malignancy findings. A
three-layer artificial neural network (ANN) with seven input features is
proposed for classifying the marked regions as benign or malignant; it
achieves 90.91% sensitivity and 83.87% specificity, which is promising
compared to the radiologist's sensitivity of 75%.
Comment: 13 pages, 10 figures
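The first-order statistical texture features named in the abstract can be computed directly from the ROI pixel intensities. The sketch below is generic (the `texture_features` helper and the 8-bit histogram binning are assumptions, not the paper's exact definitions):

```python
import numpy as np

def texture_features(roi, bins=256):
    """Six first-order texture features of an image ROI: mean, standard
    deviation, entropy, skewness, kurtosis, and uniformity (energy)."""
    x = roi.astype(float).ravel()
    mean = x.mean()
    std = x.std()

    # Histogram-based features over 8-bit intensity levels (assumed range).
    hist, _ = np.histogram(x, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    uniformity = np.sum(p ** 2)  # also called "energy"

    # Standardized central moments.
    skewness = np.mean((x - mean) ** 3) / std ** 3 if std else 0.0
    kurtosis = np.mean((x - mean) ** 4) / std ** 4 if std else 0.0

    return mean, std, entropy, skewness, kurtosis, uniformity

# Example on a random 32x32 8-bit patch.
roi = np.random.default_rng(1).integers(0, 256, size=(32, 32))
feats = texture_features(roi)
```

These scalar features then form the input vector of the classifier.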
Towards automatic pulmonary nodule management in lung cancer screening with deep learning
The introduction of lung cancer screening programs will produce an
unprecedented amount of chest CT scans in the near future, which radiologists
will have to read in order to decide on a patient follow-up strategy. According
to the current guidelines, the workup of screen-detected nodules strongly
relies on nodule size and nodule type. In this paper, we present a deep
learning system based on multi-stream multi-scale convolutional networks, which
automatically classifies all nodule types relevant for nodule workup. The
system processes raw CT data containing a nodule without the need for any
additional information such as nodule segmentation or nodule size and learns a
representation of 3D data by analyzing an arbitrary number of 2D views of a
given nodule. The deep learning system was trained with data from the Italian
MILD screening trial and validated on an independent set of data from the
Danish DLCST screening trial. We analyze the advantage of processing nodules at
multiple scales with a multi-stream convolutional network architecture, and we
show that the proposed deep learning system achieves performance at classifying
nodule type that surpasses that of classical machine learning approaches and
is within the inter-observer variability among four experienced human
observers.
Comment: Published in Scientific Reports
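The "2D views of a 3D patch at multiple scales" idea can be sketched as slicing orthogonal planes out of the volume at several crop sizes (the `extract_views` helper is hypothetical; the actual system uses more view angles and feeds each view to a stream of a 2D convolutional network):

```python
import numpy as np

def extract_views(volume, scales=(32, 64)):
    """Cut 2D views of a cubic 3D nodule patch along the three orthogonal
    planes through its center, at several crop sizes (multi-scale)."""
    cz, cy, cx = (s // 2 for s in volume.shape)
    views = []
    for s in scales:
        h = s // 2
        views.append(volume[cz, cy - h:cy + h, cx - h:cx + h])  # axial
        views.append(volume[cz - h:cz + h, cy, cx - h:cx + h])  # coronal
        views.append(volume[cz - h:cz + h, cy - h:cy + h, cx])  # sagittal
    return views

# Example: a 64^3 patch yields three 32x32 and three 64x64 views.
vol = np.zeros((64, 64, 64))
views = extract_views(vol)
```

Predictions over the per-view streams are then combined, which is what makes the number of views arbitrary.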
Finding Galaxy Groups In Photometric Redshift Space: the Probability Friends-of-Friends (pFoF) Algorithm
We present a structure finding algorithm designed to identify galaxy groups
in photometric redshift data sets: the probability friends-of-friends (pFoF)
algorithm. This algorithm is derived by combining the friends-of-friends
algorithm in the transverse direction and the photometric redshift probability
densities in the radial dimension. The innovative characteristic of our
group-finding algorithm is the improvement of redshift estimation via the
constraints given by the transversely connected galaxies in a group, based on
the assumption that all galaxies in a group have the same redshift. Tests using
the Virgo Consortium Millennium Simulation mock catalogs allow us to show that
the recovery rate of the pFoF algorithm is larger than 80% for mock groups of
at least $2\times10^{13}\,M_{\sun}$, while the false detection rate is about 10%
for pFoF groups containing at least net members. Applying the algorithm
to the CNOC2 group catalogs gives results which are consistent with the mock
catalog tests. From all these results, we conclude that our group-finding
algorithm offers an effective yet simple way to identify galaxy groups in
photometric redshift catalogs.
Comment: AJ accepted
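The transverse friends-of-friends step can be sketched with a tiny union-find linker (an O(n²) toy for illustration; the coordinates and linking length are invented, and the paper's radial photometric-redshift probability weighting is omitted):

```python
import numpy as np

def friends_of_friends(xy, link_length):
    """Minimal transverse friends-of-friends: link any two galaxies closer
    than `link_length` and return a group label per galaxy (the root of
    its connected component), via union-find with path halving."""
    n = len(xy)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(*(xy[i] - xy[j])) < link_length:
                parent[find(i)] = find(j)  # union the two groups

    return [find(i) for i in range(n)]

# Two nearby galaxies form a group; a distant third stays isolated.
xy = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
labels = friends_of_friends(xy, link_length=0.5)
```

In pFoF, galaxies linked this way in the transverse plane then jointly constrain the group redshift, under the assumption that all members share one redshift.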