Jet Charge and Machine Learning
Modern machine learning techniques, such as convolutional, recurrent and
recursive neural networks, have shown promise for jet substructure at the Large
Hadron Collider. For example, they have demonstrated effectiveness at boosted
top or W boson identification and at quark/gluon discrimination. We explore
these methods for the purpose of classifying jets according to their electric
charge. We find that both neural networks that incorporate distance within the
jet as an input and boosted decision trees including radial distance
information can provide significant improvement in jet charge extraction over
current methods. Specifically, convolutional, recurrent, and recursive networks
can provide the largest improvement over traditional methods, in part by
effectively utilizing distance within the jet or clustering history. The
advantages of using a fixed-size input representation (as with the CNN) or a
small input representation (as with the RNN) suggest that both convolutional
and recurrent networks will be essential to the future of modern machine
learning at colliders.
Comment: 17 pages, 8 figures, 1 table; Updated to JHEP version
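For reference, the standard observable behind the "current methods" mentioned
above is the pT-weighted jet charge, Q_kappa = sum_i q_i (pT_i)^kappa /
(pT_jet)^kappa. Below is a minimal NumPy sketch of that baseline; the toy
constituent values and the choice kappa = 0.5 are illustrative assumptions.

```python
# Minimal sketch of the pT-weighted jet charge baseline,
#   Q_kappa = sum_i q_i * (pT_i)^kappa / (pT_jet)^kappa.
# Toy inputs and kappa = 0.5 are illustrative assumptions.
import numpy as np

def jet_charge(pt, charge, kappa=0.5):
    """pt: constituent transverse momenta (GeV);
    charge: constituent electric charges (units of e)."""
    pt = np.asarray(pt, dtype=float)
    charge = np.asarray(charge, dtype=float)
    pt_jet = pt.sum()  # scalar-pT sum as a simple proxy for the jet pT
    return float(np.sum(charge * pt**kappa) / pt_jet**kappa)

# Toy three-constituent jet: a positive hadron, a negative hadron, a neutral one
print(jet_charge(pt=[120.0, 40.0, 15.0], charge=[+1, -1, 0]))
```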
Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces"
Large-scale variations still pose a challenge in unconstrained face
detection. To the best of our knowledge, no current face detection algorithm
can detect a face as large as 800 x 800 pixels while simultaneously detecting
another one as small as 8 x 8 pixels within a single image with equally high
accuracy. We propose a two-stage cascaded face detection framework, Multi-Path
Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a
deep neural network with a classic learning strategy, to tackle this challenge.
The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes
faces at three different scales. It simultaneously utilizes three parallel
outputs of the convolutional feature maps to predict multi-scale candidate face
regions. The "atrous" convolution trick (convolution with up-sampled filters)
and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN
to further boost its performance. The second stage is a Boosted Forests
classifier, which utilizes deep facial features pooled from inside the
candidate face regions as well as deep contextual features pooled from a larger
region surrounding the candidate face regions. This step is included to further
remove hard negative samples. Experiments show that this approach achieves
state-of-the-art face detection performance on the WIDER FACE dataset "hard"
partition, outperforming the previous best result by 9.6% in Average Precision.
Comment: 11 pages, 7 figures, to be presented at CRV 201
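To make the "atrous" convolution trick concrete: dilating a 3x3 kernel by a
factor of 2 enlarges its receptive field to 5x5 without adding weights or
reducing output resolution. A minimal PyTorch sketch follows; the channel
count and dilation rate are illustrative assumptions, not the MP-RPN
configuration.

```python
# Sketch of the "atrous" (dilated) convolution trick: same 3x3 kernel
# (nine weights), wider 5x5 receptive field, identical output resolution.
# Channel counts and dilation rate are illustrative assumptions.
import torch
import torch.nn as nn

conv_dense  = nn.Conv2d(256, 256, kernel_size=3, padding=1)              # standard 3x3
conv_atrous = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)  # atrous 3x3

x = torch.randn(1, 256, 64, 64)
print(conv_dense(x).shape, conv_atrous(x).shape)  # both (1, 256, 64, 64)
```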
Fusion of Multispectral Data Through Illumination-aware Deep Neural Networks for Pedestrian Detection
Multispectral pedestrian detection has received extensive attention in recent
years as a promising solution to facilitate robust human target detection for
around-the-clock applications (e.g. security surveillance and autonomous
driving). In this paper, we demonstrate illumination information encoded in
multispectral images can be utilized to significantly boost performance of
pedestrian detection. A novel illumination-aware weighting mechanism is
presented to accurately depict the illumination condition of a scene. Such illumination
information is incorporated into two-stream deep convolutional neural networks
to learn multispectral human-related features under different illumination
conditions (daytime and nighttime). Moreover, we utilize illumination
information together with multispectral data to generate a more accurate semantic
segmentation, which is used to boost pedestrian detection accuracy. Putting all
of the pieces together, we present a powerful framework for multispectral
pedestrian detection based on multi-task learning of illumination-aware
pedestrian detection and semantic segmentation. Our proposed method is trained
end-to-end using a well-designed multi-task loss function and outperforms
state-of-the-art approaches on the KAIST multispectral pedestrian dataset.
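A minimal PyTorch sketch of an illumination-aware weighting mechanism in the
spirit described above: a small network predicts a scalar illumination score w
from the color image and fuses the two feature streams as
w * f_day + (1 - w) * f_night. The module sizes and this exact fusion rule are
illustrative assumptions, not the authors' architecture.

```python
# Sketch of illumination-aware weighting: a tiny CNN regresses an
# illumination score w from the RGB input and gates two feature streams.
# All shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class IlluminationGate(nn.Module):
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, rgb, f_day, f_night):
        w = self.score(rgb).view(-1, 1, 1, 1)   # w in (0, 1), one per image
        return w * f_day + (1.0 - w) * f_night  # illumination-weighted fusion

gate = IlluminationGate()
rgb = torch.randn(2, 3, 128, 128)       # color images
f_day = torch.randn(2, 256, 16, 16)     # day-stream features
f_night = torch.randn(2, 256, 16, 16)   # night-stream features
print(gate(rgb, f_day, f_night).shape)  # torch.Size([2, 256, 16, 16])
```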
Pulling Out All the Tops with Computer Vision and Deep Learning
We apply computer vision with deep learning -- in the form of a convolutional
neural network (CNN) -- to build a highly effective boosted top tagger.
Previous work (the "DeepTop" tagger of Kasieczka et al.) has shown that a
CNN-based top tagger can achieve comparable performance to state-of-the-art
conventional top taggers based on high-level inputs. Here, we introduce a
number of improvements to the DeepTop tagger, including architecture, training,
image preprocessing, sample size and color pixels. Our final CNN top tagger
outperforms BDTs based on high-level inputs by a factor of ~2--3 or more
in background rejection, over a wide range of tagging efficiencies and fiducial
jet selections. As reference points, we achieve a QCD background rejection
factor of 500 (60) at 50% top tagging efficiency for fully-merged (non-merged)
top jets with pT in the 800--900 GeV (350--450 GeV) range. Our CNN can also
be straightforwardly extended to the classification of other types of jets, and
the lessons learned here may be useful to others designing their own deep NNs
for LHC applications.
Comment: 33 pages, 11 figures
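As a schematic of the approach, a jet-image tagger pixelates the jet in the
(eta, phi) plane with several "color" channels and classifies the image as
top vs. QCD with a small CNN. In the sketch below, the 37x37 image size, the
three channels, and all layer widths are illustrative assumptions, not the
paper's tuned architecture.

```python
# Sketch of a jet-image CNN top tagger: multi-channel ("color") jet images
# in, top-vs-QCD logits out. Image size and layer widths are assumptions.
import torch
import torch.nn as nn

tagger = nn.Sequential(
    nn.Conv2d(3, 32, 4, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 4, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 4, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(64), nn.ReLU(),
    nn.Linear(64, 2),                # logits: (QCD, top)
)

images = torch.randn(8, 3, 37, 37)   # batch of 37x37 three-channel jet images
print(tagger(images).shape)          # torch.Size([8, 2])
```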