Convolutional Neural Networks Applied to Neutrino Events in a Liquid Argon Time Projection Chamber
We present several studies of convolutional neural networks applied to data
coming from the MicroBooNE detector, a liquid argon time projection chamber
(LArTPC). The algorithms studied include the classification of single particle
images, the localization of single particle and neutrino interactions in an
image, and the detection of a simulated neutrino event overlaid with cosmic ray
backgrounds taken from real detector data. These studies demonstrate the
potential of convolutional neural networks for particle identification or event
detection on simulated neutrino interactions. We also address technical issues
that arise when applying this technique to data from a large LArTPC at or near
ground level.
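The classification task described above (assigning a particle type to a single detector image) can be illustrated with a minimal convolutional pipeline. This is a from-scratch sketch of the generic conv → ReLU → pool → softmax pattern, not the MicroBooNE network; the kernel and weight shapes are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernels, weights):
    """Conv layer -> ReLU -> global average pool -> linear -> softmax.

    `kernels` has shape (n_filters, kh, kw); `weights` has shape
    (n_classes, n_filters). Returns a probability over particle classes.
    """
    feats = np.array([conv2d(image, k) for k in kernels])
    pooled = np.maximum(feats, 0.0).mean(axis=(1, 2))  # global average pool
    return softmax(weights @ pooled)
```

A real LArTPC network would stack many such layers and be trained on simulated interactions; this sketch only shows the forward pass.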
Crowdsourcing in Computer Vision
Computer vision systems require large amounts of manually annotated data to
properly learn challenging visual concepts. Crowdsourcing platforms offer an
inexpensive method to capture human knowledge and understanding, for a vast
number of visual perception tasks. In this survey, we describe the types of
annotations computer vision researchers have collected using crowdsourcing, and
how they have ensured that this data is of high quality while annotation effort
is minimized. We begin by discussing data collection on both classic (e.g.,
object recognition) and recent (e.g., visual story-telling) vision tasks. We
then summarize key design decisions for creating effective data collection
interfaces and workflows, and present strategies for intelligently selecting
the most important data instances to annotate. Finally, we conclude with some
thoughts on the future of crowdsourcing in computer vision.
Comment: A 69-page meta review of the field, Foundations and Trends in Computer Graphics and Vision, 201
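One of the instance-selection strategies the survey covers, uncertainty sampling, can be sketched in a few lines: rank unlabeled items by the entropy of the current model's predicted class distribution and send the most uncertain ones to crowd annotators. This is a generic illustration, not a method from the survey itself; the function names are mine.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of predicted class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def select_for_annotation(probs, budget):
    """Indices of the `budget` most uncertain instances.

    `probs` has shape (n_instances, n_classes); higher entropy means the
    model is less sure, so annotation effort is spent where it helps most.
    """
    scores = entropy(probs)
    return np.argsort(-scores)[:budget]
```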
Introducing Geometry in Active Learning for Image Segmentation
We propose an Active Learning approach to training a segmentation classifier
that exploits geometric priors to streamline the annotation process in 3D image
volumes. To this end, we use these priors not only to select the voxels most in
need of annotation but also to guarantee that they lie on a 2D planar patch,
which makes them much easier to annotate than if they were randomly distributed
in the volume. A simplified version of this approach is effective in natural 2D
images. We evaluated our approach on Electron Microscopy and Magnetic Resonance
image volumes, as well as on natural images. Comparing our approach against
several accepted baselines demonstrates a marked performance increase.
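The core idea of constraining annotation queries to a plane can be sketched as follows: given a per-voxel uncertainty volume, pick the single 2D slice with the highest total uncertainty and send that whole plane to the annotator. This is a simplified, axis-aligned stand-in for the paper's planar patches, written as an assumed illustration rather than the authors' algorithm.

```python
import numpy as np

def plane_uncertainty(unc_volume, axis):
    """Total uncertainty of every axis-aligned slice along `axis`."""
    other = tuple(a for a in range(unc_volume.ndim) if a != axis)
    return unc_volume.sum(axis=other)

def select_annotation_plane(unc_volume):
    """Return (axis, index) of the 2-D slice with the most total uncertainty.

    Restricting the query to one plane means the annotator labels a flat
    image instead of voxels scattered through the 3-D volume.
    """
    best = None
    for axis in range(3):
        sums = plane_uncertainty(unc_volume, axis)
        idx = int(np.argmax(sums))
        if best is None or sums[idx] > best[2]:
            best = (axis, idx, float(sums[idx]))
    return best[0], best[1]
```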
- …