Novel deep learning methods for track reconstruction
For the past year, the HEP.TrkX project has been investigating machine
learning solutions to LHC particle track reconstruction problems. A variety of
models were studied that drew inspiration from computer vision applications and
operated on an image-like representation of tracking detector data. While these
approaches have shown some promise, image-based methods face challenges in
scaling up to realistic HL-LHC data due to high dimensionality and sparsity. In
contrast, models that can operate on the spacepoint representation of track
measurements ("hits") can exploit the structure of the data to solve tasks
efficiently. In this paper we will show two sets of new deep learning models
for reconstructing tracks using space-point data arranged as sequences or
connected graphs. In the first set of models, Recurrent Neural Networks (RNNs)
are used to extrapolate, build, and evaluate track candidates akin to Kalman
Filter algorithms. Such models can express their own uncertainty when trained
with an appropriate likelihood loss function. The second set of models uses
Graph Neural Networks (GNNs) for the tasks of hit classification and segment
classification. These models read a graph of connected hits and compute
features on the nodes and edges. They adaptively learn which hit connections
are important and which are spurious. The models are scalable, with simple
architectures and relatively few parameters. Results for all models are
presented on ACTS generic detector simulated data.
Comment: CTD 2018 proceeding
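The segment-classification idea above can be illustrated with a minimal sketch: a tiny node encoder followed by per-edge scoring over pairs of connected hits. This is not the paper's architecture; the feature sizes, random weights, and scoring function are illustrative assumptions only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_scores(node_feats, edges, W_node, w_edge):
    """Score each candidate hit connection (segment).

    node_feats: (N, F) spacepoint features; edges: list of (i, j) index pairs.
    W_node, w_edge are stand-ins for learned weights (random here)."""
    h = np.tanh(node_feats @ W_node)           # encode each hit
    scores = []
    for i, j in edges:
        pair = np.concatenate([h[i], h[j]])    # edge feature from its endpoints
        scores.append(sigmoid(pair @ w_edge))  # probability the segment is real
    return np.array(scores)

rng = np.random.default_rng(0)
hits = rng.normal(size=(5, 3))                 # 5 hits with (r, phi, z)-like features
edges = [(0, 1), (1, 2), (2, 3), (0, 4)]       # candidate connections
W = rng.normal(size=(3, 8))
w = rng.normal(size=(16,))
probs = edge_scores(hits, edges, W, w)         # one score in (0, 1) per edge
```

In a trained model, edges with low scores would be pruned, leaving the connections that form plausible track segments; iterated message passing (which this sketch omits) is what lets the network use neighborhood context rather than endpoint features alone.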
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms prevalently rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weight-less particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
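The weight-less dynamics can be sketched as follows: every particle follows the prior drift plus a feedback term proportional to its own prediction error, so no importance weights are needed. The fixed gain, the Euler discretization, and the specific drift and observation functions below are illustrative assumptions; in the NPF the gain is itself adapted online.

```python
import numpy as np

def npf_step(particles, y, f, g, gain, dt, rng):
    """One Euler step of a weight-less particle filter.

    Each particle moves under the prior dynamics f, corrected by a
    feedback term gain * (y - g(particle)); diffusion noise keeps the
    cloud spread out so it represents a posterior sample."""
    drift = f(particles)
    innovation = y - g(particles)            # per-particle prediction error
    noise = rng.normal(scale=np.sqrt(dt), size=particles.shape)
    return particles + dt * (drift + gain * innovation) + noise

rng = np.random.default_rng(1)
f = lambda x: -x                             # Ornstein-Uhlenbeck prior, mean 0
g = lambda x: x                              # identity observation model
x = rng.normal(size=100)                     # 100 particles
for _ in range(200):
    y = 2.0 + rng.normal(scale=0.1)          # noisy observations of a signal near 2
    x = npf_step(x, y, f, g, gain=1.0, dt=0.05, rng=rng)
est = x.mean()                               # posterior mean estimate
```

With this fixed gain the cloud settles at a compromise between the prior mean (0) and the observations (near 2), as a Bayesian posterior should; note that no resampling step appears anywhere, which is what sidesteps the weight-degeneracy problem of conventional particle filters.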
Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation
In this work we present a novel framework that uses deep learning to predict
object feature points that are out-of-view in the input image. This system was
developed with the application of model-based tracking in mind, particularly in
the case of autonomous inspection robots, where only partial views of the
object are available. Out-of-view prediction is enabled by applying scaling to
the feature point labels during network training. This is combined with a
recurrent neural network architecture designed to provide the final prediction
layers with rich feature information from across the spatial extent of the
input image. To show the versatility of these out-of-view predictions, we
describe how to integrate them in both a particle filter tracker and an
optimisation based tracker. To evaluate our work we compared our framework with
one that predicts only points inside the image. We show that as the amount of
the object in view decreases, being able to predict outside the image bounds
adds robustness to the final pose estimation.
Comment: Submitted to IROS 201
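The label-scaling trick above can be sketched as an affine compression of normalized image coordinates, so that points some margin outside the frame still map into the network's [0, 1] output range. The scale factor of 0.5 (allowing half an image width/height of overshoot) is an illustrative assumption, not the paper's exact choice.

```python
def encode(pt, img_w, img_h, s=0.5):
    """Compress a (possibly out-of-view) point into [0, 1]^2 label space.

    With s = 0.5, any point within half an image width/height of the
    frame maps inside [0, 1], so the network can regress it directly."""
    u, v = pt[0] / img_w, pt[1] / img_h      # normalized coordinates
    return (s * u + (1 - s) / 2, s * v + (1 - s) / 2)

def decode(label, img_w, img_h, s=0.5):
    """Invert encode(): recover pixel coordinates from a scaled label."""
    u = (label[0] - (1 - s) / 2) / s
    v = (label[1] - (1 - s) / 2) / s
    return (u * img_w, v * img_h)

# A point 100 px left of and 220 px below a 640x480 frame still gets
# a valid in-range label, and round-trips exactly.
label = encode((-100.0, 700.0), 640, 480)
recovered = decode(label, 640, 480)
```

At inference time the decoded coordinates feed the downstream tracker: a particle filter can weight pose hypotheses by reprojection error against both in-view and out-of-view predictions, which is what preserves robustness as the visible fraction of the object shrinks.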