Uncertainty Estimation in One-Stage Object Detection
Environment perception is the task for intelligent vehicles on which all
subsequent steps rely. A key part of perception is to safely detect other road
users such as vehicles, pedestrians, and cyclists. With modern deep learning
techniques, huge progress has been made in this field over the last years.
However, such deep-learning-based object detection models cannot predict how
certain they are in their predictions, potentially hampering the performance of
later steps such as tracking or sensor fusion. We present a viable approach to
estimate uncertainty in a one-stage object detector, while improving the
detection performance of the baseline approach. The proposed model is evaluated
on a large scale automotive pedestrian dataset. Experimental results show that
the uncertainty output by our system is coupled with detection accuracy and
the occlusion level of pedestrians.
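One common way to obtain such per-detection uncertainty is to train a regression head to predict a mean and a log-variance for each output and optimize the Gaussian negative log-likelihood, so that the predicted variance acts as a learned confidence. The sketch below illustrates this loss on toy numbers; it is an assumption about the general technique, not the paper's specific architecture.

```python
import numpy as np

def gaussian_nll(pred_mean, pred_log_var, target):
    """Negative log-likelihood of targets under per-output Gaussians.

    A network trained with this loss learns to emit a large variance
    where its predictions are unreliable: a confident (low-variance)
    head is penalized heavily for the same residual error.
    """
    var = np.exp(pred_log_var)
    return 0.5 * (pred_log_var + (target - pred_mean) ** 2 / var
                  + np.log(2 * np.pi))

# Toy example: two box-coordinate predictions with the same residual
# error (2.0) but different predicted confidence.
mean = np.array([10.0, 10.0])
log_var = np.array([np.log(0.1), np.log(4.0)])  # confident vs. uncertain
target = np.array([12.0, 12.0])

loss = gaussian_nll(mean, log_var, target)
# The confident head pays a much larger penalty for the same miss.
assert loss[0] > loss[1]
```

At inference time the predicted variance can be passed downstream (e.g. to a tracker or a sensor-fusion stage) as the confidence attached to each detection.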
BioNessie(G) - a grid enabled biochemical networks simulation environment
The simulation of biochemical networks provides insight and
understanding about the underlying biochemical processes and pathways
used by cells and organisms. BioNessie is a biochemical network simulator
which has been developed at the University of Glasgow. This paper
describes the simulator and focuses in particular on how it has been
extended to benefit from a wide variety of high performance compute resources
across the UK through Grid technologies to support larger scale
simulations.
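At its core, a biochemical network simulation integrates the reaction kinetics of the network over time. The following minimal sketch shows the kind of computation involved, using forward-Euler integration of a single mass-action reaction A -> B; species names, the rate constant, and the integration scheme are illustrative assumptions, not details of BioNessie itself.

```python
def simulate(a0, b0, k, dt, steps):
    """Forward-Euler integration of mass-action kinetics for A -> B.

    Real simulators handle networks of many coupled reactions with
    adaptive solvers; this toy version shows the per-step structure.
    """
    a, b = a0, b0
    for _ in range(steps):
        flux = k * a * dt  # mass-action rate: d[A]/dt = -k[A]
        a -= flux
        b += flux
    return a, b

a, b = simulate(a0=1.0, b0=0.0, k=0.5, dt=0.01, steps=1000)
assert abs((a + b) - 1.0) < 1e-9  # total mass is conserved
assert a < 0.01                   # A has mostly converted by t = 10
```

Parameter sweeps over rate constants like `k` are exactly the kind of larger-scale workload that benefits from distributing independent runs across Grid compute resources.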
Deep Network Uncertainty Maps for Indoor Navigation
Most mobile robots for indoor use rely on 2D laser scanners for localization,
mapping and navigation. These sensors, however, cannot detect transparent
surfaces or measure the full occupancy of complex objects such as tables. Deep
Neural Networks have recently been proposed to overcome this limitation by
learning to estimate object occupancy. These estimates are nevertheless subject
to uncertainty, making the evaluation of their confidence an important issue
for these measures to be useful for autonomous navigation and mapping. In this
work we approach the problem from two sides. First we discuss uncertainty
estimation in deep models, proposing a solution based on a fully convolutional
neural network. The proposed architecture is not restricted by the assumption
that the uncertainty follows a Gaussian model, as in the case of many popular
solutions for deep model uncertainty estimation, such as Monte-Carlo Dropout.
We present results showing that uncertainty over obstacle distances is actually
better modeled with a Laplace distribution. Then, we propose a novel approach
to build maps based on Deep Neural Network uncertainty models. In particular,
we present an algorithm to build a map that includes information over obstacle
distance estimates while taking into account the level of uncertainty in each
estimate. We show how the constructed map can be used to increase global
navigation safety by planning trajectories which avoid areas of high
uncertainty, enabling higher autonomy for mobile robots in indoor settings.
Comment: Accepted for publication in "2019 IEEE-RAS International Conference
on Humanoid Robots (Humanoids)".
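The claim that obstacle-distance errors are better modeled by a Laplace than a Gaussian distribution can be illustrated by comparing the maximum-likelihood fit of each family on heavy-tailed residuals. The sketch below simulates such residuals rather than using the paper's data, so the numbers are assumptions; the comparison logic is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical residuals between predicted and true obstacle distances,
# simulated here as heavy-tailed (Laplace-distributed) for illustration.
residuals = rng.laplace(loc=0.0, scale=0.5, size=10_000)

def gaussian_nll(r):
    """Average NLL under the best-fitting zero-mean Gaussian."""
    sigma2 = (r ** 2).mean()                  # MLE variance
    return 0.5 * np.log(2 * np.pi * sigma2) + 0.5

def laplace_nll(r):
    """Average NLL under the best-fitting zero-mean Laplace."""
    b = np.abs(r).mean()                      # MLE scale
    return np.log(2 * b) + 1.0

# The Laplace family fits heavy-tailed residuals more tightly.
assert laplace_nll(residuals) < gaussian_nll(residuals)
```

The same comparison, run on real network residuals, is how one would check which distributional assumption to bake into the uncertainty head's loss function.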
Parallelized Inference for Gravitational-Wave Astronomy
Bayesian inference is the workhorse of gravitational-wave astronomy, for
example, determining the mass and spins of merging black holes, revealing the
neutron star equation of state, and unveiling the population properties of
compact binaries. The science enabled by these inferences comes with a
computational cost that can limit the questions we are able to answer. This
cost is expected to grow. As detectors improve, the detection rate will go up,
allowing less time to analyze each event. Improvement in low-frequency
sensitivity will yield longer signals, increasing the number of computations
per event. The growing number of entries in the transient catalog will drive up
the cost of population studies. While Bayesian inference calculations are not
entirely parallelizable, key components are embarrassingly parallel:
calculating the gravitational waveform and evaluating the likelihood function.
Graphical processor units (GPUs) are adept at such parallel calculations. We
report on progress porting gravitational-wave inference calculations to GPUs.
Using a single code - which takes advantage of GPU architecture if it is
available - we compare computation times using modern GPUs (NVIDIA P100) and
CPUs (Intel Gold 6140). We demonstrate speed-ups for compact binary
coalescence gravitational waveform generation and likelihood evaluation, and
still larger speed-ups for population inference, within the
lifetime of current detectors. Further improvement is likely with continued
development. Our python-based code is publicly available and can be used
without familiarity with the parallel computing platform CUDA.
Comment: 5 pages, 4 figures, submitted to PRD; code can be found at
https://github.com/ColmTalbot/gwpopulation
https://github.com/ColmTalbot/GPUCBC
https://github.com/ADACS-Australia/ADACS-SS18A-RSmith. Added demonstration of
improvement in BNS spin.
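The "embarrassingly parallel" structure of likelihood evaluation can be sketched as evaluating one data segment against many templates at once with vectorized array operations. The example below uses NumPy on the CPU; swapping in a GPU array library with the same interface (such as CuPy) would run the identical code on a GPU. The signal model and noise level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def log_likelihood(strain, templates, sigma):
    """Gaussian log-likelihood of one strain segment against many
    waveform templates simultaneously.

    Each template (and each sample within it) is independent, so the
    computation parallelizes trivially across array elements.
    """
    residual = strain[None, :] - templates   # (n_templates, n_samples)
    return -0.5 * np.sum((residual / sigma) ** 2, axis=1)

rng = np.random.default_rng(1)
true_signal = np.sin(np.linspace(0.0, 10.0, 256))
strain = true_signal + 0.1 * rng.standard_normal(256)

# Three candidate templates; index 1 matches the injected signal.
templates = np.stack([np.sin(np.linspace(0.0, f, 256))
                      for f in (8.0, 10.0, 12.0)])

logl = log_likelihood(strain, templates, sigma=0.1)
assert int(np.argmax(logl)) == 1  # the matching template is preferred
```

In a real pipeline the template bank is far larger, which is precisely why batching the evaluation onto a GPU pays off.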
SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization
Computer vision is experiencing an AI renaissance, in which machine learning
models are expediting important breakthroughs in academic research and
commercial applications. Effectively training these models, however, is not
trivial due in part to hyperparameters: user-configured values that control a
model's ability to learn from data. Existing hyperparameter optimization
methods are highly parallel but make no effort to balance the search across
heterogeneous hardware or to prioritize searching high-impact spaces. In this
paper, we introduce a framework for massively Scalable Hardware-Aware
Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the
relative complexity of each search space and monitors performance on the
learning task over all trials. These metrics are then used as heuristics to
assign hyperparameters to distributed workers based on their hardware. We first
demonstrate that our framework achieves double the throughput of a standard
distributed hyperparameter optimization framework by optimizing SVM for MNIST
using 150 distributed workers. We then conduct model search with SHADHO over
the course of one week using 74 GPUs across two compute clusters to optimize
U-Net for a cell segmentation task, discovering 515 models that achieve a lower
validation loss than standard U-Net.
Comment: 10 pages, 6 figures.
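The hardware-aware scheduling idea can be sketched as a greedy matching: score each search space by its complexity and observed promise, then hand the highest-scoring spaces to the highest-throughput workers. The scoring rule and data layout below are stand-in assumptions for illustration, not SHADHO's actual heuristics.

```python
def assign_spaces(spaces, workers):
    """Greedy hardware-aware assignment sketch.

    Spaces are ranked by complexity x promise; workers by throughput.
    The hardest, most promising spaces land on the fastest hardware.
    """
    ranked_spaces = sorted(spaces,
                           key=lambda s: s["complexity"] * s["promise"],
                           reverse=True)
    ranked_workers = sorted(workers,
                            key=lambda w: w["throughput"],
                            reverse=True)
    return {w["name"]: s["name"]
            for w, s in zip(ranked_workers, ranked_spaces)}

spaces = [
    {"name": "svm_rbf", "complexity": 3.0, "promise": 0.9},
    {"name": "svm_linear", "complexity": 1.0, "promise": 0.4},
]
workers = [
    {"name": "cpu_node", "throughput": 1.0},
    {"name": "gpu_node", "throughput": 8.0},
]

assignment = assign_spaces(spaces, workers)
assert assignment["gpu_node"] == "svm_rbf"  # hardest space, fastest hardware
```

In the full framework both the complexity and promise scores are updated from trial results as the search runs, so the assignment adapts over time.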