4 research outputs found

    Multivariate Confidence Calibration for Object Detection

    Unbiased confidence estimates of neural networks are crucial, especially for safety-critical applications. Many methods have been developed to calibrate biased confidence estimates. While there is a variety of such methods for classification, the field of object detection has not been addressed yet. Therefore, we present a novel framework to measure and calibrate biased (or miscalibrated) confidence estimates of object detection methods. The main difference to related work on classifier calibration is that we also use additional information from the regression output of an object detector for calibration. Our approach allows us, for the first time, to obtain calibrated confidence estimates with respect to image location and box scale. In addition, we propose a new measure to evaluate the miscalibration of object detectors. Finally, we show that our methods outperform state-of-the-art calibration models on the task of object detection and provide reliable confidence estimates across different locations and scales.
    Comment: Accepted at the CVPR 2020 Workshop "2nd Workshop on Safe Artificial Intelligence for Automated Driving (SAIAD)".
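
    A minimal sketch of the underlying idea, calibrating detection confidences while conditioning on box location and scale, using a simple logistic calibrator rather than the authors' framework; the data, feature choices, and labels below are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Toy detections: raw confidence plus normalized box centre (cx, cy) and
        # size (w, h); y marks whether a detection matched a ground-truth object.
        rng = np.random.default_rng(0)
        n = 1000
        conf = rng.uniform(0.05, 1.0, n)
        cx, cy = rng.uniform(0.0, 1.0, n), rng.uniform(0.0, 1.0, n)
        w, h = rng.uniform(0.05, 0.5, n), rng.uniform(0.05, 0.5, n)
        y = rng.binomial(1, 0.9 * conf)  # toy correctness labels

        # Multivariate calibration: regress correctness on the confidence AND the
        # box geometry, so the calibrated score can depend on location and scale.
        X = np.column_stack([conf, cx, cy, w, h])
        calibrator = LogisticRegression(max_iter=1000).fit(X, y)
        calibrated = calibrator.predict_proba(X)[:, 1]
        print(calibrated[:5])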

    Dynamic Parameter Update for Robot Navigation Systems through Unsupervised Environmental Situational Analysis

    A robot's local navigation is often done through forward simulation of robot velocities, measuring the resulting candidate trajectories against safety, distance to the final goal, and the path produced by a global path planner. The velocity vector of the winning trajectory is then executed on the robot. This process is repeated continuously throughout navigation and requires an extensive amount of processing, which only allows for a very limited sampling space. In this paper, we propose a novel approach that automatically detects the type of surrounding environment based on navigation complexity using unsupervised clustering, and limits the local controller's sampling space accordingly. Experimental results in 3D simulation and on a real mobile robot show that we can increase navigation performance by at least thirty percent while reducing the number of failures due to collision or lack of sampling.
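
    A minimal sketch of the general mechanism, clustering surroundings by navigation complexity without labels and adjusting the local planner's sampling budget per cluster; the features, cluster count, and budgets are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical per-snapshot features describing navigation complexity,
        # e.g. mean obstacle distance, obstacle density, free-space ratio.
        rng = np.random.default_rng(1)
        features = rng.uniform(0.0, 1.0, size=(500, 3))

        # Unsupervised grouping of environmental situations (cluster count assumed).
        situations = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

        # Assumed mapping from situation cluster to the local controller's sampling
        # budget: open areas need few velocity samples, cluttered ones need more.
        budget = {0: 100, 1: 300, 2: 600}

        def sampling_budget(current):
            cluster = int(situations.predict(current.reshape(1, -1))[0])
            return budget[cluster]

        print(sampling_budget(np.array([0.2, 0.8, 0.1])))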

    Two-stage visual navigation by deep neural networks and multi-goal reinforcement learning

    In this paper, we propose a two-stage learning framework for visual navigation in which the experience the agent gathers while exploring toward one goal is shared to learn to navigate to other goals. First, we train a deep neural network to estimate the robot's position in the environment using ground-truth information provided by a classical localization and mapping approach. In the second stage, a simpler multi-goal Q-function learns to traverse the environment using the provided discretized map. Transfer learning is applied to the multi-goal Q-function from a maze structure to a 2D simulator, and the system is finally deployed in a 3D simulator where the robot uses the locations estimated by the position-estimation network. In the experiments, we first compare different architectures to select the best deep network for location estimation, and then compare the multi-goal reinforcement learning method to traditional reinforcement learning. The results show a significant improvement when multi-goal reinforcement learning is used. Furthermore, the results of the location estimator show that a deep network can learn and generalize across different environments using camera images, with high accuracy in both position and orientation.
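
    A minimal sketch of a tabular multi-goal Q-function on a discretized map, where one transition is relabelled against several goals so that experience gathered for one goal is reused for others; the grid size, rewards, and relabelling scheme are assumptions, not the paper's exact setup.

        import numpy as np

        # Toy discretized map: 25 cells, 25 possible goal cells, 4 actions.
        n_states, n_goals, n_actions = 25, 25, 4
        Q = np.zeros((n_states, n_goals, n_actions))
        alpha, gamma = 0.1, 0.95

        def update(state, goal, action, reward, next_state, done):
            # Standard Q-learning target, but the table is also indexed by the goal.
            target = reward if done else reward + gamma * Q[next_state, goal].max()
            Q[state, goal, action] += alpha * (target - Q[state, goal, action])

        # One observed transition (cell 6 -> cell 7) reused for several goals:
        # it is a success for the goal it reached and a small-cost step otherwise.
        for g in (3, 7, 12):
            reached = (g == 7)
            update(state=6, goal=g, action=1,
                   reward=1.0 if reached else -0.01,
                   next_state=7, done=reached)
        print(Q[6, 7])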

    Simulated Visual Navigation Dataset for "Two-Stage Visual Navigation by Deep Neural Networks and Multi-Goal Reinforcement Learning"

    Condensed data set (train, validation, test) of two simulated environments in the Gazebo simulator: a small kitchen of approximately 45 square meters and a larger room of roughly 140 square meters. The data can be loaded using numpy. The images are of size 84x84x3. The labels for each image are (X position, Y position, sine of theta, cosine of theta). License: CC BY-NC 3.0.
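
    A minimal loading sketch, assuming the arrays are stored as .npy files with the names below; the actual file layout of the data set may differ.

        import numpy as np

        # Hypothetical file names; adjust to the actual archive contents.
        images = np.load("train_images.npy")   # expected shape (N, 84, 84, 3)
        labels = np.load("train_labels.npy")   # expected shape (N, 4)

        x, y, sin_t, cos_t = labels[0]
        theta = np.arctan2(sin_t, cos_t)       # recover orientation from sin/cos
        print(images.shape, x, y, theta)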