6 research outputs found

    A hybrid probabilistic model for camera relocalization

    Get PDF
    We present a hybrid deep learning method for modelling the uncertainty of camera relocalization from a single RGB image. The proposed system leverages the discriminative deep image representation from a convolutional neural network, and uses Gaussian Process regressors to generate the probability distribution of the six-degree-of-freedom (6DoF) camera pose in an end-to-end fashion. This results in a network that can generate uncertainties over its inferences without the need to sample many times. Furthermore, we show that our objective based on KL divergence reduces the dependence on the choice of hyperparameters. The results show that, compared to the state-of-the-art Bayesian camera relocalization method, our model produces comparable localization uncertainty and significantly improves system efficiency without loss of accuracy.
    Ming Cai, Chunhua Shen, Ian Reid
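
    A minimal sketch of the general idea, assuming a frozen CNN backbone and off-the-shelf Gaussian Process regressors (one per pose dimension) rather than the end-to-end KL-divergence objective the abstract describes; all names and hyperparameters here are illustrative:

```python
import numpy as np
import torch
from torchvision.models import resnet18
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Frozen CNN backbone used purely as a feature extractor.
backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: (N, 3, 224, 224) float tensor -> (N, 512) numpy array."""
    return backbone(images).cpu().numpy()

def fit_pose_gps(train_images, train_poses):
    """train_poses: (N, 6) array, e.g. [x, y, z, roll, pitch, yaw]."""
    feats = extract_features(train_images)
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
    gps = []
    for dim in range(train_poses.shape[1]):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(feats, train_poses[:, dim])
        gps.append(gp)
    return gps

def predict_pose(gps, query_images):
    """Returns (mean, std), each (N, 6); std is the per-dimension uncertainty."""
    feats = extract_features(query_images)
    means, stds = zip(*(gp.predict(feats, return_std=True) for gp in gps))
    return np.stack(means, axis=1), np.stack(stds, axis=1)
```

    Because the GP posterior gives a mean and a standard deviation in a single pass, no repeated stochastic sampling is needed to obtain an uncertainty estimate.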

    Understanding the Limitations of CNN-based Absolute Camera Pose Regression

    Full text link
    Visual localization is the task of accurate camera pose estimation in a known scene. It is a key problem in computer vision and robotics, with applications including self-driving cars, Structure-from-Motion, SLAM, and Mixed Reality. Traditionally, the localization problem has been tackled using 3D geometry. Recently, end-to-end approaches based on convolutional neural networks have become popular. These methods learn to directly regress the camera pose from an input image. However, they do not achieve the same level of pose accuracy as 3D structure-based methods. To understand this behavior, we develop a theoretical model for camera pose regression. We use our model to predict failure cases for pose regression techniques and verify our predictions through experiments. We furthermore use our model to show that pose regression is more closely related to pose approximation via image retrieval than to accurate pose estimation via 3D structure. A key result is that current approaches do not consistently outperform a handcrafted image retrieval baseline. This clearly shows that additional research is needed before pose regression algorithms are ready to compete with structure-based methods.
    Comment: Initial version of a paper accepted to CVPR 2019
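
    The retrieval comparison can be made concrete with a minimal sketch: approximate a query pose from the poses of its nearest database images in some global-descriptor space. The descriptor choice and the top-k averaging below are illustrative assumptions, not the paper's actual baseline:

```python
import numpy as np

def retrieval_pose_baseline(query_desc, db_descs, db_poses, k=1):
    """query_desc: (D,), db_descs: (N, D), db_poses: (N, 7) as [x, y, z, qw, qx, qy, qz]."""
    # Cosine similarity between the query descriptor and every database descriptor.
    sims = db_descs @ query_desc / (
        np.linalg.norm(db_descs, axis=1) * np.linalg.norm(query_desc) + 1e-8
    )
    top = np.argsort(-sims)[:k]
    # Approximate the query pose: average the top-k positions and keep the
    # best match's orientation (quaternion interpolation would be the refinement).
    position = db_poses[top, :3].mean(axis=0)
    orientation = db_poses[top[0], 3:]
    return np.concatenate([position, orientation])
```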

    Camera Pose Auto-Encoders for Improving Pose Regression

    Full text link
    Absolute pose regressor (APR) networks are trained to estimate the pose of the camera given a captured image. They compute latent image representations from which the camera position and orientation are regressed. APRs provide a different tradeoff between localization accuracy, runtime, and memory compared to structure-based localization schemes, which provide state-of-the-art accuracy. In this work, we introduce Camera Pose Auto-Encoders (PAEs), multilayer perceptrons that are trained via a Teacher-Student approach to encode camera poses using APRs as their teachers. We show that the resulting latent pose representations can closely reproduce APR performance and demonstrate their effectiveness for related tasks. Specifically, we propose a lightweight test-time optimization in which the closest training poses are encoded and used to refine camera position estimation. This procedure achieves a new state-of-the-art position accuracy for APRs on both the Cambridge Landmarks and 7Scenes benchmarks. We also show that training images can be reconstructed from the learned pose encoding, paving the way for integrating visual information from the training set at a low memory cost. Our code and pre-trained models are available at https://github.com/yolish/camera-pose-auto-encoders.
    Comment: Accepted to ECCV 2022
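
    A minimal sketch of a Teacher-Student pose encoder under assumed dimensions and architecture (the released implementation lives at the repository linked above; this is not it):

```python
import torch
import torch.nn as nn

class PoseAutoEncoder(nn.Module):
    """MLP that encodes a 7-D pose (x, y, z + unit quaternion) into an APR-sized latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(7, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, pose):
        return self.encoder(pose)

def distillation_step(pae, pose, teacher_latent, optimizer):
    """teacher_latent: latent produced by the frozen APR for the image taken at `pose`."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(pae(pose), teacher_latent)
    loss.backward()
    optimizer.step()
    return loss.item()
```

    Once trained, the encoder lets training-set poses be brought into the APR's latent space at test time without storing or re-encoding the training images themselves.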

    Performance evaluation of recurrent neural networks applied to indoor camera localization

    Get PDF
    Researchers in robotics and computer vision are experimenting with image-based localization of indoor cameras. Implementing indoor camera localization with a convolutional neural network (CNN) or a recurrent neural network (RNN) becomes more challenging on large image datasets because of the internal structure of these networks, so the preferable CNN or RNN variant depends on the problem type and the size of the dataset. CNNs are a flexible choice for indoor localization, but even with suitable hyper-parameter selection they require many training images to achieve high accuracy, and overfitting further reduces accuracy. RNNs address these problems by retaining information about the input images in their internal memory. Long short-term memory (LSTM), bi-directional LSTM (BiLSTM), and the gated recurrent unit (GRU) are three RNN variants, and the most appropriate one again depends on the problem and the dataset. RNNs also suffer from vanishing gradients, which makes it difficult to learn from more data; the LSTM mitigates this problem, the BiLSTM is an advanced version of the LSTM capable of higher performance, and the GRU is a more advanced variant that is computationally more efficient than the LSTM. In this study, we explore these recurrent units for indoor camera localization, focusing on LSTM, BiLSTM, and GRU, and recommend which variant trains more quickly and which produces more accurate results. Using the Microsoft 7-Scenes and InteriorNet datasets, we evaluate the performance of LSTM, BiLSTM, and GRU. Our experiments show that the BiLSTM is more accurate than the LSTM and GRU, and that the GRU is faster than the LSTM and BiLSTM.
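
    A minimal sketch of the kind of comparison described above, assuming per-frame CNN features as input and a shared pose-regression head so that only the recurrent unit changes; the architecture details are assumptions, not the authors' implementation:

```python
import torch.nn as nn

class RecurrentPoseRegressor(nn.Module):
    """Same regression head on top of an interchangeable recurrent unit."""
    def __init__(self, feat_dim=512, hidden=256, unit="lstm", bidirectional=False):
        super().__init__()
        rnn_cls = {"lstm": nn.LSTM, "gru": nn.GRU}[unit]
        self.rnn = rnn_cls(feat_dim, hidden, batch_first=True,
                           bidirectional=bidirectional)
        out_dim = hidden * (2 if bidirectional else 1)
        self.head = nn.Linear(out_dim, 7)  # 3-D position + 4-D quaternion per frame

    def forward(self, feats):  # feats: (batch, seq_len, feat_dim)
        out, _ = self.rnn(feats)
        return self.head(out)

# The three variants compared in the study:
lstm   = RecurrentPoseRegressor(unit="lstm")
bilstm = RecurrentPoseRegressor(unit="lstm", bidirectional=True)
gru    = RecurrentPoseRegressor(unit="gru")
```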