
    Transferring knowledge across robots: A risk sensitive approach

    One of the most impressive characteristics of human perception is its domain adaptation capability. Humans can recognize objects and places simply by transferring knowledge from their past experience. Inspired by this, current research in robotics is addressing a great challenge: building robots able to sense and interpret the surrounding world by reusing information previously collected, gathered by other robots or obtained from the web. But how can a robot automatically understand what is useful among a large amount of information and perform knowledge transfer? In this paper we address the domain adaptation problem in the context of visual place recognition. We consider the scenario where a robot equipped with a monocular camera explores a new environment. In this situation traditional approaches based on supervised learning perform poorly, as no annotated data are provided in the new environment and the models learned from data collected in other places are inappropriate due to the large variability of visual information. To overcome these problems we introduce a novel transfer learning approach. With our algorithm the robot is given only some training data (annotated images collected in different environments by other robots) and is able to decide whether, and how much, this knowledge is useful in the current scenario. At the core of our approach is a transfer risk measure that quantifies the similarity between the given and the new visual data. To improve performance, we also extend our framework to take into account multiple visual cues. Our experiments on three publicly available datasets demonstrate the effectiveness of the proposed approach.
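
    The abstract does not spell out how the transfer risk measure is computed. As a purely illustrative sketch, the Python snippet below scores the mismatch between the given (source) features and the newly observed (target) features with an MMD-style statistic; the function names, the RBF kernel, and its bandwidth are assumptions made for the example, not the authors' formulation.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        """RBF kernel matrix between the rows of X and the rows of Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def transfer_risk(source_feats, target_feats, gamma=1.0):
        """Illustrative transfer risk: an MMD-style distance between the
        source and target feature distributions. Higher values suggest the
        source knowledge is less likely to transfer well (an assumption,
        not the published measure)."""
        k_ss = rbf_kernel(source_feats, source_feats, gamma).mean()
        k_tt = rbf_kernel(target_feats, target_feats, gamma).mean()
        k_st = rbf_kernel(source_feats, target_feats, gamma).mean()
        return k_ss + k_tt - 2.0 * k_st

    # Toy usage: descriptors from the annotated source environments and
    # from the newly explored, unlabeled environment.
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(200, 64))
    tgt = rng.normal(0.5, 1.0, size=(150, 64))
    print("estimated transfer risk:", transfer_risk(src, tgt, gamma=0.05))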

    Transfer Learning for Visual Place Classification

    A fundamental challenge in mobile robotics is to provide robots with the capability to move autonomously in real-world, unconstrained scenarios. In recent years this has led to an increased interest in novel learning paradigms for domain adaptation. In this paper we specifically consider the problem of visual place recognition. Current semantic place categorization approaches typically rely on supervised learning methods. This implies a time-consuming human labeling effort. Moreover, once learning has been performed, if the environmental conditions vary or the robot is moved to another location, the learned model may not be useful, as the novel scenario can be very different from the old one. To avoid these issues, we propose a novel transfer learning approach for visual place recognition. With our method the robot is only given some training data, possibly collected in different scenarios by other robots, and is able to decide autonomously if and how much this knowledge is useful in the current scenario. Unlike previous approaches, our method keeps the human annotation effort to a minimum and, thanks to the adoption of a transfer risk measure, is able to automatically quantify the similarity between the old and the novel scenario. The experimental results on publicly available datasets demonstrate the effectiveness of our approach.
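
    One way to picture the "if and how much" decision described above is to turn a per-source risk score into a transfer weight. The sketch below is a hedged illustration under that assumption: several source models vote on a target image and their scores are blended with weights that decay as the estimated risk grows. The soft-max weighting and all names are illustrative, not the paper's rule.

    import numpy as np

    def transfer_weights(risks, temperature=1.0):
        """Map per-source transfer risks to normalized weights:
        low risk -> high weight (illustrative soft-max form)."""
        risks = np.asarray(risks, dtype=float)
        w = np.exp(-risks / temperature)
        return w / w.sum()

    def blended_prediction(source_scores, risks, temperature=1.0):
        """Combine class scores from several source models according to
        how trustworthy each source looks for the current environment.
        source_scores: (n_sources, n_classes) scores for one target image."""
        w = transfer_weights(risks, temperature)
        return (w[:, None] * np.asarray(source_scores, dtype=float)).sum(axis=0)

    # Toy usage: two source environments, three place categories.
    scores = [[0.2, 0.7, 0.1],   # source A (low risk, trusted more)
              [0.6, 0.2, 0.2]]   # source B (high risk, trusted less)
    print(blended_prediction(scores, risks=[0.3, 1.5]))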

    A Transfer Learning Approach for Multi-Cue Semantic Place Recognition

    As researchers continuously strive to develop robotic systems able to move into 'the wild', interest in novel learning paradigms for domain adaptation has increased. In the specific application of semantic place recognition from cameras, supervised learning algorithms are typically adopted. However, once learning has been performed, if the robot is moved to another location the acquired knowledge may not be useful, as the novel scenario can be very different from the old one. The obvious solution would be to retrain the model, updating the robot's internal representation of the environment. Unfortunately, this procedure involves a very time-consuming data-labeling effort on the human side. To avoid these issues, in this paper we propose a novel transfer learning approach for place categorization from visual cues. With our method the robot is able to decide automatically if and how much its internal knowledge is useful in the novel scenario. Unlike previous approaches, we consider the situation where the old and the novel scenario may differ significantly (not only does the visual room appearance change, but different room categories are also present). Importantly, our approach does not require labeling from a human operator. We also propose a strategy for improving the performance of the proposed method by optimally fusing two complementary visual cues. Our extensive experimental evaluation demonstrates the advantages of our approach on several sequences from publicly available datasets.
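
    Fusing two complementary visual cues can be pictured, under simplifying assumptions, as a convex combination of their per-class scores with a mixing weight tuned on labeled source data. The sketch below only illustrates that idea; the grid search and all names are assumptions, not the optimal fusion strategy proposed in the paper.

    import numpy as np

    def fuse_cues(scores_a, scores_b, alpha):
        """Convex combination of per-class scores from two visual cues."""
        return alpha * scores_a + (1.0 - alpha) * scores_b

    def pick_alpha(scores_a, scores_b, labels, grid=np.linspace(0, 1, 21)):
        """Choose the mixing weight that maximizes accuracy on labeled
        source/validation data (an illustrative stand-in for an
        'optimal' fusion rule)."""
        best_alpha, best_acc = 0.5, -1.0
        for a in grid:
            pred = fuse_cues(scores_a, scores_b, a).argmax(axis=1)
            acc = (pred == labels).mean()
            if acc > best_acc:
                best_alpha, best_acc = a, acc
        return best_alpha

    # Toy usage: scores from two cues on 5 validation images, 3 categories.
    rng = np.random.default_rng(1)
    sa, sb = rng.random((5, 3)), rng.random((5, 3))
    y = np.array([0, 1, 2, 1, 0])
    print("chosen alpha:", pick_alpha(sa, sb, y))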

    Evaluation of non-geometric methods for visual odometry

    Visual Odometry (VO) is one of the fundamental building blocks of modern autonomous robot navigation and mapping. While most state-of-the-art techniques use geometrical methods for camera ego-motion estimation from optical flow vectors, in the last few years learning approaches have been proposed to solve this problem. These approaches are still emerging and there is much to explore. This work follows this track, applying Kernel Machines to monocular visual ego-motion estimation. Unlike geometrical methods, learning-based approaches to monocular visual odometry allow issues such as scale estimation and camera calibration to be overcome, assuming the availability of training data. While some previous works have proposed learning paradigms for VO, to our knowledge no extensive evaluation of applying kernel-based methods to Visual Odometry has been conducted. To fill this gap, in this work we consider publicly available datasets and perform several experiments in order to set a comparison baseline with traditional techniques. Experimental results show good performance of the learning algorithms and establish them as a solid alternative to the computationally intensive and complex-to-implement geometrical techniques.
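
    As a rough illustration of the kernel-machine idea, the sketch below regresses frame-to-frame ego-motion (e.g., forward translation and yaw) from optical-flow descriptors with kernel ridge regression. This is a generic baseline written for clarity, not the exact model or features evaluated in the paper.

    import numpy as np

    def rbf_kernel(X, Y, gamma=0.1):
        """RBF kernel matrix between the rows of X and the rows of Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    class KernelRidgeVO:
        """Kernel ridge regression from optical-flow descriptors to
        per-frame ego-motion (illustrative targets: translation, yaw)."""

        def __init__(self, gamma=0.1, lam=1e-3):
            self.gamma, self.lam = gamma, lam

        def fit(self, X, Y):
            self.X_train = X
            K = rbf_kernel(X, X, self.gamma)
            n = K.shape[0]
            # Ridge-regularized solve for the dual coefficients.
            self.alpha = np.linalg.solve(K + self.lam * np.eye(n), Y)
            return self

        def predict(self, X):
            return rbf_kernel(X, self.X_train, self.gamma) @ self.alpha

    # Toy usage: 100 training frame pairs, 32-dim flow descriptors,
    # targets = (translation, yaw) per frame pair.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 32))
    Y = np.stack([X[:, 0] * 0.1, X[:, 1] * 0.05], axis=1)
    vo = KernelRidgeVO(gamma=0.05).fit(X, Y)
    print(vo.predict(X[:3]))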

    A discriminative approach for appearance based loop closing

    The place recognition module is a fundamental component in SLAM systems, as incorrect loop closures may result in severe errors in trajectory estimation. In the case of appearance-based methods, the bag-of-words approach is typically employed for recognizing locations. This paper introduces a novel algorithm for improving loop closure detection performance by adopting a set of visual word weights, learned offline according to a discriminative criterion. The proposed weight learning approach, based on the large margin paradigm, can be used for generic similarity functions and relies on an efficient online learning algorithm in the training phase. As the computed weights are usually very sparse, a gain in terms of computational cost at recognition time is also obtained. Our experiments, conducted on publicly available datasets, demonstrate that the discriminative weights lead to loop closure detection results that are more accurate than the traditional bag-of-words method and that our place recognition approach is competitive with state-of-the-art methods.
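
    A hedged sketch of the underlying idea: each visual word carries a learned, typically sparse, weight, and two bag-of-words histograms are compared with a weighted similarity, so zero-weight words cost nothing at recognition time. The similarity form, the thresholded decision rule, and all names below are illustrative assumptions, not the paper's learned-weight formulation.

    import numpy as np

    def weighted_similarity(h1, h2, w):
        """Weighted cosine-style similarity between two bag-of-words
        histograms; w holds one (sparse) weight per visual word."""
        num = np.sum(w * h1 * h2)
        den = np.sqrt(np.sum(w * h1 * h1)) * np.sqrt(np.sum(w * h2 * h2))
        return 0.0 if den == 0 else num / den

    def detect_loop_closure(query_hist, database, weights, threshold=0.6):
        """Return indices of database images whose weighted similarity to
        the query exceeds a threshold (illustrative decision rule)."""
        sims = [weighted_similarity(query_hist, h, weights) for h in database]
        return [i for i, s in enumerate(sims) if s >= threshold]

    # Toy usage: 1000-word vocabulary with sparse (mostly zero) weights.
    rng = np.random.default_rng(3)
    weights = np.where(rng.random(1000) < 0.1, rng.random(1000), 0.0)
    db = [rng.poisson(0.05, 1000).astype(float) for _ in range(5)]
    query = db[2] + rng.poisson(0.01, 1000)
    print(detect_loop_closure(query, db, weights, threshold=0.3))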