
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
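    The "de-facto standard formulation" referenced above is, in broad strokes, maximum a posteriori (MAP) estimation over a factor graph. A minimal sketch of that formulation in LaTeX (notation illustrative, not quoted from the paper; the second equality assumes zero-mean Gaussian measurement noise):

        \mathcal{X}^{\star}
          = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
          = \arg\min_{\mathcal{X}} \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert_{\Sigma_k}^{2}

    Here $\mathcal{X}$ stacks the robot trajectory and map variables, $z_k \in \mathcal{Z}$ are the measurements, $h_k$ the corresponding measurement models, and $\Sigma_k$ the measurement noise covariances.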

    Dimensionality Reduction in Images for Appearance-Based Camera Localization

    Appearance-based Localization (AL) focuses on estimating the pose of a camera from the information encoded in an image, treated holistically. However, the high dimensionality of images makes this estimation intractable, and some technique of Dimensionality Reduction (DR) must be applied. The resulting reduced image representation, though, must retain underlying information about the structure of the scene in order to infer the camera pose. This work explores the problem of DR in the context of AL and evaluates four popular methods in two simple cases on a synthetic environment: two linear (PCA and MDS) and two non-linear, also known as Manifold Learning methods (LLE and Isomap). The evaluation is carried out in terms of their capability to generate lower-dimensional embeddings that maintain underlying information that is isometric to the camera poses. Funding: Plan Propio UMA; HOUNDBOT (P20 01302), funded by the Andalusian Regional Government; and ARPEGGIO (PID2020-117057GB-I00), funded by the Spanish National Research Agency. Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
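    A minimal, illustrative sketch of how the four methods named above could be compared on flattened images with scikit-learn (the synthetic data, target dimensionality, and neighborhood sizes are assumptions, not values from the paper):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.manifold import MDS, LocallyLinearEmbedding, Isomap

        # Stand-in for holistic image vectors: N images flattened to D pixels.
        rng = np.random.default_rng(0)
        X = rng.random((200, 32 * 32))  # hypothetical 32x32 grayscale images

        d = 3  # assumed target embedding dimensionality
        methods = {
            "PCA": PCA(n_components=d),
            "MDS": MDS(n_components=d),
            "LLE": LocallyLinearEmbedding(n_components=d, n_neighbors=12),
            "Isomap": Isomap(n_components=d, n_neighbors=12),
        }

        # Each method maps the (N, D) image matrix to an (N, d) embedding.
        embeddings = {name: m.fit_transform(X) for name, m in methods.items()}
        for name, Y in embeddings.items():
            print(name, Y.shape)

    The isometry criterion the abstract mentions could then be approximated by comparing pairwise distances within each embedding against pairwise distances between the corresponding camera poses.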

    Automatic vehicle tracking and recognition from aerial image sequences

    This paper addresses the problem of automated vehicle tracking and recognition from aerial image sequences. Motivated by its successes in the existing literature, we focus on the use of linear appearance subspaces to describe multi-view object appearance and highlight the challenges involved in their application as part of a practical system. A working solution, which includes steps for data extraction and normalization, is described. In experiments on real-world data the proposed methodology achieved promising results, with a high correct recognition rate and few, meaningful errors (type II errors whereby genuinely similar targets are occasionally confused with one another). Directions for future research and possible improvements of the proposed method are discussed.
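    To make "linear appearance subspaces" concrete, here is an illustrative sketch, not the paper's implementation: fit one PCA subspace per known vehicle from its multi-view crops and recognize a query crop by the subspace with the smallest reconstruction residual. All names, dimensions, and data here are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        def fit_subspaces(crops_per_vehicle, n_components=5):
            """Fit one linear appearance subspace (PCA) per known vehicle."""
            return {vid: PCA(n_components=n_components).fit(crops)
                    for vid, crops in crops_per_vehicle.items()}

        def recognize(query, subspaces):
            """Assign the query crop to the subspace reconstructing it best."""
            def residual(pca):
                recon = pca.inverse_transform(pca.transform(query[None, :]))
                return np.linalg.norm(query - recon[0])
            return min(subspaces, key=lambda vid: residual(subspaces[vid]))

        # Hypothetical data: two vehicles, 30 normalized 16x16 crops each.
        rng = np.random.default_rng(1)
        data = {vid: rng.random((30, 16 * 16)) for vid in ("car_a", "car_b")}
        models = fit_subspaces(data)
        print(recognize(rng.random(16 * 16), models))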

    Manifold-Based Robot Motion Generation

    In order to make an autonomous robot system more adaptive to human-centered environments, it is effective to let the robot collect sensor values by itself and build a controller that reaches a desired configuration autonomously. Multiple sensors are often available to estimate the state of the robot, but they raise two problems: (1) the sensing ranges of the individual sensors might not overlap with each other, and (2) the sensor variables can be redundant with respect to the original state space. For the first problem, a local coordinate definition based on a sensor value, and its extension to the unobservable region, is presented. This technique lets the robot estimate a sensor variable outside of its observation range and integrate the regions of two sensors that do not overlap. For the second problem, a grid-based estimation of a lower-dimensional subspace is presented. This manifold estimation gives the robot a compact representation, so the proposed motion generation method can be applied to redundant sensor systems. In the case of image feature spaces with high-dimensional sensor signals, a manifold estimation-based mapping, locally linear embedding (LLE), was applied to estimate the distance between the robot body and an obstacle.
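    As an illustration of that last step only (the data, dimensions, and regressor are assumptions, not taken from the chapter): embed high-dimensional image features with LLE, then fit a simple regressor from the embedded coordinates to a measured body-obstacle distance.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.neighbors import KNeighborsRegressor

        # Hypothetical training set: 300 image-feature vectors (dim 256) with
        # a known body-obstacle distance recorded for each frame.
        rng = np.random.default_rng(2)
        features = rng.random((300, 256))
        distances = rng.random(300)

        # Estimate a low-dimensional manifold of the image features with LLE.
        lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
        embedded = lle.fit_transform(features)

        # Regress distance on the embedded coordinates; new frames would go
        # through lle.transform(...) followed by regressor.predict(...).
        regressor = KNeighborsRegressor(n_neighbors=5).fit(embedded, distances)
        print(regressor.predict(lle.transform(features[:3])))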