
    Real-time model-based slam using line segments

    Existing monocular vision-based SLAM systems favour interest point features as landmarks, but these are easily occluded and can only be reliably matched over a narrow range of viewpoints. Line segments offer an interesting alternative, as line matching is more stable with respect to viewpoint changes and lines are robust to partial occlusion. In this paper we present a model-based SLAM system that uses 3D line segments as landmarks. Unscented Kalman filters are used to initialise new line segments and generate a 3D wireframe model of the scene that can be tracked with a robust model-based tracking algorithm. Uncertainties in the camera position are fed into the initialisation of new model edges. Results show the system operating in real-time with resilience to partial occlusion. The maps of line segments generated during the SLAM process are physically meaningful and their structure is measured against the true 3D structure of the scene.
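    The sigma-point machinery behind the unscented Kalman filters mentioned above can be sketched with a generic unscented transform: a Gaussian (e.g. the uncertain camera pose) is propagated through a nonlinear function f. This is a self-contained textbook implementation, not the paper's actual filter; the parameter defaults are common conventions.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the standard sigma-point construction."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigma])             # transformed points
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

    For a linear f the transform is exact, which makes it easy to sanity-check; in the paper's setting f would be the projection from camera pose and image measurements to a 3D line-segment estimate.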

    Real-time visual loop-closure detection


    Registration Combining Wide and Narrow Baseline Feature Tracking Techniques for Markerless AR Systems

    Augmented reality (AR) is a field of computer research concerned with combining real-world and computer-generated data. Registration is one of the most difficult problems currently limiting the usability of AR systems. In this paper, we propose a novel registration method for AR applications based on natural feature tracking. The proposed method has the following advantages: (1) It is simple and efficient, since no man-made markers are needed for either indoor or outdoor AR applications; moreover, it works with arbitrary geometric shapes, including planar, near-planar and non-planar structures, which greatly enhances the usability of AR systems. (2) Thanks to the reduced-SIFT-based augmented optical-flow tracker, the virtual scene can still be augmented on the specified areas even under occlusion and large viewpoint changes throughout the entire process. (3) It is easy to use, because the adaptive-classification-tree-based matching strategy provides fast and accurate initialisation, even when the initial camera view differs substantially from the reference image. Experimental evaluations validate the performance of the proposed method for online pose tracking and augmentation.
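    The interplay between wide-baseline (SIFT-style) matching for initialisation and narrow-baseline optical-flow tracking for frame-to-frame updates can be caricatured as a small state machine; the class name, track-count threshold, and mode labels below are illustrative assumptions, not the authors' code.

```python
class HybridTracker:
    """Toy sketch of wide/narrow-baseline switching: cheap optical-flow
    tracking while enough features survive, falling back to wide-baseline
    matching against the reference image when tracking degrades."""

    def __init__(self, min_tracks=30):
        self.min_tracks = min_tracks
        self.mode = "wide"          # start by matching the reference image

    def step(self, n_surviving_tracks):
        if self.mode == "narrow" and n_surviving_tracks < self.min_tracks:
            self.mode = "wide"      # lost too many tracks: re-initialise
        elif self.mode == "wide" and n_surviving_tracks >= self.min_tracks:
            self.mode = "narrow"    # enough matches: resume flow tracking
        return self.mode
```

    The design point is that expensive wide-baseline matching runs only when needed, which is what makes the hybrid approach robust to occlusion without sacrificing frame rate.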

    Calibration and Validation of Earth-Observing Sensors Using Deployable Surface-Based Sensor Networks

    ©2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. DOI: 10.1109/JSTARS.2010.2053021
    Satellite-based instruments are now routinely used to map the surface of the globe or monitor weather conditions. However, these orbital measurements of ground-based quantities are heavily influenced by external factors, such as air moisture content or surface emissivity. Detailed atmospheric models are created to compensate for these factors, but the satellite system must still be tested over a wide variety of surface conditions to validate the instrumentation and correction model. Validation and correction are particularly important for arctic environments, as the unique surface properties of packed snow and ice are poorly modeled by any other terrain type. Currently, this process is human-intensive, requiring the coordinated collection of surface measurements over a number of years. A decentralized, autonomous sensor network is proposed which allows the collection of ground-based environmental measurements at a location and resolution that is optimal for the specific on-orbit sensor under investigation. A prototype sensor network has been constructed and fielded on a glacier in Alaska, illustrating the ability of such systems to properly collect and log sensor measurements, even in harsh arctic environments.
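    One small step in such a validation pipeline, aggregating the readings of ground nodes that fall inside a satellite sensor's footprint, can be sketched as below. The circular flat-Earth footprint model, local metric coordinates, and function name are simplifying assumptions for illustration only.

```python
import math

def footprint_mean(nodes, center, radius_m):
    """Average the readings of sensor nodes inside a circular satellite
    footprint. Each node is (x_m, y_m, value) in local metric coordinates;
    returns None when no node falls inside the footprint."""
    inside = [v for x, y, v in nodes
              if math.hypot(x - center[0], y - center[1]) <= radius_m]
    return sum(inside) / len(inside) if inside else None
```

    The aggregated ground value would then be compared against the corrected satellite retrieval for the same footprint.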

    Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words


    Augmented indoor hybrid maps using catadioptric vision

    This Master's thesis presents a new method for building semantic maps from sequences of omnidirectional images. The goal is to design the top level of a hierarchical map, a semantic map or augmented topological map, exploiting and adapting this type of camera. The image sequence is segmented by distinguishing between Places and Transitions, with particular emphasis on detecting the Transitions, since they contribute very useful and important information to the map. Places are further classified into corridors and rooms of different types, while Transitions are divided into doors, jambs, stairs and elevators, the main types of Transition found in indoor settings. Only global image descriptors, specifically Gist, are used to segment the space into these kinds of areas; the great advantage of such descriptors is their efficiency and compactness compared with local descriptors. In addition, a probabilistic model, a Hidden Markov Model (HMM), is used to maintain the spatio-temporal consistency of the image sequence. Despite the simplicity of the method, the results show that it can segment the image sequence into clusters that are meaningful to people. All experiments were carried out on our new data set of omnidirectional images, captured with a helmet-mounted camera, so the sequence follows a person's movement through a building. The data set is publicly available on the Internet for use in other research.
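    The HMM smoothing step described above can be illustrated with a standard Viterbi decoder over per-frame class log-likelihoods (e.g. from a Gist-based classifier). This is the textbook algorithm, not the thesis code; label indices and the sticky transition model in the example are assumptions.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely label sequence (e.g. room / corridor / transition)
    given per-frame emission log-likelihoods (T x N), a transition
    log-probability matrix (N x N), and initial log-probabilities (N)."""
    T, N = log_emit.shape
    dp = np.empty((T, N))                 # best path score ending in state j
    bp = np.zeros((T, N), dtype=int)      # backpointers
    dp[0] = log_init + log_emit[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans   # N x N candidate moves
        bp[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emit[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):         # trace the best path backwards
        path.append(int(bp[t][path[-1]]))
    return path[::-1]
```

    A sticky transition matrix (high self-transition probability) is what enforces the spatio-temporal consistency: isolated misclassified frames get smoothed into the surrounding cluster.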

    Visual Simultaneous Localization and Mapping in an Active Dynamic Environment

    In recent years, the work on simultaneous localization and mapping has matured significantly. Robust techniques have been developed to explore and map a static environment in real time. However, the problem of localizing and mapping a dynamic environment is still to be solved. The dynamic part of the environment not only makes localization difficult but also introduces a diverse set of challenges to the existing problems, such as detecting, tracking and segmenting the moving objects, and 3D reconstruction of the moving objects and/or the static environment. This thesis focuses on studying the problem of simultaneously localizing and mapping an actively dynamic environment. A comprehensive review and analysis of the state-of-the-art methods is provided for both the static and dynamic cases. A stereo camera is used to explore the dynamic environment and obtain semi-dense point clouds for the image sequence. The proposed approach is a variant of standard ICP in which the outliers of the registration process are not discarded. All 3D points are assigned a confidence measure based on their association within their respective neighborhood. The confidence measure decides whether a 3D point is classified as static or dynamic in the global map. Hence, the approach does not require any prior information about the environment or the moving objects. In the latter part of this study, the moving objects are segmented in 3D space and 2D images for any potential future analysis. The framework is tested on highly dynamic scenes from both indoor and outdoor environments. The results demonstrate the effectiveness of the proposed approach.
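    The idea of keeping registration outliers and scoring them can be caricatured with a simple per-point confidence update; the threshold, gain, and 0.5 decision boundary below are illustrative assumptions rather than the thesis's actual measure.

```python
def update_confidence(conf, residual, thresh=0.05, gain=0.1):
    """Illustrative static-confidence update for one 3D point: points that
    register well (small ICP residual) gain confidence, points with large
    residuals lose it; the value is clamped to [0, 1]."""
    conf = conf + gain if residual < thresh else conf - gain
    return min(1.0, max(0.0, conf))

def classify(conf):
    """Label a point from its accumulated confidence."""
    return "static" if conf >= 0.5 else "dynamic"
```

    Accumulating evidence over frames rather than hard-rejecting outliers is what lets a point on a temporarily occluded wall stay static while a point on a moving person drifts toward the dynamic label.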

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques, since it can support more reliable and robust localization, planning and control to meet some key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
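    The core of Lidar scan matching against a pre-built map can be sketched with the standard Kabsch least-squares rigid alignment, i.e. the inner step of one ICP iteration once correspondences are fixed. This is the generic algorithm (here in 2D with known correspondences), not the authors' full pipeline.

```python
import numpy as np

def align_scan(scan, map_pts):
    """Least-squares rigid transform (R, t) mapping `scan` onto `map_pts`,
    where row i of `scan` corresponds to row i of `map_pts` (Kabsch)."""
    sc, mc = scan.mean(0), map_pts.mean(0)           # centroids
    H = (scan - sc).T @ (map_pts - mc)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard vs. reflections
    D = np.diag([1.0] * (len(sc) - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mc - R @ sc
    return R, t
```

    In a full system this step would be iterated with re-estimated correspondences (ICP) and fused with the GNSS/INS prediction to yield the centimetre-level online pose.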