11 research outputs found

    A minimalistic approach to appearance-based visual SLAM

    This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features, for example via multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
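The graph-relaxation idea in this abstract can be illustrated with a toy sketch. This is my own minimal construction, not the paper's algorithm: poses are 1-D for clarity, each link carries a relative measurement plus an inverse-variance weight, and Gauss-Seidel-style sweeps move each node to the weighted mean implied by its links.

```python
# Minimal sketch of pose-graph relaxation (hypothetical, 1-D poses):
# each link (i, j, delta, weight) asserts pose[j] ≈ pose[i] + delta with
# confidence `weight`; repeated sweeps pull every node toward the weighted
# mean of the estimates implied by its links.

def relax(num_nodes, links, iterations=100):
    poses = [0.0] * num_nodes
    for _ in range(iterations):
        for node in range(1, num_nodes):  # node 0 anchors the map
            total_w, acc = 0.0, 0.0
            for i, j, delta, w in links:
                if j == node:
                    acc += w * (poses[i] + delta)   # forward estimate
                    total_w += w
                elif i == node:
                    acc += w * (poses[j] - delta)   # backward estimate
                    total_w += w
            if total_w > 0:
                poses[node] = acc / total_w
    return poses

# Odometry links 0→1→2 plus a loop-closure link 0→2 that disagrees slightly;
# relaxation settles node 2 between its odometry estimate (2.0) and the
# loop-closure measurement (1.8).
poses = relax(3, [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (0, 2, 1.8, 1.0)])
```

In the paper the per-link weights come from full Gaussian covariances estimated via image similarity; here they are scalar stand-ins.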

    Featureless visual processing for SLAM in changing outdoor environments

    Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors, such as rough terrain, high speeds and hardware limitations, can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under/over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which brings with it the challenge, in outdoor environments, of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
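The low-resolution, featureless matching this abstract relies on can be sketched roughly as follows (an illustrative toy, not the paper's pipeline): heavily downsampled intensity vectors are patch-normalised so that a global lighting change cancels out, then compared with a sum of absolute differences.

```python
# Toy sketch of featureless appearance comparison on low-resolution imagery
# (invented data): normalising each intensity vector to zero mean and unit
# standard deviation makes the comparison invariant to global brightness shifts.

def normalise(img):
    mean = sum(img) / len(img)
    std = (sum((p - mean) ** 2 for p in img) / len(img)) ** 0.5 or 1.0
    return [(p - mean) / std for p in img]

def difference(a, b):
    a, b = normalise(a), normalise(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

bright = [10, 20, 200, 20, 10]      # a scene at one time of day
brighter = [60, 70, 250, 70, 60]    # same scene under a global lighting change
other = [200, 10, 10, 10, 200]      # a different scene

assert difference(bright, brighter) < difference(bright, other)
```

Real systems apply this to small 2-D patches; 1-D vectors keep the sketch short.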

    2D Visual Place Recognition for Domestic Service Robots at Night

    Domestic service robots such as lawn mowing and vacuum cleaning robots are the most numerous consumer robots in existence today. While early versions employed random exploration, recent systems fielded by most of the major manufacturers have utilized range-based and visual sensors and user-placed beacons to enable robots to map and localize. However, active range and visual sensing solutions have the disadvantages of being intrusive, expensive, or only providing a 1D scan of the environment, while the requirement for beacon placement imposes other practical limitations. In this paper we present a passive and potentially cheap vision-based solution to 2D localization at night that combines easily obtainable day-time maps with low-resolution, contrast-normalized image matching algorithms, image sequence-based matching in two dimensions, place match interpolation and recent advances in conventional low light camera technology. In a range of experiments over a domestic lawn and in a lounge room, we demonstrate that the proposed approach enables 2D localization at night, and analyse the effect on performance of varying odometry noise levels, place match interpolation and sequence matching length. Finally, we benchmark the new low light camera technology and show how it can enable robust place recognition even in an environment lit only by a moonless sky, raising the tantalizing possibility of being able to apply all conventional vision algorithms, even in the darkest of nights.
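The sequence-based matching mentioned above can be sketched in a few lines (a generic illustration with an invented toy matrix, not the paper's two-dimensional variant): instead of trusting a single frame, a short aligned run of frame-difference values is scored, and the reference offset with the lowest total wins.

```python
# Hedged sketch of sequence matching: diff_matrix[q][r] is the appearance
# difference between query frame q and reference frame r; both sequences are
# assumed to advance one place per frame, so candidate alignments are diagonals.

def best_match(diff_matrix, seq_len):
    n_ref = len(diff_matrix[0])
    best = None
    for r0 in range(n_ref - seq_len + 1):
        score = sum(diff_matrix[q][r0 + q] for q in range(seq_len))
        if best is None or score < best[1]:
            best = (r0, score)
    return best

# Three query frames against five reference frames; the true alignment starts
# at reference offset 2, where the diagonal of low differences lies.
diffs = [
    [0.9, 0.8, 0.1, 0.9, 0.7],
    [0.7, 0.9, 0.8, 0.2, 0.9],
    [0.8, 0.6, 0.9, 0.7, 0.1],
]
assert best_match(diffs, 3)[0] == 2
```

Longer sequences trade latency for robustness, which is why the paper analyses matching length explicitly.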

    Monocular Vision SLAM for Indoor Aerial Vehicles

    This paper presents a novel indoor navigation and ranging strategy using a monocular camera. The proposed algorithms are integrated with simultaneous localization and mapping (SLAM) with a focus on indoor aerial vehicle applications. We experimentally validate the proposed algorithms by using a fully self-contained micro aerial vehicle (MAV) with on-board image processing and SLAM capabilities. The range measurement strategy is inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals. The navigation strategy assumes an unknown, GPS-denied environment, which is representable via corner-like feature points and straight architectural lines. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners.

    Biologically Inspired Monocular Vision Based Navigation and Mapping in GPS-Denied Environments

    This paper presents an in-depth theoretical study of a bio-vision-inspired feature extraction and depth perception method integrated with vision-based simultaneous localization and mapping (SLAM). We incorporate key functions of the developed visual cortex of several advanced species, including humans, for depth perception and pattern recognition. Our navigation strategy assumes a GPS-denied man-made environment consisting of orthogonal walls, corridors and doors. By exploiting the architectural features of indoor spaces, we introduce a method for gathering useful landmarks from a monocular camera for SLAM use, with absolute range information, without using active ranging sensors. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners. The proposed methods are experimentally validated by our self-contained MAV inside a conventional building.

    Camera localization using trajectories and maps

    We propose a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera using the observations of moving objects and a schematic map of the passable areas of the environment. Our approach takes advantage of static and dynamic information on the scene structures through prior probability distributions for object dynamics. The proposed approach restricts plausible positions where the sensor can be located while taking into account the inherent ambiguity of the given setting. The proposed framework samples from the posterior probability distribution for the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate, while explicitly isolating ambiguous settings. The proposed approach is evaluated in synthetic and real environments, showing satisfactory performance in both ambiguous and unambiguous settings.
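The posterior-sampling step in this abstract can be caricatured with a toy Metropolis sampler. This is not the paper's data-driven MCMC framework: the 1-D position and the bimodal likelihood are invented, purely to show how sampling exposes an ambiguous setting with two plausible camera placements.

```python
import math
import random

# Toy sketch of sampling a camera position from an unnormalised posterior
# (invented likelihood): two Gaussian bumps mimic an ambiguous setting where
# two placements explain the observed trajectories equally well.

def likelihood(x):
    return math.exp(-(x - 1.0) ** 2) + math.exp(-(x + 1.0) ** 2)

def metropolis(n_samples, step=0.5, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, likelihood ratio).
        if rng.random() < likelihood(proposal) / likelihood(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
# An ambiguity analysis (KL-divergence-based in the paper) would flag that
# the samples cover two distinct modes rather than one.
assert any(s > 0.5 for s in samples) and any(s < -0.5 for s in samples)
```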

    Efficient probabilistic planar robot motion estimation given pairs of images

    Estimating the relative pose between two camera positions given image point correspondences is a vital task in most view-based SLAM and robot navigation approaches. To improve robustness to noise and false point correspondences, it is common to incorporate the constraint that the robot moves over a planar surface, as is the case in most indoor and outdoor mapping applications. We propose a novel estimation method that determines the full likelihood in the space of all possible planar relative poses. The likelihood function can be learned from existing data using standard Bayesian methods and is efficiently stored in a low-dimensional look-up table. Estimating the likelihood of a new pose given a set of correspondences then boils down to a simple table look-up. As a result, the proposed method allows very efficient creation of pose constraints for vision-based SLAM applications, including a proper estimate of their uncertainty. It naturally handles ambiguous image data, such as images acquired in long corridors. The method can be trained using either artificial or real data, and is applied to both controlled simulated data and challenging images taken in real home environments. By computing the maximum-likelihood estimate we compare our approach with state-of-the-art estimators based on a combination of RANSAC and iteratively reweighted least squares, and show a significant increase in both efficiency and accuracy.
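The look-up-table idea can be sketched in one dimension (bin layout and training data are invented; the paper's table covers the full planar pose space): discretise the pose error, accumulate counts from training pairs, and evaluate a new hypothesis with a single indexed read.

```python
import math

# Minimal sketch of a learned likelihood look-up table over a heading error
# in [-pi, pi) (hypothetical discretisation): training errors cluster near
# zero, so bins near zero end up with high probability mass.

N_BINS, LO, HI = 8, -math.pi, math.pi

def bin_index(angle):
    return min(int((angle - LO) / (HI - LO) * N_BINS), N_BINS - 1)

def learn_table(training_errors):
    counts = [1.0] * N_BINS                 # Laplace smoothing
    for e in training_errors:
        counts[bin_index(e)] += 1.0
    total = sum(counts)
    return [c / total for c in counts]      # normalised likelihood table

table = learn_table([0.05, -0.1, 0.02, 0.0, 0.08, -0.03])
# Evaluating a hypothesis is now a constant-time lookup: small errors are
# far more likely than a 2.5-radian error under this training set.
assert table[bin_index(0.0)] > table[bin_index(2.5)]
```

The constant-time query is what makes the per-link pose constraints cheap enough for online SLAM.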

    Memory management for loop-closure detection for real-time mapping by a mobile robot

    For an autonomous robot to perform complex tasks, it must be able to map its environment in order to localize itself within it. Over the long term, to correct its global map, it must detect previously visited locations. This is one of the most important capabilities in simultaneous localization and mapping (SLAM), but also its main limitation. The computational load grows with the size of the environment, until the algorithms can no longer run in real time. To address this problem, the objective is to develop a new algorithm that detects previously visited locations in real time, regardless of the size of the environment. Loop-closure detection, i.e. the recognition of previously visited locations, is performed by a robust probabilistic algorithm that evaluates the similarity between images acquired by a camera at regular intervals. To manage the computational load of this algorithm efficiently, the robot's memory is divided into long-term memory (a database) and short-term and working memories (RAM). The working memory keeps the most distinctive images of the environment in order to meet the real-time constraint. When the real-time constraint is reached, images of the locations seen least often and least recently are transferred from working memory to long-term memory. These transferred images can be brought back from long-term memory into working memory when a neighbouring image in working memory receives a high probability that the robot has already passed through that location, thereby increasing the ability to detect previously visited locations with subsequent images.
    The system was tested with data previously recorded on the campus of the Université de Sherbrooke to evaluate its performance over long distances, as well as with four other standard datasets to evaluate its ability to adapt to different environments. The results suggest that the algorithm meets the stated objectives and outperforms existing approaches. This new loop-closure detection algorithm can be used directly as a topological SLAM technique, or in parallel with an existing SLAM technique, to detect locations already visited by an autonomous robot. When a loop closure is detected, the global map can then be corrected using the new constraint created between the new location and the matching old one.
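The working-memory / long-term-memory transfer policy described in this abstract can be sketched as follows (class and field names are invented; the real system scores locations by how often and how recently they were observed):

```python
# Loose sketch of bounded working memory for loop-closure detection: when the
# working memory exceeds its budget, the lowest-weight location is transferred
# to long-term memory; it can be retrieved later when a neighbouring location
# receives a high loop-closure probability.

class Memory:
    def __init__(self, wm_limit):
        self.wm_limit = wm_limit
        self.working = {}     # location id -> weight (how often/recently seen)
        self.long_term = {}   # transferred locations (the database)

    def add(self, loc_id, weight):
        self.working[loc_id] = weight
        if len(self.working) > self.wm_limit:
            victim = min(self.working, key=self.working.get)
            self.long_term[victim] = self.working.pop(victim)

    def retrieve(self, loc_id):
        # Bring a transferred neighbour back when a loop closure looks likely.
        if loc_id in self.long_term:
            self.working[loc_id] = self.long_term.pop(loc_id)

mem = Memory(wm_limit=2)
mem.add(1, weight=5)   # distinctive, often-seen place
mem.add(2, weight=1)   # rarely seen place
mem.add(3, weight=4)   # exceeding the budget evicts location 2
assert 2 in mem.long_term and 2 not in mem.working
mem.retrieve(2)        # a neighbour matched, so location 2 is reactivated
assert 2 in mem.working
```

Bounding the working-memory size is what keeps the per-image loop-closure comparison time constant, independent of the total map size.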

    Contributions to localization and mapping in robotics through visual place identification

    Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de CiĂȘncias, 2015. In the mobile robotics field, appearance-based methods are at the core of several attractive systems for localization and mapping. To be successful, however, these methods require features having good descriptive power. This is a necessary condition to ensure place recognition in the presence of disturbing factors, such as high similarity between places or lighting variations. This thesis addresses the localization and mapping problems, globally seeking representations which are more discriminative or more efficient. To this end, two broad types of visual features are used, local and global features. Appearance representations based on local features have been dominated by the BoW (Bag-of-Words) model, which prescribes the quantization of descriptors and their labelling with visual words. In this thesis, this method is challenged through the study of the alternative approach, the non-quantized representation (NQ). As an outcome of this study, we contribute a novel global localization method, the NQ classifier. Besides offering higher precision than the BoW model, this classifier admits significant simplifications, through which it is made competitive with the quantized representation in terms of efficiency. This thesis also addresses the problem posed prior to localization, the mapping of the environment, focusing specifically on the loop-closure detection task. To support loop closing, a new global feature, LBP-Gist, is proposed. As the name suggests, this feature combines texture analysis, provided by the LBP method, with the encoding of global image structure, underlying the Gist feature. Evaluation on several datasets demonstrates the validity of the proposed detector, whose precision and efficiency are shown to be superior to the state-of-the-art in outdoor environments.
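As a toy illustration of the LBP (Local Binary Pattern) texture codes that LBP-Gist builds on (a generic LBP sketch, not the thesis implementation): each interior pixel is encoded by which of its eight neighbours are at least as bright, and the image is summarised by the resulting codes.

```python
# Generic sketch of 8-neighbour LBP codes on a grayscale image given as a
# list of rows: bit k of a pixel's code is set when neighbour k is at least
# as bright as the pixel itself. A histogram of these codes is the usual
# texture descriptor.

def lbp_codes(img):
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= img[y][x]:
                    code |= 1 << bit
            codes.append(code)
    return codes

flat = [[7, 7, 7], [7, 7, 7], [7, 7, 7]]
assert lbp_codes(flat) == [255]   # uniform patch: every neighbour qualifies
```

LBP-Gist then combines such texture statistics with Gist's coarse encoding of global image structure.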