15 research outputs found

    Extending the Limits of the Random Exploration Graph for Efficient Autonomous Exploration in Unknown Environments

    The autonomous construction of environment maps by mobile robots is a fundamental problem in robotics, because virtually all robotic tasks require a representation of the working environment in order to operate. Although many works have addressed this problem, known as SLAM, it remains open: most solutions do not include a planner that lets the robot explore the working environment autonomously, and those that do rely on slow algorithms that guarantee neither total coverage of the environment nor efficient exploration, which can yield maps of poor quality or maps that are simply unusable because of the missing information. This work therefore presents a new exploration method based on the random exploration graph (REG) which, unlike its predecessor, performs a systematic analysis of the next positions to explore, eliminating randomness from the decision-making and thereby minimizing both the number of movements the robot must make to reach them and the time required to achieve total coverage of the environment. A series of tests of the proposed method is also presented, and the results on classical variables such as time and distance validate the efficiency of the approach.
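
    As a rough illustration of the deterministic candidate selection described above, the sketch below scores every frontier node of the exploration graph instead of drawing one at random; the scoring function, the travel-cost weight and the data structures are assumptions made for illustration, not the authors' implementation.

        import math

        def select_next_node(robot_pose, candidates, information_gain):
            """Pick the frontier node that trades expected new coverage
            against the motion cost needed to reach it (hypothetical score)."""
            best, best_score = None, float("-inf")
            for node in candidates:                           # systematic scan, no random draw
                travel_cost = math.dist(robot_pose, node)     # distance the robot must move
                score = information_gain[node] - 0.5 * travel_cost  # assumed trade-off weight
                if score > best_score:
                    best, best_score = node, score
            return best

        # Example: two unvisited graph nodes with assumed information-gain values.
        print(select_next_node((0.0, 0.0),
                               [(2.0, 1.0), (5.0, 5.0)],
                               {(2.0, 1.0): 3.0, (5.0, 5.0): 8.0}))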

    Cautious Planning with Incremental Symbolic Perception: Designing Verified Reactive Driving Maneuvers

    This work presents a step towards utilizing incrementally improving symbolic perception knowledge of the robot's surroundings for provably correct reactive control synthesis, applied to an autonomous driving problem. Combining abstract models of motion control and information gathering, we show that assume-guarantee specifications (a subclass of Linear Temporal Logic) can be used to define and resolve traffic rules for cautious planning. We propose a novel representation called the symbolic refinement tree for perception, which captures the incremental knowledge about the environment and embodies the relationships between the various symbolic perception inputs. This incremental knowledge is leveraged for synthesizing verified reactive plans for the robot. Case studies demonstrate the efficacy of the proposed approach in synthesizing control inputs even in the case of partially occluded environments.
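
    A minimal sketch of the assume-guarantee structure mentioned above, in a GR(1)-style fragment of LTL: the environment assumptions on the left imply the system guarantees on the right. The atomic propositions in the traffic-rule instance below are illustrative assumptions, not the paper's actual specification.

        \varphi \;=\; \big(\varphi^{\text{init}}_{e} \wedge \mathbf{G}\,\varphi^{\text{safe}}_{e} \wedge \mathbf{G}\mathbf{F}\,\varphi^{\text{fair}}_{e}\big)
        \;\rightarrow\;
        \big(\varphi^{\text{init}}_{s} \wedge \mathbf{G}\,\varphi^{\text{safe}}_{s} \wedge \mathbf{G}\mathbf{F}\,\varphi^{\text{goal}}_{s}\big)

        % Illustrative traffic-rule instance (hypothetical propositions):
        \mathbf{G}\mathbf{F}\,\neg\mathit{occluded} \;\rightarrow\; \mathbf{G}\,\big(\mathit{obstacle\_ahead} \rightarrow \mathbf{X}\,\mathit{slow\_down}\big)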

    Robot Collaboration for Simultaneous Map Building and Localization

    DELIBOT WITH SLAM IMPLEMENTATION

    This paper describes and discusses research on "DeliBOT – A Mobile Robot with Implementation of SLAM utilizing Computer Vision/Machine Learning Techniques". The main objective is to study the use of the Kinect in mobile robotics and to build an integrated system capable of constructing a map of the environment and localizing the mobile robot with respect to that map using visual cues. The work comprised four main stages. The first step was studying and testing solutions for mapping and navigation with an RGB-D sensor, the Kinect. The next stage was implementing a system capable of identifying and localizing objects from the point cloud provided by the Kinect, allowing further tasks to be executed on the system while accounting for the computational load. The third step was identifying landmarks and the improvement they can bring to the framework. Finally, the previous modules were integrated and the integrated system was experimentally evaluated and validated. The demand for substituting robots for humans is growing these days because robots arguably make fewer mistakes. Over the past few years the technology has become more accurate, producing sound results with fewer errors, and research has begun to incorporate more sensors. Using the available sensors, the robot perceives and identifies the environment it is in and builds a map; it can also locate itself within that environment. The robot's fundamental operations are object identification and localization for carrying out its services. The robot performs appropriate path planning and obstacle avoidance by setting a target or determining a goal [1]. Owing to the extensive research and robotics applications in almost every segment of human life, from space surveillance to health care, solutions have been created for autonomous mobile robots to carry out tasks in indoor environments without human intervention [2], for example in cleaning and transportation applications. Safe, high-performing robot navigation requires a map of the environment, and since in most real-life applications the map is not given, an exploration algorithm is used.
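
    To make the mapping pipeline above concrete, the following sketch back-projects a depth frame from an RGB-D sensor such as the Kinect into 3D points and marks the corresponding cells of a coarse occupancy grid. The intrinsics, the grid size and the absence of free-space ray-casting are simplifying assumptions for illustration, not the paper's actual system.

        import numpy as np

        # Assumed Kinect-like intrinsics and grid resolution (illustrative values).
        FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
        CELL = 0.05                                    # grid resolution in metres
        GRID = np.zeros((200, 200), dtype=np.uint8)    # 10 m x 10 m occupancy map

        def depth_to_points(depth_m):
            """Back-project a depth image (in metres) into camera-frame 3D points."""
            v, u = np.indices(depth_m.shape)
            z = depth_m
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
            return pts[pts[:, 2] > 0]                  # discard invalid (zero-depth) pixels

        def update_grid(points, robot_xy=(5.0, 5.0)):
            """Mark grid cells hit by obstacle points (sketch: no free-space clearing,
            camera assumed level with the ground plane)."""
            gx = ((points[:, 0] + robot_xy[0]) / CELL).astype(int)
            gy = ((points[:, 2] + robot_xy[1]) / CELL).astype(int)
            ok = (gx >= 0) & (gx < GRID.shape[1]) & (gy >= 0) & (gy < GRID.shape[0])
            GRID[gy[ok], gx[ok]] = 1

        update_grid(depth_to_points(np.full((480, 640), 2.0)))   # fake flat depth frame
        print(GRID.sum(), "occupied cells")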

    Memory management for loop closure detection in real-time mapping by a mobile robot

    To allow an autonomous robot to perform complex tasks, it is important that it can map its environment in order to localize itself within it. In the long term, to correct its global map, it must detect places it has already visited. This is one of the most important capabilities in simultaneous localization and mapping (SLAM), but also its main limitation: the computational load grows with the size of the environment, and the algorithms eventually can no longer run in real time. To address this problem, the objective is to develop a new algorithm for real-time detection of previously visited places that works regardless of the size of the environment. Loop closure detection, that is, the recognition of previously visited places, is performed by a robust probabilistic algorithm that evaluates the similarity between images acquired by a camera at regular intervals. To manage the computational load of this algorithm efficiently, the robot's memory is divided into a long-term memory (a database) and short-term and working memories (RAM). The working memory keeps the most characteristic images of the environment in order to respect the real-time constraint. When the real-time constraint is reached, the images of the places seen least often over the longest period are transferred from working memory to long-term memory. These transferred images can be brought back from long-term memory into working memory when a neighbouring image in working memory receives a high probability that the robot has already passed through that place, thereby increasing the ability to detect previously visited places with the next acquired images. The system was tested with data previously collected on the Université de Sherbrooke campus to evaluate its performance over long distances, as well as with four other standard data sets to evaluate its ability to adapt to different environments. The results suggest that the algorithm meets the stated objectives and outperforms existing approaches. This new loop closure detection algorithm can be used directly as a topological SLAM technique or in parallel with an existing SLAM technique to detect places already visited by an autonomous robot. When a loop closure is detected, the global map can then be corrected using the new constraint created between the new location and the similar old one.
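
    A minimal sketch of the memory management policy described above, in the spirit of transferring the least-weighted locations from working memory to the long-term database and retrieving their neighbours when a loop closure becomes likely. The weighting rule, the size limit standing in for the time budget, and the neighbourhood radius are illustrative assumptions, not the thesis' exact algorithm.

        class MemoryManager:
            """Sketch of the working / long-term memory split (assumed policy)."""

            def __init__(self, max_working_size=300):
                self.working = {}      # location id -> [weight, image signature]
                self.long_term = {}    # the "database" of transferred locations
                self.max_working_size = max_working_size

            def add(self, loc_id, signature):
                self.working[loc_id] = [1, signature]
                self._enforce_limit()

            def rehearse(self, loc_id):
                self.working[loc_id][0] += 1   # location seen again: raise its weight

            def _enforce_limit(self):
                # When the real-time budget (approximated here by a size limit) is
                # exceeded, move the lowest-weight, least often seen locations
                # from working memory to long-term memory.
                while len(self.working) > self.max_working_size:
                    victim = min(self.working, key=lambda k: self.working[k][0])
                    self.long_term[victim] = self.working.pop(victim)

            def retrieve_neighbours(self, loc_id, radius=2):
                # A highly probable loop closure on loc_id brings its neighbouring
                # locations back from long-term memory into working memory, so the
                # next acquired images can extend the detection.
                for nid in range(loc_id - radius, loc_id + radius + 1):
                    if nid in self.long_term:
                        self.working[nid] = self.long_term.pop(nid)

        mm = MemoryManager(max_working_size=2)
        for i, sig in enumerate(["img0", "img1", "img2"]):
            mm.add(i, sig)
        print(sorted(mm.working), sorted(mm.long_term))   # oldest low-weight location transferred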

    Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach

    In this thesis we give new means for a machine to understand complex and dynamic visual scenes in real time. In particular, we solve the problem of simultaneously reconstructing a representation of the world's geometry, the observer's trajectory, and the moving objects' structures and trajectories, with the aid of exteroceptive vision sensors. We proceed by dividing the problem into three main steps. First, we give a solution to the Simultaneous Localization And Mapping (SLAM) problem for monocular vision that is able to perform adequately in the most ill-conditioned situations: those where the observer approaches the scene in a straight line. Second, we incorporate full 3D instantaneous observability by duplicating the vision hardware while keeping monocular algorithms. This allows us to avoid some of the inherent drawbacks of classic stereo systems, notably their limited range of 3D observability and the need for frequent mechanical calibration. Third, we add detection and tracking of moving objects by making use of this full 3D observability, whose necessity we judge almost inevitable. We choose a sparse, punctual representation of both the world and the moving objects in order to alleviate the computational payload of the image-processing algorithms that extract the necessary geometrical information from the images. This alleviation is further supported by active feature detection and search mechanisms that focus attention on the image regions of highest interest. This focusing is achieved by extensively exploiting the current knowledge available about the system (all the mapped information), something we finally highlight as the ultimate key to success.
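
    The active search mechanism mentioned above can be sketched as follows: a mapped landmark and its covariance are projected into the image, and feature matching is restricted to the resulting uncertainty window. The pinhole model, the Jacobian and the 3-sigma gate are standard textbook assumptions, not necessarily the thesis' exact formulation.

        import numpy as np

        def active_search_region(landmark_xyz, P_landmark, K, n_sigma=3.0):
            """Project a mapped landmark and its 3x3 covariance into the image
            and return the bounding box of the region worth searching."""
            x, y, z = landmark_xyz
            u = K[0, 0] * x / z + K[0, 2]              # pinhole projection
            v = K[1, 1] * y / z + K[1, 2]
            J = np.array([[K[0, 0] / z, 0.0, -K[0, 0] * x / z**2],
                          [0.0, K[1, 1] / z, -K[1, 1] * y / z**2]])
            S = J @ P_landmark @ J.T                   # 2x2 image-space covariance
            ru, rv = n_sigma * np.sqrt(np.diag(S))     # half-sizes of the search window
            return (u - ru, v - rv, u + ru, v + rv)

        # Example with an assumed camera matrix and landmark uncertainty.
        K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
        print(active_search_region((1.0, 0.2, 4.0), np.eye(3) * 0.01, K))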

    Towards topological mapping with vision-based simultaneous localization and map building

    Although the theory of Simultaneous Localization and Map Building (SLAM) is well developed, there are many challenges to overcome when incorporating vision sensors into SLAM systems. Visual sensors have different properties from range-finding sensors and therefore require different considerations. Existing vision-based SLAM algorithms extract point landmarks, which are required by SLAM algorithms such as the Kalman filter. Under this restriction, the types of image features that can be used are limited and the full advantages of vision are not realized. This thesis examines the theoretical formulation of the SLAM problem and the characteristics of visual information in the SLAM domain. It also examines different representations of uncertainty, features and environments. It identifies the need for a suitable framework for vision-based SLAM systems and proposes a framework called VisionSLAM, which uses an appearance-based landmark representation and a topological map structure to model metric relations between landmarks. A set of Haar feature filters is used to extract image structure statistics, which are robust against illumination changes, have good uniqueness properties and can be computed in real time. The algorithm is able to resolve and correct false data associations and is robust against random correlations resulting from perceptual aliasing. The algorithm has been tested extensively in a natural outdoor environment.
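
    As a sketch of the Haar feature filters mentioned above, responses can be computed in constant time per rectangle from an integral image. The particular two-rectangle edge filter and the toy image below are illustrative assumptions; the thesis uses a set of such filters to summarise image structure.

        import numpy as np

        def integral_image(img):
            """Summed-area table: any rectangular sum then costs four lookups."""
            return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

        def rect_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1, c0:c1] (exclusive ends) from the integral image."""
            total = ii[r1 - 1, c1 - 1]
            if r0 > 0: total -= ii[r0 - 1, c1 - 1]
            if c0 > 0: total -= ii[r1 - 1, c0 - 1]
            if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
            return total

        def haar_horizontal_edge(ii, r, c, h, w):
            """Two-rectangle Haar response: top half minus bottom half."""
            top = rect_sum(ii, r, c, r + h // 2, c + w)
            bottom = rect_sum(ii, r + h // 2, c, r + h, c + w)
            return top - bottom

        img = np.vstack([np.zeros((8, 16)), np.ones((8, 16))])   # dark-over-bright test patch
        ii = integral_image(img)
        print(haar_horizontal_edge(ii, 0, 0, 16, 16))            # strong negative edge response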

    Exploiting map data for the perception of intelligent vehicles

    Most of the software controlling intelligent vehicles deals with scene understanding, and many methods currently exist for detecting obstacles automatically, most of them relying on exteroceptive sensors such as cameras or lidars. This thesis lies in the domains of robotics and data fusion and concerns geographic information systems. We study the utility of adding digital maps, which model the urban environment in which the vehicle evolves, as a virtual sensor that improves the perception results. Indeed, maps contain a phenomenal quantity of information about the environment: its geometry, its topology and additional contextual information. In this work, we extract road-surface geometry and building models in order to deduce the context and the characteristics of each detected object. Our method is based on an extension of occupancy grids: evidential perception grids. It makes it possible to model explicitly the uncertainty related to the map and sensor data. The approach also has the advantage of representing homogeneously the data originating from various sources: lidar, camera or maps. The maps are handled on equal terms with the physical sensors. This approach allows us to add geographic information without giving it undue importance, which is essential in the presence of errors. In our approach, the information-fusion result, stored in a perception grid, is used to predict the state of the environment at the next instant. Estimating the characteristics of dynamic elements no longer satisfies the static-world hypothesis, so it is necessary to adjust the level of certainty attributed to these pieces of information. We do so by applying temporal discounting. Because existing methods are not well suited to this application, we propose a family of discounting operators that take into account the type of information being handled. The algorithms studied have been validated through tests on real data: we developed prototypes in Matlab and C++ software based on the Pacpus framework, and with them we present the results of experiments performed in real conditions.
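
    As an illustration of the temporal discounting mentioned above, the classical Shafer discounting operator weakens the belief masses of an evidential grid cell before carrying them to the next time step. The thesis proposes a family of operators adapted to the type of information handled, so the textbook baseline below is only a hedged sketch, with the two-element frame {F, O} (free, occupied) taken as an assumption.

        def discount(mass, alpha):
            """Classical Shafer discounting of a mass function on the frame {F, O};
            'FO' denotes the ignorance set Omega. The discarded belief is moved
            to ignorance: m'(A) = alpha * m(A) for A != Omega,
            m'(Omega) = 1 - alpha + alpha * m(Omega)."""
            out = {a: alpha * m for a, m in mass.items() if a != 'FO'}
            out['FO'] = 1.0 - alpha + alpha * mass.get('FO', 0.0)
            return out

        cell = {'F': 0.1, 'O': 0.7, 'FO': 0.2}     # evidential grid cell at time t
        print(discount(cell, alpha=0.9))           # weakened belief carried to t+1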