1,399 research outputs found

    Autonomous Navigation and Mapping using Monocular Low-Resolution Grayscale Vision

    Vision has been a powerful tool for the navigation of intelligent and man-made systems ever since the cybernetics revolution of the 1970s. There have been two basic approaches to the navigation of computer-controlled systems: the self-contained, bottom-up development of sensorimotor abilities, namely perception and mobility; and the top-down approach, namely artificial intelligence, reasoning, and knowledge-based methods. The three-fold goal of autonomous exploration, mapping, and localization of a mobile robot, however, needs to be developed within a single framework. An algorithm is proposed to answer the challenges of autonomous corridor navigation and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot is able to perform straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. When combined, these metrics allow the robot to navigate in both textured and untextured environments. The robot can autonomously explore an unknown indoor environment, recovering from difficult situations like corners, blank walls, and an initial heading toward a wall. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places like doors, water fountains, and other corridors. Because the algorithm is based entirely upon low-resolution (32 × 24) grayscale images, processing occurs at over 1000 frames per second.
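
    A minimal sketch of the two image statistics this abstract relies on is given below, assuming 32 × 24 grayscale frames held as NumPy arrays. The 64-bin histograms and the function names are illustrative choices, not the thesis implementation.

```python
# Sketch of a scalar entropy measure and the Jeffrey divergence for low-res
# grayscale frames. Bin count and names are assumptions for illustration.
import numpy as np

def scalar_entropy(img, bins=64):
    """Shannon entropy (bits) of the grayscale histogram of one frame."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def jeffrey_divergence(img_a, img_b, bins=64, eps=1e-12):
    """Symmetric (Jeffrey) KL divergence between two frame histograms."""
    pa, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    pb, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    pa = pa / pa.sum() + eps
    pb = pb / pb.sum() + eps
    return float(np.sum(pa * np.log(pa / pb)) + np.sum(pb * np.log(pb / pa)))

# Usage idea: very low entropy suggests a blank wall or dead end ahead, while a
# spike in Jeffrey divergence between consecutive frames can mark a corridor end.
frame_prev = np.random.randint(0, 256, (24, 32), dtype=np.uint8)
frame_curr = np.random.randint(0, 256, (24, 32), dtype=np.uint8)
print(scalar_entropy(frame_curr), jeffrey_divergence(frame_prev, frame_curr))
```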

    Indoor simultaneous localization and mapping based on fringe projection profilometry

    Simultaneous Localization and Mapping (SLAM) plays an important role in outdoor and indoor applications ranging from autonomous driving to indoor robotics. Outdoor SLAM has been widely used with the assistance of LiDAR or GPS. For indoor applications, LiDAR does not satisfy the accuracy requirement and GPS signals are lost, so an accurate and efficient scene-sensing technique is required for indoor SLAM. As the most promising 3D sensing technique, fringe projection profilometry (FPP) offers clear opportunities for indoor SLAM, but methods to date have not fully leveraged the accuracy and speed of sensing that such systems offer. In this paper, we propose a novel FPP-based indoor SLAM method built on the coordinate transformation relationship of FPP, where 2D-to-3D descriptor matching is used for mapping and localization. The correspondences generated by matching descriptors are used for fast and accurate mapping, and the transform estimated between the 2D and 3D descriptors is used to localize the sensor. The provided experimental results demonstrate that the proposed indoor SLAM can achieve localization and mapping accuracy of around one millimeter.
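
    A small sketch of the localization step (recovering the sensor pose from matched 2D-3D correspondences) follows. The intrinsics, the synthetic points, and the use of OpenCV's generic PnP solver are assumptions for illustration; the paper's own FPP coordinate-transformation model is not reproduced here.

```python
# Pose from 2D-3D correspondences with OpenCV's PnP solver (illustrative only).
import numpy as np
import cv2

# 3D points in the FPP-reconstructed scene (metres) and an assumed pinhole camera.
pts_3d = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.05], [0.0, 0.3, 0.10],
                   [0.3, 0.3, 0.0], [0.15, 0.1, 0.25], [0.05, 0.25, 0.15]])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Stand-in for descriptor matching: project the 3D points from a known pose to
# obtain the corresponding 2D keypoints in the camera image.
rvec_true = np.array([0.10, -0.05, 0.02])
tvec_true = np.array([0.05, -0.02, 1.50])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_true, tvec_true, K, None)

# Pose estimation from the 2D-3D correspondences, then the sensor position in
# scene coordinates (should recover the pose used above).
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None)
R, _ = cv2.Rodrigues(rvec)
sensor_position = -R.T @ tvec.reshape(3)
print(ok, sensor_position)
```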

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, the increase of computational capabilities, and the evolution of computer vision techniques, has allowed important advances in the development of UAV technologies and applications. In particular, computer vision technologies integrated in UAVs enable cutting-edge solutions to aerial perception difficulties, such as visual navigation, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of applications for UAVs beyond classic military and defense purposes. Unmanned Aerial Vehicles and Computer Vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey therefore focuses on artificial perception applications that represent important advances of recent years in the expert system field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, able to address fundamental technical limitations such as visual odometry, obstacle detection, mapping and localization, et cetera. They have also been analyzed based on their capabilities and potential utility, and the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Perception-aware Tag Placement Planning for Robust Localization of UAVs in Indoor Construction Environments

    Tag-based visual-inertial localization is a lightweight method for enabling autonomous data-collection missions of low-cost unmanned aerial vehicles (UAVs) in indoor construction environments. However, finding the optimal tag configuration (i.e., number, size, and location) on dynamic construction sites remains challenging. This paper proposes a perception-aware genetic algorithm-based tag placement planner (PGA-TaPP) to determine the optimal tag configuration using 4D-BIM, considering the project progress, safety requirements, and the UAV's localizability. The proposed method provides a 4D plan for tag placement by maximizing the localizability in user-specified regions of interest (ROIs) while limiting the installation costs. Localizability is quantified using the Fisher information matrix (FIM) and encapsulated in navigable grids. The experimental results show the effectiveness of our method in finding an optimal 4D tag placement plan for the robust localization of UAVs on under-construction indoor sites. Comment: [Final draft] This material may be downloaded for personal use only. Any other use requires prior permission of the American Society of Civil Engineers and the Journal of Computing in Civil Engineering.
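
    A minimal sketch of the kind of objective such a planner might optimize is shown below: a fitness that rewards FIM-based localizability over navigable grid cells in the ROIs and penalizes the number of installed tags. The bearing-only FIM model, the weights, and all names are simplifying assumptions, not the PGA-TaPP formulation.

```python
# Illustrative fitness for a GA-based tag placement planner (assumed model).
import numpy as np

def fim_localizability(cell, tags, sigma=0.05):
    """Log-determinant of an approximate 2x2 position FIM built from
    bearing observations of each tag as seen from this grid cell."""
    fim = np.zeros((2, 2))
    for tag in tags:
        d = np.asarray(tag, dtype=float) - np.asarray(cell, dtype=float)
        r2 = float(d @ d)
        if r2 < 1e-9:
            continue
        jac = np.array([d[1], -d[0]]) / r2      # bearing Jacobian w.r.t. position
        fim += np.outer(jac, jac) / sigma**2
    _, logdet = np.linalg.slogdet(fim + 1e-9 * np.eye(2))
    return logdet

def fitness(tag_layout, roi_cells, cost_per_tag=1.0, weight=0.1):
    """Localizability summed over the ROI minus an installation-cost penalty."""
    score = sum(fim_localizability(cell, tag_layout) for cell in roi_cells)
    return score - weight * cost_per_tag * len(tag_layout)

roi = [(x, y) for x in range(5) for y in range(5)]     # navigable grid cells (m)
layout = [(0.0, 0.0), (4.0, 0.0), (2.0, 4.0)]          # candidate tag positions (m)
print(fitness(layout, roi))
```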

    MRSL: AUTONOMOUS NEURAL NETWORK-BASED SELF-STABILIZING SYSTEM

    Stabilizing and localizing positioning systems autonomously in areas without GPS accessibility is a difficult task. In this thesis we describe a methodology called Most Reliable Straight Line (MRSL) for stabilizing and positioning camera-based objects in 3-D space. The camera-captured images are used to identify easy-to-track points ("interesting points") and track them across two consecutive images. The distances between corresponding interesting points on the two consecutive images are compared, and the one with the maximum length is assigned to MRSL, which indicates the deviation from the original position. To correct this, our trained algorithm reduces the deviation by issuing relevant commands; this action is repeated until MRSL converges to zero. To test accuracy and robustness, the algorithm was deployed to control the positioning of a quadcopter. It was demonstrated that the quadcopter (a) was highly robust to external forces, (b) could fly even after the loss of an engine, and (c) could fly smoothly and position itself at a desired location.
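
    The sketch below illustrates the MRSL signal as described: track interesting points between two consecutive frames and take the largest point displacement as the deviation to drive back toward zero. The choice of Shi-Tomasi corners plus pyramidal Lucas-Kanade tracking, and all names here, are assumptions for illustration.

```python
# Largest tracked-point displacement between two frames (assumed MRSL-style signal).
import numpy as np
import cv2

def mrsl_deviation(prev_gray, curr_gray, max_corners=100):
    """Largest displacement among tracked interest points, in pixels."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return 0.0
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    disp = np.linalg.norm(pts_curr[good] - pts_prev[good], axis=2)
    return float(disp.max())

# Synthetic two-frame example: a bright block shifted 5 pixels to the right,
# so the reported deviation should be roughly 5.
prev = np.zeros((240, 320), np.uint8)
cv2.rectangle(prev, (100, 80), (160, 140), 255, -1)
curr = np.roll(prev, 5, axis=1)
print(mrsl_deviation(prev, curr))

# A stabilization loop would repeat: measure the deviation, issue a corrective
# command (via a hypothetical flight-control interface), and stop once the
# deviation has converged toward zero.
```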