
    Visual-UWB Navigation System for Unknown Environments

    Navigation applications relying on the Global Navigation Satellite System (GNSS) are limited in indoor environments and in GNSS-denied outdoor terrain such as dense urban areas or forests. In this paper, we present a novel, accurate, robust, and low-cost GNSS-independent navigation system composed of a monocular camera and Ultra-wideband (UWB) transceivers. Visual techniques have achieved excellent results in computing the incremental motion of the sensor, and UWB methods provide promising localization accuracy thanks to the high time resolution of UWB ranging signals. However, monocular visual techniques suffer from scale ambiguity and are therefore unsuitable for applications requiring metric results, while UWB methods assume that the positions of the UWB anchors are pre-calibrated and known, precluding their use in unknown and challenging environments. To this end, we advocate leveraging the monocular camera and UWB together to create a map of visual features and UWB anchors. We propose a visual-UWB Simultaneous Localization and Mapping (SLAM) algorithm that tightly combines visual and UWB measurements into a joint non-linear optimization problem on a Lie manifold. The 6-Degree-of-Freedom (DoF) states of the vehicles and the map are estimated by minimizing the UWB ranging errors and landmark reprojection errors. Our navigation system starts with an exploratory task that performs real-time visual-UWB SLAM to obtain a global map, followed by a navigation task that reuses this global map. The tasks can be performed by different vehicles of a heterogeneous team, depending on their equipped sensors and payload capability. We validate our system on public datasets, achieving typical centimeter accuracy and 0.1% scale error.
    Comment: Proceedings of the 31st International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2018)
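    As a rough illustration, the joint optimization described above can be written in the following form (a sketch with assumed notation and robust weighting, not the paper's exact formulation), where $\mathbf{T}_i \in SE(3)$ are vehicle poses with translations $\mathbf{p}_i$, $\mathbf{a}_k$ are UWB anchor positions, $\mathbf{l}_j$ are visual landmarks, $d_{ik}$ are UWB range measurements, $\mathbf{z}_{ij}$ are feature observations, $\pi(\cdot)$ is the camera projection, and $\rho$ is a robust kernel:

    $$\min_{\{\mathbf{T}_i\},\,\{\mathbf{a}_k\},\,\{\mathbf{l}_j\}} \;\sum_{i,k} \rho\!\left(\frac{\big(d_{ik} - \lVert \mathbf{p}_i - \mathbf{a}_k \rVert\big)^2}{\sigma_{\mathrm{UWB}}^2}\right) \;+\; \sum_{i,j} \rho\!\left(\big\lVert \mathbf{z}_{ij} - \pi\big(\mathbf{T}_i^{-1}\,\mathbf{l}_j\big)\big\rVert^2_{\Sigma_{ij}^{-1}}\right)$$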

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in the commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser- and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion of heterogeneous sensors to improve robustness and enable operation in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.

    Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations

    Size-, weight-, and power-constrained platforms have limited computational resources, which introduces unique challenges in implementing localization algorithms. We present a framework for fast localization on such platforms, enabled by the compressive capabilities of Gaussian Mixture Model (GMM) representations of point cloud data. Given raw structural data from a depth sensor and pitch and roll estimates from an on-board attitude reference system, a multi-hypothesis particle filter localizes the vehicle by evaluating the likelihood of the data under the mixture model. We analyze this likelihood in the vicinity of the ground-truth pose, detail its use in a particle filter-based vehicle localization strategy, and present results of real-time implementations on a desktop system and an off-the-shelf embedded platform that outperform a state-of-the-art algorithm on the same environment.
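    A minimal sketch of the GMM-based particle weighting described above, assuming a precomputed 3-D GMM map and SE(3) particle poses (not the paper's implementation):

    ```python
    # Assumptions: a precomputed 3-D GMM map given as (weights, means, covariances);
    # particles are candidate sensor poses given as (rotation matrix R, translation t).
    import numpy as np

    def gmm_log_likelihood(points, weights, means, covs):
        """Log-likelihood of Nx3 points under a GMM with K full-covariance components."""
        N, K = points.shape[0], weights.shape[0]
        log_probs = np.empty((N, K))
        for k in range(K):
            diff = points - means[k]                                  # (N, 3)
            cov_inv = np.linalg.inv(covs[k])
            maha = np.einsum('ni,ij,nj->n', diff, cov_inv, diff)      # Mahalanobis distances
            log_norm = -0.5 * (3 * np.log(2 * np.pi) + np.log(np.linalg.det(covs[k])))
            log_probs[:, k] = np.log(weights[k]) + log_norm - 0.5 * maha
        # Log-sum-exp over mixture components, then sum over all points.
        m = log_probs.max(axis=1, keepdims=True)
        return np.sum(m.squeeze(1) + np.log(np.exp(log_probs - m).sum(axis=1)))

    def particle_weights(depth_points, particles, gmm):
        """Score each particle pose by how well the transformed scan fits the GMM map."""
        log_w = []
        for R, t in particles:
            world_pts = depth_points @ R.T + t                        # scan expressed in map frame
            log_w.append(gmm_log_likelihood(world_pts, *gmm))
        log_w = np.asarray(log_w)
        w = np.exp(log_w - log_w.max())                               # subtract max for numerical stability
        return w / w.sum()
    ```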

    Focus Is All You Need: Loss Functions For Event-based Vision

    Event cameras are novel vision sensors that output pixel-level brightness changes ("events") instead of traditional video frames. These asynchronous sensors offer several advantages over traditional cameras, such as high temporal resolution, very high dynamic range, and no motion blur. To unlock the potential of such sensors, motion compensation methods have recently been proposed. We present a collection and taxonomy of twenty-two objective functions to analyze event alignment in motion compensation approaches (Fig. 1). We call them Focus Loss Functions since they have strong connections with functions used in traditional shape-from-focus applications. The proposed loss functions allow bringing mature computer vision tools to the realm of event cameras. We compare the accuracy and runtime performance of all loss functions on a publicly available dataset and conclude that the variance, the gradient magnitude, and the Laplacian magnitude are among the best loss functions. The applicability of the loss functions is shown on multiple tasks: rotational motion, depth, and optical flow estimation. The proposed focus loss functions make it possible to unlock the outstanding properties of event cameras.
    Comment: 29 pages, 19 figures, 4 tables
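    As a rough illustration of one such focus loss, the following sketch accumulates events warped by a constant optic-flow candidate and scores the result by image variance (the warping model and interface are assumptions; the paper's twenty-two loss functions are not reproduced here):

    ```python
    # Assumptions: events are rows of (x, y, t, polarity); a simple constant
    # optic-flow warp is used as the motion model for illustration only.
    import numpy as np

    def image_of_warped_events(events, flow, t_ref, shape):
        """Accumulate events into an image after warping them to reference time t_ref."""
        x, y, t, p = events.T
        # Warp each event along a constant optic-flow candidate (vx, vy).
        xw = np.round(x - flow[0] * (t - t_ref)).astype(int)
        yw = np.round(y - flow[1] * (t - t_ref)).astype(int)
        inside = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
        img = np.zeros(shape)
        np.add.at(img, (yw[inside], xw[inside]), p[inside])   # accumulate polarities per pixel
        return img

    def variance_focus_loss(events, flow, t_ref, shape):
        """Higher variance (contrast) indicates better event alignment for this motion."""
        img = image_of_warped_events(events, flow, t_ref, shape)
        return np.var(img)

    # Usage: pick the flow candidate that maximizes the focus score.
    # best_flow = max(candidates, key=lambda f: variance_focus_loss(events, f, 0.0, (180, 240)))
    ```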

    Self-Calibrated Visual-Inertial Odometry for Rover Navigation

    Master's thesis (M.S.), Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, February 2019. Advisor: Chan Gook Park.
    This master's thesis presents a direct visual odometry robust to illumination changes and a self-calibrated visual-inertial odometry for rover localization using an IMU and a stereo camera. Most previous vision-based localization algorithms are vulnerable to sudden brightness changes caused by strong sunlight or varying exposure times, which violate the Lambertian surface assumption. Meanwhile, to reduce the error accumulation of visual odometry, an IMU can be employed to fill the gaps between successive images. However, the extrinsic parameters of a visual-inertial system must be computed precisely, since they bridge the visual and inertial coordinate frames both spatially and temporally. This thesis proposes a bucketed illumination model that accounts for partial and global illumination changes within a direct visual odometry framework for rover localization. Furthermore, it presents a self-calibrated visual-inertial odometry in which the time offset and relative pose between the IMU and the stereo camera are estimated using point feature measurements. Specifically, the calibration parameters are augmented into the state of an extended Kalman filter pose estimator. The proposed visual odometry is evaluated on an open-source dataset with images captured in a lunar-like environment. In addition, we design a rover using commercially available sensors, and field testing confirms that the self-calibrated visual-inertial odometry reduces the return-position localization error by 76.4% compared to the visual-inertial odometry without self-calibration.
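    A rough sketch of the state augmentation described above, assuming an error-state layout and a constant (driftless) model for the calibration block; variable names and dimensions are illustrative and not taken from the thesis:

    ```python
    import numpy as np

    # Error-state blocks: IMU pose/velocity/biases (15) + camera-IMU extrinsics
    # as a 6-DoF pose error (6) + camera-IMU time offset (1).
    IMU_DIM, EXTRINSIC_DIM, TIMEOFFSET_DIM = 15, 6, 1
    STATE_DIM = IMU_DIM + EXTRINSIC_DIM + TIMEOFFSET_DIM

    x = np.zeros(STATE_DIM)          # error-state estimate (reset after each update)
    P = np.eye(STATE_DIM) * 1e-2     # covariance, including calibration uncertainty

    def propagate(P, F_imu, Q_imu, dt):
        """Propagate covariance; the calibration block is modeled as constant."""
        F = np.eye(STATE_DIM)
        F[:IMU_DIM, :IMU_DIM] = F_imu            # IMU error-state transition
        Q = np.zeros((STATE_DIM, STATE_DIM))
        Q[:IMU_DIM, :IMU_DIM] = Q_imu            # process noise only on the IMU block
        return F @ P @ F.T + Q * dt

    def update(x, P, H, r, R):
        """Standard EKF update; H also has columns for the extrinsics and time offset,
        so point-feature residuals correct the calibration parameters as well."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))
        x_new = x + K @ r
        P_new = (np.eye(STATE_DIM) - K @ H) @ P
        return x_new, P_new
    ```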

    A Large Scale Inertial Aided Visual Simultaneous Localization And Mapping (SLAM) System For Small Mobile Platforms

    In this dissertation, we present a robust simultaneous localization and mapping scheme that can be deployed on a computationally limited, small unmanned aerial system. This is achieved by developing a keyframe-based algorithm that leverages the multiprocessing capacity of modern low-power mobile processors. The novelty of the algorithm lies in a design that makes it robust to rapid exploration while keeping computation time to a minimum. The time-critical components of the localization and mapping system are computed in parallel across the processor's cores. The algorithm uses a scale- and rotation-invariant, state-of-the-art binary descriptor for landmark description, making it suitable for compact large-scale map representation and robust tracking. The same descriptor is also used for loop closure detection, making the algorithm efficient by eliminating the need for separate descriptors in a Bag-of-Words scheme. Effectiveness is demonstrated by performance evaluation on indoor and large-scale outdoor datasets. We demonstrate the efficiency and robustness of the algorithm through successful six-degree-of-freedom (6-DOF) pose estimation in challenging indoor and outdoor environments. Performance of the algorithm is validated on a quadcopter with onboard computation.
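    A minimal sketch of reusing a single binary descriptor for both frame-to-frame tracking and loop-closure candidate scoring, using ORB as a stand-in for the dissertation's descriptor (the actual descriptor and keyframe policy may differ):

    ```python
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def extract(frame_gray):
        """Keypoints and binary descriptors for one grayscale frame."""
        return orb.detectAndCompute(frame_gray, None)

    def track(desc_prev, desc_curr, max_dist=64):
        """Frame-to-frame landmark association by Hamming-distance matching."""
        matches = matcher.match(desc_prev, desc_curr)
        return [m for m in matches if m.distance < max_dist]

    def loop_closure_score(desc_query, keyframe_descs, max_dist=64):
        """Score stored keyframes by the number of good matches against the query,
        using the same descriptors instead of a separate Bag-of-Words vocabulary."""
        scores = []
        for desc_kf in keyframe_descs:
            good = [m for m in matcher.match(desc_query, desc_kf) if m.distance < max_dist]
            scores.append(len(good))
        return int(np.argmax(scores)), scores
    ```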

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection, or map generation tasks. Yet it was only in recent years that research in aerial robotics became mature enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and present guidance methods, using visual information, to provide it with autonomous functionalities. A key competence in controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods based on on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which is a handicap for vehicles where size, payload, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity, and acceleration) by means of on-board, low-cost, lightweight, and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them according to their impact on overall mission success. In this work, we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee robot integrity during flight, improve platform stability, and increase arm operability. The main contributions of this research are threefold: (1) a localization technique that enables autonomous navigation, specifically designed for aerial platforms with size, payload, and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
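    A minimal sketch of the hierarchical (task-priority) control idea described above, with two velocity-level tasks resolved through null-space projection; the Jacobians, gains, and task errors are illustrative assumptions rather than the thesis's formulation:

    ```python
    import numpy as np

    def damped_pinv(J, damping=1e-3):
        """Damped least-squares pseudo-inverse for robustness near singularities."""
        return J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))

    def hierarchical_velocity(J1, e1, J2, e2, gain1=1.0, gain2=1.0):
        """Primary task (e.g., visual servoing of the end effector) is satisfied first;
        the secondary task (e.g., platform stability posture) acts only in its null space."""
        J1_pinv = damped_pinv(J1)
        q_dot_1 = J1_pinv @ (gain1 * e1)
        N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
        q_dot_2 = damped_pinv(J2 @ N1) @ (gain2 * e2 - J2 @ q_dot_1)
        return q_dot_1 + N1 @ q_dot_2                  # secondary motion cannot disturb task 1
    ```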