
    On-board and Ground Visual Pose Estimation Techniques for UAV Control

    In this paper, two techniques to control UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an on-board camera, while the second is based on the detection and 3D reconstruction of the position of the UAV from an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (precision in the estimation and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.

    Sky-GVINS: a Sky-segmentation Aided GNSS-Visual-Inertial System for Robust Navigation in Urban Canyons

    Integrating Global Navigation Satellite Systems (GNSS) into Simultaneous Localization and Mapping (SLAM) systems is drawing increasing attention as a global and continuous localization solution. Nonetheless, in dense urban environments, GNSS-based SLAM systems suffer from Non-Line-Of-Sight (NLOS) measurements, which can lead to a sharp deterioration in localization results. In this paper, we propose to detect the sky area from an up-looking camera to improve GNSS measurement reliability for more accurate position estimation. We present Sky-GVINS: a sky-aware GNSS-Visual-Inertial system based on a recent work called GVINS. Specifically, we adopt a global threshold method to segment the sky and non-sky regions in the fish-eye sky-pointing image, and then project satellites onto the image using the geometric relationship between the satellites and the camera. After that, we reject satellites in non-sky regions to eliminate NLOS signals. We investigated various segmentation algorithms for sky detection and found that the Otsu algorithm achieved the highest classification rate and computational efficiency, despite its simplicity and ease of implementation. To evaluate the effectiveness of Sky-GVINS, we built a ground robot and conducted extensive real-world experiments on campus. Experimental results show that our method improves localization accuracy in both open areas and dense urban environments compared to the baseline method. Finally, we conduct a detailed analysis and point out possible directions for future research. For detailed information, visit our project website at https://github.com/SJTU-ViSYS/Sky-GVINS
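The two-step filter the abstract describes (global-threshold sky segmentation, then rejection of satellites whose projected pixel falls outside the sky region) can be sketched as follows. This is a minimal illustration, not the Sky-GVINS implementation: the satellite-projection step is assumed to have already happened, so `sat_pixels` (hypothetical name) maps satellite IDs to image coordinates, and Otsu's threshold is computed directly from the grayscale histogram.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's global threshold on a uint8 grayscale image (0..255):
    pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability per threshold
    mu = np.cumsum(prob * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def reject_nlos(sat_pixels, sky_mask):
    """Keep only satellites whose projected pixel (u, v) lands in the sky
    region of the boolean mask; everything else is treated as NLOS."""
    kept = []
    for sat_id, (u, v) in sat_pixels.items():
        if 0 <= v < sky_mask.shape[0] and 0 <= u < sky_mask.shape[1] and sky_mask[v, u]:
            kept.append(sat_id)
    return kept
```

With a bright sky region and a dark foreground, `otsu_threshold` separates the two modes and `reject_nlos` drops any satellite projected into the non-sky area.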

    Comparison of Semantic Segmentation Approaches for Horizon/Sky Line Detection

    Horizon or skyline detection plays a vital role in mountainous visual geo-localization; however, most recently proposed visual geo-localization approaches rely on user-in-the-loop skyline detection methods. Detecting such a segmenting boundary fully autonomously would be a clear step forward for these localization approaches. This paper provides a quantitative comparison of four such methods for autonomous horizon/sky line detection on an extensive data set. Specifically, we compare four recently proposed segmentation methods: one explicitly targeting the problem of horizon detection [Ahmad15], a second focused on visual geo-localization but relying on accurate detection of the skyline [Saurer16], and two proposed for general semantic segmentation, Fully Convolutional Networks (FCN) [Long15] and SegNet [Badrinarayanan15]. Each of the first two methods is trained on a common training set [Baatz12] of about 200 images, while models for the third and fourth methods are fine-tuned for the sky segmentation problem through transfer learning using the same data set. Each method is tested on an extensive test set (about 3K images) covering various challenging geographical, weather, illumination and seasonal conditions. We report average accuracy and average absolute pixel error for each of the presented formulations. Comment: Proceedings of the International Joint Conference on Neural Networks (IJCNN) (oral presentation), IEEE Computational Intelligence Society, 201
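The two evaluation metrics named above can be stated concretely. A minimal sketch under assumed conventions (not the paper's evaluation code): the skyline is represented as one row index per image column, so "average absolute pixel error" is the mean vertical distance between predicted and ground-truth skyline positions, and "average accuracy" is the fraction of correctly classified sky/non-sky pixels.

```python
import numpy as np

def average_absolute_pixel_error(pred_rows, gt_rows):
    """Mean absolute vertical (row) distance between the predicted and
    ground-truth skyline, evaluated per image column."""
    pred = np.asarray(pred_rows, dtype=float)
    gt = np.asarray(gt_rows, dtype=float)
    return float(np.mean(np.abs(pred - gt)))

def pixelwise_accuracy(pred_mask, gt_mask):
    """Fraction of pixels labelled correctly as sky / non-sky."""
    return float(np.mean(pred_mask == gt_mask))
```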

    Horizon Line Detection: Edge-less and Edge-based Methods

    Planetary rover localization is a challenging problem due to the unavailability of conventional localization cues, e.g. GPS, architectural landmarks, etc. The horizon line (the boundary segmenting sky and non-sky regions) finds applications in smooth navigation of UAVs/MAVs, visual geo-localization of mountainous images, port security and ship detection, and has proven to be a promising visual cue for outdoor robot/vehicle localization. Prominent methods for horizon line detection are based on faulty assumptions and rely on mere edge detection, which is inherently a non-stable approach due to parameter choices. We investigate the use of supervised machine learning for horizon line detection. Specifically, we propose two different machine learning based methods; one relying on edge detection and classification while the other is based solely on classification. Given a query image, an edge or classification map is first built and converted into a multi-stage graph problem. Dynamic programming is then used to find a shortest path which conforms to the detected horizon line in the given image. For the first method we provide a detailed quantitative analysis of various texture features (SIFT, LBP, HOG and their combinations) used to train a Support Vector Machine (SVM) classifier and different choices (binary edges, classified edge score, gradient score and their combinations) for the nodal costs in Dynamic Programming. For the second method we investigate the use of dense classification maps for horizon line detection. We use Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs) as our classifier choices and use raw intensity patches as features. Dynamic programming is then applied on the resultant dense classifier score image to find the horizon line. Both proposed formulations are compared with a prominent edge-based method on three different data sets: City (Reno) Skyline, Basalt Hills and Web data sets, and outperform the previous method by a high margin.
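The multi-stage-graph step the abstract describes admits a compact dynamic-programming sketch. This is an illustrative reconstruction, not the thesis code: `cost` stands in for the edge or classifier score map (low where a pixel looks horizon-like), each image column is one stage, and a smoothness constraint limits how far the horizon row may jump between adjacent columns (the `max_step` parameter is an assumption).

```python
import numpy as np

def detect_horizon_dp(cost, max_step=2):
    """Left-to-right shortest path through a per-pixel cost image.
    Returns one horizon row index per column; consecutive columns may
    differ by at most `max_step` rows."""
    H, W = cost.shape
    acc = np.full((H, W), np.inf)      # accumulated path cost
    back = np.zeros((H, W), dtype=int) # backpointers to previous column
    acc[:, 0] = cost[:, 0]
    for c in range(1, W):
        for r in range(H):
            lo, hi = max(0, r - max_step), min(H, r + max_step + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # backtrack from the cheapest state in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(W - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]
```

On a cost map with a single cheap row, the recovered path simply follows that row; with a real classifier score image the same recursion traces the most horizon-like contiguous boundary.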

    Expansion of Skydog Aircraft Model Control System by Remote and Autonomous Control by Android Application

    The thesis aims to design and implement an Android application able to control the autopilot of the Skydog aircraft model over a wireless telemetry link. The application receives data gathered from various sensors installed in the aircraft model, processes them, and sends corresponding instructions back to the autopilot. When a collision with terrain or an obstacle is imminent, the application sends the autopilot instructions to avoid it. The RRT algorithm is used to find a collision-free flight trajectory. The database of known obstacles and the digital terrain model are provided to the application in XML and GeoTIFF formats, respectively.
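The RRT (Rapidly-exploring Random Tree) planner mentioned above can be sketched in 2-D. This is a generic textbook RRT, not the Skydog application's implementation: `is_free` is an assumed collision predicate backed, in the real system, by the obstacle database and terrain model, and the goal-bias probability, step size and tolerance are illustrative parameters.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, max_iters=5000, goal_tol=0.5):
    """Grow a rapidly-exploring random tree from `start` toward `goal`.
    `is_free(p)` returns True when point p collides with nothing.
    Returns the path as a list of points, or None on failure."""
    nodes = [start]
    parent = {0: None}
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(max_iters):
        # occasionally sample the goal itself to bias growth toward it
        sample = goal if random.random() < 0.1 else (
            random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        # extend the nearest existing node one step toward the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # reconstruct the path back to the root
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```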

    Machine Learning based Mountainous Skyline Detection and Visual Geo-Localization

    With the ubiquitous availability of geo-tagged imagery and increased computational power, geo-localization has captured a lot of attention from researchers in the computer vision and image retrieval communities. Significant progress has been made in urban environments with stable man-made structures and geo-referenced street imagery of frequently visited tourist attractions. However, geo-localization of natural/mountain scenes is more challenging due to changing vegetation, lighting, seasonal changes and the lack of geo-tagged imagery. Conventional approaches for mountain/natural geo-localization mostly rely on mountain peak and valley information, visible skylines, ridges, etc. The skyline (the boundary segmenting sky and non-sky regions) has been established as a robust natural feature for mountainous images, which can be matched with synthetic skylines generated from publicly available terrain maps such as Digital Elevation Models (DEMs). The skyline or visible horizon finds further applications in various other contexts, e.g. smooth navigation of Unmanned Aerial Vehicles (UAVs)/Micro Aerial Vehicles (MAVs), port security, ship detection and outdoor robot/vehicle localization.

    Prominent methods for skyline/horizon detection are based on non-realistic assumptions and rely on mere edge detection and/or linear line fitting using the Hough transform. We investigate the use of supervised machine learning for skyline detection. Specifically, we propose two novel machine learning based methods, one relying on edge detection and classification while the other is based solely on classification. Given a query image, an edge or classification map is first built and converted into a multi-stage graph problem. Dynamic programming is then used to find a shortest path which conforms to the detected skyline in the given image.

    For the first method, we provide a detailed quantitative analysis of various texture features (Scale Invariant Feature Transform (SIFT), Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG) and their combinations) used to train a Support Vector Machine (SVM) classifier and different choices (binary edges, classified edge score, gradient score and their combinations) for the nodal costs in Dynamic Programming (DP). For the second method, we investigate the use of dense classification maps for horizon line detection. We use Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs) as our classifier choices and use normalized intensity patches as features. Both proposed formulations are compared with a prominent edge-based method on two different data sets.

    We propose a fusion strategy which boosts the performance of the edge-less approach using edge information. The fusion approach, which has been tested on an additional challenging data set, outperforms each of the two methods alone. Further, we demonstrate the capability of our formulations to detect the absence of a horizon boundary and to detect partial horizon lines. This can be of great value in applications where a confidence measure of the detection is necessary, e.g. localization of planetary rovers/robots. In an extended work, we compare our edge-less skyline detection approach against deep learning networks recently proposed for semantic segmentation on an additional data set. Specifically, we compare our proposed fusion formulation with a Fully Convolutional Network (FCN), SegNet and another classical supervised learning based method.

    We further propose a visual geo-localization pipeline based on evolutionary computing, where Particle Swarm Optimization (PSO) is adopted to find/refine an orientation estimate by minimizing a cost function based on the horizon-ness probability of pixels. The dense classification score image resulting from our edge-less/fusion approach is used as a fitness measure to guide the particles toward the best solution, where the horizon rendered from the DEM aligns with the actual horizon in the image without requiring its explicit detection. The effectiveness of the proposed geo-localization pipeline is evaluated on a decent-sized data set.
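The PSO-based orientation refinement can be sketched as a generic swarm optimizer over a one-dimensional search interval (e.g. the camera heading angle). This is a standard PSO loop, not the thesis pipeline: the fitness function is abstracted as `cost`, which in the real system would score how well the DEM-rendered horizon aligns with the dense horizon-ness map; the inertia and acceleration coefficients are conventional default values.

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle Swarm Optimization over a 1-D interval [lo, hi].
    Returns the best position found for the scalar cost function."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)        # particle positions
    v = np.zeros(n_particles)                   # particle velocities
    pbest = x.copy()                            # personal bests
    pbest_val = np.array([cost(xi) for xi in x])
    g = pbest[np.argmin(pbest_val)]             # global best
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(xi) for xi in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g
```

On a smooth cost surface the swarm collapses onto the minimum within a few dozen iterations; in the geo-localization setting the returned value would be the refined orientation estimate.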

    Global Pose Estimation from Aerial Images: Registration with Elevation Models


    Persistent vision-based search and track using multiple UAVs

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. Includes bibliographical references (p. 93-98). Unmanned aerial vehicles (UAVs) have attracted interest for their ability to carry out missions such as border patrol, urban traffic monitoring, persistent surveillance, and search and rescue operations. Most of these missions require the ability to detect and track objects of interest on or near the ground. In addition, most of the missions are inherently long-duration, requiring multiple UAVs to cooperate over time periods longer than the endurance of a single vehicle. This thesis presents a framework to enable such missions to be carried out autonomously and robustly. First, a technique for vision-based target detection and bearing determination that utilizes a video camera onboard each UAV is presented. The technique is designed to detect the presence of targets of interest in the camera video stream and determine the bearing from the UAV to the target even when the video data is noisy. Next, a cooperative, bearings-only target estimation algorithm is presented. The algorithm is shown to provide better estimates of a target's position and velocity in three dimensions than could be achieved by a single vehicle, while being computationally efficient and naturally distributable among multiple UAVs. Next, a task assignment algorithm that incorporates closed-loop feedback on the performance of individual UAVs and sensor suites is developed, enabling underperforming UAVs to be dynamically swapped out by the tasking system. Finally, flight results from several persistent, multiple-target search and track experiments conducted on MIT's Real-time indoor Autonomous Vehicle test ENvironment (RAVEN) are presented. By Brett Bethke. S.M.
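The core of cooperative bearings-only estimation (why multiple UAVs beat one) is that each bearing measurement only constrains the target to a line, so several lines from different vantage points are needed to pin down a position. A minimal 2-D least-squares sketch of that idea, not the thesis algorithm (which estimates position and velocity in 3-D): each UAV position and measured azimuth define a line, and we solve for the point minimizing the summed squared perpendicular distance to all lines.

```python
import math
import numpy as np

def triangulate_bearings(positions, bearings):
    """Least-squares 2-D target position from UAV positions and the
    azimuth bearing (radians, measured from the +x axis) each measures.
    A point q on the bearing line through p satisfies n . q = n . p,
    where n = (sin t, -cos t) is normal to the bearing direction."""
    A, b = [], []
    for (px, py), theta in zip(positions, bearings):
        n = np.array([math.sin(theta), -math.cos(theta)])
        A.append(n)
        b.append(n @ np.array([px, py]))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```

With exact bearings from two vantage points the lines intersect and the solution is exact; with noisy bearings from many UAVs, the same least-squares solve averages the geometry, which is the single-snapshot intuition behind the cooperative estimator.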