
    Machine Learning based Mountainous Skyline Detection and Visual Geo-Localization

    With the ubiquitous availability of geo-tagged imagery and increased computational power, geo-localization has captured considerable attention from researchers in the computer vision and image retrieval communities. Significant progress has been made in urban environments with stable man-made structures and geo-referenced street imagery of frequently visited tourist attractions. However, geo-localization of natural/mountain scenes is more challenging due to changing vegetation, lighting and seasonal conditions and the scarcity of geo-tagged imagery. Conventional approaches to mountain/natural geo-localization mostly rely on mountain peak and valley information, visible skylines, ridges, etc. The skyline (the boundary separating sky from non-sky regions) has been established as a robust natural feature for mountainous images, which can be matched against synthetic skylines generated from publicly available terrain maps such as Digital Elevation Models (DEMs). The skyline, or visible horizon, finds further applications in various other contexts, e.g. smooth navigation of Unmanned Aerial Vehicles (UAVs)/Micro Aerial Vehicles (MAVs), port security, ship detection and outdoor robot/vehicle localization.

Prominent methods for skyline/horizon detection rest on unrealistic assumptions and rely on mere edge detection and/or linear line fitting using the Hough transform. We investigate the use of supervised machine learning for skyline detection. Specifically, we propose two novel machine learning based methods: one relying on edge detection and classification, the other based solely on classification. Given a query image, an edge or classification map is first built and converted into a multi-stage graph problem. Dynamic programming is then used to find a shortest path which conforms to the detected skyline in the given image.
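The multi-stage graph formulation above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: each image column is a stage, each row a node, the nodal cost is a generic `cost_map` (e.g. one minus a classification score), and an assumed linear vertical-jump penalty enforces smoothness while dynamic programming extracts the shortest left-to-right path.

```python
import numpy as np

def detect_skyline(cost_map, smoothness=2.0):
    """Find a left-to-right path (one row per column) minimizing the sum
    of nodal costs plus a vertical-jump penalty, via dynamic programming.

    cost_map : 2D array, low values where the skyline is likely.
    smoothness : assumed per-pixel penalty for vertical jumps.
    """
    rows, cols = cost_map.shape
    acc = cost_map.astype(float)              # accumulated cost per node
    back = np.zeros((rows, cols), dtype=int)  # backpointers to previous column

    row_idx = np.arange(rows)
    # jump[r, p]: penalty for moving from row p (previous col) to row r
    jump = np.abs(row_idx[:, None] - row_idx[None, :]) * smoothness
    for c in range(1, cols):
        total = acc[:, c - 1][None, :] + jump  # candidate costs from all rows
        back[:, c] = np.argmin(total, axis=1)
        acc[:, c] += np.min(total, axis=1)

    # Backtrack the shortest path from the cheapest node in the last column.
    path = np.zeros(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

A classification-based nodal cost naturally plugs in here: the lower a pixel's cost, the more "skyline-like" the classifier found it.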
For the first method, we provide a detailed quantitative analysis of various texture features (Scale Invariant Feature Transform (SIFT), Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG) and their combinations) used to train a Support Vector Machine (SVM) classifier, and of different choices (binary edges, classified edge score, gradient score and their combinations) for the nodal costs in Dynamic Programming (DP). For the second method, we investigate the use of dense classification maps for horizon line detection. We use Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs) as our classifiers, with normalized intensity patches as features. Both proposed formulations are compared with a prominent edge-based method on two different data sets.

We propose a fusion strategy which boosts the performance of the edge-less approach using edge information. The fusion approach, tested on an additional challenging data set, outperforms each of the two methods alone. Further, we demonstrate the capability of our formulations to detect the absence of a horizon boundary and to detect partial horizon lines. This can be of great value in applications where a confidence measure of the detection is necessary, e.g. localization of planetary rovers/robots. In an extended work, we compare our edge-less skyline detection approach against deep learning networks recently proposed for semantic segmentation on an additional data set. Specifically, we compare our proposed fusion formulation with the Fully Convolutional Network (FCN), SegNet and another classical supervised learning based method.

We further propose a visual geo-localization pipeline based on evolutionary computing, in which Particle Swarm Optimization (PSO) is adopted to find/refine an orientation estimate by minimizing a cost function based on the horizon-ness probability of pixels.
The dense classification score image resulting from our edge-less/fusion approach is used as a fitness measure to guide the particles toward the best solution, in which the horizon rendered from the DEM aligns with the actual horizon in the image without requiring its explicit detection. The effectiveness of the proposed geo-localization pipeline is evaluated on a reasonably sized data set.
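The PSO-based orientation refinement can be illustrated with a generic one-dimensional particle swarm minimizer. The swarm size, inertia and acceleration coefficients below are common textbook defaults, not values from the dissertation; in the pipeline described above, `fitness` would score the misalignment between the DEM-rendered horizon at a candidate heading and the dense horizon-ness score image.

```python
import numpy as np

def pso_refine(fitness, lo, hi, n_particles=30, iters=100,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimizer (minimization).

    fitness : callable mapping a scalar (e.g. a heading angle) to a cost.
    lo, hi  : search bounds for the orientation estimate.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)   # particle positions
    v = np.zeros(n_particles)              # particle velocities
    pbest = x.copy()                       # personal best positions
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)]          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g
```

With a smooth cost surface, such as one built from dense classification scores, the swarm converges toward the orientation whose rendered horizon best overlaps the high-probability pixels.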

    HETEROGENEOUS MULTI-SENSOR FUSION FOR 2D AND 3D POSE ESTIMATION

    Sensor fusion is a process in which data from different sensors are combined to obtain an output that cannot be acquired from any individual sensor. This dissertation first considers a 2D, image-level, real-world problem from the rail industry and proposes a novel solution using sensor fusion, then proceeds to the more complicated 3D problem of multi-sensor fusion for UAV pose estimation. One of the most important safety-related tasks in the rail industry is the early detection of defective rolling stock components. Railway wheels and wheel bearings are two components prone to damage due to their interactions with the brakes and railway track, which makes them a high priority when the rail industry investigates improvements to current detection processes. The main contribution of this dissertation in this area is the development of a computer vision method for automatically detecting defective wheels that can potentially replace the current manual inspection procedure. The algorithm fuses images taken by wayside thermal and vision cameras and uses the outcome for wheel defect detection. As a byproduct, the process also includes a method for detecting hot bearings from the same images. We evaluate our algorithm using simulated and real image data from UPRR in North America, and this dissertation shows that sensor fusion techniques improve the accuracy of malfunction detection. After the 2D application, the more complicated 3D application is addressed. Precise, robust and consistent localization is an important subject in many areas such as vision-based control, path planning, and SLAM. Each of the different sensors employed to estimate pose has its own strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages.
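The abstract does not specify how the thermal and vision images are combined, so the following is only a generic sketch of pixel-level fusion of co-registered images by normalized weighted averaging; the function name and weight are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def fuse_thermal_visible(thermal, visible, w_thermal=0.5):
    """Illustrative pixel-level fusion of two co-registered images.

    Each modality is rescaled to [0, 1] and then blended; w_thermal is
    an assumed blending weight, not a value from the dissertation.
    """
    def normalize(img):
        img = img.astype(float)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)

    return w_thermal * normalize(thermal) + (1 - w_thermal) * normalize(visible)
```

A downstream defect detector would then threshold or classify the fused image rather than either modality alone.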
In this dissertation, a new approach to 3D pose estimation for a UAV in an unknown GPS-denied environment is presented. The proposed algorithm fuses data from an IMU, a camera, and a 2D LiDAR to achieve accurate localization. Among the employed sensors, the LiDAR has received little attention in the past, mostly because a 2D LiDAR can only provide pose estimates in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this research that enables us to employ a 2D LiDAR to improve the full 3D pose estimation accuracy acquired from an IMU and a camera. To the best of our knowledge, a 2D LiDAR has never before been employed for 3D localization without a prior map, and it is shown in this dissertation that our method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
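The abstract does not give the fusion equations, but the standard Kalman measurement update underlying most multi-sensor estimators illustrates why a 2D LiDAR contributes only a partial constraint: its measurement model `H` maps the full state onto the scanning plane alone, so `H` has fewer rows than the state has entries. The code below is that textbook update, not the dissertation's specific filter.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse state estimate x
    (with covariance P) with a measurement z (model H, noise R).

    A 2D LiDAR supplies such an update only along its scanning plane,
    which is why H is a wide, short matrix for that sensor.
    """
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a full pipeline, IMU propagation would provide the prediction step, while camera and LiDAR observations would each call this update with their own `H` and `R`.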

    UAV flight controller using UDOO

    Nowadays, the unmanned aerial vehicle sector is growing quickly, as many applications can potentially be carried out by UAVs. Although some applications are better suited to fixed-wing UAVs, in most cases multirotors are preferred because they are capable of vertical take-off and landing (VTOL), which makes them suitable for stationary (hovering) flight. Multirotors have many advantages; however, from the point of view of control they are under-actuated systems, so they need a control system to stabilize themselves.
In this project, a flight control system based on PID regulation is developed for a remotely controlled quadcopter. The proposed system is implemented on the UDOO board, which contains an ARM A9 microcontroller and an Arduino Due compatible section. Simulation tests were also performed using a quadcopter model implemented in Matlab. The results obtained from experiments on the test bench show that the stabilization control system is able to track the setpoint with a small error. In conclusion, the designed system meets the objectives of creating a low-cost UAV with an adequate control system that can later serve as the basis for an autonomous quadcopter.
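The PID regulation scheme described above can be sketched as a simple discrete-time controller. The class below is a generic textbook PID, in Python rather than the project's Arduino/ARM code, and the gains in the usage note are illustrative assumptions.

```python
class PID:
    """Discrete PID regulator of the kind used for attitude stabilization.

    Output = kp * error + ki * integral(error) + kd * d(error)/dt,
    evaluated at a fixed sample period dt.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # accumulated error
        self.prev_err = 0.0   # error at the previous sample

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

On a quadcopter, one such regulator per axis (roll, pitch, yaw) would map the attitude error to motor-speed corrections; e.g. `PID(kp=2.0, ki=0.0, kd=0.0, dt=0.01)` is a purely proportional configuration chosen here only for illustration.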

    A Data Fusion System for Attitude Estimation of Low-cost Miniature UAVs

    Miniature unmanned aerial vehicles (UAVs) have attracted wide interest from researchers and developers because of their broad applications. To make a miniature UAV platform popular for civilian applications, one critical concern is overall cost. However, lower cost generally means lower navigational accuracy and insufficient flight control performance, mainly due to the low-grade avionics on the UAV. This paper introduces a data fusion system based on several low-priced sensors to improve the attitude estimation of a low-cost miniature fixed-wing UAV platform. The characteristics of each sensor and the calculation of attitude angles are carefully studied. The algorithms and implementation of the fusion system are described and explained in detail. Ground test results with three sensor fusion configurations are compared and analyzed, and flight test comparison results with two sensor fusion configurations are also presented.
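A common low-cost approach to the attitude fusion problem described above is a complementary filter, which trusts integrated gyro rates at high frequency and the accelerometer-derived angle at low frequency. The paper's actual fusion algorithm is not given in the abstract; the one-step filter below, with an assumed blending constant `alpha`, only sketches the generic idea.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a complementary filter for a single attitude angle.

    angle       : previous angle estimate (rad)
    gyro_rate   : angular rate from the gyroscope (rad/s)
    accel_angle : angle inferred from the accelerometer (rad)
    dt          : sample period (s)
    alpha       : assumed blending constant favoring the gyro path
    """
    # High-frequency path: integrate the gyro; low-frequency path:
    # lean on the accelerometer to cancel gyro drift.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Called once per sample in a loop, the filter keeps the responsiveness of the gyro while the small accelerometer weight slowly corrects the drift that pure integration would accumulate.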