
    Open source R for applying machine learning to RPAS remote sensing images

    The increase in the number of remote sensing platforms, ranging from satellites to close-range Remotely Piloted Aircraft Systems (RPAS), is leading to a growing demand for new image processing and classification tools. This article presents a comparison of the Random Forest (RF) and Support Vector Machine (SVM) machine-learning algorithms for extracting land-use classes from an RPAS-derived orthomosaic using open source R packages. The camera used in this work captures the reflectance of the Red, Blue, Green and Near Infrared channels of a target; the full dataset is therefore a 4-channel raster image. The classification performance of the two methods is tested at varying sizes of training sets. The SVM and RF are evaluated using the Kappa index, classification accuracy and classification error as accuracy metrics. The training sets are randomly obtained as subsets of 2 to 20% of the total number of raster cells, with stratified sampling according to the land-use classes. Ten runs are done for each training set to calculate the variance in the results. The control dataset consists of an independent classification obtained by photointerpretation. The validation is carried out (i) using K-fold cross validation, (ii) using the pixels from the validation test set, and (iii) using the pixels from the full test set. Validation with K-fold cross validation and with the validation dataset shows that SVM gives better results, but RF proves to perform better when the training size is larger. Classification error and classification accuracy follow the trend of the Kappa index.
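
    To make the workflow concrete, the sketch below shows one way the comparison described above could be set up: stratified subsampling of labelled raster cells, repeated training of an RF and an SVM classifier, and scoring of each run with the Kappa index, accuracy and error. The article itself uses open source R packages; this illustration uses Python with scikit-learn instead, and all parameter values are placeholders rather than the authors' settings.

```python
# Minimal sketch (not the authors' code): comparing Random Forest and SVM on
# labelled raster pixels, with stratified subsampling and kappa/accuracy scoring.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, accuracy_score
from sklearn.model_selection import train_test_split

def compare_classifiers(pixels, labels, train_fraction=0.05, runs=10, seed=0):
    """pixels: (n, 4) array of R, G, B, NIR reflectance; labels: (n,) land-use classes."""
    results = {"rf": [], "svm": []}
    rng = np.random.RandomState(seed)
    for _ in range(runs):
        # Stratified subsample: train_fraction of all cells, drawn per class
        X_train, X_test, y_train, y_test = train_test_split(
            pixels, labels, train_size=train_fraction,
            stratify=labels, random_state=rng.randint(1_000_000))
        for name, clf in [("rf", RandomForestClassifier(n_estimators=500)),
                          ("svm", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
            clf.fit(X_train, y_train)
            pred = clf.predict(X_test)
            acc = accuracy_score(y_test, pred)
            results[name].append({"kappa": cohen_kappa_score(y_test, pred),
                                  "accuracy": acc,
                                  "error": 1.0 - acc})
    return results
```

    Repeating this over training fractions from 2% to 20% and averaging the ten runs per fraction reproduces the shape of the experiment described in the abstract.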

    Survey of Bayesian Networks Applications to Intelligent Autonomous Vehicles

    This article reviews the applications of Bayesian Networks to Intelligent Autonomous Vehicles (IAV) from the decision making point of view, which represents the final step for fully Autonomous Vehicles (currently under discussion). Until now, when it comes to making high level decisions for Autonomous Vehicles (AVs), humans have the last word. Based on the works cited in this article and the analysis done here, the modules of a general decision making framework and its variables are inferred. Many efforts have been made in the laboratory showing Bayesian Networks to be a promising computational model for decision making. Further research should go in the direction of testing Bayesian Network models in real situations. In addition to the applications, Bayesian Network fundamentals are introduced as elements to consider when developing IAVs with the potential of making high level judgement calls. Comment: 34 pages, 2 figures, 3 tables
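
    The survey does not prescribe a particular model, but a toy example can illustrate the kind of inference a Bayesian Network contributes to a decision making module: compute a posterior belief from sensor evidence and threshold it to select a high level action. The variables, probabilities and threshold below are invented purely for illustration.

```python
# Toy sketch only: a two-node Bayesian network for a hypothetical AV decision
# ("should the vehicle yield?"). All numbers are invented.
p_pedestrian = 0.05                              # prior P(pedestrian ahead)
p_alarm_given_ped = {True: 0.95, False: 0.10}    # P(sensor alarm | pedestrian present / absent)

def posterior_pedestrian(alarm: bool) -> float:
    """P(pedestrian | sensor reading) by Bayes' rule."""
    like_ped = p_alarm_given_ped[True] if alarm else 1 - p_alarm_given_ped[True]
    like_none = p_alarm_given_ped[False] if alarm else 1 - p_alarm_given_ped[False]
    num = like_ped * p_pedestrian
    return num / (num + like_none * (1 - p_pedestrian))

def decide(alarm: bool, yield_threshold: float = 0.2) -> str:
    """High level decision: yield if the posterior risk exceeds a threshold."""
    return "yield" if posterior_pedestrian(alarm) > yield_threshold else "proceed"

print(decide(alarm=True))   # -> "yield"   (posterior ~ 0.33)
print(decide(alarm=False))  # -> "proceed" (posterior ~ 0.003)
```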

    A Vision-Based Automatic Safe Landing-Site Detection System

    An automatic safe landing-site detection system is proposed for aircraft emergency landing, based on visible information acquired by aircraft-mounted cameras. Emergency landing is an unplanned event in response to emergency situations. If, as is unfortunately usually the case, there is no airstrip or airfield that can be reached by the un-powered aircraft, a crash landing or ditching has to be carried out. Identifying a safe landing-site is critical to the survival of passengers and crew. Conventionally, the pilot chooses the landing-site visually by looking at the terrain through the cockpit. The success of this vital decision greatly depends on the external environmental factors that can impair human vision, and on the pilot's flight experience, which can vary significantly among pilots. Therefore, we propose a robust, reliable and efficient detection system that is expected to alleviate the negative impact of these factors. In this study, we focus on the detection mechanism of the proposed system and assume that image enhancement for increased visibility and image stitching for a larger field-of-view have already been performed on the terrain images acquired by aircraft-mounted cameras. Specifically, we first propose a hierarchical elastic horizon detection algorithm to identify the ground in the image. The terrain image is then divided into non-overlapping blocks which are clustered according to a roughness measure. Adjacent smooth blocks are merged to form potential landing-sites whose dimensions are measured with principal component analysis and geometric transformations. If the dimensions of a candidate region exceed the minimum requirement for safe landing, the potential landing-site is considered a safe candidate and highlighted on the human machine interface. In the end, the pilot makes the final decision by confirming one of the candidates, also taking into account other factors such as wind speed and wind direction.
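
    As an illustration of the block-based part of this pipeline, the sketch below scores fixed-size image blocks with a simple roughness measure, groups adjacent smooth blocks into candidate regions, and estimates each region's extent with principal component analysis. It is not the authors' implementation: the horizon detection step is omitted, a plain intensity standard deviation stands in for their roughness measure, and the block size and thresholds are placeholder values.

```python
# Illustrative sketch only: block roughness scoring, merging of adjacent smooth
# blocks, and PCA-based extent estimation for candidate landing regions.
import numpy as np
from scipy import ndimage

def candidate_landing_sites(terrain, block=32, rough_thresh=10.0, min_extent_px=200):
    """terrain: 2-D grey-scale image as a numpy array."""
    h, w = terrain.shape
    nby, nbx = h // block, w // block
    smooth = np.zeros((nby, nbx), dtype=bool)
    for by in range(nby):
        for bx in range(nbx):
            patch = terrain[by*block:(by+1)*block, bx*block:(bx+1)*block]
            smooth[by, bx] = patch.std() < rough_thresh   # simple roughness proxy
    regions, n = ndimage.label(smooth)                    # merge adjacent smooth blocks
    candidates = []
    for rid in range(1, n + 1):
        ys, xs = np.nonzero(regions == rid)
        coords = np.column_stack([ys, xs]).astype(float) * block + block / 2
        coords -= coords.mean(axis=0)
        # Principal axes of the block centres approximate the region's length and width
        cov = np.cov(coords.T) if len(coords) > 1 else np.zeros((2, 2))
        eigvals = np.linalg.eigvalsh(cov)
        length, width = 4 * np.sqrt(np.maximum(eigvals[::-1], 0))
        if width >= min_extent_px:
            candidates.append({"region_id": rid, "length_px": length, "width_px": width})
    return candidates
```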

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning

    This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, also commonly known as a drone) into a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS consists of exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as the landing platform to alleviate the need for a UAV to be airborne for long periods of time, whilst the latter can increase the overall environmental awareness thanks to the possibility of covering large portions of the surrounding environment with one or more cameras mounted on it. There are numerous potential applications in which this system can be used, such as deployment in search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few. The theory developed is of a general nature. The landing manoeuvre is accomplished mainly by identifying, through artificial vision techniques, a fiducial marker placed on a flat surface serving as a landing platform. The raison d'etre for the thesis was to propose a new solution for autonomous landing that relies solely on onboard sensors and requires minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the in-flight drone. In the situation in which tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classic control theory used in this approach suggested the need for a new solution that exploits the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks. The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control on the Atari video-game suite represented a fascinating yet challenging new way to see and address the landing problem. Therefore, novel architectures were designed for approximating the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high level of accuracy and robustness. Both approaches have been implemented on a simulated test-bed based on the Gazebo simulator and the model of the Parrot AR-Drone. The solution based on DRL was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these innovative methods are feasible and practicable, not only in an outdoor marine scenario but also in indoor ones.
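
    For readers unfamiliar with the DRL approach mentioned above, the sketch below shows the general shape of a convolutional action-value network that maps stacked low-resolution grey-scale frames to Q-values over discrete navigation actions, together with epsilon-greedy action selection. It is not the thesis architecture; the frame size, stack depth and action set are assumptions chosen only for illustration.

```python
# Minimal sketch, not the thesis architecture: a convolutional Q-network over
# stacked grey-scale frames plus epsilon-greedy selection of discrete actions.
import random
import torch
import torch.nn as nn

ACTIONS = ["forward", "backward", "left", "right", "descend", "land"]  # hypothetical action set

class QNetwork(nn.Module):
    def __init__(self, n_actions=len(ACTIONS), frames=4):
        super().__init__()
        self.conv = nn.Sequential(                 # input: (batch, frames, 84, 84)
            nn.Conv2d(frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),             # one Q-value per navigation action
        )

    def forward(self, x):
        return self.head(self.conv(x))

def select_action(qnet, frames, epsilon=0.1):
    """Epsilon-greedy policy over the discrete navigation actions."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(frames.unsqueeze(0)).argmax(dim=1).item())
```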