
    Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning

    This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, commonly known as a drone) into a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS lies in exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as a landing platform, alleviating the need for a UAV to remain airborne for long periods of time, whilst the UAV can increase overall environmental awareness by covering large portions of the surrounding environment with one or more onboard cameras. This system has numerous potential applications, such as search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few. The theory developed is of a general nature. The landing manoeuvre is accomplished mainly by identifying, through artificial vision techniques, a fiducial marker placed on a flat surface serving as the landing platform. The raison d'etre of the thesis was to propose a new solution for autonomous landing that relies solely on onboard sensors and requires minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the in-flight drone. When tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classic control theory used in this approach suggested the need for a new solution that leveraged the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks. 
The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control, notably in playing the Atari video-game suite, represented a fascinating yet challenging new way to view and address the landing problem. Therefore, novel architectures were designed to approximate the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high level of accuracy and robustness. Both approaches were implemented on a simulated test-bed based on the Gazebo simulator and a model of the Parrot AR-Drone. The DRL-based solution was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these innovative methods are feasible and practicable, not only in outdoor marine scenarios but also in indoor ones
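
The Q-learning update underlying this kind of DRL landing controller can be illustrated with a minimal tabular sketch. Note this is not the thesis's system: the thesis approximates Q(s, a) with a deep network over grey-scale frames, whereas the discretised states, action set, gains, and rewards below are purely illustrative assumptions.

```python
import numpy as np

# Illustrative high-level action set (hypothetical, not the thesis's).
ACTIONS = ["left", "right", "descend", "land"]

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q

# Toy state space: (horizontal offset, altitude) cells; reward only on landing
# while centred at low altitude.
Q = {s: np.zeros(len(ACTIONS)) for s in [(0, 1), (0, 0)]}
Q = q_learning_step(Q, (0, 1), 2, 0.0, (0, 0))  # descend, no reward yet
Q = q_learning_step(Q, (0, 0), 3, 1.0, (0, 0))  # land, terminal reward
```

A deep Q-network replaces the dictionary `Q` with a function approximator so that raw low-resolution images, rather than hand-discretised cells, can serve as states.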

    Design and integration of vision based sensors for unmanned aerial vehicles navigation and guidance

    In this paper we present a novel Navigation and Guidance System (NGS) for Unmanned Aerial Vehicles (UAVs) based on Vision Based Navigation (VBN) and other avionics sensors. The main objective of our research is to design a low-cost and low-weight/volume NGS capable of providing the required level of performance in all flight phases of modern small- to medium-size UAVs, with a special focus on automated precision approach and landing, where VBN techniques can be fully exploited in a multi-sensor integrated architecture. Various existing techniques for VBN are compared and the Appearance-based Navigation (ABN) approach is selected for implementation

    Automatic Fire Detection Using Computer Vision Techniques for UAV-based Forest Fire Surveillance

    Due to their rapid response capability, maneuverability, extended operational range, and improved personnel safety, unmanned aerial vehicles (UAVs) with vision-based systems have great potential for forest fire surveillance and detection. Over the last decade, there has been an increasingly strong demand for UAV-based forest fire detection systems, as they avoid many drawbacks of detection systems based on satellites, manned aerial vehicles, and ground equipment. Despite this, existing UAV-based forest fire detection systems still face numerous practical issues in operational conditions. In particular, successful forest fire detection remains difficult, given the highly complex and unstructured forest environment, smoke blocking the fire, the motion of cameras mounted on UAVs, and visual analogues of flame characteristics. These adverse effects can cause either false alarms or missed detections. To successfully execute missions, meet the corresponding performance criteria, and overcome these growing challenges, investigations into how to reduce false alarm rates, increase the probability of successful detection, and enhance adaptability to various circumstances are strongly needed to improve the reliability and accuracy of forest fire detection systems. In line with these requirements, this thesis concentrates on the development of reliable and accurate forest fire detection algorithms applicable to UAVs. 
These algorithms provide a number of contributions: (1) a two-layered forest fire detection method is designed that considers both the color and motion features of fire; it is expected to greatly improve forest fire detection performance while significantly reducing the background motion caused by the movement of the UAV; (2) a forest fire detection scheme is devised that combines visual and infrared images to increase the accuracy and reliability of forest fire alarms; and (3) a learning-based fire detection approach is developed for distinguishing smoke (widely considered an early signal of fire) from its visual analogues, achieving early-stage fire detection
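
The color layer of contribution (1) can be illustrated with a minimal sketch: flag pixels whose RGB values follow a flame-like pattern (red dominant, red > green > blue). The rule and threshold below are illustrative assumptions for exposition, not the thesis's actual decision criterion.

```python
import numpy as np

def fire_colour_mask(rgb, r_min=180):
    """Return a boolean mask of flame-coloured pixels under a simple
    illustrative rule: strong red channel with R > G > B ordering."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r >= r_min) & (r > g) & (g > b)

# Synthetic 2x2 frame: one flame-coloured pixel, three background pixels.
frame = np.array([[[250, 120, 30], [40, 90, 40]],
                  [[20, 20, 200], [100, 100, 100]]], dtype=np.uint8)
mask = fire_colour_mask(frame)
```

In a two-layered detector, a motion layer would then test whether the flagged regions flicker or move independently of the camera-induced background motion.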

    Classification of urban areas from GeoEye-1 imagery through texture features based on Histograms of Equivalent Patterns

    A family of 26 non-parametric texture descriptors based on Histograms of Equivalent Patterns (HEP) has been tested, many of them for the first time in remote sensing applications, to improve urban classification through object-based image analysis of GeoEye-1 imagery. These HEP descriptors were compared with the widely known texture measures derived from the gray-level co-occurrence matrix (GLCM). All five finally selected HEP descriptors (Local Binary Patterns, Improved Local Binary Patterns, Binary Gradient Contours and two different combinations of Completed Local Binary Patterns) performed faster in terms of execution time and yielded significantly better accuracy figures than the GLCM features. Moreover, the HEP texture descriptors provided information complementary to the basic spectral features from GeoEye-1's bands (R, G, B, NIR, PAN), significantly improving overall accuracy values by around 3%. Conversely, in statistical terms, strategies involving GLCM texture derivatives did not improve the classification accuracy achieved from the spectral information alone. Lastly, both approaches (HEP and GLCM) showed similar behavior with regard to the training set size applied
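
The first of the retained descriptors, the basic Local Binary Pattern, can be sketched in a few lines: each pixel is encoded by thresholding its 8 neighbours against the centre value, and the image is summarised by the histogram of the resulting codes. This is the standard 8-neighbour formulation as a minimal sketch, not the paper's exact implementation.

```python
import numpy as np

def lbp_histogram(img):
    """Basic LBP: threshold the 8 neighbours of each interior pixel against
    the centre, pack the comparisons into an 8-bit code, and return the
    normalised 256-bin histogram of codes (a Histogram of Equivalent Patterns)."""
    h, w = img.shape
    # 8 neighbours in clockwise order starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

flat = np.full((4, 4), 7, dtype=np.uint8)  # uniform patch: all codes are 255
hist = lbp_histogram(flat)
```

The other retained descriptors (Improved LBP, Binary Gradient Contours, Completed LBP) vary the comparison rule, but all fit this same histogram-of-codes scheme.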

    Autonomous Sailboat Navigation

    The purpose of this study was to investigate novel methods for an unmanned sailing boat that enable it to sail fully autonomously, navigate safely, and perform long-term missions. The author used robotic sailing boat prototypes for field experiments as his main research method. Two robotic sailing boats were developed especially for this purpose. A compact software model of a sailing boat's behaviour allowed further evaluation of routing and obstacle avoidance methods in computer simulation. The results of real-world experiments and computer simulations were validated against each other. It has been demonstrated that autonomous sailing is possible through the effective combination of novel techniques that allow autonomous sailing boats to plan appropriate routes, react properly to obstacles, and carry out sailing manoeuvres by controlling the rudder and sails. Novel methods for weather routing, collision avoidance, and autonomous manoeuvre execution have been proposed and successfully demonstrated. The combination of these techniques in a layered hybrid subsumption architecture makes robotic sailing boats a promising tool for many applications, especially in ocean observation

    HETEROGENEOUS MULTI-SENSOR FUSION FOR 2D AND 3D POSE ESTIMATION

    Sensor fusion is a process in which data from different sensors are combined to acquire an output that cannot be obtained from any individual sensor. This dissertation first considers a 2D image-level real-world problem from the rail industry and proposes a novel solution using sensor fusion, then proceeds to the more complicated 3D problem of multi-sensor fusion for UAV pose estimation. One of the most important safety-related tasks in the rail industry is the early detection of defective rolling stock components. Railway wheels and wheel bearings are two components prone to damage due to their interactions with the brakes and railway track, which makes them a high priority when the rail industry investigates improvements to current detection processes. The main contribution of this dissertation in this area is the development of a computer vision method for automatically detecting defective wheels that can potentially replace the current manual inspection procedure. The algorithm fuses images taken by wayside thermal and vision cameras and uses the outcome for wheel defect detection. As a byproduct, the process also includes a method for detecting hot bearings from the same images. We evaluate our algorithm using simulated and real images from UPRR in North America, and it is shown in this dissertation that sensor fusion techniques improve the accuracy of malfunction detection. After the 2D application, the more complicated 3D application is addressed. Precise, robust, and consistent localization is an important subject in many areas such as vision-based control, path planning, and SLAM. Each of the different sensors employed to estimate the pose has its own strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages. 
In this dissertation, a new approach to 3D pose estimation for a UAV in an unknown, GPS-denied environment is presented. The proposed algorithm fuses data from an IMU, a camera, and a 2D LiDAR to achieve accurate localization. Among the employed sensors, LiDAR has not received proper attention in the past, mostly because a 2D LiDAR can only provide pose estimation in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this research that enables a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from an IMU and a camera. To the best of our knowledge, a 2D LiDAR has never been employed for 3D localization without a prior map, and it is shown in this dissertation that our method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments
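
The core intuition behind fusing sensors of unequal quality can be illustrated with a minimal variance-weighted sketch: each sensor's reading is weighted by the inverse of its variance, so more certain sensors dominate the fused estimate. The dissertation's 3D estimator is far more elaborate; the scalar setting and the sensor variances below are illustrative assumptions only.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent scalar estimates.
    Returns the fused value and its (reduced) variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.dot(w, estimates) / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var

# Hypothetical altitude estimates (metres): drifting IMU integration,
# noisy camera estimate, and a precise LiDAR return.
z, var = fuse([10.8, 10.2, 10.05], [1.0, 0.25, 0.01])
```

The fused value is pulled towards the most certain sensor (here the LiDAR), and the fused variance is smaller than any single sensor's variance, which is exactly why fusion helps cope with individual sensor weaknesses and outages.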

    Mechatronic Systems

    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in the time and cost of manufacturing. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in various fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics for the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of Chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools

    Mind the Gap: Developments in Autonomous Driving Research and the Sustainability Challenge

    Scientific knowledge on autonomous-driving technology is expanding at a faster-than-ever pace. As a result, the likelihood of incurring information overload is particularly notable for researchers, who can struggle to overcome the gap between information processing requirements and information processing capacity. We address this issue by adopting a multi-granulation approach to latent knowledge discovery and synthesis in large-scale research domains. The proposed methodology combines citation-based community detection methods and topic modeling techniques to give a concise but comprehensive overview of how the autonomous vehicle (AV) research field is conceptually structured. Thirteen core thematic areas are extracted and presented by mining the large data-rich environments resulting from 50 years of AV research. The analysis demonstrates that this research field is strongly oriented towards examining the technological developments needed to enable the widespread rollout of AVs, whereas it largely overlooks the wide-ranging sustainability implications of this sociotechnical transition. On account of these findings, we call for a broader engagement of AV researchers with the sustainability concept and we invite them to increase their commitment to conducting systematic investigations into the sustainability of AV deployment. Sustainability research is urgently required to produce an evidence-based understanding of what new sociotechnical arrangements are needed to ensure that the systemic technological change introduced by AV-based transport systems can fulfill societal functions while meeting the urgent need for more sustainable transport solutions

    Using learning from demonstration to enable automated flight control comparable with experienced human pilots

    Modern autopilots fall under the domain of control theory, which utilizes Proportional Integral Derivative (PID) controllers that can provide relatively simple autonomous control of an aircraft, such as maintaining a certain trajectory. However, PID controllers cannot cope with uncertainties due to their non-adaptive nature. In addition, the autopilots of modern airliners have contributed to several air catastrophes due to robustness issues. The aviation industry is therefore seeking solutions that would enhance safety. A potential way to achieve this is to develop intelligent autopilots that can learn to pilot aircraft in a manner comparable with experienced human pilots. This work proposes the Intelligent Autopilot System (IAS), which provides a comprehensive level of autonomy and intelligent control to the aviation industry. The IAS learns piloting skills by observing experienced teachers while they provide demonstrations in simulation. A robust Learning from Demonstration approach is proposed in which human pilots demonstrate the task to be learned in a flight simulator while training datasets are captured. The datasets are then used by Artificial Neural Networks (ANNs) to generate control models automatically. The control models imitate the skills of the experienced pilots when performing the different piloting tasks while handling flight uncertainties such as severe weather conditions and emergency situations. Experiments show that the IAS performs learned skills and tasks with high accuracy even when presented with limited examples, which suits the proposed approach: it relies on many single-hidden-layer ANNs instead of one or a few large deep ANNs, whose black-box behaviour cannot be explained to the aviation regulators. 
The results demonstrate that the IAS is capable of imitating low-level sub-cognitive skills, such as rapid and continuous stabilization attempts in stormy weather conditions, and high-level strategic skills, such as the sequence of sub-tasks necessary to take off, land, and handle emergencies
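
The fixed-gain PID law that the IAS is contrasted with can be sketched as follows: u(t) = Kp*e(t) + Ki*∫e dt + Kd*de/dt. The gains and the altitude-hold scenario below are illustrative assumptions, not values from the thesis; the point is that the gains are fixed, so the controller cannot adapt when the aircraft's dynamics or environment change.

```python
class PID:
    """Minimal discrete PID controller with fixed (non-adaptive) gains."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Hypothetical altitude hold: commanded 100 m, currently at 90 m,
# so the controller output commands a climb.
ctrl = PID(kp=0.5, ki=0.1, kd=0.05, dt=0.1)
u = ctrl.update(setpoint=100.0, measurement=90.0)
```

In the IAS, this fixed mapping from error to command is replaced by ANN control models trained on pilot demonstrations, which can reproduce context-dependent behaviour that no single set of PID gains captures.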