
    Filtering based multi-sensor data fusion algorithm for a reliable unmanned surface vehicle navigation

    When considering the working conditions under which an unmanned surface vehicle (USV) operates, the navigational sensors, which already have inherent uncertainties, are subjected to environmental influences that can affect the accuracy, security and reliability of USV navigation. To combat this, multi-sensor data fusion algorithms are developed in this paper to process the raw measurements from three kinds of commonly used sensors and to calculate improved navigational data for USV operation in a practical environment. The Unscented Kalman Filter, an advanced filtering technique dedicated to non-linear systems, has been adopted as the underlying algorithm, with its performance validated in various computer-based simulations in which practical, dynamic navigational influences, such as ocean currents exerting forces on the vessel's structure, are considered.
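
    As a rough illustration of this kind of UKF-based fusion, the sketch below runs a UKF over noisy position fixes under an assumed planar constant-velocity model; the state and measurement models, noise values, and the use of the filterpy library are illustrative assumptions, not the paper's implementation.

```python
# Minimal UKF fusion sketch using filterpy (illustrative, not the paper's code).
# Assumes a planar constant-velocity vessel model and noisy position fixes.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # assumed sample period (s)

def fx(x, dt):
    # State transition for [px, py, vx, vy] under constant velocity.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    # Measurement model: the sensor reports position only.
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)          # initial state
ukf.P *= 10.0                # initial state uncertainty
ukf.R = np.diag([2.0, 2.0])  # assumed position-sensor noise (m^2)
ukf.Q = np.eye(4) * 0.01     # process noise, loosely covering current-induced drift

for z in (np.array([1.0, 0.9]), np.array([2.1, 2.0])):  # fake position fixes
    ukf.predict()
    ukf.update(z)
    print(ukf.x[:2])  # fused position estimate
```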

    Robust Multi-sensor Data Fusion for Practical Unmanned Surface Vehicles (USVs) Navigation

    The development of practical Unmanned Surface Vehicles (USVs) is attracting increasing attention, driven by their assorted military and commercial application potential. However, addressing the uncertainties present in practical navigational sensor measurements of a USV in the maritime environment remains the main challenge of this development. This research aims to develop a multi-sensor data fusion system that autonomously provides a USV with reliable navigational information on its own position and heading, and that detects dynamic target ships in the surrounding environment, in a holistic fashion. A multi-sensor data fusion algorithm based on the Unscented Kalman Filter (UKF) has been developed to generate more accurate estimates of a USV's navigational data under practical environmental disturbances. A novel covariance matching adaptive estimation algorithm has been proposed to deal with the issues caused by unknown and varying sensor noise in practice and thereby improve system robustness. Measures have been designed to quantify system reliability numerically, to recover the USV trajectory during short-term sensor signal loss, and to autonomously detect and discard permanently malfunctioning sensors, thereby enabling tolerance of potential sensor faults. The performance of the algorithms has been assessed through theoretical simulations as well as with experimental data collected from a real-world USV project conducted in collaboration with Plymouth University. To increase the degree of autonomy of USVs in perceiving their surrounding environment, target detection and prediction algorithms using an Automatic Identification System (AIS) in conjunction with a marine radar have been proposed to provide full detection of multiple dynamic targets over a wider coverage range, remedying the narrow detection range and sensor uncertainties of the AIS. The detection algorithms have been validated in simulations of practical environments with water current effects. The performance of the developed multi-sensor data fusion system in providing reliable navigational data and perceiving the surrounding environment for USV navigation has been comprehensively demonstrated.
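
    The covariance matching idea can be sketched as follows: the measurement noise covariance R is re-estimated from a sliding window of filter innovations. This is one standard formulation under an assumed linearized measurement model; the paper's exact adaptive scheme may differ.

```python
# Innovation-based covariance matching sketch for adapting the measurement
# noise covariance R (one standard formulation; the paper's variant may differ).
import numpy as np
from collections import deque

class MeasurementNoiseAdapter:
    def __init__(self, window=30):
        self.innovations = deque(maxlen=window)  # sliding window of v v^T terms

    def estimate_R(self, v, H, P_prior):
        """v: innovation z - h(x_prior); H: (linearized) measurement matrix;
        P_prior: prior state covariance. Returns an adapted estimate of R."""
        self.innovations.append(np.outer(v, v))
        C_v = np.mean(np.stack(self.innovations), axis=0)  # sample innovation covariance
        R_hat = C_v - H @ P_prior @ H.T  # covariance matching relation
        # Crude guard: keep the diagonal and floor it at a small positive value.
        return np.diag(np.clip(np.diag(R_hat), 1e-6, None))
```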

    Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning

    This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, also commonly known as a drone) in a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS consists of exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as a landing platform to alleviate the need for a UAV to be airborne for long periods of time, whilst the latter can increase overall environmental awareness thanks to its ability to cover large portions of the prevailing environment with one or more onboard cameras. There are numerous potential applications in which this system can be used, such as search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few. The theory developed is of a general nature. The landing manoeuvre has been accomplished mainly by identifying, through artificial vision techniques, a fiducial marker placed on a flat surface serving as a landing platform. The raison d'etre of the thesis was to propose a new solution for autonomous landing that relies solely on onboard sensors, with minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the in-flight drone. When tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classic control theory used in this approach suggested the need for a new solution exploiting the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks. The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control on the Atari video-game suite represented a fascinating, if challenging, new way to see and address the landing problem. Therefore, novel architectures were designed for approximating the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high degree of accuracy and robustness. Both approaches have been implemented on a simulated test-bed based on the Gazebo simulator and the model of the Parrot AR-Drone. The DRL-based solution was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these innovative methods are feasible and practicable, not only in outdoor marine scenarios but also in indoor ones.
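
    The action-value approximation described here resembles a DQN-style network. Below is a minimal PyTorch sketch mapping a stack of low-resolution grey-scale frames to Q-values over discrete navigation actions; the layer sizes, frame count, input resolution, and action count are assumptions, not the thesis's actual architectures.

```python
# Minimal DQN-style action-value network sketch (illustrative architecture).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions=5, frames=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per navigation action
        )

    def forward(self, x):  # x: (batch, frames, 84, 84) grey-scale frame stack
        return self.head(self.conv(x))

q = QNetwork()
obs = torch.zeros(1, 4, 84, 84)  # dummy observation
action = q(obs).argmax(dim=1)    # greedy action selection
```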

    Kelpie: A ROS-based multi-robot simulator for water surface and aerial vehicles

    Testing and debugging real hardware is a time-consuming task, particularly in the case of aquatic robots, for which it is necessary to transport and deploy the robots on the water. Performing waterborne and airborne field experiments with expensive hardware embedded in not yet fully functional prototypes is a highly risky endeavour. In this sense, physics-based 3D simulators are key to fast-paced and affordable development of such robotic systems. This paper contributes a modular, open-source, and soon to be freely available online, ROS-based multi-robot simulator specially focused on aerial and water surface vehicles. The simulator is being developed as part of the RIVERWATCH experiment in the ECHORD European FP7 project, which aims at demonstrating a multi-robot system for remote monitoring of riverine environments.
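
    For a flavour of the kind of ROS interface such a simulator exposes, here is a minimal rospy node that publishes velocity commands to a simulated vehicle; the /usv/cmd_vel topic name and command rate are assumptions, not Kelpie's documented interface.

```python
#!/usr/bin/env python
# Minimal ROS node sketch for driving a simulated surface vehicle
# (the topic name below is an assumed example, not Kelpie's actual interface).
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node('usv_teleop_example')
    pub = rospy.Publisher('/usv/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz command loop
    cmd = Twist()
    cmd.linear.x = 0.5     # forward speed (m/s)
    cmd.angular.z = 0.1    # slow turn (rad/s)
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()
```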

    WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmark for Autonomous Driving on Water Surfaces

    Autonomous driving on water surfaces plays an essential role in executing hazardous and time-consuming missions, such as maritime surveillance, survivor rescue, environmental monitoring, hydrography mapping and waste cleaning. This work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces. Equipped with a 4D radar and a monocular camera, our Unmanned Surface Vehicle (USV) proffers all-weather solutions for discerning object-related information, including color, shape, texture, range, velocity, azimuth, and elevation. Focusing on typical static and dynamic objects on water surfaces, we label the camera images and radar point clouds at pixel level and point level, respectively. In addition to basic perception tasks, such as object detection, instance segmentation and semantic segmentation, we also provide annotations for free-space segmentation and waterline segmentation. Leveraging the multi-task and multi-modal data, we conduct numerous experiments on the single modalities of radar and camera, as well as the fused modalities. Results demonstrate that 4D radar-camera fusion can considerably enhance the robustness of perception on water surfaces, especially in adverse lighting and weather conditions. The WaterScenes dataset is publicly available at https://waterscenes.github.io.
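
    A core step in radar-camera fusion is projecting radar points into the image plane. The sketch below shows the standard pinhole projection with made-up calibration matrices; it does not reflect WaterScenes' actual calibration format.

```python
# Sketch of projecting radar points into a camera image for fusion
# (the intrinsics and extrinsics below are illustrative placeholders).
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # assumed radar-to-camera rotation
t = np.array([0.0, 0.2, 0.0])        # assumed radar-to-camera translation (m)

def radar_to_pixels(points_xyz):
    """points_xyz: (N, 3) radar points. Returns (M, 2) pixel coordinates."""
    cam = points_xyz @ R.T + t       # transform into the camera frame
    cam = cam[cam[:, 2] > 0]         # keep points in front of the camera
    uv = cam @ K.T                   # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

pixels = radar_to_pixels(np.array([[1.0, 0.0, 10.0], [-2.0, 0.5, 15.0]]))
```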

    Image segmentation in marine environments using convolutional LSTM for temporal context

    Unmanned surface vehicles (USVs) have a wealth of possible applications, many of which are limited by the vehicle's level of autonomy. The development of efficient and robust computer vision algorithms is a key factor in improving this, as they permit autonomous detection, and thereby avoidance, of obstacles. Recent developments in convolutional neural networks (CNNs), and the collection of increasingly diverse datasets, present opportunities for improved computer vision algorithms requiring less data and computational power. One area of potential improvement is the utilisation of temporal context from USV camera feeds, in the form of sequential video frames, to consistently identify obstacles in diverse marine environments under challenging conditions. This paper documents the implementation of this through long short-term memory (LSTM) cells in existing CNN structures and an exploration of the parameters affecting their efficacy. It is found that LSTM cells are promising for achieving improved performance; however, there are weaknesses associated with network training procedures and datasets. Several novel network architectures are presented and compared using a state-of-the-art benchmarking method. It is shown that LSTM cells allow for better model performance with fewer training iterations, but that this advantage diminishes with additional training.
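
    For reference, a convolutional LSTM cell in one common formulation looks like the PyTorch sketch below; the channel counts and spatial sizes are illustrative, and the paper's specific architectures are not reproduced here.

```python
# Minimal ConvLSTM cell sketch (one common formulation).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # One convolution produces all four gates from [input, hidden] stacked.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)  # convolutional cell-state update
        h = o * torch.tanh(c)          # new hidden feature map
        return h, c

cell = ConvLSTMCell(in_ch=64, hid_ch=64)
h = c = torch.zeros(1, 64, 48, 64)                 # hidden/cell state feature maps
for frame_feats in torch.zeros(5, 1, 64, 48, 64):  # five sequential frames
    h, c = cell(frame_feats, (h, c))               # temporal context accumulates in (h, c)
```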

    USE OF ARTIFICIAL FIDUCIAL MARKERS FOR USV SWARM COORDINATION

    Typical swarm algorithms (leader-follower, artificial potentials, etc.) rely on knowledge of the pose of each vehicle and of inter-vehicle proximity. This information is often obtained via the Global Positioning System (GPS) and communicated via radio-frequency means. This research examines the capabilities and limitations of using a fiducial marker system in conjunction with an artificial potential field algorithm to achieve inter-vehicle localization and coordinate the motion of unmanned surface vessels operating together in an environment where satellite and radio communications are inhibited. Using Gazebo, a physics-based robotic simulation environment, a virtual model is developed for incorporating fiducial markers on a group of autonomous surface vessels. A control framework using MATLAB and the Robot Operating System (ROS) is developed that integrates image processing, AprilTag fiducial marker detection, and artificial potential control algorithms. This architecture receives multiple video streams, detects AprilTags, and extracts pose information to control the forward motion and inter-vehicle spacing in a swarm of autonomous surface vessels. The control architecture is tested for a variety of trajectories and tuned so that the swarm can successfully maintain formation control.
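
    A minimal artificial-potential spacing rule driven by AprilTag-derived relative poses might look like the sketch below; the gains, desired spacing, and functional form are illustrative assumptions rather than the tuned controller from this work.

```python
# Artificial-potential spacing sketch from relative neighbour positions
# (gains and desired spacing are illustrative placeholders).
import numpy as np

def potential_velocity(rel_positions, d_desired=5.0, k_att=0.3, k_rep=2.0):
    """rel_positions: (dx, dy) offsets to neighbours, e.g. recovered from
    AprilTag pose estimates. Returns a commanded planar velocity vector."""
    v = np.zeros(2)
    for p in map(np.asarray, rel_positions):
        d = np.linalg.norm(p)
        if d < 1e-6:
            continue  # ignore degenerate detections
        v += k_att * (d - d_desired) * p / d  # pull toward the desired spacing
        if d < d_desired:
            v -= k_rep * (1.0 / d - 1.0 / d_desired) * p / d  # repel when too close
    return v

cmd = potential_velocity([(3.0, 1.0), (7.0, -2.0)])  # two detected neighbours
```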

    Maritime Object Detection, Tracking, and Classification Using Lidar and Vision-Based Sensor Fusion

    Autonomous Surface Vehicles (ASVs) have the capability of replacing humans in dull, dirty, and dangerous jobs in the maritime field. However, few successful ASV systems exist today, as there is a need for greater sensing capabilities. Furthermore, a successful ASV system requires object detection and recognition capabilities to enable autonomous navigation and situational awareness. This thesis demonstrates an application of LiDAR sensors in maritime environments for object detection, classification, and camera sensor fusion. This is accomplished through the integration of a high-fidelity GPS/INS system, 3D LiDAR sensors, and a pair of cameras. After rotating LiDAR returns into a global reference frame, they are reduced to a 3D occupancy grid. Objects are then extracted and classified with a Support Vector Machine (SVM) classifier. The LiDAR returns, when converted from the global frame to a camera frame, allow the cameras to process a region of their imaging frame to assist in the classification of objects using color-based features. The SVM implementation results in an overall accuracy of 98.7% for 6 classes. The transformation into pixel coordinates is shown to be successful, with an angular error of 2 degrees attributed to measurement error propagated through the rotations.
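
    The occupancy-grid reduction step can be sketched as follows, assuming LiDAR returns already rotated into the global frame; the cell size and bounds are arbitrary illustrative choices, not the thesis's parameters.

```python
# Sketch of reducing globally referenced LiDAR returns to a coarse 3D occupancy
# grid (resolution and bounds are illustrative placeholders).
import numpy as np

def to_occupancy_grid(points, cell=0.5, bounds=((-50, 50), (-50, 50), (-2, 8))):
    """points: (N, 3) LiDAR returns already in the global frame.
    Returns a boolean 3D grid marking occupied cells."""
    (x0, _), (y0, _), (z0, _) = bounds
    shape = tuple(int((b - a) / cell) for a, b in bounds)
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((points - np.array([x0, y0, z0])) / cell).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)  # drop out-of-bounds returns
    grid[tuple(idx[ok].T)] = True
    return grid

grid = to_occupancy_grid(np.array([[1.2, -3.4, 0.1], [10.0, 10.0, 1.0]]))
```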

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    The application of multi-sensor fusion technology has drawn a lot of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets by considering multiple sensors, and they can achieve a detailed description of the environment and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrations of their application to real-world problems.