
    Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review

    Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement onboard radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object-detection technologies have gradually been applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches, comparing the advantages and drawbacks of each technique across four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed, with highly influential public datasets introduced as benchmarks to verify the effectiveness of these approaches.
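Of the traditional techniques the review surveys, background subtraction is the most direct to sketch. A minimal frame-differencing version in Python, assuming grey-scale frames as nested lists; the threshold value is an illustrative assumption, not a figure from the review:

```python
# Frame-differencing background subtraction: mark pixels whose
# intensity deviates from a background model by more than a threshold.
# THRESH = 30 is an illustrative assumption.
THRESH = 30

def foreground_mask(background, frame):
    """1 where |frame - background| exceeds THRESH, else 0."""
    return [
        [1 if abs(f - b) > THRESH else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 90, 10], [10, 10, 95]]   # two bright outliers
print(foreground_mask(background, frame))   # [[0, 1, 0], [0, 0, 1]]
```

Real maritime detectors replace the fixed background with an adaptive model, since waves make a static reference frame unreliable.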

    Detection of bodies in maritime rescue operations using Unmanned Aerial Vehicles with multispectral cameras

    In this study, we use unmanned aerial vehicles equipped with multispectral cameras to search for bodies in maritime rescue operations. A series of flights were performed in open‐water scenarios in the northwest of Spain, using a certified aquatic rescue dummy in dangerous areas and real people when the weather conditions allowed it. The multispectral images were aligned and used to train a convolutional neural network for body detection. An exhaustive evaluation was performed to assess the best combination of spectral channels for this task. Three approaches based on a MobileNet topology were evaluated, using (a) the full image, (b) a sliding window, and (c) a precise localization method. The first method classifies an input image as containing a body or not, the second uses a sliding window to yield a class for each subimage, and the third uses transposed convolutions returning a binary output in which the body pixels are marked. In all cases, the MobileNet architecture was modified by adding custom layers and preprocessing the input to align the multispectral camera channels. Evaluation shows that the proposed methods yield reliable results, obtaining the best classification performance when combining green, red‐edge, and near‐infrared channels. We conclude that the precise localization approach is the most suitable method, obtaining similar accuracy to the sliding window but achieving a spatial localization close to 1 m. The presented system is about to be implemented for real maritime rescue operations carried out by Babcock Mission Critical Services Spain. This study was performed in collaboration with Babcock MCS Spain and funded by the Galicia Region Government through the Civil UAVs Initiative program, the Spanish Government's Ministry of Economy, Industry, and Competitiveness through the RTC‐2014‐1863‐8 and INAER4‐14Y (IDI‐20141234) projects, and grant number 730897 under the HPC‐EUROPA3 project supported by Horizon 2020.
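The sliding-window scheme (b) can be sketched independently of the MobileNet classifier it wraps. The window size, stride, and toy brightness "classifier" below are illustrative placeholders, not the paper's network:

```python
def sliding_windows(width, height, win, stride):
    """Yield top-left corners of every win x win window in the image."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y

def detect(image, classify, win=2, stride=1):
    """Return corners of windows the classifier flags as positive."""
    h, w = len(image), len(image[0])
    hits = []
    for x, y in sliding_windows(w, h, win, stride):
        sub = [row[x:x + win] for row in image[y:y + win]]
        if classify(sub):
            hits.append((x, y))
    return hits

# Toy stand-in classifier: positive if the window has a bright pixel.
bright = lambda sub: any(v > 200 for row in sub for v in row)

image = [[0, 0, 0, 0],
         [0, 0, 255, 0],
         [0, 0, 0, 0]]
print(detect(image, bright))  # every window overlapping the bright pixel
```

In the paper's multispectral setting, each sub-image would carry several aligned channels rather than one grey value per pixel.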

    Ship recognition on the sea surface using aerial images taken by Uav : a deep learning approach

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Oceans are of great importance to mankind: they are a major source of food, they have a large impact on the global environmental equilibrium, and most world commerce is carried over them. Thus, maritime surveillance and monitoring, in particular identifying the ships in use, is of great importance for overseeing activities like fishing, marine transportation, navigation in general, illegal border encroachment, and search and rescue operations. In this thesis, we used images obtained with Unmanned Aerial Vehicles (UAVs) over the Atlantic Ocean to identify what type of ship (if any) is present at a given location. Images from UAV cameras suffer from camera motion, scale variability, variability in the sea surface, and sun glare. Extracting information from these images is challenging and is mostly done by human operators, but advances in computer vision and the development of deep learning techniques in recent years have made it possible to do so automatically. We used four state-of-the-art pretrained deep learning network models, namely VGG16, Xception, ResNet, and InceptionResNet, trained on the ImageNet dataset, modified their original structure using transfer-learning-based fine-tuning techniques, and then trained them on our dataset to create new models. We achieved very high accuracy (99.6 to 99.9% correct classifications) when classifying the ships that appear in the images of our dataset. With such a high success rate (albeit at the cost of high computing power), we can proceed to implement these algorithms on maritime patrol UAVs, and thus improve Maritime Situational Awareness.
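The transfer-learning recipe described (freeze a pretrained backbone, replace and retrain the classification head) can be sketched with toy stand-in classes; these are not the Keras/VGG16 APIs, only an illustration of the freeze-and-replace structure:

```python
# Toy stand-ins for a layered model; not a real deep learning API.
class Layer:
    def __init__(self, name):
        self.name, self.trainable = name, True

class Model:
    def __init__(self, layers):
        self.layers = layers

def fine_tune(model, new_head, freeze_upto):
    """Freeze the first freeze_upto layers, then append a new head."""
    for layer in model.layers[:freeze_upto]:
        layer.trainable = False
    model.layers.append(new_head)
    return model

backbone = Model([Layer(f"conv{i}") for i in range(5)])  # "pretrained"
tuned = fine_tune(backbone, Layer("ship_classifier"), freeze_upto=4)
print([(l.name, l.trainable) for l in tuned.layers])
```

In the thesis's setting, the frozen layers keep their ImageNet weights while only the new head (and optionally the last backbone layers) is trained on the ship images.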

    A horizon line annotation tool for streamlining autonomous sea navigation experiments

    Horizon line (or sea line) detection (HLD) is a critical component in multiple marine autonomous navigation tasks, such as identifying the navigation area (i.e., the sea), obstacle detection and geo-localization, and digital video stabilization. A recent survey highlighted several weaknesses of such detectors, particularly on sea conditions missing from the most extensive dataset currently used by HLD researchers. Experimental validation of more robust HLDs requires collecting an extensive set of images covering these missing sea conditions and annotating each collected image with the correct position and orientation of the horizon line. The annotation task is daunting without a proper tool. Therefore, we present the first public annotation software with tailored features to make the sea-line annotation process fast and easy. The software is available at: https://drive.google.com/drive/folders/1c0ZmvYDckuQCPIWfh_70P7E1A_DWlIvF?usp=sharin
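An annotation of the horizon's "position and orientation" could, for example, be stored as a vertical offset plus a tilt angle and converted to drawable image-border endpoints. This parametrization is an assumption for illustration, not necessarily the tool's format:

```python
import math

def horizon_endpoints(width, y_center, tilt_deg):
    """Image-border endpoints of a horizon through (width/2, y_center),
    tilted by tilt_deg (positive tilts the right end downwards)."""
    slope = math.tan(math.radians(tilt_deg))
    return ((0, y_center - slope * width / 2),
            (width, y_center + slope * width / 2))

print(horizon_endpoints(640, 240, 0.0))  # level horizon at row 240
```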

    Software Defined Multi-Spectral Imaging for Arctic Sensor Networks

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. 
The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016
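A power-efficiency comparison of the three computing options reduces to ranking throughput per watt. The measurements below are invented for illustration and are not results from the paper:

```python
# Invented (frames/s, watts) measurements for the three options.
measurements = {
    "multi-core":        (24.0, 12.0),
    "FPGA + multi-core": (30.0, 10.0),
    "GPU + multi-core":  (60.0, 18.0),
}

def rank_by_efficiency(data):
    """Options sorted by frames-per-second per watt, best first."""
    return sorted(data, key=lambda k: data[k][0] / data[k][1], reverse=True)

print(rank_by_efficiency(measurements))
```

A real selection would repeat this ranking across the fusion and depth-mapping workloads and frame rates measured in the study, subject to the 10 to 20 Watt budget.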

    An unmanned aircraft system for maritime operations: The automatic detection subsystem

    Marques, M. M., Lobo, V., Aguiar, A. P., Silva, J. E., de Sousa, J. B., Nunes, M. D. F., Ribeiro, R. A., Bernardino, A., Cruz, G., & Marques, J. S. (2021). An unmanned aircraft system for maritime operations: The automatic detection subsystem. Marine Technology Society Journal, 55(1), 38-49. https://doi.org/10.4031/MTSJ.55.1.4 --- This work was funded by POFC (Programa Operacional Factores de Competitividade) within the National Strategic Reference Framework (QREN) under grant agreement 2013/034063 (SEAGULL, Project Number 34063). This paper addresses the development of an integrated system to support maritime situation awareness based on unmanned aerial vehicles (UAVs), emphasizing the role of the automatic detection subsystem. One of the main topics of research in the SEAGULL project was the automatic detection of sea vessels from sensors onboard the UAV, to help human operators generate situational awareness of maritime events such as (a) detection and geo-referencing of oil spills or hazardous and noxious substances, (b) tracking systems (e.g., vessels, shipwrecks, lifeboats, debris), (c) recognizing behavioral patterns (e.g., vessel rendezvous, high-speed vessels, atypical patterns of navigation), and (d) monitoring environmental parameters and indicators. We describe a system composed of optical sensors, an embedded computer, communication systems, and a vessel detection algorithm that can run in real time on the embedded UAV hardware and provide human operators with vessel detections with low latency, high precision rates (about 99%), and suitable recall (>50%), which is comparable to other, more computationally intensive state-of-the-art approaches. Field test results, including the detection of lifesavers and multiple vessels in red-green-and-blue (RGB) and thermal images, are presented and discussed.
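The reported figures (precision about 99%, recall above 50%) follow directly from raw detection counts; the counts below are invented solely to illustrate the two metrics:

```python
def precision_recall(tp, fp, fn):
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Invented counts: 99 true detections, 1 false alarm, 80 missed vessels.
p, r = precision_recall(tp=99, fp=1, fn=80)
print(p, r)  # precision 0.99, recall just over 0.55
```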

    Adversarial AI Testcases for Maritime Autonomous Systems

    Contemporary maritime operations such as shipping are a vital component of global trade and defence. The evolution towards maritime autonomous systems, which often provide significant benefits (e.g., cost, physical safety), requires the use of artificial intelligence (AI) to automate the functions of a conventional crew. However, unsecured AI systems can be plagued with vulnerabilities naturally inherent in complex AI models. The adversarial AI threat, so far evaluated primarily in laboratory environments, increases the likelihood of strategic adversarial exploitation of, and attacks on, mission-critical AI, including maritime autonomous systems. This work evaluates AI threats to maritime autonomous systems in situ. The results show that multiple attacks can be used against real-world maritime autonomous systems with a range of lethality, but that the effects of AI attacks in a dynamic and complex environment differ from those observed in lower-entropy laboratory environments. We propose a set of adversarial test examples and demonstrate their use, specifically in the marine environment. The results of this paper highlight security risks and deliver a set of principles to mitigate threats to AI, throughout the AI lifecycle, in an evolving threat landscape.
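The flavour of attack such testcases probe can be sketched with the fast-gradient-sign idea on a tiny hand-coded logistic model. The weights and epsilon budget are illustrative assumptions; real attacks on maritime AI target far larger models:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    """Confidence of the positive class for a linear logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, eps):
    """Step each input against the positive class by eps times the sign
    of the score gradient (for a linear score, the gradient is w)."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [2.0, -1.0]          # illustrative model weights
x = [1.0, 0.5]           # a "clean" input
x_adv = fgsm(w, x, eps=0.5)
print(predict(w, x), predict(w, x_adv))  # confidence drops under attack
```

The paper's point is that such attacks, tuned in the laboratory, behave differently once sensor noise, motion, and weather enter the loop.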

    Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons

    With the advancement of maritime unmanned aerial vehicles (UAVs) and deep learning technologies, UAV-based object detection has become increasingly significant in the maritime industry and ocean engineering. Endowed with intelligent sensing capabilities, maritime UAVs enable effective and efficient maritime surveillance. To further promote the development of maritime UAV-based object detection, this paper provides a comprehensive review of challenges, related methods, and UAV aerial datasets. Specifically, we first briefly summarize four challenges for object detection on maritime UAVs, i.e., object feature diversity, device limitations, maritime environment variability, and dataset scarcity. We then focus on computational methods for improving maritime UAV-based object detection performance, covering scale-aware methods, small-object detection, view-aware methods, rotated object detection, lightweight methods, and others. Next, we review UAV aerial image/video datasets and propose a maritime UAV aerial dataset named MS2ship for ship detection. Furthermore, we conduct a series of experiments to present the performance evaluation and robustness analysis of object detection methods on maritime datasets. Finally, we discuss and give an outlook on future work for maritime UAV-based object detection. The MS2ship dataset is available at https://github.com/zcj234/MS2ship
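Experimental comparisons like these match predicted boxes to annotations by intersection-over-union (IoU); a minimal sketch with boxes as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.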

    A Deep Learning-Based Automatic Object Detection Method for Autonomous Driving Ships

    An important feature of an Autonomous Surface Vehicle (ASV) is its capability for automatic object detection, allowing it to avoid collisions and obstacles and to navigate on its own. Deep learning has made significant headway in solving fundamental challenges associated with object detection and computer vision. With the tremendous demand for and advancement of the technologies associated with ASVs, interest in applying deep learning techniques to the challenges of autonomous ship driving has grown substantially over the years. In this thesis, we study, design, and implement an object recognition framework that detects and recognizes objects found at sea. We first curated a Sea-object Image Dataset (SID) specifically for this project. Then, starting from a RetinaNet model pre-trained on the large-scale Microsoft COCO object detection dataset, we fine-tuned it on our SID dataset. We focused on sea objects that may potentially cause collisions or other types of maritime accidents. Our final model can effectively detect various types of floating or surrounding objects and classify them into one of ten predefined classes: buoy, ship, island, pier, person, waves, rocks, buildings, lighthouse, and fish. Experimental results have demonstrated its good performance.
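Downstream of the detector, keeping only confident detections of the ten predefined classes is a simple filtering step. The detections and the confidence cut-off below are illustrative, not from the thesis:

```python
# The ten sea-object classes named in the abstract.
SEA_CLASSES = {"buoy", "ship", "island", "pier", "person",
               "waves", "rocks", "buildings", "lighthouse", "fish"}

def filter_detections(dets, min_score=0.5):
    """Keep (label, score) pairs that are confident sea-object classes."""
    return [(lbl, s) for lbl, s in dets
            if lbl in SEA_CLASSES and s >= min_score]

dets = [("ship", 0.92), ("car", 0.88), ("buoy", 0.31), ("person", 0.75)]
print(filter_detections(dets))  # [('ship', 0.92), ('person', 0.75)]
```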

    Unsupervised maritime target detection

    The unsupervised detection of maritime targets in grey-scale video is a difficult problem in maritime video surveillance. Most approaches assume that the camera is static and employ pixel-wise background modelling techniques for foreground detection; other methods rely on colour or thermal information to detect targets. These methods fail in real-world situations when the static-camera assumption is violated and colour or thermal data is unavailable. In defence and security applications, prior information and training samples of targets may be unavailable for training a classifier; learning a one-class classifier for the background may be impossible as well. Thus, an unsupervised online approach that attempts to learn from the scene data is highly desirable. In this thesis, the characteristics of the maritime scene and the ocean texture are exploited for foreground detection. Two fast and effective methods are investigated for target detection. Firstly, online region-based background texture models are explored for describing the appearance of the ocean. This approach avoids the need for frame registration because the model is built spatially rather than temporally. The texture appearance of the ocean is described using Local Binary Pattern (LBP) descriptors. Two models are proposed: one model is a Gaussian Mixture Model (GMM) and the other, referred to as a Sparse Texture Model (STM), is a set of histogram texture distributions. The foreground detections are optimized using a Graph Cut (GC) that enforces spatial coherence. Secondly, feature tracking is investigated as a means of detecting stable features in an image frame that typically correspond to maritime targets; unstable features are background regions. This approach is a Track-Before-Detect (TBD) concept, and it is implemented using a hierarchical scheme for motion estimation and matching of Scale-Invariant Feature Transform (SIFT) appearance features.
The experimental results show that these approaches are feasible for foreground detection in maritime video when the camera is either static or moving. Receiver Operating Characteristic (ROC) curves were generated for five test sequences and the Area Under the ROC Curve (AUC) was analyzed for the performance of the proposed methods. The texture models, without GC optimization, achieved an AUC of 0.85 or greater on four out of the five test videos. At a 50% True Positive Rate (TPR), these four test scenarios had a False Positive Rate (FPR) of less than 2%. With the GC optimization, an AUC of greater than 0.8 was achieved for all the test cases, and the FPR was reduced in all cases compared to the results without the GC. In comparison to the state of the art in background modelling for maritime scenes, our texture model methods achieved the best or comparable performance. The two texture models executed at a reasonable processing frame rate. The experimental results for TBD show that one may detect target features using a simple track score based on the track length. At 50% TPR, an FPR of less than 4% is achieved for four out of the five test scenarios. These results are very promising for maritime target detection.
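The texture models above describe ocean appearance with Local Binary Pattern (LBP) descriptors. A minimal sketch of the basic 8-neighbour LBP code for one pixel (the histogramming into a GMM or sparse texture model is not shown):

```python
def lbp_code(img, y, x):
    """8-bit LBP: compare the 8 neighbours (clockwise from top-left)
    against the centre pixel and pack the comparisons into one byte."""
    c = img[y][x]
    neigh = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
             img[y][x+1],   img[y+1][x+1], img[y+1][x],
             img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, v in enumerate(neigh):
        if v >= c:
            code |= 1 << bit
    return code

img = [[9, 9, 1],
       [1, 5, 1],
       [1, 9, 9]]
print(lbp_code(img, 1, 1))  # bits 0, 1, 4, 5 set: 51
```

Histograms of these codes over image regions give the texture distributions that the GMM and STM background models compare against.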