186 research outputs found
Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review
Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement shipboard radar in detecting small
sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and
foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object
detection technologies have been gradually applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches in which the advantages
and drawbacks of each technique are compared, covering four essential aspects: EO sensors and image
types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In
particular, sea-surface object detection based on deep learning methods is thoroughly analyzed and
compared, with highly influential public datasets introduced as benchmarks to verify the effectiveness of
these approaches.
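The traditional pipeline mentioned in this abstract (background modelling followed by foreground segmentation) can be sketched minimally. This is an illustrative toy, not a method from any surveyed paper: the median background model, frame sizes, and threshold are all assumptions.

```python
import numpy as np

def background_subtract(frames, threshold=5.0):
    """Return a candidate-object mask for the newest frame.

    The sea background is modelled as the per-pixel median of the earlier
    frames; pixels in the newest frame that deviate from it by more than
    `threshold` intensity units are flagged as foreground.
    """
    stack = np.stack(frames).astype(float)      # shape (T, H, W)
    background = np.median(stack[:-1], axis=0)  # model the sea from history
    return np.abs(stack[-1] - background) > threshold

# Toy sequence: a calm sea of intensity 10 with one bright object
# appearing in the final frame.
frames = [np.full((4, 4), 10.0) for _ in range(5)]
frames[-1][2, 2] = 60.0
mask = background_subtract(frames)
print(int(mask.sum()))  # → 1 (only the object pixel is flagged)
```

Real maritime scenes would need a more robust background model to cope with waves and glare, which is exactly the limitation that motivates the deep-learning methods surveyed in the article.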
Application of Multi-Sensor Fusion Technology in Target Detection and Recognition
Application of multi-sensor fusion technology has drawn considerable industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets by considering multiple sensors, and they can achieve a detailed description of the environment and accurate detection of targets of interest based on information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles are expected to emphasize one or more of three facets: architectures, algorithms, and applications, with published papers dealing with fundamental theoretical analyses as well as demonstrating their application to real-world problems.
Ship recognition on the sea surface using aerial images taken by Uav : a deep learning approach
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Oceans are vital to mankind: they are a major source of
food, they have a large impact on the global environmental equilibrium, and it is
over the oceans that most of the world's commerce is carried. Thus, maritime surveillance
and monitoring, in particular identifying the ships used, is of great importance to
oversee activities like fishing, marine transportation, navigation in general, illegal
border encroachment, and search and rescue operations. In this thesis, we used images
obtained with Unmanned Aerial Vehicles (UAVs) over the Atlantic Ocean to identify
what type of ship (if any) is present in a given location. Images generated from UAV
cameras suffer from camera motion, scale variability, variability in the sea surface and
sun glares. Extracting information from these images is challenging and is mostly done
by human operators, but advances in computer vision technology and development of
deep learning techniques in recent years have made it possible to do so automatically.
We used four state-of-the-art pretrained deep learning network models, namely
VGG16, Xception, ResNet, and InceptionResNet, trained on the ImageNet dataset, modified
their original structure using transfer-learning-based fine-tuning techniques, and then
trained them on our dataset to create new models. We managed to achieve very high
accuracy (99.6 to 99.9% correct classifications) when classifying the ships that appear
on the images of our dataset. With such a high success rate (albeit at the cost of high
computing power), we can proceed to implement these algorithms on maritime patrol
UAVs, and thus improve Maritime Situational Awareness.
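The transfer-learning recipe this thesis describes — a frozen pretrained backbone with a small retrained classification head — can be illustrated with a toy stand-in. The random-projection "backbone", dimensions, learning rate, and synthetic data below are all hypothetical, not the thesis's actual Keras/TensorFlow setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random projection
# followed by ReLU. In the thesis this role is played by VGG16/Xception/
# ResNet/InceptionResNet with ImageNet weights kept frozen.
W_backbone = rng.standard_normal((8, 32)) / np.sqrt(8)

def features(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen features, never updated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary "ship vs. no ship" data in raw input space.
X = rng.standard_normal((200, 8))
y = (X[:, 0] > 0).astype(float)

# Fine-tuning: only the small classification head is trained.
w, b = np.zeros(32), 0.0
F = features(X)
for _ in range(3000):                       # gradient descent on the head
    p = sigmoid(F @ w + b)
    grad = p - y
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((sigmoid(F @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The design choice mirrors the thesis: reusing fixed, generic features learned on a large dataset means only a small head must be trained on the (much smaller) target dataset.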
Ocean remote sensing techniques and applications: a review (Part II)
As discussed in the first part of this review paper, Remote Sensing (RS) systems are great tools to study various oceanographic parameters. Part I of this study described different passive and active RS systems and six applications of RS in ocean studies, including Ocean Surface Wind (OSW), Ocean Surface Current (OSC), Ocean Wave Height (OWH), Sea Level (SL), Ocean Tide (OT), and Ship Detection (SD). In Part II, the remaining nine important applications of RS systems for ocean environments, including Iceberg, Sea Ice (SI), Sea Surface Temperature (SST), Ocean Surface Salinity (OSS), Ocean Color (OC), Ocean Chlorophyll (OCh), Ocean Oil Spill (OOS), Underwater Ocean, and Fishery, are comprehensively reviewed and discussed. For each application, the applicable RS systems, their advantages and disadvantages, various RS and Machine Learning (ML) techniques, and several case studies are discussed.
2015 Oil Observing Tools: A Workshop Report
Since 2010, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have provided satellite-based pollution surveillance in United States waters to regulatory agencies such as the United States Coast Guard (USCG). These technologies provide agencies with useful information regarding possible oil discharges. Unfortunately, there has been confusion as to how to interpret the images collected by these satellites and other aerial platforms, which can generate misunderstandings during spill events. Remote sensor packages on aircraft and satellites have advantages and disadvantages vis-à-vis human observers, because they do not “see” features or surface oil the same way. In order to improve observation capabilities during oil spills, applicable technologies must be identified and then evaluated with respect to their advantages and disadvantages for the incident. In addition, differences between sensors (e.g., visual, IR, multispectral sensors, radar) and platform packages (e.g., manned/unmanned aircraft, satellites) must be understood so that reasonable deployment choices can be made and the resulting data correctly interpreted for decision support. NOAA convened an Oil Observing Tools Workshop to focus on the above actions and identify training gaps for oil spill observers and remote sensing interpretation to improve future oil surveillance, observation, and mapping during spills. The Coastal Response Research Center (CRRC) assisted NOAA’s Office of Response and Restoration (ORR) with this effort. The workshop was held on October 20-22, 2015 at NOAA’s Gulf of Mexico Disaster Response Center in Mobile, AL. The expected outcome of the workshop was an improved understanding and greater use of technology to map and assess oil slicks during actual spill events. Specific workshop objectives included:
•Identify new developments in oil observing technologies useful for real-time (or near real-time) mapping of spilled oil during emergency events.
•Identify merits and limitations of current technologies and their usefulness to emergency response mapping of oil and reliable prediction of oil surface transport and trajectory forecasts. Current technologies include: the traditional human aerial observer, unmanned aircraft surveillance systems, aircraft with specialized sensor packages, and satellite earth observing systems.
•Assess training needs for visual observation (human observers with cameras) and sensor technologies (including satellites) to build skills and enhance proper interpretation for decision support during actual events.
Autonomous real-time infrared detection of sub-surface vessels for unmanned aircraft systems
The threat of small self-propelled semi-submersible vessels cannot be overstated;
payloads from drugs to weapons of mass destruction could be housed in these small,
inconspicuous vessels. With a current apprehension rate of approximately 10%, a
method that increases interdiction of this illegal traffic is required for national
security, both in the ports along the coastlines of Canada and in the rest of North
America. A smart, autonomous payload containing an infrared imaging device, designed
for use in small unmanned aircraft systems for the specific mission of detecting
self-propelled semi-submersibles over the vast ocean coastline, will address the current
security needs.
Thermal imagery of the disturbed colder water layers driven to the surface by the
vessel will allow for the detection of this traffic using long-wave infrared technology.
Infrared signatures of ship wakes are highly variable in both persistence and temperature
contrast compared to the surrounding surface water; thus, infrared imaging
devices with high resolution, high responsivity, and a very low minimum resolvable
temperature will be required to provide high-quality imagery for airborne detection
of the thermal wake.
A theoretical understanding of the physics associated with the energy collected
by the infrared sensor and the resulting infrared images is provided. The factors
affecting the resulting image with respect to the camera properties
are detailed. A variety of examples of airborne thermal images are presented, with
detailed explanations of the imaged scenes based on theory and sensor characteristics
provided in the previous sections.
Infrared images taken over the Atlantic and Pacific oceans from manned and unmanned
aircraft platforms are presented. Temperature measurements taken using
Vemco Minilog II temperature loggers confirmed the thermal stratification of the upper
5 meters of the water column. Thermal scarring due to colder water upwelled to the surface was noted during the daytime under normal conditions, with temperature
differences found to be consistent with the measured temperature profile. A custom
gimbal system, with a corresponding ground control station for real-time visual
feedback, is presented.
An algorithm for the detection of submerged vessel ship wakes using a LWIR
camera, specifically for a small unmanned aircraft with limited power, space, and
computing resources, is developed. A time-sequential processing method is presented to
reduce the required computing, while allowing high frame rate, real-time operation.
Moreover, a windowed triple-vote method is continually applied to ensure that the
detection mode is correctly set by the algorithm, while ignoring unexpected targets in
the image. A simple background-estimation method is presented to remove any
non-uniformity in the captured images, resulting in a high detection rate with few
false alarms. Finally, a complete, mission-ready payload system is prepared for small
UAS platforms, with an accuracy rate greater than 97% for the detection of
self-propelled semi-submersible vessels.
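The detection pipeline summarized in this abstract (background estimation, thresholding, and a windowed triple-vote over consecutive frames) might be sketched as follows. The row-mean background model, threshold, 2-of-3 voting rule, and synthetic frames are simplifying assumptions, not the thesis's actual algorithm:

```python
from collections import deque

import numpy as np

def remove_background(frame):
    """Toy non-uniformity correction: subtract each row's mean, standing in
    for the background-estimation step described in the abstract."""
    return frame - frame.mean(axis=1, keepdims=True)

def frame_has_wake(frame, threshold=5.0):
    """A frame votes 'wake' if any background-corrected pixel exceeds
    the threshold."""
    return bool((remove_background(frame) > threshold).any())

class TripleVote:
    """Declare a detection only when at least 2 of the last 3 frames agree,
    suppressing single-frame glints and unexpected targets."""
    def __init__(self):
        self.votes = deque(maxlen=3)

    def update(self, frame):
        self.votes.append(frame_has_wake(frame))
        return sum(self.votes) >= 2

rng = np.random.default_rng(1)
sea = lambda: rng.normal(20.0, 0.5, (8, 8))  # near-uniform thermal background

wake = sea()
wake[4, 2:6] += 20.0                         # a warm wake scar in one row

detector = TripleVote()
results = [detector.update(f) for f in (sea(), wake, wake, wake)]
print(results)  # → [False, False, True, True]
```

Processing each frame against a cheap running vote, rather than analysing full sequences, is what makes this style of pipeline feasible on a power- and compute-limited small UAS payload.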
Deep Learning Based Multi-Modal Fusion Architectures for Maritime Vessel Detection
Object detection is a fundamental computer vision task for many real-world applications. In the maritime environment, this task is challenging due to varying light, viewing distances, weather conditions, and sea waves. In addition, light reflection, camera motion, and illumination changes may cause false detections. To address this challenge, we present three fusion architectures to fuse two imaging modalities: visible and infrared. These architectures can provide complementary information from the two modalities at different levels: pixel level, feature level, and decision level. They employ deep learning for performing fusion and detection. We investigate the performance of the proposed architectures on a real marine image dataset, which was captured by color and infrared cameras on board a vessel in the Finnish archipelago. The cameras are employed for developing autonomous ships and collect data in a range of operational and climatic conditions. Experiments show that the feature-level fusion architecture outperforms the other fusion-level architectures.
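The three fusion levels named in this abstract can be sketched schematically. The blending weight, feature shapes, and score averaging below are illustrative assumptions; the paper's actual architectures are deep networks, not these toy operators:

```python
import numpy as np

def pixel_fusion(rgb, ir, alpha=0.5):
    """Pixel level: blend the two registered images before any detection."""
    return alpha * rgb + (1.0 - alpha) * ir

def feature_fusion(f_rgb, f_ir):
    """Feature level: concatenate per-modality feature maps so one shared
    detection head sees both (the best-performing level in the paper)."""
    return np.concatenate([f_rgb, f_ir], axis=-1)

def decision_fusion(score_rgb, score_ir):
    """Decision level: run a detector per modality, then combine the
    per-detection confidence scores (here, by simple averaging)."""
    return (score_rgb + score_ir) / 2.0

rgb, ir = np.ones((4, 4)), np.zeros((4, 4))
print(pixel_fusion(rgb, ir)[0, 0])                              # → 0.5
print(feature_fusion(np.ones((2, 8)), np.zeros((2, 8))).shape)  # → (2, 16)
print(decision_fusion(1.0, 0.5))                                # → 0.75
```

The trade-off the levels encode: earlier fusion preserves more raw information but requires pixel-accurate registration of the modalities, while later fusion is more robust to misalignment but discards cross-modal cues.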
Remote sensing for cost-effective blue carbon accounting
Blue carbon ecosystems (BCE) include mangrove forests, tidal marshes, and seagrass meadows, all of which are currently under threat, putting their contribution to mitigating climate change at risk. Although certain challenges and trade-offs exist, remote sensing offers a promising avenue for transparent, replicable, and cost-effective accounting of many BCE at unprecedented temporal and spatial scales. The United Nations Framework Convention on Climate Change (UNFCCC) has issued guidelines for developing blue carbon inventories to incorporate into Nationally Determined Contributions (NDCs). Yet, there is little guidance on remote sensing techniques for monitoring, reporting, and verifying blue carbon assets. This review constructs a unified roadmap for applying remote sensing technologies to develop cost-effective carbon inventories for BCE – from local to global scales. We summarise and discuss (1) current standard guidelines for blue carbon inventories; (2) traditional and cutting-edge remote sensing technologies for mapping blue carbon habitats; (3) methods for translating habitat maps into carbon estimates; and (4) a decision tree to assist users in determining the most suitable approach depending on their areas of interest, budget, and required accuracy of blue carbon assessment. We designed this work to support UNFCCC-approved IPCC guidelines with specific recommendations on remote sensing techniques for GHG inventories. Overall, remote sensing technologies are robust and cost-effective tools for monitoring, reporting, and verifying blue carbon assets and projects. Increased appreciation of these techniques can promote a technological shift towards greater policy and industry uptake, enhancing the scalability of blue carbon as a Natural Climate Solution worldwide.