Challenges in video based object detection in maritime scenario using computer vision
This paper discusses the technical challenges in maritime image processing and machine vision for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video become difficult in maritime scenarios, and more advanced problems such as background subtraction and object detection in video streams are harder still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here.
Automatic detection, tracking and counting of birds in marine video content
Robust automatic detection of moving objects in a marine context is a multi-faceted problem due to the complexity of the observed scene. The dynamic nature of the sea caused by waves, boat wakes, and weather conditions poses huge challenges for the development of a stable background model. Moreover, camera motion, reflections, lightning, and illumination changes may contribute to false detections. Dynamic background subtraction (DBGS) is widely considered a solution to this issue in the scope of vessel detection for maritime traffic analysis. In this paper, the DBGS techniques suggested for ships are investigated and optimized for the monitoring and tracking of birds in marine video content. In addition to background subtraction, foreground candidates are filtered by a classifier based on their feature descriptors in order to remove non-bird objects. Different types of classifiers have been evaluated, and results on a ground-truth-labeled dataset of challenging video fragments show similar levels of precision and recall of about 95% for the best-performing classifier. The remaining foreground items are counted, and birds are tracked along the video sequence using spatio-temporal motion prediction. This allows marine scientists to study the presence and behavior of birds.
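The background-subtraction-plus-classifier pipeline described above can be sketched as follows. This is a minimal illustration using a running-average background model and a size-based candidate filter, not the paper's actual DBGS technique or trained feature-descriptor classifier; all class names, parameters, and thresholds are illustrative.

```python
import numpy as np

class DynamicBackgroundSubtractor:
    """Running-average background model with per-pixel thresholding:
    a simplified stand-in for the DBGS techniques discussed above."""

    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # background adaptation rate
        self.threshold = threshold  # foreground decision threshold
        self.background = None

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame.copy()
        # Foreground where the frame deviates strongly from the model
        mask = np.abs(frame - self.background) > self.threshold
        # Slowly absorb the current frame into the background model
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask

def filter_candidates(mask, min_area=5, max_area=500):
    """Keep only connected foreground regions whose pixel count is plausible
    for a bird; a real system would instead score feature descriptors with a
    trained classifier. Implemented as a 4-connected flood fill."""
    visited = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                stack, pixels = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if min_area <= len(pixels) <= max_area:
                    regions.append(pixels)
    return regions
```

In the paper's setting the surviving regions would then be counted and linked across frames by spatio-temporal motion prediction; here the filter simply returns the candidate pixel sets.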
Survey on video anomaly detection in dynamic scenes with moving cameras
The increasing popularity of compact and inexpensive cameras, e.g., dash
cameras, body cameras, and cameras mounted on robots, has sparked a growing
interest in detecting anomalies within dynamic scenes recorded by moving
cameras. However, existing reviews primarily concentrate on Video Anomaly
Detection (VAD) methods assuming static cameras. The VAD literature with moving
cameras remains fragmented, lacking comprehensive reviews to date. To address
this gap, we endeavor to present the first comprehensive survey on Moving
Camera Video Anomaly Detection (MC-VAD). We delve into the research papers
related to MC-VAD, critically assessing their limitations and highlighting
associated challenges. Our exploration encompasses three application domains:
security, urban transportation, and marine environments, which in turn cover
six specific tasks. We compile an extensive list of 25 publicly-available
datasets spanning four distinct environments: underwater, water surface,
ground, and aerial. We summarize the types of anomalies these datasets
correspond to or contain, and present five main categories of approaches for
detecting such anomalies. Lastly, we identify future research directions and
discuss novel contributions that could advance the field of MC-VAD. With this
survey, we aim to offer a valuable reference for researchers and practitioners
striving to develop and advance state-of-the-art MC-VAD methods.
Comment: Under review
Overview of contextual tracking approaches in information fusion
Proceedings of: Geospatial InfoFusion III, 2-3 May 2013, Baltimore, Maryland, United States. Many information fusion solutions work well in their intended scenarios, but the applications, supporting data, and capabilities change over varying contexts. One example is weather data for electro-optical target trackers, for which standards have evolved over decades. The operating conditions of technology changes, sensor/target variations, and the contextual environment can inhibit performance if not included in the initial system design. In this paper, we seek to define and categorize different types of contextual information. We describe five contextual information categories that support target tracking: (1) domain knowledge from a user to aid the information fusion process through selection, cueing, and analysis; (2) environment-to-hardware processing for sensor management; (3) known distribution of entities for situation/threat assessment; (4) historical traffic behavior for situation-awareness patterns of life (POL); and (5) road information for target tracking and identification. Appropriate characterization and representation of contextual information is needed for future high-level information fusion systems design to take advantage of the large data content available for a priori knowledge target tracking algorithm construction, implementation, and application.
Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons
With the advancement of maritime unmanned aerial vehicles (UAVs) and deep
learning technologies, the application of UAV-based object detection has become
increasingly significant in the fields of maritime industry and ocean
engineering. Endowed with intelligent sensing capabilities, maritime UAVs enable effective and efficient maritime surveillance. To further promote the development of maritime UAV-based object detection, this paper provides a comprehensive review of challenges, related methods, and UAV aerial datasets. Specifically, we first briefly summarize four challenges for object detection on maritime UAVs, i.e., object feature diversity, device limitations, maritime environment variability, and dataset scarcity. We then focus on computational methods to improve maritime UAV-based object detection performance, covering scale-aware and small-object detection, view-aware and rotated object detection, lightweight methods, and others. Next, we review the
UAV aerial image/video datasets and propose a maritime UAV aerial dataset named
MS2ship for ship detection. Furthermore, we conduct a series of experiments to
present the performance evaluation and robustness analysis of object detection
methods on maritime datasets. Eventually, we give the discussion and outlook on
future works for maritime UAV-based object detection. The MS2ship dataset is
available at
\href{https://github.com/zcj234/MS2ship}{https://github.com/zcj234/MS2ship}.
Comment: 32 pages, 18 figures
SeaDSC: A video-based unsupervised method for dynamic scene change detection in unmanned surface vehicles
Recently, there has been an upsurge in research on maritime vision, much of which is driven by the application of computer vision to Unmanned Surface Vehicles (USVs). Various sensor modalities such as camera, radar, and lidar have been used to perform tasks such as object detection, segmentation, object tracking, and motion planning. A large subset of this research focuses on video analysis, since most current vessel fleets carry onboard cameras for various surveillance tasks. Given the vast abundance of video data, video scene change detection is an initial and crucial stage for scene understanding in USVs. This paper outlines our approach to detecting dynamic scene changes in USVs. To the best of our knowledge, this work represents the first investigation of scene change detection in maritime vision applications. Our objective is to identify significant changes in the dynamic scenes of maritime video data, particularly those scenes that exhibit a high degree of resemblance. For dynamic scene change detection, we propose a completely unsupervised learning method. In contrast to earlier studies, we utilize a modified cutting-edge generative image model, VQ-VAE-2, trained on multiple marine datasets to enhance feature extraction. Next, we introduce an innovative similarity scoring technique that directly calculates the level of similarity in a sequence of consecutive frames by applying grid calculation to the retrieved features. Experiments were conducted on a nautical video dataset called RoboWhaler to showcase the efficient performance of our technique.
Comment: WACV 2024 conference
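The grid-based similarity scoring can be illustrated with a small sketch. The feature maps here are plain arrays standing in for VQ-VAE-2 encoder outputs, and the per-cell cosine similarity and change threshold are illustrative reconstructions, not the authors' exact formulation.

```python
import numpy as np

def grid_similarity(feat_a, feat_b, grid=4):
    """Partition two feature maps into a grid x grid layout, compute cosine
    similarity per cell, and return the mean. In the paper's pipeline the
    features would come from a VQ-VAE-2 encoder; here they are plain arrays."""
    h, w = feat_a.shape[:2]
    scores = []
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            a = feat_a[ys, xs].ravel()
            b = feat_b[ys, xs].ravel()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            # Treat two empty/zero cells as identical
            scores.append(a @ b / denom if denom > 0 else 1.0)
    return float(np.mean(scores))

def detect_scene_changes(features, threshold=0.8):
    """Flag a scene change whenever consecutive frame features drop below
    the similarity threshold (the threshold value is illustrative)."""
    changes = []
    for i in range(1, len(features)):
        if grid_similarity(features[i - 1], features[i]) < threshold:
            changes.append(i)
    return changes
```

Scoring cells independently rather than whole frames keeps a localized change (e.g., a vessel entering one corner) from being washed out by an otherwise static scene.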
Assessing High Dynamic Range Imagery Performance for Object Detection in Maritime Environments
The field of autonomous robotics has benefited from the implementation of convolutional neural networks in vision-based situational awareness. These strategies help identify surface obstacles and nearby vessels. This study proposes introducing high dynamic range (HDR) cameras on autonomous surface vessels, because these cameras capture images at different levels of exposure, revealing more detail than fixed-exposure cameras. To assess whether this is beneficial for autonomous vessels, this research created a dataset of labeled HDR images and single-exposure images, then trained object detection networks on these datasets to compare their performance. Faster R-CNN, SSD, and YOLOv5 were compared. Results showed that the Faster R-CNN and YOLOv5 networks trained on fixed-exposure images outperformed their HDR counterparts, while SSD performed better with HDR images. The stronger fixed-exposure performance is likely attributable to better feature extraction from fixed-exposure images. Despite these metrics, HDR images prove more beneficial in cases of extreme light exposure, since features are not lost.
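The HDR capture idea of combining several exposures so that detail survives extreme lighting can be illustrated with a toy exposure-fusion sketch. Real HDR pipelines use multi-scale fusion and tone mapping; the well-exposedness weighting and sigma value below are illustrative assumptions, not the study's capture method.

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Naive exposure fusion over a stack of same-size frames with intensities
    in [0, 1]: each pixel is weighted by how close it sits to mid-gray (0.5),
    so the best-exposed frame dominates at every location."""
    stack = np.stack([e.astype(np.float64) for e in exposures])
    # Well-exposedness weight: Gaussian centered on mid-gray
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    # Normalize weights across the exposure stack at each pixel
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)
```

Regions blown out in a long exposure get low weight there but high weight in a shorter exposure, which is the mechanism by which HDR capture preserves features under extreme light.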