Machine learning methods for discriminating natural targets in seabed imagery
The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to assist human analysts by expediting these tedious interpretative tasks using machine methods. Some novel approaches are devised and investigated for solving the application problems.
These investigations are compartmentalised in four coherent case studies linked by common underlying technical themes and methods. The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds, a task that would take days for a human analyst to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation.
Results from a comparison of several textural feature creation methods on sonar waterfall imagery show that Gabor filter banks yield some of the best results. A further empirical investigation into the filter bank features created on sonar mosaic imagery leads to the identification of a useful configuration and filter parameter ranges for discriminating the target textures in the imagery. Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture
classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world
sonar mosaic imagery.
A number of technical challenges arose and these were
surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach in an application to the problem of image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should be generalisable to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation
of pockmark and Sabellaria discrimination is feasible within this framework.
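The Gabor filter bank texture features highlighted in the second case study can be sketched in outline. The kernel construction, frequencies, orientations, and summary statistics below are illustrative assumptions, not the configuration identified in the thesis:

```python
import numpy as np
from scipy import ndimage as ndi

def gabor_kernel(freq, theta, sigma=3.0, gamma=0.5, size=15):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), orientations=4):
    """Mean and variance of each filter's magnitude response over the patch."""
    image = (image - image.mean()) / (image.std() + 1e-8)
    features = []
    for freq in frequencies:
        for i in range(orientations):
            kernel = gabor_kernel(freq, theta=i * np.pi / orientations)
            real = ndi.convolve(image, np.real(kernel), mode='wrap')
            imag = ndi.convolve(image, np.imag(kernel), mode='wrap')
            mag = np.hypot(real, imag)               # texture "energy" per pixel
            features.extend([mag.mean(), mag.var()])
    return np.array(features)

rng = np.random.default_rng(0)
patch = rng.normal(size=(64, 64))   # synthetic stand-in for a sonar texture patch
print(gabor_features(patch).shape)  # (24,): 3 frequencies x 4 orientations x 2 stats
```

Each texture patch is thus reduced to a fixed-length feature vector that a classifier can discriminate on.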
A review of marine geomorphometry, the quantitative study of the seafloor
Geomorphometry, the science of quantitative terrain characterization, has traditionally focused on the investigation of terrestrial landscapes. However, the dramatic increase in the availability of digital bathymetric data and the
increasing ease by which geomorphometry can be investigated using geographic information systems (GISs) and spatial analysis software have prompted interest in employing geomorphometric techniques to investigate the marine environment. Over the last decade or so, a multitude of geomorphometric techniques (e.g. terrain attributes, feature extraction,
automated classification) have been applied to characterize
seabed terrain from the coastal zone to the deep sea. Geomorphometric techniques are, however, not as varied, nor as
extensively applied, in marine as they are in terrestrial environments. This is at least partly due to difficulties associated with capturing, classifying, and validating terrain characteristics underwater. There is, nevertheless, much common
ground between terrestrial and marine geomorphometry applications and it is important that, in developing marine geomorphometry, we learn from experiences in terrestrial studies. However, not all terrestrial solutions can be adopted by
marine geomorphometric studies since the dynamic, four-dimensional (4-D) nature of the marine environment causes
its own issues throughout the geomorphometry workflow.
For instance, issues with underwater positioning, variations
in sound velocity in the water column affecting acoustic-based mapping, and our inability to directly observe and
measure depth and morphological features on the seafloor
are all issues specific to the application of geomorphometry in the marine environment. Such issues fuel the need for
a dedicated scientific effort in marine geomorphometry.
This review aims to highlight the relatively recent growth
of marine geomorphometry as a distinct discipline, and offers
the first comprehensive overview of marine geomorphometry
to date. We address all five main steps of geomorphometry, from data collection to the application of terrain attributes
and features. We focus on how these steps are relevant to marine geomorphometry and also highlight differences and similarities from terrestrial geomorphometry. We conclude with
recommendations and reflections on the future of marine geomorphometry. To ensure that geomorphometry is used and
developed to its full potential, there is a need to increase
awareness of (1) marine geomorphometry amongst scientists already engaged in terrestrial geomorphometry, and of
(2) geomorphometry as a science amongst marine scientists
with a wide range of backgrounds and experiences.
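One of the terrain attributes discussed in this review, slope, illustrates the geomorphometric workflow in miniature: derive a surface attribute from a gridded digital bathymetry model. The grid, cell size, and finite-difference gradient choice below are assumptions for illustration:

```python
import numpy as np

def slope_degrees(bathymetry, cell_size=1.0):
    """Slope (degrees) from depth gradients via central finite differences."""
    dz_dy, dz_dx = np.gradient(bathymetry, cell_size)  # axis 0 (rows), axis 1 (cols)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Example: a synthetic seafloor deepening 1 m per metre eastward
x = np.arange(50, dtype=float)
dem = -np.tile(x, (50, 1))              # depth decreases linearly with x
slope = slope_degrees(dem, cell_size=1.0)
print(round(float(slope[25, 25]), 1))   # 45.0 for a unit gradient
```

Attributes like this, computed per grid cell, become the inputs to feature extraction and automated classification further along the workflow.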
Characterising the ocean frontier : a review of marine geomorphometry
Geomorphometry, the science that quantitatively describes terrains, has traditionally focused on the investigation
of terrestrial landscapes. However, the dramatic increase in the availability of digital bathymetric data and the increasing
ease by which geomorphometry can be investigated using Geographic Information Systems (GIS) have prompted interest in
employing geomorphometric techniques to investigate the marine environment. Over the last decade, a suite of
geomorphometric techniques have been applied (e.g. terrain attributes, feature extraction, automated classification) to characterise seabed terrain from the coastal zone to the deep sea. Geomorphometric techniques are,
however, not as varied, nor as extensively applied, in marine as they are in terrestrial environments. This is at least partly due
to difficulties associated with capturing, classifying, and validating terrain characteristics underwater. There is nevertheless
much common ground between terrestrial and marine geomorphometry applications and it is important that, in developing the
science and application of marine geomorphometry, we build on the lessons learned from terrestrial studies. We note, however, that not all terrestrial solutions can be adopted by marine geomorphometric studies since the dynamic, four-
dimensional nature of the marine environment causes its own issues, boosting the need for a dedicated scientific effort in
marine geomorphometry.
This contribution offers the first comprehensive review of marine geomorphometry to date. It addresses all five main
steps of geomorphometry, from data collection to the application of terrain attributes and features. We focus on how these steps are relevant to marine geomorphometry and also highlight differences from terrestrial geomorphometry. We conclude
with recommendations and reflections on the future of marine geomorphometry.
Visually Augmented Navigation for Autonomous Underwater Vehicles
As autonomous underwater vehicles (AUVs) are becoming routinely used in an exploratory context for ocean science, the goal of visually augmented navigation (VAN) is to improve the near-seafloor navigation precision of such vehicles without imposing the burden of having to deploy additional infrastructure. This is in contrast to traditional acoustic long baseline navigation techniques, which require the deployment, calibration, and eventual recovery of a transponder network. To achieve this goal, VAN is formulated within a vision-based simultaneous localization and mapping (SLAM) framework that exploits the systems-level complementary aspects of a camera and strap-down sensor suite. The result is an environmentally based navigation technique robust to the peculiarities of low-overlap underwater imagery. The method employs a view-based representation where camera-derived relative-pose measurements provide spatial constraints, which enforce trajectory consistency and also serve as a mechanism for loop closure, allowing for error growth to be independent of time for revisited imagery. This article outlines the multisensor VAN framework and demonstrates it to have compelling advantages over a purely vision-only approach by: 1) improving the robustness of low-overlap underwater image registration; 2) setting the free gauge scale; and 3) allowing for a disconnected camera-constraint topology.
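The view-based idea (relative-pose measurements composed into a trajectory, with revisited imagery providing loop-closure constraints) can be illustrated in two dimensions. The SE(2) simplification and the synthetic measurements below are assumptions made for brevity; VAN itself operates with a full 6-DOF pose:

```python
import numpy as np

def compose(pose, rel):
    """Compose SE(2) poses (x, y, heading): pose followed by relative motion rel."""
    x, y, th = pose
    dx, dy, dth = rel
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

# Dead-reckon a square survey from four relative-pose "measurements"
rels = [(10.0, 0.0, np.pi / 2)] * 4             # ideal: returns to the start
noisy = [(10.2, 0.1, np.pi / 2 + 0.01)] * 4     # with per-leg drift
pose = (0.0, 0.0, 0.0)
for r in noisy:
    pose = compose(pose, r)

# Loop closure: re-registering the first view exposes the accumulated drift,
# which a SLAM back end would then distribute over the whole trajectory.
residual = np.hypot(pose[0], pose[1])
print(residual > 0)  # nonzero drift detected at loop closure
```

The camera-derived relative poses play the role of `rels` here; the loop-closure residual is what keeps error growth bounded for revisited imagery.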
Acoustic data optimisation for seabed mapping with visual and computational data mining
Oceans cover 70% of Earth’s surface but little is known about their waters.
While echosounders, often used for exploration of our oceans, have developed at
a tremendous rate since World War II, the methods used to analyse and interpret the data
have remained largely the same. These methods are inefficient, time-consuming, and often
costly when dealing with the large data volumes that modern echosounders produce. This PhD
project will examine the complexity of the de facto seabed mapping technique by
exploring and analysing acoustic data with a combination of data mining and visual
analytic methods.
First we test the redundancy issues in multibeam echosounder (MBES) data
by using the component plane visualisation of a Self Organising Map (SOM). A total
of 16 visual groups were identified among the 132 statistical data descriptors. The
optimised MBES dataset had 35 attributes from 16 visual groups and represented a
73% reduction in data dimensionality. A combined Principal Component Analysis
(PCA) + k-means was used to cluster both the datasets. The cluster results were
visually compared as well as internally validated using four different internal
validation methods.
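The PCA + k-means step described above might be sketched as follows. The synthetic two-class data, component count, and cluster count are placeholder assumptions, not the survey data or the optimised 35-attribute set:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two synthetic "seabed classes" in a 132-attribute space
a = rng.normal(0.0, 1.0, size=(200, 132))
b = rng.normal(3.0, 1.0, size=(200, 132))
X = np.vstack([a, b])

X = StandardScaler().fit_transform(X)            # attributes on a common scale
scores = PCA(n_components=10).fit_transform(X)   # retain the leading components
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# The two synthetic classes should separate almost perfectly
same = (labels[:200] == labels[0]).mean()
print(same >= 0.95)
```

In practice the number of retained components and clusters would be chosen with the internal validation indices mentioned in the text.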
Next we tested two novel approaches in singlebeam echosounder (SBES)
data processing and clustering – using visual exploration for outlier detection and
direct clustering of time series echo returns. Visual exploration identified further
outliers that the automatic procedure was not able to find. The SBES data were then
clustered directly. The internal validation indices suggested the optimal number of
clusters to be three. This is consistent with the assumption that the SBES time series
represented the subsurface classes of the seabed.
Next the SBES data were joined with the corresponding MBES data based on
identification of the closest locations between MBES and SBES. Two algorithms,
PCA + k-means and fuzzy c-means, were tested and the results visualised. On visual
comparison, the cluster boundaries appeared better defined than those from
the clustered MBES data alone. The results indicate that adding the SBES data did
in fact improve the boundary definitions.
Next the cluster results from the analysis chapters were validated against
ground truth data using a confusion matrix and kappa coefficients. For MBES, the
classes derived from optimised data yielded better accuracy compared to that of the
original data. For SBES, direct clustering was able to provide a relatively reliable
overview of the underlying classes in the survey area. The combined MBES + SBES
data provided by far the best accuracy for mapping with almost a 10% increase in
overall accuracy compared to that of the original MBES data.
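The confusion-matrix and kappa validation used above can be sketched with scikit-learn; the labels here are illustrative placeholders, not the survey ground truth:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

truth     = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])  # ground-truth classes
predicted = np.array([0, 0, 1, 1, 1, 1, 2, 2, 2, 0])  # cluster-derived classes

cm = confusion_matrix(truth, predicted)
kappa = cohen_kappa_score(truth, predicted)   # chance-corrected agreement
overall = np.trace(cm) / cm.sum()             # overall accuracy
print(overall)  # 0.8: 8 of 10 samples on the diagonal
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is reported alongside the raw overall accuracy.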
The results proved promising for optimising the acoustic data and
improving the quality of seabed mapping. Furthermore, these approaches have the
potential for significant time and cost savings in the seabed mapping process. Finally,
some future directions are recommended for the findings of this research project, with
the consideration that they could contribute to the further development of seabed
mapping practice at mapping agencies worldwide.
Large-area visually augmented navigation for autonomous underwater vehicles
Submitted to the Joint Program in Applied Ocean Science & Engineering
in partial fulfillment of the requirements for the degree of Doctor of Philosophy
at the Massachusetts Institute of Technology
and the Woods Hole Oceanographic Institution
June 2005

This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that
overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate
the sparsification methodology employed by sparse extended information filters (SEIFs)
and offer new insight as to why, and how, its approximation can lead to inconsistencies in
the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix. In summary, this thesis advances the current state-of-the-art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m2 of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues including scalability, 6 degree of
freedom motion, unstructured environments, and visual perception.

This work was funded in part by the CenSSIS ERC of the National Science Foundation
under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a
grant from the Penzance Foundation, and in part by a NDSEG Fellowship awarded through
the Department of Defense.
Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey
The Internet of Underwater Things (IoUT) is an emerging communication
ecosystem developed for connecting underwater objects in maritime and
underwater environments. The IoUT technology is intricately linked with
intelligent boats and ships, smart shores and oceans, automatic marine
transportations, positioning and navigation, underwater exploration, disaster
prediction and prevention, as well as with intelligent monitoring and security.
The IoUT has an influence at various scales ranging from a small scientific
observatory, to a midsized harbor, and to covering global oceanic trade. The
network architecture of IoUT is intrinsically heterogeneous and should be
sufficiently resilient to operate in harsh environments. This creates major
challenges in terms of underwater communications, whilst relying on limited
energy resources. Additionally, the volume, velocity, and variety of data
produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise
to the concept of Big Marine Data (BMD), which has its own processing
challenges. Hence, conventional data processing techniques will falter, and
bespoke Machine Learning (ML) solutions have to be employed for automatically
learning the specific BMD behavior and features facilitating knowledge
extraction and decision support. The motivation of this paper is to
comprehensively survey the IoUT, BMD, and their synthesis. It also aims to
explore the nexus of BMD with ML. We set out from underwater data collection
and then discuss the family of IoUT data communication techniques with an
emphasis on the state-of-the-art research challenges. We then review the suite
of ML solutions suitable for BMD handling and analytics. We treat the subject
deductively from an educational perspective, critically appraising the material
surveyed.

Comment: 54 pages, 11 figures, 19 tables, IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.
Sonar image interpretation for sub-sea operations
Mine Counter-Measure (MCM) missions are conducted to neutralise underwater
explosives. Automatic Target Recognition (ATR) assists operators by
increasing the speed and accuracy of data review. ATR embedded on vehicles
enables adaptive missions which increase the speed of data acquisition. This
thesis addresses three challenges: the speed of data processing, the robustness of
ATR to environmental conditions, and the large quantities of data required to
train an algorithm.
The main contribution of this thesis is a novel ATR algorithm. The algorithm
uses features derived from the projection of 3D boxes to produce a set of 2D
templates. The template responses are independent of grazing angle, range
and target orientation. Integer skewed integral images are derived to accelerate
the calculation of the template responses. The algorithm is compared
to the Haar cascade algorithm. For a single model of sonar and cylindrical
targets the algorithm reduces the Probability of False Alarm (PFA) by 80%
at a Probability of Detection (PD) of 85%. The algorithm is trained on target
data from another model of sonar. The PD is only 6% lower even though no
representative target data was used for training.
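The integral-image idea behind the accelerated template responses can be sketched in its standard axis-aligned form. The thesis derives an integer skewed variant; the constant-time box-sum principle shown below is the same:

```python
import numpy as np

def integral_image(img):
    """Cumulative 2-D sum with a zero-padded first row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from just four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16).reshape(4, 4)   # toy "sonar image"
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3))      # sum of img[1:3, 1:3] = 5+6+9+10 = 30
```

Once the integral image is built, every rectangular template response costs a constant four lookups regardless of template size, which is what makes dense template matching fast.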
The second major contribution is an adaptive ATR algorithm that uses local
sea-floor characteristics to address the problem of ATR robustness with
respect to the local environment. A dual-tree wavelet decomposition of the
sea-floor and a Markov Random Field (MRF) based graph-cut algorithm are
used to segment the terrain. A Neural Network (NN) is then trained to filter
ATR results based on the local sea-floor context. It is shown, for the Haar
Cascade algorithm, that the PFA can be reduced by 70% at a PD of 85%.
Speed of data processing is addressed using novel pre-processing techniques.
The standard three class MRF, for sonar image segmentation, is formulated
using graph-cuts. Consequently, a 1.2 million pixel image is segmented in
1.2 seconds. Additionally, local estimation of class models is introduced to
remove range dependent segmentation quality. Finally, an A* graph search
is developed to remove the surface return, a line of saturated pixels often
detected as false alarms by ATR. The A* search identifies the surface return
in 199 of 220 images tested with a runtime of 2.1 seconds. The algorithm is
robust to the presence of ripples and rocks.
Exactly Sparse Delayed-State Filters for View-Based SLAM
This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to a place it has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as sparse extended information filter or thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information space parameterization without incurring any sparse approximation error. Therefore, it can produce equivalent results to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic.
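The exact-sparsity insight can be demonstrated numerically. Scalar states and unit-information constraints are simplifying assumptions; the point is that each relative-pose measurement touches only the information-matrix blocks of the two states it links:

```python
import numpy as np

n = 6                          # six delayed states (views)
Lam = np.zeros((n, n))         # information (inverse covariance) matrix

def add_constraint(Lam, i, j, w=1.0):
    """A relative measurement between states i and j fills only blocks
    (i,i), (j,j), (i,j), (j,i) of the information matrix."""
    Lam[i, i] += w
    Lam[j, j] += w
    Lam[i, j] -= w
    Lam[j, i] -= w

for k in range(n - 1):          # sequential scan-matching / odometry chain
    add_constraint(Lam, k, k + 1)
add_constraint(Lam, 0, n - 1)   # one loop closure (revisited view)

# Fill beyond the tridiagonal band appears only at the loop-closure pair
off = np.abs(np.triu(Lam, k=2)) > 0
print(int(off.sum()))  # 1: exactly one extra off-diagonal entry, at (0, n-1)
```

No pruning or approximation is needed to obtain this sparsity, which is the contrast the paper draws with feature-based information filters.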