Image based real-time ice load prediction tool for ship and offshore platform in managed ice field
The increased activity in Arctic waters warrants modelling of ice properties and ice-structure
interaction forces to ensure the safe operation of ships and offshore platforms. Several established
analytical and numerical ice force estimation models can be found in the literature. Recently,
researchers have been working on Machine Learning (ML) based, data-driven force predictors
trained on experimental data and field measurements. The application of both traditional and ML-based
image processing to extract information from ice floe images has also been reported in the recent
literature, because extracting ice features from real-time videos and images can significantly
improve ice force prediction.
However, there is room for improvement in those studies. For example, accurate extraction of
ice floe information is still challenging because of the floes' complex and varied shapes, their
colour similarity to the surrounding water, and light reflections on their surfaces. Moreover, real
ice floes are often found in groups with overlapping and/or connected boundaries, making detection
even more challenging because the edges are weaker in such situations. The development of an
efficient coupled model, which extracts information from ice floe images and trains a force
predictor on the extracted dataset, is still an open problem.
This research presents two Hybrid force prediction models. Instead of using analytical or
numerical approaches, the Hybrid models extract floe characteristics directly from the images and
then train ML-based force predictors on those extracted floe parameters. The first model
extracts ice features from images using traditional image processing techniques and then uses a
Support Vector Machine (SVM) and a Feed-Forward Neural Network (FFNN) to develop two separate
force predictors. The improved ice image processing technique used here can extract useful ice
properties from a closely connected, unevenly illuminated floe field with various floe sizes and
shapes. The second model extracts ice features from images using a Region-based Convolutional
Neural Network (RCNN) and then trains two separate force predictors using SVM and FFNN, as in
the first model.
The dataset for training the SVM and FFNN force predictors combines variables extracted from the
images (floe number, density, sizes, etc.) with variables taken from the experimental analysis
results (ship speed, floe thickness, force, etc.). The performance of both Hybrid models, in terms
of image segmentation and force prediction, is analyzed and compared to establish their validity
and applicability.
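As a toy illustration of this feature-to-force pipeline, the sketch below fits a least-squares linear model to synthetic floe features. The linear model is a deliberately simple stand-in for the SVM and FFNN predictors, and the feature names, ground-truth weights, and noise level are illustrative assumptions, not the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature table, one row per experiment run. Columns stand for
# floe count, ice concentration, mean floe size, ship speed, floe thickness.
X = rng.uniform(0.0, 1.0, size=(200, 5))

# Synthetic "measured" force: a known linear trend plus small noise,
# standing in for the experimental force measurements.
true_w = np.array([0.5, 2.0, 1.0, 3.0, 1.5])
y = X @ true_w + 0.01 * rng.standard_normal(200)

# Least-squares fit: a minimal stand-in for the SVM/FFNN force predictors.
Xb = np.column_stack([X, np.ones(len(X))])        # append a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

rmse = np.sqrt(np.mean((Xb @ w - y) ** 2))
```

With enough runs and low noise, the fitted weights recover the underlying trend, which is the same grounding one would want before trusting a more expressive predictor.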
Nevertheless, there is room for further development of the proposed Hybrid models: for example,
extending the current models to include more data, and investigating other machine learning and
deep learning network architectures that predict the ice force directly from an image input.
Image Processing for Ice Parameter Identification in Ice Management
Various types of remotely sensed data and imaging technology will aid the
development of sea-ice observation to, for instance, support estimation of ice
forces critical to Dynamic Positioning (DP) operations in Arctic waters. The
use of cameras as sensors for offshore operations in ice-covered regions will
be explored for measurements of ice statistics and ice properties, as part of a
sea-ice monitoring system. This thesis focuses on the algorithms for image
processing supporting an ice management system to provide useful ice information
to dynamic ice estimators and for decision support. The ice information
includes ice concentration, ice types, ice floe position and floe size distribution,
and other important factors in the analysis of ice-structure interaction in an ice
field.
The Otsu thresholding and k-means clustering methods are employed to identify
the ice from the water and to calculate ice concentration. Both methods
are effective for model-ice images. However, the k-means method is more effective
than the Otsu method for sea-ice images with large amounts of brash ice and slush.
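Otsu's criterion itself is compact enough to sketch. The following is a minimal NumPy implementation of the between-class-variance maximisation (an illustration, not the thesis's code), with ice concentration taken as the fraction of pixels above the threshold, assuming ice is brighter than water:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grey level maximising between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # probability of the dark class
    mu = np.cumsum(p * np.arange(256))        # cumulative mean grey level
    mu_t = mu[-1]                             # global mean grey level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

def ice_concentration(gray):
    """Fraction of pixels brighter than the Otsu threshold (assumed ice)."""
    return float((gray > otsu_threshold(gray)).mean())
```

For a cleanly bimodal model-ice image this reduces to picking the valley between the two grey-level modes, which is why the method degrades on sea-ice images dominated by intermediate-intensity brash and slush.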
The derivative edge detection and morphology edge detection methods are
first used to find the boundaries of the ice floes. Because neither method
can separate connected ice floes in the images, the watershed
transform and the gradient vector flow (GVF) snake algorithm are applied.
In the watershed-based method, the grayscale sea-ice image is first converted
into a binary image and the watershed algorithm is carried out to segment the
image. A chain code is then used to check the concavities of floe boundaries.
The segmented neighboring regions that have no concave corners between
them are merged, and over-segmentation lines are removed automatically.
This method can separate seemingly connected floes
whose junctions are invisible or lost in the images.
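The concavity test at the heart of the merging step can be illustrated directly: for a polygonal floe boundary, a vertex is concave where the local turn direction flips sign. The sketch below is an illustrative reconstruction, not the thesis's chain-code implementation; it flags concave vertices of a counter-clockwise boundary via cross products:

```python
import numpy as np

def concave_corners(boundary):
    """Return indices of concave vertices of a closed boundary polygon.

    boundary: (N, 2) array of (x, y) vertices in counter-clockwise order
    (with y pointing up). A vertex is concave when the cross product of the
    incoming and outgoing edge vectors is negative for a CCW polygon."""
    pts = np.asarray(boundary, dtype=float)
    prev = np.roll(pts, 1, axis=0)            # previous vertex of each vertex
    nxt = np.roll(pts, -1, axis=0)            # next vertex of each vertex
    v1 = pts - prev                           # incoming edge
    v2 = nxt - pts                            # outgoing edge
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    return np.where(cross < 0)[0]
```

Two neighbouring watershed regions whose shared boundary contributes no concave corner can then be merged, removing the over-segmentation line between them.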
In the GVF snake-based method, the seeds for each ice floe are first obtained
by calculating the distance transform of the binarized image. Based on these
seeds, the snake contours with proper locations and radii are initialized, and
the GVF snakes are then evolved automatically to detect floe boundaries and
separate the connected floes. Because some holes and smaller ice pieces may
be contained inside larger floes, all the segmented ice floes are arranged in
order of increasing size after segmentation. Morphological cleaning is
then performed on the arranged ice floes in sequence to enhance their shapes,
resulting in the identification of individual ice floes. This method is suited to
identifying non-ridged ice floes, especially in the marginal ice zone and in managed ice resulting from offshore operations in sea ice.
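The seed-extraction step can be illustrated with a simple distance transform. The sketch below uses a two-pass city-block distance as a stand-in for whatever metric the thesis uses, and takes the pixel farthest from the background as the seed of a single floe:

```python
import numpy as np

def distance_transform(mask):
    """Two-pass city-block distance to the nearest background pixel."""
    h, w = mask.shape
    inf = h + w                               # larger than any real distance
    d = np.where(mask, inf, 0).astype(int)
    for i in range(h):                        # forward pass (top-left first)
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):            # backward pass (bottom-right first)
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

def floe_seed(mask):
    """Pixel farthest from the background: a seed for a single floe."""
    d = distance_transform(mask)
    return np.unravel_index(np.argmax(d), d.shape)
```

In the full method, one seed per local maximum would initialise one snake contour, with the distance value suggesting the initial radius.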
For ice engineering, both model-scale and full-scale ice are discussed. At
model scale, the ice floes in the model-ice images are modeled as squares
with predefined side lengths. To adapt the GVF snake-based method to
model-ice images, three criteria are proposed to check whether it is necessary
to reinitialize the contours and segment a second time, based on the size and
shape of the model-ice floes. At full scale, sea-ice images prove more
difficult to process than the model-ice images analyzed. In addition to non-uniform
illumination, shadows, and impurities, which are common issues in both sea-ice
and model-ice image processing, the various types of ice (e.g., slush, brash, etc.),
irregular floe sizes and shapes, and geometric distortion pose challenges in
sea-ice image processing. For sea-ice image processing, the “light ice” and “dark
ice” are first obtained using the Otsu thresholding and k-means clustering
methods. Then, the “light ice” and “dark ice” are segmented and enhanced
using the GVF snake-based method. Based on the identification result,
different types of sea ice are distinguished, and the image is divided into four
layers: ice floes, brash pieces, slush, and water. This makes it possible
to present a color map of the ice floes and brash pieces based on their sizes,
as well as the corresponding ice floe size distribution histogram.
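The final size-based layering can be sketched as simple thresholding of region areas. The `floe_min`/`brash_min` thresholds and histogram bin edges below are illustrative placeholders, not values from the thesis:

```python
import numpy as np

def classify_regions(areas, floe_min=100, brash_min=10):
    """Label each segmented region as floe, brash, or slush by pixel area.

    The thresholds are placeholders; real values depend on image
    resolution and the physical scale of the scene."""
    areas = np.asarray(areas)
    return np.where(areas >= floe_min, "floe",
           np.where(areas >= brash_min, "brash", "slush"))

def size_histogram(areas, bins=(0, 10, 100, 1000)):
    """Region-size distribution counts over the given area bins."""
    counts, _ = np.histogram(areas, bins=bins)
    return counts
```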
Shoreline extraction based on an active connection matrix (ACM) image enhancement strategy
Coastal environments face constant change over time due to their dynamic nature and to geological, geomorphological, hydrodynamic, biological, climatic and anthropogenic factors. For these reasons, monitoring these areas is crucial for safeguarding the cultural heritage and the populations living there. The focus of this paper is shoreline extraction by means of an experimental algorithm, called J-Net Dynamic (Semeion Research Center of Sciences of Communication, Rome, Italy). It was tested on two types of images: a very high resolution (VHR) multispectral image (WorldView-2) and a high resolution (HR) synthetic aperture radar (SAR) image (Sentinel-1). The extracted shorelines were compared with those manually digitized for both images independently. The results obtained with the J-Net Dynamic algorithm were also compared with common algorithms widely used in the literature, including the WorldView water index and the Canny edge detector. The results show that the experimental algorithm is more effective than the others, as it improves shoreline extraction accuracy in both the optical and the SAR images.
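A conventional water-index baseline of the kind the paper compares against can be sketched as follows. The code uses the generic NDWI of McFeeters, (G - NIR)/(G + NIR), as a stand-in for the WorldView-specific index, and takes the shoreline to be water pixels with at least one non-water 4-neighbour:

```python
import numpy as np

def ndwi(green, nir):
    """Normalised Difference Water Index: (G - NIR) / (G + NIR)."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)   # epsilon avoids 0/0

def water_mask(green, nir, threshold=0.0):
    """Water pixels: NDWI above a threshold (0 is a common default)."""
    return ndwi(green, nir) > threshold

def shoreline(mask):
    """Water pixels with at least one non-water 4-neighbour."""
    padded = np.pad(mask, 1, constant_values=False)
    up = padded[:-2, 1:-1]
    down = padded[2:, 1:-1]
    left = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    interior = up & down & left & right           # fully surrounded by water
    return mask & ~interior
```

Note that image-border water pixels are also flagged, so in practice the extracted curve would be clipped to the scene interior before comparison with a digitized reference.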
BGM-Net: Boundary-Guided Multiscale Network for Breast Lesion Segmentation in Ultrasound.
Automatic and accurate segmentation of breast lesion regions from ultrasonography is an essential step for ultrasound-guided diagnosis and treatment. However, developing a satisfactory segmentation method is very difficult due to strong imaging artifacts, e.g., speckle noise, low contrast, and intensity inhomogeneity, in breast ultrasound images. To solve this problem, this paper proposes a novel boundary-guided multiscale network (BGM-Net) to boost the performance of breast lesion segmentation from ultrasound images based on the feature pyramid network (FPN). First, we develop a boundary-guided feature enhancement (BGFE) module that enhances the feature map of each FPN layer by learning a boundary map of the breast lesion regions. The BGFE module improves the boundary detection capability of the FPN framework so that weak boundaries in ambiguous regions can be correctly identified. Second, we design a multiscale scheme that leverages information from different image scales to tackle ultrasound artifacts. Specifically, we downsample each testing image into a coarse counterpart, and both the testing image and its coarse counterpart are input to BGM-Net to predict a fine and a coarse segmentation map, respectively. The segmentation result is then produced by fusing the fine and coarse segmentation maps, so that breast lesion regions are accurately segmented from ultrasound images and false detections are effectively removed, owing to the boundary feature enhancement and the multiscale image information. We validate the proposed approach on two challenging breast ultrasound datasets, and experimental results demonstrate that it outperforms state-of-the-art methods.
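The fine/coarse fusion step can be illustrated independently of the network. The sketch below stands in for BGM-Net's learned predictions with plain arrays and fuses them by simple averaging after nearest-neighbour upsampling; the actual network's fusion scheme may differ:

```python
import numpy as np

def downsample2(img):
    """2x2 block-mean downsampling: the coarse counterpart of the input."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Nearest-neighbour 2x upsampling back to the fine resolution."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def fuse(fine_map, coarse_map):
    """Average the fine-scale map with the upsampled coarse-scale map."""
    return 0.5 * (fine_map + upsample2(coarse_map))
```

The point of the averaging is that false detections appearing at only one scale are attenuated, while true lesion regions, supported at both scales, are reinforced.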
Texture analysis of multimodal magnetic resonance images in support of diagnostic classification of childhood brain tumours
Primary brain tumours are recognised as the most common form of solid tumours in children, with pilocytic astrocytoma, medulloblastoma and ependymoma found most frequently. Despite their high mortality rate, early detection can be facilitated through Magnetic Resonance Imaging (MRI), the preferred scanning technique for paediatric patients. MRI offers a variety of imaging sequences through structural and functional imaging, as well as complementary tissue information. However, visual examination of MR images provides limited ability to characterise distinct histological types of brain tumours. To improve diagnostic classification, we explore the use of a computer-aided system based on texture analysis (TA) methods. TA has been applied to conventional MRI but has been less commonly studied on diffusion MRI of brain-related pathology. Furthermore, the combination of textural features derived from both imaging approaches has not yet been widely studied. The aim of the research in this thesis is to investigate TA based on multi-centre multimodal MRI, in order to provide more comprehensive information and to develop an automated processing framework for the classification of childhood brain tumours.
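A classic texture-analysis feature of the kind used in such systems is the grey-level co-occurrence matrix (GLCM). The sketch below computes a single-offset GLCM and the Haralick contrast feature; it is a generic illustration, not the framework described in the thesis:

```python
import numpy as np

def glcm(img, levels=8):
    """Grey-level co-occurrence matrix for the horizontal offset (0, 1),
    normalised so the entries sum to one. Assumes 8-bit intensities."""
    q = np.minimum((img.astype(float) / 256.0 * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    # Count co-occurrences of each horizontally adjacent intensity pair.
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))
```

A homogeneous region yields zero contrast, while rapidly alternating intensities yield a high value, which is exactly the kind of statistic a TA-based classifier would feed to its downstream model.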
Building extraction from airborne laser scanning data: an analysis of the state of the art
This article provides an overview of building extraction approaches applied to Airborne Laser Scanning (ALS) data by examining elements used in the original publications, such as data set area, accuracy measures, reference data for accuracy assessment, and the use of auxiliary data. We succinctly analyzed the most cited publications for each year between 1998 and 2014, resulting in 54 ISI-indexed articles and 14 non-ISI-indexed publications. Based on this, we position the built-in features of ALS to create a comprehensive picture of the state of the art and of the progress through the years. Our analyses revealed trends and remaining challenges that impact the community. The results show remaining deficiencies, such as inconsistent accuracy assessment measures, limitations of independent reference data sources for accuracy assessment, relatively few documented applications of the methods to wide-area data sets, and a lack of transferability studies and measures. Finally, we predict some future trends and identify gaps that existing approaches may not exhaustively cover. Despite these deficiencies, this comprehensive literature analysis demonstrates that ALS data is certainly a valuable source of spatial information for building extraction. Taking into account the short civilian history of ALS, one can conclude that ALS has become well established in the scientific community and seems to be becoming indispensable in many application fields.
Dynamical models and machine learning for supervised segmentation
This thesis is concerned with the problem of how to outline regions of interest in medical images, when
the boundaries are weak or ambiguous and the region shapes are irregular. The focus on machine learning
and interactivity leads to a common theme of the need to balance conflicting requirements. First,
any machine learning method must strike a balance between how much it can learn and how well it
generalises. Second, interactive methods must balance minimal user demand with maximal user control.
To address the problem of weak boundaries, methods of supervised texture classification
that do not use explicit texture features are investigated. These methods enable prior knowledge
about the image to benefit any segmentation framework. A chosen dynamic contour model, based on
probabilistic boundary tracking, combines these image priors with efficient modes of interaction.
We show the benefits of the texture classifiers over intensity- and gradient-based image models,
in both classification and boundary extraction.
To address the problem of irregular region shape, we devise a new type of statistical shape model
(SSM) that does not use explicit boundary features or assume high-level similarity between region
shapes. First, the models are used for shape discrimination, to constrain any segmentation framework
by way of regularisation. Second, the SSMs are used for shape generation, allowing probabilistic segmentation
frameworks to draw shapes from a prior distribution. The generative models also include
novel methods to constrain shape generation according to information from both the image and user
interactions.
The shape models are first evaluated in terms of discrimination capability and shown to outperform
other shape descriptors. Experiments also show that the shape models can benefit a standard type of
segmentation algorithm by providing shape regularisers. We finally show how to exploit the shape
models in supervised segmentation frameworks, and evaluate their benefits in user trials.