
    Image Processing for Ice Parameter Identification in Ice Management

    Various types of remotely sensed data and imaging technology will aid the development of sea-ice observation, for instance to support the estimation of ice forces critical to Dynamic Positioning (DP) operations in Arctic waters. The use of cameras as sensors for offshore operations in ice-covered regions will be explored for measurements of ice statistics and ice properties, as part of a sea-ice monitoring system. This thesis focuses on image-processing algorithms supporting an ice management system, providing useful ice information to dynamic ice estimators and for decision support. The ice information includes ice concentration, ice types, ice floe position and floe size distribution, and other important factors in the analysis of ice-structure interaction in an ice field. The Otsu thresholding and k-means clustering methods are employed to distinguish ice from water and to calculate ice concentration. Both methods are effective for model-ice images; however, the k-means method is more effective than the Otsu method for sea-ice images with large amounts of brash ice and slush. The derivative edge detection and morphology edge detection methods are used to find the boundaries of the ice floes. Because neither method can separate connected ice floes in the images, the watershed transform and the gradient vector flow (GVF) snake algorithm are applied. In the watershed-based method, the grayscale sea-ice image is first converted into a binary image and the watershed algorithm is carried out to segment the image. A chain code is then used to check the concavities of floe boundaries. Segmented neighboring regions that have no concave corners between them are merged, and over-segmentation lines are removed automatically. This method can separate seemingly connected floes whose junctions are invisible or lost in the images.
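The thresholding step for ice concentration can be illustrated with a minimal sketch of Otsu's method on a synthetic grayscale image; this is an illustrative implementation, not the thesis' actual pipeline.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]                        # global mean
    # between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def ice_concentration(img):
    """Fraction of pixels classified as ice (bright) by Otsu's threshold."""
    return float((img > otsu_threshold(img)).mean())

# synthetic scene: dark water with one bright square 'floe' covering 25% of it
img = np.full((100, 100), 40, dtype=np.uint8)
img[25:75, 25:75] = 200
print(ice_concentration(img))  # 0.25
```

On real sea-ice images the bimodal-histogram assumption behind Otsu's method weakens, which is consistent with the abstract's observation that k-means copes better with brash ice and slush.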
In the GVF snake-based method, the seeds for each ice floe are first obtained by calculating the distance transform of the binarized image. Based on these seeds, snake contours with proper locations and radii are initialized, and the GVF snakes are then evolved automatically to detect floe boundaries and separate the connected floes. Because holes and smaller ice pieces may be contained inside larger floes, all the segmented ice floes are arranged in order of increasing size after segmentation. Morphological cleaning is then performed on the arranged ice floes in sequence to enhance their shapes, resulting in the identification of individual ice floes. This method is suited to identifying non-ridged ice floes, especially in the marginal ice zone and in managed ice resulting from offshore operations in sea ice. For ice engineering, both model-scale and full-scale ice will be discussed. At model scale, the ice floes in the model-ice images are modeled as squares with predefined side lengths. To adapt the GVF snake-based method to model-ice images, three criteria are proposed to check whether it is necessary to reinitialize the contours and segment a second time, based on the size and shape of the model-ice floes. At full scale, sea-ice images prove more difficult to analyze than model-ice images. In addition to non-uniform illumination, shadows and impurities, which are common issues in both sea-ice and model-ice image processing, various types of ice (e.g., slush, brash, etc.), irregular floe sizes and shapes, and geometric distortion pose challenges in sea-ice image processing. For sea-ice image processing, the “light ice” and “dark ice” are first obtained by using the Otsu thresholding and k-means clustering methods. Then, the “light ice” and “dark ice” are segmented and enhanced by using the GVF snake-based method.
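The distance-transform seeding step can be sketched as follows: cores of the distance transform serve as one seed per floe, with the peak distance as an initial snake radius. The core-extraction criterion used here (a fixed fraction of the maximum distance) is a simplifying assumption, not the exact procedure of the thesis.

```python
import numpy as np
from scipy import ndimage

def floe_seeds(binary, frac=0.6):
    """Seed regions for snake initialization: cores of the distance
    transform of a binarized ice image (fraction criterion is illustrative)."""
    dist = ndimage.distance_transform_edt(binary)
    cores = dist > frac * dist.max()
    labels, n = ndimage.label(cores)
    centers = ndimage.center_of_mass(cores, labels, range(1, n + 1))
    # initial snake radius: peak distance value inside each core
    radii = ndimage.maximum(dist, labels, range(1, n + 1))
    return list(zip(centers, radii))

# two overlapping circular 'floes' of radius 12 form one connected blob
yy, xx = np.mgrid[0:60, 0:60]
floe1 = (yy - 30) ** 2 + (xx - 18) ** 2 < 144
floe2 = (yy - 30) ** 2 + (xx - 40) ** 2 < 144
seeds = floe_seeds(floe1 | floe2)
print(len(seeds))  # 2 seeds, one per floe, despite the floes touching
```

The point of seeding from the distance transform is exactly this case: two floes merged into a single connected region still yield two separate cores, so two snakes can be initialized and evolved independently.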
Based on the identification result, different types of sea ice are distinguished, and the image is divided into four layers: ice floes, brash pieces, slush, and water. This makes it possible to present a color map of the ice floes and brash pieces based on size, together with the corresponding ice floe size distribution histogram.
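A floe-size distribution histogram like the one described can be computed from a segmented binary image by labeling connected components and histogramming their areas; a minimal sketch (the bin edges are arbitrary):

```python
import numpy as np
from scipy import ndimage

def floe_size_distribution(binary, bins):
    """Histogram of floe areas (pixel counts) from a segmented binary
    image; assumes connected floes have already been separated."""
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, range(1, n + 1))
    return np.histogram(areas, bins=bins)

# three floes: two small (9 px each) and one large (25 px)
img = np.zeros((20, 20), dtype=bool)
img[1:4, 1:4] = True      # 9 px
img[1:4, 6:9] = True      # 9 px
img[8:13, 8:13] = True    # 25 px
hist, edges = floe_size_distribution(img, bins=[0, 10, 30])
print(hist)  # [2 1]
```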

    Remote sensing satellite image processing techniques for image classification: a comprehensive survey

    This paper is a brief survey of advanced technological aspects of digital image processing applied to remote sensing images obtained from various satellite sensors. In remote sensing, image processing techniques can be categorized into four main processing stages: image preprocessing, enhancement, transformation, and classification. Image preprocessing is the initial stage, which corrects the radiometric, atmospheric, and geometric distortions present in the raw image data. Enhancement techniques are applied to the preprocessed data to display the image effectively and distinguish surface features for visual interpretation. Transformation aims to identify particular features of the Earth’s surface, and classification is a process of grouping pixels that produces an effective thematic map of land use and land cover.
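The classification stage can be illustrated with a minimal unsupervised k-means pixel classifier. The two-band “water”/“vegetation” data below are synthetic, and operational land-cover classifiers are considerably more elaborate.

```python
import numpy as np

def kmeans_classify(pixels, k, iters=20):
    """Group pixel spectra into k clusters (minimal k-means)."""
    # farthest-point initialization: start from the first pixel, then
    # repeatedly add the pixel farthest from all chosen centers
    centers = [pixels[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each pixel to its nearest center, then update the centers
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# synthetic two-band pixels: 'water' near (10, 10), 'vegetation' near (80, 120)
rng = np.random.default_rng(0)
water = rng.normal((10.0, 10.0), 2.0, (50, 2))
veg = rng.normal((80.0, 120.0), 2.0, (50, 2))
labels = kmeans_classify(np.vstack([water, veg]), k=2)
print(labels[:50].std() == 0 and labels[50:].std() == 0)  # True: each class maps to one cluster
```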

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications combining synthetic aperture radar and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Remote Sensing of the Oceans

    This book covers different topics in the framework of remote sensing of the oceans. The latest research advancements and brand-new studies are presented, addressing the exploitation of remote sensing instruments and simulation tools to improve the understanding of ocean processes and enable cutting-edge applications, with the aim of preserving the ocean environment and supporting the blue economy. Hence, this book provides a reference framework for state-of-the-art remote sensing methods that deal with the generation of added-value products and geophysical information retrieval in related fields, including: oil spill detection and discrimination; analysis of tropical cyclones and sea echoes; shoreline and aquaculture area extraction; monitoring of coastal marine litter and moving vessels; and processing of SAR, HF radar and UAV measurements.

    Optical Tracking for Relative Positioning in Automated Aerial Refueling

    An algorithm is designed to extract features from video of an air refueling tanker for use in determining the precise relative position of a receiver aircraft. The algorithm is based on receiving a known estimate of the tanker aircraft's position and attitude. The algorithm then uses a known feature model of the tanker to predict the location of those features on a video frame. A corner detector is used to extract features from the video. The measured corners are then associated with known features and tracked from frame to frame. For each frame, the associated features are used to calculate three-dimensional pointing vectors to the features of the tanker. These vectors are passed to a navigation algorithm that uses extended Kalman filters, together with data-linked INS data, to solve for the relative position of the tanker. The algorithms were tested using data from a flight test accomplished by the USAF Test Pilot School using a C-12C as a simulated tanker and a Learjet LJ-24 as the simulated receiver. The system was able to provide at least a dozen useful measurements per frame, with and without projection error.
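The pointing-vector computation can be sketched with a pinhole camera model: a pixel detection maps to a unit line-of-sight vector in the camera frame. The intrinsic parameters below are made-up values, and the thesis' actual camera model may differ.

```python
import numpy as np

def pointing_vector(u, v, fx, fy, cx, cy):
    """Unit line-of-sight vector in the camera frame for pixel (u, v),
    given focal lengths (fx, fy) and principal point (cx, cy)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return ray / np.linalg.norm(ray)

# a feature detected at the principal point lies on the optical axis
v_axis = pointing_vector(320, 240, fx=800, fy=800, cx=320, cy=240)
print(v_axis)  # [0. 0. 1.]

# an off-center detection yields an oblique unit vector
v_off = pointing_vector(720, 240, fx=800, fy=800, cx=320, cy=240)
```

In a full system, such unit vectors would be rotated into a common navigation frame before being fused by the extended Kalman filter.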

    Multidimensional image analysis of cardiac function in MRI

    Cardiac morphology is a key indicator of cardiac health. Important metrics currently in clinical use are left-ventricle ejection fraction, cardiac muscle (myocardium) mass, myocardium thickness, and myocardium thickening over the cardiac cycle. Advances in imaging technologies have led to an increase in temporal and spatial resolution. Such an increase in data presents a laborious analysis task for medical practitioners. In this thesis, measurement of left-ventricle function is achieved by developing novel methods for the automatic segmentation of the left-ventricle blood pool and the left-ventricle myocardium boundaries. A preliminary challenge faced in this task is the removal of noise from Magnetic Resonance Imaging (MRI) data, which is addressed by using advanced data filtering procedures. Two mechanisms for left-ventricle segmentation are employed. First, segmentation of the left-ventricle blood pool for the measurement of ejection fraction is undertaken in the signal-intensity domain. Utilising the high discrimination between blood and tissue, a novel methodology based on a statistical partitioning method succeeds in localising and segmenting the blood pool of the left ventricle. From this initialisation, the outer wall (epicardium) of the left ventricle can be estimated using gradient information and prior knowledge. Second, a more involved method for extracting the myocardium of the left ventricle is developed that performs better in higher dimensions. Spatial information is incorporated in the segmentation by employing a gradient-based boundary evolution. A level-set scheme is implemented and a novel formulation for the extraction of the cardiac muscle is introduced.
Two surfaces, representing the inner and the outer boundaries of the left ventricle, are simultaneously evolved using a coupling function and supervised with a probabilistic model of expertly assisted manual segmentations.
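Of the metrics listed, the ejection fraction follows directly from the segmented blood-pool volumes at end-diastole and end-systole; a one-line sketch (the volumes below are illustrative):

```python
def ejection_fraction(edv, esv):
    """Left-ventricle ejection fraction from end-diastolic (EDV) and
    end-systolic (ESV) blood-pool volumes, e.g. in mL."""
    return (edv - esv) / edv

# e.g. EDV = 120 mL at end-diastole, ESV = 50 mL at end-systole
print(round(100 * ejection_fraction(120.0, 50.0)))  # 58 (percent)
```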

    Modelling, Test and Practice of Steel Structures

    This reprint provides an international forum for the presentation and discussion of the latest developments in structural-steel research and its applications. The topics of this reprint include the modelling, testing and practice of steel structures and steel-based composite structures. A total of 17 high-quality, original papers dealing with all aspects of steel-structures research, including modelling, testing, and construction research on material properties, components, assemblages, connections, and structural behaviors, are included for publication.

    ImageNet Large Scale Visual Recognition Challenge

    The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to the present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. Comment: 43 pages, 16 figures. v3 includes additional comparisons with PASCAL VOC (per-category comparisons in Table 3, distribution of localization difficulty in Fig 16), a list of queries used for obtaining object detection images (Appendix C), and some additional references.
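The accuracy comparisons described above are typically reported as top-5 error, the challenge's standard classification metric; a minimal sketch on synthetic scores:

```python
import numpy as np

def top_k_error(scores, truth, k=5):
    """Fraction of images whose true label is absent from the k
    highest-scoring predictions (ILSVRC classification reports k=5)."""
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = [t in row for t, row in zip(truth, topk)]
    return 1.0 - float(np.mean(hits))

# 4 images, 10 classes: three truths sit in the top 5 scores, one does not
rng = np.random.default_rng(0)
scores = rng.random((4, 10))
truth = [int(s.argsort()[-3]) for s in scores[:3]] + [int(scores[3].argsort()[0])]
print(top_k_error(scores, truth))  # 0.25
```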