10 research outputs found

    Automatic Side-Scan Sonar Image Enhancement in Curvelet Transform Domain

    We propose a novel automatic side-scan sonar image enhancement algorithm based on the curvelet transform. The proposed algorithm uses the curvelet transform to construct a multichannel enhancement structure based on the human visual system (HVS) and adopts a new adaptive nonlinear mapping scheme to modify the curvelet transform coefficients in each channel independently and automatically. Firstly, the noisy, low-contrast sonar image is decomposed into a low-frequency channel and a series of high-frequency channels using the curvelet transform. Secondly, a new nonlinear mapping scheme, which coincides with the logarithmic nonlinear enhancement characteristic of HVS perception, is designed without any parameter tuning to adjust the curvelet transform coefficients in each channel. Finally, the enhanced image is reconstructed from the modified coefficients via the inverse curvelet transform. The enhancement is achieved by simultaneously amplifying subtle features, improving contrast, and eliminating noise. Experimental results show that the proposed algorithm produces better enhanced results than state-of-the-art algorithms.
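    The decompose / remap / reconstruct pipeline described above can be sketched in miniature. The sketch below is not the paper's method: a one-level Haar decomposition stands in for the curvelet transform (which requires a dedicated toolbox such as CurveLab), a simple logarithmic remapping stands in for the paper's parameter-free adaptive scheme, and the `gain` value is an assumption.

    ```python
    import numpy as np

    def haar2d(img):
        """One-level 2D Haar split: a low-frequency channel plus three
        high-frequency channels (a crude stand-in for curvelet channels)."""
        p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
        p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
        a = (p00 + p10 + p01 + p11) / 4           # low frequency
        h = (p00 - p10 + p01 - p11) / 4           # horizontal detail
        v = (p00 + p10 - p01 - p11) / 4           # vertical detail
        d = (p00 - p10 - p01 + p11) / 4           # diagonal detail
        return a, h, v, d

    def ihaar2d(a, h, v, d):
        """Exact inverse of haar2d."""
        out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
        out[0::2, 0::2] = a + h + v + d
        out[1::2, 0::2] = a - h + v - d
        out[0::2, 1::2] = a + h - v - d
        out[1::2, 1::2] = a - h - v + d
        return out

    def log_map(c, gain=10.0):
        """Logarithmic coefficient remapping: amplifies small (subtle-feature)
        coefficients more than large ones, loosely mimicking HVS response."""
        m = np.abs(c).max()
        if m == 0:
            return c
        return np.sign(c) * m * np.log1p(gain * np.abs(c) / m) / np.log1p(gain)

    # Enhance a synthetic low-contrast image: keep the low-frequency channel,
    # remap only the high-frequency (detail) channels, then reconstruct.
    img = np.outer(np.linspace(40, 60, 8), np.linspace(40, 60, 8)) / 50
    a, h, v, d = haar2d(img)
    enhanced = ihaar2d(a, log_map(h), log_map(v), log_map(d))
    ```

    The mapping preserves the sign and the maximum magnitude of each channel, so large structures are left roughly intact while weak details are boosted.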

    A Novel Multiscale Edge Detection Approach Based on Nonsubsampled Contourlet Transform and Edge Tracking

    Edge detection is a fundamental task in many computer vision applications. In this paper, we propose a novel multiscale edge detection approach based on the nonsubsampled contourlet transform (NSCT): a fully shift-invariant, multiscale, and multidirection transform. Indeed, unlike traditional wavelets, contourlets have the ability to fully capture directional and other geometrical features of images with edges. Firstly, the NSCT of the input image is computed. Secondly, the K-means clustering algorithm is applied to each level of the NSCT to distinguish noise from edges. Thirdly, we select the edge point candidates of the input image by identifying the NSCT modulus maxima at each scale. Finally, a coarse-to-fine edge tracking algorithm is proposed to improve robustness against spurious responses and accuracy in the localization of edges. Experimental results show that the proposed method achieves better edge detection performance than typical methods. Furthermore, the proposed method also works well on noisy images.
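    The modulus-maximum step can be illustrated with an ordinary image gradient in place of the NSCT: a pixel is kept as an edge candidate only if its modulus is a local maximum along the gradient direction. This is a Canny-style non-maximum suppression sketch, not the paper's NSCT-domain, multiscale version.

    ```python
    import numpy as np

    def modulus_maxima(img):
        """Keep pixels whose gradient modulus is a local maximum along the
        (quantized) gradient direction; border pixels are skipped."""
        gy, gx = np.gradient(img.astype(float))   # central differences
        mod = np.hypot(gx, gy)
        ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
        keep = np.zeros(mod.shape, dtype=bool)
        rows, cols = mod.shape
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                a = ang[y, x]
                if a < 22.5 or a >= 157.5:        # mostly horizontal gradient
                    n1, n2 = mod[y, x - 1], mod[y, x + 1]
                elif a < 67.5:                    # diagonal
                    n1, n2 = mod[y - 1, x + 1], mod[y + 1, x - 1]
                elif a < 112.5:                   # mostly vertical gradient
                    n1, n2 = mod[y - 1, x], mod[y + 1, x]
                else:                             # other diagonal
                    n1, n2 = mod[y - 1, x - 1], mod[y + 1, x + 1]
                keep[y, x] = mod[y, x] > 0 and mod[y, x] >= n1 and mod[y, x] >= n2
        return keep

    # Demo: a vertical step edge yields candidates along the step columns.
    step = np.zeros((8, 8))
    step[:, 4:] = 1.0
    candidates = modulus_maxima(step)
    ```

    In the paper's setting the same test is applied to NSCT coefficients at every scale, after K-means has separated noise-dominated coefficients from edge-dominated ones.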

    Remote Sensing of the Oceans

    This book covers different topics in the framework of remote sensing of the oceans. The latest research advancements and brand-new studies are presented that address the exploitation of remote sensing instruments and simulation tools to improve the understanding of ocean processes and enable cutting-edge applications, with the aim of preserving the ocean environment and supporting the blue economy. Hence, this book provides a reference framework for state-of-the-art remote sensing methods that deal with the generation of added-value products and geophysical information retrieval in related fields, including: oil spill detection and discrimination; analysis of tropical cyclones and sea echoes; shoreline and aquaculture area extraction; monitoring coastal marine litter and moving vessels; and processing of SAR, HF radar, and UAV measurements.

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. The advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods and, on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not-fully-developed speckle. Finally, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
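    Two of the ideas above, the multiplicative speckle model and the homomorphic (log-domain) approach with its bias drawback, can be demonstrated numerically. A minimal sketch, assuming L-look gamma-distributed unit-mean speckle over a homogeneous patch, with a crude global mean standing in for a real additive-noise filter:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Fully developed speckle: multiplicative, unit-mean noise on intensity.
    clean = np.full((64, 64), 100.0)   # hypothetical constant reflectivity
    L = 4                              # number of looks (assumed)
    speckle = rng.gamma(shape=L, scale=1.0 / L, size=clean.shape)
    noisy = clean * speckle            # multiplicative model

    # Homomorphic despeckling: the log turns multiplicative noise into
    # additive noise, so an additive-noise filter can be applied, and exp()
    # maps back. Over this homogeneous patch a global mean plays the filter.
    log_img = np.log(noisy)
    despeckled = np.exp(np.full_like(log_img, log_img.mean()))
    ```

    The recovered intensity systematically undershoots the true value (about 88 instead of 100 here) because E[log(speckle)] = digamma(L) - log(L) < 0: this radiometric bias is exactly the kind of drawback of homomorphic filtering the review points out.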

    SAR processing with non-linear FM chirp waveforms.


    Non-Standard Imaging Techniques

    The first objective of the thesis is to investigate the problem of reconstructing a small-scale object (a few millimeters or smaller) in 3D. In Chapter 3, we show how this problem can be solved effectively by a new multifocus multiview 3D reconstruction procedure which includes a new Fixed-Lens multifocus image capture and a calibrated image registration technique using an analytic homography transformation. The experimental results using real and synthetic images demonstrate the effectiveness of the proposed solutions by showing that both the fixed-lens image capture and multifocus stacking with calibrated image alignment significantly reduce the errors in the camera poses and produce more complete 3D reconstructed models compared with those obtained by conventional moving-lens image capture and multifocus stacking. The second objective of the thesis is modelling the dual-pixel (DP) camera. In Chapter 4, to understand the potential of the DP sensor for computer vision applications, we study the formation of the DP pair, which links the blur and the depth information. A mathematical DP model is proposed which enables depth estimation from blur. These explorations motivate us to propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image. Moreover, we define a reblur loss, which reflects the relationship of the DP image formation process with depth information, to regularize our depth estimate in training. To meet the requirement of a large amount of data for learning, we propose the first DP image simulator, which allows us to create datasets with DP pairs from any existing RGBD dataset. As a side contribution, we collect a real dataset for further research. Extensive experimental evaluation on both synthetic and real datasets shows that our approach achieves competitive performance compared to state-of-the-art approaches.
Another (third) objective of this thesis is to tackle the multifocus image fusion problem, particularly for long multifocus image sequences. Multifocus image stacking/fusion produces an in-focus image of a scene from a number of partially focused images of that scene in order to extend the depth of field. One limitation of current state-of-the-art multifocus fusion methods is that they do not consider image registration/alignment before fusion; consequently, fusing unregistered multifocus images produces an in-focus image containing misalignment artefacts. In Chapter 5, we propose image registration by projective transformation before fusion to remove the misalignment artefacts. We also propose a method based on 3D deconvolution to retrieve the in-focus image by formulating the multifocus image fusion problem as a 3D deconvolution problem. The proposed method achieves superior performance compared to the state-of-the-art methods. It is also shown that the proposed projective transformation for image registration can improve the quality of the fused images. Moreover, we implement a multifocus simulator to generate synthetic multifocus data from any RGB-D dataset. The fourth objective of this thesis is to explore new ways to detect the polarization state of light. To achieve this objective, in Chapter 6, we investigate a new optical filter, namely an optical rotation filter, for detecting the polarization state with fewer images. The proposed method can estimate the polarization state using two images, one with the filter and one without. The accuracy of estimating the polarization parameters using the proposed method is comparable to that of the existing state-of-the-art method. In addition, the feasibility of detecting the polarization state using only one RGB image captured with the optical rotation filter is also demonstrated by estimating the image without the filter from the image with the filter using a generative adversarial network.
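The basic multifocus stacking operation, without the projective registration and 3D-deconvolution refinements proposed in the thesis, can be sketched with a per-pixel focus measure: for each pixel, keep the frame whose local sharpness is highest. The Laplacian-energy focus measure and periodic boundary handling below are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def laplacian_energy(img):
    """Focus measure: squared response of the 4-neighbour discrete
    Laplacian, with periodic boundaries via np.roll."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def fuse(stack):
    """Per pixel, keep the value from the frame whose neighbourhood
    is sharpest (a naive all-in-focus composite)."""
    frames = np.stack(stack)
    energy = np.stack([laplacian_energy(f) for f in stack])
    idx = energy.argmax(axis=0)
    return np.take_along_axis(frames, idx[None], axis=0)[0]
```

If the frames are not registered first, this per-pixel selection is exactly where misalignment artefacts appear, which is the motivation for the projective registration step above.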

    NASA's university program: Active projects, fiscal year 1979

    Current information and related statistics are provided for each grant/contract/cooperative agreement during the report period. Cross-indexes by agreement number, field of science and engineering, and technical officer location are included.