34 research outputs found

    Speckle Noise Reduction Technique for SAR Images Using Statistical Characteristics of Speckle Noise and Discrete Wavelet Transform

    Synthetic aperture radar (SAR) images map Earth’s surface at high resolution regardless of weather or daylight conditions, and therefore find applications in many fields. Speckle noise, which is multiplicative in nature, degrades SAR image quality and causes information loss. This study proposes a speckle noise reduction algorithm that combines the speckle reducing anisotropic diffusion (SRAD) filter, the discrete wavelet transform (DWT), soft thresholding, an improved guided filter (IGF), and the guided filter (GF). First, the SRAD filter is applied to the SAR image, and a logarithmic transform converts the multiplicative noise in the resulting SRAD image into additive noise. A two-level DWT then divides the SRAD image into one low-frequency and six high-frequency sub-band images. To remove the additive noise while preserving edge information, the horizontal and vertical sub-band images are processed with soft thresholding, the diagonal sub-band images with the IGF, and the low-frequency sub-band image with the GF. Experiments were conducted on both standard and real SAR images. The results show that, compared with state-of-the-art methods, the proposed method achieves excellent speckle noise removal while preserving edges and maintaining low computational complexity.
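    The core of the pipeline above — log-transforming to make speckle additive, splitting the image with a DWT, and soft-thresholding detail sub-bands — can be illustrated with a minimal sketch. This is not the authors' method: it uses a single-level Haar transform rather than the paper's two-level DWT, omits the SRAD, IGF, and GF stages entirely, and the threshold value is illustrative.

    ```python
    import numpy as np

    def haar2d(x):
        """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
        a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
        d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
        LL = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-frequency approximation
        LH = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
        HL = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
        return LL, LH, HL, HH

    def ihaar2d(LL, LH, HL, HH):
        """Inverse of haar2d."""
        a = np.empty((LL.shape[0], LL.shape[1] * 2))
        a[:, 0::2] = LL + LH
        a[:, 1::2] = LL - LH
        d = np.empty_like(a)
        d[:, 0::2] = HL + HH
        d[:, 1::2] = HL - HH
        x = np.empty((a.shape[0] * 2, a.shape[1]))
        x[0::2, :] = a + d
        x[1::2, :] = a - d
        return x

    def soft_threshold(c, t):
        """Shrink coefficients toward zero by t (soft thresholding)."""
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def denoise(img, t=0.1):
        """Sketch of log-domain wavelet shrinkage for speckled images."""
        log_img = np.log1p(img)  # multiplicative speckle -> additive noise
        LL, LH, HL, HH = haar2d(log_img)
        # threshold only the detail sub-bands; keep the approximation
        LH, HL, HH = (soft_threshold(s, t) for s in (LH, HL, HH))
        return np.expm1(ihaar2d(LL, LH, HL, HH))
    ```

    With `t = 0` the transform is perfectly invertible, which is a useful sanity check before tuning the threshold on actual speckled data.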

    Multi-Color Space Network for Salient Object Detection

    Salient object detection (SOD) predicts which objects will attract the attention of an observer surveying a particular scene. Most state-of-the-art SOD methods are top-down mechanisms that apply fully convolutional networks (FCNs) of various structures to RGB images, extract features from them, and train a network. However, owing to the variety of factors that affect visual saliency, it is difficult to secure sufficient features from a single color space. Therefore, in this paper, we propose a multi-color space network (MCSNet) that detects salient objects using various saliency cues. First, the images were converted to the HSV and grayscale color spaces to obtain saliency cues beyond those provided by RGB color information. Each saliency cue was fed into one of two parallel VGG backbone networks to extract features. Contextual information was obtained from the extracted features using atrous spatial pyramid pooling (ASPP). The features obtained from both paths were passed through an attention module that highlights channel and spatial features. Finally, the saliency map was generated using a step-by-step residual refinement module (RRM). Furthermore, the network was trained with a bidirectional loss to supervise the saliency detection results. Experiments on five public benchmark datasets showed that our proposed network achieved superior performance in terms of both subjective results and objective metrics.
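    The front end of the approach above — deriving HSV and grayscale cues from an RGB input before feeding them to the backbones — can be sketched with plain NumPy. The network itself (VGG backbones, ASPP, attention, RRM) is not reproduced here; the function name and the assumption of float inputs in [0, 1] are mine, and the grayscale weights follow the common ITU-R BT.601 convention, which the paper does not specify.

    ```python
    import numpy as np

    def rgb_to_cues(rgb):
        """Convert an RGB image (H x W x 3, floats in [0, 1]) into the extra
        saliency cues used as inputs: an HSV image and a grayscale map."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

        # grayscale cue (ITU-R BT.601 luma weights, an assumed convention)
        gray = 0.299 * r + 0.587 * g + 0.114 * b

        # HSV cue: value is the channel maximum, chroma the max-min spread
        v = rgb.max(axis=-1)
        c = v - rgb.min(axis=-1)
        s = np.where(v > 0, c / np.where(v > 0, v, 1.0), 0.0)

        # hue in [0, 1); defined as 0 where chroma is 0 (achromatic pixels)
        safe_c = np.where(c > 0, c, 1.0)
        h = np.select(
            [c == 0, v == r, v == g],
            [np.zeros_like(c),
             ((g - b) / safe_c) % 6,   # red sector
             (b - r) / safe_c + 2],    # green sector
            default=(r - g) / safe_c + 4,  # blue sector
        ) / 6.0
        return np.stack([h, s, v], axis=-1), gray
    ```

    In the described architecture, each of these cue images would then be fed to its own VGG branch, so the conversion is a cheap preprocessing step rather than a learned one.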