276 research outputs found

    Learning a Dilated Residual Network for SAR Image Despeckling

    In this paper, to break the limit of traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach that learns a non-linear end-to-end mapping between noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which enlarge the receptive field while keeping the filter size and layer depth of a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to preserve image details and mitigate the vanishing gradient problem. The proposed method shows superior performance over state-of-the-art despeckling methods in both quantitative and visual assessments, especially for strong speckle noise.
    Comment: 18 pages, 13 figures, 7 tables
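
    As a rough illustration of the dilated-convolution-plus-residual-learning idea described in this abstract, the following PyTorch sketch builds a small despeckling network that predicts the speckle component and subtracts it from the input via a skip connection. The depth, channel width, and dilation rates are assumptions chosen for illustration, not the exact SAR-DRN configuration from the paper.

```python
# Illustrative sketch of a dilated residual despeckling network in PyTorch.
# Dilation rates, depth, and channel width are assumptions, not the exact
# SAR-DRN configuration reported by the authors.
import torch
import torch.nn as nn

class DilatedResidualDespeckler(nn.Module):
    def __init__(self, channels=64, dilations=(1, 2, 3, 4, 3, 2, 1)):
        super().__init__()
        layers = []
        in_ch = 1  # single-channel SAR amplitude image
        for d in dilations:
            # Dilated 3x3 convolutions enlarge the receptive field while
            # keeping the filter size fixed.
            layers.append(nn.Conv2d(in_ch, channels, kernel_size=3,
                                    padding=d, dilation=d))
            layers.append(nn.ReLU(inplace=True))
            in_ch = channels
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: the network predicts the speckle component,
        # which is subtracted from the noisy input (skip connection).
        return noisy - self.body(noisy)

if __name__ == "__main__":
    x = torch.randn(1, 1, 64, 64)                 # toy noisy patch
    print(DilatedResidualDespeckler()(x).shape)   # torch.Size([1, 1, 64, 64])
```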

    DCANet: Dual Convolutional Neural Network with Attention for Image Blind Denoising

    Noise removal is an essential preprocessing step for many computer vision tasks. Many denoising models based on deep neural networks perform well at removing noise with a known distribution (e.g., additive white Gaussian noise). However, eliminating real noise remains very challenging, since real-world noise often does not follow a single distribution and may vary spatially. In this paper, we present a new dual convolutional neural network (CNN) with attention for image blind denoising, named DCANet. To the best of our knowledge, the proposed DCANet is the first work that integrates both a dual CNN and an attention mechanism for image denoising. DCANet is composed of a noise estimation network, a spatial and channel attention module (SCAM), and a CNN with a dual structure. The noise estimation network estimates the spatial distribution and level of the noise in an image. The noisy image and its estimated noise are combined as the input of the SCAM, and a dual CNN containing two different branches is designed to learn complementary features and obtain the denoised image. Experimental results verify that the proposed DCANet suppresses both synthetic and real noise effectively. The code of DCANet is available at https://github.com/WenCongWu/DCANet
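
    The PyTorch sketch below illustrates, under assumed layer choices, the three components the abstract names: a noise estimation network, a spatial and channel attention module, and a dual-branch CNN whose complementary features are fused into a residual noise prediction. It is a sketch only, not the released DCANet implementation (see the linked repository for that).

```python
# Illustrative dual-branch blind-denoising network with a noise estimator and
# spatial/channel attention, loosely following the DCANet description above.
# All layer choices here are assumptions for demonstration.
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    """Predicts a per-pixel noise map from the noisy RGB input."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class SpatialChannelAttention(nn.Module):
    """Re-weights features along the channel axis, then spatially."""
    def __init__(self, ch):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3),
                                     nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)      # channel attention
        return x * self.spatial(x)   # spatial attention

class DualBranchDenoiser(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.estimator = NoiseEstimator()
        self.embed = nn.Conv2d(6, ch, 3, padding=1)  # image + noise map
        self.scam = SpatialChannelAttention(ch)
        # Two complementary branches: plain convolutions vs. dilated ones.
        self.branch_a = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.branch_b = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2))
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, noisy):
        noise_map = self.estimator(noisy)
        feat = self.scam(self.embed(torch.cat([noisy, noise_map], dim=1)))
        fused = torch.cat([self.branch_a(feat), self.branch_b(feat)], dim=1)
        return noisy - self.fuse(fused)   # residual prediction of the noise

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    print(DualBranchDenoiser()(img).shape)  # torch.Size([1, 3, 64, 64])
```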

    지문 μ˜μƒ 작음 제거 및 볡원을 μœ„ν•œ 심측 ν•©μ„±κ³± 신경망

    Master's thesis (M.S.) -- Seoul National University Graduate School, Interdisciplinary Program in Computational Science, College of Natural Sciences, February 2021. Advisor: Myungjoo Kang.
    Biometric authentication using fingerprints requires a method for image denoising and inpainting to extract fingerprints from degraded fingerprint images. A few deep learning models for fingerprint image denoising and inpainting were proposed in the ChaLearn LAP Inpainting Competition - Track 3, ECCV 2018. In this thesis, a new deep learning model for fingerprint image denoising is proposed. The proposed model is adapted from FusionNet, a convolutional neural network based deep learning model for image segmentation. The performance of the proposed model was demonstrated using the dataset from the ECCV 2018 ChaLearn Competition. It was shown that the proposed model obtains better results than the models that achieved high performance in the competition.
    Contents: 1 Introduction; 2 Related Work (2.1 Residual Neural Network, 2.2 Convolutional Neural Networks for Semantic Segmentation: U-Net and FusionNet, 2.3 Recent Trends in Fingerprint Image Denoising); 3 Proposed Model (3.1 Model Architecture, 3.2 Architecture Detail: Residual Block, Encoder, Bridge, Decoder, 3.3 Loss Function); 4 Experiments (4.1 Experimental Setup, 4.2 Evaluation Metrics, 4.3 Dataset, 4.4 Experimental Results: Ablation Study, Comparison with Other Models); 5 Conclusion; Abstract (in Korean)
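
    A minimal PyTorch sketch of the FusionNet-style design outlined in the contents above (residual blocks, an encoder, a bridge, and a decoder with summation skip connections). Channel counts and depth are illustrative assumptions, not the configuration used in the thesis.

```python
# Sketch of a FusionNet-style encoder-bridge-decoder with residual blocks and
# summation skip connections for fingerprint image denoising.
# Depths and channel counts are assumptions, not the thesis configuration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(conv_block(ch, ch), conv_block(ch, ch))

    def forward(self, x):
        return x + self.body(x)  # short skip connection

class FingerprintDenoiser(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(conv_block(1, ch), ResidualBlock(ch))
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(conv_block(ch, 2 * ch), ResidualBlock(2 * ch))
        self.bridge = ResidualBlock(2 * ch)
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = nn.Sequential(conv_block(ch, ch), ResidualBlock(ch))
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.up(self.bridge(e2))
        # FusionNet-style long skip: add (rather than concatenate) the
        # encoder features to the upsampled decoder features.
        return self.out(self.dec1(d1 + e1))

if __name__ == "__main__":
    x = torch.rand(1, 1, 128, 128)           # degraded fingerprint patch
    print(FingerprintDenoiser()(x).shape)    # torch.Size([1, 1, 128, 128])
```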