
    Spatial-Adaptive Network for Single Image Denoising

    Previous works have shown that convolutional neural networks can achieve good performance in image denoising tasks. However, limited by the local, rigid convolution operation, these methods tend to produce oversmoothing artifacts. A deeper network structure could alleviate these problems, but at the cost of more computational overhead. In this paper, we propose a novel spatial-adaptive denoising network (SADNet) for efficient single-image blind noise removal. To adapt to changes in spatial textures and edges, we design a residual spatial-adaptive block. Deformable convolution is introduced to sample the spatially correlated features for weighting. An encoder-decoder structure with a context block is introduced to capture multiscale information. By removing noise from coarse to fine, a high-quality noise-free image can be obtained. We apply our method to both synthetic and real noisy image datasets. The experimental results demonstrate that our method can surpass the state-of-the-art denoising methods both quantitatively and visually.
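    A minimal sketch of the idea of a residual block built around deformable convolution, which this abstract describes; it is not the authors' SADNet, and the layer names, channel sizes, and offset-prediction scheme are assumptions for illustration.

```python
# Sketch: a residual block with deformable convolution (illustrative only).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ResidualDeformableBlock(nn.Module):
    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Per-pixel sampling offsets are predicted from the feature map itself
        # (a common pattern; the paper's exact design may differ).
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.act = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)                # spatially adaptive sampling locations
        y = self.act(self.deform_conv(x, offsets))   # sample spatially correlated features
        y = self.conv(y)
        return x + y                                 # residual connection

feats = torch.randn(1, 64, 32, 32)
out = ResidualDeformableBlock()(feats)               # same shape as the input
```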

    Sensitivity improvements for Shack-Hartmann wavefront sensors using total variation minimisation

    We investigate the improvements in Shack-Hartmann wavefront sensor image processing that can be realised using total variation minimisation techniques to remove noise from these images. We perform Monte Carlo simulations to demonstrate that at certain signal-to-noise levels, sensitivity improvements of up to one astronomical magnitude can be realised. We also present on-sky measurements taken with the CANARY adaptive optics system that demonstrate an improvement in performance when this technique is employed, and show that this algorithm can be implemented in a real-time control system. We conclude that total variation minimisation can lead to improvements in sensitivity of up to one astronomical magnitude when used with adaptive optics systems. Comment: Accepted for publication in MNRAS. Second version has typo fixed (now -> not), 3rd version has corrected author list.
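    A minimal sketch of total-variation denoising applied to a synthetic spot image of the kind a Shack-Hartmann sensor produces; the spot geometry, noise level, and regularisation weight are illustrative assumptions, not the settings used on CANARY.

```python
# Sketch: TV denoising of a noisy spot image (illustrative parameters).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[28:36, 28:36] = 1.0                          # a single bright spot
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Larger weight -> stronger total-variation regularisation (more smoothing).
denoised = denoise_tv_chambolle(noisy, weight=0.2)
```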

    Variational based Mixed Noise Removal with CNN Deep Learning Regularization

    In this paper, traditional model-based variational methods and learning-based algorithms are naturally integrated to address the mixed noise removal problem. Unlike single-type noise removal (e.g. Gaussian), it is challenging to accurately discriminate noise types and levels for each pixel. We propose a variational method to iteratively estimate the noise parameters, so that the algorithm can automatically classify the noise according to the different statistical parameters. With an operator splitting scheme, the proposed variational problem can be separated into four steps: regularization, synthesis, parameter estimation and noise classification. Each step corresponds to an optimization subproblem. To enforce the regularization, a deep learning method is employed to learn natural image priors. Compared with some model-based regularizations, the CNN regularizer can significantly improve the quality of the restored images. Compared with some learning-based methods, the synthesis step can produce better reconstructions by analyzing the recognized noise types and levels. In our method, the convolutional neural network (CNN) can be regarded as an operator associated with a variational functional. From this viewpoint, the proposed method can be extended to many image reconstruction and inverse problems. Numerical experiments in the paper show that our method can achieve state-of-the-art results for mixed noise removal.
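    A minimal plug-and-play-style sketch of alternating a data-fidelity step with a CNN regularisation step under operator splitting; `cnn_denoiser` is a hypothetical pretrained network, and the paper's parameter-estimation and noise-classification steps are reduced here to a fixed per-pixel weight map.

```python
# Sketch: operator splitting with a CNN regulariser (illustrative only).
import numpy as np

def mixed_noise_removal(noisy, cnn_denoiser, weight_map, lam=0.5, iters=10):
    """Alternate a weighted data-fidelity step with a CNN regularisation step.

    noisy       : observed image (H, W) as a float array
    cnn_denoiser: callable mapping an image to a denoised estimate (hypothetical)
    weight_map  : per-pixel confidence in the data, e.g. from noise classification
    """
    x = noisy.copy()
    for _ in range(iters):
        z = cnn_denoiser(x)                                  # regularisation step
        # Synthesis step: per-pixel weighted average of the data and the estimate.
        x = (weight_map * noisy + lam * z) / (weight_map + lam)
    return x
```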

    Fast and High Quality Highlight Removal from A Single Image

    Specular reflection exists widely in photography and causes the recorded color to deviate from its true value, so fast and high-quality highlight removal from a single natural image is of great importance. In spite of the progress made over the past decades in highlight removal, achieving wide applicability to the large diversity of natural scenes remains quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and the corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination-orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter illumination-parallel subspace, a property called the pure diffuse pixels distribution rule (PDDR) helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculations, and thus can remove highlights from high-resolution images quickly. Experiments show that this method achieves superior performance in various challenging cases. Comment: 11 pages, 10 figures, submitted to IEEE TIP.
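    A minimal numpy sketch of splitting pixel colours into components parallel and orthogonal to the illumination colour, the subspace decomposition the abstract relies on; the illumination vector is assumed to be given, rather than estimated as in the paper.

```python
# Sketch: project pixels onto and away from the illumination direction.
import numpy as np

def split_by_illumination(pixels, illum):
    """pixels: (N, 3) RGB values; illum: (3,) illumination colour."""
    u = illum / np.linalg.norm(illum)
    parallel = (pixels @ u)[:, None] * u       # component along the illumination
    orthogonal = pixels - parallel             # specular-free component
    return parallel, orthogonal

px = np.array([[0.9, 0.7, 0.6], [0.4, 0.3, 0.2]])
par, orth = split_by_illumination(px, np.array([1.0, 1.0, 1.0]))
```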

    Adaptive Real-Time Removal of Impulse Noise in Medical Images

    Noise is an important factor that degrades the quality of medical images. Impulse noise is a common type of noise caused by malfunctioning sensor elements or errors in the transmission of images. In medical images, due to the presence of a white foreground and black background, many pixels have intensities similar to impulse noise, and distinguishing between noisy and regular pixels is difficult. In software techniques, the accuracy of the noise removal is more important than the algorithm's complexity, but for hardware implementation, a low-complexity algorithm with acceptable accuracy is essential. In this paper, a low-complexity de-noising method is proposed that removes the noise by local analysis of the image blocks. The proposed method distinguishes non-noisy pixels that have noise-like intensities. All steps are designed to have low hardware complexity. Simulation results show that for different magnetic resonance images, the proposed method removes impulse noise with acceptable accuracy. Comment: 9 pages, 12 figures, 2 tables.
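    A minimal sketch of local impulse detection followed by median replacement, illustrating the general idea of separating noise-like intensities from genuine extreme pixels; the thresholds are illustrative assumptions and this is not the paper's hardware-oriented algorithm.

```python
# Sketch: detect likely impulse pixels and replace them with the local median.
import numpy as np
from scipy.ndimage import median_filter

def remove_impulse(image, low=5, high=250, diff_thresh=40):
    """image: 2-D uint8 array; thresholds are illustrative."""
    med = median_filter(image, size=3)
    # Candidate impulses sit at extreme intensities...
    suspicious = (image <= low) | (image >= high)
    # ...and are kept only if they also disagree strongly with their neighbourhood,
    # so extreme-but-consistent foreground/background pixels are preserved.
    noisy = suspicious & (np.abs(image.astype(int) - med.astype(int)) > diff_thresh)
    out = image.copy()
    out[noisy] = med[noisy]
    return out
```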

    Missing Data Reconstruction in Remote Sensing image with a Unified Spatial-Temporal-Spectral Deep Convolutional Neural Network

    Because of internal malfunctions of satellite sensors and poor atmospheric conditions such as thick cloud, acquired remote sensing data often suffer from missing information, i.e., the data usability is greatly reduced. In this paper, a novel method for missing information reconstruction in remote sensing images is proposed. The unified spatial-temporal-spectral framework, based on a deep convolutional neural network (STS-CNN), combines a single deep convolutional neural network with spatial-temporal-spectral supplementary information. In addition, whereas most methods can only deal with a single missing-information reconstruction task, the proposed approach can solve three typical missing-information reconstruction tasks: 1) dead lines in Aqua MODIS band 6; 2) the Landsat ETM+ Scan Line Corrector (SLC)-off problem; and 3) thick cloud removal. It should be noted that the proposed model can use multi-source data (spatial, spectral, and temporal) as the input of the unified framework. The results of both simulated and real-data experiments demonstrate that the proposed model is highly effective in the three missing-information reconstruction tasks listed above. Comment: To be published in IEEE Transactions on Geoscience and Remote Sensing.
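    A minimal sketch of the multi-source-input idea: the corrupted band is concatenated with temporal and spectral auxiliary bands and fed to a single reconstruction CNN. The architecture, channel counts, and input naming are assumptions for illustration and do not reproduce STS-CNN.

```python
# Sketch: one CNN taking spatial + temporal + spectral sources as stacked channels.
import torch
import torch.nn as nn

class MultiSourceReconstructionNet(nn.Module):
    def __init__(self, in_channels: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),       # reconstructed band
        )

    def forward(self, corrupted, temporal_ref, spectral_ref):
        # Stack the corrupted band with temporal and spectral auxiliary bands.
        x = torch.cat([corrupted, temporal_ref, spectral_ref], dim=1)
        return self.net(x)

net = MultiSourceReconstructionNet()
recon = net(torch.randn(1, 1, 64, 64),
            torch.randn(1, 1, 64, 64),
            torch.randn(1, 1, 64, 64))
```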

    External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Most existing image denoising methods learn image priors from either external data or the noisy image itself to remove noise. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of the corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe with simple distributions such as the Gaussian, making real-world noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real-world noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real-world noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real-world noisy images. Comment: 14 pages, 13 figures, IEEE Trans. Image Processing 27(6): 2996-3010 (2018).
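    A minimal sketch of the orthogonal-dictionary reconstruction idea: learn an orthonormal patch basis by SVD and denoise by shrinking small coefficients. It only illustrates why orthogonal dictionaries make reconstruction cheap; the paper's external-guided internal learning is not reproduced, and the threshold is an assumption.

```python
# Sketch: orthogonal patch dictionary via SVD + coefficient hard-thresholding.
import numpy as np

def orthogonal_dictionary(patches):
    """patches: (num_patches, patch_dim). Returns an orthonormal basis (patch_dim, patch_dim)."""
    centered = patches - patches.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt.T                                   # columns are orthonormal atoms

def denoise_patches(noisy_patches, dictionary, threshold=0.1):
    coeffs = noisy_patches @ dictionary           # analysis: trivial inverse for orthogonal atoms
    coeffs[np.abs(coeffs) < threshold] = 0.0      # suppress small (noise-dominated) coefficients
    return coeffs @ dictionary.T                  # synthesis

clean = np.random.rand(500, 64)                   # stand-in for clean training patches
D = orthogonal_dictionary(clean)
restored = denoise_patches(clean + 0.05 * np.random.randn(500, 64), D)
```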

    CFSNet: Toward a Controllable Feature Space for Image Restoration

    Deep learning methods have achieved great progress in image restoration as measured by specific metrics (e.g., PSNR, SSIM). However, the perceptual quality of the restored image is relatively subjective, and it is necessary for users to control the reconstruction result according to personal preferences or image characteristics, which cannot be done with existing deterministic networks. This motivates us to design a unified interactive framework for general image restoration tasks. Under this framework, users can control a continuous transition between different objectives, e.g., the perception-distortion trade-off in image super-resolution, or the trade-off between noise reduction and detail preservation. We achieve this goal by controlling the latent features of the designed network. Specifically, our proposed framework, named Controllable Feature Space Network (CFSNet), couples two branches based on different objectives. Our framework can adaptively learn the coupling coefficients of different layers and channels, which provides finer control of the restored image quality. Experiments on several typical image restoration tasks fully validate the effectiveness of the proposed method. Code is available at https://github.com/qibao77/CFSNet. Comment: Accepted by ICCV 2019.
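    A minimal sketch of the "two branches coupled by a controllable coefficient" idea: a user-supplied alpha interpolates between a distortion-oriented branch and a perception-oriented one. The branch definitions are placeholders, and alpha is applied as a single scalar here rather than as the per-layer, per-channel learned coefficients described in the abstract.

```python
# Sketch: two objective-specific branches blended by a user-controlled alpha.
import torch
import torch.nn as nn

class ControllableBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.main = nn.Conv2d(channels, channels, 3, padding=1)    # e.g. distortion-oriented branch
        self.tuning = nn.Conv2d(channels, channels, 3, padding=1)  # e.g. perception-oriented branch

    def forward(self, x: torch.Tensor, alpha: float) -> torch.Tensor:
        # alpha in [0, 1] trades off the two objectives for this block.
        return alpha * self.main(x) + (1.0 - alpha) * self.tuning(x)

block = ControllableBlock()
feats = torch.randn(1, 64, 32, 32)
out_sharp = block(feats, alpha=1.0)     # favour the first objective
out_smooth = block(feats, alpha=0.2)    # favour the second objective
```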

    Weighted Low-rank Tensor Recovery for Hyperspectral Image Restoration

    Hyperspectral imaging, providing abundant spatial and spectral information simultaneously, has attracted a lot of interest in recent years. Unfortunately, due to hardware limitations, the hyperspectral image (HSI) is vulnerable to various degradations, such as noise (random noise; HSI denoising), blur (Gaussian and uniform blur; HSI deblurring), and downsampling (both spectral and spatial; HSI super-resolution). Previous HSI restoration methods are designed for one specific task only. Besides, most of them start from 1-D vector or 2-D matrix models and cannot fully exploit the structural spectral-spatial correlation in the 3-D HSI. To overcome these limitations, in this work we propose a unified low-rank tensor recovery model for comprehensive HSI restoration tasks, in which non-local similarity between spectral-spatial cubes and spectral correlation are simultaneously captured by third-order tensors. Further, to improve its capability and flexibility, we formulate it as a weighted low-rank tensor recovery (WLRTR) model by treating the singular values differently, and study its analytical solution. We also handle the stripe noise that is particular to HSI as a gross error by extending WLRTR to robust principal component analysis (WLRTR-RPCA). Extensive experiments demonstrate that the proposed WLRTR models consistently outperform state-of-the-art methods in typical low-level vision HSI tasks, including denoising, destriping, deblurring and super-resolution. Comment: 22 pages, 22 figures.
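    A minimal sketch of weighted singular-value thresholding on a matricised tensor slice, the kind of operation that underlies treating singular values differently in weighted low-rank recovery; the inverse-singular-value weighting used here is a common convention and an assumption, not necessarily the WLRTR rule.

```python
# Sketch: weighted singular-value thresholding (larger singular values shrunk less).
import numpy as np

def weighted_svt(matrix, tau=1.0, eps=1e-6):
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    weights = tau / (s + eps)               # weight inversely proportional to the singular value
    s_shrunk = np.maximum(s - weights, 0.0)
    return (u * s_shrunk) @ vt              # low-rank reconstruction

noisy_slice = np.random.rand(50, 50)
low_rank = weighted_svt(noisy_slice, tau=2.0)
```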

    Phase asymmetry guided adaptive fractional-order total variation and diffusion for feature-preserving ultrasound despeckling

    It is essential for ultrasound despeckling to remove speckle noise while simultaneously preserving edge features for accurate diagnosis and analysis in many applications. To preserve real edges such as ramp edges and low-contrast edges, we first detect edges using a phase-based measure called phase asymmetry (PAS), which can distinguish small differences in transition border regions and varies from 0 to 1, taking 0 in ideal smooth regions and 1 at ideal step edges. We further propose three strategies to properly preserve edges. First, observing that the fractional-order anisotropic diffusion (FAD) filter performs well in smooth regions while the fractional-order TV (FTV) filter performs better at edges, we leverage the PAS metric to keep a balance between the FAD and FTV filters, achieving the best preservation of ramp edges. Second, considering that the FAD filter fails to protect low-contrast edges when only gradient information is integrated into the diffusion coefficient, we also integrate the PAS metric into the diffusion coefficient to properly preserve low-contrast edges. Finally, unlike fixed fractional-order diffusion filters, which neglect the differences between smooth regions and transition border regions, an adaptive fractional order is implemented based on the PAS metric to enhance edges. The experimental results show that our method outperforms other state-of-the-art ultrasound despeckling filters in both speckle reduction and feature preservation.
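    A minimal sketch of the edge-guided blending idea: a measure in [0, 1] weights an edge-preserving result against a smoothing result. Here a normalised gradient magnitude stands in for the paper's phase asymmetry, and the two input filters are simple placeholders for the FTV and FAD filters.

```python
# Sketch: blend two despeckling results using an edge measure as the weight.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, gaussian_filter, median_filter

def edge_guided_blend(image, edge_filtered, smooth_filtered):
    g = gaussian_gradient_magnitude(image, sigma=1.0)
    edge_weight = g / (g.max() + 1e-8)            # crude stand-in for PAS, in [0, 1]
    # Near edges (weight -> 1) trust the edge-preserving result;
    # in smooth regions (weight -> 0) trust the smoothing result.
    return edge_weight * edge_filtered + (1.0 - edge_weight) * smooth_filtered

img = np.random.rand(64, 64)
blended = edge_guided_blend(img,
                            median_filter(img, size=3),        # placeholder for the FTV result
                            gaussian_filter(img, sigma=2.0))   # placeholder for the FAD result
```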