
    Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction

    Single image super-resolution aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of information that can be retrieved from a low-resolution image, it is difficult to restore a clear, artifact-free image while still preserving enough image structure, such as texture. This paper presents a new single image super-resolution method based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient is first constructed via the optimal fractional-order gradient according to image similarity, and a minimum-energy function is then employed to reconstruct the final high-resolution image. Fractional-order gradient-based interpolation provides an additional degree of freedom, because the fractional order α acts as an extra free parameter that can be tuned to optimize reconstruction quality. The proposed method produces rich texture detail while maintaining structural similarity even under large zoom factors. Experimental results show that the proposed method outperforms current single image super-resolution techniques. Comment: 24 pages, 13 figures, to appear in ISA Transactions.
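    As a concrete illustration of the fractional-order gradient idea, the sketch below computes a Grünwald-Letnikov α-order difference along one image axis in NumPy. The function names are hypothetical, and the paper's self-optimizing mask and energy-minimization reconstruction are not shown.

```python
import numpy as np

def gl_coefficients(alpha, K):
    # Grunwald-Letnikov coefficients c_k = (-1)^k * binom(alpha, k),
    # computed via the recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k).
    c = np.empty(K + 1)
    c[0] = 1.0
    for k in range(1, K + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_gradient(img, alpha=1.5, K=3, axis=0):
    # alpha-order difference along one axis: sum_k c_k * I(x - k),
    # with periodic boundaries via np.roll for brevity.
    c = gl_coefficients(alpha, K)
    grad = np.zeros_like(img, dtype=float)
    for k, ck in enumerate(c):
        grad += ck * np.roll(img, k, axis=axis)
    return grad
```

    Setting α = 1 recovers the ordinary first difference; fractional α interpolates between integer-order behaviors, which is the extra degree of freedom the abstract refers to.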

    Deep Artifact-Free Residual Network for Single Image Super-Resolution

    Recently, convolutional neural networks have shown promising performance for single image super-resolution. In this paper, we propose the Deep Artifact-Free Residual (DAFR) network, which combines the merits of residual learning with the use of the ground-truth image as the training target. Our framework uses a deep model to extract the high-frequency information necessary for high-quality image reconstruction. We use a skip connection to feed the low-resolution image to the network before the final reconstruction. In this way, we can use the ground-truth images as the target and avoid misleading the network with artifacts in the difference image. To extract clean high-frequency information, we train the network in two steps. The first step is traditional residual learning, which uses the difference image as the target. The parameters trained in this step are then transferred to the main training in the second step. Our experimental results show that the proposed method achieves better quantitative and qualitative image quality than existing methods. Comment: 8 pages.
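    A minimal PyTorch sketch of this two-step scheme, with a toy three-layer body standing in for the paper's deep model; `ResidualSR`, `step1_loss`, and `step2_loss` are hypothetical names, not the authors' code.

```python
import torch
import torch.nn as nn

class ResidualSR(nn.Module):
    # Toy stand-in for the paper's deep model: predict high-frequency
    # detail, then add the (pre-upsampled) low-resolution input back
    # through a skip connection to form the full image.
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr_up):
        detail = self.body(lr_up)       # predicted high-frequency residual
        return detail, lr_up + detail   # skip connection -> full image

model, loss_fn = ResidualSR(), nn.L1Loss()

def step1_loss(lr_up, hr):
    detail, _ = model(lr_up)
    return loss_fn(detail, hr - lr_up)  # step 1: difference image as target

def step2_loss(lr_up, hr):
    _, sr = model(lr_up)
    return loss_fn(sr, hr)              # step 2: ground-truth image as target
```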

    Boosting High-Level Vision with Joint Compression Artifacts Reduction and Super-Resolution

    Due to limits on bandwidth and storage space, digital images are usually down-scaled and compressed when transmitted over networks, resulting in loss of detail and jarring artifacts that can lower the performance of high-level visual tasks. In this paper, we aim to generate an artifact-free high-resolution image from a low-resolution one compressed with an arbitrary quality factor by jointly exploring the compression artifacts reduction (CAR) and super-resolution (SR) tasks. We propose a context-aware joint CAR and SR neural network (CAJNN) that integrates both local and non-local features to solve CAR and SR in one stage; a deep reconstruction network then predicts the high-quality, high-resolution output. Evaluation on CAR and SR benchmark datasets shows that our CAJNN model outperforms previous methods while requiring 26.2% less runtime. Based on this model, we explore two critical challenges in high-level computer vision: optical character recognition of low-resolution text, and extremely tiny face detection. We demonstrate that CAJNN can serve as an effective image preprocessing method, improving accuracy for real-scene text recognition (from 85.30% to 85.75%) and average precision for tiny face detection (from 0.317 to 0.611). Comment: 8 pages, 6 figures, 5 tables. Accepted by the 25th ICPR (2020).
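    The non-local feature integration mentioned above can be illustrated with a standard embedded-Gaussian non-local block (in the style of Wang et al., 2018); this is a generic sketch of the technique, not CAJNN's actual architecture.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    # Minimal embedded-Gaussian non-local block: every spatial position
    # attends to every other, complementing local convolutions.
    def __init__(self, c):
        super().__init__()
        self.theta = nn.Conv2d(c, c // 2, 1)
        self.phi = nn.Conv2d(c, c // 2, 1)
        self.g = nn.Conv2d(c, c // 2, 1)
        self.out = nn.Conv2d(c // 2, c, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.phi(x).flatten(2)                     # (b, c/2, hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c/2)
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                         # residual fusion
```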

    Image Reconstruction with Predictive Filter Flow

    We propose a simple, interpretable framework for solving a wide range of image reconstruction problems such as denoising and deconvolution. Given a corrupted input image, the model synthesizes a spatially varying linear filter which, when applied to the input image, reconstructs the desired output. The model parameters are learned using supervised or self-supervised training. We test this model on three tasks: non-uniform motion blur removal, lossy-compression artifact reduction, and single image super-resolution. We demonstrate that our model substantially outperforms state-of-the-art methods on all of these tasks and is significantly faster than optimization-based approaches to deconvolution. Unlike models that directly predict output pixel values, the predicted filter flow is controllable and interpretable, which we demonstrate by visualizing the space of predicted filters for different tasks. Comment: https://www.ics.uci.edu/~skong2/pff.htm
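    The core operation, applying a per-pixel predicted linear filter to the input image, can be sketched in PyTorch as below; `apply_filter_flow` is a hypothetical name, and the network that predicts `filters` is omitted.

```python
import torch
import torch.nn.functional as F

def apply_filter_flow(img, filters, k=11):
    # img: (b, c, h, w); filters: (b, k*k, h, w), one k x k kernel per
    # pixel (the predictor is assumed to normalize each kernel, e.g.
    # with a softmax over the k*k dimension).
    b, c, h, w = img.shape
    patches = F.unfold(img, k, padding=k // 2)          # (b, c*k*k, h*w)
    patches = patches.view(b, c, k * k, h, w)
    return (patches * filters.unsqueeze(1)).sum(dim=2)  # weighted sum per pixel
```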

    IEGAN: Multi-purpose Perceptual Quality Image Enhancement Using Generative Adversarial Network

    Despite breakthroughs in image enhancement quality, an end-to-end solution for simultaneously recovering finer texture details and sharpness in degraded low-resolution images remains unsolved. Some existing approaches focus on minimizing the pixel-wise reconstruction error, which yields a high peak signal-to-noise ratio, but the enhanced images lack high-frequency details and are perceptually unsatisfying; that is, they fail to match the quality expected of a photo-realistic image. In this paper, we present the Image Enhancement Generative Adversarial Network (IEGAN), a versatile framework capable of inferring photo-realistic natural images for both artifact removal and super-resolution simultaneously. Moreover, we propose a new loss function combining a reconstruction loss, a feature loss, and an edge loss. The feature loss pushes the output image toward the natural image manifold, the edge loss preserves the sharpness of the output image, and the reconstruction loss provides the generator with low-level information about the quality of the generated images relative to the originals. Our approach is experimentally shown to recover photo-realistic textures from heavily compressed low-resolution images on public benchmarks and on our proposed high-resolution World100 dataset. Comment: Accepted at IEEE WACV 2019.
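    A hedged sketch of a combined loss of this shape, using a Sobel operator as a stand-in for the paper's edge term and purely illustrative weights; the actual IEGAN loss terms and weights may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

l1 = nn.L1Loss()

def sobel_edges(x):
    # Simple Sobel edge magnitude on the grayscale image, used here as
    # a stand-in for the paper's edge loss counterpart.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    k = torch.stack([kx, kx.t()]).unsqueeze(1).to(x.device)  # (2, 1, 3, 3)
    gray = x.mean(dim=1, keepdim=True)
    return F.conv2d(gray, k, padding=1).abs().sum(dim=1)

def iegan_style_loss(fake, real, feat_extractor,
                     w_rec=1.0, w_feat=0.1, w_edge=0.1):
    # Weighted sum of reconstruction, feature (perceptual) and edge
    # losses; feat_extractor is any fixed feature network (e.g. VGG).
    rec = l1(fake, real)
    feat = l1(feat_extractor(fake), feat_extractor(real))
    edge = l1(sobel_edges(fake), sobel_edges(real))
    return w_rec * rec + w_feat * feat + w_edge * edge
```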

    Deep Generative Adversarial Compression Artifact Removal

    Compression artifacts arise whenever a lossy compression algorithm is applied to an image. These artifacts remove details present in the original image or add noise and small structures; because of these effects they make images less pleasant to the human eye and may also degrade the performance of computer vision algorithms such as object detectors. Eliminating such artifacts when decompressing an image requires recovering the original image from a disturbed version. To this end, we present a feed-forward fully convolutional residual network trained in a generative adversarial framework. To provide a baseline, we show that our model can also be trained to optimize the Structural Similarity (SSIM) index, which is a better loss than the simpler Mean Squared Error (MSE). Our GAN produces images with more photo-realistic details than MSE- or SSIM-based networks. Moreover, we show that our approach can serve as a pre-processing step for object detection when images are degraded by compression to the point that state-of-the-art detectors fail; in this task, our GAN method outperforms MSE- and SSIM-trained networks. Comment: ICCV 2017 Camera Ready + Acknowledgements.
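    For reference, an SSIM-based training loss can be written compactly in PyTorch. The sketch below uses a uniform averaging window rather than the Gaussian window of the original SSIM definition, and it is an assumption-laden illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ssim_loss(x, y, window=11, C1=0.01 ** 2, C2=0.03 ** 2):
    # 1 - mean SSIM between x and y, inputs assumed in [0, 1].
    # Local statistics via average pooling (uniform window).
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return 1.0 - ssim.mean()
```

    Because SSIM is differentiable, minimizing `1 - SSIM` slots directly into the same training loop as an MSE loss, which is why it serves as a natural baseline here.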

    Method for motion artifact reduction using a convolutional neural network for dynamic contrast enhanced MRI of the liver

    Purpose: To improve the quality of dynamic contrast-enhanced MRI (DCE-MRI) images affected by motion artifacts and blurring using a deep learning approach. Methods: A multi-channel convolutional neural network (MARC) is proposed for reducing the motion artifacts and blurring caused by respiratory motion in DCE-MRI of the liver. The training datasets included images with and without respiration-induced motion artifacts or blurring, where the distortions were generated by simulating phase errors in k-space. Patient studies were conducted using a multi-phase T1-weighted spoiled gradient echo sequence of the liver containing breath-hold failures during data acquisition. The trained network was applied to the acquired images to analyze its filtering performance, and the intensities and contrast ratios before and after denoising were compared via Bland-Altman plots. Results: The proposed network significantly reduced the magnitude of the artifacts and blurring induced by respiratory motion, and the contrast ratios of the processed images were consistent with those of the unprocessed images. Conclusion: A deep learning based method for removing motion artifacts in liver DCE-MRI images was demonstrated and validated. Comment: 11 pages, 6 figures.
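    The k-space phase-error simulation used to synthesize training pairs can be sketched in NumPy as follows; the corruption model here (random per-line phase offsets on a 2D image) is a simplified assumption, not the paper's exact procedure.

```python
import numpy as np

def add_motion_artifact(image, max_phase=3.0, line_fraction=0.3):
    # Simulate respiratory motion by perturbing the phase of a random
    # subset of k-space lines, then transforming back to image space.
    k = np.fft.fftshift(np.fft.fft2(image))        # 2D image -> k-space
    idx = np.random.rand(k.shape[0]) < line_fraction
    phase = np.random.uniform(-max_phase, max_phase, size=idx.sum())
    k[idx, :] *= np.exp(1j * phase)[:, None]       # corrupt selected lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

    Pairing each clean slice with such a corrupted version yields the supervised (distorted, clean) training data the network learns from.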

    Blind Super-Resolution With Iterative Kernel Correction

    Deep learning based methods have come to dominate the super-resolution (SR) field thanks to their remarkable effectiveness and efficiency. Most of these methods assume that the blur kernel used during downsampling is predefined or known (e.g., bicubic). However, the blur kernels involved in real applications are complicated and unknown, resulting in severe performance drops for advanced SR methods. In this paper, we propose an Iterative Kernel Correction (IKC) method for blur kernel estimation in the blind SR problem, where the blur kernels are unknown. We observe that kernel mismatch produces regular artifacts (either over-sharpening or over-smoothing), which can be exploited to correct inaccurate blur kernels. We therefore introduce an iterative correction scheme, IKC, that achieves better results than direct kernel estimation. We further propose an effective SR network architecture using spatial feature transform (SFT) layers to handle multiple blur kernels, named SFTMD. Extensive experiments on synthetic and real-world images show that the proposed IKC method with SFTMD provides visually favorable SR results and state-of-the-art performance on the blind SR problem. Comment: Accepted by CVPR 2019. Table 1 in the CVF camera-ready version is corrected in this pre-print version.
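    Structurally, the method alternates between kernel-conditioned SR and kernel updates; a schematic sketch, with all model components assumed given and their internals omitted.

```python
def iterative_kernel_correction(lr, sr_model, corrector, kernel0, steps=5):
    # Alternate between super-resolving with the current kernel estimate
    # and refining that estimate: the corrector reads the artifacts in
    # the current SR result and predicts an additive kernel update.
    kernel = kernel0
    for _ in range(steps):
        sr = sr_model(lr, kernel)                  # SFTMD-style SR step
        kernel = kernel + corrector(sr, kernel)    # kernel correction step
    return sr, kernel
```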

    DeepLens: Shallow Depth Of Field From A Single Image

    We aim to generate high-resolution shallow depth-of-field (DoF) images from a single all-in-focus image with controllable focal distance and aperture size. To achieve this, we propose a novel neural network model comprising a depth prediction module, a lens blur module, and a guided upsampling module. All modules are differentiable and are learned from data. To train our depth prediction module, we collect a dataset of 2462 RGB-D images captured by mobile phones with dual-lens cameras, and use existing segmentation datasets to improve border prediction. We further leverage a synthetic dataset with known depth to supervise the lens blur and guided upsampling modules. The effectiveness of our system and training strategies is verified in experiments. Our method generates high-quality shallow DoF images at high resolution and produces significantly fewer artifacts than the baselines and existing solutions for single-image shallow DoF synthesis. Compared with the iPhone portrait mode, a state-of-the-art shallow DoF solution based on a dual-lens depth camera, our method generates comparable results while allowing greater flexibility in choosing focal points and aperture size, and it is not limited to one capture setup. Comment: 11 pages, 15 figures, accepted by SIGGRAPH Asia 2018, low-resolution version.
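    A crude layered rendering of depth-dependent blur conveys what the lens blur module must learn: the blur radius grows with distance from the focal plane. The sketch below bins the depth map and applies a box blur per bin, which is far simpler than the paper's differentiable module; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def layered_dof_blur(image, depth, focal_depth, aperture, n_layers=8):
    # image: (b, 3, h, w); depth: (b, 1, h, w). Split the scene into
    # depth bins and blur each bin with a kernel whose radius grows
    # with the bin's distance from the focal plane.
    out = torch.zeros_like(image)
    edges = torch.linspace(depth.min().item(), depth.max().item(),
                           n_layers + 1)
    for i in range(n_layers):
        lo, hi = edges[i], edges[i + 1]
        if i == n_layers - 1:
            hi = hi + 1e-6                       # include the far plane
        mask = ((depth >= lo) & (depth < hi)).float()
        coc = aperture * abs(0.5 * (lo + hi) - focal_depth)
        r = max(1, int(coc))                     # circle-of-confusion radius
        blurred = F.avg_pool2d(image, 2 * r + 1, stride=1, padding=r)
        out = out + blurred * mask
    return out
```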

    Demoiréing of Camera-Captured Screen Images Using Deep Convolutional Neural Network

    Taking photos of optoelectronic displays is a direct and spontaneous way of transferring data and keeping records, and it is widely practiced. However, due to the analog signal interference between the pixel grids of the display screen and the camera sensor array, objectionable moiré (alias) patterns appear in captured screen images. Because the moiré patterns are structured and highly variable, they are difficult to remove completely without affecting the underlying latent image. In this paper, we propose a deep convolutional neural network (DCNN) approach for demoiréing screen photos. The proposed DCNN consists of a coarse-scale network and a fine-scale network. In the coarse-scale network, the input image is first downsampled and then processed by stacked residual blocks to remove the moiré artifacts. The fine-scale network then upsamples the demoiréd low-resolution image back to the original resolution. Extensive experimental results demonstrate that the proposed technique efficiently removes moiré patterns from camera-captured screen images and outperforms existing methods.
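    A skeleton of this coarse-to-fine design in PyTorch, with toy convolutional stacks standing in for the paper's residual blocks; layer counts and names are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineDemoire(nn.Module):
    # Two-scale skeleton: remove moire patterns at half resolution,
    # then upsample back and refine at the original resolution.
    def __init__(self, c=64):
        super().__init__()
        self.coarse = nn.Sequential(
            nn.Conv2d(3, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, 3, 3, padding=1),
        )
        self.fine = nn.Sequential(
            nn.Conv2d(3, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, 3, 3, padding=1),
        )

    def forward(self, x):
        lo = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                           align_corners=False)
        lo = lo + self.coarse(lo)                    # coarse-scale removal
        up = F.interpolate(lo, size=x.shape[-2:], mode='bilinear',
                           align_corners=False)
        return up + self.fine(up)                    # fine-scale refinement
```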