
    A Learning-Based Framework for Line-Spectra Super-resolution

    We propose a learning-based approach for estimating the spectrum of a multisinusoidal signal from a finite number of samples. A neural network is trained to approximate the spectra of such signals on simulated data. The proposed methodology is very flexible: adapting to different signal and noise models only requires modifying the training data accordingly. Numerical experiments show that the approach performs competitively with classical methods designed for additive Gaussian noise at a range of noise levels, and is also effective in the presence of impulsive noise. Comment: Accepted at ICASSP 201
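    A minimal sketch of this idea, assuming a small fully connected PyTorch network that maps simulated noisy multisinusoidal samples to an amplitude spectrum discretized on a fixed frequency grid; the grid size, architecture, and training loop are illustrative placeholders, not the authors' setup.

```python
# Sketch: train a small network to map noisy multisinusoidal samples to a
# discretized amplitude spectrum (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

N, G = 64, 256          # number of time samples, frequency-grid size
t = torch.arange(N, dtype=torch.float32)

def simulate(batch, k_max=4, noise_std=0.1):
    """Random multisinusoidal signals and their on-grid amplitude spectra."""
    x = torch.zeros(batch, N)
    spec = torch.zeros(batch, G)
    for b in range(batch):
        k = torch.randint(1, k_max + 1, (1,)).item()
        bins = torch.randint(0, G, (k,))
        amps = torch.rand(k) + 0.5
        for f_bin, a in zip(bins, amps):
            f = f_bin.item() / G          # normalized frequency in [0, 1)
            x[b] += a * torch.cos(2 * torch.pi * f * t + 2 * torch.pi * torch.rand(1))
            spec[b, f_bin] = a
        x[b] += noise_std * torch.randn(N)   # swap for impulsive noise if desired
    return x, spec

model = nn.Sequential(nn.Linear(N, 512), nn.ReLU(),
                      nn.Linear(512, 512), nn.ReLU(),
                      nn.Linear(512, G), nn.Softplus())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                      # toy training loop
    x, spec = simulate(128)
    loss = nn.functional.mse_loss(model(x), spec)
    opt.zero_grad(); loss.backward(); opt.step()
```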

    Deep learning-based super-resolution in coherent imaging systems

    We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel size-limited and diffraction-limited coherent imaging systems. We experimentally validated the capabilities of this deep learning-based coherent imaging approach by super-resolving complex images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction or enhancement problems in optics. Comment: 18 pages, 9 figures, 3 tables
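    A hedged sketch of the adversarial training pattern described above, assuming paired low- and high-resolution image tensors; the tiny generator and discriminator below are placeholders, not the authors' network.

```python
# Sketch: GAN-style super-resolution training step (illustrative architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Sequential(                 # low-res -> high-res (x2 upsampling)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(32, 1, 3, padding=1))
disc = nn.Sequential(                # real/fake critic on high-res patches
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(lr_img, hr_img):
    fake = gen(lr_img)
    # Discriminator: push real high-res toward 1, generated toward 0.
    real_logits, fake_logits = disc(hr_img), disc(fake.detach())
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
             F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool the discriminator while staying close to the target (L1).
    adv_logits = disc(fake)
    g_loss = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits)) + \
             F.l1_loss(fake, hr_img)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

lr_img = torch.rand(4, 1, 32, 32)    # dummy pixel-size-limited inputs
hr_img = torch.rand(4, 1, 64, 64)    # dummy high-resolution targets
train_step(lr_img, hr_img)
```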

    Aerial Spectral Super-Resolution using Conditional Adversarial Networks

    Inferring spectral signatures from ground-based natural images has attracted considerable interest in applied deep learning. In contrast to the spectra of ground-based images, aerial spectral images have low spatial resolution and suffer from higher noise interference. In this paper, we train a conditional adversarial network to learn an inverse mapping from a trichromatic space to 31 spectral bands between 400 and 700 nm. The network is trained on AeroCampus, a first-of-its-kind aerial hyperspectral dataset. AeroCampus consists of high spatial resolution color images and low spatial resolution hyperspectral images (HSI). Color images synthesized from the 31 spectral bands are used to train our network. With a baseline root mean square error of 2.48 on the synthesized RGB test data, we show that it is possible to generate spectral signatures in aerial imagery.
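    A rough sketch of the conditional mapping described here, assuming a pix2pix-style setup in which the generator turns a 3-channel RGB tensor into 31 spectral bands and the discriminator judges (RGB, HSI) pairs; layer sizes are placeholders, not the trained network.

```python
# Sketch: conditional generator and discriminator for RGB -> 31-band recovery.
import torch
import torch.nn as nn

class SpectralGenerator(nn.Module):
    """Fully convolutional 3 -> 31 channel mapping; spatial size is preserved."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 31, 3, padding=1), nn.Sigmoid())
    def forward(self, rgb):
        return self.net(rgb)

class PairDiscriminator(nn.Module):
    """Scores (RGB, HSI) pairs, so the critic is conditioned on the RGB input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 31, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, rgb, hsi):
        return self.net(torch.cat([rgb, hsi], dim=1))

rgb = torch.rand(2, 3, 64, 64)
hsi_pred = SpectralGenerator()(rgb)          # (2, 31, 64, 64)
score = PairDiscriminator()(rgb, hsi_pred)   # patch-level real/fake logits
```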

    Super-Resolution 1H Magnetic Resonance Spectroscopic Imaging utilizing Deep Learning

    Magnetic resonance spectroscopic imaging (SI) is a unique imaging technique that provides biochemical information from in vivo tissues. The 1H spectra acquired from several spatial regions are quantified to yield metabolite concentrations reflective of tissue metabolism. However, since these metabolites are found in tissues at very low concentrations, SI is often acquired with limited spatial resolution. In this work, we test the hypothesis that deep learning is able to upscale low-resolution SI, together with the T1-weighted (T1w) image, to reconstruct high-resolution SI. We report a novel densely connected UNet (D-UNet) architecture capable of producing super-resolution spectroscopic images. The inputs to the D-UNet are the T1w image and the low-resolution SI image, while the output is the high-resolution SI. The results of the D-UNet are compared both qualitatively and quantitatively to simulated and in vivo high-resolution SI. It is found that this deep learning approach can produce high-quality spectroscopic images and reconstruct entire 1H spectra from low-resolution acquisitions, which can greatly advance the current SI workflow. Comment: 8 figures, 1 table
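    A minimal sketch of the two-input idea, assuming the low-resolution SI map is first interpolated onto the T1w grid and the two are concatenated channel-wise; the plain convolutional stack below is only an illustrative stand-in for the densely connected U-Net.

```python
# Sketch: fuse a T1-weighted image with interpolated low-resolution SI
# to predict a high-resolution SI map (stand-in for the D-UNet).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SIUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))
    def forward(self, t1w, si_low):
        # Interpolate the coarse SI map onto the anatomical (T1w) grid.
        si_up = F.interpolate(si_low, size=t1w.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.net(torch.cat([t1w, si_up], dim=1))

t1w = torch.rand(1, 1, 128, 128)      # high-resolution anatomical image
si_low = torch.rand(1, 1, 16, 16)     # low-resolution metabolite map
si_high = SIUpscaler()(t1w, si_low)   # (1, 1, 128, 128) predicted SI
```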

    Hyperspectral recovery from RGB images using Gaussian Processes

    We propose to recover spectral details from RGB images of known spectral quantization by modeling natural spectra under Gaussian Processes and combining them with the RGB images. Our technique exploits Gaussian Process kernels to model the relative smoothness of reflectance spectra, and encourages non-negativity in the resulting signals for better estimation of the reflectance values. The Gaussian Processes are inferred in sets using clusters of spatio-spectrally correlated hyperspectral training patches. Each set is transformed to match the spectral quantization of the test RGB image. We extract overlapping patches from the RGB image and match them to the hyperspectral training patches by spectrally transforming the latter. The RGB patches are encoded over the transformed Gaussian Processes related to those hyperspectral patches, and the resulting image is constructed by combining the codes with the original Processes. Our approach infers the desired Gaussian Processes under a fully Bayesian model inspired by the Beta-Bernoulli Process, for which we also present the inference procedure. A thorough evaluation using three hyperspectral datasets demonstrates the effective extraction of spectral details from RGB images by the proposed technique. Comment: Revision submitted to IEEE TPAMI
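    The core smoothness-prior idea can be illustrated in a few lines, assuming scikit-learn's GaussianProcessRegressor with an RBF kernel over wavelength; this toy example stands in for the paper's clustered, Beta-Bernoulli-based inference and uses made-up data.

```python
# Sketch: model a reflectance spectrum as a smooth Gaussian Process over
# wavelength, then enforce non-negativity by clipping (toy illustration).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

wavelengths = np.linspace(400, 700, 31).reshape(-1, 1)   # nm, 31 bands
observed_idx = np.arange(0, 31, 5)                        # sparse noisy samples
reflectance = 0.5 + 0.3 * np.sin(wavelengths[:, 0] / 60.0)
noisy = reflectance[observed_idx] + 0.02 * np.random.randn(observed_idx.size)

# The RBF length-scale encodes how smooth natural spectra are assumed to be.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=80.0), alpha=1e-3)
gpr.fit(wavelengths[observed_idx], noisy)

mean, std = gpr.predict(wavelengths, return_std=True)
recovered = np.clip(mean, 0.0, None)   # reflectance cannot be negative
```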

    NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image

    This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track, where HS images are estimated from noise-free RGB images that are themselves calculated numerically from the ground-truth HS images and supplied spectral sensitivity functions; and (ii) a "Real World" track, simulating capture by an uncalibrated and unknown camera, where the HS images are recovered from noisy, JPEG-compressed RGB images. A new, larger-than-ever natural hyperspectral image data set is presented, containing a total of 510 HS images. The Clean and Real World tracks had 103 and 78 registered participants, respectively, with 14 teams competing in the final testing phase. A description of the proposed methods, alongside their challenge scores and an extensive evaluation of the top-performing methods, is also provided. Together, these gauge the state of the art in spectral reconstruction from an RGB image.
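    For intuition, the Clean-track RGB images can be synthesized from an HS cube and camera sensitivity curves roughly as below; the sensitivity matrix here is random and purely illustrative, not the challenge's supplied functions.

```python
# Sketch: numerically render an RGB image from a hyperspectral cube using
# spectral sensitivity functions (illustrative of the "Clean" track setup).
import numpy as np

H, W, B = 64, 64, 31
hsi = np.random.rand(H, W, B)          # ground-truth hyperspectral cube
sensitivity = np.random.rand(B, 3)     # placeholder camera sensitivities (B x 3)
sensitivity /= sensitivity.sum(axis=0, keepdims=True)   # normalize each channel

# Each RGB channel is a sensitivity-weighted integral over the spectral bands.
rgb = np.einsum("hwb,bc->hwc", hsi, sensitivity)         # (H, W, 3)
```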

    Learned Spectral Super-Resolution

    We describe a novel method for blind, single-image spectral super-resolution. While conventional super-resolution aims to increase the spatial resolution of an input image, our goal is to spectrally enhance the input, i.e., generate an image with the same spatial resolution but a greatly increased number of narrow (hyper-spectral) wavelength bands. Just as the spatial statistics of natural images have rich structure, which one can exploit as a prior to predict high-frequency content from a low-resolution image, the same is true in the spectral domain: the materials and lighting conditions of the observed world induce structure in the spectrum of wavelengths observed at a given pixel. Surprisingly, very little work exists that attempts to exploit this insight and achieve blind spectral super-resolution from single images. We start from the conjecture that, just like in the spatial domain, we can learn the statistics of natural image spectra, and with their help generate finely resolved hyper-spectral images from RGB input. Technically, we follow current best practice and implement a convolutional neural network (CNN), which is trained to carry out the end-to-end mapping from an entire RGB image to the corresponding hyperspectral image of equal size. We demonstrate spectral super-resolution both for conventional RGB images and for multi-spectral satellite data, outperforming the state of the art. Comment: Submitted to ICCV 2017 (10 pages, 8 figures)
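    A minimal sketch of the end-to-end mapping described here, assuming paired RGB/hyperspectral training images and a small fully convolutional network; the layer counts and the L1 loss are illustrative choices, not the paper's exact configuration.

```python
# Sketch: end-to-end CNN mapping an RGB image to a same-size hyperspectral cube.
import torch
import torch.nn as nn

cnn = nn.Sequential(                       # spatial size preserved, 3 -> 31 bands
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 31, 3, padding=1))
opt = torch.optim.Adam(cnn.parameters(), lr=1e-4)

def train_step(rgb, hsi):
    """One supervised step on a paired (RGB, hyperspectral) image batch."""
    loss = nn.functional.l1_loss(cnn(rgb), hsi)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

rgb = torch.rand(2, 3, 128, 128)           # dummy RGB batch
hsi = torch.rand(2, 31, 128, 128)          # dummy hyperspectral targets
train_step(rgb, hsi)
```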

    Simultaneous Estimation of Noise Variance and Number of Peaks in Bayesian Spectral Deconvolution

    The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference that enables us to separate multipeak spectra into single peaks statistically; it consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on the Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which suffers from local minima and saddle points since multipeak models are nonlinear and hierarchical. Our framework escapes local minima and saddle points by using the exchange Monte Carlo method, and calculates the Bayes free energy via the multiple histogram method. We present a simulation demonstrating the efficiency of our framework and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.
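    A compact sketch of the exchange (replica-exchange) Monte Carlo step at the heart of this framework, assuming a toy two-peak Gaussian model, a fixed noise variance, and two temperatures; the data, priors, and free-energy calculation are omitted, so this is an illustration of the sampler only.

```python
# Sketch: exchange (replica-exchange) Monte Carlo for fitting a two-peak
# Gaussian model to a noisy spectrum (toy stand-in for the full framework).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)

def model(theta):
    # theta = [a1, mu1, w1, a2, mu2, w2]: two Gaussian peaks.
    a1, m1, w1, a2, m2, w2 = theta
    return a1 * np.exp(-(x - m1) ** 2 / (2 * w1 ** 2)) + \
           a2 * np.exp(-(x - m2) ** 2 / (2 * w2 ** 2))

true = np.array([1.0, 3.0, 0.5, 0.7, 6.5, 0.8])
data = model(true) + 0.05 * rng.normal(size=x.size)

def neg_log_post(theta, noise_var=0.05 ** 2):
    return 0.5 * np.sum((data - model(theta)) ** 2) / noise_var

temps = [1.0, 4.0]                                   # replica temperatures
thetas = [rng.uniform(0.1, 5.0, size=6) for _ in temps]
energies = [neg_log_post(t) for t in thetas]

for step in range(5000):
    for r, T in enumerate(temps):                    # Metropolis update per replica
        prop = thetas[r] + 0.05 * rng.normal(size=6)
        e_prop = neg_log_post(prop)
        if rng.random() < np.exp(min(0.0, (energies[r] - e_prop) / T)):
            thetas[r], energies[r] = prop, e_prop
    # Occasionally attempt to swap configurations between temperatures.
    if step % 10 == 0:
        d = (1 / temps[0] - 1 / temps[1]) * (energies[0] - energies[1])
        if rng.random() < np.exp(min(0.0, d)):
            thetas[0], thetas[1] = thetas[1], thetas[0]
            energies[0], energies[1] = energies[1], energies[0]
```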

    Hybrid Noise Removal in Hyperspectral Imagery With a Spatial-Spectral Gradient Network

    The existence of hybrid noise in hyperspectral images (HSIs) severely degrades the data quality, reduces the interpretation accuracy of HSIs, and restricts subsequent HSI applications. In this paper, the spatial-spectral gradient network (SSGN) is presented for mixed noise removal in HSIs. The proposed method employs a spatial-spectral gradient learning strategy that accounts for the directional spatial structure of sparse noise and for spectral differences, using this complementary information to better extract the intrinsic deep features of HSIs. Based on a fully cascaded multi-scale convolutional network, SSGN can simultaneously deal with the different types of noise in different HSIs or spectra using a single model. The simulated and real-data experiments undertaken in this study confirm that the proposed SSGN performs better at mixed noise removal than other state-of-the-art HSI denoising algorithms in terms of evaluation indices, visual assessment, and time consumption. Comment: Accepted by IEEE TGRS
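    A small sketch of the kind of spatial and spectral gradient cubes such a strategy would feed to the network, computed with simple finite differences; this reflects the general idea only, not SSGN's exact inputs.

```python
# Sketch: spatial and spectral gradients of an HSI cube via finite differences.
import numpy as np

hsi = np.random.rand(31, 128, 128)        # (bands, height, width) noisy cube

# Horizontal and vertical spatial gradients highlight directional sparse noise
# (e.g. stripes), while the spectral gradient highlights band-to-band outliers.
grad_x = np.diff(hsi, axis=2, append=hsi[:, :, -1:])
grad_y = np.diff(hsi, axis=1, append=hsi[:, -1:, :])
grad_spec = np.diff(hsi, axis=0, append=hsi[-1:, :, :])

inputs = np.stack([hsi, grad_x, grad_y, grad_spec], axis=0)   # network inputs
```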

    Data recovery in computational fluid dynamics through deep image priors

    One of the challenges encountered by computational simulations at exascale is the reliability of simulations in the face of hardware and software faults. These faults, expected to increase with the complexity of computational systems, lead to the loss of simulation data and to simulation failure, and are currently addressed through a checkpoint-restart paradigm. Focusing specifically on computational fluid dynamics simulations, this work proposes a method that uses a deep convolutional neural network to recover simulation data. This data recovery method (i) is agnostic to the flow configuration and geometry, (ii) does not require extensive training data, and (iii) is accurate for very different physical flows. Results indicate that the use of deep image priors for data recovery is more accurate than standard recovery techniques such as Gaussian process regression, also known as Kriging. Data recovery is performed for two canonical fluid flows: laminar flow around a cylinder and homogeneous isotropic turbulence. For data recovery of the laminar flow around a cylinder, results indicate similar performance between the proposed method and Gaussian process regression across a wide range of mask sizes. For homogeneous isotropic turbulence, data recovery through the deep convolutional neural network exhibits an error in relevant turbulent quantities approximately three times smaller than that of Gaussian process regression. Forward simulations using the recovered data show that the enstrophy decay is captured to within 10% using the deep convolutional neural network approach. Although demonstrated specifically for data recovery of fluid flows, this technique can be used in a wide range of applications, including particle image velocimetry, visualization, and computational simulations of physical processes beyond the Navier-Stokes equations.
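    A compact sketch of the deep image prior recovery loop, assuming a corrupted 2-D field with a known mask of lost values; the small CNN, random input code, and iteration count are placeholders, not the paper's configuration.

```python
# Sketch: deep image prior for recovering masked flow-field data.
import torch
import torch.nn as nn

H, W = 64, 64
field = torch.rand(1, 1, H, W)                  # "true" 2-D flow quantity (dummy)
mask = (torch.rand(1, 1, H, W) > 0.3).float()   # 1 = observed, 0 = lost
observed = field * mask

net = nn.Sequential(                            # untrained CNN acts as the prior
    nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))
z = torch.randn(1, 8, H, W)                     # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(500):
    out = net(z)
    # Fit only the observed pixels; the network's inductive bias fills the rest.
    loss = ((out - observed) ** 2 * mask).mean()
    opt.zero_grad(); loss.backward(); opt.step()

recovered = net(z).detach()                     # reconstruction over the full field
```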