
    Multi-BVOC Super-Resolution Exploiting Compounds Inter-Connection

    Biogenic Volatile Organic Compounds (BVOCs) emitted from the terrestrial ecosystem into the Earth's atmosphere are an important component of atmospheric chemistry. Because measurements are scarce, reliable enhancement of BVOC emission maps can provide denser data for atmospheric chemistry, climate, and air quality models. In this work, we propose a strategy to super-resolve coarse BVOC emission maps by simultaneously exploiting the contributions of different compounds. To this end, we first investigate the spatial inter-connections between several BVOC species. Then, we exploit the found similarities to build a Multi-Image Super-Resolution (MISR) system in which emission maps associated with diverse compounds are aggregated to boost Super-Resolution (SR) performance. We compare different configurations regarding the species and the number of joined BVOCs. Our experimental results show that incorporating the relationships among BVOCs into the process can substantially improve the accuracy of the super-resolved maps. Interestingly, the best results are achieved when we aggregate the emission maps of strongly uncorrelated compounds. This peculiarity seems to confirm what has already been observed in other data domains, i.e., joining uncorrelated information is more helpful than joining correlated information for boosting MISR performance. Nonetheless, the proposed work represents the first attempt at SR of BVOC emissions through the fusion of multiple different compounds. Comment: 5 pages, 4 figures, 1 table, accepted at EURASIP-EUSIPCO 202
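
    As an illustration of the multi-compound idea, the sketch below stacks coarse emission maps of several compounds (here random synthetic arrays, not real BVOC data) into one multi-channel tensor and upsamples it by nearest-neighbour replication: the kind of joint input a MISR network could aggregate. Map sizes and the scale factor are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def stack_and_upsample(maps, scale):
    """Stack coarse emission maps of several compounds along a channel
    axis and upsample by nearest-neighbour replication, producing a
    multi-channel input that a MISR network could consume."""
    stacked = np.stack(maps, axis=0)                            # (K, H, W)
    return stacked.repeat(scale, axis=1).repeat(scale, axis=2)  # (K, sH, sW)

# three hypothetical compounds on a coarse 8x8 grid, 4x target scale
coarse = [np.random.rand(8, 8) for _ in range(3)]
hr_input = stack_and_upsample(coarse, scale=4)
print(hr_input.shape)  # (3, 32, 32)
```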

    LoGSRN: deep super resolution network for Digital Elevation Model


    Multi-contrast MRI Super-resolution via Implicit Neural Representations

    Clinical routine and retrospective cohorts commonly include multi-parametric Magnetic Resonance Imaging; however, scans are mostly acquired in different anisotropic 2D views due to signal-to-noise-ratio and scan-time constraints. The views thus acquired suffer from poor out-of-plane resolution, which affects downstream volumetric image analysis that typically requires isotropic 3D scans. Combining different views of multi-contrast scans into high-resolution isotropic 3D scans is challenging due to the lack of a large training cohort, which calls for a subject-specific framework. This work proposes a novel solution to this problem leveraging Implicit Neural Representations (INR). Our proposed INR jointly learns two different contrasts of complementary views in a continuous spatial function and benefits from exchanging anatomical information between them. Trained within minutes on a single commodity GPU, our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets. Using Mutual Information (MI) as a metric, we find that our model converges to an optimum MI amongst sequences, achieving anatomically faithful reconstruction. Code is available at: https://github.com/jqmcginnis/multi_contrast_inr
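
    The core of an implicit neural representation is a small network mapping continuous spatial coordinates to intensities, fitted per subject. The toy forward pass below (random untrained weights, hypothetical layer sizes; not the paper's two-contrast model) only illustrates why such a function can be queried on any grid, which is what enables resolution beyond the acquired slices.

```python
import numpy as np

rng = np.random.default_rng(0)

def coord_mlp(coords, w1, b1, w2, b2):
    """Tiny coordinate network: maps continuous (x, y, z) positions to
    intensities. Weights here are random stand-ins, not trained."""
    hidden = np.tanh(coords @ w1 + b1)
    return hidden @ w2 + b2

w1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)  # hypothetical sizes
w2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

# querying at arbitrary continuous positions yields values on any grid
query = rng.uniform(-1.0, 1.0, size=(5, 3))
pred = coord_mlp(query, w1, b1, w2, b2)
print(pred.shape)  # (5, 1)
```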

    Synthetic Low-Field MRI Super-Resolution Via Nested U-Net Architecture

    Low-field (LF) MRI scanners have the power to revolutionize medical imaging by providing a portable and cheaper alternative to high-field MRI scanners. However, such scanners are usually significantly noisier and of lower quality than their high-field counterparts. The aim of this paper is to improve the SNR and overall image quality of low-field MRI scans in order to improve diagnostic capability. To this end, we propose a super-resolution algorithm based on a Nested U-Net neural network architecture that outperforms previously suggested deep learning methods, with an average PSNR of 78.83 and SSIM of 0.9551. We tested our network on artificially noised and downsampled synthetic data from a major T1-weighted MRI dataset called the T1-mix dataset. One board-certified radiologist scored 25 images on a Likert scale (1-5), assessing overall image quality, anatomical structure, and diagnostic confidence across our architecture and other published works (SR DenseNet, Generator Block, SRCNN, etc.). We also introduce a new type of loss function called natural log mean squared error (NLMSE). In conclusion, we present a more accurate deep learning method for single image super-resolution applied to synthetic low-field MRI via a Nested U-Net architecture.
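
    The quantities above can be sketched as follows. PSNR is the standard definition; the abstract does not give the exact formula for NLMSE, so the log(1 + MSE) form below is only an assumed reading, clearly not the authors' verified definition.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB (standard definition)."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def nlmse(ref, test):
    """ASSUMED reading of the paper's 'natural log mean squared error':
    log(1 + MSE). The abstract does not specify the exact formula."""
    return np.log1p(np.mean((ref - test) ** 2))

ref = np.zeros((8, 8))
noisy = ref + 0.1       # constant offset -> MSE = 0.01
print(psnr(ref, noisy))  # 20.0 dB
```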

    Hitchhiker's Guide to Super-Resolution: Introduction and Recent Advances

    With the advent of Deep Learning (DL), Super-Resolution (SR) has become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of the trends in the field. This review is ultimately aimed at helping researchers push the boundaries of DL applied to SR. Comment: accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence, 202

    Improved Residual Dense Network for Large Scale Super-Resolution via Generative Adversarial Network

    Recent single image super-resolution (SISR) studies have extensively covered small upscaling factors such as x2 and x4 on remote sensing images, while less work has addressed large factors such as x8 and x16. Owing to the high performance of generative adversarial networks (GANs), in this paper two GAN frameworks are implemented to study SISR of remote sensing images at the large x8 scale factor, for which acceptable results are still lacking. This work proposes a modified version of the residual dense network (RDN), implemented within a GAN framework and named RDGAN. The second GAN framework is built on the densely sampled super-resolution network (DSSR) and named DSGAN. The loss function used for training combines the adversarial loss, the mean squared error (MSE), and the perceptual loss derived from the VGG19 model. We optimize the training by using Adam for a number of epochs and then switching to the SGD optimizer. We validate the frameworks on the dataset proposed in this work and on three other remote sensing datasets: UC Merced, WHU-RS19, and RSSCN7. For evaluation, we use the following image quality assessment metrics: the PSNR and the SSIM on the RGB and Y channels, and the MSE. On the proposed dataset, RDGAN achieved 26.02, 0.704, and 257.70 for the PSNR, SSIM, and MSE, respectively, while DSGAN yielded 26.13, 0.708, and 251.89.
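
    The Adam-then-SGD schedule mentioned above can be illustrated on a toy quadratic objective; the phase lengths, learning rates, and target below are arbitrary stand-ins for illustration, not the paper's training settings.

```python
import numpy as np

# minimise f(w) = ||w - target||^2: Adam for a first phase, then plain SGD
def grad(w, target):
    return 2 * (w - target)

target = np.array([3.0, -2.0])
w = np.zeros(2)

# phase 1: Adam (bias-corrected first/second moment estimates)
m = np.zeros_like(w); v = np.zeros_like(w)
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.1
for t in range(1, 101):
    g = grad(w, target)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

# phase 2: switch to SGD for fine-tuning
for _ in range(100):
    w -= 0.05 * grad(w, target)

print(np.round(w, 3))  # close to [ 3. -2.]
```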

    Lucy Richardson and Mean Modified Wiener Filter for Construction of Super-Resolution Image

    The ultimate goal of the Super-Resolution (SR) technique is to generate a High-Resolution (HR) image by combining the corresponding Low-Resolution (LR) images; it is used in applications such as surveillance, remote sensing, and medical diagnosis. The original HR image may be corrupted by various causes such as warping, blurring, and additive noise. SR image reconstruction methods are frequently plagued by obtrusive restoration artifacts such as noise, staircasing, and blurring; thus, striking a balance between smoothness and edge retention is never easy. By enhancing visual information and autonomous machine perception, this work presents research to improve the effectiveness of SR image reconstruction. Reference images are obtained from the DIV2K and BSD100 datasets; each reference LR image is converted into a composed LR image using the proposed Lucy-Richardson and Modified Mean Wiener (LR-MMWF) filters. The composed LR image is provided as input to a bicubic interpolation stage. The initial HR image obtained from the interpolation stage is then given as input to an SR model containing a fidelity term that decreases the residual between the projected HR image and the detected LR image. Finally, a model based on a Bilateral Total Variation (BTV) prior is utilized to improve the stability of the HR image by refining its quality. The performance analysis shows that the proposed LR-MMWF filter attains better PSNR and Structural Similarity (SSIM) than existing filters: experiments show it achieves a PSNR of 31.65 dB, whereas the Filter-Net and the 1D/2D CNN filter achieved PSNR values of 28.95 dB and 31.63 dB, respectively.
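
    The Lucy-Richardson component named above is the classic deconvolution iteration. A minimal 1D sketch with circular boundaries follows; the signal, blur kernel, and iteration count are chosen arbitrarily for illustration, and this is not the paper's combined LR-MMWF filter.

```python
import numpy as np

def conv(x, k):
    # circular convolution via FFT keeps the update rule compact
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

def richardson_lucy(observed, psf, iters=50):
    """Classic Lucy-Richardson deblurring iteration (1D, circular):
    est <- est * conv(observed / conv(est, psf), mirrored psf)."""
    est = np.full_like(observed, observed.mean())
    psf_mirror = np.roll(psf[::-1], 1)  # k[-n] under circular indexing
    for _ in range(iters):
        ratio = observed / np.maximum(conv(est, psf), 1e-12)
        est = est * conv(ratio, psf_mirror)
    return est

n = 32
signal = np.zeros(n); signal[8] = 1.0           # sharp ground truth
psf = np.zeros(n); psf[[31, 0, 1]] = 1.0 / 3.0  # centred box blur
blurred = conv(signal, psf)
est = richardson_lucy(blurred, psf)
```

    The multiplicative update conserves total flux and keeps the estimate non-negative, which is why the method is popular for imaging data.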

    Super-Resolution of License Plate Images Using Attention Modules and Sub-Pixel Convolution Layers

    Recent years have seen significant developments in the field of License Plate Recognition (LPR) through the integration of deep learning techniques and the increasing availability of training data. Nevertheless, reconstructing license plates (LPs) from low-resolution (LR) surveillance footage remains challenging. To address this issue, we introduce a Single-Image Super-Resolution (SISR) approach that integrates attention and transformer modules to enhance the detection of structural and textural features in LR images. Our approach incorporates sub-pixel convolution layers (also known as PixelShuffle) and a loss function that uses an Optical Character Recognition (OCR) model for feature extraction. We trained the proposed architecture on synthetic images created by applying heavy Gaussian noise to high-resolution LP images from two public datasets, followed by bicubic downsampling. As a result, the generated images have a Structural Similarity Index Measure (SSIM) of less than 0.10. Our results show that our approach for reconstructing these low-resolution synthesized images outperforms existing ones in both quantitative and qualitative measures. Our code is publicly available at https://github.com/valfride/lpr-rsr-ext
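
    The sub-pixel convolution (PixelShuffle) layer mentioned above ends in a pure indexing operation; the NumPy sketch below shows the standard channel-to-space mapping (equivalent to PyTorch's nn.PixelShuffle), not the paper's full architecture.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrangement step of sub-pixel convolution (PixelShuffle):
    moves r*r groups of channels into an r-times finer spatial grid,
    turning (C*r^2, H, W) feature maps into a (C, H*r, W*r) image."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16).reshape(4, 2, 2)    # C=1, r=2 toy feature maps
out = pixel_shuffle(x, 2)
print(out.shape)  # (1, 4, 4)
```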

    Deep Learning for Single Image Super-Resolution: A Brief Review

    Single image super-resolution (SISR) is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. To solve the SISR problem, powerful deep learning algorithms have recently been employed and have achieved state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods and group them into two categories according to their major contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, a baseline is first established and several critical limitations of the baseline are summarized. Then representative works on overcoming these limitations are presented based on their original contents as well as our critical understandings and analyses, and relevant comparisons are conducted from a variety of perspectives. Finally, we conclude this review with some vital current challenges and future trends in SISR leveraging deep learning algorithms. Comment: Accepted by IEEE Transactions on Multimedia (TMM)
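
    The ill-posedness referred to above comes from the standard SISR degradation model, LR = downsample(blur(HR)) + noise, a many-to-one mapping. A minimal sketch of the forward direction (box blur implemented as block averaging, noise omitted) under these simplifying assumptions:

```python
import numpy as np

def degrade(hr, scale):
    """Simplified SISR degradation: box-filter blur and downsampling,
    implemented together as block averaging. SISR tries to invert this
    many-to-one mapping, hence the ill-posedness."""
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

hr = np.arange(16, dtype=float).reshape(4, 4)
lr = degrade(hr, 2)
print(lr)  # each LR pixel is the mean of a 2x2 HR block
```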