    Image Super-Resolution with Deep Dictionary

    Since the first success of Dong et al., the deep-learning-based approach has become dominant in the field of single-image super-resolution. Such an approach replaces all of the handcrafted image-processing steps of traditional sparse-coding-based methods with a deep neural network. In contrast to sparse-coding-based methods, which explicitly create high- and low-resolution dictionaries, the dictionaries in deep-learning-based methods are acquired implicitly as a nonlinear combination of multiple convolutions. One disadvantage of deep-learning-based methods is that their performance degrades on images created differently from the training dataset (out-of-domain images). We propose an end-to-end super-resolution network with a deep dictionary (SRDD), where a high-resolution dictionary is explicitly learned without sacrificing the advantages of deep learning. Extensive experiments show that explicit learning of the high-resolution dictionary makes the network more robust to out-of-domain test images while maintaining performance on in-domain test images. Comment: ECCV 2022.
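
    The abstract does not detail the architecture, but the core idea of pairing a CNN coefficient predictor with an explicitly learned high-resolution dictionary can be illustrated with a small sketch. The module below is a hypothetical PyTorch layout, not the authors' SRDD network: the atom count, the softmax mixing, and the bicubic residual base are all assumptions made for illustration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DeepDictionarySR(nn.Module):
            # Hypothetical sketch of SR with an explicit high-resolution dictionary;
            # atom count, backbone depth, and the bicubic base are illustrative assumptions.
            def __init__(self, n_atoms=256, scale=4, feat=64):
                super().__init__()
                # Explicit HR dictionary: each atom is a flattened (scale x scale) HR patch.
                self.dictionary = nn.Parameter(0.01 * torch.randn(n_atoms, scale * scale))
                self.scale = scale
                # Small CNN that predicts, for every LR pixel and colour channel,
                # mixing coefficients over the dictionary atoms.
                self.coef_net = nn.Sequential(
                    nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(feat, 3 * n_atoms, 3, padding=1),
                )

            def forward(self, lr):                                  # lr: (B, 3, H, W)
                b, _, h, w = lr.shape
                n_atoms = self.dictionary.shape[0]
                coefs = self.coef_net(lr).view(b, 3, n_atoms, h, w)
                coefs = torch.softmax(coefs, dim=2)                 # convex mix of atoms
                # Assemble an HR patch per LR pixel, then tile the patches into the HR image.
                patches = torch.einsum('bcnhw,ns->bchws', coefs, self.dictionary)
                patches = patches.view(b, 3, h, w, self.scale, self.scale)
                hr = patches.permute(0, 1, 2, 4, 3, 5).reshape(b, 3, h * self.scale, w * self.scale)
                # Bicubic base so the dictionary only has to model the missing detail.
                base = F.interpolate(lr, scale_factor=self.scale, mode='bicubic', align_corners=False)
                return hr + base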

    SRflow: Deep learning based super-resolution of 4D-flow MRI data

    Exploiting 4D-flow magnetic resonance imaging (MRI) data to quantify hemodynamics requires an adequate spatio-temporal vector-field resolution at a low noise level. To address this challenge, we provide a learned solution to super-resolve in vivo 4D-flow MRI data at the post-processing level. We propose a deep convolutional neural network (CNN) that learns the inter-scale relationship of the velocity vector map and leverages an efficient residual learning scheme to make it computationally feasible. A novel, direction-sensitive, and robust loss function is crucial to learning vector-field data. We present a detailed comparative study between the proposed super-resolution and the conventional cubic B-spline based vector-field super-resolution. Our method improves the peak-velocity-to-noise ratio of the flow field by 10% and 30% for in vivo cardiovascular and cerebrovascular data, respectively, for 4x super-resolution over the state-of-the-art cubic B-spline. Significantly, our method offers 10x faster inference than the cubic B-spline. The proposed approach for super-resolution of 4D-flow data would potentially improve the subsequent calculation of hemodynamic quantities.
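
    The abstract highlights a direction-sensitive, robust loss for vector-field data without giving its form. One plausible construction, shown here purely as a sketch and not as the paper's actual loss, combines a robust per-component residual term with a cosine-based direction term; the weighting w_dir is an assumed hyperparameter.

        import torch

        def vector_field_loss(pred, target, w_dir=0.5, eps=1e-8):
            # pred, target: (B, 3, D, H, W) velocity components (vx, vy, vz).
            # Robust per-component residual term (L1).
            mag_term = torch.mean(torch.abs(pred - target))
            # Direction term: 1 - cosine similarity between predicted and true vectors.
            dot = (pred * target).sum(dim=1)
            norms = pred.norm(dim=1) * target.norm(dim=1) + eps
            dir_term = torch.mean(1.0 - dot / norms)
            # w_dir is an assumed weighting between the two terms.
            return mag_term + w_dir * dir_term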

    Spatio-Temporal Super-Resolution Data Assimilation (SRDA) Utilizing Deep Neural Networks with Domain Generalization

    Deep learning has recently gained attention in the atmospheric and oceanic sciences for its potential to improve the accuracy of numerical simulations or to reduce computational costs. Super-resolution is one such technique for high-resolution inference from low-resolution data. This paper proposes a new scheme, called four-dimensional super-resolution data assimilation (4D-SRDA). This framework calculates the time evolution of a system from low-resolution simulations using a physics-based model, while a trained neural network simultaneously performs data assimilation and spatio-temporal super-resolution. The use of low-resolution simulations without ensemble members reduces the computational cost of obtaining inferences at high spatio-temporal resolution. In 4D-SRDA, physics-based simulations and neural-network inferences are performed alternately, possibly causing a domain shift, i.e., a statistical difference between the training and test data, especially in offline training. Domain shifts can reduce the accuracy of inference. To mitigate this risk, we developed super-resolution mixup (SR-mixup), a data augmentation method for domain generalization. SR-mixup creates a linear combination of randomly sampled inputs, resulting in synthetic data with a different distribution from the original data. The proposed methods were validated using an idealized barotropic ocean jet with supervised learning. The results suggest that the combination of 4D-SRDA and SR-mixup is effective for robust inference cycles. This study highlights the potential of super-resolution and domain-generalization techniques in the field of data assimilation, especially for the integration of physics-based and data-driven models.
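
    SR-mixup is described as a linear combination of randomly sampled inputs. A minimal mixup-style implementation for paired low-/high-resolution fields might look like the sketch below; the Beta-distribution parameter and the decision to mix inputs and targets with the same coefficient are assumptions, not details from the paper.

        import numpy as np

        def sr_mixup(lr_batch, hr_batch, alpha=0.2, rng=None):
            # lr_batch: (B, ...) low-resolution inputs; hr_batch: matching high-resolution targets.
            rng = rng or np.random.default_rng()
            lam = rng.beta(alpha, alpha)                 # mixing coefficient in (0, 1)
            perm = rng.permutation(len(lr_batch))        # random pairing within the batch
            # Mix inputs and targets with the same coefficient so each pair stays consistent.
            lr_mixed = lam * lr_batch + (1.0 - lam) * lr_batch[perm]
            hr_mixed = lam * hr_batch + (1.0 - lam) * hr_batch[perm]
            return lr_mixed, hr_mixed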

    Deep Learning networks with p-norm loss layers for spatial resolution enhancement of 3D medical images

    Thurnhofer-Hemsi K., López-Rubio E., Roé-Vellvé N., Molina-Cabello M.A. (2019). Deep Learning Networks with p-norm Loss Layers for Spatial Resolution Enhancement of 3D Medical Images. In: Ferrández Vicente J., Álvarez-Sánchez J., de la Paz López F., Toledo Moreo J., Adeli H. (eds) From Bioinspired Systems and Biomedical Applications to Machine Learning. IWINAC 2019. Lecture Notes in Computer Science, vol 11487. Springer, Cham.

    Nowadays, obtaining high-quality magnetic resonance (MR) images is a complex problem due to several acquisition factors, but it is crucial in order to perform good diagnostics. Resolution enhancement is a typical procedure applied after image generation. State-of-the-art works gather a large variety of methods for super-resolution (SR), among which deep learning has become very popular in recent years. Most SR deep-learning methods are based on minimization of the residuals through Euclidean loss layers. In this paper, we propose an SR model based on a p-norm loss layer to improve the learning process and obtain a better high-resolution (HR) image. The method was implemented using a three-dimensional convolutional neural network (CNN) and tested for several norms in order to determine the most robust fit. The proposed methodology was trained and tested with sets of MR structural T1-weighted images and showed quantitatively better outcomes in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the restored and calculated residual images showed better CNN outputs.

    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
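
    For reference, a p-norm loss layer of the kind the abstract describes can be written in a few lines; the sketch below assumes a PyTorch-style module and a mean-over-voxels normalisation, which the abstract does not specify.

        import torch
        import torch.nn as nn

        class PNormLoss(nn.Module):
            # p = 2 recovers the usual Euclidean (MSE-style) loss; smaller p down-weights outliers.
            def __init__(self, p=1.5):
                super().__init__()
                self.p = p

            def forward(self, pred, target):
                # Mean of |residual|^p over all voxels of the 3D volume.
                return torch.mean(torch.abs(pred - target) ** self.p)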

    FNOSeg3D: Resolution-Robust 3D Image Segmentation with Fourier Neural Operator

    Due to the computational complexity of 3D medical image segmentation, training with downsampled images is a common remedy for out-of-memory errors in deep learning. Nevertheless, as standard spatial convolution is sensitive to variations in image resolution, the accuracy of a convolutional neural network trained with downsampled images can be suboptimal when applied at the original resolution. To address this limitation, we introduce FNOSeg3D, a 3D segmentation model robust to training image resolution based on the Fourier neural operator (FNO). The FNO is a deep learning framework for learning mappings between functions in partial differential equations, which has the appealing properties of zero-shot super-resolution and a global receptive field. We improve the FNO by reducing its parameter requirement and enhancing its learning capability through residual connections and deep supervision; these result in our FNOSeg3D model, which is parameter efficient and resolution robust. When tested on the BraTS'19 dataset, it achieved greater robustness to training image resolution than the other tested models with less than 1% of their model parameters. Comment: This paper was accepted by the IEEE International Symposium on Biomedical Imaging (ISBI) 2023.
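
    The resolution robustness comes from the FNO's spectral convolution, which learns weights on a fixed set of low-frequency Fourier modes rather than on a pixel grid. The layer below is a simplified, single-block sketch of that idea (it keeps only one corner of the spectrum and uses illustrative channel and mode counts); it is not the FNOSeg3D implementation.

        import torch
        import torch.nn as nn

        class SpectralConv3d(nn.Module):
            def __init__(self, in_ch, out_ch, modes=(8, 8, 8)):
                super().__init__()
                self.modes = modes
                scale = 1.0 / (in_ch * out_ch)
                # Learned complex weights, one per retained low-frequency Fourier mode.
                self.weight = nn.Parameter(
                    scale * torch.randn(in_ch, out_ch, *modes, dtype=torch.cfloat))

            def forward(self, x):                        # x: (B, C, D, H, W)
                d, h, w = x.shape[-3:]
                x_ft = torch.fft.rfftn(x, dim=(-3, -2, -1))
                out_ft = torch.zeros(x.shape[0], self.weight.shape[1], d, h, w // 2 + 1,
                                     dtype=torch.cfloat, device=x.device)
                md, mh, mw = self.modes
                # Mix channels on the lowest-frequency modes only; higher modes are dropped,
                # which keeps the learned weights independent of the input grid size.
                out_ft[..., :md, :mh, :mw] = torch.einsum(
                    'bidhw,iodhw->bodhw', x_ft[..., :md, :mh, :mw], self.weight)
                return torch.fft.irfftn(out_ft, s=(d, h, w), dim=(-3, -2, -1))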

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems. Consequently, image quality remains a barrier that curbs the full potential of pCLE. Enhancing the image quality of pCLE in real time remains a challenge. The research in this thesis is a response to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that relies on mapping from the low- to the high-resolution space. Due to the lack of high-resolution pCLE, I proposed to simulate high-resolution data and use it as a ground truth model that is based on the pCLE acquisition physics. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach by deploying unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: a robust image quality assessment and a perceptual quality test assessed by clinical experts. The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
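
    The trainable Nadaraya-Watson layer mentioned above is described only at a high level in this abstract. As a rough sketch, kernel regression from irregular fibre positions onto a regular pixel grid with a learnable bandwidth could look like the following; the Gaussian kernel and the coordinate layout are assumptions for illustration.

        import torch
        import torch.nn as nn

        class NadarayaWatsonLayer(nn.Module):
            def __init__(self, init_bandwidth=1.0):
                super().__init__()
                # Learnable Gaussian bandwidth, parameterised in log-space to stay positive.
                self.log_bw = nn.Parameter(torch.log(torch.tensor(float(init_bandwidth))))

            def forward(self, fibre_xy, fibre_vals, grid_xy, eps=1e-8):
                # fibre_xy: (N, 2) fibre centre coordinates; fibre_vals: (N,) fibre signals;
                # grid_xy: (M, 2) target pixel coordinates. Returns (M,) interpolated values.
                bw = torch.exp(self.log_bw)
                d2 = torch.cdist(grid_xy, fibre_xy) ** 2        # (M, N) squared distances
                weights = torch.exp(-d2 / (2.0 * bw ** 2))      # Gaussian kernel
                # Nadaraya-Watson estimate: kernel-weighted average of the fibre signals.
                return (weights @ fibre_vals) / (weights.sum(dim=1) + eps)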