
    Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy

    In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aids for real-time tissue characterization and can help to perform visual investigations aimed, for example, at discovering epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images still have a low number of informative pixels, which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) image patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain they come from, to transfer their quality to the initial LR images. This property can be particularly useful in all situations where LR/HR pairs are not available during training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment. Comment: Accepted for publication in the Medical Image Analysis journal.
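    The physically-inspired cycle consistency described above can be illustrated with a minimal numpy sketch: a toy average-pooling operator stands in for the acquisition model, and the loss penalizes any super-resolved image that does not map back to the observed LR image. All function names here are illustrative, not from the paper.

```python
import numpy as np

def downsample(img, factor=2):
    """Toy acquisition model: average-pool by `factor` (a stand-in for the
    physically-inspired degradation applied to super-resolved images)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cycle_consistency_loss(lr_img, sr_img):
    """L1 distance between the observed LR image and the super-resolved
    image passed back through the acquisition model."""
    return np.abs(downsample(sr_img) - lr_img).mean()

# A perfectly cycle-consistent SR image (here: nearest-neighbour
# upsampling of the LR image) incurs zero loss.
lr = np.arange(64, dtype=float).reshape(8, 8)
sr = np.kron(lr, np.ones((2, 2)))          # 16x16 upsampled image
print(cycle_consistency_loss(lr, sr))      # → 0.0
```

    In the paper's setting this loss term constrains the adversarial generator, so that HR-domain style transfer cannot invent content inconsistent with the LR observation.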

    Lensfree super-resolution holographic microscopy using wetting films on a chip.

    We investigate the use of wetting films to significantly improve the imaging performance of lensfree pixel super-resolution on-chip microscopy, achieving < 1 µm spatial resolution over a large imaging area of ~24 mm². Formation of an ultra-thin wetting film over the specimen effectively creates a micro-lens effect over each object, which significantly improves the signal-to-noise ratio and therefore the resolution of our lensfree images. We validate the performance of this approach through lensfree on-chip imaging of various objects having fine morphological features (with dimensions of, e.g., ≤ 0.5 µm) such as Escherichia coli (E. coli), human sperm, Giardia lamblia trophozoites, polystyrene micro-beads, as well as red blood cells. These results are especially important for the development of highly sensitive field-portable microscopic analysis tools for resource-limited settings.

    Image Quality Improvement of Medical Images using Deep Learning for Computer-aided Diagnosis

    Retina image analysis is an important screening tool for early detection of multiple diseases such as diabetic retinopathy, which greatly impairs visual function. Image analysis and pathology detection can be accomplished both by ophthalmologists and by the use of computer-aided diagnosis systems. Advancements in hardware technology have led to more portable and less expensive imaging devices for medical image acquisition. This promotes large-scale remote diagnosis by clinicians as well as the implementation of computer-aided diagnosis systems for local routine disease screening. However, lower-cost equipment generally results in inferior-quality images. This may jeopardize the reliability of the acquired images and thus hinder the overall performance of the diagnostic tool. To address this open challenge, we carried out an in-depth study on using different deep learning-based frameworks for improving retina image quality while maintaining the underlying morphological information for diagnosis. Our results demonstrate that using a Cycle Generative Adversarial Network for unpaired image-to-image translation leads to successful transformations of retina images from a low- to a high-quality domain. The visual evidence of this improvement was quantitatively affirmed by two proposed validation methods. The first used a retina image quality classifier to confirm a significant prediction-label shift towards the high-quality class: on average, a 50% increase in images being classified as high-quality was verified. The second analysed the performance changes of a diabetic retinopathy detection algorithm upon being trained with the quality-improved images. The latter provided strong evidence that the proposed solution satisfies the requirement of maintaining the images' original information for diagnosis, and that it yields a pathology assessment more sensitive to the presence of pathological signs. These experimental results confirm the potential effectiveness of our solution in improving retina image quality for diagnosis. Along with the addressed contributions, we analysed how the construction of the data sets representing the low-quality domain impacts the quality-translation efficiency. Our findings suggest that by tackling the problem more selectively, that is, by constructing data sets more homogeneous in terms of their image defects, we can obtain more accentuated quality transformations.
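    The classifier-based validation above boils down to measuring the change in the fraction of images predicted high-quality before and after translation. A minimal sketch of that label-shift metric (`quality_shift` is an illustrative name, not from the thesis):

```python
import numpy as np

def quality_shift(labels_before, labels_after):
    """Change in the fraction of images classified as high quality
    (label 1) after translation to the high-quality domain."""
    return np.mean(labels_after) - np.mean(labels_before)

# Toy example: 2 of 8 images rated high-quality before, 6 of 8 after.
before = np.array([1, 1, 0, 0, 0, 0, 0, 0])
after = np.array([1, 1, 1, 1, 1, 1, 0, 0])
print(quality_shift(before, after))  # → 0.5
```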

    Investigation of Different Pre-processing Quality Enhancement Techniques on X-ray Images

    Pneumonia is a lung infection caused by organisms such as bacteria or viruses. Chest X-Ray (CXR) imaging is most commonly used to detect the infection, but limitations of existing equipment, bandwidth, and storage space often yield low-quality images, and the spatial resolution of medical images is further reduced by short acquisition times and low radiation doses. To maximize classification accuracy for medical images, especially chest X-rays, we therefore need to improve the quality of CXR images or obtain higher-resolution images. Image quality plays a major role in deep-learning-based clinical diagnosis, and there is no doubt that noise, low resolution, and annotations in chest images are major constraints on deep learning performance. Researchers have applied well-known image enhancement algorithms: Histogram Equalization (HE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), de-noising, the Discrete Wavelet Transform (DWT), and Gamma Correction (GC), but improving image features remains a challenging task. Computer vision and super-resolution are growing fields of deep learning, and super-resolution is also feasible for monochromatic medical images, where it improves the region of interest. A Super-Resolution Convolutional Neural Network (SRCNN) reconstructs a low-resolution input image into a high-quality image. For objective quality measurement, we use the pixel-difference-based PSNR and the human-visual-system-based SSIM metric. In this study, considering 30 images of different categories (normal, viral, or bacterial pneumonia), we achieve PSNR values of 40 to 43 dB, with SSIM values varying from 97% to 98%. The experiments show that SRCNN increases the image quality of CXR images, so that higher-quality images can be used for subsequent classification, improving diagnostic accuracy in deep learning.
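    The PSNR figures reported above follow the standard peak-signal-to-noise-ratio definition, PSNR = 10·log10(MAX² / MSE). A minimal sketch for 8-bit images (SSIM is omitted here, as it involves windowed luminance/contrast/structure statistics; library implementations exist for it):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its reconstruction; higher is better."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# An 8-bit image corrupted by a uniform error of 2 grey levels lands
# in the same ballpark as the 40-43 dB reported above.
ref = np.full((64, 64), 128, dtype=np.uint8)
rec = ref + 2
print(round(psnr(ref, rec), 2))  # → 42.11
```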

    Acoustical structured illumination for super-resolution ultrasound imaging.

    Structured illumination microscopy is an optical method to increase the spatial resolution of wide-field fluorescence imaging beyond the diffraction limit by applying a spatially structured illumination light. Here, we extend this concept to facilitate super-resolution ultrasound imaging by manipulating the transmitted sound field to encode the high spatial frequencies into the observed image through aliasing. Post-processing is applied to precisely shift the spectral components to their proper positions in k-space and effectively double the spatial resolution of the reconstructed image compared to one-way focusing. The method has broad application, including the detection of small lesions for early cancer diagnosis, improving the detection of the borders of organs and tumors, and enhancing the visualization of vascular features. The method can be implemented with conventional ultrasound systems, without the need for additional components. The resulting image enhancement is demonstrated with both test objects and ex vivo rat metacarpals and phalanges.
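    The frequency-encoding step can be illustrated with a 1-D toy: multiplying an object signal by a structured carrier of spatial frequency k0 shifts the object spectrum by ±k0, which is how otherwise unobservable detail can be folded into the detectable band and later shifted back in k-space. A minimal numpy sketch (frequencies chosen purely for illustration):

```python
import numpy as np

n = 256
x = np.arange(n)
signal = np.cos(2 * np.pi * 40 * x / n)    # object detail at spatial frequency 40
carrier = np.cos(2 * np.pi * 10 * x / n)   # structured transmit field at frequency 10
spectrum = np.abs(np.fft.fft(signal * carrier))

# cos(a)*cos(b) = 0.5*[cos(a+b) + cos(a-b)]: energy appears at 40 +/- 10.
peaks = sorted(int(i) for i in np.argsort(spectrum)[-4:])
print(peaks)  # → [30, 50, 206, 226] (40±10 and their conjugate mirror bins)
```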

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems. Consequently, image quality remains a barrier that curbs the full potential of pCLE. Enhancing the image quality of pCLE in real time remains a challenge, and the research in this thesis is a response to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that relies on mapping from the low- to the high-resolution space. Due to the lack of high-resolution pCLE data, I proposed to simulate high-resolution data and use it as a ground truth, based on the pCLE acquisition physics. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach by deploying unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: a robust image quality assessment and a perceptual quality test assessed by clinical experts. The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
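    Nadaraya-Watson regression, the basis of the trainable layer mentioned above, is a kernel-weighted average of scattered observations evaluated at query points, which is what makes it suitable for resampling irregularly distributed fibre signals onto a regular grid. A minimal 1-D numpy sketch (the Gaussian kernel and all names here are illustrative, not the thesis implementation):

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, bandwidth=1.0):
    """Nadaraya-Watson estimate: at each query point, a Gaussian-weighted
    average of irregularly placed observations."""
    # Pairwise squared distances between query points and observations.
    d2 = (x_query[:, None] - x_obs[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w @ y_obs) / w.sum(axis=1)

# Irregular 1-D "fibre" samples of y = x, resampled onto a regular grid.
x_obs = np.array([0.1, 0.9, 2.2, 3.1, 3.8])
y_obs = x_obs.copy()
grid = np.linspace(0.0, 4.0, 9)
print(np.round(nadaraya_watson(grid, x_obs, y_obs, bandwidth=0.5), 2))
```

    Because the weights are positive and normalized, the estimate always stays within the range of the observed values; making the kernel weights trainable is what turns this into a network layer.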

    MedGAN: Medical Image Translation using GANs

    Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, a large portion of recent approaches offers individualized solutions based on specialized task-specific architectures or requires refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches. Comment: 16 pages, 8 figures.
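    Using the discriminator as a trainable feature extractor amounts to a feature-matching (perceptual-style) loss: distances between intermediate feature maps rather than raw pixels. A minimal sketch with the extractor abstracted away as precomputed per-layer feature stacks (names illustrative, not the MedGAN code):

```python
import numpy as np

def feature_matching_loss(feats_translated, feats_target):
    """Mean absolute distance between corresponding feature maps of the
    translated image and the target, averaged over extractor layers."""
    per_layer = [np.abs(a - b).mean()
                 for a, b in zip(feats_translated, feats_target)]
    return sum(per_layer) / len(per_layer)

# Identical feature stacks give zero loss; a perturbed stack does not.
f_t = [np.ones((4, 4)), np.zeros((2, 2))]
f_y = [np.ones((4, 4)), np.zeros((2, 2))]
print(feature_matching_loss(f_t, f_y))  # → 0.0
```

    In the full framework such a term is combined with the adversarial and style-transfer losses to penalize discrepancies at multiple levels of abstraction.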