
    Deep learning-based method to accurately estimate breast tissue optical properties in the presence of the chest wall

    SIGNIFICANCE: In general, image reconstruction methods used in diffuse optical tomography (DOT) are based on the diffusion approximation and treat the breast tissue as a homogeneous, semi-infinite medium. However, the semi-infinite medium assumption used in DOT reconstruction is not valid when the chest wall lies underneath the breast tissue. AIM: We aim to reduce the chest wall's effect on the estimated average optical properties of breast tissue and to obtain an accurate forward model for DOT reconstruction. APPROACH: We propose a deep learning approach in which a convolutional neural network (CNN) is trained to simultaneously obtain accurate optical property values for both the breast tissue and the chest wall. RESULTS: The CNN model shows great promise in reducing errors in estimating the optical properties of the breast tissue in the presence of a shallow chest wall. For patient data, the CNN model predicted breast tissue optical absorption coefficients that were independent of chest wall depth. CONCLUSIONS: Our proposed method can be readily used in DOT and diffuse spectroscopy measurements to improve the accuracy of estimated tissue optical properties.
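    To make the joint-estimation idea concrete, below is a minimal sketch of a network with a shared trunk and two regression heads that predict (mu_a, mu_s') for the breast tissue and the chest wall simultaneously. The input layout (amplitude and phase channels over several source-detector separations) and all layer sizes are illustrative assumptions, not the authors' architecture.

    # Minimal sketch of a two-head CNN for two-layer optical property
    # estimation; the input format and layer sizes are assumptions.
    import torch
    import torch.nn as nn

    class TwoLayerPropertyNet(nn.Module):
        def __init__(self, n_measurements: int = 64):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv1d(2, 16, kernel_size=5, padding=2),  # 2 channels: amplitude, phase
                nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(32 * n_measurements, 128),
                nn.ReLU(),
            )
            # Separate heads: (mu_a, mu_s') for breast tissue and chest wall.
            self.breast_head = nn.Linear(128, 2)
            self.chest_head = nn.Linear(128, 2)

        def forward(self, x):
            h = self.trunk(x)
            return self.breast_head(h), self.chest_head(h)

    net = TwoLayerPropertyNet()
    dummy = torch.randn(8, 2, 64)           # batch of simulated measurements
    breast_props, chest_props = net(dummy)  # each of shape (8, 2)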

    Event-Driven Imaging in Turbid Media: A Confluence of Optoelectronics and Neuromorphic Computation

    In this paper, a new optical-computational method is introduced to unveil images of targets whose visibility is severely obscured by light scattering in dense, turbid media. The targets of interest are taken to be dynamic in that their optical properties are time-varying, whether stationary in space or moving. The scheme, to our knowledge the first of its kind, is inspired by human vision: diffuse photons collected from the turbid medium are first transformed to spike trains by a dynamic vision sensor, as in the retina, and image reconstruction is then performed by a neuromorphic computing approach mimicking the brain. We combine benchtop experimental data in both reflection (backscattering) and transmission geometries with support from physics-based simulations to develop a neuromorphic computational model, and then apply this model to image reconstruction of different MNIST characters and image sets by a dedicated deep spiking neural network algorithm. Image reconstruction is achieved under conditions of turbidity where the original image is unintelligible to the human eye or a digital video camera, yet clearly and quantifiably identifiable when using the new neuromorphic computational approach.
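    The front end of such a pipeline is event generation: a dynamic vision sensor pixel emits an ON or OFF spike whenever the log-intensity change since its last event crosses a contrast threshold. A toy illustration of this stage follows, with the threshold value and the synthetic frames being assumptions, not the paper's sensor parameters.

    # Toy DVS-style event generation: spikes fire on log-intensity changes.
    import numpy as np

    def frames_to_events(frames, threshold=0.15):
        """frames: (T, H, W) array of positive intensities -> list of events."""
        log_ref = np.log(frames[0] + 1e-6)   # per-pixel reference level
        events = []                          # entries are (t, y, x, polarity)
        for t in range(1, frames.shape[0]):
            log_i = np.log(frames[t] + 1e-6)
            diff = log_i - log_ref
            on = diff > threshold
            off = diff < -threshold
            for y, x in zip(*np.nonzero(on)):
                events.append((t, y, x, +1))
            for y, x in zip(*np.nonzero(off)):
                events.append((t, y, x, -1))
            # Reset the reference where an event fired, as in a real DVS pixel.
            log_ref = np.where(on | off, log_i, log_ref)
        return events

    # Example: a brightening blob in a noisy background produces ON events.
    rng = np.random.default_rng(0)
    frames = rng.uniform(0.9, 1.1, size=(10, 32, 32))
    frames[5:, 10:20, 10:20] *= 1.5
    print(len(frames_to_events(frames)))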

    Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data

    Optical Coherence Tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical set-up and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in ~6.73 ms on a desktop computer, removing the spatial aliasing artifacts caused by spectral undersampling while closely matching the images of the same samples reconstructed from the full spectral OCT data (i.e., 1280 spectral points per A-line). We also demonstrate that this framework can be extended to process 3-fold undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2-fold spectral undersampling. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.
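    The artifact the network removes can be reproduced in a few lines: keeping every other spectral point halves the unambiguous depth range, so a reflector beyond it folds back to a mirrored bin after the Fourier transform. The single-reflector signal model below is an idealized assumption used only to show the folding.

    # Why 2x spectral undersampling aliases an OCT A-line (idealized model).
    import numpy as np

    n_full = 1280
    k = np.arange(n_full)                 # spectral sample index
    depth_bin = 400                       # true reflector depth (in FFT bins)
    fringe = np.cos(2 * np.pi * depth_bin * k / n_full)

    a_full = np.abs(np.fft.fft(fringe))          # peak at bin 400
    a_under = np.abs(np.fft.fft(fringe[::2]))    # 640 points, 2x undersampled

    print(np.argmax(a_full[:n_full // 2]))       # -> 400, the true depth
    print(np.argmax(a_under[:n_full // 4]))      # -> 240, the folded alias of 400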

    Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images

    With the increasing availability of optical and synthetic aperture radar (SAR) images thanks to the Sentinel constellation, and the explosion of deep learning, new methods have emerged in recent years to tackle the reconstruction of optical images that are impacted by clouds. In this paper, we focus on the evaluation of convolutional neural networks that jointly use SAR and optical images to retrieve the missing content in a single cloud-contaminated optical image. We propose a simple framework that eases the creation of datasets for the training of deep networks targeting optical image reconstruction, and for the validation of machine learning-based or deterministic approaches. These methods differ considerably in their input image constraints, and comparing them is a problematic task not addressed in the literature. We show how space-partitioning data structures help to query samples in terms of cloud coverage, relative acquisition date, pixel validity, and relative proximity between SAR and optical images. We generate several datasets to compare the images reconstructed by networks that use a single pair of SAR and optical images, networks that use multiple pairs, and a traditional deterministic approach performing interpolation in the temporal domain.
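    As a rough illustration of the space-partitioning idea, a k-d tree over per-patch attributes lets each cloudy optical patch be matched to its nearest SAR acquisition in time and space in logarithmic time. The toy records, attribute choice, and weighting below are assumptions; the paper's framework also queries on cloud coverage and pixel validity.

    # Nearest-neighbour pairing of optical and SAR patches with a k-d tree.
    import numpy as np
    from scipy.spatial import cKDTree

    # (acquisition day, easting km, northing km) for available SAR patches.
    sar = np.array([[3.0, 10.0, 20.0],
                    [9.0, 10.5, 20.2],
                    [15.0, 40.0, 5.0]])

    # Attribute scaling trades days against kilometres in the distance
    # metric; unit weights are an arbitrary assumption here.
    scale = np.array([1.0, 1.0, 1.0])
    tree = cKDTree(sar * scale)

    optical_query = np.array([8.0, 10.4, 20.1])   # a cloudy optical patch
    dist, idx = tree.query(optical_query * scale)
    print(idx, dist)   # -> index of the SAR patch closest in space and time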

    Range-Point Migration-Based Image Expansion Method Exploiting Fully Polarimetric Data for UWB Short-Range Radar

    Ultrawideband radar with high range resolution is a promising technology for short-range 3-D imaging applications in which optical cameras are not applicable. One of the most efficient 3-D imaging methods is the range-point migration (RPM) method, which holds a definite advantage over synthetic aperture radar approaches in terms of computational burden, accuracy, and spatial resolution. However, if an insufficient aperture size or angle is provided, such methods cannot reconstruct the whole target structure because reflection signals are absent from a large part of the target surface. To expand the 3-D image obtained by RPM, this paper proposes an image expansion method that combines the RPM feature with a machine learning approach based on fully polarimetric data. Following ellipsoid-based scattering analysis and learning with a neural network, this method expresses the target image as an aggregation of parts of ellipsoids, which significantly expands the original RPM image without sacrificing reconstruction accuracy. The results of numerical simulations based on 3-D finite-difference time-domain analysis verify the effectiveness of our proposed method in terms of image-expansion criteria.
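    The back end of such a method amounts to expanding each RPM point into a local ellipsoid patch and merging the patches into one surface. In the paper, the patch parameters would come from the trained network fed with fully polarimetric data; in the toy sketch below the centers, axis lengths, and cap extent are arbitrary assumptions.

    # Toy "aggregation of ellipsoid parts" surface representation.
    import numpy as np

    def ellipsoid_patch(center, axes, n=200, cap_deg=40.0):
        """Sample points on the upper cap of an ellipsoid around `center`."""
        rng = np.random.default_rng(1)
        theta = np.deg2rad(cap_deg) * np.sqrt(rng.uniform(size=n))  # polar angle
        phi = rng.uniform(0, 2 * np.pi, size=n)
        unit = np.stack([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)], axis=1)
        return center + unit * axes        # scale the unit sphere to an ellipsoid

    # Expand two RPM points into surface patches and merge them into one cloud.
    cloud = np.vstack([
        ellipsoid_patch(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.2, 0.1])),
        ellipsoid_patch(np.array([0.5, 0.1, 1.1]), np.array([0.25, 0.25, 0.1])),
    ])
    print(cloud.shape)   # (400, 3) expanded image points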

    Spatio-temporal reconstruction of drop impact dynamics by means of color-coded glare points and deep learning

    The present work introduces a deep learning approach for the three-dimensional reconstruction of the spatio-temporal dynamics of the gas-liquid interface in two-phase flows on the basis of monocular images obtained via optical measurement techniques. The dynamics of liquid droplets impacting onto structured solid substrates are captured through high-speed imaging in an extended shadowgraphy setup with additional reflective glare points from lateral light sources that encode further three-dimensional information of the gas-liquid interface in the images. A neural network is trained for the physically correct reconstruction of the droplet dynamics on a labelled dataset generated by synthetic image rendering of gas-liquid interface shapes obtained from direct numerical simulation. The use of synthetic image rendering allows for the efficient generation of training data and circumvents the errors that would result from the inherent discrepancy between droplet shapes in experiment and simulation. The accurate reconstruction of the gas-liquid interface during droplet impingement from images obtained in the experiment demonstrates the practicality of the presented approach based on neural networks and synthetic training data generation. The introduction of glare points from lateral light sources in the experiments is shown to improve the reconstruction accuracy, which indicates that the neural network learns to leverage the additional three-dimensional information encoded in the images for more accurate depth estimation. Furthermore, the physically reasonable reconstruction of unknown gas-liquid interface shapes indicates that the neural network has learned a versatile model of the two-phase flow phenomena involved in droplet impingement.
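    The training strategy reduces to a simple loop: draw interface shapes from simulation, render them to images, and regress the shape parameters back from the rendered image. In the condensed sketch below, the renderer, the 10-coefficient shape encoding, and the network are all placeholders standing in for the paper's setup.

    # Condensed sketch of training on synthetically rendered labelled data.
    import torch
    import torch.nn as nn

    def render_synthetic(shape_params):
        """Placeholder renderer: shape parameters -> 64x64 grayscale image."""
        return torch.rand(shape_params.shape[0], 1, 64, 64)

    net = nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 16 * 16, 10),   # 10 shape coefficients
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(100):                    # training on rendered data only
        shapes = torch.randn(32, 10)           # shapes drawn from simulation
        images = render_synthetic(shapes)
        loss = nn.functional.mse_loss(net(images), shapes)
        opt.zero_grad(); loss.backward(); opt.step()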

    Learning from irregularly sampled data for endomicroscopy super-resolution: a comparative study of sparse and dense approaches

    PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) enables performing an optical biopsy via a probe. pCLE probes consist of multiple optical fibres arranged in a bundle, which together generate signals in an irregularly sampled pattern. Current pCLE reconstruction is based on interpolating the irregular signals onto an over-sampled Cartesian grid using naive linear interpolation. It has been shown that convolutional neural networks (CNNs) can improve pCLE image quality, yet classical CNNs may be suboptimal with regard to irregularly sampled data. METHODS: We compare pCLE reconstruction and super-resolution (SR) methods that take irregularly sampled or reconstructed pCLE images as input. We also propose to embed Nadaraya-Watson (NW) kernel regression into the CNN framework as a novel trainable CNN layer. We design deep learning architectures that reconstruct high-quality pCLE images directly from the irregularly sampled input data, and we created synthetic sparse pCLE images to evaluate our methodology. RESULTS: The results were validated through an image quality assessment based on a combination of the peak signal-to-noise ratio and the structural similarity index. Our analysis indicates that both dense and sparse CNNs outperform the reconstruction method currently used in the clinic. CONCLUSION: The main contributions of our study are a comparison of sparse and dense approaches in pCLE image reconstruction, the implementation of trainable generalised NW kernel regression as a novel sparse approach, and the generation of synthetic data for training pCLE SR.
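    The trainable NW layer is the most self-contained of these ideas: irregular fibre signals are regressed onto a Cartesian grid with a kernel whose bandwidth is learned by backpropagation. A minimal sketch follows, in which the Gaussian kernel choice and the tensor layout are assumptions rather than the authors' implementation.

    # Minimal trainable Nadaraya-Watson interpolation layer.
    import torch
    import torch.nn as nn

    class NWInterpolation(nn.Module):
        def __init__(self, grid_points, init_bandwidth=0.05):
            super().__init__()
            self.register_buffer("grid", grid_points)   # (G, 2) target grid
            self.log_h = nn.Parameter(torch.log(torch.tensor(init_bandwidth)))

        def forward(self, positions, values):
            # positions: (N, 2) fibre locations, values: (B, N) fibre signals
            h = torch.exp(self.log_h)                    # bandwidth kept positive
            d2 = torch.cdist(self.grid, positions) ** 2  # (G, N) squared distances
            w = torch.exp(-d2 / (2 * h ** 2))            # Gaussian kernel weights
            w = w / (w.sum(dim=1, keepdim=True) + 1e-8)  # NW normalization
            return values @ w.T                          # (B, G) gridded image

    # 500 irregular fibre samples regressed onto a 32x32 grid.
    ys, xs = torch.meshgrid(torch.linspace(0, 1, 32),
                            torch.linspace(0, 1, 32), indexing="ij")
    grid = torch.stack([ys.flatten(), xs.flatten()], dim=1)
    layer = NWInterpolation(grid)
    img = layer(torch.rand(500, 2), torch.rand(4, 500))  # (4, 1024)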