Simultaneous Multiple Surface Segmentation Using Deep Learning
The task of automatically segmenting 3-D surfaces representing boundaries of
objects is important for quantitative analysis of volumetric images, and plays
a vital role in biomedical image analysis. Recently, graph-based methods with a
global optimization property have been developed and optimized for various
medical imaging applications. Despite their widespread use, these methods require
human experts to design transformations, image features, and surface smoothness
priors, and to redesign them for a different tissue, organ, or imaging modality. Here, we
propose a Deep Learning based approach for segmentation of the surfaces in
volumetric medical images, by learning the essential features and
transformations from training data, without any human expert intervention. We
employ a regional approach to learn the local surface profiles. The proposed
approach was evaluated on simultaneous intraretinal layer segmentation of
optical coherence tomography (OCT) images of normal retinas and retinas
affected by age-related macular degeneration (AMD). The proposed approach was
validated on 40 retina OCT volumes including 20 normal and 20 AMD subjects. The
experiments showed statistically significant improvement in accuracy for our
approach compared to the state-of-the-art graph-based optimal surface segmentation
with convex priors (G-OSC). A single Convolutional Neural Network (CNN) was used
to learn the surfaces for both normal and diseased images. The mean unsigned
surface positioning error of 2.31 voxels (95% CI 2.02-2.60 voxels) obtained with
the G-OSC method was improved to 1.27 voxels (95% CI 1.14-1.40 voxels) by our new
approach. On average, our approach takes 94.34 s and requires 95.35 MB of memory,
much less than the 2837.46 s and 6.87 GB of memory required by the G-OSC method
on the same computer system.
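To make the regional idea concrete, here is a minimal sketch (not the authors' published code; the patch size, layer widths, and number of surfaces are illustrative assumptions) of a CNN that regresses per-column surface heights from a small OCT B-scan patch:

```python
# Minimal sketch (assumptions, not the paper's network): regress the heights of
# several retinal surfaces for each A-scan (column) of a small B-scan patch.
import torch
import torch.nn as nn

class SurfaceRegressor(nn.Module):
    def __init__(self, n_surfaces=3, patch_h=64, patch_w=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # One height estimate per surface per column of the patch.
        self.head = nn.Linear(32 * (patch_h // 4) * (patch_w // 4),
                              n_surfaces * patch_w)
        self.n_surfaces, self.patch_w = n_surfaces, patch_w

    def forward(self, x):                      # x: (B, 1, patch_h, patch_w)
        f = self.features(x).flatten(1)
        return self.head(f).view(-1, self.n_surfaces, self.patch_w)

# Training would minimize a regression loss (e.g. nn.MSELoss()) against
# expert-traced surface heights on patches sampled from the OCT volumes.
model = SurfaceRegressor()
heights = model(torch.rand(4, 1, 64, 32))      # -> (4, 3, 32) surface positions
```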
Single image example-based super-resolution using cross-scale patch matching and Markov random field modelling
Example-based super-resolution has become increasingly popular over the last few years for its ability to overcome the limitations of the classical multi-frame approach. In this paper we present a new example-based method that uses the input low-resolution image itself as a search space for high-resolution patches by exploiting self-similarity across different resolution scales. Found examples are combined into a high-resolution image by means of Markov Random Field modelling that enforces their global agreement. Additionally, we apply back-projection and steering kernel regression as post-processing techniques. In this way, we are able to produce sharp and artefact-free results that are comparable to or better than standard interpolation and state-of-the-art super-resolution techniques.
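As a rough illustration of cross-scale patch matching (an editorial sketch under simplifying assumptions, not the paper's implementation; the MRF agreement and post-processing steps are omitted): patches of the input image are matched against a downscaled copy of itself, and the co-located larger patch in the input then serves as a high-resolution example.

```python
# Editorial sketch of cross-scale self-similarity search (not the paper's code).
# `lr_img` is the input image; `patch` is a small patch taken from it whose
# high-resolution counterpart we want to find.
import numpy as np
from scipy.ndimage import zoom

def best_example(lr_img, patch, scale=2, size=5):
    """Return an HR example whose downscaled version best matches `patch` (SSD)."""
    small = zoom(lr_img, 1.0 / scale)        # coarser copy of the input: the search space
    best, best_err = None, np.inf
    for i in range(small.shape[0] - size):
        for j in range(small.shape[1] - size):
            err = np.sum((small[i:i + size, j:j + size] - patch) ** 2)
            if err < best_err:
                # The co-located patch in the original input is `scale` times
                # larger and acts as the high-resolution example.
                best = lr_img[i * scale:(i + size) * scale,
                              j * scale:(j + size) * scale]
                best_err = err
    return best
```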
Effect of Uveal Melanocytes on Choroidal Morphology in Rhesus Macaques and Humans on Enhanced-Depth Imaging Optical Coherence Tomography.
Purpose: To compare cross-sectional choroidal morphology in rhesus macaque and human eyes using enhanced-depth imaging optical coherence tomography (EDI-OCT) and histologic analysis. Methods: Enhanced-depth imaging OCT images from 25 rhesus macaque and 30 human eyes were evaluated for choriocapillaris and choroidal-scleral junction (CSJ) visibility in the central macula based on OCT reflectivity profiles, and compared with age-matched histologic sections. Semiautomated segmentation of the choriocapillaris and CSJ was used to measure choriocapillary and choroidal thickness, respectively. Multivariate regression was performed to determine the association of age, refractive error, and race with choriocapillaris and CSJ visibility. Results: Rhesus macaques exhibit a distinct hyporeflective choriocapillaris layer on EDI-OCT, while the CSJ cannot be visualized. In contrast, humans show variable reflectivities of the choriocapillaris, with a distinct CSJ seen in many subjects. Histologic sections demonstrate large, darkly pigmented melanocytes that are densely distributed in the macaque choroid, while melanocytes in humans are smaller, less pigmented, and variably distributed. Optical coherence tomography reflectivity patterns of the choroid appear to correspond to the density, size, and pigmentation of choroidal melanocytes. Mean choriocapillary thickness was similar between the two species (19.3 ± 3.4 vs. 19.8 ± 3.4 μm, P = 0.615), but choroidal thickness may be lower in macaques than in humans (191.2 ± 43.0 vs. 266.8 ± 78.0 μm, P < 0.001). Racial differences in uveal pigmentation also appear to affect the visibility of the choriocapillaris and CSJ on EDI-OCT. Conclusions: Pigmented uveal melanocytes affect choroidal morphology on EDI-OCT in rhesus macaque and human eyes. Racial differences in pigmentation may affect choriocapillaris and CSJ visibility, and may influence the accuracy of choroidal thickness measurements.
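As a simple illustration of how segmented boundaries translate into the reported thickness values (the axial scale below is an assumed, device-dependent number, not taken from the study):

```python
# Assumed axial sampling for illustration only; the real value is device dependent.
import numpy as np

AXIAL_UM_PER_PIXEL = 3.9   # hypothetical EDI-OCT axial scale (micrometres per pixel)

def mean_thickness_um(top_boundary_px, bottom_boundary_px):
    """Mean layer thickness from per-A-scan boundary depths given in pixels."""
    thickness = (np.asarray(bottom_boundary_px, float)
                 - np.asarray(top_boundary_px, float)) * AXIAL_UM_PER_PIXEL
    return float(thickness.mean())

# e.g. choroidal thickness per A-scan = CSJ depth minus choriocapillaris depth.
```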
Vascular Response to Sildenafil Citrate in Aging and Age-Related Macular Degeneration.
Age-related macular degeneration (AMD), the leading cause of vision loss in the elderly, shares many risk factors with atherosclerosis, which exhibits loss of vascular compliance resulting from aging and oxidative stress. Here, we explore choroidal and retinal vascular compliance in patients with AMD by evaluating dynamic vascular changes using live ocular imaging following treatment with oral sildenafil citrate, a phosphodiesterase type 5 (PDE5) inhibitor and potent vasodilator. Enhanced-depth imaging optical coherence tomography (EDI-OCT) and OCT angiography (OCT-A) were performed on 46 eyes of 23 subjects, including 15 patients with non-exudative AMD in one eye and exudative AMD in the fellow eye, and 8 age-matched control subjects. Choroidal thickness, choroidal vascularity, and retinal vessel density were measured across the central macula at 1 and 3 hours after a 100 mg oral dose of sildenafil citrate. Baseline choroidal thickness was 172.1 ± 60.0 μm in non-exudative AMD eyes, 196.4 ± 89.8 μm in exudative AMD eyes, and 207.4 ± 77.7 μm in control eyes, with no difference between the 3 groups (P = 0.116). After sildenafil, choroidal thickness increased by 6.0% to 9.0% at 1 and 3 hours in all groups (P = 0.001-0.014). Eyes from older subjects showed choroidal thinning at baseline (P = 0.005) and less choroidal expansion at 1 and 3 hours after sildenafil (P = 0.001), regardless of AMD status (P = 0.666). The choroidal thickening appeared to be primarily attributable to expansion of the stromal rather than the luminal component. Retinal vascular density remained unchanged after sildenafil in all 3 groups (P = 0.281-0.587). Together, our findings suggest that the vascular response of the choroid to sildenafil decreases with age but is not affected by the presence of non-exudative or exudative AMD, providing insight into changes in vessel compliance in aging and AMD.
Deep learning based detection of cone photoreceptors with multimodal adaptive optics scanning light ophthalmoscope images of achromatopsia
Fast and reliable quantification of cone photoreceptors is a bottleneck in the clinical utilization of adaptive optics scanning light ophthalmoscope (AOSLO) systems for the study, diagnosis, and prognosis of retinal diseases. To date, manual grading has been the sole reliable source of AOSLO quantification, as no automatic method has been reliably utilized for cone detection in real-world low-quality images of diseased retina. We present a novel deep learning based approach that combines information from both the confocal and non-confocal split detector AOSLO modalities to detect cones in subjects with achromatopsia. Our dual-mode deep learning based approach outperforms the state-of-the-art automated techniques and is on a par with human grading.
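An illustrative sketch of the dual-mode idea (architecture details are assumptions, not the published network): the two AOSLO modalities are stacked as input channels of a small fully convolutional network that outputs a per-pixel cone-probability map.

```python
# Illustrative only: confocal and split-detector frames stacked as two input
# channels; the network produces a per-pixel cone-probability map.
import torch
import torch.nn as nn

dual_mode_detector = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # ch 0: confocal, ch 1: split detector
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1), nn.Sigmoid(),           # per-pixel cone probability
)

confocal = torch.rand(1, 1, 128, 128)
split_det = torch.rand(1, 1, 128, 128)
prob_map = dual_mode_detector(torch.cat([confocal, split_det], dim=1))
# Cone centres could then be read off as local maxima of `prob_map` above a threshold.
```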
RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images
Quantification of the human rod and cone photoreceptor mosaic in adaptive optics scanning light ophthalmoscope (AOSLO) images is useful for the study of various retinal pathologies. Subjective and time-consuming manual grading has remained the gold standard for evaluating these images, and no well-validated automatic methods for detecting individual rods have been developed. We present a novel deep learning based automatic method, called the rod and cone CNN (RAC-CNN), for detecting and classifying rods and cones in multimodal AOSLO images. We test our method on images from healthy subjects as well as subjects with achromatopsia over a range of retinal eccentricities. We show that our method is on a par with human grading for detecting rods and cones.
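A hypothetical post-processing step (not the RAC-CNN code; the class layout is assumed) showing how a per-pixel class map could be turned into rod and cone coordinates:

```python
# Hypothetical post-processing: turn a (3, H, W) softmax map with channels
# (background, rod, cone) into detected cell coordinates.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_cells(prob_map, cls, thresh=0.5, window=5):
    """cls = 1 for rods, 2 for cones; returns (row, col) coordinates."""
    p = prob_map[cls]
    peaks = (p == maximum_filter(p, size=window)) & (p > thresh)
    return np.argwhere(peaks)
```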
Color Capable Sub-Pixel Resolving Optofluidic Microscope and Its Application to Blood Cell Imaging for Malaria Diagnosis
Miniaturization of imaging systems can significantly benefit clinical diagnosis in challenging environments, where access to physicians and good equipment can be limited. The sub-pixel resolving optofluidic microscope (SROFM) offers high-resolution imaging in the form of an on-chip device that combines microfluidics with inexpensive CMOS image sensors. In this work, we report on the implementation of color SROFM prototypes with a demonstrated optical resolution of 0.66 µm at their highest acuity. We applied the prototypes to perform color imaging of red blood cells (RBCs) infected with Plasmodium falciparum, a particularly harmful malaria parasite and one of the major causes of death in the developing world.
The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM)
We report a chip-scale lensless wide-field-of-view microscopy imaging technique, subpixel perspective sweeping microscopy, which can render microscopy images of growing or confluent cell cultures autonomously. We demonstrate that this technology can be used to build smart Petri dish platforms, termed ePetri, for cell culture experiments. This technique leverages the recent broad and cheap availability of high-performance image sensor chips to provide a low-cost and automated microscopy solution. Unlike the two major classes of lensless microscopy methods, optofluidic microscopy and digital in-line holography microscopy, this new approach is fully capable of working with cell cultures or any samples in which cells may be contiguously connected. With our prototype, we demonstrate the ability to image samples of area 6 mm × 4 mm at 660-nm resolution. As a further demonstration, we show that the method can be applied to image color-stained cell culture samples and to image and track cell culture growth directly within an incubator. Finally, we show that this method can track embryonic stem cell differentiation over the entire sensor surface. A smart Petri dish based on this technology can significantly streamline and improve cell culture experiments by cutting down on human labor and contamination risks.
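A generic shift-and-add pixel super-resolution sketch (not the authors' SPSM reconstruction; in SPSM the sub-pixel shifts come from sweeping the illumination, whereas here they are simply assumed to be known):

```python
# Generic pixel super-resolution: LR frames with known sub-pixel shifts are
# interleaved onto a finer grid and averaged.
import numpy as np

def shift_and_add(frames, shifts, factor=4):
    """frames: list of (H, W) arrays; shifts: (dy, dx) per frame, in LR pixels."""
    H, W = frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor    # sub-pixel shift -> HR grid offset
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    return acc / np.maximum(cnt, 1)              # unfilled HR cells remain zero
```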
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single-frame denoising
model and design it to handle an arbitrary number of noisy input frames. We show
that it achieves state-of-the-art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
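A toy sketch of the recurrent multiframe idea (layer sizes and wiring are assumptions, not the paper's architecture): a single-frame convolutional denoiser augmented with a hidden state that is carried from frame to frame across the burst.

```python
# Toy recurrent burst denoiser (assumed architecture, not the published model).
import torch
import torch.nn as nn

class RecurrentDenoiser(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.ch = ch
        self.encode = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)   # mixes features with previous state
        self.decode = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, burst):                  # burst: (B, T, 1, H, W) noisy frames
        B, T, _, H, W = burst.shape
        state = burst.new_zeros(B, self.ch, H, W)
        outputs = []
        for t in range(T):
            feat = self.encode(burst[:, t])
            state = torch.relu(self.fuse(torch.cat([feat, state], dim=1)))
            outputs.append(self.decode(state)) # a denoised estimate after each frame
        return torch.stack(outputs, dim=1)

denoised = RecurrentDenoiser()(torch.rand(2, 8, 1, 64, 64))   # -> (2, 8, 1, 64, 64)
```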
Simple, Accurate, and Robust Nonparametric Blind Super-Resolution
This paper proposes a simple, accurate, and robust approach to single image
nonparametric blind Super-Resolution (SR). This task is formulated as a
functional to be minimized with respect to both an intermediate super-resolved
image and a nonparametric blur-kernel. The proposed approach includes a
convolution consistency constraint which uses a non-blind learning-based SR
result to better guide the estimation process. Another key component is the
unnatural bi-l0-l2-norm regularization imposed on the super-resolved, sharp
image and the blur-kernel, which is shown to be quite beneficial for estimating
the blur-kernel accurately. The numerical optimization is implemented by
coupling the splitting augmented Lagrangian and the conjugate gradient (CG).
Using the pre-estimated blur-kernel, we finally reconstruct the SR image by a
very simple non-blind SR method that uses a natural image prior. The proposed
approach is demonstrated to achieve better performance than the recent method
by Michaeli and Irani [2] in terms of both kernel estimation accuracy and
image SR quality.
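For orientation, a generic shape of the energy the abstract alludes to (the exact form of the bi-l0-l2 regularizers and of the convolution consistency term goes beyond the abstract and is assumed here):

```latex
\min_{x,\,k}\; \|\,y - D(k \ast x)\,\|_2^2
  \;+\; \lambda\, R_{\ell_0\text{-}\ell_2}(x)
  \;+\; \gamma\, R_{\ell_0\text{-}\ell_2}(k)
  \;+\; \beta\, \|\, y - D(k \ast \tilde{x})\,\|_2^2
```

where $y$ is the low-resolution input, $D$ a downsampling operator, and $\tilde{x}$ the fixed non-blind learning-based SR result used in the convolution consistency term; minimization would alternate between $x$ and $k$, with each subproblem handled by variable splitting (augmented Lagrangian) and conjugate gradient iterations.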