A deep learning method for automatic segmentation of the bony orbit in MRI and CT images
This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series, followed by a graph-search method, to generate a boundary for the orbit. When compared to human performance on both CT and MRI data, the proposed method achieves high Dice coefficients for both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with manual segmentation by a human expert. Given the volumetric nature of these imaging modalities and the complexity and time-consuming nature of segmenting the orbital region of the human skull, manual segmentation is often impractical. The proposed method therefore provides a valid clinical and research tool that performs comparably to a human observer.
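The Dice coefficients quoted above measure the overlap between the automatic and manual segmentations. A minimal sketch of the metric, using NumPy and small hypothetical binary masks (the masks and function name are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly
    return 2.0 * intersection / total if total > 0 else 1.0

# Hypothetical 4x4 masks for illustration
a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
print(dice_coefficient(a, b))  # 2*3 / (4+3) ≈ 0.857
```

A Dice score of 1.0 indicates perfect agreement with the manual segmentation, which is why the 0.975 and 0.995 background scores above indicate near-identical labelling.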
Effect of patch size and network architecture on a convolutional neural network approach for automatic segmentation of OCT retinal layers
Deep learning strategies, particularly convolutional neural networks (CNNs), are especially suited to finding patterns in images and using those patterns for image classification. The method is normally applied to an image patch and assigns a class weight to the patch; it has recently been used to estimate the probability of retinal boundary locations in OCT images, which is then used to segment the OCT image with a graph-search approach. This paper examines the effects of several modifications to the CNN architecture with the aim of optimizing retinal layer segmentation; specifically, the effects of patch size and network architecture design on CNN performance and the subsequent layer segmentation are presented. The results demonstrate that increasing the patch size can improve classification performance and provides a more reliable segmentation for the analysis of retinal layer characteristics in OCT imaging. Similarly, this work shows that changing aspects of the CNN design can also significantly improve the segmentation results. It further demonstrates that performance can vary with the number of classes (i.e. boundaries) used to train the CNN, with fewer classes showing inferior performance because similar image features shared between classes can trigger false positives. Changes to the network (patch size and/or architecture) can be applied to provide superior segmentation performance that is robust to this class effect. The findings from this work may inform future CNN development in OCT retinal image analysis.
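As a concrete illustration of the patch-based setup described above, a per-pixel classifier sees only a square patch centred on the pixel of interest, so the patch size directly controls how much surrounding retinal context the CNN receives. The sizes and the synthetic B-scan below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def extract_patch(image: np.ndarray, row: int, col: int, patch_size: int) -> np.ndarray:
    """Return a patch_size x patch_size patch centred on (row, col).

    The image is zero-padded so patches near the border keep a fixed
    size, as required for a fixed-input CNN classifier.
    """
    half = patch_size // 2
    padded = np.pad(image, half, mode="constant")
    # (row, col) in the original image is (row + half, col + half) in
    # the padded image, so the slice below is centred on that pixel.
    return padded[row:row + patch_size, col:col + patch_size]

# Illustrative OCT B-scan stand-in: one bright horizontal "layer" at row 10
bscan = np.zeros((32, 64))
bscan[10, :] = 1.0

small = extract_patch(bscan, 10, 5, 9)    # little surrounding context
large = extract_patch(bscan, 10, 5, 25)   # much more surrounding context
print(small.shape, large.shape)           # (9, 9) (25, 25)
```

With a larger patch, more of the neighbouring layers falls inside the classifier's receptive field, which is one intuition for why increasing patch size improved classification in this work.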
Automatic detection of cone photoreceptors with fully convolutional networks
Purpose: To develop a fully automatic method, based on deep learning algorithms, for determining the locations of cone photoreceptors within adaptive optics scanning laser ophthalmoscope images, and to evaluate its performance against a dataset of manually segmented images. Methods: A fully convolutional network (FCN) based on the U-Net architecture was used to generate prediction probability maps, and a localization algorithm was then used to reduce each prediction map to a collection of points. The proposed method was trained and tested on two publicly available datasets of different imaging modalities, with Dice overlap, false discovery rate, and true positive rate reported to assess performance. Results: The proposed method achieves a Dice coefficient of 0.989, true positive rate of 0.987, and false discovery rate of 0.009 on the first (confocal) dataset, and a Dice coefficient of 0.926, true positive rate of 0.909, and false discovery rate of 0.051 on the second (split detector) dataset. These results compare favorably with a previously proposed method, while this method is approximately 25 times faster to evaluate. Conclusions: The proposed FCN-based method demonstrates that deep learning algorithms can achieve cone localizations nearly comparable to those of a human expert labeling the images. Translational Relevance: Manual cone photoreceptor identification is a time-consuming task due to the large number of cones present within a single image; the proposed FCN-based method could support the image analysis task, drastically reducing the need for manual assessment of the photoreceptor mosaic.
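The localization step described above, which reduces the FCN's probability map to a set of cone coordinates, can be sketched as a thresholded local-maximum search. This is an assumed minimal variant for illustration, not the paper's exact algorithm:

```python
import numpy as np

def prob_map_to_points(prob: np.ndarray, threshold: float = 0.5):
    """Reduce a probability map to point detections.

    A pixel is kept if it exceeds `threshold` and is the unique
    maximum within its 3x3 neighbourhood (assumed simple criterion;
    border pixels are skipped for brevity).
    """
    points = []
    rows, cols = prob.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            centre = prob[r, c]
            if centre < threshold:
                continue
            neighbourhood = prob[r - 1:r + 2, c - 1:c + 2]
            if centre == neighbourhood.max() and (neighbourhood == centre).sum() == 1:
                points.append((r, c))
    return points

# Illustrative probability map with two well-separated "cones"
p = np.zeros((8, 8))
p[2, 2] = 0.9
p[5, 6] = 0.8
print(prob_map_to_points(p))  # [(2, 2), (5, 6)]
```

In practice a vectorised maximum filter would replace the explicit loops, but the principle is the same: each probability peak above threshold becomes one detected cone centre.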