A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images
Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. Accurate analysis of the nerve structures can assist the early diagnosis of this disease. This paper proposes a robust, fast and fully automatic nerve segmentation and morphometric parameter quantification system for corneal confocal microscope images. The segmentation part consists of three main steps. First, a preprocessing step enhances the visibility of the nerves and removes noise using anisotropic diffusion filtering, specifically a coherence filter followed by Gaussian filtering. Second, morphological operations remove unwanted objects in the input image, such as epithelial cells and small nerve segments. Finally, an edge detection step detects all the nerves in the input image; in this step, an efficient algorithm for connecting discontinuous nerves is proposed. In the morphometric parameter quantification part, a number of features are extracted, including nerve thickness, tortuosity and length, which may be used for the early diagnosis of diabetic polyneuropathy and when planning Laser-Assisted in situ Keratomileusis (LASIK) or photorefractive keratectomy (PRK). The performance of the proposed segmentation system is evaluated against manually traced ground-truth images on a database of 498 corneal sub-basal nerve images (238 normal and 260 abnormal). In addition, the robustness and efficiency of the proposed system in extracting morphometric features with clinical utility were evaluated on 919 images taken from healthy subjects and diabetic patients with and without neuropathy. We demonstrate rapid (13 seconds/image), robust and effective automated corneal nerve quantification. The proposed system can be deployed as a clinical tool to support the expertise of ophthalmologists and save clinician time in a busy clinical setting.
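The three-step segmentation pipeline described above can be sketched as follows. This is an illustrative approximation, not the authors' code: plain Gaussian smoothing stands in for the coherence-enhancing anisotropic diffusion, and thresholding plus a gradient-magnitude pass stands in for the edge-detection and nerve-connection step; all parameter values are arbitrary.

```python
import numpy as np
from scipy import ndimage

def segment_nerves(img, sigma=1.5, thresh=0.5, min_size=20):
    """Sketch of the three-step pipeline: smooth, clean up, detect edges."""
    # Step 1: preprocessing -- suppress noise while keeping nerve ridges
    # (Gaussian smoothing as a stand-in for coherence-enhancing diffusion).
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma=sigma)
    # Step 2: morphological clean-up -- drop small bright objects
    # (e.g. epithelial cells) below min_size pixels.
    mask = smoothed > thresh * smoothed.max()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size
    cleaned = keep[labels]
    # Step 3: edge detection on the cleaned mask via gradient magnitude.
    gx, gy = np.gradient(cleaned.astype(float))
    edges = np.hypot(gx, gy) > 0
    return cleaned, edges

# Synthetic image: one horizontal "nerve" plus an isolated speck of noise.
img = np.zeros((64, 64))
img[30:33, 5:60] = 1.0   # nerve-like ridge
img[5, 5] = 1.0          # isolated speck (should be removed)
cleaned, edges = segment_nerves(img)
```

In a real system the clean-up would also suppress elongated but short nerve fragments, and the edge step would reconnect discontinuous nerve segments as the paper describes.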
Localizing the Recurrent Laryngeal Nerve via Ultrasound with a Bayesian Shape Framework
Tumor infiltration of the recurrent laryngeal nerve (RLN) is a contraindication for robotic thyroidectomy and can be difficult to detect via standard laryngoscopy. Ultrasound (US) is a viable alternative for RLN detection due to its safety and ability to provide real-time feedback. However, the small size of the RLN, with a diameter typically less than 3 mm, poses significant challenges to accurate localization. In this work, we propose a knowledge-driven framework for RLN localization that mimics the standard approach surgeons take to identify the RLN from its surrounding organs. We construct a prior anatomical model based on the inherent relative spatial relationships between organs. Through Bayesian shape alignment (BSA), we obtain candidate coordinates for the center of a region of interest (ROI) that encloses the RLN. The ROI allows a reduced field of view for determining the refined centroid of the RLN using a dual-path identification network based on multi-scale semantic information. Experimental results indicate that the proposed method achieves superior hit rates and substantially smaller distance errors compared with state-of-the-art methods.
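The anatomical-prior idea, predicting where the RLN should be from the positions of its surrounding organs, reduces to a simple computation in toy form. The `prior_offsets` below are hypothetical placeholders; in the paper they come from the Bayesian shape alignment over the learned anatomical model.

```python
import numpy as np

def roi_center_from_anatomy(organ_centroids, prior_offsets):
    """Toy candidate for the RLN ROI centre: each surrounding organ's
    centroid plus its prior relative offset to the RLN, averaged.
    (Illustrative only; the paper aligns a full prior shape model.)"""
    estimates = organ_centroids + prior_offsets   # one estimate per organ
    return estimates.mean(axis=0)

# Two hypothetical organ centroids whose priors agree on the same point.
organ_centroids = np.array([[10.0, 10.0], [20.0, 10.0]])
prior_offsets = np.array([[5.0, 0.0], [-5.0, 0.0]])
center = roi_center_from_anatomy(organ_centroids, prior_offsets)
```

The refined centroid would then be predicted inside this ROI by the dual-path network described in the abstract.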
Classification of Corneal Nerve Images Using Machine Learning Techniques
Recent research shows that small nerve fiber damage is an early indicator of neuropathy. These small nerve fibers are present in the human cornea and can be visualized through the use of a corneal confocal microscope. A series of images can be acquired from the subbasal nerve plexus of the cornea. Before the images can be quantified for nerve loss, a human expert manually traces the nerves in the image and then classifies the image as having neuropathy or not. Some nerve tracing algorithms are available in the literature, but none of them are reported as being used in clinical practice. An alternative practice is to visually classify the image for neuropathy without quantification. In this paper, we evaluate the potential of various machine learning techniques for automating corneal nerve image classification. First, the images are down-sampled using the discrete wavelet transform, filtering and a number of morphological operations. The resulting binary image is used for extracting characteristic features of the image. This is followed by training a classifier on the extracted features. The trained classifier is then used for predicting the state of the nerves in the images. Our experiments yield a classification accuracy of 0.91, reflecting the effectiveness of the proposed method.
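The down-sample, extract-features, classify pipeline can be sketched in simplified form. A Haar block average stands in for the wavelet transform's approximation coefficients, the two features are hypothetical stand-ins for the paper's characteristic features, and a nearest-centroid rule stands in for the evaluated classifiers.

```python
import numpy as np

def haar_downsample(img):
    """One level of Haar DWT approximation: average 2x2 blocks
    (a stand-in for the paper's wavelet-based down-sampling)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def extract_features(binary):
    """Toy features from a binary nerve image: pixel density and
    row-wise coverage (hypothetical stand-ins for the real features)."""
    return np.array([binary.mean(), binary.any(axis=1).mean()])

class NearestCentroid:
    """Minimal classifier: predict the class with the closest mean
    feature vector (stand-in for the evaluated ML techniques)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic "normal" (dense nerves) vs "abnormal" (nerve loss) images.
dense = np.ones((16, 16))
sparse = np.zeros((16, 16)); sparse[0, 0:4] = 1.0
X = np.array([extract_features(haar_downsample(dense) > 0.25),
              extract_features(haar_downsample(sparse) > 0.25)])
y = np.array([0, 1])
pred = NearestCentroid().fit(X, y).predict(X)
```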
An automatic corneal subbasal nerve registration system using FFT and phase correlation techniques for an accurate DPN diagnosis
Confocal microscopy is employed as a fast and non-invasive way to capture a sequence of images from different layers and membranes of the cornea. The captured images are used to extract useful clinical information for early diagnosis of corneal diseases such as Diabetic Peripheral Neuropathy (DPN). In this paper, an automatic corneal subbasal nerve registration system is proposed. The main aim of the proposed system is to produce a new informative corneal image that contains structural and functional information. In addition, a colour-coded corneal image map is produced by overlaying a sequence of Cornea Confocal Microscopy (CCM) images that differ from each other in displacement, illumination, scaling, and rotation. An automatic image registration method is proposed based on combining the advantages of the Fast Fourier Transform (FFT) and phase correlation techniques. The proposed registration algorithm searches for the best common features between a number of sequenced CCM images in the frequency domain to produce the informative image map. In this generated image map, each colour represents the severity level of a specific clinical feature, which can give ophthalmologists a clear and precise representation of the clinical features extracted from each nerve in the image map. Moreover, successful implementation of the proposed system and the availability of the required datasets open the door for other interesting ideas; for instance, it can be used to give ophthalmologists a summarized and objective description of a diabetic patient's health status using a sequence of CCM images captured from different imaging devices and/or at different times.
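The frequency-domain core of FFT-based registration can be sketched with classical phase correlation, which recovers the translation between two images from the normalized cross-power spectrum. This is a minimal translation-only sketch; the system described above also handles illumination, scaling and rotation.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer (dy, dx) circular shift taking `ref` to `mov`
    via the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # sharp peak at the shift
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
dy, dx = phase_correlation(ref, np.roll(ref, (3, -4), axis=(0, 1)))
```

Once the shift is known, the moving image can be aligned and overlaid on the reference to build the colour-coded map.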
Segmentation of nerve bundles and ganglia in spine MRI using particle filters
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 41-44). Automatic segmentation of spinal nerve bundles originating within the dural sac and exiting the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this thesis, we present an automatic tracking method for segmentation of nerve bundles based on particle filters. We develop a novel approach to flexible particle representation of tubular structures based on Bezier splines. We construct an appropriate dynamics to reflect the continuity and smoothness properties of real nerve bundles. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We evaluate the results by comparing them to expert manual segmentation, and we demonstrate accurate and fast nerve tracking. By Adrian Vasile Dalca, S.M.
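Two building blocks of the approach, the Bezier-spline particle representation and the particle-filter resampling step, can be sketched as follows. This is an illustrative sketch, not the thesis code; the dynamics model and the image likelihood are omitted, so the weights here are uniform placeholders.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve at parameters t in [0, 1].
    ctrl: (4, 2) control points -- one flexible tubular-structure
    hypothesis, as in the particle representation."""
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

def resample(particles, weights, rng):
    """Multinomial resampling: concentrate particles on hypotheses
    with high image likelihood (weights would come from that model)."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# One spline hypothesis and a uniform-weight resampling pass.
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
pts = bezier(ctrl, np.linspace(0.0, 1.0, 5))

rng = np.random.default_rng(0)
particles = rng.random((10, 4, 2))   # 10 spline hypotheses
weights = np.full(10, 0.1)
survivors = resample(particles, weights, rng)
```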
Novel medical imaging technologies for processing epithelium and endothelium layers in corneal confocal images. Developing automated segmentation and quantification algorithms for processing sub-basal epithelium nerves and endothelial cells for early diagnosis of diabetic neuropathy in corneal confocal microscope images
Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. Accurate analysis of the corneal epithelium nerve structures and the corneal endothelial cells can assist early diagnosis of this disease and other corneal diseases, which can lead to visual impairment and ultimately blindness. In this thesis, fully automated segmentation and quantification algorithms for processing and analysing sub-basal epithelium nerves and endothelial cells are proposed for early diagnosis of diabetic neuropathy in Corneal Confocal Microscopy (CCM) images. Firstly, a fully automatic nerve segmentation system for corneal confocal microscope images is proposed. The performance of the proposed system is evaluated against manually traced images, with a prototype execution time of 13 seconds per image. Secondly, an automatic corneal nerve registration system is proposed. The main aim of this system is to produce a new informative corneal image that contains structural and functional information. Thirdly, an automated real-time system, termed the Corneal Endothelium Analysis System (CEAS), is developed and applied to the segmentation of endothelial cells in images of the human cornea obtained by in vivo CCM. The performance of the proposed CEAS system was tested against manually traced images, with an execution time of only 6 seconds per image. Finally, the results obtained from all the proposed approaches have been evaluated and validated by an expert advisory board from two institutions: the Division of Medicine, Weill Cornell Medicine-Qatar, Doha, Qatar, and the Manchester Royal Eye Hospital, Centre for Endocrinology and Diabetes, UK.
RadFormer: Transformers with Global-Local Attention for Interpretable and Accurate Gallbladder Cancer Detection
We propose a novel deep neural network architecture to learn interpretable representations for medical image analysis. Our architecture generates global attention for a region of interest, and then learns bag-of-words style deep feature embeddings with local attention. The global and local feature maps are combined using a contemporary transformer architecture for highly accurate Gallbladder Cancer (GBC) detection from Ultrasound (USG) images. Our experiments indicate that the detection accuracy of our model exceeds even that of human radiologists, advocating its use as a second reader for GBC diagnosis. Bag-of-words embeddings allow our model to be probed for interpretable explanations of GBC detection consistent with those reported in the medical literature. We show that the proposed model not only helps in understanding the decisions of neural network models but also aids in the discovery of new visual features relevant to the diagnosis of GBC. Source code and model will be available at https://github.com/sbasu276/RadFormer. (To appear in Elsevier Medical Image Analysis.)
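The global-local fusion described above can be illustrated with a minimal scaled dot-product attention step: the global feature acts as a query over the bag of local embeddings. This is a simplification; the actual model uses a full transformer with learned projection matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_local_fusion(global_feat, local_feats):
    """Fuse a global feature (d,) with local 'bag of words' features
    (n, d) via scaled dot-product attention, then concatenate.
    (Simplified stand-in for the transformer fusion in the abstract;
    the attention weights also serve as an interpretability signal.)"""
    d = global_feat.shape[0]
    scores = local_feats @ global_feat / np.sqrt(d)   # (n,) relevance
    attn = softmax(scores)                            # which local words matter
    attended = attn @ local_feats                     # (d,) weighted summary
    return np.concatenate([global_feat, attended]), attn

rng = np.random.default_rng(0)
g = rng.random(8)          # global ROI feature
L = rng.random((5, 8))     # 5 local patch embeddings
fused, attn = global_local_fusion(g, L)
```

Inspecting `attn` shows which local embeddings drove the prediction, which is the kind of probing the abstract describes for interpretable explanations.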
Building Extraction from Very High Resolution Aerial Imagery Using Joint Attention Deep Neural Network
Automated methods to extract buildings from very high resolution (VHR) remote sensing data have many applications in a wide range of fields. Many convolutional neural network (CNN) based methods have been proposed and have achieved significant advances in the building extraction task. In order to refine predictions, many recent approaches fuse features from earlier layers of CNNs to introduce abundant spatial information, a strategy known as skip connections. However, reusing earlier features directly without processing can reduce the performance of the network. To address this problem, we propose a novel fully convolutional network (FCN) that adopts attention-based re-weighting to extract buildings from aerial imagery. Specifically, we consider the semantic gap between features from different stages and leverage the attention mechanism to bridge the gap prior to the fusion of features. The inferred attention weights along the spatial and channel dimensions make the low-level feature maps adaptive to the high-level feature maps in a target-oriented manner. Experimental results on three publicly available aerial imagery datasets show that the proposed model (RFA-UNet) achieves comparable or improved performance compared with other state-of-the-art models for building extraction.
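The attention-based re-weighting of skip-connection features can be sketched as a squeeze-and-excitation style channel gate: pool the high-level features, turn the result into per-channel weights, and scale the low-level maps before fusion. This is a simplified stand-in for RFA-UNet's spatial and channel attention; a real gate uses learned weights rather than a raw sigmoid of the pooled features.

```python
import numpy as np

def channel_attention(low, high):
    """Re-weight low-level feature maps (C, H, W) by channel attention
    derived from high-level features (C, H', W'), bridging the semantic
    gap before skip-connection fusion (illustrative sketch only)."""
    gap = high.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    weights = 1.0 / (1.0 + np.exp(-gap))      # excite: sigmoid gate in (0, 1)
    return low * weights[:, None, None]       # scale each low-level channel

rng = np.random.default_rng(0)
low = rng.random((4, 8, 8))    # earlier-layer (skip) features
high = rng.random((4, 4, 4))   # deeper-layer features
out = channel_attention(low, high)
```

Channels the high-level features consider uninformative are attenuated, so the fused skip features become target-oriented rather than copied verbatim.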