18 research outputs found

    Automatic classification of focal liver lesions based on clinical DCE-MR and T2-weighted images: a feasibility study

    Focal liver lesion classification is an important part of diagnostics. In clinical practice, T2-weighted (T2W) and dynamic contrast-enhanced (DCE) MR images are used to determine the type of lesion. For automatic liver lesion classification, however, only T2W images have been exploited so far. In this feasibility study, a multi-modal approach for automatic classification of five lesion classes (adenoma, cyst, haemangioma, HCC, and metastasis) is studied. Features are derived from four sets, originating from 43 patients: (A) non-corrected DCE-MRI, (B) motion-corrected DCE-MRI, (C) T2W images, and (D) sets B and C combined. An extremely randomized forest is used as classifier. The results show that motion-corrected DCE-MRI features are a valuable addition to the T2W features and improve the accuracy in discriminating benign from malignant lesions, as well as the classification of the five lesion classes. The multi-modal approach shows promising results for automatic liver lesion classification.
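The extremely randomized (Extra-Trees) forest used as classifier above draws split features and thresholds at random instead of optimizing them, then aggregates many such weak trees by majority vote. A minimal numpy-only sketch of that idea on synthetic data (random decision stumps standing in for full trees; an illustration of the technique, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_stump(X, y):
    # Extra-Trees idea: pick the split feature and threshold at random,
    # then store the majority class on each side of the split.
    f = int(rng.integers(X.shape[1]))
    t = rng.uniform(X[:, f].min(), X[:, f].max())
    majority = lambda a: int(np.bincount(a).argmax()) if a.size else int(np.bincount(y).argmax())
    return f, t, majority(y[X[:, f] <= t]), majority(y[X[:, f] > t])

def predict_forest(forest, X):
    # Each stump votes; the most frequent class per sample wins.
    votes = np.stack([np.where(X[:, f] <= t, cl, cr) for f, t, cl, cr in forest])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Synthetic two-class feature data: 50 "benign-like" and 50 "malignant-like" samples.
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
forest = [fit_random_stump(X, y) for _ in range(50)]
accuracy = float((predict_forest(forest, X) == y).mean())
```

Individual random stumps are weak, but the ensemble vote recovers a strong decision boundary, which is why the randomization is cheap yet effective.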

    Detection of joint space narrowing in hand radiographs

    Radiographic assessment of joint space narrowing in hand radiographs is important for determining the progression of rheumatoid arthritis at an early stage. Clinical scoring methods are based on manual measurements that are time consuming and subject to intra-reader and inter-reader variance. The goal is to design an automated method for measuring the joint space width with a higher sensitivity to change than manual methods. The large variability in joint shapes and textures, the possible presence of joint damage, and the interpretation of projection images make it difficult to detect joint margins accurately. We developed a method that uses a modified active shape model to scan for margins within a predetermined region of interest. Possible joint space margin locations are detected using a probability score based on the Mahalanobis distance. To prevent the detection of false edges, we use a dynamic programming approach. The shape model and the Mahalanobis scoring function are trained with a set of 50 hand radiographs, in which the margins have been outlined by an expert. We tested our method on a test set of 50 images. The method was evaluated by calculating the mean absolute difference with manual readings by a trained person. 90% of the joint margins are detected within 0.12 mm. We found that our joint margin detection method has a higher reproducibility than manual readings. For cases where the joint space has disappeared, the algorithm is unable to estimate the margins. In these cases it would be necessary to use a different method to quantify joint damage.
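The Mahalanobis scoring step described above can be illustrated in a few lines: learn the mean and covariance of intensity profiles sampled across known margins, then score candidate locations by their Mahalanobis distance to that model, where a low distance means a likely margin. A toy sketch with a synthetic step-edge profile (illustrative assumptions throughout, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Training" profiles across a margin: a bone-to-joint-space step edge plus noise.
template = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
train = template + rng.normal(0, 0.05, (200, 6))
mean = train.mean(axis=0)
cov = np.cov(train, rowvar=False) + 1e-6 * np.eye(6)  # regularized covariance
cov_inv = np.linalg.inv(cov)

def mahalanobis(profile):
    # Distance of a candidate intensity profile to the learned edge model;
    # low values indicate a plausible joint margin location.
    d = profile - mean
    return float(np.sqrt(d @ cov_inv @ d))

edge_like = template + rng.normal(0, 0.05, 6)  # resembles a true margin
flat = np.full(6, 0.5)                          # no edge present
```

A candidate resembling the learned edge scores far lower than a flat, edge-free profile, which is what lets dynamic programming pick a consistent margin through the per-location scores.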

    Registration, segmentation, and visualization of multimodal brain images

    This paper gives an overview of the studies performed at our institute over the last decade on the processing and visualization of brain images, in the context of international developments in the field. The focus is on multimodal image registration and multimodal visualization, while segmentation is touched upon as a preprocessing step for visualization. The state-of-the-art in these areas is discussed and suggestions for future research are given. © 2001 Elsevier Science Ltd.

    Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI

    Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (Overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (Overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). 
In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite varying degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
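The Dice coefficients reported above measure overlap between an automatic and a reference segmentation: 2|A ∩ B| / (|A| + |B|). A minimal sketch with two toy binary masks (illustrative only):

```python
import numpy as np

def dice(a, b):
    # Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True     # 36 "voxels" segmented automatically
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True   # 36 voxels in the manual reference
# Overlap is the 5x5 region [3:8, 3:8] = 25 voxels, so Dice = 50/72 ≈ 0.69.
```

A Dice of 1.0 means perfect overlap and 0.0 means none, which puts reported values such as 0.87 for WM or 0.67 for WMH on an interpretable scale.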

    Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery

    Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system that continuously calculates the distance to these structures and warns the surgeon when drilling too close will aid in making safe surgical approaches. Contemporary image guidance systems lack an automated method to segment the inhomogeneous and complexly curved facial nerve. Therefore, we developed a method to semi-automatically delineate the intra-temporal facial nerve centerline from clinically available temporal bone CT images. Our method requires the user to provide the start and end points of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model based on the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our method delineates facial nerve centerlines with a maximum error along the whole trajectory of 0.40 ± 0.20 mm (mean ± standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integrating this automated facial nerve delineation with a distance-calculating neuronavigation interface results in a system that adequately warns surgeons during temporal bone drilling and effectively diminishes the risk of iatrogenic facial nerve palsy. © 2011 SPIE
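The reported error metric, the maximum error along the whole trajectory, can be computed by taking, for each point on the automatic centerline, the distance to the nearest gold-standard point, and then the maximum of those distances. A short sketch on synthetic polylines (point sampling and the straight toy "nerve" are assumptions for illustration; this is not the authors' evaluation code):

```python
import numpy as np

def max_centerline_error(auto, gold):
    # For each automatic centerline point, the distance to the nearest
    # gold-standard point; report the maximum along the trajectory.
    d = np.linalg.norm(auto[:, None, :] - gold[None, :, :], axis=-1)
    return float(d.min(axis=1).max())

t = np.linspace(0, 10, 101)  # points every 0.1 mm along a straight toy nerve
gold = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
auto = gold.copy()
auto[50, 1] += 0.4           # one automatic point deviates by 0.4 mm
```

Taking the maximum rather than the mean reflects the clinical concern: a single large deviation near the nerve is what puts the patient at risk, so the worst point along the trajectory is the relevant number.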
