A comparative evaluation of interactive segmentation algorithms
In this paper we present a comparative evaluation of four popular interactive segmentation algorithms. The evaluation was carried out as a series of user experiments, in which participants were tasked with extracting 100 objects from a common dataset: 25 with each algorithm, constrained within a time limit of 2 min for each object. To facilitate the experiments, a "scribble-driven" segmentation tool was developed to enable interactive image segmentation by simply marking areas of foreground and background with the mouse. As the participants refined and improved their respective segmentations, the corresponding updated segmentation mask was stored along with the elapsed time. We then collected and evaluated each recorded mask against a manually segmented ground truth, thus allowing us to gauge segmentation accuracy over time. Two benchmarks were used for the evaluation: the well-known Jaccard index for measuring object accuracy, and a new fuzzy metric, proposed in this paper, designed for measuring boundary accuracy. Analysis of the experimental results demonstrates the effectiveness of the suggested measures and provides valuable insights into the performance and characteristics of the evaluated algorithms.
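For reference, the Jaccard index used above as the object-accuracy benchmark is a standard measure; a minimal sketch over binary masks (the paper's fuzzy boundary metric is not reproduced here):

```python
import numpy as np

def jaccard_index(pred, gt):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union
```

Evaluating this on each stored mask against the ground truth, keyed by elapsed time, gives the accuracy-over-time curves the study describes.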
Left-ventricle myocardium segmentation using a coupled level-set with a priori knowledge
This paper presents a coupled level-set segmentation of the myocardium of the left ventricle of the heart using a priori information. From a fast marching initialisation, two fronts representing the endocardium and epicardium boundaries of the left ventricle are evolved as the zero level-set of a higher dimension function. We introduce a novel and robust stopping term using both gradient and region-based information. The segmentation is supervised both with a coupling function and using a probabilistic model built from training instances. The robustness of the segmentation scheme is evaluated by performing a segmentation on four unseen data-sets containing high variation and the performance of the segmentation is quantitatively assessed.
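The abstract does not give the exact form of the combined stopping term, but gradient and region terms of this kind are commonly blended; a toy sketch under that assumption, where `region_prob` stands in for a per-pixel probability from the trained model and `lam` is an illustrative mixing weight:

```python
import numpy as np

def stopping_term(image, region_prob, lam=0.5):
    """Toy edge/region stopping term in the spirit of the abstract.

    g_edge = 1 / (1 + |∇I|²) decays toward zero at strong gradients,
    slowing the front at edges; region_prob is an assumed per-pixel
    probability of lying inside the myocardium. The paper's actual
    combination rule is not reproduced here.
    """
    gy, gx = np.gradient(image.astype(float))
    g_edge = 1.0 / (1.0 + gx ** 2 + gy ** 2)
    return lam * g_edge + (1.0 - lam) * region_prob
```

In a full implementation this scalar field would multiply the speed of both coupled fronts during the level-set evolution.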
Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury.
Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low contrast MR regions. This paper proposes an approach that is able to detect mTBI lesions by combining both high-level context and low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the location of mTBI. The visual model utilizes texture features in MRI along with a probabilistic support vector machine to maximize the discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesion. The models are tested using a dataset from a novel rodent model of repeated mTBI. The experimental results demonstrate that the fusion of both contextual and visual textural features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit both clinicians, by speeding diagnosis, and patients, by improving clinical care.
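The abstract does not specify how the contextual and visual estimates are fused; an element-wise product with per-pixel renormalization (a naive-Bayes-style combination) is one common choice, sketched here with illustrative names:

```python
import numpy as np

def fuse_probability_maps(p_context, p_visual, eps=1e-12):
    """Fuse a contextual prior map with a visual likelihood map.

    Assumption: both inputs are per-pixel lesion probabilities in [0, 1].
    The product of lesion probabilities is renormalized against the
    product of background probabilities, so agreement between the two
    models sharpens the final estimate.
    """
    lesion = p_context * p_visual
    background = (1.0 - p_context) * (1.0 - p_visual)
    return lesion / (lesion + background + eps)
```

With a contextual prior of 0.8 and a visual probability of 0.9, the fused estimate rises above either input, reflecting agreement between the two models.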
Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines
Many automatically analyzable scientific questions are well-posed and offer a
variety of information about the expected outcome a priori. Although often
being neglected, this prior knowledge can be systematically exploited to make
automated analysis operations sensitive to a desired phenomenon or to evaluate
extracted content with respect to this prior knowledge. For instance, the
performance of processing operators can be greatly enhanced by a more focused
detection strategy and the direct information about the ambiguity inherent in
the extracted data. We present a new concept for the estimation and propagation
of uncertainty involved in image analysis operators. This allows using simple
processing operators that are suitable for analyzing large-scale 3D+t
microscopy images without compromising the result quality. On the foundation of
fuzzy set theory, we transform available prior knowledge into a mathematical
representation and extensively use it to enhance the result quality of various
processing operators. All presented concepts are illustrated on a typical
bioimage analysis pipeline comprised of seed point detection, segmentation,
multiview fusion and tracking. Furthermore, the functionality of the proposed
approach is validated on a comprehensive simulated 3D+t benchmark data set that
mimics embryonic development and on large-scale light-sheet microscopy data of
a zebrafish embryo. The general concept introduced in this contribution
represents a new approach to efficiently exploit prior knowledge to improve the
result quality of image analysis pipelines. Especially, the automated analysis
of terabyte-scale microscopy data will benefit from sophisticated and efficient
algorithms that enable a quantitative and fast readout. The generality of the
concept, however, makes it also applicable to practically any other field with
processing strategies that are arranged as linear pipelines.
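A common way to turn prior knowledge into the fuzzy-set representation described above is a trapezoidal membership function; a minimal sketch (the parameter names and breakpoints are illustrative, not taken from the paper):

```python
import numpy as np

def trapezoidal_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership function.

    Returns 0 below a, ramps linearly to 1 on [a, b], stays 1 on [b, c],
    and ramps back to 0 on [c, d]. Applied to an extracted feature
    (e.g. detected cell diameter), the result scores how well a
    detection agrees with prior expectations.
    """
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rise, fall)
```

Memberships from several priors (size, intensity, expected location) can then be combined with a fuzzy AND (element-wise minimum) to propagate an overall plausibility score through the pipeline.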
Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review
The medical image analysis field has traditionally been focused on the
development of organ-, and disease-specific methods. Recently, the interest in
the development of more comprehensive computational anatomical models has
grown, leading to the creation of multi-organ models. Multi-organ approaches,
unlike traditional organ-specific strategies, incorporate inter-organ relations
into the model, thus leading to a more accurate representation of the complex
human anatomy. Inter-organ relations are not only spatial, but also functional
and physiological. Over the years, the strategies proposed to efficiently
model multi-organ structures have evolved from the simple global modeling, to
more sophisticated approaches such as sequential, hierarchical, or machine
learning-based models. In this paper, we present a review of the state of the
art on multi-organ analysis and associated computational anatomy methodology. The
manuscript follows a methodology-based classification of the different
techniques available for the analysis of multi-organ and multi-anatomical
structures, from techniques using point distribution models to the most recent
deep learning-based approaches. With more than 300 papers included in this
review, we reflect on the trends and challenges of the field of computational
anatomy, the particularities of each anatomical region, and the potential of
multi-organ analysis to increase the impact of medical imaging applications
on the future of healthcare.
Fully automated segmentation and tracking of the intima media thickness in ultrasound video sequences of the common carotid artery
The robust identification and measurement of the intima media thickness (IMT) has a high clinical relevance because it represents one of the most precise predictors used in the assessment of potential future cardiovascular events. To facilitate the analysis of arterial wall thickening in serial clinical investigations, in this paper we have developed a novel fully automatic algorithm for the segmentation, measurement, and tracking of the intima media complex (IMC) in B-mode ultrasound video sequences. The proposed algorithm entails a two-stage image analysis process that initially addresses the segmentation of the IMC in the first frame of the ultrasound video sequence using a model-based approach; in the second step, a novel customized tracking procedure is applied to robustly detect the IMC in the subsequent frames. For the video tracking procedure, we introduce a spatially coherent algorithm called adaptive normalized correlation that prevents the tracking process from converging to wrong arterial interfaces. This represents the main contribution of this paper and was developed to deal with inconsistencies in the appearance of the IMC over the cardiac cycle. The quantitative evaluation has been carried out on 40 ultrasound video sequences of the common carotid artery (CCA) by comparing the results returned by the developed algorithm with respect to ground truth data that has been manually annotated by clinical experts. The measured IMT (mean ± standard deviation) recorded by the proposed algorithm is 0.60 mm ± 0.10, with a mean coefficient of variation (CV) of 2.05%, whereas the corresponding result obtained for the manually annotated ground truth data is 0.60 mm ± 0.11 with a mean CV equal to 5.60%.
The numerical results reported in this paper indicate that the proposed algorithm is able to correctly segment and track the IMC in ultrasound CCA video sequences, and we were encouraged by the stability of our technique when applied to data captured under different imaging conditions. Future clinical studies will focus on the evaluation of patients that are affected by advanced cardiovascular conditions such as focal thickening and arterial plaques
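At the core of the tracking step is normalized correlation; a minimal sketch of zero-mean normalized cross-correlation and a 1-D template search (the paper's adaptive, spatially coherent extension is not reproduced here):

```python
import numpy as np

def normalized_correlation(patch, template, eps=1e-12):
    """Zero-mean normalized cross-correlation between equal-size patches.

    Returns a similarity score in [-1, 1]; constant patches score 0.
    """
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum()) + eps
    return float((p * t).sum() / denom)

def track_template(signal, template):
    """Return the offset in a 1-D signal that best matches the template.

    In a real tracker this search would run over 2-D patches around the
    previous IMC position in each new frame.
    """
    n, m = len(signal), len(template)
    scores = [normalized_correlation(signal[i:i + m], template)
              for i in range(n - m + 1)]
    return int(np.argmax(scores))
```

The spatial-coherence constraint described in the abstract would additionally penalize candidate offsets that jump away from the neighbouring boundary estimates, which is what keeps the tracker from locking onto the wrong arterial interface.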
Semi-supervised learning towards automated segmentation of PET images with limited annotations: Application to lymphoma patients
The time-consuming task of manual segmentation challenges routine systematic
quantification of disease burden. Convolutional neural networks (CNNs) hold
significant promise to reliably identify locations and boundaries of tumors
from PET scans. We aimed to leverage the need for annotated data via
semi-supervised approaches, with application to PET images of diffuse large
B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL).
We analyzed 18F-FDG PET images of 292 patients with PMBCL (n=104) and DLBCL
(n=188) (n=232 for training and validation, and n=60 for external testing). We
employed FCM and MS losses for training a 3D U-Net with different levels of
supervision: i) fully supervised methods with labeled FCM (LFCM) as well as
Unified focal and Dice loss functions, ii) unsupervised methods with Robust FCM
(RFCM) and Mumford-Shah (MS) loss functions, and iii) Semi-supervised methods
based on FCM (RFCM+LFCM), as well as MS loss in combination with supervised
Dice loss (MS+Dice). Unified loss function yielded higher Dice score (mean +/-
standard deviation (SD)) (0.73 +/- 0.03; 95% CI, 0.67-0.8) compared to Dice
loss (p-value<0.01). Semi-supervised (RFCM+alpha*LFCM) with alpha=0.3 showed
the best performance, with a Dice score of 0.69 +/- 0.03 (95% CI, 0.45-0.77)
outperforming (MS+alpha*Dice) for any supervision level (any alpha) (p<0.01).
The best performer among (MS+alpha*Dice) semi-supervised approaches with
alpha=0.2 showed a Dice score of 0.60 +/- 0.08 (95% CI, 0.44-0.76) compared to
another supervision level in this semi-supervised approach (p<0.01).
Semi-supervised learning via FCM loss (RFCM+alpha*LFCM) showed improved
performance compared to supervised approaches. Considering the time-consuming
nature of expert manual delineations and intra-observer variabilities,
semi-supervised approaches have significant potential for automated
segmentation workflows.
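The semi-supervised objectives compared above share the pattern "unsupervised term + alpha × supervised term"; a minimal sketch with a soft Dice loss standing in for the supervised term (the FCM and Mumford-Shah terms themselves are not reproduced here):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|), on soft masks."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def combined_loss(unsup_loss, sup_loss, alpha):
    """Semi-supervised objective of the form used above,
    e.g. RFCM + alpha * LFCM or MS + alpha * Dice."""
    return unsup_loss + alpha * sup_loss
```

In training, `unsup_loss` would be the label-free RFCM or Mumford-Shah term computed on all scans, while the supervised term is evaluated only on the annotated subset; the study's result is that alpha = 0.3 worked best for the FCM variant.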