CTA Quantification and Multi-modal Visualization for Assessing Coronary Artery Disease
In cardiovascular disease, relating a coronary stenosis to a cardiac perfusion defect is important for selecting and planning the proper treatment. However, this is challenging owing to the high ana…
Automatic segmentation, detection and quantification of coronary artery stenoses on CTA
Accurate detection and quantification of coronary artery stenoses are essential for treatment planning in patients with suspected coronary artery disease. We present a method to automatically detect and quantify coronary artery stenoses in computed tomography coronary angiography. First, centerlines are extracted using a two-point minimum cost path approach and a subsequent refinement step. The resulting centerlines are used as an initialization for lumen segmentation, performed using graph cuts. Then, the expected diameter of the healthy lumen is estimated by applying robust kernel regression to the coronary artery lumen diameter profile. Finally, stenoses are detected and quantified by computing the difference between the estimated and expected diameter profiles. We evaluated our method using the data provided in the Coronary Artery Stenoses Detection and Quantification Evaluation Framework. Using 30 testing datasets, the method achieved a detection sensitivity of 29% and a positive predi…
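The quantification step lends itself to a compact illustration. Below is a minimal Python sketch of the idea: a robustly reweighted Nadaraya-Watson regression estimates the healthy-lumen diameter profile along the centerline, and samples whose measured diameter falls well below that estimate are flagged as stenotic. The Gaussian kernel, bandwidth, reweighting scheme, and 20% narrowing threshold are illustrative assumptions, not the authors' parameters.

    import numpy as np

    def robust_kernel_regression(x, y, bandwidth=10.0, n_iter=3):
        """Nadaraya-Watson kernel regression with iterative robust
        reweighting, so that focal diameter drops (candidate stenoses)
        pull the estimated healthy profile down as little as possible."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        w_robust = np.ones_like(y)
        y_hat = y.copy()
        for _ in range(n_iter):
            for i, xi in enumerate(x):
                k = np.exp(-0.5 * ((x - xi) / bandwidth) ** 2) * w_robust
                y_hat[i] = np.sum(k * y) / np.sum(k)
            resid = y - y_hat
            scale = 1.4826 * np.median(np.abs(resid)) + 1e-9  # robust sigma via MAD
            # Down-weight points lying far *below* the fit (likely stenotic samples).
            w_robust = np.clip(1.0 - (np.maximum(-resid, 0.0) / (3.0 * scale)) ** 2,
                               0.05, 1.0)
        return y_hat

    def quantify_stenoses(pos_mm, diam_mm, min_narrowing=0.20):
        """Flag samples whose measured diameter falls more than 20%
        below the expected healthy-lumen diameter."""
        expected = robust_kernel_regression(pos_mm, diam_mm)
        narrowing = 1.0 - np.asarray(diam_mm, dtype=float) / expected
        return expected, 100.0 * narrowing, narrowing > min_narrowing

The robust reweighting is what distinguishes this from plain smoothing: without it, a long stenotic segment would drag the "expected" profile down and mask its own narrowing.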
User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy
Accurate segmentation of organs at risk is an important step in radiotherapy planning. Since manual segmentation is a tedious procedure, prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians’ expertise with computers’ potential. This study evaluates two semi-automatic segmentation methods with different types of user interaction, named “strokes” and “contour”, to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 for the contour method and 22 for the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of the physicians, there is a need for flexibility in interface design; (3) the correlated measures provide insights that can be used to improve user interaction design.
Automatic segmentation of right ventricle in cardiac cine MR images using a saliency analysis
PURPOSE: Accurate measurement of right ventricle (RV) volume is important for assessing ventricular function and serves as a biomarker of the progression of cardiovascular disease. However, the high anatomical variability of the RV makes proper delineation of the myocardial wall difficult. This paper introduces a new automatic method for segmenting the RV volume from short-axis cardiac magnetic resonance (MR) images via a saliency analysis of temporal and spatial observations.
METHODS: The RV volume estimation starts by localizing the heart as the region with the most coherent motion during the cardiac cycle. Afterward, the ventricular chambers are identified at the basal level using the isodata algorithm, the right ventricle is extracted, and its centroid is computed. A series of radial intensity profiles, traced from this centroid, is used to search for a salient intensity pattern that models the inner and outer myocardial boundaries (see the sketch following this abstract). This process is applied iteratively toward the apex, using the segmentation of the previous slice as a regularizer. The consecutive 2D segmentations are stacked to obtain the final RV endocardial volume, which is also used to estimate the epicardium.
RESULTS: Experiments performed with a public dataset, provided by the RV segmentation challenge in cardiac MRI, demonstrated that this method is highly competitive with the state of the art, obtaining a Dice score of 0.87 and a Hausdorff distance of 7.26 mm, while a whole volume was segmented in about 3 s.
CONCLUSIONS: The proposed method provides a useful delineation of the RV shape using only the spatial and temporal information of the cine MR images. This methodology may be used by experts to derive cardiac indicators of right ventricular function.
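As a rough illustration of the radial-profile step referenced in the METHODS (not the authors' full saliency model, which also exploits temporal observations and slice-to-slice regularization), the sketch below traces rays from the blood-pool centroid and takes the strongest bright-to-dark transition along each ray as an endocardial boundary candidate. The function name, ray count, and maximum radius are illustrative assumptions.

    import numpy as np

    def radial_boundary_candidates(slice_img, centroid, n_angles=64, max_radius=40):
        """Trace radial intensity profiles from the RV blood-pool centroid and,
        on each ray, pick the largest bright-to-dark intensity step as a crude
        saliency cue for the endocardial boundary (illustrative only)."""
        cy, cx = centroid
        boundary = []
        for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
            r = np.arange(1, max_radius)
            ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, slice_img.shape[0] - 1)
            xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, slice_img.shape[1] - 1)
            profile = slice_img[ys, xs].astype(float)
            step = np.argmin(np.diff(profile))  # most negative step = bright-to-dark edge
            boundary.append((ys[step], xs[step]))
        return np.array(boundary)  # (n_angles, 2) array of (row, col) candidates

In cine MR the blood pool is bright and the myocardium dark, which is why a bright-to-dark step works as a simple boundary cue; the previous slice's segmentation would then constrain where along each ray the step is accepted.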
Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images
Studies have demonstrated the feasibility of late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae of myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. Comparing new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms is challenging, and benchmarking datasets with evaluation strategies are much needed to facilitate such comparisons. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs, acquired at two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation.
Six widely used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed-thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges
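For context on the thresholding baselines, here is a minimal sketch of the two families of fixed-thresholding rules discussed above, assuming a grey-level LGE image plus expert-drawn myocardium and remote (healthy) myocardium masks; the exact implementations used in the benchmark may differ.

    import numpy as np

    def nsd_infarct(lge, myo_mask, remote_mask, n=3.0):
        """n-SD rule: infarct = myocardial voxels brighter than the mean
        of remote (healthy) myocardium plus n standard deviations."""
        mu = lge[remote_mask].mean()
        sd = lge[remote_mask].std()
        return myo_mask & (lge > mu + n * sd)

    def fwhm_infarct(lge, myo_mask):
        """FWHM rule: infarct = myocardial voxels above half of the
        maximum myocardial intensity."""
        return myo_mask & (lge > 0.5 * lge[myo_mask].max())

A well-known practical weakness of the n-SD family is its sensitivity to the operator-chosen remote region and to image noise, which may help explain why FWHM is the only fixed threshold found competitive with the newer algorithms here.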
Radiomics: principles and radiotherapy applications
Radiomics is defined as the extraction of a large quantity of quantitative image features. The different radiomic indices that have been proposed in the literature are described, as well as the various factors that affect the robustness of these indices. Several hundred quantitative features can be extracted per lesion and imaging modality, and the ever-growing number of features studied raises the question of the statistical method of analysis used. This review addresses the research supporting the clinical use of radiomics in oncology: staging of disease, discrimination between healthy and pathological tissues, identification of genetic features, prediction of patient survival, response to treatment, recurrence after radiotherapy and chemoradiotherapy, and side effects. Based on the existing literature, it remains difficult to identify features that should be used in current clinical practice.
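To give a concrete sense of what "quantitative image features" means here, the sketch below computes a handful of first-order radiomic features over a lesion ROI. It is a toy example, not an IBSI-compliant pipeline: real radiomics software adds shape descriptors and texture matrices (GLCM, GLRLM, and others), and the bin count used for the entropy is an arbitrary assumption.

    import numpy as np
    from scipy.stats import kurtosis, skew

    def first_order_features(image, roi_mask, n_bins=64):
        """A few first-order radiomic features over a lesion ROI
        (mask and binning choices are illustrative only)."""
        vals = image[roi_mask].astype(float)
        hist, _ = np.histogram(vals, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return {
            "mean": vals.mean(),
            "std": vals.std(),
            "skewness": skew(vals),
            "kurtosis": kurtosis(vals),
            "entropy": -np.sum(p * np.log2(p)),
        }

The robustness factors mentioned above enter precisely at steps like these: resampling, discretization (the bin count), and ROI delineation all change the feature values, which is why harmonized settings are needed before such features can support clinical decisions.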