196 research outputs found
Design and validation of Segment - freely available software for cardiovascular image analysis
<p>Abstract</p> <p>Background</p> <p>Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format.</p> <p>Results</p> <p>Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. 
The software has been made freely available for research purposes in a source code format on the project home page <url>http://segment.heiberg.se</url>.</p> <p>Conclusions</p> <p>Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.</p>
Automatic segmentation in CMR - Development and validation of algorithms for left ventricular function, myocardium at risk and myocardial infarction
In this thesis, four new algorithms are presented for automatic segmentation in cardiovascular magnetic resonance (CMR): automatic segmentation of the left ventricle, of myocardial infarction, and of myocardium at risk in two different image types. All four algorithms were implemented in freely available image analysis software and were validated against reference delineations, showing low bias and high regional agreement. CMR is the most accurate and reproducible method for assessment of left ventricular mass and volumes, and the reference standard for assessment of myocardial infarction. CMR has also been validated against single photon emission computed tomography (SPECT) for assessment of myocardium at risk up to one week after acute myocardial infarction. However, the clinical standard for quantification of left ventricular mass and volumes is manual delineation, which has been shown to have a large bias between observers from different sites; for myocardium at risk and myocardial infarction there is no clinical standard, owing to the varying results reported for the previously suggested threshold methods. The new automatic algorithms were all based on intensity classification by Expectation Maximization (EM) combined with a priori information specific to each application. Validation was performed in large cohorts of patients with regard to bias in clinical parameters and to regional agreement measured as the Dice Similarity Coefficient (DSC). Further, images with reference delineations of the left ventricle were made available for future benchmarking of left ventricular segmentation, and the new automatic algorithms for segmentation of myocardium at risk and myocardial infarction were compared directly with the previously suggested intensity threshold methods.
Combining intensity classification by EM with a priori information, as in the new automatic algorithms, was shown to be superior to previous methods, and specifically to the previously suggested threshold methods for myocardium at risk and myocardial infarction. The added value of using a priori information and intensity correction was significant when measured by DSC, although not for bias. The previously suggested methods of infarct quantification performed worse in the new multi-center, multi-vendor patient data than in their original validation in animal studies or single-center patient studies. Thus, the results in this thesis also show the importance of using both bias and DSC for validation, and of performing validation in images of representative quality, as in multi-center, multi-vendor patient studies.
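The core of the EM-based intensity classification that the thesis builds on can be sketched as a two-class one-dimensional Gaussian mixture fitted by Expectation-Maximization. This is a generic illustration only; the thesis's algorithms additionally incorporate application-specific a priori information, which is omitted here:

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return the
    per-pixel posterior probability of the brighter class."""
    x = np.asarray(intensities, dtype=float)
    # Initialise the two class means from data percentiles (dark vs bright).
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.full(2, x.std() + 1e-9)
    w = np.array([0.5, 0.5])           # mixture weights
    for _ in range(n_iter):
        # E-step: responsibility of each class for each pixel.
        dens = np.stack([
            w[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        dens += 1e-300                 # guard against all-zero densities
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=1)
        w = nk / nk.sum()
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-9
    return resp[1]
```

Thresholding the returned posterior at 0.5 yields a hard two-class labelling of the pixel intensities.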
Automated Detection of Regions of Interest for Brain Perfusion MR Images
Images with abnormal brain anatomy are problematic for automatic segmentation techniques, and the resulting poor ROI detection affects both quantitative measurements and visual assessment of perfusion data. This paper presents a new approach for fully automated and relatively accurate ROI detection from dynamic susceptibility contrast perfusion magnetic resonance images, which makes it well suited to perfusion analysis. In the proposed approach the segmentation output is a binary mask of the perfusion ROI that has zero values for air pixels, pixels representing non-brain tissues, and cerebrospinal fluid (CSF) pixels. Producing the binary mask starts with extracting low-intensity pixels by thresholding; the optimal low-threshold value is found from the intensity of pixels at the approximate anatomical brain location. A hole-filling algorithm and a binary region-growing algorithm are then used to remove falsely detected regions and produce a region of brain tissue only. Finally, CSF pixels are extracted by thresholding high-intensity pixels within that region, and each time-point image of the perfusion sequence is used to adjust the CSF pixel locations. The segmentation results were compared with manual segmentation performed by experienced radiologists, which was considered the reference standard for evaluating the proposed approach. Across 120 images, the segmentation results showed good agreement with the reference standard, and all detected perfusion ROIs were deemed satisfactory for clinical use by two experienced radiologists. The results show that the proposed approach is suitable for perfusion ROI detection from DSC head scans, and a segmentation tool based on it can be implemented as part of any automatic brain image processing system for clinical use.
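The threshold / hole-filling / region-growing pipeline described above can be sketched roughly as follows with `scipy.ndimage`. The threshold values here are caller-supplied placeholders rather than the paper's optimal-threshold estimation, and a largest-connected-component step stands in for the binary region growing:

```python
import numpy as np
from scipy import ndimage

def perfusion_roi_mask(img, low_thr, csf_thr):
    """Rough sketch of the described pipeline: threshold away dark
    background, fill holes, keep the largest connected region as brain
    tissue, then remove high-intensity CSF pixels."""
    brain = img > low_thr                        # drop air / dark non-brain pixels
    brain = ndimage.binary_fill_holes(brain)     # close interior holes
    labels, n = ndimage.label(brain)             # connected components
    if n > 1:
        sizes = ndimage.sum(brain, labels, range(1, n + 1))
        brain = labels == (np.argmax(sizes) + 1) # keep the largest region only
    mask = brain & (img <= csf_thr)              # exclude bright CSF pixels
    return mask.astype(np.uint8)
```

In the actual method the CSF threshold would additionally be adjusted per time-point image of the perfusion sequence, which this single-image sketch omits.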
A deep learning-based neural network for obtaining regions of interest on T2*-weighted perfusion MRI images of the brain
Brain region segmentation is usually the first step in dynamic susceptibility contrast perfusion analysis. Although manual segmentation is more accurate, it is time-consuming and insufficiently reproducible. Clinicians still rely on manual segmentation, especially for cases with abnormal brain anatomy, since removing brain parts or including non-brain tissues can be a source of falsely high or falsely low perfusion parameter values. This study proposes an effective deep learning-based neural network for fully automatic segmentation of brain from non-brain tissues in T2*-weighted magnetic resonance images with abnormal brain anatomy. Our architecture combines U-Net and ResNet, with spatial and channel squeeze-and-excitation attention modules plugged into the ResNet backbone. Training, validation, and testing are conducted on 32 three-dimensional volumes of different subjects from the TCGA glioblastoma multiforme collection. Four performance metrics are used in our experiments: Dice coefficient, sensitivity, specificity, and accuracy. Quantitative results (Dice coefficient of 0.9726 +/- 0.004, sensitivity of 0.9514 +/- 0.007, specificity of 0.9983 +/- 0.001, and accuracy of 0.9864 +/- 0.003) show that the proposed neural network architecture is efficient and accurate for brain segmentation. They also demonstrate that a model trained with the proposed U-Net+ResNet architecture provides the best Dice coefficient, specificity, and accuracy values compared with current methods under the same hardware conditions and on the same test dataset of magnetic resonance images of a human head with abnormal brain anatomy. Moreover, the results indicate that the proposed U-Net+ResNet architecture could be good enough in a clinical setting to reduce the need for time-consuming and non-reproducible manual segmentation.
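The four reported metrics are standard and can be computed from a pair of binary masks as follows (a generic sketch of the definitions, not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice coefficient, sensitivity, specificity, and accuracy for a
    predicted binary mask against a reference (ground-truth) mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)      # true positives
    tn = np.sum(~pred & ~truth)    # true negatives
    fp = np.sum(pred & ~truth)     # false positives
    fn = np.sum(~pred & truth)     # false negatives
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

Note that with a large background, specificity and accuracy are easy to saturate, which is why Dice is usually the headline number for segmentation quality.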
Brain Tissues Segmentation on MR Perfusion Images Using CUSUM Filter for Boundary Pixels
A fully automated and relatively accurate method for brain tissue segmentation on T2-weighted magnetic resonance perfusion images is proposed. Segmentation with this method makes it possible to obtain a perfusion region of interest on images with abnormal brain anatomy, which is very important for perfusion analysis. In the proposed method the result is presented as a binary mask marking two regions: brain tissue pixels with unity values, and skull, extracranial soft tissue, and background pixels with zero values. The binary mask is produced from the location of the boundary between the two studied regions. Each boundary point is detected with a CUSUM filter as a change point for iteratively accumulated points while moving on a sinusoidal-like path along the boundary from one region to the other. Evaluation results for 20 clinical cases showed that the proposed segmentation method could significantly reduce the time and effort required to obtain the desired perfusion region of interest on T2-weighted magnetic resonance perfusion images with abnormal brain anatomy.
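The CUSUM change-point idea at the heart of this method can be illustrated on a one-dimensional intensity profile. This is a generic one-sided CUSUM detector, not the authors' exact sinusoidal-path boundary-tracking implementation:

```python
import numpy as np

def cusum_change_point(profile, drift=0.0, threshold=5.0):
    """Walk along a pixel-intensity profile (e.g. a path crossing from
    bright brain tissue into dark background) and flag the first point
    where the cumulative downward deviation from the running mean
    exceeds `threshold`. Returns the index of the change, or None."""
    s = 0.0
    mean = float(profile[0])
    for i in range(1, len(profile)):
        x = float(profile[i])
        s = max(0.0, s + (mean - x) - drift)  # accumulate downward shifts
        if s > threshold:
            return i                          # change point detected here
        mean += (x - mean) / (i + 1)          # update running mean
    return None
```

The `drift` parameter suppresses slow trends so that only an abrupt intensity drop, such as a tissue/background boundary, triggers detection.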
Brain segmentation on perfusion magnetic resonance images
Accurate delineation of the brain perfusion region of interest is an important step toward obtaining accurate estimates of hemodynamic perfusion parameters with dynamic susceptibility contrast perfusion magnetic resonance imaging. To improve the reproducibility and reliability of delineating the brain perfusion region of interest, various algorithms have been proposed for semi-automated or fully automated segmentation of the human brain tissue region. This study reviews the current state of the art in segmentation of the human brain tissue region on perfusion T2- and T2*-weighted MR images, and examines the algorithms most frequently used for the fully automated segmentation procedure, together with their advantages and disadvantages - …