    MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method over the others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi-)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and are used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked on their overall performance in segmenting GM, WM, and CSF, assessed with three evaluation metrics (Dice, H95, and AVD), and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and of three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best-performing method for the segmentation goal at hand.

    This study was financially supported by IMDI Grant 104002002 (Brainbox) from ZonMw, the Netherlands Organisation for Health Research and Development, with in-kind sponsoring by Philips, the University Medical Center Utrecht, and Eindhoven University of Technology. The authors would like to acknowledge the following members of the Utrecht Vascular Cognitive Impairment Study Group who were not included as coauthors of this paper but were involved in the recruitment of study participants and MRI acquisition at the UMC Utrecht (in alphabetical order by department): E. van den Berg, M. Brundel, S. Heringa, and L. J. Kappelle of the Department of Neurology; P. R. Luijten and W. P. Th. M. Mali of the Department of Radiology; and A. Algra and G. E. H. M. Rutten of the Julius Center for Health Sciences and Primary Care. The research of Geert Jan Biessels and the VCI group was financially supported by VIDI Grant 91711384 from ZonMw and by Grant 2010T073 of the Netherlands Heart Foundation. The research of Jeroen de Bresser is financially supported by a research talent fellowship of the University Medical Center Utrecht (Netherlands). The research of Annegreet van Opbroek and Marleen de Bruijne is financially supported by a research grant from NWO (the Netherlands Organisation for Scientific Research). The authors would like to acknowledge MeVis Medical Solutions AG (Bremen, Germany) for providing MeVisLab. Duygu Sarikaya and Liang Zhao acknowledge their advisor, Professor Jason Corso, for his guidance. Duygu Sarikaya is supported by NIH 1 R21CA160825-01 and Liang Zhao is partially supported by the China Scholarship Council (CSC).
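    The three ranking metrics are standard segmentation measures: Dice overlap, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD). A minimal sketch of how they might be computed from binary masks with NumPy/SciPy (not the challenge's reference implementation; the surface-extraction details are choices of this sketch):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing):
    """95th-percentile symmetric Hausdorff distance (mm) between mask surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    sa = a & ~binary_erosion(a)  # surface voxels: mask minus its erosion
    sb = b & ~binary_erosion(b)
    # EDT of the complement gives, at every voxel, the distance to that surface.
    da = distance_transform_edt(~sb, sampling=spacing)[sa]  # a-surface -> b-surface
    db = distance_transform_edt(~sa, sampling=spacing)[sb]  # b-surface -> a-surface
    return np.percentile(np.hstack([da, db]), 95)

def avd(seg, ref):
    """Absolute volume difference as a percentage of the reference volume."""
    return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
```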

    Making the PACS workstation a browser of image processing software : a feasibility study using inter-process communication techniques

    PURPOSE: To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. METHODS: In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file, while the image-processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and the stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image-processing software developer does not need detailed information about the other system, yet can still achieve seamless integration between the two, and the IPC procedure is entirely transparent to the end user. RESULTS: A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image-processing networks. Image data transfer using shared memory added <10 ms of processing time, while the other IPC methods took 1-5 s in our experiments. CONCLUSION: Browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image-processing software developers to cooperate while focusing on their respective interests.

    The original publication is available at www.springerlink.com: Chunliang Wang, Felix Ritter and Orjan Smedby, Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques, International Journal of Computer Assisted Radiology and Surgery 5(4), 411-419, 2010. http://dx.doi.org/10.1007/s11548-010-0417-8. Copyright: Springer Science+Business Media.
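    The reported timing gap (shared memory <10 ms versus seconds for the other IPC channels) follows from avoiding serialization and socket copies of the pixel data. A minimal Python analogue of the shared-memory path (the paper's OsiriX/MeVisLab integration is native code; the segment name and sizes here are illustrative):

```python
import numpy as np
from multiprocessing import shared_memory

# "Server" side (image-processing software): publish a slice into shared memory.
img = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)  # dummy CT slice
shm = shared_memory.SharedMemory(create=True, size=img.nbytes, name="pacs_slice")
np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)[:] = img  # one memcpy, no serialization

# "Client" side (PACS workstation): attach by name and read the pixels in place,
# instead of streaming them through a socket or pipe.
view = shared_memory.SharedMemory(name="pacs_slice")
slice_view = np.ndarray((512, 512), dtype=np.uint16, buffer=view.buf)
print(slice_view.mean())

view.close()
shm.close()
shm.unlink()
```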

    Assessment of image quality in abdominal computed tomography: Effect of model-based iterative reconstruction, multi-planar reconstruction and slice thickness on potential dose reduction

    Purpose: To determine the effect of tube load, model-based iterative reconstruction (MBIR) strength and slice thickness in abdominal CT using visual comparison of multi-planar reconstruction images. Method: Five image criteria were assessed independently by four radiologists on two data sets acquired at 42 and 98 mAs tube loads for 25 patients examined on a 192-slice dual-source CT scanner. The effects of tube load, MBIR strength and slice thickness, as well as the potential dose reduction, were estimated with Visual Grading Regression (VGR). Objective image quality was determined by measuring noise (SD), contrast-to-noise ratio (CNR) and noise-power spectra (NPS). Results: Comparing the 42 and 98 mAs tube loads, improved image quality was observed as a strong effect of log tube load regardless of MBIR strength (p < 0.001). Comparing strength 5 to 3, better image quality was obtained for two criteria (p < 0.01), but inferior quality for liver parenchyma and overall image quality. Image quality was significantly better for slice thicknesses of 2 mm and 3 mm compared to 1 mm, with potential dose reductions between 24% and 41%. As expected, with decreasing slice thickness and algorithm strength, the noise power and SD (HU values) increased, while the CNR decreased. Conclusion: Increasing slice thickness from 1 mm to 2 mm or 3 mm allows for a possible dose reduction. MBIR strength 5 shows improved image quality for three out of five criteria at 1 mm slice thickness. Increasing MBIR strength from 3 to 5 has diverse effects on image quality. Our findings do not support a general recommendation to replace strength 3 with strength 5 in clinical abdominal CT protocols; however, strength 5 may be used in task-based protocols.

    Funding: ALF and LFoU grants from Region Östergötland; Medical Faculty at Linköping University.
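    For context, the objective measures are conventional region-of-interest statistics. A minimal sketch, assuming binary ROI masks over an HU image (the study's exact ROI placement and CNR convention may differ; several definitions are in common use):

```python
import numpy as np

def roi_stats(image, mask):
    """Mean HU and standard deviation (noise) inside a binary ROI."""
    vals = image[mask]
    return vals.mean(), vals.std()

def cnr(image, roi_object, roi_background):
    """Contrast-to-noise ratio between two ROIs; here the noise term is the
    SD of the background ROI, one of several common conventions."""
    mean_obj, _ = roi_stats(image, roi_object)
    mean_bg, sd_bg = roi_stats(image, roi_background)
    return abs(mean_obj - mean_bg) / sd_bg
```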

    Photon-counting detector CT and energy-integrating detector CT for trabecular bone microstructure analysis of cubic specimens from human radius

    Background: As bone microstructure is known to impact bone strength, the aim of this in vitro study was to evaluate whether the emerging photon-counting detector computed tomography (PCD-CT) technique can be used for measurements of trabecular bone structures such as thickness, separation, nodes, spacing and bone volume fraction. Methods: Fourteen cubic sections of human radius were scanned with two multislice CT devices, one PCD-CT and one energy-integrating detector CT (EID-CT), using micro-CT as the reference standard. The protocols for PCD-CT and EID-CT were those recommended for inner- and middle-ear structures, although at higher mAs values: PCD-CT at 450 mAs and EID-CT at 600 mAs (dose-equivalent to the PCD-CT) and 1000 mAs. Average measurements of the five bone parameters, as well as dispersion measurements of thickness, separation and spacing, were calculated using a three-dimensional automated region growing (ARG) algorithm. Spearman correlations with micro-CT were computed. Results: Correlations with micro-CT, for PCD-CT and EID-CT, ranged from 0.64 to 0.98 for all parameters except dispersion of thickness, which did not show a significant correlation (p = 0.078 to 0.892). PCD-CT had seven of the eight parameters with correlations rho > 0.7 and three with rho > 0.9. The dose-equivalent EID-CT instead had four parameters with correlations rho > 0.7 and only one with rho > 0.9. Conclusions: In this in vitro study of radius specimens, strong correlations were found between trabecular bone structure parameters computed from PCD-CT data and micro-CT. This suggests that PCD-CT might be useful for analysing bone microstructure in the peripheral human skeleton.

    Funding: ALF, Region Östergötland [RO-936170]; Royal Institute of Technology.
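    The agreement analysis is a rank correlation of each CT-derived parameter against its micro-CT reference. A minimal sketch with synthetic stand-in data (the real values come from the ARG algorithm on the 14 specimens):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
microct = rng.uniform(0.05, 0.35, size=14)      # reference bone volume fraction per cube
pcdct = microct + rng.normal(0, 0.03, size=14)  # noisy PCD-CT estimate of the same cubes

# Spearman's rho compares ranks, so it tolerates a nonlinear but monotonic
# relation between the scanner estimate and the reference.
rho, p = spearmanr(pcdct, microct)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```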

    Validation of automated post-adjustments of HDR prostate brachytherapy treatment plans by quantitative measures and oncologist observer study

    PURPOSE: The aim was to evaluate a post-processing optimization algorithm's ability to improve the spatial properties of a clinical treatment plan while preserving the target coverage and the dose to the organs at risk. The goal was to obtain a more homogeneous treatment plan, minimizing the need for manual adjustments after inverse treatment planning. MATERIALS AND METHODS: The study included 25 previously treated prostate cancer patients. The treatment plans were evaluated on dose-volume histogram parameters and established clinical and quantitative measures of the high-dose volumes. The volumes of the four largest hot spots were compared, complemented with a human observer study with visual grading by eight oncologists. Statistical analysis was done using ordinal logistic regression. Weighted kappa and Fleiss' kappa were used to evaluate intra- and interobserver reliability. RESULTS: The quantitative analysis showed no change in planning target volume (PTV) coverage or dose to the rectum. There were significant improvements for the adjusted treatment plan in V150% and V200% for the PTV, dose to the urethra, conformal index, and dose nonhomogeneity ratio. The three largest hot spots of the adjusted treatment plan were significantly smaller than those of the clinical treatment plan. The observers preferred the adjusted treatment plan in 132 cases and the clinical plan in 83 cases. The observers preferred the adjusted treatment plan on homogeneity and organs at risk but the clinical plan on PTV coverage. CONCLUSIONS: Quantitative analysis showed that the post-adjustment optimization tool can improve the spatial properties of the treatment plans while maintaining the target coverage.

    (c) 2022 The Authors. Published by Elsevier Inc. on behalf of the American Brachytherapy Society. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
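    V150% and V200% are dose-volume histogram quantities: the volume receiving at least 150% or 200% of the prescription dose. A minimal sketch of reading such a value off per-voxel dose (the prescription, dose values and voxel size below are invented for illustration; protocols also differ on reporting absolute versus relative volume):

```python
import numpy as np

def v_x(dose, prescription, x, voxel_volume_cc):
    """V_x%: absolute volume (cc) receiving at least x% of the prescription dose.
    `dose` holds the per-voxel dose inside the structure of interest."""
    return np.count_nonzero(dose >= prescription * x / 100.0) * voxel_volume_cc

# Toy example: per-voxel PTV dose around a hypothetical 8.5 Gy prescription.
rng = np.random.default_rng(1)
ptv_dose = rng.normal(9.5, 1.5, size=20000)
print(v_x(ptv_dose, 8.5, 150, 0.001), "cc at >= 150% of prescription")
```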

    Standardized Evaluation System for Left Ventricular Segmentation Algorithms in 3D Echocardiography

    Real-time 3D echocardiography (RT3DE) has been proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison between such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms developed to segment the LV border in RT3DE. A database consisting of 45 multivendor cardiac ultrasound recordings, acquired at different centers with corresponding reference measurements from three experts, is made available. The algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results showed that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions.
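    The clinical indices typically extracted from LV segmentations are the end-diastolic and end-systolic volumes and the ejection fraction derived from them; a minimal sketch of that last step (the platform's exact index definitions may differ):

```python
def ejection_fraction(edv_ml, esv_ml):
    """LV ejection fraction (%) from end-diastolic and end-systolic volumes,
    each obtained by integrating the segmented cavity over its voxels."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(ejection_fraction(120.0, 50.0))  # ~58.3% for these illustrative volumes
```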

    A Multi-Organ Nucleus Segmentation Challenge

    Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset of 14 images from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on the average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation over mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask R-CNN were popular, typically built on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
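    The AJI generalizes the Jaccard index to instance segmentation by aggregating intersections and unions over matched ground-truth/prediction pairs and penalizing unmatched predictions. A simplified sketch over instance label maps (the challenge's reference implementation handles matching order and ties more carefully):

```python
import numpy as np

def aji(gt, pred):
    """Aggregated Jaccard Index for instance label maps
    (0 = background, 1..N = nucleus instances)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    used, inter_sum, union_sum = set(), 0, 0
    for i in gt_ids:
        g = gt == i
        # Best-matching predicted instance for this nucleus, by Jaccard index.
        best_j, best_iou, best_i, best_u = None, 0.0, 0, g.sum()
        for j in pred_ids:
            p = pred == j
            inter = np.logical_and(g, p).sum()
            if inter == 0:
                continue
            union = np.logical_or(g, p).sum()
            if inter / union > best_iou:
                best_j, best_iou, best_i, best_u = j, inter / union, inter, union
        inter_sum += best_i   # missed nuclei contribute 0 here
        union_sum += best_u   # ... but their full area here
        if best_j is not None:
            used.add(best_j)
    # False-positive predictions are penalized in the denominator.
    for j in pred_ids:
        if j not in used:
            union_sum += (pred == j).sum()
    return inter_sum / union_sum
```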

    Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, and active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect and, in some cases, inoperable. The amount of resected tumor is also a factor considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on (i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, (ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and (iii) predicting overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse in each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
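    BraTS segmentation performance is typically reported on three nested sub-regions derived from the multi-class label map. A minimal sketch, assuming the conventional BraTS label encoding (1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor; encodings have varied across challenge years):

```python
import numpy as np

# Nested evaluation regions built from the raw labels.
REGIONS = {
    "whole tumor":     {1, 2, 4},
    "tumor core":      {1, 4},
    "enhancing tumor": {4},
}

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def region_dice(pred_labels, gt_labels):
    """Per-region Dice from predicted and reference multi-class label maps."""
    return {name: dice(np.isin(pred_labels, list(lbls)),
                       np.isin(gt_labels, list(lbls)))
            for name, lbls in REGIONS.items()}
```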