Mid-rapidity anti-proton to proton ratio from Au+Au collisions at √s_NN = 130 GeV
We report results on the ratio of mid-rapidity anti-proton to proton yields
in Au+Au collisions at √s = 130 GeV per nucleon pair as measured by the
STAR experiment at RHIC. Within the rapidity and transverse momentum range of
|y| < 0.5 and 0.4 < p_T < 1.0 GeV/c, the ratio is essentially independent of
either transverse momentum or rapidity, with an average of 0.65 ± 0.01 (stat.) ± 0.07 (syst.) for minimum bias collisions. Within errors, no
strong centrality dependence is observed. The results indicate that at this
RHIC energy, although p-p̄ pair production becomes important at
mid-rapidity, a significant excess of baryons over anti-baryons is still
present. Comment: 5 pages, 3 figures, accepted by Phys. Rev. Lett.
Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing
Automatic liver tumor segmentation would have a major impact on liver therapy planning and follow-up assessment, thanks to standardization and incorporation of full volumetric information. In this work, we develop a fully automatic method for liver tumor segmentation in CT images based on a 2D fully convolutional neural network with an object-based postprocessing step. We describe our experiments on the LiTS challenge training data set and evaluate segmentation and detection performance. Our proposed design, cascading two models that work on the voxel and object level, reduced false positive findings by 85% compared with the raw neural network output. In comparison with human performance, our approach achieves a similar segmentation quality for detected tumors (mean Dice 0.69 vs. 0.72), but is inferior in detection performance (recall 63% vs. 92%). Finally, we describe how we participated in the LiTS challenge and achieved state-of-the-art performance.
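The object-based postprocessing step described above operates on connected tumor candidates rather than on individual voxels. The following is a minimal sketch of such object-level false-positive filtering, assuming a binary NumPy mask from the voxel-level network and a hypothetical per-object scoring function (score_fn); it illustrates the general idea only and is not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def filter_false_positives(mask, score_fn, threshold=0.5):
    """Keep only connected components that a second-stage, object-level model trusts.

    mask      : binary 3D NumPy array produced by the voxel-level network
    score_fn  : hypothetical callable mapping one object's mask to a confidence in [0, 1]
    threshold : objects scoring below this value are discarded as false positives
    """
    labels, n_objects = ndimage.label(mask)
    kept = np.zeros(mask.shape, dtype=bool)
    for obj_id in range(1, n_objects + 1):
        obj = labels == obj_id
        if score_fn(obj) >= threshold:
            kept |= obj
    return kept

# Toy usage: score objects by size as a stand-in for a learned object-level classifier.
candidate = np.zeros((32, 32, 32), dtype=bool)
candidate[4:10, 4:10, 4:10] = True   # plausible tumor candidate
candidate[20, 20, 20] = True         # isolated single-voxel false positive
cleaned = filter_false_positives(candidate, score_fn=lambda o: float(o.sum() > 10))
print(candidate.sum(), "->", cleaned.sum())  # the isolated voxel is removed
```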
The VOLCANO '09 Challenge: Preliminary Results
The VOLCANO\"09 Challenge invited participants to evaluate the change in size of pulmonary nodules in CT images; the challenge data set consisted of 50 pairs of CT scans each scan containing a single nodule. This is the first challenge for CAD methods on pulmonary nodules in which size change rather than volume estimation is the primary endpoint. Responses from 13 teams were received with size change results for a total of 17 different methods. In this paper the challenge data set is described and statistical results computed from the submissions are presented. The dataset consisted of several subgroups: (a) zero-change cases, cases with different slice thickness scans, cases with actual size change and a synthetic nodule case. No statistical difference was found between the methods; a slice thickness change was significant and there was an interesting bias observed for some zero-change nodules
Neural-network-based automatic segmentation of cerebral ultrasound images for improving image-guided neurosurgery
Segmentation of anatomical structures in intraoperative ultrasound (iUS) images during image-guided interventions is challenging. Anatomical variances and the uniqueness of each procedure impede robust automatic image analysis. In addition, ultrasound image acquisition itself, especially freehand acquisition by multiple physicians, is subject to major variability. In this paper we present a robust and fully automatic neural-network-based segmentation of central structures of the brain on B-mode ultrasound images. For our study we used iUS data sets from 18 patients, containing sweeps before, during, and after tumor resection, acquired at the University Hospital Essen, Germany. Different machine learning approaches are compared and discussed in order to achieve results of the highest quality without overfitting. We evaluate our results on the same data sets as in a previous publication, in which the segmentations were used to improve registration of iUS and preoperative MRI. Despite the smaller amount of data compared to other studies, we could efficiently train a U-net model for our purpose. Segmentations for this demanding task were performed with an average Dice coefficient of 0.88 and an average Hausdorff distance of 5.21 mm. Compared with a prior method, in which a Random Forest classifier was trained with handcrafted features, the Dice coefficient was increased by 0.14 and the Hausdorff distance was reduced by 7 mm.
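The two evaluation metrics quoted here, the Dice coefficient and the Hausdorff distance, have standard definitions. A minimal sketch of both, assuming binary NumPy masks and isotropic voxel spacing in millimetres (the study's own evaluation code may of course differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(a, b):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b, spacing_mm=1.0):
    """Symmetric Hausdorff distance (in mm) between two non-empty binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_a = distance_transform_edt(~a, sampling=spacing_mm)  # distance to nearest voxel of a
    dist_to_b = distance_transform_edt(~b, sampling=spacing_mm)  # distance to nearest voxel of b
    return max(dist_to_b[a].max(), dist_to_a[b].max())

# Toy usage with two overlapping boxes (1 mm isotropic spacing assumed).
auto = np.zeros((40, 40, 40), dtype=bool); auto[10:20, 10:20, 10:20] = True
ref  = np.zeros((40, 40, 40), dtype=bool); ref[12:22, 10:20, 10:20] = True
print(dice_coefficient(auto, ref), hausdorff_distance(auto, ref))
```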
Comparison of neuroendocrine tumor detection and characterization using DOTATOC-PET in correlation with contrast enhanced CT and delayed contrast enhanced MRI
Purpose: We evaluated the rate of successful characterization of gastroenteropancreatic neuroendocrine tumors (NETs) presenting with increased somatostatin receptor expression, comparing CE-CT with CE-MRI, each in correlation with DOTATOC-PET. Methods and materials: 8 patients with GEP-NET were imaged using CE-MRI (Gd-EOB-DTPA), CE-CT (Imeron 400), and DOTATOC-PET. Contrast enhancement of normal liver tissue and metastases was quantified with an ROI technique. Tumor delineation was assessed with a visual score in a blind-read analysis by two experienced radiologists. Results: Of 40 liver metastases in patients with NETs, all were detected by CE-MRI and the lesion extent could be adequately assessed, whereas CT failed to detect 20% of all metastases. The median blind-read score of CT was -0.65 in the arterial phase and -1.4 in the portal phase, compared with 2.7 for delayed MRI. The quantitative ROI analysis showed improved contrast-enhancement ratios, with medians of 1.2, 1.6, and 3.3 for arterial-phase CE-CT, portal-phase CE-CT, and delayed MRI, respectively. Conclusion: Late CE-MRI was superior to CE-CT in providing additional morphologic characterization and the exact lesion extent of hepatic metastases from neuroendocrine tumors detected with DOTATOC-PET. Therefore, late-enhanced Gd-EOB-DTPA MRI seems to be the adequate imaging modality to combine with DOTATOC-PET for complementary (macroscopic and molecular) tumor characterization in hepatically metastasized NETs.
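The contrast-enhancement ratio from the ROI analysis is, in essence, the ratio of mean lesion signal to mean normal-liver signal in matching regions of interest. The exact definition used in the study is not given here, so the following one-liner is an assumption for illustration only:

```python
def enhancement_ratio(mean_lesion_roi, mean_liver_roi):
    """Lesion-to-liver contrast ratio from mean ROI intensities."""
    return mean_lesion_roi / mean_liver_roi

# E.g. a lesion ROI mean of 180 and a normal-liver ROI mean of 150 give a ratio of 1.2.
print(enhancement_ratio(180.0, 150.0))
```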
On the evaluation of segmentation editing tools
Efficient segmentation editing tools are important components of the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, in particular, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings.
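One natural way to condense "segmentation quality over time" into a single number is to integrate a quality metric such as the Dice coefficient over the recorded intermediate results of an editing session and normalize by the session duration. Whether this matches the scores proposed in the paper is not stated here, so the sketch below, including the name quality_over_time_score, is purely illustrative.

```python
import numpy as np

def quality_over_time_score(timestamps_s, dice_values):
    """Time-normalized area under the quality-vs-time curve of one editing session.

    timestamps_s : times (in seconds) at which intermediate results were recorded
    dice_values  : quality (e.g. Dice coefficient) of the intermediate result at each time
    """
    t = np.asarray(timestamps_s, dtype=float)
    q = np.asarray(dice_values, dtype=float)
    area = np.sum(np.diff(t) * (q[:-1] + q[1:]) / 2.0)  # trapezoidal rule
    return area / (t[-1] - t[0])

# Quality rising from 0.60 to 0.85 over a 30-second editing session.
print(round(quality_over_time_score([0, 10, 20, 30], [0.60, 0.70, 0.80, 0.85]), 3))
```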
Automatic and efficient MRI-US segmentations for improving intraoperative image fusion in image-guided neurosurgery
Knowledge of the exact tumor location and the structures at risk in its vicinity is crucial for neurosurgical interventions. Neuronavigation systems support navigation within the patient's brain, based on preoperative MRI (preMRI). However, increasing tissue deformation during the course of tumor resection reduces the accuracy of navigation based on preMRI. Intraoperative ultrasound (iUS) is therefore used as real-time intraoperative imaging. Registration of preMRI and iUS remains a challenge due to different or varying contrasts in iUS and preMRI. Here, we present an automatic and efficient segmentation of B-mode US images to support the registration process. The falx cerebri and the tentorium cerebelli were identified as examples of central cerebral structures, and their segmentations can serve as a guiding frame for multi-modal image registration. Segmentations of the falx and tentorium were performed with an average Dice coefficient of 0.74 and an average Hausdorff distance of 12.2 mm. The subsequent registration incorporates these segmentations and increases the accuracy, robustness, and speed of the overall registration process compared to purely intensity-based registration. For validation, an expert manually located corresponding landmarks. Our approach reduces the initial mean Target Registration Error (TRE) from 16.9 mm to 3.8 mm using our intensity-based registration, and to 2.2 mm with our combined segmentation and registration approach. The intensity-based registration reduced the maximum initial TRE from 19.4 mm to 5.6 mm; with the approach incorporating segmentations, it is reduced to 3.0 mm. Mean volumetric intensity-based registration of preMRI and iUS took 40.5 s; including segmentations, it took 12.0 s.
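The mean and maximum TRE figures above are computed from expert-placed corresponding landmarks. A minimal sketch of that computation, assuming both landmark sets are given in millimetres in a common coordinate frame after the registration has been applied (illustration only, not the validation code used in the study):

```python
import numpy as np

def target_registration_error(landmarks_reference_mm, landmarks_registered_mm):
    """Per-landmark Euclidean errors plus their mean and maximum, all in mm."""
    ref = np.asarray(landmarks_reference_mm, dtype=float)
    reg = np.asarray(landmarks_registered_mm, dtype=float)
    errors = np.linalg.norm(ref - reg, axis=1)
    return errors, errors.mean(), errors.max()

# Toy usage with three corresponding landmark pairs (coordinates in mm).
reference  = [[10.0, 20.0, 30.0], [40.0, 50.0, 60.0], [70.0, 80.0, 90.0]]
registered = [[11.0, 20.5, 30.0], [40.0, 52.0, 61.0], [70.0, 80.0, 93.0]]
_, mean_tre, max_tre = target_registration_error(reference, registered)
print(round(mean_tre, 2), round(max_tre, 2))
```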
Algorithm variability in the estimation of lung nodule volume from phantom CT scans: Results of the QIBA 3A public challenge
Rationale and Objectives: Quantifying changes in lung tumor volume is important for diagnosis, therapy planning, and evaluation of response to therapy. The aim of this study was to assess the performance of multiple algorithms on a reference data set. The study was organized by the Quantitative Imaging Biomarker Alliance (QIBA). Materials and Methods: The study was organized as a public challenge. Computed tomography scans of synthetic lung tumors in an anthropomorphic phantom were acquired by the Food and Drug Administration. Tumors varied in size, shape, and radiodensity. Participants applied their own semi-automated volume estimation algorithms that either did not allow or allowed post-segmentation correction (type 1 or 2, respectively). Statistical analysis of accuracy (percent bias) and precision (repeatability and reproducibility) was conducted across algorithms, as well as across nodule characteristics, slice thickness, and algorithm type. Results: Eighty-four percent of volume measurements of QIBA-compliant tumors were within 15% of the true volume, ranging from 66% to 93% across algorithms, compared to 61% of volume measurements for all tumors (ranging from 37% to 84%). Algorithm type did not affect bias substantially; however, it was an important factor in measurement precision. Algorithm precision was notably better as tumor size increased, worse for irregularly shaped tumors, and on average better for type 1 algorithms. Over all nodules meeting the QIBA Profile, precision, as measured by the repeatability coefficient, was 9.0%, compared to 18.4% overall. Conclusion: The results achieved in this study, using a heterogeneous set of measurement algorithms, support QIBA quantitative performance claims in terms of volume measurement repeatability for nodules meeting the QIBA Profile criteria.
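Two of the summary statistics used here, percent bias and the repeatability coefficient, have standard definitions in quantitative imaging. The sketch below uses common conventions (signed mean error relative to the reference volume, and RC expressed as 2.77 times the within-subject coefficient of variation); the challenge's exact statistical model may differ, so this is an assumption for illustration.

```python
import numpy as np

def percent_bias(measured_volumes_mm3, true_volume_mm3):
    """Mean signed error of the measurements relative to the reference volume, in percent."""
    m = np.asarray(measured_volumes_mm3, dtype=float)
    return 100.0 * (m.mean() - true_volume_mm3) / true_volume_mm3

def repeatability_coefficient_percent(replicate_volumes_mm3):
    """RC% = 2.77 x within-subject coefficient of variation, for repeat measurements of one nodule."""
    r = np.asarray(replicate_volumes_mm3, dtype=float)
    within_subject_cv = r.std(ddof=1) / r.mean()
    return 100.0 * 2.77 * within_subject_cv

# Toy usage: three replicate measurements of a phantom nodule with a known volume of 1000 mm^3.
replicates = [980.0, 1010.0, 1005.0]
print(round(percent_bias(replicates, 1000.0), 2), round(repeatability_coefficient_percent(replicates), 2))
```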