80 research outputs found

    Fully-automated Body Composition Analysis in Routine CT Imaging Using 3D Semantic Segmentation Convolutional Neural Networks

    Body tissue composition is a long-established biomarker with high diagnostic and prognostic value in cardiovascular, oncological, and orthopaedic diseases, as well as in rehabilitation medicine and drug dosing. The aim of this study was to develop a fully automated, reproducible, and quantitative 3D volumetry of body tissue composition from standard CT examinations of the abdomen, in order to offer such valuable biomarkers as part of routine clinical imaging. To this end, an in-house dataset of 40 CTs for training and 10 CTs for testing was fully annotated on every fifth axial slice with five semantic body regions: abdominal cavity, bones, muscle, subcutaneous tissue, and thoracic cavity. Multi-resolution 3D U-Net neural networks were employed to segment these body regions, followed by subclassification of adipose tissue and muscle using known Hounsfield unit (HU) limits. The Sørensen–Dice score averaged over all semantic regions was 0.9553, and the intra-class correlation coefficients for the subclassified tissues were above 0.99. Our results show that fully automated body composition analysis on routine CT imaging can provide stable biomarkers across the whole abdomen, and not just at the L3 slice, which is historically the reference location for analysing body composition in clinical routine.
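    The subclassification step described above reduces to thresholding CT values inside a semantic region mask, and the reported overlap metric is the standard Dice score. A minimal sketch follows; the HU windows are commonly cited ranges for adipose tissue and muscle, not necessarily the exact limits used in the study.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen–Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical Hounsfield unit windows; the study's exact limits may differ.
ADIPOSE_HU = (-190, -30)
MUSCLE_HU = (-29, 150)

def subclassify(ct_hu: np.ndarray, region_mask: np.ndarray, hu_range) -> np.ndarray:
    """Restrict a semantic region mask to voxels inside an HU window."""
    lo, hi = hu_range
    return region_mask & (ct_hu >= lo) & (ct_hu <= hi)
```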

    Clinical applications of PET/MRI: current status and future perspectives

    Fully integrated positron emission tomography (PET)/magnetic resonance imaging (MRI) scanners have been available for a few years. Since then, the number of scanner installations and of published studies has been growing. While the feasibility of integrated PET/MRI has been demonstrated for many clinical and preclinical imaging applications, those applications where PET/MRI provides a clear benefit over the established reference standards now need to be identified. The current data show that applications demanding multiparametric imaging capabilities, high soft-tissue contrast, and/or a lower radiation dose benefit from this novel hybrid modality. Promising results have been obtained in whole-body cancer staging of non-small cell lung cancer and in multiparametric tumor imaging. Furthermore, integrated PET/MRI appears to add value in oncologic applications requiring high soft-tissue contrast, such as the assessment of liver metastases of neuroendocrine tumors or prostate cancer imaging. A potential benefit of integrated PET/MRI has also been demonstrated for cardiac (e.g., myocardial viability, cardiac sarcoidosis) and brain (e.g., glioma grading, Alzheimer's disease) imaging, where MRI is the predominant modality. The lower radiation dose compared with PET/computed tomography will be particularly valuable in the imaging of young patients with potentially curable diseases. However, further clinical studies and technical innovation in scanner hardware and software are needed. Agreements on adequate reimbursement of PET/MRI examinations also need to be reached. Finally, the translation of new PET tracers from preclinical evaluation into clinical applications is expected to foster the entire field of hybrid PET imaging, including PET/MRI.

    k-strip: A novel segmentation algorithm in k-space for the application of skull stripping

    Objectives: To present a novel deep learning-based skull stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich k-space. Materials and Methods: Using two datasets from different institutions with a total of 36,900 MRI slices, we trained a deep learning-based model to work directly with the complex raw k-space data. Skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain was used as the ground truth. Results: Both datasets were very similar to the ground truth (Dice scores of 92%–98% and Hausdorff distances under 5.5 mm). Results on slices above the eye region reached Dice scores of up to 99%, while accuracy dropped in the regions around and below the eyes, with partially blurred output. The output of k-strip often smoothed edges at the demarcation to the skull; binary masks were created with an appropriate threshold. Conclusion: With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
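    The premise of the method — that the raw complex k-space and the image are related by a Fourier transform, with phase information preserved only in the former — can be illustrated with a NumPy toy example. This sketch makes no claim about k-strip's actual architecture; it only shows the domain the model's inputs live in.

```python
import numpy as np

# Toy illustration of the image <-> k-space duality:
# k-space is the 2D Fourier transform of the image.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0  # stand-in for brain tissue

# complex-valued "raw data" a k-space model would consume
kspace = np.fft.fftshift(np.fft.fft2(image))

# the inverse transform recovers the image domain
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```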

    Open-Source Skull Reconstruction with MONAI

    We present a deep learning-based approach to skull reconstruction for MONAI, pre-trained on the MUG500+ skull dataset. The implementation follows the MONAI contribution guidelines, so it can easily be tried out, used, and extended by MONAI users. The primary goal of this paper is to investigate open-sourcing code and pre-trained deep learning models under the MONAI framework. Nowadays, open-sourcing software, and especially (pre-trained) deep learning models, has become increasingly important. Over the years, medical image analysis has undergone a tremendous transformation. A decade ago, algorithms had to be implemented and optimized in low-level programming languages such as C or C++ to run in reasonable time on a desktop PC, which was far less powerful than today's computers. Nowadays, users have high-level scripting languages like Python, frameworks like PyTorch and TensorFlow, and a sea of public code repositories at hand. As a result, implementations that once required thousands of lines of C or C++ code can now be scripted in a few lines and, in addition, executed in a fraction of the time. Taking this a step further, the Medical Open Network for Artificial Intelligence (MONAI) framework tailors medical imaging research to an even more convenient process, which can boost the whole field. MONAI is a freely available, community-supported, open-source, PyTorch-based framework that also enables researchers to provide contributions with pre-trained models to others. Code and pre-trained weights for skull reconstruction are publicly available at: https://github.com/Project-MONAI/research-contributions/tree/master/SkullRe

    Fully automated preoperative liver volumetry incorporating the anatomical location of the central hepatic vein

    The precise preoperative calculation of functional liver volumes is essential prior to major liver resections, as well as for the evaluation of a suitable donor for living donor liver transplantation. The aim of this study was to develop a fully automated, reproducible, and quantitative 3D volumetry of the liver from standard CT examinations of the abdomen as part of routine clinical imaging. To this end, an in-house dataset of 100 venous-phase CT examinations for training and 30 venous-phase ex-house CT examinations with a slice thickness of 5 mm for testing and validation was fully annotated with the right and left liver lobes. Multi-resolution 3D U-Net neural networks were employed to segment these liver regions. Sørensen–Dice coefficients of 0.9726 ± 0.0058, 0.9639 ± 0.0088, and 0.9223 ± 0.0187, and mean volume differences of 32.12 ± 19.40 ml, 22.68 ± 21.67 ml, and 9.44 ± 27.08 ml compared with the standard of reference (SoR), were achieved for the liver, the right lobe, and the left lobe, respectively. Our results show that fully automated 3D volumetry of the liver on routine CT imaging can provide reproducible, quantitative, fast, and accurate results without any examiner in the preoperative work-up for hepatobiliary surgery, and especially for living donor liver transplantation.
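    Given such lobe-wise segmentation masks, the volumetry itself reduces to counting voxels and scaling by the voxel size. A minimal sketch follows; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm) -> float:
    """Volume of a binary segmentation in millilitres.

    spacing_mm: per-axis voxel spacing in mm, e.g. (0.8, 0.8, 5.0)
    for a 5 mm slice thickness like the test set above.
    """
    voxel_mm3 = float(np.prod(spacing_mm))  # volume of one voxel in mm^3
    return float(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> ml
```

    With separate left- and right-lobe masks, whole-liver volume is simply the sum of the two lobe volumes.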

    Is integrated 18F-FDG PET/MRI superior to 18F-FDG PET/CT in the differentiation of incidental tracer uptake in the head and neck area?

    Purpose: We aimed to investigate the accuracy of 18F-fluorodeoxyglucose positron emission tomography/magnetic resonance imaging (18F-FDG PET/MRI) compared with contrast-enhanced 18F-FDG PET/computed tomography (PET/CT) for the characterization of incidental tracer uptake in examinations of the head and neck. Methods: A retrospective analysis of 81 oncologic patients who underwent contrast-enhanced 18F-FDG PET/CT and subsequent PET/MRI was performed by two readers for incidental tracer uptake. Discrepancies were resolved in a consensus reading. Each finding was characterized as most likely benign, most likely malignant, or indeterminate. Using all available clinical information, including results from histopathologic sampling and follow-up examinations, an expert reader classified each finding as benign or malignant. McNemar's test was used to compare the performance of both imaging modalities in characterizing incidental tracer uptake. Results: Forty-six lesions were detected by both modalities. On PET/CT, 27 lesions were classified as most likely benign, one as most likely malignant, and 18 as indeterminate; on PET/MRI, 31 lesions were classified as most likely benign, one as most likely malignant, and 14 as indeterminate. Forty-three lesions were benign and one lesion was malignant according to the reference standard. For two lesions, a definite diagnosis was not possible. McNemar's test detected no difference in the correct classification of incidental tracer uptake between PET/CT and PET/MRI (P = 0.125). Conclusion: In examinations of the head and neck area, incidental tracer uptake cannot be classified more accurately by PET/MRI than by PET/CT.
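    McNemar's test compares the two paired modalities using only the discordant lesions (correct on one modality, wrong on the other). A minimal exact (binomial) version is sketched below with hypothetical counts; the abstract does not report the underlying 2x2 table.

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from discordant counts.

    b: lesions classified correctly by one modality but not the other
    c: the reverse case
    """
    n = b + c
    if n == 0:
        return 1.0
    # two-sided binomial tail probability under H0: P(correct) = 0.5
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

    For illustration only: discordant counts of 0 and 4 would yield P = 0.125 under this exact test.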

    Prediction of left lobe hypertrophy after right lobe radioembolization of the liver using a clinical data model with external validation

    In cirrhotic patients with hepatocellular carcinoma (HCC), right-sided radioembolization (RE) with Yttrium-90-loaded microspheres is an established palliative therapy and can be considered a "curative intention" treatment when sequential tumor resection is the aim. To become a surgical candidate, hypertrophy of the left liver lobe to a future liver remnant (FLR) of > 40% is mandatory, and such hypertrophy can develop after RE. The amount of radiation-induced shrinkage of the right lobe and the compensatory hypertrophy of the left lobe are difficult for clinicians to predict. This study aimed to use machine learning to predict left-lobe liver hypertrophy in patients with HCC and cirrhosis scheduled for right-lobe RE, with external validation. The results show that machine learning can accurately predict relative and absolute volume changes of the left liver lobe after right-lobe RE. This prediction algorithm could help estimate the chances of conversion from palliative RE to curative major hepatectomy following significant FLR hypertrophy.
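    The clinical decision downstream of such a prediction is simple arithmetic: apply the predicted relative volume changes to both lobes and check the resulting FLR against the 40% threshold. A hypothetical helper (names, interface, and numbers are illustrative, not from the study):

```python
def flr_after_re(left_ml: float, right_ml: float,
                 d_left: float, d_right: float):
    """Future liver remnant after predicted relative volume changes.

    d_left:  e.g. 0.5 for +50% compensatory left-lobe hypertrophy
    d_right: e.g. -0.5 for 50% radiation-induced right-lobe shrinkage

    Returns (FLR in percent, whether it exceeds the 40% threshold).
    """
    new_left = left_ml * (1.0 + d_left)
    new_right = right_ml * (1.0 + d_right)
    flr = 100.0 * new_left / (new_left + new_right)
    return flr, flr > 40.0
```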

    Comparison of spirometry and body plethysmography as methods for detecting bronchoconstriction in nonspecific methacholine challenge testing

    Asthma is characterized by bronchial hyperresponsiveness, which is measured with the methacholine challenge test (MCT). Spirometry and body plethysmography were to be compared as methods for detecting airway obstruction in the MCT. MCTs from two studies were evaluated. The forced expiratory volume in one second (FEV1) and the specific airway resistance (sRaw) were determined. The evaluation was based on receiver operating characteristic (ROC) analysis. Body plethysmography was more sensitive than spirometry. The performance of spirometry could be improved by lowering the response threshold. Classification based on the slope of the dose-response curve proved comparable. The preference for spirometry over body plethysmography in the guidelines should be questioned. The established response thresholds should be systematically redefined, and the slope method should be discussed as an alternative.
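    The ROC analysis underlying this comparison can be reduced to the rank-based AUC estimate (equivalent to the Mann–Whitney U statistic). A minimal sketch with hypothetical score inputs:

```python
def roc_auc(pos_scores, neg_scores) -> float:
    """AUC as the probability that a positive (hyperresponsive) case
    receives a higher test score than a negative one; ties count 0.5.
    """
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

    Lowering a test's response threshold moves the operating point along this same curve, trading specificity for sensitivity, which is how the spirometry threshold adjustment above improves its detection performance.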

    Optimizing body region classification with deep convolutional activation features
