
    Convolutional neural network-based automatic heart segmentation and quantitation in 123I-metaiodobenzylguanidine SPECT imaging

    Background: Since three-dimensional segmentation of the cardiac region in 123I-metaiodobenzylguanidine (MIBG) studies has not been established, this study aimed to achieve organ segmentation using a convolutional neural network (CNN) with 123I-MIBG single photon emission computed tomography (SPECT) imaging, to calculate heart counts and washout rates (WR) automatically, and to compare with conventional quantitation based on planar imaging. Methods: We assessed 48 patients (aged 68.4 ± 11.7 years) with heart and neurological diseases, including chronic heart failure, dementia with Lewy bodies, and Parkinson's disease. All patients were assessed by early and late 123I-MIBG planar and SPECT imaging. The CNN was initially trained to individually segment the lungs and liver on early and late SPECT images. The segmentation masks were aligned, and then the CNN was trained to directly segment the heart; all models were evaluated using fourfold cross-validation. The CNN-based average heart counts and WR were calculated and compared with those determined using planar parameters. The CNN-based SPECT and conventional planar heart counts were corrected by physical time decay, injected dose of 123I-MIBG, and body weight. We also divided WR into normal and abnormal groups from linear regression lines determined by the relationship between planar WR and CNN-based WR and then analyzed agreement between them. Results: The CNN segmented the cardiac region in patients with normal and reduced uptake. The CNN-based SPECT heart counts significantly correlated with conventional planar heart counts with and without background correction and with a planar heart-to-mediastinum ratio (R2 = 0.862, 0.827, and 0.729, p < 0.0001, respectively). The CNN-based and planar WRs also correlated with and without background correction and with WR based on heart-to-mediastinum ratios (R2 = 0.584, 0.568, and 0.507, respectively; p < 0.0001). 
Contingency table findings of high and low WR (cutoffs: 34% and 30% for planar and SPECT studies, respectively) showed 87.2% agreement between the CNN-based and planar methods. Conclusions: The CNN could create segmentations from SPECT images, and average heart counts and WR were reliably calculated three-dimensionally, which might be a novel approach to quantifying SPECT images of innervation.
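The washout quantitation above can be sketched with the conventional formula, WR = (early counts − decay-corrected late counts) / early counts × 100; the half-life value and the exact correction scheme below are assumptions for illustration, not the paper's verified parameters:

```python
import math

I123_HALF_LIFE_H = 13.22  # physical half-life of 123I in hours (assumed value)

def decay_corrected(late_counts, elapsed_h, half_life_h=I123_HALF_LIFE_H):
    # Restore the counts lost to physical decay between the early and late scans.
    return late_counts * math.exp(math.log(2.0) * elapsed_h / half_life_h)

def washout_rate(early_counts, late_counts, elapsed_h):
    # WR (%) = (early - decay-corrected late) / early * 100
    late_dc = decay_corrected(late_counts, elapsed_h)
    return (early_counts - late_dc) / early_counts * 100.0
```

For example, with zero elapsed time the decay factor is 1, so early counts of 1000 and late counts of 500 give a WR of 50%.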

    AI-based detection of lung lesions in [18F]FDG PET-CT from lung cancer patients

    Background: [18F]-fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI’s usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT. Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT for suspected or known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from the manual and AI-based segmentations and analysed with Bland-Altman plots. Results: The AI tool’s performance in detecting lesions had a sensitivity of 90%. One small lesion was missed in each of two patients; both patients also had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R = 0.74). Bias was 42 g and 95% limits of agreement ranged from − 736 to 819 g. Agreement was particularly high in smaller lesions. Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. 
In a clinical setting, it could add value by sorting out negative examinations, allowing care to be prioritised and focused on patients with potentially malignant lesions.
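TLG, as computed above, is conventionally the product of the mean SUV and the metabolic tumour volume of the segmented lesion; a minimal sketch under the common assumption of 1 g/mL tissue density (so mL maps to g):

```python
def total_lesion_glycolysis(lesion_suvs, voxel_volume_ml):
    # TLG (g) = SUVmean x MTV, assuming tissue density of 1 g/mL so mL ~ g.
    if not lesion_suvs:
        return 0.0
    mtv_ml = len(lesion_suvs) * voxel_volume_ml      # metabolic tumour volume (mL)
    suv_mean = sum(lesion_suvs) / len(lesion_suvs)   # mean uptake over the lesion
    return suv_mean * mtv_ml
```

For instance, two voxels of 1.5 mL with SUVs of 2.0 and 4.0 give a TLG of 3.0 × 3.0 mL = 9 g.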

    RECOMIA - a cloud-based platform for artificial intelligence research in nuclear medicine and radiology

    Background: Artificial intelligence (AI) is about to transform medical imaging. The Research Consortium for Medical Image Analysis (RECOMIA), a not-for-profit organisation, has developed an online platform to facilitate collaboration between medical researchers and AI researchers. The aim is to minimise the time and effort researchers need to spend on technical aspects, such as transfer, display, and annotation of images, as well as legal aspects, such as de-identification. The purpose of this article is to present the RECOMIA platform and its AI-based tools for organ segmentation in computed tomography (CT), which can be used for extraction of standardised uptake values from the corresponding positron emission tomography (PET) image. Results: The RECOMIA platform includes modules for (1) local de-identification of medical images, (2) secure transfer of images to the cloud-based platform, (3) display functions available using a standard web browser, (4) tools for manual annotation of organs or pathology in the images, (5) deep learning-based tools for organ segmentation or other customised analyses, (6) tools for quantification of segmented volumes, and (7) an export function for the quantitative results. The AI-based tool for organ segmentation in CT currently handles 100 organs (77 bones and 23 soft tissue organs). The segmentation is based on two convolutional neural networks (CNNs): one network to handle organs with multiple similar instances, such as vertebrae and ribs, and one network for all other organs. The CNNs have been trained using CT studies from 339 patients. Experienced radiologists annotated organs in the CT studies. The performance of the segmentation tool, measured as the mean Dice index on a manually annotated test set of 10 representative organs, was 0.93 for all foreground voxels, and the mean Dice index over the organs was 0.86 (0.82 for the soft tissue organs and 0.90 for the bones). 
Conclusion: The paper presents a platform that provides deep learning-based tools that can perform basic organ segmentations in CT, which can then be used to automatically obtain different measurements from the corresponding PET image. The RECOMIA platform is available on request at www.recomia.org for research purposes.
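The Dice index used above to evaluate the segmentation tool measures voxel overlap between two segmentations; a minimal sketch with masks represented as sets of voxel indices:

```python
def dice_index(mask_a, mask_b):
    # Sørensen-Dice overlap between two binary masks given as sets of voxel indices:
    # 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical).
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))
```

For example, masks {1, 2, 3} and {2, 3, 4} share two voxels, giving a Dice index of 4/6 ≈ 0.67.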

    Artificial intelligence-aided CT segmentation for body composition analysis: a validation study

    Background: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial-intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images. Methods: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who had two CT studies each with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as overlap between the automated and manual segmentations. Results: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for predicted volumes in an individual subject from the corresponding single CT slice areas were on the order of ± 20%. Conclusions: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.

    Artificial intelligence could alert for focal skeleton/bone marrow uptake in Hodgkin’s lymphoma patients staged with FDG-PET/CT

    To develop an artificial intelligence (AI)-based method for the detection of focal skeleton/bone marrow uptake (BMU) in patients with Hodgkin’s lymphoma (HL) undergoing staging with FDG-PET/CT. The results of the AI in a separate test group were compared to the interpretations of independent physicians. The skeleton and bone marrow were segmented using a convolutional neural network. The training of the AI was based on 153 untreated patients. Bone uptake significantly higher than the mean BMU was marked as abnormal, and an index, based on the total squared abnormal uptake, was computed to identify the focal uptake. Patients with an index above a predefined threshold were interpreted as having focal uptake. As the test group, 48 untreated patients who had undergone a staging FDG-PET/CT between 2017 and 2018 with biopsy-proven HL were retrospectively included. Ten physicians classified the 48 cases regarding focal skeleton/BMU. The majority of the physicians agreed with the AI in 39/48 cases (81%) regarding focal skeleton/bone marrow involvement. Inter-observer agreement between the physicians was moderate, Kappa 0.51 (range 0.25–0.80). An AI-based method can be developed to highlight suspicious focal skeleton/BMU in HL patients staged with FDG-PET/CT. Inter-observer agreement regarding focal BMU is moderate among nuclear medicine physicians.
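The focal-uptake index described above is based on the total squared abnormal uptake above a mean-derived threshold; the abstract does not state the exact threshold or normalisation, so the following is a hypothetical sketch in which the `n_sd` margin is our assumption:

```python
import statistics

def focal_uptake_index(bone_suvs, n_sd=2.0):
    # Hypothetical index: sum of squared uptake in excess of a threshold set at
    # mean BMU + n_sd standard deviations. The paper's exact threshold and
    # normalisation are not given in the abstract.
    mean_bmu = statistics.mean(bone_suvs)
    sd_bmu = statistics.pstdev(bone_suvs)
    threshold = mean_bmu + n_sd * sd_bmu
    return sum((v - threshold) ** 2 for v in bone_suvs if v > threshold)
```

A skeleton with uniform uptake yields an index of zero, while a single hot focus drives the index up quadratically; patients above a predefined cutoff would then be flagged as having focal uptake.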

    Artificial intelligence based automatic quantification of epicardial adipose tissue suitable for large scale population studies

    To develop a fully automatic model capable of reliably quantifying epicardial adipose tissue (EAT) volumes and attenuation in large scale population studies to investigate their relation to markers of cardiometabolic risk. Non-contrast cardiac CT images from the SCAPIS study were used to train and test a convolutional neural network based model to quantify EAT by: segmenting the pericardium, suppressing noise-induced artifacts in the heart chambers, and, if image sets were incomplete, imputing missing EAT volumes. The model achieved a mean Dice coefficient of 0.90 when tested against expert manual segmentations on 25 image sets. Tested on 1400 image sets, the model successfully segmented 99.4% of the cases. Automatic imputation of missing EAT volumes had an error of less than 3.1% with up to 20% of the slices in image sets missing. The most important predictors of EAT volumes were weight and waist, while EAT attenuation was predicted mainly by EAT volume. A model with excellent performance, capable of fully automatic handling of the most common challenges in large scale EAT quantification has been developed. In studies of the importance of EAT in disease development, the strong co-variation with anthropometric measures needs to be carefully considered
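The imputation of missing EAT volumes mentioned above can be illustrated by a much simpler stand-in than the paper's learned model: linear interpolation of per-slice values across interior gaps.

```python
def impute_missing(slice_values):
    # Linearly interpolate interior None entries between the nearest known slices.
    # A simple stand-in for the paper's imputation of missing EAT slices; it
    # assumes the first and last slices are present.
    out = list(slice_values)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is None:
            lo = max(k for k in known if k < i)   # nearest known slice below
            hi = min(k for k in known if k > i)   # nearest known slice above
            w = (i - lo) / (hi - lo)
            out[i] = out[lo] * (1.0 - w) + out[hi] * w
    return out
```

For example, per-slice EAT areas of [1.0, None, 3.0] are completed to [1.0, 2.0, 3.0]; summing the completed slices times slice thickness then gives the imputed volume.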

    Artificial intelligence-based measurements of PET/CT imaging biomarkers are associated with disease-specific survival of high-risk prostate cancer patients

    Objective: Artificial intelligence (AI) offers new opportunities for objective quantitative measurements of imaging biomarkers from positron-emission tomography/computed tomography (PET/CT). Clinical image reporting relies predominantly on observer-dependent visual assessment and easily accessible measures like SUVmax, representing lesion uptake in a relatively small amount of tissue. Our hypothesis is that measurements of total volume and lesion uptake of the entire tumour would better reflect the disease's activity with prognostic significance, compared with conventional measurements. Methods: An AI-based algorithm was trained to automatically measure the prostate and its tumour content in PET/CT of 145 patients. The algorithm was then tested retrospectively on 285 high-risk patients, who were examined using 18F-choline PET/CT for primary staging between April 2008 and July 2015. Prostate tumour volume, tumour fraction of the prostate gland, lesion uptake of the entire tumour, and SUVmax were obtained automatically. Associations between these measurements, age, PSA, Gleason score and prostate cancer-specific survival were studied, using a Cox proportional-hazards regression model. Results: Twenty-three patients died of prostate cancer during follow-up (median survival 3.8 years). Total tumour volume of the prostate (p = 0.008), tumour fraction of the gland (p = 0.005), total lesion uptake of the prostate (p = 0.02), and age (p = 0.01) were significantly associated with disease-specific survival, whereas SUVmax (p = 0.2), PSA (p = 0.2), and Gleason score (p = 0.8) were not. Conclusion: AI-based assessments of total tumour volume and lesion uptake were significantly associated with disease-specific survival in this patient cohort, whereas SUVmax and Gleason scores were not. The AI-based approach appears well-suited for clinically relevant patient stratification and monitoring of individual therapy.

    Application of an artificial intelligence-based tool in [18F]FDG PET/CT for the assessment of bone marrow involvement in multiple myeloma

    Purpose: [18F]FDG PET/CT is an imaging modality of high performance in multiple myeloma (MM). Nevertheless, the inter-observer reproducibility in PET/CT scan interpretation may be hampered by the different patterns of bone marrow (BM) infiltration in the disease. Although many approaches have been recently developed to address the issue of standardization, none can yet be considered a standard method in the interpretation of PET/CT. We herein aim to validate a novel three-dimensional deep learning-based tool on PET/CT images for automated assessment of the intensity of BM metabolism in MM patients. Materials and methods: Whole-body [18F]FDG PET/CT scans of 35 consecutive, previously untreated MM patients were studied. All patients were investigated in the context of an open-label, multicenter, randomized, active-controlled, phase 3 trial (GMMG-HD7). Qualitative (visual) analysis classified the PET/CT scans into three groups based on the presence and number of focal [18F]FDG-avid lesions as well as the degree of diffuse [18F]FDG uptake in the BM. The proposed automated method for BM metabolism assessment is based on an initial CT-based segmentation of the skeleton, its transfer to the SUV PET images, the subsequent application of different SUV thresholds, and refinement of the resulting regions using postprocessing. In the present analysis, six different SUV thresholds (Approaches 1–6) were applied for the definition of pathological tracer uptake in the skeleton [Approach 1: liver SUVmedian × 1.1 (axial skeleton), gluteal muscles SUVmedian × 4 (extremities). Approach 2: liver SUVmedian × 1.5 (axial skeleton), gluteal muscles SUVmedian × 4 (extremities). Approach 3: liver SUVmedian × 2 (axial skeleton), gluteal muscles SUVmedian × 4 (extremities). Approach 4: ≥ 2.5. Approach 5: ≥ 2.5 (axial skeleton), ≥ 2.0 (extremities). Approach 6: SUVmax liver]. 
Using the resulting masks, subsequent calculations of the whole-body metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in each patient were performed. A correlation analysis was performed between the automated PET values and the results of the visual PET/CT analysis as well as the histopathological, cytogenetic, and clinical data of the patients. Results: BM segmentation and calculation of MTV and TLG after the application of the deep learning tool were feasible in all patients. A significant positive correlation (p < 0.05) was observed between the results of the visual analysis of the PET/CT scans for the three patient groups and the MTV and TLG values after the employment of all six [18F]FDG uptake thresholds. In addition, there were significant differences between the three patient groups with regard to their MTV and TLG values for all applied thresholds of pathological tracer uptake. Furthermore, we could demonstrate a significant, moderate, positive correlation of BM plasma cell infiltration and plasma levels of β2-microglobulin with the automated quantitative PET/CT parameters MTV and TLG after utilization of Approaches 1, 2, 4, and 5. Conclusions: The automated, volumetric, whole-body PET/CT assessment of the BM metabolic activity in MM is feasible with the herein applied method and correlates with clinically relevant parameters in the disease. This methodology offers a potentially reliable tool in the direction of optimization and standardization of PET/CT interpretation in MM. Based on the present promising findings, the deep learning-based approach will be further evaluated in future prospective studies with larger patient cohorts.
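Given a skeleton mask and one of the SUV thresholds above, MTV and TLG follow directly; a minimal sketch (the voxel-volume scaling and the 1 g/mL density assumption are ours; the chosen threshold, e.g. Approach 4's fixed SUV ≥ 2.5, is passed in):

```python
def mtv_tlg(skeleton_suvs, threshold, voxel_volume_ml):
    # MTV (mL): volume of skeleton voxels at or above the pathological threshold.
    # TLG (g): summed SUV of those voxels times voxel volume, assuming 1 g/mL.
    hot = [v for v in skeleton_suvs if v >= threshold]
    mtv = len(hot) * voxel_volume_ml
    tlg = sum(hot) * voxel_volume_ml
    return mtv, tlg
```

With three skeleton voxels of 2 mL and SUVs [1.0, 2.5, 3.5], Approach 4's threshold of 2.5 keeps two voxels, giving an MTV of 4 mL and a TLG of 12 g.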

    Deep learning-based quantification of PET/CT prostate gland uptake: association with overall survival

    Aim: To validate a deep-learning (DL) algorithm for automated quantification of prostate cancer on positron emission tomography/computed tomography (PET/CT) and explore the potential of PET/CT measurements as prognostic biomarkers. Material and methods: Training of the DL algorithm regarding prostate volume was performed on manually segmented CT images in 100 patients. Validation of the DL algorithm was carried out in 45 patients with biopsy-proven hormone-naïve prostate cancer. The automated measurements of prostate volume were compared with manual measurements made independently by two observers. PET/CT measurements of tumour burden based on volume and SUV of abnormal voxels were calculated automatically. Voxels in the co-registered 18F-choline PET images above a standardized uptake value (SUV) of 2.65, and corresponding to the prostate as defined by the automated segmentation in the CT images, were defined as abnormal. Validation of abnormal voxels was performed by manual segmentation of radiotracer uptake. Agreement between algorithm and observers regarding prostate volume was analysed by the Sørensen-Dice index (SDI). Associations between automatically based PET/CT biomarkers and age, prostate-specific antigen (PSA), Gleason score, as well as overall survival, were evaluated by a univariate Cox regression model. Results: The SDI between the automated and the manual volume segmentations was 0.78 and 0.79, respectively. Automated PET/CT measures reflecting total lesion uptake and the relation between volume of abnormal voxels and total prostate volume were significantly associated with overall survival (P = 0.02), whereas age, PSA, and Gleason score were not. Conclusion: Automated PET/CT biomarkers showed good agreement with manual measurements and were significantly associated with overall survival.

    Shape-aware multi-atlas segmentation

    Despite having no explicit shape model, multi-atlas approaches to image segmentation have proved to be a top performer for several diverse datasets and imaging modalities. In this paper, we show how one can directly incorporate shape regularization into the multi-atlas framework. Unlike traditional methods, our proposed approach does not rely on label fusion at the voxel level. Instead, each registered atlas is viewed as an estimate of the position of a shape model. We evaluate and compare our method on two public benchmarks: (i) the VISCERAL Grand Challenge on multi-organ segmentation of whole-body CT images and (ii) the Hammers brain atlas of MR images for segmenting the hippocampus and the amygdala. For this wide spectrum of both easy and hard segmentation tasks, our experimental quantitative results are on par with or better than the state of the art. More importantly, we obtain qualitatively better segmentation boundaries, for instance, preserving fine structures.
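For contrast with the shape-model view, the traditional voxel-level label fusion that the paper moves away from can be sketched as a per-voxel majority vote over the registered atlases:

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    # atlas_labels: one label list per registered atlas, all over the same voxels.
    # Traditional voxel-wise fusion: the fused label at each voxel is the most
    # common vote across atlases (the baseline the paper's shape-aware method
    # replaces with per-atlas shape-model estimates).
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(labels[v] for labels in atlas_labels)
        fused.append(votes.most_common(1)[0][0])
    return fused
```

Because each voxel is decided independently, this baseline has no notion of the segmented object's overall shape, which is precisely the limitation the shape-aware formulation addresses.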