
    RT-utils: A Minimal Python Library for RT-struct Manipulation

    To meet the need for automated and precise AI-based analysis of medical images, we present RT-utils, a specialized Python library for manipulating radiotherapy (RT) structures stored in DICOM format. RT-utils converts polygon contours into binary masks accurately and efficiently. By converting DICOM RT structures into standardized formats such as NumPy arrays and SimpleITK images, RT-utils prepares inputs for computational pipelines such as AI-based automated segmentation and radiomics analysis. Since its inception in 2020, RT-utils has been used extensively to simplify complex data-processing tasks, offering researchers a powerful tool to streamline workflows and drive advancements in medical imaging.
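    The core conversion described above, rasterizing a polygon contour into a binary mask, can be sketched with a plain even-odd ray-casting test. This is a simplified illustration of the operation, not code from the RT-utils library itself:

```python
import numpy as np

def contour_to_mask(contour, shape):
    """Rasterize a closed polygon contour (list of (x, y) vertices, x = column,
    y = row) into a boolean mask of the given (rows, cols) shape using
    even-odd ray casting: a pixel is inside if a horizontal ray from it
    crosses the polygon boundary an odd number of times."""
    mask = np.zeros(shape, dtype=bool)
    n = len(contour)
    for r in range(shape[0]):
        for c in range(shape[1]):
            inside = False
            for i in range(n):
                x1, y1 = contour[i]
                x2, y2 = contour[(i + 1) % n]
                # Only edges that straddle the row r can be crossed.
                if (y1 > r) != (y2 > r):
                    x_cross = x1 + (r - y1) * (x2 - x1) / (y2 - y1)
                    if c < x_cross:
                        inside = not inside
            mask[r, c] = inside
    return mask

# A square contour rasterized into an 8x8 grid.
square = [(1, 1), (5, 1), (5, 5), (1, 5)]
mask = contour_to_mask(square, (8, 8))
```

A production implementation would vectorize this or use an optimized rasterizer; the point here is only the contour-to-mask semantics.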

    A cascaded deep network for automated tumor detection and segmentation in clinical PET imaging of diffuse large B-cell lymphoma

    Accurate detection and segmentation of diffuse large B-cell lymphoma (DLBCL) from PET images have important implications for estimating total metabolic tumor volume, radiomics analysis, surgical intervention, and radiotherapy. Manual segmentation of tumors in whole-body PET images is time-consuming, labor-intensive, and operator-dependent. In this work, we develop and validate a fast and efficient three-step cascaded deep learning model for automated detection and segmentation of DLBCL tumors from PET images. Compared to a single end-to-end network for segmenting tumors in whole-body PET images, our three-step model is more effective (improving the 3D Dice score from 58.9% to 78.1%), since each of its specialized modules, namely the slice classifier, the tumor detector, and the tumor segmentor, can be trained independently to a high degree of skill for its specific task, rather than relying on a single network with suboptimal overall segmentation performance.
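    The three-step cascade can be sketched as a pipeline of independent stages. The stage functions below are hypothetical threshold-based stand-ins for the trained networks described in the abstract; only the control flow (classify slice, detect box, segment inside box) reflects the described design:

```python
import numpy as np

def slice_classifier(slice2d):
    # Stage 1 stand-in: flag slices likely to contain tumor.
    # (The paper uses a trained CNN; this is a toy threshold rule.)
    return slice2d.max() > 0.5

def tumor_detector(slice2d):
    # Stage 2 stand-in: coarse bounding box around the high-uptake region.
    ys, xs = np.where(slice2d > 0.5)
    return ys.min(), ys.max(), xs.min(), xs.max()

def tumor_segmentor(patch):
    # Stage 3 stand-in: voxel-level mask inside the detected box.
    return patch > 0.5

def cascade(volume):
    masks = np.zeros_like(volume, dtype=bool)
    for z, sl in enumerate(volume):
        if not slice_classifier(sl):              # stage 1: skip empty slices
            continue
        y0, y1, x0, x1 = tumor_detector(sl)       # stage 2: localize
        masks[z, y0:y1 + 1, x0:x1 + 1] = tumor_segmentor(
            sl[y0:y1 + 1, x0:x1 + 1])             # stage 3: segment
    return masks

vol = np.zeros((4, 8, 8))
vol[1, 2:5, 3:6] = 1.0  # a synthetic "lesion" on slice 1
m = cascade(vol)
```

Because each stage is a separate function, each could be trained and validated on its own, which is the property the abstract credits for the Dice improvement.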

    Semi-supervised learning towards automated segmentation of PET images with limited annotations: Application to lymphoma patients

    The time-consuming task of manual segmentation challenges routine systematic quantification of disease burden. Convolutional neural networks (CNNs) hold significant promise for reliably identifying the locations and boundaries of tumors in PET scans. We aimed to reduce the need for annotated data via semi-supervised approaches, with application to PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL). We analyzed 18F-FDG PET images of 292 patients with PMBCL (n=104) and DLBCL (n=188) (n=232 for training and validation, and n=60 for external testing). We employed fuzzy c-means (FCM) and Mumford-Shah (MS) losses for training a 3D U-Net with different levels of supervision: i) fully supervised methods with labeled FCM (LFCM) as well as unified focal and Dice loss functions; ii) unsupervised methods with robust FCM (RFCM) and MS loss functions; and iii) semi-supervised methods based on FCM (RFCM+LFCM), as well as MS loss combined with supervised Dice loss (MS+Dice). The unified loss function yielded a higher Dice score (mean +/- standard deviation (SD)) (0.73 +/- 0.03; 95% CI, 0.67-0.8) than the Dice loss (p<0.01). The semi-supervised method (RFCM+alpha*LFCM) with alpha=0.3 showed the best performance, with a Dice score of 0.69 +/- 0.03 (95% CI, 0.45-0.77), outperforming (MS+alpha*Dice) at any supervision level (any alpha) (p<0.01). The best performer among the (MS+alpha*Dice) semi-supervised approaches, with alpha=0.2, achieved a Dice score of 0.60 +/- 0.08 (95% CI, 0.44-0.76) compared to the other supervision levels of this approach (p<0.01). Semi-supervised learning via the FCM loss (RFCM+alpha*LFCM) improved performance compared to supervised approaches. Considering the time-consuming nature of expert manual delineations and intra-observer variabilities, semi-supervised approaches have significant potential for automated segmentation workflows.
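    The semi-supervised objective combines an unsupervised term, computable on every scan, with an alpha-weighted supervised term on labeled scans. A minimal NumPy sketch follows; the fcm_loss here is a simplified two-class fuzzy-clustering term with assumed intensity centroids, not the exact RFCM formulation from the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Supervised term: soft Dice loss between predicted and reference masks.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fcm_loss(pred, image, centroids=(0.0, 1.0)):
    # Unsupervised clustering term: soft foreground/background memberships
    # should place each pixel's intensity near its class centroid.
    bg, fg = centroids
    return float(np.mean(pred * (image - fg) ** 2
                         + (1.0 - pred) * (image - bg) ** 2))

def semi_supervised_loss(pred, image, target, alpha=0.3):
    # Unsupervised term on all data + alpha-weighted supervised term
    # on the labeled subset (target assumed available here).
    return fcm_loss(pred, image) + alpha * dice_loss(pred, target)

# A perfect prediction on a toy 1D "image" drives both terms to ~0.
img = np.array([0.0, 1.0, 1.0, 0.0])
pred = img.copy()
loss = semi_supervised_loss(pred, img, img, alpha=0.3)
```

The alpha knob is exactly what the abstract sweeps: alpha=0 is fully unsupervised, large alpha approaches fully supervised training.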

    Theranostic digital twins: Concept, framework and roadmap towards personalized radiopharmaceutical therapies.

    Radiopharmaceutical therapy (RPT) is a rapidly developing field of nuclear medicine, with several RPTs already well established for treating different types of cancer. However, current approaches to RPT often follow an inflexible "one size fits all" paradigm, in which patients are administered the same amount of radioactivity per cycle regardless of their individual characteristics. This approach fails to consider inter-patient variations in radiopharmacokinetics, radiation biology, and immunological factors, which can significantly impact treatment outcomes. To address this limitation, we propose the development of theranostic digital twins (TDTs) to personalize RPTs based on actual patient data. Our proposed roadmap outlines the steps needed to create and refine TDTs that can optimize radiation dose to tumors while minimizing toxicity to organs at risk. The TDT models incorporate physiologically based radiopharmacokinetic (PBRPK) models, which are additionally linked to a radiobiological optimizer and an immunological modulator, taking into account factors that influence RPT response. Using TDT models, we envisage the ability to perform virtual clinical trials, selecting therapies for improved treatment outcomes while minimizing the risks associated with secondary effects. This framework could empower practitioners to develop tailored RPT solutions for subgroups and individual patients, improving the precision, accuracy, and efficacy of treatments while minimizing risks. By incorporating TDT models into RPTs, we can pave the way for a new era of precision medicine in cancer treatment.
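    As a toy illustration of the kind of quantity such radiopharmacokinetic models track, the time-integrated activity of a mono-exponentially clearing source combines physical decay with biological washout. This is a textbook one-compartment simplification, not the paper's PBRPK model:

```python
def time_integrated_activity(a0, lam_phys, lam_bio):
    """Time-integrated activity of a source clearing mono-exponentially.

    For A(t) = A0 * exp(-lambda_eff * t) with effective decay constant
    lambda_eff = lambda_phys + lambda_bio, the integral of A(t) from 0
    to infinity is A0 / lambda_eff. This time-integrated activity is the
    quantity absorbed-dose estimates are built on.
    """
    lam_eff = lam_phys + lam_bio
    return a0 / lam_eff

# Example: 100 MBq administered, physical and biological constants 0.1 / h each.
tia = time_integrated_activity(100.0, 0.1, 0.1)
```

Inter-patient variation in lam_bio is one of the radiopharmacokinetic differences a digital twin would capture instead of assuming a population average.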

    Observer study-based evaluation of TGAN architecture used to generate oncological PET images

    The application of computer-vision algorithms in medical imaging has increased rapidly in recent years. However, algorithm training is challenging due to limited sample sizes, a lack of labeled samples, and privacy concerns regarding data sharing. To address these issues, we previously developed (Bergen et al. 2022) a synthetic PET dataset for head and neck (H&N) cancer using the temporal generative adversarial network (TGAN) architecture and evaluated its performance for segmenting lesions and identifying radiomics features in synthesized images. In this work, a two-alternative forced-choice (2AFC) observer study was performed to quantitatively evaluate the ability of human observers to distinguish between real and synthesized oncological PET images. In the study, eight trained readers, including two board-certified nuclear medicine physicians, read 170 real/synthetic image pairs presented as 2D transaxial slices using a dedicated web app. For each image pair, the observer was asked to identify the real image and report their confidence level on a 5-point Likert scale. P-values were computed using the binomial test and the Wilcoxon signed-rank test. A heat map was used to compare the response accuracy distribution for the signed-rank test. Response accuracy for all observers ranged from 36.2% [27.9-44.4] to 63.1% [54.8-71.3]. Six of the eight observers did not identify the real image with statistical significance, indicating that the synthetic dataset was reasonably representative of oncological PET images. Overall, this study adds validity to the realism of our simulated H&N cancer dataset, which may be used in the future to train AI algorithms while protecting patient confidentiality and privacy.
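    The per-observer significance test can be reproduced with an exact two-sided binomial test against 50% chance accuracy. A self-contained sketch follows; the 107-of-170 count below is an illustrative number consistent with the reported accuracy range, not taken from the study's raw data:

```python
from math import comb

def binomial_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all outcomes
    no more likely than the observed count k out of n trials.

    In a 2AFC study this asks whether an observer's accuracy differs
    from chance (p = 0.5)."""
    pmf = [comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    return sum(q for q in pmf if q <= observed + 1e-12)

# An observer who picks the real image in 107 of 170 pairs (~63% accuracy).
p_val = binomial_p_two_sided(107, 170)
```

An observer at exactly chance (85/170) yields p = 1, while accuracies far from 50% produce small p-values, matching the study's finding that only two of eight readers reached significance.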

    Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients

    Manual segmentation poses a time-consuming challenge for disease quantification, therapy evaluation, treatment planning, and outcome prediction. Convolutional neural networks (CNNs) hold promise for accurately identifying tumor locations and boundaries in PET scans. However, a major hurdle is the extensive amount of supervised, annotated data necessary for training. To overcome this limitation, this study explores semi-supervised approaches utilizing unlabeled data, specifically focusing on PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL) obtained from two centers. We considered 2-[18F]FDG PET images of 292 patients with PMBCL (n = 104) and DLBCL (n = 188) (n = 232 for training and validation, and n = 60 for external testing). We harnessed classical wisdom embedded in traditional segmentation methods, such as the fuzzy clustering loss function (FCM), to tailor the training strategy for a 3D U-Net model, incorporating both supervised and unsupervised learning approaches. Various supervision levels were explored, including fully supervised methods with labeled FCM and unified focal/Dice loss, unsupervised methods with robust FCM (RFCM) and Mumford-Shah (MS) loss, and semi-supervised methods combining MS loss with supervised Dice loss (MS + Dice) or robust FCM with labeled FCM (RFCM + LFCM). The unified loss function yielded higher Dice scores (0.73 ± 0.11; 95% CI 0.67–0.8) than the Dice loss (p < 0.01). Among the semi-supervised approaches, RFCM + αLFCM (α = 0.3) showed the best performance, with a Dice score of 0.68 ± 0.10 (95% CI 0.45–0.77), outperforming MS + αDice at any supervision level (any α) (p < 0.01). Another semi-supervised approach, MS + αDice (α = 0.2), achieved a Dice score of 0.59 ± 0.09 (95% CI 0.44–0.76), surpassing its other supervision levels (p < 0.01). Given the time-consuming nature of manual delineations and the inconsistencies they may introduce, semi-supervised approaches hold promise for automating medical imaging segmentation workflows.

    Thyroidiomics: An Automated Pipeline for Segmentation and Classification of Thyroid Pathologies from Scintigraphy Images

    The objective of this study was to develop an automated pipeline that enhances thyroid disease classification using thyroid scintigraphy images, aiming to decrease assessment time and increase diagnostic accuracy. Anterior thyroid scintigraphy images from 2,643 patients were collected and categorized into diffuse goiter (DG), multinodular goiter (MNG), and thyroiditis (TH) based on clinical reports, and then segmented by an expert. A ResUNet model was trained to perform auto-segmentation. Radiomic features were extracted from both physician (scenario 1) and ResUNet segmentations (scenario 2), followed by omitting highly correlated features using Spearman's correlation and feature selection using Recursive Feature Elimination (RFE) with XGBoost as the core. All models were trained under a leave-one-center-out cross-validation (LOCOCV) scheme, in which nine instances of the algorithms were iteratively trained and validated on data from eight centers and tested on the ninth, for both scenarios separately. Segmentation performance was assessed using the Dice similarity coefficient (DSC), while classification performance was assessed using metrics such as precision, recall, F1-score, accuracy, area under the receiver operating characteristic curve (ROC AUC), and area under the precision-recall curve (PRC AUC). ResUNet achieved DSC values of 0.84 ± 0.03, 0.71 ± 0.06, and 0.86 ± 0.02 for MNG, TH, and DG, respectively. Classification in scenario 1 achieved an accuracy of 0.76 ± 0.04 and a ROC AUC of 0.92 ± 0.02, while in scenario 2, classification yielded an accuracy of 0.74 ± 0.05 and a ROC AUC of 0.90 ± 0.02. The automated pipeline demonstrated performance comparable to physician segmentations on several classification metrics across the different classes, effectively reducing assessment time while maintaining high diagnostic accuracy. Code available at: https://github.com/ahxmeds/thyroidiomics.git
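    The LOCOCV scheme can be sketched as index splitting by center: each center is held out as the test set exactly once while the rest train the model. This is a generic sketch; the center labels below are placeholders, not the study's nine centers:

```python
def loco_splits(center_ids):
    """Leave-one-center-out cross-validation splits.

    center_ids: per-sample center label. Yields (held_out_center,
    train_indices, test_indices) with one split per unique center.
    """
    centers = sorted(set(center_ids))
    for held_out in centers:
        train = [i for i, c in enumerate(center_ids) if c != held_out]
        test = [i for i, c in enumerate(center_ids) if c == held_out]
        yield held_out, train, test

# Toy example: six samples from three hypothetical centers.
centers = ["A", "A", "B", "C", "C", "C"]
splits = list(loco_splits(centers))
```

Splitting by center rather than by patient is what makes the reported metrics an estimate of cross-center generalization.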

    The image biomarker standardization initiative: Standardized convolutional filters for reproducible radiomics and enhanced clinical insights

    Convolutional filters are commonly used to enhance specific structures and patterns in medical images, such as vessels or peritumoral regions, enabling clinical insights beyond the visible image through radiomics. However, their lack of standardization restricts the reproducibility and clinical translation of radiomics decision-support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images for 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image-processing configurations. Reference filtered images and feature values for Riesz transformations were not established. The reproducibility of the standardized convolutional filters was validated on a public multimodal imaging data set (CT, fluorodeoxyglucose PET, and T1-weighted MRI) of 51 patients with soft-tissue sarcoma. At validation, the reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of the 95% CIs of intraclass correlation coefficients. Of the 486 features, 458 were found to be reproducible across nine teams, with lower bounds of the 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
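    As a concrete instance of the simplest filter family in the standardized set, a 3×3 mean filter can be sketched in NumPy. Zero padding is just one of several boundary-handling choices such standardization must pin down; this is an illustration, not IBSI reference code:

```python
import numpy as np

def mean_filter(img, size=3):
    """Mean convolutional filter with zero padding.

    Every output pixel is the average of the size x size neighborhood
    around the corresponding input pixel. The padding mode is exactly
    the kind of implementation detail that, left unstandardized, makes
    radiomics features differ between software packages.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + size, c:c + size].mean()
    return out

# A single bright pixel is spread over its 3x3 neighborhood.
img = np.zeros((5, 5))
img[2, 2] = 9.0
filtered = mean_filter(img)
```

Two packages that agree on the kernel but disagree on padding produce different feature values near image borders, which is why the initiative fixes reference filtered images, not just kernel definitions.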

    Detection of the rotator cuff tears using a novel convolutional neural network from magnetic resonance image (MRI)

    A rotator cuff tear is a common injury among basketball players, handball players, and other athletes who use their shoulders intensively. This injury can be diagnosed precisely from a magnetic resonance (MR) image. In this paper, a novel deep learning-based framework is proposed to diagnose rotator cuff tears from MRI images of patients suspected of having the injury. First, we collected 150 shoulder MRI images, split equally between rotator cuff tear patients and healthy subjects. These images were reviewed and tagged by an orthopedic specialist, then used as input to various configurations of a convolutional neural network (CNN). At this stage, five different convolutional network configurations were examined. In the next step, the configuration with the highest accuracy was used to extract deep features and classify the two classes, rotator cuff tear and healthy. The MRI images were also fed to two lightweight pre-trained CNNs (MobileNetV2 and SqueezeNet) for comparison with the proposed CNN. Finally, evaluation was performed using 5-fold cross-validation. A dedicated graphical user interface (GUI) was also designed in the MATLAB environment for ease of use, allowing testing by detecting the image class. The proposed CNN achieved higher accuracy than the two pre-trained CNNs. The average accuracy, precision, sensitivity, and specificity achieved by the best CNN configuration were 92.67%, 91.13%, 91.75%, and 92.22%, respectively. The deep learning algorithm could accurately rule out significant rotator cuff tears based on shoulder MRI.
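    The 5-fold evaluation can be sketched as a simple index partition. This is a generic sketch with a strided assignment of the 150 images to folds; the study's actual fold assignment (and its MATLAB implementation) is not specified here:

```python
def kfold_splits(n, k=5):
    """k-fold cross-validation over n sample indices.

    Indices are partitioned into k near-equal folds; each fold serves
    as the test set once while the remaining indices train the model,
    so every sample is tested exactly once.
    """
    folds = [list(range(i, n, k)) for i in range(k)]
    for j in range(k):
        test = folds[j]
        test_set = set(test)
        train = [i for i in range(n) if i not in test_set]
        yield train, test

# 150 images, 5 folds of 30, matching the dataset size in the abstract.
splits = list(kfold_splits(150, 5))
```

Averaging the accuracy, precision, sensitivity, and specificity over the five held-out folds gives the kind of summary figures the abstract reports.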