
    Automated lesion detection of breast cancer in [18F] FDG PET/CT using a novel AI-Based workflow

    UNLABELLED: Applications based on artificial intelligence (AI) and deep learning (DL) are rapidly being developed to assist in the detection and characterization of lesions on medical images. In this study, we developed and examined an image-processing workflow that combines traditional image processing with AI technology and uses a standards-based approach for disease identification and quantitation to segment and classify tissue within a whole-body [18F]FDG PET/CT study. METHODS: One hundred thirty baseline PET/CT studies from two multi-institutional preoperative clinical trials in early-stage breast cancer were semi-automatically segmented using techniques based on PERCIST v1.0 thresholds, and the individual segmentations were classified as to tissue type by an experienced nuclear medicine physician. These classifications were then used to train a convolutional neural network (CNN) to accomplish the same tasks automatically. RESULTS: Our CNN-based workflow demonstrated a sensitivity for detecting disease (either primary lesion or lymphadenopathy) of 0.96 (95% CI [0.90, 1.00]; 99% CI [0.87, 1.00]), a specificity of 1.00 (95% CI [1.00, 1.00]; 99% CI [1.00, 1.00]), a Dice score of 0.94 (95% CI [0.89, 0.99]; 99% CI [0.86, 1.00]), and a Jaccard score of 0.89 (95% CI [0.80, 0.98]; 99% CI [0.74, 1.00]). CONCLUSION: This pilot work has demonstrated the ability of an AI-based workflow using DL-CNNs to specifically identify breast cancer tissue as determined by …
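    For reference, the Dice and Jaccard overlap scores reported above can be computed from the sets of voxel indices in the predicted and reference segmentations. A minimal illustrative sketch; the function names and the set-based mask representation are assumptions, not the study's implementation:

    ```python
    def dice(pred: set, ref: set) -> float:
        """Dice coefficient: 2|A∩B| / (|A| + |B|); defined as 1.0 for two empty masks."""
        if not pred and not ref:
            return 1.0
        return 2 * len(pred & ref) / (len(pred) + len(ref))

    def jaccard(pred: set, ref: set) -> float:
        """Jaccard index: |A∩B| / |A∪B|; defined as 1.0 for two empty masks."""
        if not pred and not ref:
            return 1.0
        return len(pred & ref) / len(pred | ref)
    ```

    The two metrics are monotonically related (J = D / (2 − D)), which is consistent with the reported pair of scores: a Dice score of 0.94 corresponds to a Jaccard score of about 0.89.
    
    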

    Visual and Semiquantitative Accuracy in Clinical Baseline 123I-Ioflupane SPECT/CT Imaging

    PURPOSE: We aimed to (a) elucidate the concordance of visual assessment of an initial 123I-ioflupane scan by a human interpreter with the results of a fully automatic semiquantitative method and (b) assess the accuracy of both compared to the follow-up (f/u) diagnosis established by movement disorder specialists. METHODS: An initial 123I-ioflupane scan was performed in 382 patients with clinically uncertain parkinsonian syndrome. An experienced reader performed a visual evaluation of all scans independently. The findings of the visual read were compared with the semiquantitative evaluation. In addition, the available f/u clinical diagnosis (serving as a reference standard) was compared with the results of the human read and of the software. RESULTS: When comparing the semiquantitative method with the visual assessment, discordance was found in 25 of the 382 cases (6.5%) for the experienced reader (κ = 0.868). The human observer indicated region-of-interest misalignment as the main reason for discordance. With the neurology f/u serving as reference, the results of the reader showed a slightly higher accuracy rate (87.7%, κ = 0.75) than semiquantification (86.2%, κ = 0.719; P < 0.001). No significant difference in the diagnostic performance of the visual read versus software-based assessment was found. CONCLUSIONS: In comparison with a fully automatic semiquantitative method of 123I-ioflupane interpretation, human assessment achieved an almost perfect agreement rate. However, with the clinically established diagnosis serving as a reference, the visual read appeared slightly more accurate than a purely software-based quantitative assessment.
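    The agreement statistics (κ) above are Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch over a square contingency table of paired ratings; this is illustrative only, not the study's software:

    ```python
    def cohens_kappa(table):
        """Cohen's kappa for a square contingency table.

        table[i][j] counts cases rated category i by one method
        and category j by the other.
        """
        n = sum(sum(row) for row in table)
        k = len(table)
        # Observed agreement: fraction of cases on the diagonal.
        p_observed = sum(table[i][i] for i in range(k)) / n
        # Chance agreement: product of the two methods' marginal rates.
        p_chance = sum(
            (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
            for i in range(k)
        )
        return (p_observed - p_chance) / (1 - p_chance)
    ```

    On the conventional Landis–Koch scale, the reported κ = 0.868 between reader and software falls in the "almost perfect" band (κ > 0.80), matching the conclusion's wording.
    
    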

    The theranostic promise for Neuroendocrine Tumors in the late 2010s: where do we stand, where do we go?

    More than 25 years after the first peptide receptor radionuclide therapy (PRRT), the concept of somatostatin receptor (SSTR)-directed imaging and therapy for neuroendocrine tumors (NET) is seeing rapidly increasing use. To realize the full potential of its theranostic promise, efforts in recent years have expanded recommendations in current guidelines and included the evaluation of novel theranostic radiotracers for imaging and treatment of NET. Moreover, the introduction of standardized reporting frameworks may harmonize PET reading, address pitfalls in interpreting SSTR-PET/CT scans, and guide the treating physician in selecting PRRT candidates. Notably, the concept of PRRT has also been applied beyond oncology, e.g., for the treatment of inflammatory conditions such as sarcoidosis. Future perspectives may include the efficacy evaluation of PRRT compared to other common treatment options for NET, novel strategies for closer monitoring of potential side effects, the introduction of novel radiotracers with beneficial pharmacodynamic and kinetic properties, and the use of supervised machine learning approaches for outcome prediction. This article reviews how the SSTR-directed theranostic concept is currently applied and also reflects on recent developments that hold promise for the future of theranostics in this context.

    Image acquisition and interpretation of 18F-DCFPyL (piflufolastat F 18) PET/CT: How we do it

    Prostate-specific membrane antigen (PSMA)-targeted positron emission tomography (PET) is rapidly becoming widely accepted as the standard of care for imaging of men with prostate cancer. Labeled indications for regulatory-approved agents include primary staging and recurrent disease in men at risk of metastases. The first commercial PSMA PET agent to become available was 18F-DCFPyL (piflufolastat F 18), a radiofluorinated small molecule with high affinity for PSMA. The regulatory approval of 18F-DCFPyL hinged upon two key, multi-center, registration trials: OSPREY (patient population: high-risk primary staging) and CONDOR (patient population: biochemical recurrence). In this manuscript, we will (1) review key findings from the OSPREY and CONDOR trials, (2) discuss the clinical acquisition protocol we use for 18F-DCFPyL PET scanning, (3) present important pearls and pitfalls, (4) provide an overview of the PSMA reporting and data system (PSMA-RADS) interpretive framework, and (5) posit important future directions for research in PSMA PET. Our overall goal is to provide a brief introduction for practices and academic groups that are adopting 18F-DCFPyL PET scans for use in their patients with prostate cancer.

    Piflufolastat F-18 (18F-DCFPyL) for PSMA PET imaging in prostate cancer

    Introduction: Accurate imaging is essential for staging prostate cancer and guiding management decisions. Conventional imaging modalities are hampered by limited sensitivity for metastatic disease. Nearly all prostate cancers express prostate-specific membrane antigen (PSMA), and 18F-DCFPyL (piflufolastat F 18) is a new FDA-approved positron emission tomography (PET) agent that targets PSMA for improved staging of prostate cancer. Areas covered: This article provides an overview of PSMA and the mechanism of action of 18F-DCFPyL, and compares the performance of 18F-DCFPyL to conventional prostate imaging modalities. Current prostate cancer imaging guidelines are reviewed, as well as what changes can be expected in the future with increased access to PSMA-PET. Expert opinion: The OSPREY and CONDOR clinical trials have demonstrated the superiority of 18F-DCFPyL over conventional imaging modalities for the staging and restaging of prostate cancer. The remarkable diagnostic accuracy of PSMA-PET is reshaping prostate cancer imaging, and the modularity of these agents hints at exciting new diagnostic and therapeutic opportunities with the potential to improve the care of patients with prostate cancer as a whole.

    Generative Adversarial Networks for the Creation of Realistic Artificial Brain Magnetic Resonance Images

    Even as medical data sets become more publicly accessible, most are restricted to specific medical conditions. Thus, data collection for machine learning approaches remains challenging, and synthetic data augmentation, such as with generative adversarial networks (GAN), may overcome this hurdle. In the present quality control study, deep convolutional GAN (DCGAN)-based human brain magnetic resonance (MR) images were validated by blinded radiologists. In total, 96 T1-weighted brain images from 30 healthy individuals and 33 patients with cerebrovascular accident were included. A training data set was generated from the T1-weighted images, and DCGAN was applied to generate additional artificial brain images. Five radiologists (2 neuroradiologists [NRs] vs 3 non-neuroradiologists [NNRs]) evaluated each image in a binary fashion, labeling it as real (acquired) or DCGAN-created. Images were selected randomly from the data set (proportion of created images, 40%-60%). None of the investigated images was rated as unknown. Of the created images, the NRs rated 45% and 71% as real MR images (NNRs: 24%, 40%, and 44%). In contradistinction, 44% and 70% of the real images were rated as generated images by the NRs (NNRs: 10%, 17%, and 27%). The accuracy of the NRs was 0.55 and 0.30 (NNRs: 0.83, 0.72, and 0.64). DCGAN-created brain MR images are similar enough to acquired MR images so as to be indistinguishable in some cases. Such an artificial intelligence algorithm may contribute to synthetic data augmentation for "data-hungry" technologies, such as supervised machine learning approaches, in various clinical applications.
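    The per-reader accuracies above are simply the fraction of images whose real-versus-generated label each blinded rater called correctly. A trivial sketch of that computation; the names and list-based representation are illustrative, not taken from the study:

    ```python
    def rater_accuracy(is_real, calls):
        """Fraction of images a rater labeled correctly.

        is_real[i]: True if image i is an acquired (real) MR image.
        calls[i]:   the rater's verdict for image i (True = "real").
        """
        correct = sum(t == c for t, c in zip(is_real, calls))
        return correct / len(is_real)
    ```

    An accuracy near 0.5 on a roughly balanced set, as for the two NRs here (0.55 and 0.30), indicates performance at or below chance, i.e. the generated images were effectively indistinguishable from acquisitions for those readers.
    
    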