
    Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deploying a large suite of such algorithms on multiple platforms requires consistent user interface controls, consistent results across platforms, and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross-platform interoperability. All of the functionality is encapsulated in a single software object, requiring no separate source code for user interfaces, testing, or deployment. This formulation makes our framework ideal for developing novel, stable, and easy-to-use algorithms for medical image analysis and computer-assisted interventions. The framework has been deployed at Yale and released for public use in the open-source, multi-platform image analysis software BioImage Suite (bioimagesuite.org).
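
    As a rough illustration of the single-object design described above, the sketch below shows one way an algorithm class could declare its parameters once and derive both a command-line interface and a nightly self-test from that single declaration. This is a minimal sketch in Python; the names SmoothImageAlgorithm, make_cli, and self_test are hypothetical and are not BioImage Suite's actual API.

        import argparse

        class SmoothImageAlgorithm:
            """One object carries the parameter description, execution, and a self-test."""
            # Each entry: (name, type, default, help text) -- declared once,
            # reused for the CLI, a GUI form, and regression tests alike.
            parameters = [
                ("sigma", float, 2.0, "Gaussian smoothing kernel width"),
                ("iterations", int, 1, "number of smoothing passes"),
            ]

            def run(self, **kwargs):
                # Real image processing would go here; this stub echoes its inputs.
                print("running with", kwargs)

            def make_cli(self) -> argparse.ArgumentParser:
                # Derive command-line flags from the shared parameter table.
                parser = argparse.ArgumentParser(description=self.__doc__)
                for name, typ, default, help_text in self.parameters:
                    parser.add_argument(f"--{name}", type=typ, default=default, help=help_text)
                return parser

            def self_test(self):
                # Nightly regression hook: run with defaults and confirm completion.
                defaults = {name: default for name, _, default, _ in self.parameters}
                self.run(**defaults)

        if __name__ == "__main__":
            algo = SmoothImageAlgorithm()
            algo.run(**vars(algo.make_cli().parse_args()))

    Because the CLI, a GUI form, and the test harness would all read the same parameter table, the three surfaces cannot drift apart, which is the consistency property the abstract emphasizes.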

    TAI-GAN: A Temporally and Anatomically Informed Generative Adversarial Network for early-to-late frame conversion in dynamic cardiac PET inter-frame motion correction

    Inter-frame motion in dynamic cardiac positron emission tomography (PET) using rubidium-82 (82-Rb) myocardial perfusion imaging impacts myocardial blood flow (MBF) quantification and the diagnostic accuracy of coronary artery disease. However, the high variation in cross-frame tracer distribution caused by rapid tracer kinetics poses a considerable challenge for inter-frame motion correction, especially for early frames, where intensity-based image registration techniques often fail. To address this issue, we propose a novel method, the Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN), which uses an all-to-one mapping to convert early frames into frames whose tracer distribution is similar to that of the last reference frame. TAI-GAN consists of a feature-wise linear modulation layer that encodes channel-wise parameters generated from temporal information, with rough cardiac segmentation masks carrying local shifts serving as anatomical information. The proposed method was evaluated on a clinical 82-Rb PET dataset, and the results show that TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and subsequent MBF quantification with both conventional and deep learning-based motion correction methods improved compared with using the original frames.
    Comment: Under revision at Medical Image Analysis.
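
    As a rough sketch of the feature-wise linear modulation (FiLM) component named above, the snippet below predicts a channel-wise scale and shift from a conditioning vector and applies them to a feature map. It is written in PyTorch under stated assumptions: the conditioning vector (standing in for encoded temporal and anatomical information) and all dimensions are illustrative, not taken from the TAI-GAN architecture.

        import torch
        import torch.nn as nn

        class FiLM(nn.Module):
            """Feature-wise linear modulation: y = gamma(cond) * x + beta(cond)."""
            def __init__(self, cond_dim: int, num_channels: int):
                super().__init__()
                # One affine pair (gamma, beta) per feature channel.
                self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

            def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
                # x: (batch, channels, *spatial); cond: (batch, cond_dim)
                gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=1)
                # Broadcast the channel-wise scale and shift over spatial dims.
                shape = (x.size(0), x.size(1)) + (1,) * (x.dim() - 2)
                return gamma.reshape(shape) * x + beta.reshape(shape)

        film = FiLM(cond_dim=8, num_channels=16)
        x = torch.randn(2, 16, 8, 32, 32)   # a 3D feature map (placeholder sizes)
        cond = torch.randn(2, 8)            # e.g., frame timing + anatomy encoding
        out = film(x, cond)                 # same shape as x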

    A Multiclass Radiomics Method-Based WHO Severity Scale for Improving COVID-19 Patient Assessment and Disease Characterization From CT Scans.

    OBJECTIVES The aim of this study was to assess the disease severity of COVID-19 patients by comparing a multiclass lung lesion model to a single-class lung lesion model and to radiologists' assessments of chest computed tomography scans. MATERIALS AND METHODS The proposed method, AssessNet-19, was developed in 2 stages in this retrospective study. Four types of COVID-19-induced tissue lesions were manually segmented to train a 2D U-Net for multiclass segmentation, followed by extensive extraction of radiomic features from the lung lesions. LASSO regression was used to reduce the feature set, and the XGBoost algorithm was trained to classify disease severity based on the World Health Organization Clinical Progression Scale. The model was evaluated using 2 multicenter cohorts: a development cohort of 145 COVID-19-positive patients from 3 centers, used to train and test the severity prediction model on manually segmented lung lesions, and an evaluation set of 90 COVID-19-positive patients from 2 centers, used to evaluate AssessNet-19 in a fully automated fashion. RESULTS AssessNet-19 achieved an F1-score of 0.76 ± 0.02 for severity classification in the evaluation set, superior to the 3 expert thoracic radiologists (F1 = 0.63 ± 0.02) and to the single-class lesion segmentation model (F1 = 0.64 ± 0.02). In addition, the automated multiclass lesion segmentation of AssessNet-19 obtained mean Dice scores of 0.70 for ground-glass opacity, 0.68 for consolidation, 0.65 for pleural effusion, and 0.30 for band-like structures against ground truth. Moreover, it achieved high agreement with radiologists for quantifying disease extent, with Cohen κ of 0.94, 0.92, and 0.95. CONCLUSIONS A novel artificial intelligence multiclass radiomics model covering 4 lung lesion types determines the severity of COVID-19 patients on the World Health Organization Clinical Progression Scale more accurately than a single-class model and radiologists' assessments.
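
    The downstream severity classifier described above (LASSO regression to prune the radiomic feature set, then XGBoost for WHO-scale classification) can be sketched as follows. The feature matrix, labels, and hyperparameters are placeholders for illustration, not the values used by AssessNet-19.

        import numpy as np
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import Lasso
        from sklearn.pipeline import make_pipeline
        from xgboost import XGBClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(145, 400))   # radiomic features per patient (placeholder)
        y = rng.integers(0, 4, size=145)  # severity class labels (placeholder)

        clf = make_pipeline(
            # Keep only features with nonzero LASSO coefficients.
            SelectFromModel(Lasso(alpha=0.01)),
            # Gradient-boosted trees for the multiclass severity prediction.
            XGBClassifier(n_estimators=200, max_depth=3),
        )
        clf.fit(X, y)
        print(clf.predict(X[:5]))         # predicted severity classes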

    First Results from The GlueX Experiment

    The GlueX experiment at Jefferson Lab ran with its first commissioning beam in late 2014 and the spring of 2015. Data were collected on both plastic and liquid hydrogen targets, and much of the detector has been commissioned. All of the detector systems are now performing at or near design specifications, and events are being fully reconstructed, including exclusive production of π⁰, η, and ω mesons. Linearly polarized photons were successfully produced through coherent bremsstrahlung, and polarization transfer to the ρ has been observed.
    Comment: 8 pages, 6 figures; invited contribution to the Hadron 2015 Conference, Newport News, VA, September 2015.