
    Israel and the United States Did Not See the 1973 War Coming

    Israel’s mistaken pre-war assumptions led it to fail to foresee the outbreak of war with Egypt and Syria in 1973. What were these calculations based on, and why did the United States follow suit?

    The ISMRM Open Science Initiative for Perfusion Imaging (OSIPI): Results from the OSIPI-Dynamic Contrast-Enhanced challenge

    PURPOSE: $K^{\mathrm{trans}}$ has often been proposed as a quantitative imaging biomarker for diagnosis, prognosis, and treatment response assessment for various tumors. None of the many software tools for $K^{\mathrm{trans}}$ quantification are standardized. The ISMRM Open Science Initiative for Perfusion Imaging-Dynamic Contrast-Enhanced (OSIPI-DCE) challenge was designed to benchmark methods to better help the efforts to standardize $K^{\mathrm{trans}}$ measurement. METHODS: A framework was created to evaluate $K^{\mathrm{trans}}$ values produced by DCE-MRI analysis pipelines to enable benchmarking. The perfusion MRI community was invited to apply their pipelines for $K^{\mathrm{trans}}$ quantification in glioblastoma from clinical and synthetic patients. Submissions were required to include the entrants' $K^{\mathrm{trans}}$ values, the applied software, and a standard operating procedure. These were evaluated using the proposed $\mathrm{OSIPI}_{\mathrm{gold}}$ score, defined with accuracy, repeatability, and reproducibility components. RESULTS: Across the 10 received submissions, the $\mathrm{OSIPI}_{\mathrm{gold}}$ score ranged from 28% to 78%, with a median of 59%. The accuracy, repeatability, and reproducibility scores ranged from 0.54 to 0.92, 0.64 to 0.86, and 0.65 to 1.00, respectively (0-1 = lowest-highest). Manual arterial input function selection markedly affected the reproducibility and showed greater variability in $K^{\mathrm{trans}}$ analysis than automated methods. Furthermore, provision of a detailed standard operating procedure was critical for higher reproducibility. CONCLUSIONS: This study reports results from the OSIPI-DCE challenge and highlights the high inter-software variability in $K^{\mathrm{trans}}$ estimation, providing a framework for ongoing benchmarking against the scores presented. Through this challenge, the participating teams were ranked based on the performance of their software tools in the particular setting of this challenge. In a real-world clinical setting, many of these tools may perform differently with different benchmarking methodology.
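    For context: the abstract does not state which pharmacokinetic model each pipeline used, but $K^{\mathrm{trans}}$ in DCE-MRI conventionally denotes the volume transfer constant of the (standard) Tofts model, a sketch of which is

    $$C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau, \qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}$$

    where $C_t$ is the tissue contrast-agent concentration, $C_p$ is the plasma concentration given by the arterial input function, and $v_e$ is the extravascular extracellular volume fraction. Because $C_p$ enters the fit directly, manual versus automated arterial input function selection can shift the fitted $K^{\mathrm{trans}}$, which is consistent with the reproducibility differences reported above.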
