
    Model Evaluation Guidelines for Geomagnetic Index Predictions

    Geomagnetic indices are convenient quantities that distill the complicated physics of some region or aspect of near‐Earth space into a single parameter. Most of the best‐known indices are calculated from ground‐based magnetometer data sets, such as Dst, SYM‐H, Kp, AE, AL, and PC. Many models have been created that predict the values of these indices, often using solar wind measurements upstream from Earth as the input variables to the calculation. This document reviews the current state of models that predict geomagnetic indices and the methods used to assess their ability to reproduce the target index time series. These existing methods are synthesized into a baseline collection of metrics for benchmarking a new or updated geomagnetic index prediction model. The methods fall into two categories: (1) fit performance metrics, such as root‐mean‐square error and mean absolute error, that are applied to a time series comparison of model output and observations, and (2) event detection performance metrics, such as the Heidke Skill Score and the probability of detection, that are derived from a contingency table comparing model and observation values exceeding (or not) a threshold value. A few examples of codes being used with this set of metrics are presented, and other aspects of metrics assessment best practices, limitations, and uncertainties are discussed, including several caveats to consider when using geomagnetic indices.

    Plain Language Summary
    One aspect of space weather is a magnetic signature across the surface of the Earth. The creation of this signal involves nonlinear interactions of electromagnetic forces on charged particles and can therefore be difficult to predict. The perturbations that space storms and other activity cause in some observation sets, however, are fairly regular in their pattern. Some of these measurements have been compiled together into a single value, a geomagnetic index. Several such indices exist, providing a global estimate of the activity in different parts of geospace. Models have been developed to predict the time series of these indices, and various statistical methods are used to assess their performance at reproducing the original index. Existing studies of geomagnetic indices, however, use different approaches to quantify model performance. This document defines a standardized set of statistical analyses as a baseline set of comparison tools recommended for assessing geomagnetic index prediction models. It also discusses best practices, limitations, uncertainties, and caveats to consider when conducting a model assessment.

    Key Points
    - We review existing practices for assessing geomagnetic index prediction models and recommend a "standard set" of metrics
    - Along with fit performance metrics that use all data‐model pairs in their formulas, event detection performance metrics are recommended
    - Other aspects of metrics assessment best practices, limitations, uncertainties, and geomagnetic index caveats are also discussed

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/147764/1/swe20790_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147764/2/swe20790.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147764/3/swe20790-sup-0001-2018SW002067-SI.pd
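    To illustrate the two metric families the abstract describes, here is a minimal Python sketch that computes fit performance metrics (RMSE, MAE) and contingency-table event detection metrics (probability of detection, Heidke Skill Score). The function names, the synthetic data in the usage note, and the event convention (values at or above a threshold count as events, as for Kp; storm-time Dst/SYM-H excursions are negative, so the inequality would flip) are assumptions for illustration, not the paper's reference implementation.

    ```python
    import numpy as np

    def fit_metrics(obs, model):
        """Fit performance metrics over paired observation/model time series."""
        obs, model = np.asarray(obs, float), np.asarray(model, float)
        err = model - obs
        return {
            "RMSE": np.sqrt(np.mean(err ** 2)),  # root-mean-square error
            "MAE": np.mean(np.abs(err)),         # mean absolute error
        }

    def event_metrics(obs, model, threshold):
        """Event detection metrics from a 2x2 contingency table at a threshold."""
        obs_event = np.asarray(obs) >= threshold
        mod_event = np.asarray(model) >= threshold
        hits = np.sum(obs_event & mod_event)
        misses = np.sum(obs_event & ~mod_event)
        false_alarms = np.sum(~obs_event & mod_event)
        correct_negatives = np.sum(~obs_event & ~mod_event)
        n = hits + misses + false_alarms + correct_negatives
        # Probability of detection: fraction of observed events the model caught.
        pod = hits / (hits + misses) if hits + misses else np.nan
        # Heidke Skill Score: improvement over a chance forecast (1 is perfect,
        # 0 is no skill relative to chance).
        expected = ((hits + misses) * (hits + false_alarms)
                    + (correct_negatives + misses)
                    * (correct_negatives + false_alarms)) / n
        hss = ((hits + correct_negatives - expected) / (n - expected)
               if n != expected else np.nan)
        return {"POD": pod, "HSS": hss}
    ```

    A model that matches observations exactly yields HSS = 1 and POD = 1; a forecast with no skill beyond chance yields HSS near 0.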

    Implementation of iterative metal artifact reduction in the pre-planning-procedure of three-dimensional physical modeling

    Abstract

    Background: To assess the impact of metal artifact reduction techniques in 3D printing by evaluating image quality and segmentation time in both phantom and patient studies with dental restorations and/or other metal implants. An acrylic denture apparatus (Kilgore Typodont, Kilgore International, Coldwater, MI) was set in a 20 cm water phantom and scanned on a single-source CT scanner with gantry tilting capacity (SOMATOM Edge, Siemens Healthcare, Forchheim, Germany) under five scenarios: (1) baseline acquisition at 120 kV with no gantry tilt and no jaw spacer, (2) acquisition at 140 kV, (3) acquisition with a gantry tilt of 15°, (4) acquisition with a non-radiopaque jaw spacer, and (5) acquisition with a jaw spacer and a gantry tilt of 15°. All acquisitions were reconstructed both with and without a dedicated iterative metal artifact reduction (MAR) algorithm. Patients referred for a head-and-neck exam were included in the study. Acquisitions were performed on the same scanner at 120 kV, and the images were reconstructed with and without iterative MAR. Segmentation was performed on a dedicated workstation (Materialise Interactive Medical Image Control Systems; Materialise NV, Leuven, Belgium) to quantify the volume of metal artifact and the segmentation time.

    Results: In the phantom study, the use of gantry tilt, jaw spacer, and increased tube voltage showed no benefit in time or artifact volume reduction. However, the jaw spacer allowed easier separation of the upper and lower jaws and a better display of the teeth. The use of dedicated iterative MAR significantly reduced the metal artifact volume and the processing time. The same observations were made for the four patients included in the study.

    Conclusion: The use of dedicated iterative MAR and a jaw spacer substantially reduced metal artifacts in head-and-neck CT acquisitions, allowing a faster 3D segmentation workflow.
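    The study quantifies artifact volume via segmentation in dedicated software; as a rough illustration of the underlying idea, here is a toy Python sketch that flags voxels whose Hounsfield unit (HU) values fall outside a plausible tissue range (dark and bright streaks) and converts the count to a volume. The HU thresholds and function name are invented for illustration and are not the study's segmentation protocol.

    ```python
    import numpy as np

    def artifact_volume_cm3(ct_hu, voxel_mm3, dark_hu=-200, bright_hu=2000):
        """Estimate streak-artifact volume in a CT volume of HU values.

        Voxels darker than dark_hu or brighter than bright_hu are flagged
        as likely streaks; the count is converted to cm^3 via voxel size.
        Thresholds are illustrative only.
        """
        ct_hu = np.asarray(ct_hu, float)
        mask = (ct_hu < dark_hu) | (ct_hu > bright_hu)
        return mask.sum() * voxel_mm3 / 1000.0  # 1 cm^3 = 1000 mm^3
    ```

    Comparing this estimate for reconstructions with and without MAR would mirror, in miniature, the volume comparison reported in the study.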

    Application usability levels: a framework for tracking project product progress

    The space physics community continues to grow and to become both more interdisciplinary and more intertwined with commercial and government operations. This has created a need for a framework to easily identify which projects can be used for specific applications and how close a given tool is to routine autonomous or on-demand implementation and operation. We propose the Application Usability Level (AUL) framework and publicizing AULs to help the community quantify the progress of successful applications, metrics, and validation efforts. This framework will also aid the scientific community by supplying the type of information needed to build on previously published work and by publicizing the applications and requirements needed by the user communities. In this paper, we define the AUL framework, outline the milestones required for progression to higher AULs, and provide example projects utilizing the AUL framework. This work has been completed as part of the activities of the Assessment of Understanding and Quantifying Progress working group, which is part of the International Forum for Space Weather Capabilities Assessment.
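    The abstract describes tracking a project's progression through numbered usability levels via completed milestones. As a minimal sketch of such bookkeeping, the Python below assumes a TRL-like structure of nine levels grouped into three broad phases; the phase names, milestone handling, and class design are illustrative assumptions, not the paper's definitions.

    ```python
    from dataclasses import dataclass, field

    def aul_phase(level):
        """Map an AUL (1-9) to a broad phase, assuming a TRL-like 3x3 split."""
        if not 1 <= level <= 9:
            raise ValueError("AUL must be between 1 and 9")
        return ("Phase I: discovery and viability",
                "Phase II: development, testing, and validation",
                "Phase III: implementation and operations")[(level - 1) // 3]

    @dataclass
    class Project:
        """Track a project's AUL by recording milestone completion per level."""
        name: str
        milestones: dict = field(default_factory=dict)  # level -> [done flags]

        def record(self, level, done):
            self.milestones.setdefault(level, []).append(done)

        @property
        def aul(self):
            """Highest level whose milestones, and all lower levels', are done."""
            reached = 0
            for lv in range(1, 10):
                flags = self.milestones.get(lv)
                if flags and all(flags):
                    reached = lv
                else:
                    break
            return reached
    ```

    A project with all level-1 and level-2 milestones complete but an open level-3 milestone would report AUL 2, making it easy to publicize how far an application is from routine operation.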