
    Evolution of the ring current energy during May 2-4, 1998 magnetic storm

    We study the evolution of the ring current energy density during the May 2-4, 1998 storm event, as measured by the Polar CAMMICE/MICS instrument and modelled by proton tracing in the guiding center approximation. Particle data from Polar show that during the storm main phase, protons with medium energies (20-80 keV) contribute more to the total ring current energy than high-energy protons (80-200 keV), whereas during the recovery phase the high energies dominate. We trace protons with arbitrary pitch angles numerically in the guiding center approximation, taking charge-exchange losses into account. Tracing is performed in large-scale and smaller-scale time-dependent magnetic and electric field models. We model the substorm activity by several electric field pulses at the times of the substorm onsets. It is shown that the impulsive electric fields associated with substorms are effective at transporting protons and energizing them to energies above 100 keV in the storm-time ring current.
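
    Charge-exchange losses enter such a tracing as an exponential decay along the drift path, with a lifetime set by the cross section, the geocoronal hydrogen density, and the proton speed. Below is a minimal Python sketch of that loss term only; the cross-section value, the exosphere density model, and all numbers are illustrative assumptions, not the values used in this study.

```python
import numpy as np

# Illustrative sketch of a charge-exchange loss term for ring current
# proton tracing.  All constants below are assumptions for demonstration,
# not the inputs used in the paper.

M_P = 1.6726e-27          # proton mass [kg]
KEV = 1.602e-16           # 1 keV in joules

def proton_speed(energy_kev):
    """Non-relativistic proton speed [m/s], adequate at these energies."""
    return np.sqrt(2.0 * energy_kev * KEV / M_P)

def geocorona_density(L):
    """Assumed exponential geocoronal hydrogen density [m^-3] versus
    L shell; a simple stand-in for an empirical exosphere model."""
    return 1e9 * np.exp(-(L - 1.0) / 1.5)

def charge_exchange_lifetime(energy_kev, L, sigma=1e-19):
    """Lifetime tau = 1 / (sigma * n_H * v); sigma [m^2] is an assumed,
    energy-independent cross section for illustration only."""
    return 1.0 / (sigma * geocorona_density(L) * proton_speed(energy_kev))

# Survival probability of a 50 keV proton drifting for 3 hours at L = 4
tau = charge_exchange_lifetime(50.0, 4.0)
print(np.exp(-3 * 3600.0 / tau))
```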

    Event-oriented modelling of magnetic fields and currents during storms

    We model the magnetospheric magnetic field during two storms, one moderate and one intense, using the event-oriented modelling technique, which includes representations of the magnetic field arising from the various magnetospheric current systems. The model's free parameters are specified separately for each time step using observations from the GOES 8, 9, and 10, Polar, Interball, and Geotail satellites and Dst measurements. It is shown that the ring current is most important during intense storms, whereas the near-Earth tail currents contribute more to the Dst index than the ring current during moderate storms.
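
    The per-time-step specification of free parameters can be viewed as a linear least-squares fit of current-system amplitudes to the available magnetometer data. The sketch below shows that idea in Python; the placeholder module fields, function names, and numbers are assumptions for illustration, not the event-oriented model's actual representations.

```python
import numpy as np

# Sketch of fitting current-system amplitudes at one time step: the total
# model field is a superposition of module fields (ring current, tail,
# magnetopause, ...), and the amplitudes are re-fit against satellite data.

def fit_amplitudes(module_fields, observed_field):
    """module_fields: (n_obs * 3, n_modules) matrix, each column the field
    of one current system evaluated at the observation points;
    observed_field: flattened (n_obs * 3,) vector of measurements."""
    amps, *_ = np.linalg.lstsq(module_fields, observed_field, rcond=None)
    return amps

# One time step with 4 hypothetical satellites and 3 current systems
rng = np.random.default_rng(0)
basis = rng.normal(size=(12, 3))          # placeholder module fields [nT]
truth = np.array([120.0, -40.0, 15.0])    # hypothetical true amplitudes
obs = basis @ truth + rng.normal(scale=2.0, size=12)  # noisy observations
print(fit_amplitudes(basis, obs))         # recovers roughly [120, -40, 15]
```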

    Real‐Time SWMF at CCMC: Assessing the Dst Output From Continuous Operational Simulations

    The ground‐based magnetometer index of Dst is a commonly used measure of near‐Earth current systems, in particular the storm time inner magnetospheric current systems. The ability of a large‐scale, physics‐based model to reproduce, or even predict, this index is therefore a tangible measure of the overall validity of the code for space weather research and operational usage. Experimental real‐time simulations of the Space Weather Modeling Framework (SWMF) are conducted at the Community Coordinated Modeling Center (CCMC). Presently, two configurations of the SWMF are running in real time at CCMC, both focusing on the geospace modules, using the Block Adaptive Tree Solar wind‐type Roe Upwind Solver magnetohydrodynamic model and the Ridley Ionosphere Model, with and without the Rice Convection Model. While both have been running for several years, nearly continuous results are available since April 2015. A 27‐month interval through July 2017 is used for a quantitative assessment of the model's Dst output against the Kyoto real‐time Dst. Quantitative measures are presented to assess the goodness of fit, including contingency tables and a receiver operating characteristic curve. It is shown that the SWMF run with the inner magnetosphere model is much better at reproducing storm time values, with a correlation coefficient of 0.69, a prediction efficiency of 0.41, and a Heidke skill score of 0.57 (for a −50‐nT threshold). A comparison of real‐time runs with and without the inner magnetospheric drift physics model reveals that nearly all of the storm time Dst signature is from current systems related to kinetic processes on closed magnetic field lines.

    Plain Language Summary: As society becomes more dependent on technologies susceptible to adverse space weather, it is becoming increasingly critical to have numerical models capable of running in real time to nowcast/forecast conditions in the near‐Earth space environment. One such model is available at the Community Coordinated Modeling Center and has been running for several years, allowing for an assessment of the quality of its results. Comparisons are made against a globally compiled index of near‐Earth space storm activity, using numerous statistical quantities and tests. The skill of the model is remarkable, especially when the few hours after each of the model's cold restarts are removed from the comparison. It is also shown that a global model alone is not very good at reproducing this storm index; a regional model for the inner part of geospace is necessary for good data-model agreement.

    Key Points:
    - The SWMF model has been running in experimental real‐time mode at CCMC for several years, and all saved output is available
    - The comparison against real‐time Dst is quite good, especially when a few hours after cold restarts are removed from the comparison
    - It is necessary to include an inner magnetospheric drift physics model to reproduce Dst; a real‐time run without one does much worse

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146631/1/swe20766.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146631/2/swe20766_am.pdf
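
    The three reported scores follow standard definitions. Here is a minimal Python sketch of how they can be computed, assuming paired arrays of observed and modeled hourly Dst over the assessment interval; this is not the validation code used in the study, and the sample values are made up.

```python
import numpy as np

# Standard-definition metrics for a Dst comparison: linear correlation,
# prediction efficiency, and Heidke skill score at a -50 nT threshold.

def prediction_efficiency(obs, mod):
    """1 - MSE/Var(obs): 1 is perfect; 0 is no better than the obs mean."""
    return 1.0 - np.mean((obs - mod) ** 2) / np.var(obs)

def heidke_skill_score(obs, mod, threshold=-50.0):
    """HSS from the 2x2 contingency table; a storm 'event' is any value
    at or below the threshold (Dst is negative during storms)."""
    hits = np.sum((obs <= threshold) & (mod <= threshold))
    misses = np.sum((obs <= threshold) & (mod > threshold))
    false_alarms = np.sum((obs > threshold) & (mod <= threshold))
    correct_negatives = np.sum((obs > threshold) & (mod > threshold))
    num = 2.0 * (hits * correct_negatives - misses * false_alarms)
    den = ((hits + misses) * (misses + correct_negatives)
           + (hits + false_alarms) * (false_alarms + correct_negatives))
    return num / den

# Made-up sample series, for demonstration only [nT]
obs = np.array([-10.0, -30.0, -80.0, -120.0, -60.0, -20.0, -5.0])
mod = np.array([-15.0, -25.0, -70.0, -100.0, -40.0, -25.0, -10.0])
print(np.corrcoef(obs, mod)[0, 1],
      prediction_efficiency(obs, mod),
      heidke_skill_score(obs, mod))
```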

    Model Evaluation Guidelines for Geomagnetic Index Predictions

    Geomagnetic indices are convenient quantities that distill the complicated physics of some region or aspect of near‐Earth space into a single parameter. Most of the best‐known indices are calculated from ground‐based magnetometer data sets, such as Dst, SYM‐H, Kp, AE, AL, and PC. Many models have been created that predict the values of these indices, often using solar wind measurements upstream of Earth as the input variables to the calculation. This document reviews the current state of models that predict geomagnetic indices and the methods used to assess their ability to reproduce the target index time series. These existing methods are synthesized into a baseline collection of metrics for benchmarking a new or updated geomagnetic index prediction model. The methods fall into two categories: (1) fit performance metrics, such as root‐mean‐square error and mean absolute error, that are applied to a time series comparison of model output and observations, and (2) event detection performance metrics, such as Heidke skill score and probability of detection, that are derived from a contingency table comparing whether model and observation values exceed a threshold. A few examples of codes being used with this set of metrics are presented, and other aspects of metrics assessment best practices, limitations, and uncertainties are discussed, including several caveats to consider when using geomagnetic indices.

    Plain Language Summary: One aspect of space weather is a magnetic signature across the surface of the Earth. The creation of this signal involves nonlinear interactions of electromagnetic forces on charged particles and can therefore be difficult to predict. The perturbations that space storms and other activity cause in some observation sets, however, are fairly regular in their pattern. Some of these measurements have been compiled into single values, called geomagnetic indices. Several such indices exist, providing global estimates of the activity in different parts of geospace. Models have been developed to predict the time series of these indices, and various statistical methods are used to assess their performance at reproducing the original index. Existing studies of geomagnetic indices, however, use different approaches to quantify model performance. This document defines a standardized set of statistical analyses as a baseline set of comparison tools recommended for assessing geomagnetic index prediction models. It also discusses best practices, limitations, uncertainties, and caveats to consider when conducting a model assessment.

    Key Points:
    - We review existing practices for assessing geomagnetic index prediction models and recommend a “standard set” of metrics
    - Along with fit performance metrics that use all data‐model pairs in their formulas, event detection performance metrics are recommended
    - Other aspects of metrics assessment best practices, limitations, uncertainties, and geomagnetic index caveats are also discussed

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/147764/1/swe20790_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147764/2/swe20790.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147764/3/swe20790-sup-0001-2018SW002067-SI.pdf
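
    The two metric categories can be illustrated in a few lines of Python. The sketch below computes the fit metrics named above (RMSE, MAE) and builds the thresholded contingency table from which event detection metrics such as probability of detection are derived; function names and sample data are placeholders, not the paper's recommended reference implementation.

```python
import numpy as np

# Category 1: fit performance metrics over all data-model pairs.
def fit_metrics(obs, mod):
    err = mod - obs
    return {"RMSE": np.sqrt(np.mean(err ** 2)),
            "MAE": np.mean(np.abs(err))}

# Category 2: event detection metrics from a 2x2 contingency table.
def contingency_table(obs, mod, threshold):
    """Events are values at or below the threshold (e.g. storm-time Dst)."""
    obs_event, mod_event = obs <= threshold, mod <= threshold
    return {"hits": int(np.sum(obs_event & mod_event)),
            "misses": int(np.sum(obs_event & ~mod_event)),
            "false_alarms": int(np.sum(~obs_event & mod_event)),
            "correct_negatives": int(np.sum(~obs_event & ~mod_event))}

def probability_of_detection(table):
    return table["hits"] / (table["hits"] + table["misses"])

# Made-up sample series, for demonstration only [nT]
obs = np.array([-10.0, -30.0, -80.0, -120.0, -60.0, -20.0])
mod = np.array([-15.0, -25.0, -70.0, -100.0, -40.0, -25.0])
t = contingency_table(obs, mod, threshold=-50.0)
print(fit_metrics(obs, mod), t, probability_of_detection(t))
```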

    Analysis of Features in a Sliding Threshold of Observation for Numeric Evaluation (STONE) Curve

    We apply idealized scatterplot distributions to the sliding threshold of observation for numeric evaluation (STONE) curve, a new model assessment metric, to examine the relationship between the STONE curve and the underlying point-spread distribution. The STONE curve is based on the relative operating characteristic (ROC) curve but is developed to work with a continuous-valued set of observations, sweeping both the observed and modeled event identification thresholds simultaneously. This is particularly useful for model predictions of time series data, as is the case for much of terrestrial weather and space weather. The identical sweep of both the model and observational thresholds changes both the modeled and observed event states as the quadrant boundaries shift. These changes in a data-model pair's event status cause nonmonotonic features to appear in the STONE curve when it is compared to an ROC curve for the same observational and model data sets. Such features reveal characteristics of the underlying distributions of the data and model values. Many idealized data sets were created with known distributions, connecting certain scatterplot features to distinct STONE curve signatures. A comprehensive suite of feature-signature combinations is presented, including their relationship to several other metrics. It is shown that nonmonotonic features appear if a local spread is more than 0.2 of the full domain or if a local bias is more than half of the local spread. The example of real-time plasma sheet electron modeling is used to show the usefulness of this technique, especially in combination with other metrics.

    Plain Language Summary: Many statistical tools have been developed to aid in the assessment of a numerical model's quality at reproducing observations. Some of these techniques focus on the identification of events within the data set, times when the observed value is beyond some threshold that marks it as a value of keen interest. An example is whether it will rain, in which events are defined as any precipitation above some defined amount. A method called the sliding threshold of observation for numeric evaluation (STONE) curve sweeps the event definition threshold of both the model output and the observations, identifying threshold intervals for which the model does well at sorting the observations into events and nonevents. An excellent data-model comparison will have a smooth STONE curve, but the STONE curve can have wiggles and ripples in it. These features reveal clusters where the model systematically overestimates or underestimates the observations. This study establishes the connection between features in the STONE curve and attributes of the data-model relationship.

    Key Points:
    - The sliding threshold of observation for numeric evaluation (STONE) curve, an event detection sweeping-threshold data-model comparison metric, reveals thresholds where the model matches the data
    - STONE curves can be nonmonotonic, revealing the location and size of clusters of model under- or overestimations of the observations
    - STONE curve features are analyzed, quantifying the shape of nonmonotonicities relative to distribution characteristics and other metrics

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/172933/1/swe21334_am.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/172933/2/swe21334.pdf
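
    Following the construction described in the abstract, a STONE curve can be sketched as a ROC-style sweep in which one common threshold is applied to both the observations and the model output at each step. The Python sketch below implements that idea on synthetic data; the function name, the synthetic distributions, and the threshold grid are assumptions for illustration, not the authors' reference code.

```python
import numpy as np

def stone_curve(obs, mod, thresholds):
    """Return (false alarm rate, probability of detection) pairs, one per
    threshold, with the SAME threshold applied to observations and model
    so that event status can change on both sides as it slides."""
    pofd, pod = [], []
    for t in thresholds:
        obs_event, mod_event = obs >= t, mod >= t
        hits = np.sum(obs_event & mod_event)
        misses = np.sum(obs_event & ~mod_event)
        fa = np.sum(~obs_event & mod_event)
        cn = np.sum(~obs_event & ~mod_event)
        pod.append(hits / max(hits + misses, 1))
        pofd.append(fa / max(fa + cn, 1))
    return np.array(pofd), np.array(pod)

# Synthetic stand-in for a data-model pair with a modest local spread
rng = np.random.default_rng(1)
obs = rng.normal(size=2000)
mod = obs + rng.normal(scale=0.3, size=2000)
x, y = stone_curve(obs, mod, thresholds=np.linspace(-2, 2, 41))
# A cluster with large local spread or bias in the scatterplot would show
# up as a nonmonotonic wiggle in (x, y); here the curve stays nearly smooth.
print(np.all(np.diff(y) <= 1e-12))  # check for monotonic decrease in POD
```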

    Thank You to Our 2019 Reviewers

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/155525/1/jgra55697.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/155525/2/jgra55697_am.pdf