
    In Flight Calibration of the Magnetospheric Multiscale Mission Fast Plasma Investigation

    The Fast Plasma Investigation (FPI) on the Magnetospheric Multiscale mission (MMS) combines data from eight spectrometers, each with four deflection states, into a single map of the sky. Any systematic discontinuity, artifact, or noise source present in this map may be misinterpreted as legitimate data and lead to incorrect conclusions. It is therefore desirable for all spectrometers to return the same output for a given input, and for this output to be minimally affected by noise and other errors. While many missions use statistical analyses of data to calibrate instruments in flight, this process is difficult for FPI for two reasons: (1) only a small fraction of the high-resolution data is downloaded to the ground due to bandwidth limitations, and (2) the data that are downloaded are, by definition, scientifically interesting and therefore not ideal for calibration. FPI instead uses a suite of new tools to calibrate in flight. A new method for detection-system ground calibration has been developed that sweeps the detection threshold to fully define the pulse-height distribution. This method has now been extended for use in flight as a means to calibrate the MCP voltage and threshold (together forming the operating point) of the Dual Electron Spectrometers (DES) and Dual Ion Spectrometers (DIS). A method of comparing higher-energy data (which have a low fractional voltage error) with lower-energy data (which have a higher fractional voltage error) will be used to calibrate the high-voltage outputs. Finally, a comparison of pitch-angle distributions will be used to find remaining discrepancies among the sensors.
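
    To make the threshold-sweep idea concrete, the sketch below (Python; all numbers and the margin in choose_operating_point are hypothetical, not FPI values) recovers a pulse-height distribution from a simulated sweep of the discriminator threshold and picks an operating point just below the peak of that distribution.

```python
import numpy as np
from math import erf

def pulse_height_distribution(thresholds, counts):
    # The count rate versus threshold is the complementary cumulative
    # distribution of pulse heights; differentiating (with a sign flip)
    # recovers the pulse-height distribution (PHD) itself.
    return -np.gradient(counts, thresholds)

def choose_operating_point(thresholds, counts, margin=2.0):
    # Place the discriminator a fixed margin below the PHD peak; 'margin'
    # is an illustrative tuning parameter, not a value from this work.
    phd = pulse_height_distribution(thresholds, counts)
    peak = thresholds[np.argmax(phd)]
    return max(thresholds[0], peak - margin)

# Simulated sweep: a roughly Gaussian PHD centred at 5 (arbitrary units).
thr = np.linspace(0.0, 10.0, 101)
ccdf = np.array([0.5 * (1 - erf((t - 5.0) / (2 ** 0.5 * 1.2))) for t in thr])
counts = np.random.default_rng(0).poisson(1e4 * ccdf)
print("suggested threshold:", choose_operating_point(thr, counts))
```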

    Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Plasma measurements in space are becoming increasingly fast, high in resolution, and distributed over multiple instruments. As raw data generation rates can exceed the available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study explores how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used to compress the count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, the latter case indicating that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
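
    As a rough illustration of the transform stage only, the sketch below (Python with PyWavelets; synthetic Poisson counts, a biorthogonal wavelet standing in for the CCSDS 9/7 transform, and zeroth-order entropy as a crude proxy for the size of a bit-plane-encoded output) compares the coding cost of raw counts with that of their wavelet coefficients. It is not the CCSDS 122.0 implementation in the FPI compression ASIC.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
y, x = np.mgrid[0:32, 0:64]
lam = 5.0 + 200.0 * np.exp(-((x - 32) ** 2 + (y - 16) ** 2) / 60.0)  # smooth "beam"
counts = rng.poisson(lam)                                            # synthetic count image

# A biorthogonal 2-D DWT stands in for the CCSDS 9/7 transform.
coeffs = pywt.wavedec2(counts.astype(float), wavelet="bior2.2", level=2)
arr, _ = pywt.coeffs_to_array(coeffs)

def entropy_bits(v):
    # Zeroth-order entropy of the rounded values, in total bits: a crude
    # proxy for what an entropy/bit-plane coder could achieve.
    _, freq = np.unique(np.round(v).astype(np.int64), return_counts=True)
    p = freq / freq.sum()
    return v.size * float(-(p * np.log2(p)).sum())

print("raw counts ~ %.0f bits" % entropy_bits(counts))
print("DWT coeffs ~ %.0f bits" % entropy_bits(arr))
```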

    The Legacy of Leaded Gasoline in Bottom Sediment of Small Rural Reservoirs

    The historical and ongoing lead (Pb) contamination caused by the 20th-century use of leaded gasoline was investigated through an analysis of bottom sediment in eight small rural reservoirs in eastern Kansas, USA. For the reservoirs that were completed before or during the period of maximum Pb emissions from vehicles (i.e., the 1940s through the early 1980s) and that had a major highway in the basin, increased Pb concentrations reflected the pattern of historical leaded gasoline use. For at least some of these reservoirs, residual Pb is still being delivered from the basins. There was no evidence of increased Pb deposition for the reservoirs completed after the period of peak Pb emissions and (or) located in relatively remote areas with little or no highway traffic. Results indicated that several factors affected the magnitude and variability of Pb concentrations in reservoir sediment, including traffic volume, reservoir age, and basin size. The increased Pb concentrations at four reservoirs exceeded the U.S. Environmental Protection Agency threshold-effects level (30.2 mg kg⁻¹) and frequently exceeded a consensus-based threshold-effects concentration (35.8 mg kg⁻¹) for possible adverse biological effects. For two reservoirs it was estimated that it will take at least 20 to 70 yr for Pb in the newly deposited sediment to return to baseline (pre-1920s) concentrations (30 mg kg⁻¹) following the phase-out of leaded gasoline. The buried sediment with elevated Pb concentrations may pose a future environmental concern if the reservoirs are dredged, the dams are removed, or the dams fail.
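
    The recovery-time estimate invites a simple back-of-the-envelope illustration. The sketch below (Python; the core-profile values are invented, and linear extrapolation is just one possible approach, not necessarily the authors' method) extrapolates a declining Pb trend in recently deposited sediment down to a 30 mg kg⁻¹ baseline.

```python
import numpy as np

# Hypothetical core slices: deposition year and Pb concentration (mg/kg).
years = np.array([1990, 1995, 2000, 2005, 2010])
pb = np.array([120.0, 105.0, 92.0, 80.0, 70.0])
baseline = 30.0  # pre-1920s baseline concentration, mg/kg

# Fit a linear trend and extrapolate to the year the baseline is reached.
slope, intercept = np.polyfit(years, pb, 1)           # mg/kg per year
year_at_baseline = (baseline - intercept) / slope
print(f"trend {slope:.1f} mg/kg per yr; baseline reached ~{year_at_baseline:.0f}")
```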

    Seasonal-to-interannual prediction of North American coastal marine ecosystems: forecast methods, mechanisms of predictability, and priority developments

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Jacox, M. G., Alexander, M. A., Siedlecki, S., Chen, K., Kwon, Y., Brodie, S., Ortiz, I., Tommasi, D., Widlansky, M. J., Barrie, D., Capotondi, A., Cheng, W., Di Lorenzo, E., Edwards, C., Fiechter, J., Fratantoni, P., Hazen, E. L., Hermann, A. J., Kumar, A., Miller, A. J., Pirhalla, D., Buil, M. P., Ray, S., Sheridan, S. C., Subramanian, A., Thompson, P., Thorne, L., Annamalai, H., Aydin, K., Bograd, S. J., Griffis, R. B., Kearney, K., Kim, H., Mariotti, A., Merrifield, M., & Rykaczewski, R. Seasonal-to-interannual prediction of North American coastal marine ecosystems: forecast methods, mechanisms of predictability, and priority developments. Progress in Oceanography, 183, (2020): 102307, doi:10.1016/j.pocean.2020.102307. Marine ecosystem forecasting is an area of active research and rapid development. Promise has been shown for skillful prediction of physical, biogeochemical, and ecological variables on a range of timescales, suggesting potential for forecasts to aid in the management of living marine resources and coastal communities. However, the mechanisms underlying forecast skill in marine ecosystems are often poorly understood, and many forecasts, especially for biological variables, rely on empirical statistical relationships developed from historical observations. Here, we review statistical and dynamical marine ecosystem forecasting methods and highlight examples of their application along U.S. coastlines for seasonal-to-interannual (1–24 month) prediction of properties ranging from coastal sea level to marine top predator distributions. We then describe known mechanisms governing marine ecosystem predictability and how they have been used in forecasts to date. These mechanisms include physical atmospheric and oceanic processes, biogeochemical and ecological responses to physical forcing, and intrinsic characteristics of species themselves. In reviewing the state of the knowledge on forecasting techniques and mechanisms underlying marine ecosystem predictability, we aim to facilitate forecast development and uptake by (i) identifying methods and processes that can be exploited for development of skillful regional forecasts, (ii) informing priorities for forecast development and verification, and (iii) improving understanding of conditional forecast skill (i.e., a priori knowledge of whether a forecast is likely to be skillful). While we focus primarily on coastal marine ecosystems surrounding North America (and the U.S. in particular), we detail forecast methods, physical and biological mechanisms, and priority developments that are globally relevant. This study was supported by the NOAA Climate Program Office’s Modeling, Analysis, Predictions, and Projections (MAPP) program through grants NA17OAR4310108, NA17OAR4310112, NA17OAR4310111, NA17OAR4310110, NA17OAR4310109, NA17OAR4310104, NA17OAR4310106, and NA17OAR4310113. This paper is a product of the NOAA/MAPP Marine Prediction Task Force.
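
    For the statistical (empirical) end of the forecasting spectrum, the sketch below (Python; synthetic series, with the predictor and target named only as examples) shows the basic pattern: fit a lagged relationship on the historical record, then issue a forecast from the latest observed predictor value.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40                                    # years of (synthetic) history
predictor = rng.standard_normal(n)        # e.g., a winter climate index
target = 0.6 * predictor + 0.4 * rng.standard_normal(n)  # e.g., spring coastal anomaly

# Fit the lagged relationship on the historical record ...
slope, intercept = np.polyfit(predictor, target, 1)

# ... and issue a forecast from the latest observed predictor value.
latest_predictor = 1.2
forecast = slope * latest_predictor + intercept
print(f"forecast anomaly: {forecast:+.2f}")
```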

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
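
    A minimal sketch of the ensemble idea and one common probabilistic metric is given below (Python; the quantile levels follow the Hub's general style, but the member forecasts and the observation are invented). It takes the median of member predictions at each quantile level and scores one central interval with the interval score; it is not the COVIDhub-ensemble implementation.

```python
import numpy as np

quantile_levels = np.array([0.025, 0.25, 0.5, 0.75, 0.975])

# Rows: models; columns: predicted incident deaths at each quantile level.
member_forecasts = np.array([
    [120, 180, 230, 290, 400],
    [100, 150, 200, 260, 380],
    [140, 200, 260, 330, 450],
])

# "Median ensemble": median of member predictions at each quantile level.
ensemble = np.median(member_forecasts, axis=0)

def interval_score(lower, upper, observed, alpha):
    # Interval score for a central (1 - alpha) prediction interval.
    penalty = 0.0
    if observed < lower:
        penalty = (2 / alpha) * (lower - observed)
    elif observed > upper:
        penalty = (2 / alpha) * (observed - upper)
    return (upper - lower) + penalty

observed = 310
print("ensemble quantiles:", ensemble)
print("95% interval score:", interval_score(ensemble[0], ensemble[-1], observed, alpha=0.05))
```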

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
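
    One way to read the GitHub copy of the data is sketched below (Python with pandas; the file path, column names, and target string follow the layout the Hub documents, but all of them should be verified against the repository before relying on this).

```python
import pandas as pd

# Example forecast file from the Hub's GitHub repository; confirm that the
# path and date exist before use.
url = (
    "https://raw.githubusercontent.com/reichlab/covid19-forecast-hub/master/"
    "data-processed/COVIDhub-ensemble/2021-06-07-COVIDhub-ensemble.csv"
)
df = pd.read_csv(url, dtype={"location": str})

# Keep the quantile forecasts of 1-week-ahead incident deaths at the national level.
national = df[
    (df["type"] == "quantile")
    & (df["location"] == "US")
    & (df["target"] == "1 wk ahead inc death")
]
print(national[["quantile", "value"]].head())
```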

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced in order to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013, p < 0.001), with the proportion of operations lasting > 90 min increasing more than eightfold, from 5.1 to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may enable organisations to better organise theatre lists and deliver greater efficiencies in care.
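
    The general workflow the abstract describes can be sketched as follows (Python with scikit-learn; the predictors, coefficients, and point-scoring scheme are synthetic stand-ins, not the CholeS model): fit a logistic model for long operations, convert the coefficients to integer points, and check discrimination of the score on a held-out cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([
    rng.integers(0, 2, n),        # e.g., gallbladder wall thickening (0/1)
    rng.integers(0, 2, n),        # e.g., high ASA grade (0/1)
    rng.normal(30, 6, n) > 35,    # e.g., BMI > 35 (0/1)
]).astype(float)
logit = -2.0 + 1.1 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # long-operation indicator

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_dev, y_dev)

# Convert coefficients to integer points (one simple scheme: scale and round).
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
score_val = X_val @ points

print("points per factor:", points)
print("validation AUC of the integer score:", round(roc_auc_score(y_val, score_val), 3))
```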

    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis. Methods A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain between March and June 2017. The main outcome was best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis). Results Some 5345 patients across 154 UK hospitals were identified, of whom two‐thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely as men to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent, versus 120 of 993, 12·1 per cent; relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best (cut‐off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut‐off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent). Conclusion Women in the UK had a disproportionate risk of admission without surgical intervention and had high rates of normal appendicectomy. Risk prediction models that could support shared decision‐making by identifying adults in the UK at low risk of appendicitis were identified.
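
    The two headline metrics can be computed in a few lines, as sketched below (Python; the scores, outcomes, and the cut-off of 8 are illustrative only and do not reproduce the Adult Appendicitis Score).

```python
import numpy as np

def specificity_and_failure_rate(scores, has_appendicitis, cutoff):
    # Low risk = score at or below the cut-off.
    low_risk = scores <= cutoff
    # Specificity: patients without appendicitis correctly labelled low risk.
    specificity = np.mean(low_risk[~has_appendicitis])
    # Failure rate: low-risk patients who actually had appendicitis.
    failure_rate = np.mean(has_appendicitis[low_risk]) if low_risk.any() else 0.0
    return specificity, failure_rate

rng = np.random.default_rng(7)
appendicitis = rng.random(1000) < 0.35
scores = rng.normal(loc=np.where(appendicitis, 10, 6), scale=2.5)  # higher = higher risk

spec, fail = specificity_and_failure_rate(scores, appendicitis, cutoff=8)
print(f"specificity {spec:.1%}, failure rate {fail:.1%}")
```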