
    Raw and Count Data Comparability of Hip-Worn ActiGraph GT3X+ and Link Accelerometers

    To enable inter- and intrastudy comparisons, it is important to ascertain comparability among accelerometer models. Purpose: The purpose of this study was to compare raw and count data between hip-worn ActiGraph GT3X+ and GT9X Link accelerometers. Methods: Adults (n = 26 (n = 15 women); age, 49.1 ± 20.0 yr) wore GT3X+ and Link accelerometers over the right hip for an 80-min protocol involving 12–21 sedentary, household, and ambulatory/exercise activities lasting 2–15 min each. For each accelerometer, mean and variance of the raw (60 Hz) data for each axis and vector magnitude (VM) were extracted in 30-s epochs. A machine learning model (Montoye 2015) was used to predict energy expenditure in METs from the raw data. Raw data were also processed into activity counts in 30-s epochs for each axis and VM, with Freedson 1998 and 2011 count-based regression models used to predict METs. Time spent in sedentary, light, moderate, and vigorous intensities was derived from predicted METs from each model. Correlations were calculated to compare raw and count data between accelerometers, and percent agreement was used to compare epoch-by-epoch activity intensity. Results: For raw data, correlations for mean acceleration were 0.96 ± 0.05, 0.89 ± 0.16, 0.71 ± 0.33, and 0.80 ± 0.28, and those for variance were 0.98 ± 0.02, 0.98 ± 0.03, 0.91 ± 0.06, and 1.00 ± 0.00 in the X, Y, and Z axes and VM, respectively. For count data, corresponding correlations were 1.00 ± 0.01, 0.98 ± 0.02, 0.96 ± 0.04, and 1.00 ± 0.00, respectively. Freedson 1998 and 2011 count-based models had significantly higher percent agreement for activity intensity (95.1% ± 5.6% and 95.5% ± 4.0%) compared with the Montoye 2015 raw data model (61.5% ± 27.6%; P < 0.001). Conclusions: Count data were more highly comparable than raw data between accelerometers. Data filtering and/or more robust raw data models are needed to improve raw data comparability between ActiGraph GT3X+ and Link accelerometers.
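
    A minimal sketch of the kind of epoch-level comparison described above: per-epoch means and variances for each axis and vector magnitude, between-monitor Pearson correlations of those summaries, and epoch-by-epoch percent agreement of intensity classifications. The data layout (columns x, y, z sampled at 60 Hz) and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd

def epoch_summaries(raw, fs=60, epoch_s=30):
    """Collapse raw tri-axial data (columns x, y, z at fs Hz) into
    per-epoch mean and variance of each axis and vector magnitude."""
    raw = raw.copy()
    raw["vm"] = np.sqrt(raw["x"]**2 + raw["y"]**2 + raw["z"]**2)
    epoch = np.arange(len(raw)) // (fs * epoch_s)
    grouped = raw.groupby(epoch)[["x", "y", "z", "vm"]]
    return grouped.mean(), grouped.var()

def between_monitor_correlations(summary_a, summary_b):
    """Pearson correlation between two monitors for each signal."""
    return {col: float(np.corrcoef(summary_a[col], summary_b[col])[0, 1])
            for col in summary_a.columns}

def percent_agreement(intensity_a, intensity_b):
    """Epoch-by-epoch agreement (%) of categorical intensity labels."""
    a, b = np.asarray(intensity_a), np.asarray(intensity_b)
    return 100.0 * np.mean(a == b)
```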

    Development of Cut-points for Determining Activity Intensity From a Wrist-worn ActiGraph Accelerometer in Free-living Adults

    Despite recent popularity of wrist-worn accelerometers for assessing free-living physical behaviours, there is a lack of user-friendly methods to characterize physical activity from a wrist-worn ActiGraph accelerometer. Participants in this study completed a laboratory protocol and/or 3–8 hours of directly observed free-living (criterion measure of activity intensity) while wearing ActiGraph GT9X Link accelerometers on the right hip and non-dominant wrist. All laboratory data (n = 36) and 11 participants’ free-living data were used to develop vector magnitude count cut-points (counts/min) for activity intensity for the wrist-worn accelerometer, and 12 participants’ free-living data were used to cross-validate cut-point accuracy. The cut-points were: <2,860 counts/min (sedentary); 2,860–3,940 counts/min (light); and ≥3,941 counts/min (moderate-to-vigorous (MVPA)). These cut-points had an accuracy of 70.8% for assessing free-living activity intensity, whereas Sasaki/Freedson cut-points for the hip accelerometer had an accuracy of 77.1%, and Hildebrand Euclidean Norm Minus One (ENMO) cut-points for the wrist accelerometer had an accuracy of 75.2%. While accuracy was higher for a hip-worn accelerometer and for ENMO wrist cut-points, the high wear compliance of wrist accelerometers shown in past work and the ease of use of count-based analysis methods may justify use of these developed cut-points until more accurate, equally usable methods can be developed.
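
    As a worked illustration, the cut-points reported above can be applied directly to 1-min wrist vector-magnitude counts. The function name and example counts below are hypothetical; only the thresholds come from the abstract.

```python
def classify_wrist_vm_counts(counts_per_min):
    """Classify a 1-min epoch using the wrist vector-magnitude cut-points
    reported above: <2,860 sedentary; 2,860-3,940 light; >=3,941 MVPA."""
    if counts_per_min < 2860:
        return "sedentary"
    elif counts_per_min <= 3940:
        return "light"
    return "MVPA"

# Illustrative 1-min vector-magnitude counts (not study data)
daily_counts = [1200, 3050, 4200, 5100, 900]
minutes_mvpa = sum(classify_wrist_vm_counts(c) == "MVPA" for c in daily_counts)
# minutes_mvpa == 2
```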

    The Vermont oxford neonatal encephalopathy registry: Rationale, methods, and initial results

    BACKGROUND: In 2006, the Vermont Oxford Network (VON) established the Neonatal Encephalopathy Registry (NER) to characterize infants born with neonatal encephalopathy, describe evaluations and medical treatments, monitor hypothermic therapy (HT) dissemination, define clinical research questions, and identify opportunities for improved care. METHODS: Eligible infants were ≥ 36 weeks' gestation with seizures, altered consciousness (stupor, coma) during the first 72 hours of life, a 5-minute Apgar score of ≤ 3, or receiving HT. Infants with central nervous system birth defects were excluded. RESULTS: From 2006 to 2010, 95 centers registered 4232 infants. Of those, 59% suffered a seizure, 50% had a 5-minute Apgar score of ≤ 3, 38% received HT, and 18% had stupor/coma documented on neurologic exam. Some infants met more than one eligibility criterion. Only 53% had a cord gas obtained and only 63% had a blood gas obtained within 24 hours of birth, important components for determining HT eligibility. Sixty-four percent received ventilator support, 65% received anticonvulsants, 66% had a head MRI, 23% had a cranial CT, 67% had a full-channel electroencephalogram (EEG), and 33% had an amplitude-integrated EEG. Of all infants, 87% survived. CONCLUSIONS: The VON NER describes the heterogeneous population of infants with NE, the subset that received HT, their patterns of care, and outcomes. The optimal routine care of infants with neonatal encephalopathy is unknown. The registry method is well suited to identify opportunities for improvement in the care of infants affected by NE and to study interventions such as HT as they are implemented in clinical practice.

    Bacterial genomics reveal the complex epidemiology of an emerging pathogen in Arctic and boreal ungulates

    Northern ecosystems are currently experiencing unprecedented ecological change, largely driven by a rapidly changing climate. Pathogen range expansion, and emergence and altered patterns of infectious disease, are increasingly reported in wildlife at high latitudes. Understanding the causes and consequences of shifting pathogen diversity and host-pathogen interactions in these ecosystems is important for wildlife conservation, and for indigenous populations that depend on wildlife. Among the key questions are whether disease events are associated with endemic or recently introduced pathogens, and whether emerging strains are spreading throughout the region. In this study, we used a phylogenomic approach to address these questions of pathogen endemicity and spread for Erysipelothrix rhusiopathiae, an opportunistic multi-host bacterial pathogen associated with recent mortalities in arctic and boreal ungulate populations in North America. We isolated E. rhusiopathiae from carcasses associated with large-scale die-offs of muskoxen in the Canadian Arctic Archipelago, and from contemporaneous mortality events and/or population declines among muskoxen in northwestern Alaska and caribou and moose in western Canada. Bacterial genomic diversity differed markedly among these locations; minimal divergence was present among isolates from muskoxen in the Canadian Arctic, while in caribou and moose populations, strains from highly divergent clades were isolated from the same location, or even from within a single carcass. These results indicate that mortalities among northern ungulates are not associated with a single emerging strain of E. rhusiopathiae, and that alternate hypotheses need to be explored. Our study illustrates the value and limitations of bacterial genomic data for discriminating between ecological hypotheses of disease emergence, and highlights the importance of studying emerging pathogens within the broader context of environmental and host factors.

    A Consensus Method for Estimating Physical Activity Levels in Adults Using Accelerometry

    Identifying the best analytical approach for capturing moderate-to-vigorous physical activity (MVPA) using accelerometry is complex, and the inconsistent approaches employed in research and surveillance limit comparability. We illustrate the use of a consensus method that pools estimates from multiple approaches for characterising MVPA using accelerometry. Participants (n = 30) wore an accelerometer on their right hip during two laboratory visits. Ten individual classification methods estimated minutes of MVPA, including cut-point, two-regression, and machine learning approaches, using open-source count and raw inputs and several epoch lengths. Results were averaged to derive the consensus estimate. Mean MVPA ranged from 33.9 to 50.4 min across individual methods, but only one (38.9 min) was statistically equivalent to the criterion of direct observation (38.2 min). The consensus estimate (39.2 min) was equivalent to the criterion (even after removal of the one individual method that was equivalent to the criterion), had a smaller mean absolute error (4.2 min) than individual methods (4.9–12.3 min), and enabled the estimation of participant-level variance (mean standard deviation: 7.7 min). The consensus method allows for addition/removal of methods depending on data availability or field progression and may improve accuracy and comparability of device-based MVPA estimates while limiting variability due to convergence between estimates.
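
    A minimal sketch of the pooling idea, assuming a participants-by-methods matrix of MVPA estimates: the consensus is the row-wise mean, mean absolute error is computed against the criterion, and the across-method standard deviation gives the participant-level spread. All numbers below are placeholders, not the study's data.

```python
import numpy as np

# Rows = participants, columns = individual classification methods
# (cut-point, two-regression, machine learning, ...); values are
# estimated minutes of MVPA. Placeholder values for illustration.
estimates = np.array([
    [35.0, 42.5, 39.0, 48.0],
    [30.5, 36.0, 33.5, 41.0],
    [44.0, 51.0, 46.5, 55.0],
])
criterion = np.array([38.0, 32.0, 45.0])  # e.g., direct observation

consensus = estimates.mean(axis=1)                      # pooled estimate per participant
mae_individual = np.abs(estimates - criterion[:, None]).mean(axis=0)
mae_consensus = np.abs(consensus - criterion).mean()
participant_spread = estimates.std(axis=1, ddof=1)      # across-method variability
```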

    Ocean Dumping of Containerized DDT Waste Was a Sloppy Process

    Author Posting. © American Chemical Society, 2019. This article is posted here by permission of American Chemical Society for personal use, not for redistribution. The definitive version was published in Kivenson, V., Lemkau, K. L., Pizarro, O., Yoerger, D. R., Kaiser, C., Nelson, R. K., Carmichael, C., Paul, B. G., Reddy, C. M., & Valentine, D. L. (2019). Ocean Dumping of Containerized DDT Waste Was a Sloppy Process. Environmental Science and Technology (2019), doi:10.1021/acs.est.8b05859.

    Industrial-scale dumping of organic waste to the deep ocean was once common practice, leaving a legacy of chemical pollution for which a paucity of information exists. Using a nested approach with autonomous and remotely operated underwater vehicles, a dumpsite offshore California was surveyed and sampled. Discarded waste containers littered the site and structured the suboxic benthic environment. Dichlorodiphenyltrichloroethane (DDT) was reportedly dumped in the area, and sediment analysis revealed substantial variability in concentrations of p,p′-DDT and its analogs, with a peak concentration of 257 μg g⁻¹, ∼40 times greater than the highest level of surface sediment contamination at the nearby DDT Superfund site. The occurrence of a conspicuous hydrocarbon mixture suggests that multiple petroleum distillates, potentially used in DDT manufacture, contributed to the waste stream. Application of a two end-member mixing model with DDTs and polychlorinated biphenyls enabled source differentiation between shelf discharge and containerized waste. Ocean dumping was found to be the major source of DDT to more than 3000 km² of the region’s deep seafloor. These results reveal that ocean dumping of containerized DDT waste was inherently sloppy, with the contents readily breaching containment and leading to regional-scale contamination of the deep benthos.

    This material is based upon work supported by the National Science Foundation Graduate Research Fellowship for V.K. under Grant No. 1650114. Expeditions AT-18-11 and AT-26-06 were funded by the NSF (OCE-0961725 and OCE-1046144). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We thank the captain and crew of the RV Atlantis, the pilots and crew of the ROV Jason, the crew of the AUV Sentry, the scientific party of the AT-18-11 and AT-26-06 expeditions, Justin Tran for assistance with the preparation of multibeam data, M. Indira Venkatesan for a helpful discussion of the NOAA datasets, and Nathan Dodder for advice on the procedure for compound identification.
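
    The two end-member mixing model mentioned above is, in its generic form, a linear unmixing of a conservative tracer ratio between two candidate sources. The sketch below shows only that generic form; the tracer, end-member values, and the study's exact parameterization are not given in the abstract, and the numbers are placeholders.

```python
def mixing_fraction(ratio_sample, ratio_end1, ratio_end2):
    """Fraction of a sample attributable to end member 1 in a simple
    two end-member mixing model based on a conservative tracer ratio
    (e.g., a DDT-to-PCB ratio): f1 = (R_s - R_2) / (R_1 - R_2)."""
    return (ratio_sample - ratio_end2) / (ratio_end1 - ratio_end2)

# Placeholder example: end member 1 = containerized waste,
# end member 2 = shelf discharge
f_containerized = mixing_fraction(ratio_sample=4.0, ratio_end1=6.0, ratio_end2=1.0)
# f_containerized == 0.6, i.e., 60% attributed to dumped containers
```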

    Floating oil-covered debris from Deepwater Horizon: identification and application

    Author Posting. © IOP Publishing, 2012. This article is posted here by permission of IOP Publishing. Re-use is limited to non-commercial purposes. The definitive version was published in Environmental Research Letters 7 (2012): 015301, doi:10.1088/1748-9326/7/1/015301.

    The discovery of oiled and non-oiled honeycomb material in the Gulf of Mexico surface waters and along coastal beaches shortly after the explosion of Deepwater Horizon sparked debate about its origin and the oil covering it. We show that the unknown pieces of oiled and non-oiled honeycomb material collected in the Gulf of Mexico were pieces of the riser pipe buoyancy module of Deepwater Horizon. Biomarker ratios confirmed that the oil had originated from the Macondo oil well and had undergone significant weathering. Using the National Oceanic and Atmospheric Administration's records of the oil spill trajectory at the sea surface, we show that the honeycomb material preceded the front edge of the uncertainty of the oil slick trajectory by several kilometers. We conclude that the observation of debris fields deriving from damaged marine materials may be incorporated into emergency response efforts, forecasting of coastal impacts during future offshore oil spills, and ground-truthing of predictive models.

    This research was supported by NSF grant OCE-1043976 to CR.

    Cauchy's infinitesimals, his sum theorem, and foundational paradigms

    Cauchy's sum theorem is a prototype of what is today a basic result on the convergence of a series of functions in undergraduate analysis. We seek to interpret Cauchy's proof, and discuss the related epistemological questions involved in comparing distinct interpretive paradigms. Cauchy's proof is often interpreted in the modern framework of a Weierstrassian paradigm. We analyze Cauchy's proof closely and show that it finds closer proxies in a different modern framework. Keywords: Cauchy's infinitesimal; sum theorem; quantifier alternation; uniform convergence; foundational paradigms. Comment: 42 pages; to appear in Foundations of Science.
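
    For orientation, the result the abstract alludes to is stated below in its standard modern (Weierstrassian) form; this is the textbook version, not Cauchy's own formulation, which the paper argues reads more naturally in a different modern framework.

```latex
% Standard modern (Weierstrassian) form of the sum theorem, for orientation.
\[
  \text{If each } f_n \colon I \to \mathbb{R} \text{ is continuous and }
  \sum_{n=1}^{\infty} f_n \text{ converges uniformly on } I \text{ to } s,
  \text{ then } s \text{ is continuous on } I.
\]
```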