935 research outputs found

    Alien Registration- Parsons, Joseph W. (Paris, Oxford County)


    Injury Rates in Age-Only Versus Age-and-Weight Playing Standard Conditions in American Youth Football

    BACKGROUND: American youth football leagues are typically structured under either age-only (AO) or age-and-weight (AW) playing standard conditions, which group players by age alone in the former case and by a combination of age and weight in the latter. However, no study has systematically compared injury risk between these 2 playing standards. PURPOSE: To compare injury rates between youth tackle football players in the AO and AW playing standard conditions. STUDY DESIGN: Cohort study; Level of evidence, 2. METHODS: Athletic trainers evaluated and recorded injuries at each practice and game during the 2012 and 2013 football seasons. Players (age, 5-14 years) were drawn from 13 recreational leagues across 6 states. The sample included 4092 athlete-seasons (AW, 2065; AO, 2027) from 210 teams (AW, 106; AO, 104). Injury rate ratios (RRs) with 95% CIs were used to compare the playing standard conditions. Multivariate Poisson regression was used to estimate RRs adjusted for residual effects of age and clustering by team and league. There were 4 endpoints of interest: (1) any injury, (2) non-time-loss (NTL) injuries only, (3) time-loss (TL) injuries only, and (4) concussions only. RESULTS: Over 2 seasons, the cohort accumulated 1475 injuries and 142,536 athlete-exposures (AEs). The most common injuries were contusions (34.4%), ligament sprains (16.3%), concussions (9.6%), and muscle strains (7.8%). The overall injury rate for both playing standard conditions combined was 10.3 per 1000 AEs (95% CI, 9.8-10.9). The TL injury, NTL injury, and concussion rates for both playing standard conditions combined were 3.1, 7.2, and 1.0 per 1000 AEs, respectively. In multivariate Poisson regression models controlling for age, team, and league, no difference was found between playing standard conditions in the overall injury rate (RR_overall, 1.1; 95% CI, 0.4-2.6). Rates for the other 3 endpoints were also similar (RR_NTL, 1.1 [95% CI, 0.4-3.0]; RR_TL, 0.9 [95% CI, 0.4-1.9]; RR_concussion, 0.6 [95% CI, 0.3-1.4]). CONCLUSION: For the injury endpoints examined in this study, injury rates were similar under the AO and AW playing standards. Future research should examine other policies, rules, and behavioral factors that may affect injury risk within youth football.
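The overall rate and CI quoted in this abstract can be reproduced from the raw counts it reports. This is a minimal back-of-the-envelope sketch, not the study's actual analysis (which used multivariate Poisson regression with clustering by team and league); the normal-approximation CI here is an assumption for illustration.

```python
import math

injuries, exposures = 1475, 142_536  # counts reported in the abstract

# Crude injury rate per 1000 athlete-exposures (AEs)
rate = injuries / exposures * 1000

# Normal-approximation 95% CI for a Poisson count (a simplification)
se = math.sqrt(injuries) / exposures * 1000
lo, hi = rate - 1.96 * se, rate + 1.96 * se

print(f"{rate:.1f} per 1000 AEs (95% CI, {lo:.1f}-{hi:.1f})")
# -> 10.3 per 1000 AEs (95% CI, 9.8-10.9), matching the abstract
```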

    Surveying the Dynamic Radio Sky with the Long Wavelength Demonstrator Array

    This paper presents a search for radio transients at a frequency of 73.8 MHz (4 m wavelength) using the all-sky imaging capabilities of the Long Wavelength Demonstrator Array (LWDA). The LWDA was a 16-dipole phased array telescope, located on the site of the Very Large Array in New Mexico. The field of view of the individual dipoles was essentially the entire sky, and the number of dipoles was sufficiently small that a simple software correlator could be used to make all-sky images. From 2006 October to 2007 February, we conducted an all-sky transient search program, acquiring a total of 106 hr of data; the time sampling varied, being 5 minutes at the start of the program and improving to 2 minutes by the end of the program. We were able to detect solar flares, and in a special-purpose mode, radio reflections from ionized meteor trails during the 2006 Leonid meteor shower. We detected no transients originating outside of the solar system above a flux density limit of 500 Jy, equivalent to a limit of no more than about 10^{-2} events/yr/deg^2, having a pulse energy density >~ 1.5 x 10^{-20} J/m^2/Hz at 73.8 MHz for pulse widths of about 300 s. This event rate is comparable to that determined from previous all-sky transient searches, but at a lower frequency than most previous all-sky searches. We believe that the LWDA illustrates how an all-sky imaging mode could be a useful operational model for low-frequency instruments such as the Low Frequency Array, the Long Wavelength Array station, the low-frequency component of the Square Kilometre Array, and potentially the Lunar Radio Array. Comment: 20 pages; accepted for publication in A
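The quoted rate limit of about 10^{-2} events/yr/deg^2 can be roughly checked from the 106 hr of all-sky coverage. This order-of-magnitude sketch assumes a Poisson 95% upper limit of ~3 events for zero detections and a full-sky area of ~41,253 deg^2; both figures are assumptions for illustration, not taken from the paper.

```python
# Order-of-magnitude check of the transient rate limit from 106 hr
# of all-sky monitoring with zero detections.
HOURS_PER_YEAR = 8766          # average, including leap years
SKY_AREA_DEG2 = 41_253         # whole sky, assumed field of view

obs_time_yr = 106 / HOURS_PER_YEAR        # 106 hr of data
exposure = obs_time_yr * SKY_AREA_DEG2    # surveyed yr * deg^2
rate_limit = 3.0 / exposure               # events / yr / deg^2

print(f"{rate_limit:.1e} events/yr/deg^2")
# ~6e-3, consistent with the quoted limit of about 10^-2
```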

    An empirical evaluation of camera trap study design: How many, how long and when?

    Camera traps deployed in grids or stratified random designs are a well‐established survey tool for wildlife but there has been little evaluation of study design parameters. We used an empirical subsampling approach involving 2,225 camera deployments run at 41 study areas around the world to evaluate three aspects of camera trap study design (number of sites, duration and season of sampling) and their influence on the estimation of three ecological metrics (species richness, occupancy and detection rate) for mammals. We found that 25–35 camera sites were needed for precise estimates of species richness, depending on scale of the study. The precision of species‐level estimates of occupancy (ψ) was highly sensitive to occupancy level: fewer sites were needed for precise estimates of common (ψ > 0.75) species, but more than 150 camera sites were likely needed for rare (ψ < 0.25) species. Species detection rates were more difficult to estimate precisely at the grid level due to spatial heterogeneity, presumably driven by unaccounted habitat variability factors within the study area. Running a camera at a site for 2 weeks was most efficient for detecting new species, but 3–4 weeks were needed for precise estimates of local detection rate, with no gains in precision observed after 1 month. Metrics for all mammal communities were sensitive to seasonality, with 37%–50% of the species at the sites we examined fluctuating significantly in their occupancy or detection rates over the year. This effect was more pronounced in temperate sites, where seasonally sensitive species varied in relative abundance by an average factor of 4–5, and some species were completely absent in one season due to hibernation or migration. We recommend the following guidelines to efficiently obtain precise estimates of species richness, occupancy and detection rates with camera trap arrays: run each camera for 3–5 weeks across 40–60 sites per array. We recommend comparisons of detection rates be model based and include local covariates to help account for small‐scale variation. Furthermore, comparisons across study areas or times must account for seasonality, which could have strong impacts on mammal communities in both tropical and temperate sites.

    Screen for IDH1, IDH2, IDH3, D2HGDH and L2HGDH Mutations in Glioblastoma

    Isocitrate dehydrogenases (IDHs) catalyse oxidative decarboxylation of isocitrate to α-ketoglutarate (α-KG). IDH1 functions in the cytosol and peroxisomes, whereas IDH2 and IDH3 are both localized in the mitochondria. Heterozygous somatic mutations in IDH1 occur at codon 132 in 70% of grade II–III gliomas and secondary glioblastomas (GBMs), and in 5% of primary GBMs. Mutations in IDH2 at codon 172 are present in grade II–III gliomas at a low frequency. IDH1 and IDH2 mutations cause both loss of normal enzyme function and gain-of-function, causing reduction of α-KG to D-2-hydroxyglutarate (D-2HG), which accumulates. Excess hydroxyglutarate (2HG) can also be caused by germline mutations in D- and L-2-hydroxyglutarate dehydrogenases (D2HGDH and L2HGDH). If loss of IDH function is critical for tumourigenesis, we might expect some tumours to acquire somatic IDH3 mutations. Alternatively, if 2HG accumulation is critical, some tumours might acquire somatic D2HGDH or L2HGDH mutations. We therefore screened 47 glioblastoma samples looking for changes in these genes. Although IDH1 R132H was identified in 12% of samples, no mutations were identified in any of the other genes. This suggests that mutations in IDH3, D2HGDH and L2HGDH do not occur at an appreciable frequency in GBM. One explanation is simply that mono-allelic IDH1 and IDH2 mutations occur more frequently by chance than the bi-allelic mutations expected at IDH3, D2HGDH and L2HGDH. Alternatively, both loss of IDH function and 2HG accumulation might be required for tumourigenesis, and only IDH1 and IDH2 mutations have these dual effects.

    Persistent Lipophilic Environmental Chemicals and Endometriosis: The ENDO Study

    Background: An equivocal literature exists regarding the relation between persistent organochlorine pollutants (POPs) and endometriosis in women, with differences attributed to methodologies.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg^2 with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world. Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie
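The coadded depth quoted in this abstract can be sanity-checked against the single-visit depth: stacking N visits deepens a point-source limit by roughly 1.25 log10(N) magnitudes. This is a rough sketch; the ~180 r-band visit count is an assumption (the ~800 total is summed over all six bands), not a figure from the paper.

```python
import math

# Rough coadded-depth check: N stacked visits deepen the 5-sigma
# point-source limit by 2.5*log10(sqrt(N)) = 1.25*log10(N) mag.
single_visit_r = 24.5   # single-visit r-band depth (AB), from the abstract
visits_r = 180          # assumed r-band share of the ~800 total visits

coadd_r = single_visit_r + 1.25 * math.log10(visits_r)
print(f"r ~ {coadd_r:.1f}")
# ~27.3, close to the quoted coadded depth of r ~ 27.5
```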

    Single hadron response measurement and calorimeter jet energy scale uncertainty with the ATLAS detector at the LHC

    The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of sqrt(s) = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_s and Lambda particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale. Comment: 24 pages plus author list (36 pages total), 23 figures, 1 table, submitted to European Physical Journal