The role of the stratospheric polar vortex for the austral jet response to greenhouse gas forcing
Future shifts of the austral midlatitude jet are subject to large uncertainties in climate model projections. Here we show that, in addition to other previously identified sources of inter-model uncertainty, changes in the timing of the stratospheric polar vortex breakdown modulate the austral jet response to greenhouse gas forcing during summertime (December-February). The relationship is such that a larger delay in vortex breakdown favors a more poleward jet shift, with an estimated 0.7-0.8-degree increase in jet shift per 10-day delay in vortex breakdown. The causality of the link between the timing of the vortex breakdown and the tropospheric jet response is demonstrated through climate modeling experiments with imposed changes in the seasonality of the stratospheric polar vortex. The vortex response is estimated to account for about 30% of the inter-model variance in the shift of the summertime austral jet, and for about 45% of the mean jet shift.
Modelling the relationship between relative load and match outcome in junior tennis players
The acute:chronic workload ratio (ACWR) is a metric that can be used to monitor training loads during sport. Over the last decade researchers have investigated how this metric relates to injury, yet little consideration has been given to how it interacts with performance. Two prospective longitudinal studies were implemented investigating internal and external ACWRs and match outcome in junior tennis players. Forty-two and 24 players were recruited to participate in the internal and external load studies, respectively. Internal load was measured using session rating of perceived exertion, while external load was defined as total swing counts. The main dependent variable was tennis match performance, which was extracted from the Universal Tennis Rating website. The ACWRs for internal and external load were the primary independent variables. Acute load was defined as the total load for one week, while a 4-week rolling average represented chronic load. There were no significant associations between internal (p = .23) or external (p = .81) ACWR and tennis match performance as assessed by multivariate regression. The ACWRs in these datasets were close to 1.00, indicating that these athletes entered match play with a balanced training load; however, this balance was not related to match success.
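As a worked illustration of the metric defined above, here is a minimal sketch of an ACWR calculation, assuming weekly load totals have already been summed; the function name and sample numbers are illustrative, not from the study:

```python
# Minimal sketch of the acute:chronic workload ratio (ACWR) described above.
# Assumes `weekly_loads` is a chronological list of weekly training-load totals
# (e.g., summed session-RPE or swing counts); names and values are illustrative.

def acwr(weekly_loads, week_index):
    """ACWR for a given week: acute load (that week) divided by chronic load
    (rolling 4-week average ending at that week)."""
    if week_index < 3:
        raise ValueError("Need at least 4 weeks of data for the chronic load")
    acute = weekly_loads[week_index]
    chronic = sum(weekly_loads[week_index - 3:week_index + 1]) / 4.0
    return acute / chronic

# Example: a balanced load (ACWR near 1.00), as reported for these athletes.
loads = [1200, 1100, 1250, 1180, 1210]
print(round(acwr(loads, 4), 2))  # -> 1.02
```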
Do Scapular Kinematics Alter during the Performance of the Scapular Assistance Test and Scapular Retraction Test: A Pilot Study
Objective: To describe to what degree and in what plane biomechanical alterations occur during the performance of the Scapular Retraction Test (SRT) and the Scapular Assistance Test (SAT).
Design: Laboratory Pilot Study
Participants: Eight symptomatic and seven asymptomatic subjects were instrumented with electromagnetic sensors.
Main Outcome Measures: The SRT and SAT were performed with the scapula stabilized and unstabilized. The scapular kinematic variables of posterior tilt, internal rotation, upward rotation, protraction, and elevation were measured during both tests.
Results: Descriptive analysis of scapular kinematics suggested that posterior tilt was primarily increased during both clinical tests in both groups. Scapular elevation decreased in both groups, indicating that the scapula was depressed during the SRT. There was no meaningful change in force during the SRT.
Conclusion: These findings suggest that both the SRT and SAT alter scapular motion in both groups. Interpretation of these results is limited by the small sample size and large confidence intervals, but they suggest that these tests change specific positions of the scapula. Further research into these tests is needed to confirm these biomechanical alterations and to determine the value of these tests when developing rehabilitation protocols for patients with shoulder pain.
Increasing Ball Velocity in the Overhead Athlete: A Meta-Analysis of Randomized Controlled Trials
Overhead athletes routinely search for ways to improve sport performance, and one component of performance is ball velocity. The purpose of this meta-analysis was to investigate the effect of different strengthening interventions on ball and serve velocity. A comprehensive literature search with pre-set inclusion and exclusion criteria from 1970 to 2014 was conducted. Eligible studies were randomized controlled trials reporting the means and SDs of both pretest and posttest ball velocities in both the experimental and the control groups. The outcome of interest was ball/serve velocity in baseball, tennis, or softball athletes. Evidence of Level 2 or higher was examined to determine the effect different training interventions had on velocity. Pretest and posttest data were extracted to calculate Hedges's g effect sizes with 95% confidence intervals (CIs). The methodological quality of the final 13 articles in the analysis was assessed using the Physiotherapy Evidence Database scale. The majority of the interventions in the included articles had an effect on velocity, with the strongest effect sizes found for periodized training (Hedges's g = 3.445; 95% CI = 1.976-4.914). Six studies had CIs that crossed zero, indicating that those specific interventions should be interpreted with caution. Consistent and high-quality evidence exists that specific resistance training interventions have an effect on velocity. These findings suggest that interventions consisting of isokinetic training, multimodal training, and periodized training are clinically beneficial for increasing velocity in the overhead athlete over different windows of time.
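For reference, the Hedges's g effect sizes reported above are standardized mean differences with a small-sample correction. A minimal sketch of the generic two-group form follows; the meta-analysis worked from pretest and posttest data, so its exact calculation may differ, and all numbers below are hypothetical:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standard Hedges's g: standardized mean difference between two groups,
    scaled by the pooled SD and the small-sample correction factor J.
    (The meta-analysis above may have computed g from change scores; this is
    the generic two-group form.)"""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction
    return j * d

# Hypothetical posttest ball velocities (m/s) for experimental vs. control groups.
print(round(hedges_g(35.2, 2.1, 15, 33.0, 2.3, 15), 2))
```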
Reliability of an Observational Method Used to Assess Tennis Serve Mechanics in a Group of Novice Raters
Background: Previous research developed an observational tennis serve analysis (OTSA) tool to assess serve mechanics. The OTSA has displayed substantial agreement between the two health care professionals who developed the tool; however, it is currently unknown whether the OTSA is reliable when administered by novice users.
Purpose: The purpose of this investigation was to determine if reliability for the OTSA could be established in novice users via an interactive classroom training session.
Methods: Eight observers underwent a classroom instructional training protocol highlighting the OTSA. Following training, the observers participated in two rating sessions approximately a week apart. Each observer independently viewed 16 non-professional tennis players performing a first serve and rated each serve using the OTSA. Both intra- and inter-observer reliability were determined using kappa coefficients.
Results: Kappa coefficients for intra- and inter-observer agreement ranged from 0.09 to 0.83 depending on the body position. A majority of body positions yielded moderate agreement or higher.
Conclusion: This study suggests that the majority of components associated with the OTSA are reliable and can be taught to novice users via a classroom training session.
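The kappa coefficients reported above quantify rater agreement beyond chance. A minimal sketch of unweighted Cohen's kappa for two raters is shown below; the study's exact category scheme and any weighting are not reproduced, and the ratings are hypothetical:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters labeling the same items.
    Illustrative of the agreement statistic used in the OTSA study."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail ratings of 16 serves by two novice observers.
a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass",
     "pass", "pass", "fail", "fail", "pass", "fail", "pass", "pass"]
b = ["pass", "fail", "pass", "fail", "fail", "pass", "fail", "pass",
     "pass", "pass", "pass", "fail", "pass", "fail", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))  # -> 0.61 (moderate agreement)
```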
Robust Machine Learning Applied to Astronomical Datasets III: Probabilistic Photometric Redshifts for Galaxies and Quasars in the SDSS and GALEX
We apply machine learning in the form of a nearest neighbor instance-based algorithm (NN) to generate full photometric redshift probability density functions (PDFs) for objects in the Fifth Data Release of the Sloan Digital Sky Survey (SDSS DR5). We use a conceptually simple but novel application of NN to generate the PDFs - perturbing the object colors by their measurement error - and using the resulting instances of nearest neighbor distributions to generate numerous individual redshifts. When the redshifts are compared to existing SDSS spectroscopic data, we find that the mean value of each PDF has a dispersion between the photometric and spectroscopic redshift consistent with other machine learning techniques, being sigma = 0.0207 +/- 0.0001 for main sample galaxies to r < 17.77 mag, sigma = 0.0243 +/- 0.0002 for luminous red galaxies to r < ~19.2 mag, and sigma = 0.343 +/- 0.005 for quasars to i < 20.3 mag. The PDFs allow the selection of subsets with improved statistics. For quasars, the improvement is dramatic: for those with a single peak in their probability distribution, the dispersion is reduced from 0.343 to sigma = 0.117 +/- 0.010, and the photometric redshift is within 0.3 of the spectroscopic redshift for 99.3 +/- 0.1% of the objects. Thus, for this optical quasar sample, we can virtually eliminate 'catastrophic' photometric redshift estimates. In addition to the SDSS sample, we incorporate ultraviolet photometry from the Third Data Release of the Galaxy Evolution Explorer All-Sky Imaging Survey (GALEX AIS GR3) to create PDFs for objects seen in both surveys. For quasars, the increased coverage of the observed-frame UV of the SED results in significant improvement over the full SDSS sample, with sigma = 0.234 +/- 0.010. We demonstrate that this improvement is genuine. [Abridged]
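A rough sketch of the perturbation-based nearest-neighbor idea described above follows, using a scikit-learn k-NN regressor as a stand-in for the authors' NN implementation; all data, array names, and parameter values are illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Sketch of the perturbation-based nearest-neighbor photo-z PDF idea described
# above (not the authors' implementation). `train_colors`, `train_z`,
# `obj_colors`, and `obj_color_err` are placeholders for a spectroscopic
# training set and a target object's colors and color errors.

def photoz_pdf_samples(train_colors, train_z, obj_colors, obj_color_err,
                       n_perturb=100, k=10, rng=None):
    """Return redshift samples for one object by repeatedly perturbing its
    colors by their measurement errors and querying a k-NN estimator."""
    if rng is None:
        rng = np.random.default_rng(0)
    knn = KNeighborsRegressor(n_neighbors=k).fit(train_colors, train_z)
    samples = []
    for _ in range(n_perturb):
        perturbed = obj_colors + rng.normal(0.0, obj_color_err)
        samples.append(knn.predict(perturbed.reshape(1, -1))[0])
    return np.array(samples)  # histogram these samples to form the PDF

# Toy usage with random data standing in for SDSS colors and redshifts.
rng = np.random.default_rng(1)
train_colors = rng.normal(size=(500, 4))
train_z = rng.uniform(0, 0.3, size=500)
samples = photoz_pdf_samples(train_colors, train_z,
                             train_colors[0], np.full(4, 0.05), rng=rng)
print(samples.mean(), samples.std())
```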
Mapping neighborhood scale survey responses with uncertainty metrics
This paper presents a methodology for mapping population-centric social, infrastructural, and environmental metrics at neighborhood scale. The methodology extends traditional survey analysis methods to create cartographic products useful in agent-based modeling and geographic information analysis. It utilizes and synthesizes survey microdata, sub-upazila attributes, land use information, and ground-truth locations of attributes to create neighborhood-scale multi-attribute maps. Monte Carlo methods are employed to combine any number of survey responses, to stochastically weight survey cases, and to simulate survey cases' locations in a study area. Through such Monte Carlo methods, known errors from each of the input sources can be retained. By keeping individual survey cases as the atomic unit of data representation, this methodology ensures that important covariates are retained and that the ecological inference fallacy is eliminated. These techniques are demonstrated with a case study from the Chittagong Division in Bangladesh. The results provide a population-centric understanding of many social, infrastructural, and environmental metrics desired in humanitarian aid and disaster relief planning and operations wherever long-term familiarity is lacking. Of critical importance is that the resulting products include an easy-to-use, explicit representation of the errors and uncertainties of each of the input sources via automatically generated summary statistics created at the application's geographic scale.
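The Monte Carlo procedure described above can be sketched roughly as follows; the weighting scheme, spatial jitter, map cell, and attribute below are stand-ins for illustration, not the paper's actual model:

```python
import numpy as np

# Hedged sketch of the Monte Carlo idea described above: stochastically weight
# survey cases and simulate their locations so that input errors propagate to
# neighborhood-scale map estimates. All names, values, and distributions are
# illustrative.

rng = np.random.default_rng(0)
n_cases, n_draws = 200, 1000

case_weights = rng.uniform(0.5, 1.5, n_cases)     # stand-in survey weights
case_attribute = rng.binomial(1, 0.4, n_cases)    # e.g., access to safe water
case_xy = rng.uniform(0, 10, size=(n_cases, 2))   # approximate case locations (km)
location_err = 0.5                                # positional uncertainty (km)

cell_estimates = []
for _ in range(n_draws):
    # Resample cases in proportion to their weights and jitter their locations.
    idx = rng.choice(n_cases, n_cases, p=case_weights / case_weights.sum())
    xy = case_xy[idx] + rng.normal(0, location_err, size=(n_cases, 2))
    in_cell = (xy[:, 0] < 5) & (xy[:, 1] < 5)     # one example map cell
    cell_estimates.append(case_attribute[idx][in_cell].mean())

# Per-cell mean and uncertainty, analogous to the summary statistics the
# methodology reports at the application's geographic scale.
print(np.nanmean(cell_estimates), np.nanstd(cell_estimates))
```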
Longitudinal assessment of demographic representativeness in the Medical Imaging and Data Resource Center open data commons
Purpose: The Medical Imaging and Data Resource Center (MIDRC) open data commons was launched to accelerate the development of artificial intelligence (AI) algorithms to help address the COVID-19 pandemic. The purpose of this study was to quantify the longitudinal representativeness of the demographic characteristics of the primary MIDRC dataset compared to the United States general population (US Census) and to COVID-19 positive case counts from the Centers for Disease Control and Prevention (CDC). Approach: The Jensen-Shannon distance (JSD), a measure of the similarity of two distributions, was used to longitudinally measure the representativeness of the distribution of (1) all unique patients in the MIDRC data relative to the 2020 US Census and (2) all unique COVID-19 positive patients in the MIDRC data relative to the case counts reported by the CDC. The distributions were evaluated in the demographic categories of age at index, sex, race, ethnicity, and the combination of race and ethnicity. Results: Representativeness of the MIDRC data by ethnicity and by the combination of race and ethnicity was impacted by the percentage of CDC case counts for which these attributes were not reported. The distributions by sex and race have retained their level of representativeness over time. Conclusion: The representativeness of the open medical imaging datasets in the curated public data commons at MIDRC has evolved over time as the number of contributing institutions and the overall number of subjects have grown. The use of metrics such as the JSD to support measurement of representativeness is one step needed for fair and generalizable AI algorithm development.
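A minimal sketch of the JSD comparison described above, using SciPy's implementation and made-up category proportions (these are not MIDRC, Census, or CDC figures):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Sketch of the Jensen-Shannon distance (JSD) comparison described above.
# The category proportions below are invented for illustration only.

dataset_sex = np.array([0.48, 0.52])     # e.g., [female, male] share in a dataset
census_sex = np.array([0.508, 0.492])    # reference distribution

# jensenshannon returns the JS *distance* (square root of the divergence);
# base=2 bounds it between 0 (identical) and 1 (maximally different).
print(jensenshannon(dataset_sex, census_sex, base=2))
```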
The SDSS-III Baryon Oscillation Spectroscopic Survey: Quasar Target Selection for Data Release Nine
The SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), a five-year spectroscopic survey of 10,000 deg^2, achieved first light in late 2009. One of the key goals of BOSS is to measure the signature of baryon acoustic oscillations in the distribution of Ly-alpha absorption from the spectra of a sample of ~150,000 z > 2.2 quasars. Along with measuring the angular diameter distance at z ≈ 2.5, BOSS will provide the first direct measurement of the expansion rate of the Universe at z > 2. One of the biggest challenges in achieving this goal is an efficient target selection algorithm for quasars over 2.2 < z < 3.5, where their colors overlap those of stars. During the first year of the BOSS survey, quasar target selection methods were developed and tested to meet the requirement of delivering at least 15 quasars deg^-2 in this redshift range, out of 40 targets deg^-2. To achieve these surface densities, the magnitude limit of the quasar targets was set at g <= 22.0 or r <= 21.85. While detection of the BAO signature in the Ly-alpha absorption in quasar spectra does not require a uniform target selection, many other astrophysical studies do. We therefore defined a uniformly-selected subsample of 20 targets deg^-2, for which the selection efficiency is just over 50%. This "CORE" subsample will be fixed for Years Two through Five of the survey. In this paper we describe the evolution and implementation of the BOSS quasar target selection algorithms during the first two years of BOSS operations. We analyze the spectra obtained during the first year; 11,263 new z > 2.2 quasars were spectroscopically confirmed by BOSS. Our current algorithms select an average of 15 z > 2.2 quasars deg^-2 from 40 targets deg^-2 using single-epoch SDSS imaging. Multi-epoch optical data and data at other wavelengths can further improve the efficiency and completeness of BOSS quasar target selection. [Abridged]
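Only the magnitude limit quoted above lends itself to a compact illustration; the sketch below applies that cut to a toy candidate list and deliberately omits the color- and likelihood-based selection methods that do the real work in BOSS quasar target selection:

```python
import numpy as np

# Minimal sketch of the magnitude limit quoted above (g <= 22.0 or r <= 21.85)
# applied to a candidate catalog. The full BOSS selection combines several
# additional color- and likelihood-based methods not reproduced here; the
# array names and magnitudes are illustrative.

def passes_magnitude_limit(g_mag, r_mag):
    """Boolean mask of candidates bright enough to be quasar targets."""
    g_mag = np.asarray(g_mag)
    r_mag = np.asarray(r_mag)
    return (g_mag <= 22.0) | (r_mag <= 21.85)

# Toy catalog of three candidates.
g = [21.5, 22.3, 22.4]
r = [21.9, 21.7, 22.0]
print(passes_magnitude_limit(g, r))  # [ True  True False]
```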