Even Between-Lap Pacing Despite High Within-Lap Variation During Mountain Biking
Purpose: Given the paucity of research on pacing strategies during competitive events, this study examined
changes in dynamic high-resolution performance parameters to analyze pacing profiles during a multiple-lap
mountain-bike race over variable terrain. Methods: A global-positioning-system (GPS) unit (Garmin, Edge
305, USA) recorded velocity (m/s), distance (m), elevation (m), and heart rate at 1 Hz from 6 mountain-bike
riders (mean ± SD: age = 27.2 ± 5.0 y, stature = 176.8 ± 8.1 cm, mass = 76.3 ± 11.7 kg, VO2max = 55.1 ± 6.0 mL·kg⁻¹·min⁻¹) competing in a multilap race. Lap-by-lap (interlap) pacing was analyzed using a 1-way ANOVA
for mean time and mean velocity. Velocity data were averaged every 100 m and plotted against race distance
and elevation to observe the presence of intralap variation. Results: There was no significant difference in lap times (P = .99) or lap velocity (P = .65) across the 5 laps. Within each lap, a high degree of oscillation in velocity was observed, which broadly reflected changes in terrain, but high-resolution data demonstrated additional
nonmonotonic variation not related to terrain. Conclusion: Participants adopted an even pace strategy across
the 5 laps despite rapid adjustments in velocity during each lap. While topographical and technical variations
of the course accounted for some of the variability in velocity, the additional rapid adjustments in velocity
may be associated with dynamic regulation of self-paced exercise.
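The lap-level analysis described above (a 1-way ANOVA across laps, with velocity averaged every 100 m for the intralap profile) can be sketched as follows. The velocity traces here are synthetic stand-ins for the 1 Hz GPS recordings; the per-lap offsets and noise level are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic 1 Hz velocity traces (m/s) for 5 laps of ~900 s each; the small
# per-lap offset and the noise level are illustrative assumptions.
laps = [6.0 + 0.05 * k + rng.normal(0.0, 1.2, size=900) for k in range(5)]

# Interlap pacing: 1-way ANOVA across the five laps' velocity samples.
f_stat, p_value = f_oneway(*laps)

# Intralap variation: average velocity over consecutive 100 m bins of one lap.
v = laps[0]
distance = np.cumsum(v)                      # metres covered after each second
bin_idx = (distance // 100).astype(int)      # which 100 m bin each sample falls in
bin_means = np.bincount(bin_idx, weights=v) / np.bincount(bin_idx)
```

Plotting `bin_means` against distance would give the kind of intralap velocity profile the abstract describes.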
Use of mixed methods designs in substance research: a methodological necessity in Nigeria
The utility of mixed methods (qualitative and quantitative) research is becoming increasingly accepted in the health sciences, but substance studies have yet to benefit substantially from it. While there is a growing number of mixed methods alcohol articles concerning developed countries, developing nations have yet to embrace the approach, and in the Nigerian context the importance of mixed methods research is yet to be acknowledged. This article therefore draws on alcohol studies to argue that mixed methods designs will better equip scholars to understand, explore, describe and explain why alcohol consumption and its related problems are increasing in Nigeria. It argues that, as motives for consuming alcohol in contemporary Nigeria are multiple, complex and evolving, mixed methods approaches that provide multiple pathways for proffering solutions to problems should be embraced.
Universality, limits and predictability of gold-medal performances at the Olympic Games
Inspired by the Games held in ancient Greece, modern Olympics represent the
world's largest pageant of athletic skill and competitive spirit. Performances
of athletes at the Olympic Games mirror, since 1896, human potentialities in
sports, and thus provide an optimal source of information for studying the
evolution of sport achievements and predicting the limits that athletes can
reach. Unfortunately, the models introduced so far for the description of
athlete performances at the Olympics are either sophisticated or unrealistic,
and more importantly, do not provide a unified theory for sport performances.
Here, we address this issue by showing that relative performance improvements
of medal winners at the Olympics are normally distributed, implying that the
evolution of performance values can be described in good approximation as an
exponential approach to an a priori unknown limiting performance value. This
law holds for all specialties in athletics (including running, jumping, and
throwing) and in swimming. We present a self-consistent method, based on normality
hypothesis testing, able to predict limiting performance values in all
specialties. We further quantify the most likely years in which athletes will
breach challenging performance walls in running, jumping, throwing, and
swimming events, as well as the probability that new world records will be
established at the next edition of the Olympic Games.
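The core idea above, an exponential approach to an a priori unknown limiting value combined with normality testing of relative improvements, can be illustrated with a small sketch on synthetic data. The model function, parameter values and noise level below are assumptions for illustration, not the paper's fitted results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import shapiro

rng = np.random.default_rng(1)

def model(edition, limit, amplitude, rate):
    """Exponential approach to an a priori unknown limiting value."""
    return limit + amplitude * np.exp(-rate * edition)

editions = np.arange(30)                  # successive Olympic editions
# Hypothetical winning times (s) approaching a limit of 9.5 with noise.
true = model(editions, limit=9.5, amplitude=1.5, rate=0.08)
times = true + rng.normal(0.0, 0.03, size=editions.size)

# Fit the limiting performance value from the noisy series.
(limit, amp, rate), _ = curve_fit(model, editions, times, p0=(9.0, 2.0, 0.1))

# Normality check on relative edition-to-edition improvements.
rel_improve = -np.diff(times) / times[:-1]
stat, p_normal = shapiro(rel_improve)
```

With real winning performances in place of `times`, the fitted `limit` plays the role of the predicted limiting performance value, and `p_normal` probes the normality hypothesis on which the method rests.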
Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector
Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb⁻¹ of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.
Search for direct pair production of the top squark in all-hadronic final states in proton-proton collisions at √s = 8 TeV with the ATLAS detector
The results of a search for direct pair production of the scalar partner to the top quark using an integrated luminosity of 20.1 fb⁻¹ of proton–proton collision data at √s = 8 TeV recorded with the ATLAS detector at the LHC are reported. The top squark is assumed to decay via t̃ → tχ̃⁰₁ or t̃ → bχ̃±₁ → bW(*)χ̃⁰₁, where χ̃⁰₁ (χ̃±₁) denotes the lightest neutralino (chargino) in supersymmetric models. The search targets a fully hadronic final state in events with four or more jets and large missing transverse momentum. No significant excess over the Standard Model background prediction is observed, and exclusion limits are reported in terms of the top squark and neutralino masses and as a function of the branching fraction of t̃ → tχ̃⁰₁. For a branching fraction of 100%, top squark masses in the range 270–645 GeV are excluded for χ̃⁰₁ masses below 30 GeV. For a branching fraction of 50% to either t̃ → tχ̃⁰₁ or t̃ → bχ̃±₁, and assuming the χ̃±₁ mass to be twice the χ̃⁰₁ mass, top squark masses in the range 250–550 GeV are excluded for χ̃⁰₁ masses below 60 GeV.
Observation of associated near-side and away-side long-range correlations in √s_NN = 5.02 TeV proton-lead collisions with the ATLAS detector
Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) are measured in √s_NN = 5.02 TeV p+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using approximately 1 μb⁻¹ of data as a function of transverse momentum (pT) and the transverse energy (ΣE_T^Pb) summed over 3.1 < η < 4.9 in the direction of the Pb beam. The correlation function, constructed from charged particles, exhibits a long-range (2 < |Δη| < 5) “near-side” (Δϕ ∼ 0) correlation that grows rapidly with increasing ΣE_T^Pb. A long-range “away-side” (Δϕ ∼ π) correlation, obtained by subtracting the expected contributions from recoiling dijets and other sources estimated using events with small ΣE_T^Pb, is found to match the near-side correlation in magnitude, shape (in Δη and Δϕ) and ΣE_T^Pb dependence. The resultant Δϕ correlation is approximately symmetric about π/2, and is consistent with a dominant cos 2Δϕ modulation for all ΣE_T^Pb ranges and particle pT.
Design of Experiments for Screening
The aim of this paper is to review methods of designing screening
experiments, ranging from designs originally developed for physical experiments
to those especially tailored to experiments on numerical models. The strengths
and weaknesses of the various designs for screening variables in numerical
models are discussed. First, classes of factorial designs for experiments to
estimate main effects and interactions through a linear statistical model are
described, specifically regular and nonregular fractional factorial designs,
supersaturated designs and systematic fractional replicate designs. Generic
issues of aliasing, bias and cancellation of factorial effects are discussed.
Second, group screening experiments are considered including factorial group
screening and sequential bifurcation. Third, random sampling plans are
discussed including Latin hypercube sampling and sampling plans to estimate
elementary effects. Fourth, a variety of modelling methods commonly employed
with screening designs are briefly described. Finally, a novel study
demonstrates six screening methods on two frequently used exemplars, and their
performances are compared.
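As a small illustration of one of the random sampling plans mentioned, a minimal Latin hypercube sampler can be written directly with NumPy. This is a generic sketch of the technique, not the implementation used in the review.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Return an n x d sample in [0, 1)^d with one point per stratum per axis."""
    # One uniform draw inside each of the n equal strata, per dimension...
    u = rng.random((n, d))
    strata = (np.arange(n)[:, None] + u) / n
    # ...then shuffle each column independently to decouple the dimensions.
    for j in range(d):
        rng.shuffle(strata[:, j])
    return strata

rng = np.random.default_rng(42)
X = latin_hypercube(10, 3, rng)   # 10 runs over 3 input variables
```

The defining property, exactly one point in each of the n strata per input variable, is what lets such designs cover many inputs with few runs when screening numerical models.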
A comparison between ultraviolet disinfection and copper alginate beads within a vortex bioreactor for the deactivation of bacteria in simulated waste streams with high levels of colour, humic acid and suspended solids.
We show in this study that the combination of a swirl flow reactor and an antimicrobial agent (in this case copper alginate beads) is a promising technique for the remediation of contaminated water in waste streams recalcitrant to UV-C treatment. This is demonstrated by comparing the viability of both common and UV-C-resistant organisms under operating conditions where UV-C proves ineffective, notably high levels of solids and compounds which deflect UV-C. The swirl flow reactor is easy to construct from commonly available plumbing parts and may prove a versatile and powerful tool in wastewater treatment in developing countries.
Identification of gene modules associated with low temperatures response in Bambara groundnut by network-based analysis
Bambara groundnut (Vigna subterranea (L.) Verdc.) is an African legume and a promising underutilized crop with good seed nutritional values. Low night-time temperatures in a number of African countries, such as Botswana, can affect the growth and development of bambara groundnut, leading to losses in potential crop yield. In this study we therefore developed a computational pipeline to identify and analyze the genes and gene modules associated with low temperature stress responses in bambara groundnut, using the cross-species microarray technique (as bambara groundnut has no microarray chip) coupled with network-based analysis. Analyses of the bambara groundnut transcriptome using cross-species gene expression data identified 375 and 659 differentially expressed genes (p < 0.01) under the sub-optimal (23°C) and very sub-optimal (18°C) temperatures, respectively, of which 110 genes are shared between the two stress conditions. Construction of a Highest Reciprocal Rank-based gene co-expression network, followed by its partition using a Heuristic Cluster Chiseling Algorithm, identified 6 and 7 gene modules under sub-optimal and very sub-optimal temperature stress, respectively. Modules of sub-optimal temperature stress are principally enriched with carbohydrate and lipid metabolic processes, while most modules of very sub-optimal temperature stress are significantly enriched with responses to stimuli and various metabolic processes. Several transcription factors (from the MYB, NAC, WRKY, WHIRLY and GATA classes) that may regulate the downstream genes involved in the response to stimulus, enabling the plant to withstand very sub-optimal temperature stress, were highlighted. The identified gene modules could be useful in breeding for low-temperature stress-tolerant bambara groundnut varieties.
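The network-construction step named above can be illustrated with a minimal sketch of a Highest Reciprocal Rank (HRR) co-expression network, in which two genes are linked only if each ranks highly in the other's correlation-ordered gene list. The expression matrix is synthetic and the HRR cutoff of 3 is an assumption for illustration; the study itself used cross-species microarray data.

```python
import numpy as np

rng = np.random.default_rng(7)
expr = rng.normal(size=(20, 12))          # 20 genes x 12 samples (synthetic)
corr = np.corrcoef(expr)                  # gene-gene co-expression matrix
np.fill_diagonal(corr, -np.inf)           # exclude self-correlation from ranking

# rank[i, j] = position of gene j in gene i's most-correlated list (1 = best)
order = np.argsort(-corr, axis=1)
rank = np.empty_like(order)
rows = np.arange(corr.shape[0])[:, None]
rank[rows, order] = np.arange(1, corr.shape[1] + 1)

# HRR(i, j) = max of the two directional ranks; keep only tight mutual pairs.
hrr = np.maximum(rank, rank.T)
edges = np.argwhere((hrr <= 3) & (rows < np.arange(corr.shape[0])))
```

`edges` then holds the gene pairs retained in the network; a module-detection step (the Heuristic Cluster Chiseling Algorithm in the study) would operate on this graph.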
Aerosols in the Pre-industrial Atmosphere
Purpose of Review: We assess the current understanding of the state and behaviour of aerosols under pre-industrial conditions and the importance for climate. Recent Findings: Studies show that the magnitude of anthropogenic aerosol radiative forcing over the industrial period calculated by climate models is strongly affected by the abundance and properties of aerosols in the pre-industrial atmosphere. The low concentration of aerosol particles under relatively pristine conditions means that global mean cloud albedo may have been twice as sensitive to changes in natural aerosol emissions under pre-industrial conditions compared to present-day conditions. Consequently, the discovery of new aerosol formation processes and revisions to aerosol emissions have large effects on simulated historical aerosol radiative forcing. Summary: We review what is known about the microphysical, chemical, and radiative properties of aerosols in the pre-industrial atmosphere and the processes that control them. Aerosol properties were controlled by a combination of natural emissions, modification of the natural emissions by human activities such as land-use change, and anthropogenic emissions from biofuel combustion and early industrial processes. Although aerosol concentrations were lower in the pre-industrial atmosphere than today, model simulations show that relatively high aerosol concentrations could have been maintained over continental regions due to biogenically controlled new particle formation and wildfires. Despite the importance of pre-industrial aerosols for historical climate change, the relevant processes and emissions are given relatively little consideration in climate models, and there have been very few attempts to evaluate them. Consequently, we have very low confidence in the ability of models to simulate the aerosol conditions that form the baseline for historical climate simulations. 
Nevertheless, it is clear that the 1850s should be regarded as an early-industrial reference period, and the aerosol forcing calculated from this period is smaller than the forcing since 1750. Improvements in historical reconstructions of natural and early anthropogenic emissions, exploitation of new Earth system models, and a deeper understanding and evaluation of the controlling processes are key to reducing uncertainties in the future.