
    Why Bayesian “evidence for H1” in one condition and Bayesian “evidence for H0” in another condition does not mean good-enough Bayesian evidence for a difference between the conditions

    Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions, and it demonstrates that the principle that any assertion about the existence of an interaction necessitates a direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size needed for a well-powered design.
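
    The fallacy is easy to reproduce numerically. Below is a minimal sketch in Python rather than the Tutorial's own R script, using the BIC approximation to the Bayes factor (Wagenmakers, 2007) instead of any particular package's default prior; the cell sizes and effect sizes are assumptions chosen for illustration.

```python
# Minimal sketch: per-condition Bayes factors vs. the interaction Bayes factor
# in a 2 x 2 between-subjects design. BF10 via the BIC approximation
# (Wagenmakers, 2007); cell sizes and effect sizes are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50  # participants per cell

def bf10_bic(t, n_total, df):
    """BIC approximation: BF10 ~ (1 + t^2/df)^(n/2) / sqrt(n)."""
    return (1 + t**2 / df) ** (n_total / 2) / np.sqrt(n_total)

# condition A has a true effect (d = 0.7); condition B essentially none (d = 0.1)
a_treat, a_ctrl = rng.normal(0.7, 1, n), rng.normal(0, 1, n)
b_treat, b_ctrl = rng.normal(0.1, 1, n), rng.normal(0, 1, n)

for label, x, y in [("A", a_treat, a_ctrl), ("B", b_treat, b_ctrl)]:
    t, _ = stats.ttest_ind(x, y)
    print(f"condition {label}: BF10 = {bf10_bic(t, 2 * n, 2 * n - 2):.2f}")

# the interaction is the difference of differences, pooled across all four cells
cells = [a_treat, a_ctrl, b_treat, b_ctrl]
sp2 = np.mean([c.var(ddof=1) for c in cells])   # pooled variance (equal n)
contrast = (a_treat.mean() - a_ctrl.mean()) - (b_treat.mean() - b_ctrl.mean())
t_int = contrast / np.sqrt(sp2 * 4 / n)
print(f"interaction: BF10 = {bf10_bic(t_int, 4 * n, 4 * n - 4):.2f}")
```

    With settings like these one routinely obtains clear evidence for H1 in condition A and for H0 in condition B while the interaction Bayes factor remains inconclusive, which is exactly the trap the Tutorial warns against.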

    Laser Doppler technology applied to atmospheric environmental operating problems

    Carbon dioxide laser Doppler ground wind data compared very favorably with data from standard anemometers. As a result of these measurements, two breadboard systems were developed for taking research data: a continuous wave velocimeter and a pulsed Doppler system. The scanning continuous wave laser Doppler velocimeter, developed for detecting, tracking, and measuring aircraft wake vortices, was successfully tested at an airport, where it located vortices to an accuracy of 3 meters at a range of 150 meters. The airborne pulsed laser Doppler system was developed to detect and measure clear air turbulence (CAT). This system was tested aboard an aircraft, but jet stream CAT was not encountered. However, low altitude turbulence in cumulus clouds near a mountain range was detected by the system and encountered by the aircraft at the predicted time.
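
    For context on the underlying measurement principle (textbook laser Doppler velocimetry, not the report's own processing chain): in backscatter geometry the Doppler shift is f_D = 2v/λ, so the line-of-sight velocity follows directly from the measured shift. A minimal sketch, assuming the usual 10.6 μm CO2 laser wavelength:

```python
# Line-of-sight velocity from a laser Doppler shift in backscatter geometry:
# f_D = 2 v / lambda, hence v = lambda * f_D / 2 (standard LDV relation).
WAVELENGTH = 10.6e-6  # m, typical CO2 laser line

def los_velocity(doppler_shift_hz: float) -> float:
    return WAVELENGTH * doppler_shift_hz / 2.0

print(los_velocity(1.0e6))  # a 1 MHz shift corresponds to 5.3 m/s
```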

    An investigation into the effect of varying plyometric volume on reactive strength and leg stiffness in collegiate rugby players

    The purpose of this study was to identify the role that low and high volume plyometric loads have on the effectiveness of developing stretch-shortening cycle capability in collegiate rugby players. The experiment was carried out using a between-group, repeated measures design. Thirty-six participants (age 20.3 ± 1.6 yr, mass 91.63 ± 10.36 kg, height 182.03 ± 5.24 cm) were randomly assigned to one of three groups: a control group (CG), a low volume plyometric group (LPG), and a high volume plyometric group (HPG). Data were collected from a force plate, and measures of reactive strength index (RSI) and leg stiffness were calculated from jump height, contact time, and flight time data. Drop jumps were used to gather the RSI data, and double leg hops were used to gather the leg stiffness data. The analysis demonstrated a significant group × time interaction effect (F = 4.01, p < 0.05) for RSI. Bonferroni post hoc analysis indicated that both the LPG (p = 0.002) and the HPG (p = 0.009) differed significantly from the control group. No significant time × group interaction effect or main effect was observed for leg stiffness (F = 1.39, p = 0.25). The current study has demonstrated that it is possible to improve reactive strength capabilities to a significant degree via a low volume plyometric programme. The low volume programme elicited the same improvement in RSI as the high volume programme whilst undertaking only a quarter of the volume, which suggests that strength and conditioning coaches may be able to develop more time-efficient yet equally effective plyometric programmes.
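
    The abstract does not spell out how the derived measures are computed; the sketch below uses the conventional field formulas (flight-time jump height, h = g·t_f²/8, and the Dalleau et al. (2004) spring-mass stiffness estimate from contact and flight times), which is an assumption about the methods rather than a quotation of them.

```python
# Hedged reconstruction of the derived measures: flight-time jump height,
# RSI = jump height / contact time, and the Dalleau et al. (2004) spring-mass
# leg stiffness estimate. Formula choice is assumed, not taken from the paper.
import math

G = 9.81  # gravitational acceleration, m/s^2

def jump_height(flight_time: float) -> float:
    """Rigid-body estimate from flight time: h = g * t_f^2 / 8 (m)."""
    return G * flight_time**2 / 8.0

def reactive_strength_index(flight_time: float, contact_time: float) -> float:
    """RSI = jump height (m) / ground contact time (s)."""
    return jump_height(flight_time) / contact_time

def leg_stiffness(mass: float, flight_time: float, contact_time: float) -> float:
    """Spring-mass stiffness (N/m) from contact and flight times."""
    tc, tf = contact_time, flight_time
    return (mass * math.pi * (tf + tc)) / (tc**2 * ((tf + tc) / math.pi - tc / 4.0))

print(reactive_strength_index(0.45, 0.20))   # ~1.24 for a 0.45 s flight time
print(leg_stiffness(91.6, 0.30, 0.25))       # ~22.5 kN/m for the mean body mass
```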

    Bayesian Analysis of Inflation II: Model Selection and Constraints on Reheating

    We discuss the model selection problem for inflationary cosmology. We couple ModeCode, a publicly available numerical solver for the primordial perturbation spectra, to the nested sampler MultiNest in order to efficiently compute Bayesian evidence. Particular attention is paid to the specification of physically realistic priors, including the parametrization of the post-inflationary expansion and the associated thermalization scale. It is confirmed that while present-day data tightly constrain the properties of the power spectrum, they cannot usefully distinguish between the members of a large class of simple inflationary models. We also compute evidence using a simulated Planck likelihood, showing that while Planck will have more power than WMAP to discriminate between inflationary models, it will not definitively address the inflationary model selection problem on its own. However, Planck will place very tight constraints on any model with more than one observationally distinct inflationary regime -- e.g., the large- and small-field limits of the hilltop inflation model -- and put useful limits on different reheating scenarios for a given model.
    Comment: ModeCode package available from http://zuserver2.star.ucl.ac.uk/~hiranya/ModeCode/ModeCode (requires CosmoMC and MultiNest); to be published in PRD. Typos fixed.
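
    For intuition about what the nested sampler contributes, here is a toy sketch of the algorithm MultiNest implements; it is emphatically not MultiNest itself (which replaces the brute-force rejection below with ellipsoidal sampling of the live set). It estimates the evidence Z = ∫ L(θ)π(θ)dθ via the textbook prior-mass shrinkage rule X_i ≈ e^(−i/N).

```python
# Toy nested-sampling evidence estimate: one parameter, uniform prior on
# [-5, 5], standard-normal likelihood, so the analytic answer is
# ln Z = ln(0.1 * Gaussian mass) ~ -2.30.
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

n_live, n_iter = 400, 2000
live = rng.uniform(-5, 5, n_live)
live_logl = loglike(live)
log_z = -np.inf
for i in range(n_iter):
    worst = np.argmin(live_logl)
    # prior-mass shell between X_i = e^{-i/N} and X_{i+1} = e^{-(i+1)/N}
    log_w = -i / n_live + np.log1p(-np.exp(-1 / n_live))
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    # replace the worst point with a prior draw above the likelihood floor
    # (brute-force rejection; real samplers do this far more cleverly)
    while True:
        cand = rng.uniform(-5, 5)
        if loglike(cand) > live_logl[worst]:
            live[worst], live_logl[worst] = cand, loglike(cand)
            break
# credit the surviving live points with the unswept prior mass e^{-n_iter/N}
m = live_logl.max()
log_z = np.logaddexp(log_z, m + np.log(np.mean(np.exp(live_logl - m))) - n_iter / n_live)
print(f"ln Z = {log_z:.2f} (analytic: {np.log(0.1):.2f})")
```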

    Constructing smooth potentials of mean force, radial distribution functions and probability densities from sampled data

    In this paper a method of obtaining smooth analytical estimates of probability densities, radial distribution functions and potentials of mean force from sampled data in a statistically controlled fashion is presented. The approach is general and can be applied to any density of a single random variable. The method outlined here avoids the use of histograms, which require the specification of a physical parameter (bin size) and tend to give noisy results. The technique is an extension of the Berg-Harris method [B.A. Berg and R.C. Harris, Comp. Phys. Comm. 179, 443 (2008)], which is typically inaccurate for radial distribution functions and potentials of mean force due to a non-uniform Jacobian factor. In addition, the standard method often requires a large number of Fourier modes to represent radial distribution functions, which tends to lead to oscillatory fits. It is shown that the issues of poor sampling due to a Jacobian factor can be resolved using a biased resampling scheme, while the requirement of a large number of Fourier modes is mitigated through an automated piecewise construction approach. The method is demonstrated by analyzing the radial distribution functions in an energy-discretized water model. In addition, the fitting procedure is illustrated on three more applications for which the original Berg-Harris method is not suitable, namely, a random variable with a discontinuous probability density, a density with long tails, and the distribution of the first arrival times of a diffusing particle to a sphere, which has both long tails and short-time structure. In all cases, the resampled, piecewise analytical fit outperforms the histogram and the original Berg-Harris method.
    Comment: 14 pages, 15 figures. To appear in J. Chem. Phys.
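
    A toy illustration of the Jacobian issue and of the biased-resampling idea named above (a sketch under assumed toy inputs, not the paper's algorithm): radial samples arrive with density p(r) ∝ r²g(r), so a naive estimate recovers r²g(r) rather than g(r); resampling with weights ∝ 1/r² cancels the Jacobian before any smooth fit is attempted.

```python
# Radial samples drawn from p(r) ~ r^2 g(r) oversample large r. Reweighting
# by 1/r^2 flattens the Jacobian so an estimator can target g(r) directly.
import numpy as np

rng = np.random.default_rng(0)

def g(r):                       # toy "radial distribution": a Gaussian shell
    return np.exp(-0.5 * ((r - 1.0) / 0.1) ** 2)

r_grid = np.linspace(0.5, 1.5, 2001)
p = r_grid**2 * g(r_grid)
samples = rng.choice(r_grid, size=50_000, p=p / p.sum())

# naive histogram: proportional to r^2 g(r), biased toward large r
hist_naive, edges = np.histogram(samples, bins=100, density=True)

# biased resampling: weights ~ 1/r^2 cancel the Jacobian factor
w = 1.0 / samples**2
resampled = rng.choice(samples, size=samples.size, p=w / w.sum())
hist_g, _ = np.histogram(resampled, bins=edges, density=True)
# hist_g now tracks g(r) up to normalization and is suitable for smooth fitting
```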

    Laser Doppler dust devil measurements

    A scanning laser Doppler velocimeter (SLDV) system was used to detect, track, and measure the velocity flow field of naturally occurring tornado-like flows (dust devils) in the atmosphere. A general description of the dust devil phenomenon is given, along with a description of the test program, measurement system, and data processing techniques used to collect information on the dust devil flow field. The general meteorological conditions occurring during the test program are also described, and the information collected on two selected dust devils is discussed in detail to show the type of information that can be obtained with an SLDV system. The results from these measurements agree well with those of other investigators and illustrate the potential of the SLDV for future endeavors.

    Neuroendocrinology and resistance training in adult males

    An understanding of the neuroendocrine system will assist the strength and conditioning coach in the design of progressive strength training programmes by allowing them to manipulate acute training variables according to hormone release profiles. For muscle hypertrophy, training programmes should utilise 3 sets of 10 repetitions at 10RM loads, with rest periods no longer than 1 minute. This ensures the accumulation and maintenance of lactate and hydrogen ions, with which anabolic hormone release is correlated. For strength adaptations without concomitant muscle hypertrophy, the training load and the length of rest periods should be increased (>85% 1RM and >2 min, respectively), and body parts should be rotated (e.g., upper body to lower body, or agonist to antagonist). Finally, catabolic hormones and neurohormones significantly affect training adaptations. Therefore, the strength and conditioning coach should be cognisant of the specific exercise programming and psychological interventions that manipulate their release.
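
    Purely as an organizational aid, the two loading schemes described above can be summarized compactly; the sketch below is a hypothetical encoding (the field names are invented), with the parameter values taken from the text.

```python
# Hypothetical lookup of the loading parameters stated above; structure and
# key names are illustrative, not from the article.
LOADING_SCHEMES = {
    "hypertrophy": {
        "sets": 3,
        "reps": 10,
        "load": "10RM",
        "max_rest_s": 60,    # short rests sustain lactate / H+ accumulation
        "rationale": "maximize acute anabolic hormone release",
    },
    "strength": {
        "load": ">85% 1RM",
        "min_rest_s": 120,
        "rotate_body_parts": True,   # e.g. upper/lower or agonist/antagonist
        "rationale": "neural adaptation without concomitant hypertrophy",
    },
}
```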

    Testing and selection of cosmological models with $(1+z)^6$ corrections

    In the paper we check whether a contribution of the $(-)(1+z)^6$ type in the Friedmann equation can be tested. We consider some astronomical tests to constrain the density parameters in such models. We describe different interpretations of such an additional term: geometric effects of Loop Quantum Cosmology, effects of braneworld cosmological models, non-standard cosmological models in metric-affine gravity, and models with spinning fluid. Kinematical (or geometrical) tests based on null geodesics are insufficient to separate individual matter components when they behave like perfect fluid and scale in the same way. Still, it is possible to measure their overall effect. We use recent measurements of the coordinate distances from Fanaroff-Riley type IIb (FRIIb) radio galaxy (RG) data, supernovae type Ia (SNIa) data, the baryon oscillation peak, and cosmic microwave background radiation (CMBR) observations to obtain stronger bounds for the contribution of the type considered. We demonstrate that, while $\rho^2$ corrections are very small, they can be tested by astronomical observations -- at least in principle. Bayesian criteria of model selection (the Bayes factor, AIC, and BIC) are used to check whether the additional parameters are detectable in the present epoch. As it turns out, the $\Lambda$CDM model is favoured over the bouncing model driven by loop quantum effects. In other words, the bounds obtained from cosmography are very weak, and from the point of view of the present data this model is indistinguishable from the $\Lambda$CDM one.
    Comment: 19 pages, 1 figure. Version 2 generally revised and accepted for publication.
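
    To make the modified Friedmann equation concrete, here is a hedged sketch for a flat model: $H^2/H_0^2 = \Omega_m(1+z)^3 + \Omega_\Lambda - \Omega_d(1+z)^6$, where the symbol $\Omega_d$ for the $(-)(1+z)^6$ term and the numerical values are assumptions for illustration, plugged into the comoving distance that the kinematical (null-geodesic) tests constrain.

```python
# Flat model with an extra -(1+z)^6 term in the Friedmann equation; parameter
# names (om, ol, od) and values are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def E(z, om=0.3, od=1e-9):
    ol = 1.0 - om + od          # flatness fixes the Lambda term
    return np.sqrt(om * (1 + z)**3 + ol - od * (1 + z)**6)

def comoving_distance(z, h0=70.0, **kw):
    """Mpc; the quantity the kinematical tests constrain."""
    return (C_KM_S / h0) * quad(lambda zp: 1.0 / E(zp, **kw), 0.0, z)[0]

# at low z the (1+z)^6 term is negligible, which is why cosmography alone
# struggles to distinguish this model from LambdaCDM
print(comoving_distance(1.0, od=0.0), comoving_distance(1.0, od=1e-9))
```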

    Determining the Neutrino Mass Hierarchy with Cosmology

    The combination of current large scale structure and cosmic microwave background (CMB) anisotropy data can place strong constraints on the sum of the neutrino masses. Here we show that future cosmic shear experiments, in combination with CMB constraints, can provide the statistical accuracy required to answer questions about differences in the mass of individual neutrino species. Allowing for the possibility that the masses are non-degenerate, we combine Fisher matrix forecasts for a weak lensing survey like Euclid with those for the forthcoming Planck experiment. Under the assumption that the neutrino mass splitting is described by a normal hierarchy, we find that the combination of Planck and Euclid may reach sufficient sensitivity to put a constraint on the mass of a single species. Using a Bayesian evidence calculation, we find that such future experiments could provide strong evidence for either a normal or an inverted neutrino hierarchy. Finally, we show that if a particular neutrino hierarchy is assumed, cosmological parameter constraints can be biased; for example, the dark energy equation of state parameter can shift by $>1\sigma$, and the sum of masses by $2.3\sigma$.
    Comment: 9 pages, 6 figures, 3 tables
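
    The hierarchy question can be made concrete with a small sketch: given the oscillation-measured mass splittings (approximate values assumed below) and a summed mass of the kind cosmology constrains, the individual masses follow from a one-dimensional root find. Note how close $\Sigma m_\nu = 0.1$ eV sits to the inverted-hierarchy floor.

```python
# Individual neutrino masses from the summed mass and the oscillation
# splittings; splitting values are assumed, approximate present-day numbers.
import numpy as np
from scipy.optimize import brentq

DM21 = 7.5e-5   # eV^2, solar splitting (assumed value)
DM31 = 2.5e-3   # eV^2, atmospheric splitting magnitude (assumed value)

def masses(m_sum, normal=True):
    """Solve for (m1, m2, m3) in eV given the total mass and the hierarchy."""
    def spectrum(m_light):
        if normal:   # normal hierarchy: m1 lightest
            return (m_light,
                    np.sqrt(m_light**2 + DM21),
                    np.sqrt(m_light**2 + DM31))
        else:        # inverted hierarchy: m3 lightest
            return (np.sqrt(m_light**2 + DM31 - DM21),
                    np.sqrt(m_light**2 + DM31),
                    m_light)
    m_light = brentq(lambda m: sum(spectrum(m)) - m_sum, 0.0, m_sum)
    return spectrum(m_light)

print(masses(0.1, normal=True))
print(masses(0.1, normal=False))
```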