
    Why Bayesian “evidence for H1” in one condition and Bayesian “evidence for H0” in another condition does not mean good-enough Bayesian evidence for a difference between the conditions

    Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this as sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
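    The abstract's point can be illustrated numerically. The following is a minimal Python sketch (not the authors' R script or Shiny app; the data are simulated and the Bayes factors use the rough BIC approximation BF10 ≈ exp((BIC0 − BIC1)/2), an assumption of this sketch): even when one condition shows evidence for an effect and the other shows evidence against one, the interaction must be tested with its own model comparison.

    ```python
    # Sketch only: per-condition Bayes factors do not substitute for a direct
    # Bayes-factor test of the interaction. BFs here are BIC approximations,
    # BF10 ~ exp((BIC0 - BIC1) / 2), not the default Bayes factors of BayesFactor/JASP.
    import numpy as np

    def bic(rss, n, k):
        """BIC of a Gaussian linear model with k fitted coefficients."""
        return n * np.log(rss / n) + k * np.log(n)

    def bf10(y, X1, X0):
        """Approximate Bayes factor for model X1 over nested model X0 via BIC."""
        rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
        n = len(y)
        return np.exp((bic(rss(X0), n, X0.shape[1]) - bic(rss(X1), n, X1.shape[1])) / 2)

    rng = np.random.default_rng(1)
    n = 60  # per cell; the IV has effect d = 0.8 in condition A, d = 0 in condition B
    a0, a1 = rng.normal(0.0, 1, n), rng.normal(0.8, 1, n)
    b0, b1 = rng.normal(0.0, 1, n), rng.normal(0.0, 1, n)

    # Per-condition BFs: effect vs. no effect within each condition
    y_a, x_a = np.concatenate([a0, a1]), np.repeat([0.0, 1.0], n)
    y_b, x_b = np.concatenate([b0, b1]), np.repeat([0.0, 1.0], n)
    ones = np.ones(2 * n)
    bf_a = bf10(y_a, np.column_stack([ones, x_a]), ones[:, None])  # evidence for H1 in A
    bf_b = bf10(y_b, np.column_stack([ones, x_b]), ones[:, None])  # 1/bf_b = evidence for H0 in B

    # Interaction BF: full 2x2 model with vs. without the interaction term
    y = np.concatenate([y_a, y_b])
    iv = np.concatenate([x_a, x_b])
    cond = np.repeat([0.0, 1.0], 2 * n)
    base = np.column_stack([np.ones(4 * n), iv, cond])
    full = np.column_stack([base, iv * cond])
    bf_int = bf10(y, full, base)

    print(bf_a, 1 / bf_b, bf_int)
    ```

    The quantity that licenses a claim about a *difference between conditions* is `bf_int`, and it is routinely far less decisive than the two within-condition factors.
    
    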

    Laser Doppler technology applied to atmospheric environmental operating problems

    Carbon dioxide laser Doppler ground wind data compared very favorably with data from standard anemometers. As a result of these measurements, two breadboard systems were developed for taking research data: a continuous wave velocimeter and a pulsed Doppler system. The scanning continuous wave laser Doppler velocimeter developed for detecting, tracking, and measuring aircraft wake vortices was successfully tested at an airport, where it located vortices to an accuracy of 3 meters at a range of 150 meters. The airborne pulsed laser Doppler system was developed to detect and measure clear air turbulence (CAT). This system was tested aboard an aircraft, but jet stream CAT was not encountered. However, low altitude turbulence in cumulus clouds near a mountain range was detected by the system and encountered by the aircraft at the predicted time.

    Laser Doppler dust devil measurements

    A scanning laser Doppler velocimeter (SLDV) system was used to detect, track, and measure the velocity flow field of naturally occurring tornado-like flows (dust devils) in the atmosphere. A general description of the dust devil phenomenon is given along with a description of the test program, measurement system, and data processing techniques used to collect information on the dust devil flow field. The general meteorological conditions occurring during the test program are also described, and the information collected on two selected dust devils is discussed in detail to show the type of information that can be obtained with an SLDV system. The results from these measurements agree well with those of other investigators and illustrate the potential for the SLDV in future endeavors.

    Neuroendocrinology and resistance training in adult males

    An understanding of the neuroendocrine system will assist the strength and conditioning coach in the design of progressive strength training programmes by allowing them to manipulate acute training variables according to hormone release profiles. For muscle hypertrophy, training programmes should utilise 3 sets of 10 repetitions at 10RM loads, with short rest periods of no longer than 1 minute. This will ensure the accumulation and maintenance of lactate and hydrogen ions, to which anabolic hormone release is correlated. For strength adaptations without concomitant muscle hypertrophy, the training load and the length of rest periods should be increased (>85% 1RM and >2 minutes, respectively), and body parts should be rotated (e.g. upper body to lower body, or agonist to antagonist). Finally, catabolic hormones and neurohormones significantly affect training adaptations. Therefore, the strength and conditioning coach should be cognisant of the specific exercise programming and psychological interventions that manipulate their release.

    Getting the Measure of the Flatness Problem

    The problem of estimating cosmological parameters such as Ω from noisy or incomplete data is an example of an inverse problem and, as such, generally requires a probabilistic approach. We adopt the Bayesian interpretation of probability for such problems and stress the connection between probability and information which this approach makes explicit. This connection is important even when information is "minimal" or, in other words, when we need to argue from a state of maximum ignorance. We use the transformation group method of Jaynes to assign a minimally informative prior probability measure for cosmological parameters in the simple example of a dust Friedmann model, showing that the usual statements of the cosmological flatness problem are based on an inappropriate choice of prior. We further demonstrate that, in the framework of a classical cosmological model, there is no flatness problem. (Comment: 11 pages, submitted to Classical and Quantum Gravity, TeX source file, no figures)

    Determining the Neutrino Mass Hierarchy with Cosmology

    The combination of current large scale structure and cosmic microwave background (CMB) anisotropies data can place strong constraints on the sum of the neutrino masses. Here we show that future cosmic shear experiments, in combination with CMB constraints, can provide the statistical accuracy required to answer questions about differences in the mass of individual neutrino species. Allowing for the possibility that masses are non-degenerate, we combine Fisher matrix forecasts for a weak lensing survey like Euclid with those for the forthcoming Planck experiment. Under the assumption that neutrino mass splitting is described by a normal hierarchy, we find that the combination of Planck and Euclid may reach sufficient sensitivity to put a constraint on the mass of a single species. Using a Bayesian evidence calculation, we find that such future experiments could provide strong evidence for either a normal or an inverted neutrino hierarchy. Finally, we show that if a particular neutrino hierarchy is assumed, then this could bias cosmological parameter constraints, for example the dark energy equation of state parameter, by > 1σ, and the sum of masses by 2.3σ. (Comment: 9 pages, 6 figures, 3 tables)
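    The combination step described above is mechanically simple: for independent experiments, Fisher matrices add, and marginalized 1σ errors come from the inverse of the combined matrix. The following Python sketch uses made-up 2×2 matrices purely for illustration (the numbers are not the paper's Planck/Euclid forecasts):

    ```python
    # Toy Fisher-matrix combination: independent experiments add,
    # F_tot = F_1 + F_2, and the marginalized 1-sigma error on parameter i
    # is sqrt((F_tot^-1)_ii). Matrices below are illustrative only.
    import numpy as np

    # Hypothetical 2-parameter Fisher matrices, e.g. (sum of masses, w)
    F_cmb   = np.array([[ 4.0, -1.0],
                        [-1.0,  9.0]])
    F_shear = np.array([[25.0,  3.0],
                        [ 3.0, 16.0]])

    F_tot = F_cmb + F_shear
    errors_single = np.sqrt(np.diag(np.linalg.inv(F_cmb)))  # CMB alone
    errors_joint  = np.sqrt(np.diag(np.linalg.inv(F_tot)))  # combined

    print(errors_single)
    print(errors_joint)  # always at least as tight as either experiment alone
    ```

    Because each Fisher matrix is positive semi-definite, adding one can only shrink (or preserve) the marginalized errors, which is why joint forecasts are the natural place to look for sensitivity to individual-species masses.
    
    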

    A Wind Driven Warping Instability in Accretion Disks

    A wind passing over a surface may cause an instability in the surface, such as the flapping seen when wind blows across a flag or waves when wind blows across water. We show that when a radially outflowing wind blows across a dense thin rotating disk, an initially flat disk is unstable to warping. When the wind is subsonic, the growth rate depends on the lift generated by the wind and the phase lag between the pressure perturbation and the vertical displacement in the disk caused by drag. When the wind is supersonic, the growth rate depends primarily on the form drag caused by the surface. While the radiative warping instability proposed by Pringle is promising for generating warps near luminous accreting objects, we expect the wind driven instability introduced here to dominate in objects which generate energetic outflows.

    The length of time's arrow

    An unresolved problem in physics is how the thermodynamic arrow of time arises from an underlying time reversible dynamics. We contribute to this issue by developing a measure of time-symmetry breaking and, by using the work fluctuation relations, we determine the time asymmetry of recent single molecule RNA unfolding experiments. We define time asymmetry as the Jensen-Shannon divergence between trajectory probability distributions of an experiment and its time-reversed conjugate. Among other interesting properties, the length of time's arrow bounds the average dissipation and determines the difficulty of accurately estimating free energy differences in nonequilibrium experiments.
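    The asymmetry measure named above has a compact numerical form. The following is a minimal Python sketch of the Jensen-Shannon divergence between a discrete forward-trajectory distribution p and its time-reversed conjugate q (toy histograms here, not the RNA-unfolding data):

    ```python
    # Jensen-Shannon divergence between two discrete distributions, the
    # time-asymmetry measure of the abstract. Toy histograms, not RNA data.
    import numpy as np

    def jsd(p, q):
        """Jensen-Shannon divergence in nats; bounded by 0 <= JSD <= log(2)."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        m = 0.5 * (p + q)
        def kl(a, b):
            mask = a > 0  # 0 * log(0) terms contribute nothing
            return np.sum(a[mask] * np.log(a[mask] / b[mask]))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    p = np.array([0.7, 0.2, 0.1])  # forward-trajectory histogram (toy)
    q = np.array([0.1, 0.2, 0.7])  # time-reversed conjugate (toy)

    print(jsd(p, p))  # identical distributions: 0 (perfectly time-symmetric)
    print(jsd(p, q))  # irreversible process: strictly positive, at most log(2)
    ```

    The bounded range (0 to log 2 nats) is what makes the "length" interpretation natural: a fully reversible experiment sits at 0, and completely non-overlapping forward and reverse distributions saturate at log 2.
    
    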

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of selecting between the data being consistent with instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data from ground based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criteria are used for model selection. Future space based detectors, such as the \emph{Laser Interferometer Space Antenna} (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources are the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reverse Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches. (Comment: 11 pages, 6 figures)
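    One of the methods listed, the Savage-Dickey density ratio, is easy to demonstrate in a conjugate setting. The following Python sketch (toy data and a normal-normal model assumed here, not the LISA waveform analysis) computes BF01 for the nested hypothesis H0: θ = 0 inside H1: θ free, as the posterior density at θ = 0 divided by the prior density there:

    ```python
    # Conjugate-normal sketch of the Savage-Dickey density ratio.
    # For nested models H0: theta = 0 inside H1: theta ~ N(0, tau2),
    # BF01 = p(theta = 0 | data) / p(theta = 0). Toy data only.
    import math

    def normal_pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def savage_dickey_bf01(data, sigma2, tau2):
        """Prior theta ~ N(0, tau2); observations data_i ~ N(theta, sigma2)."""
        n = len(data)
        xbar = sum(data) / n
        post_var = 1.0 / (1.0 / tau2 + n / sigma2)   # conjugate posterior update
        post_mean = post_var * (n * xbar / sigma2)
        return normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, tau2)

    null_like = [0.1, -0.2, 0.05, 0.0, -0.1]  # data consistent with theta = 0
    shifted   = [2.1,  1.8, 2.3, 1.9,  2.0]   # data far from theta = 0

    print(savage_dickey_bf01(null_like, 1.0, 1.0))  # > 1: favors H0
    print(savage_dickey_bf01(shifted, 1.0, 1.0))    # << 1: favors H1
    ```

    The appeal of the ratio, and presumably why it appears alongside RJMCMC and the Laplace approximation, is that it needs only the posterior of the embedding model evaluated at the nested point, with no separate evidence integral.
    
    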

    Thermodynamic Properties of Generalized Exclusion Statistics

    We analytically calculate some thermodynamic quantities of an ideal g-on gas obeying generalized exclusion statistics. We show that the specific heat of a g-on gas (g ≠ 0) vanishes linearly in any dimension as T → 0 when the particle number is conserved, and exhibits an interesting dual symmetry that relates the particle statistics at g to the hole statistics at 1/g at low temperatures. We derive the complete solution for the cluster coefficients b_l(g) as a function of Haldane's statistical interaction g in D dimensions. We also find that the cluster coefficients b_l(g) and the virial coefficients a_l(g) are exactly mirror symmetric (l odd) or antisymmetric (l even) about g = 1/2. In two dimensions, we completely determine the closed forms of the cluster and virial coefficients of the generalized exclusion statistics, which exactly agree with the virial coefficients of an anyon gas of linear energies. We show that the g-on gas with zero chemical potential shows thermodynamic properties similar to photon statistics. We discuss some physical implications of our results. (Comment: 24 pages, RevTeX, corrected typos)