2,921 research outputs found

    Universal efficiency at optimal work with Bayesian statistics

    If the work per cycle of a quantum heat engine is averaged over an appropriate prior distribution for an external parameter $a$, the work becomes optimal at the Curzon-Ahlborn (CA) efficiency. More general priors of the form $\Pi(a) \propto 1/a^{\gamma}$ yield optimal work at an efficiency which stays close to the CA value; in particular, near equilibrium the efficiency scales as one-half of the Carnot value. This feature is analogous to the one recently observed in the literature for certain models of finite-time thermodynamics. Further, the use of Bayes' theorem implies that the work estimated with posterior probabilities also bears close analogy with the classical formula. These findings suggest that the notion of prior information can be used to reveal thermodynamic features in quantum systems, thus pointing to a new connection between thermodynamic behavior and the concept of information. Comment: revtex4, 5 pages, abstract changed and presentation improved; results unchanged. New result with Bayes theorem added.
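    The one-half Carnot scaling follows from expanding the Curzon-Ahlborn efficiency $\eta_{CA} = 1 - \sqrt{T_2/T_1} = 1 - \sqrt{1 - \eta_C}$ near equilibrium. A minimal numerical check (plain Python; the temperatures are illustrative, not taken from the paper):

        # Compare the Curzon-Ahlborn efficiency with half the Carnot value
        # as the reservoirs approach equilibrium. Illustrative temperatures.
        import math

        T1 = 300.0  # hot reservoir temperature (K)
        for T2 in (299.0, 290.0, 250.0, 150.0):  # cold reservoir (K)
            eta_c = 1.0 - T2 / T1              # Carnot efficiency
            eta_ca = 1.0 - math.sqrt(T2 / T1)  # Curzon-Ahlborn efficiency
            # Expansion: 1 - sqrt(1 - x) = x/2 + x^2/8 + ...
            print(f"eta_C={eta_c:.4f}  eta_CA={eta_ca:.4f}  eta_C/2={eta_c / 2:.4f}")

    As $T_2 \to T_1$ the CA efficiency approaches $\eta_C/2$, which is the near-equilibrium scaling the abstract refers to.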

    Development of a laser Doppler system for the detection and monitoring of atmospheric disturbances

    A Scanning Laser Doppler Velocimeter System (SLDVS) capable of detecting and monitoring atmospheric disturbances, including wake vortices of landing aircraft and vertical wind profiles in the atmosphere, was developed. The SLDVS is a focused, continuous wave, CO2 system that determines the line-of-sight velocities of particles in the focal volume by measuring the Doppler shift created by these particles. At present, the SLDVS is designed to have a range coverage of approximately 2000 ft with a vertical angle coverage of approximately 60 deg. It is also designed to detect Doppler velocities of up to 200 ft/sec with a velocity resolution of approximately 1.8 ft/sec. A complete velocity spectrum is provided by the SLDVS at each point in space at which it is focused. The overall operation and performance of the system are described, along with its individual components and data handling capabilities.
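    For backscattered light, the line-of-sight velocity follows from the measured Doppler shift as $v = \lambda f_D / 2$. A minimal sketch (Python; the 10.6 micrometer CO2 wavelength is the standard value, and the shift values are illustrative):

        # Convert a measured Doppler shift to line-of-sight velocity for a
        # CO2 laser Doppler velocimeter: v = lambda * f_D / 2 (backscatter).
        WAVELENGTH_M = 10.6e-6  # CO2 laser wavelength (m)

        def los_velocity(doppler_shift_hz: float) -> float:
            """Line-of-sight particle velocity in m/s from a Doppler shift in Hz."""
            return WAVELENGTH_M * doppler_shift_hz / 2.0

        # Illustrative shifts; ~11.5 MHz corresponds to the 200 ft/sec design limit.
        for f_d in (1.0e5, 1.0e6, 1.15e7):
            v = los_velocity(f_d)
            print(f"f_D = {f_d:.2e} Hz -> v = {v:.2f} m/s ({v / 0.3048:.1f} ft/sec)")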

    Dark matter and dark energy as effects of Modified Gravity

    We explain the effect of dark matter (flat rotation curves) using modified gravitational dynamics. We investigate in this context a low energy limit of generalized general relativity with a nonlinear Lagrangian ${\cal L} \propto R^n$, where $R$ is the (generalized) Ricci scalar and $n$ is a parameter estimated from SNIa data. We estimate the parameter $\beta$ in the modified gravitational potential $V(r) \propto -\frac{1}{r}\left(1+\left(\frac{r}{r_c}\right)^{\beta}\right)$. Then we compare the value of $\beta$ obtained from SNIa data with the $\beta$ parameter evaluated from the best fitted rotation curve. We find $\beta \simeq 0.7$, which is in good agreement with observations of spiral galaxy rotation curves. We also find the preferred value of $\Omega_{m,0}$ from the combined analysis of supernovae data and the baryon oscillation peak. We argue that although the amount of "dark energy" (of non-substantial origin) is consistent with SNIa data and flat rotation curves of spiral galaxies are reproduced in the framework of modified Einstein equations, we still need substantial dark matter. To compare predictions of the model with predictions of the $\Lambda$CDM concordance model we apply the Akaike and Bayesian information criteria of model selection. Comment: Lectures given at 42nd Karpacz Winter School of Theoretical Physics: Ladek, Poland, 6-11 Feb 2006
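    The flattening can be read off from the quoted potential, since the circular velocity obeys $v^2(r) = r\,dV/dr$; for $\beta \simeq 0.7$ the large-radius velocity falls off as $r^{(\beta-1)/2}$, far flatter than the Keplerian $r^{-1/2}$. A minimal numerical sketch (Python with numpy; the normalization $A$ and scale $r_c$ are illustrative, not the paper's fitted values):

        # Rotation curve v^2 = r dV/dr for the modified potential
        # V(r) = -(A/r) * (1 + (r/r_c)**beta). Illustrative units.
        import numpy as np

        A, r_c, beta = 1.0, 1.0, 0.7
        r = np.linspace(0.1, 50.0, 1000)

        V = -(A / r) * (1.0 + (r / r_c) ** beta)
        v_circ = np.sqrt(r * np.gradient(V, r))  # modified-gravity curve
        v_kepler = np.sqrt(A / r)                # Newtonian point-mass curve

        # Velocity drop between r = 25 and r = 50: ~15% for the modified
        # potential versus ~29% for the Keplerian curve.
        mid = len(r) // 2
        print(v_circ[-1] / v_circ[mid], v_kepler[-1] / v_kepler[mid])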

    Why Bayesian “evidence for H1” in one condition and Bayesian “evidence for H0” in another condition does not mean good-enough Bayesian evidence for a difference between the conditions

    Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
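    The point can be reproduced in miniature. A sketch (Python; not the paper's R script) using Wagenmakers' (2007) BIC approximation to the Bayes factor for t tests, with hypothetical t values and group sizes:

        # BIC approximation to the Bayes factor for a t test:
        #   BF01 ~= sqrt(N) * (1 + t**2 / df) ** (-N / 2)
        import math

        def bf01_bic(t: float, n_total: int, df: int) -> float:
            """Approximate Bayes factor in favor of H0 from a t statistic."""
            return math.sqrt(n_total) * (1.0 + t * t / df) ** (-n_total / 2.0)

        n = 50  # per group, two groups per condition (illustrative)
        t_A, t_B = 2.8, 0.5  # hypothetical t values in conditions A and B
        print("BF10 condition A:", 1 / bf01_bic(t_A, 2 * n, 2 * n - 2))  # ~4.7
        print("BF01 condition B:", bf01_bic(t_B, 2 * n, 2 * n - 2))      # ~8.8

        # Interaction contrast: the difference of the two effects; with equal
        # standard errors its t value is roughly (t_A - t_B) / sqrt(2).
        t_int = (t_A - t_B) / math.sqrt(2)
        print("BF01 interaction:", bf01_bic(t_int, 4 * n, 4 * n - 4))    # ~3.7

    Good-enough evidence for H1 in condition A and for H0 in condition B, yet the direct test of the interaction favors the null: exactly the inferential trap the Tutorial describes.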

    Model selection in cosmology

    Model selection aims to determine which theoretical models are most plausible given some data, without necessarily considering preferred values of model parameters. A common model selection question is to ask when new data require the introduction of an additional parameter, describing a newly discovered physical effect. We review model selection statistics, then focus on the Bayesian evidence, which implements Bayesian analysis at the level of models rather than parameters. We describe our CosmoNest code, the first computationally efficient implementation of Bayesian model selection in a cosmological context. We apply it to recent WMAP satellite data, examining the need for a perturbation spectral index differing from the scale-invariant (Harrison–Zel'dovich) case.
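    The evidence for a model is its likelihood marginalized over the prior, $Z = \int L(\theta)\,\pi(\theta)\,d\theta$, and the Bayes factor is the ratio of evidences. A toy one-parameter version of the spectral-index question (Python with scipy; this is direct quadrature, not CosmoNest's nested sampling, and the measurement numbers are hypothetical):

        # Compare M0 (n_s fixed at the scale-invariant value 1) against
        # M1 (n_s free with a flat prior) for a Gaussian toy likelihood.
        from scipy.integrate import quad
        from scipy.stats import norm

        ns_hat, sigma = 0.96, 0.015   # hypothetical estimate, not WMAP's
        likelihood = lambda ns: norm.pdf(ns_hat, loc=ns, scale=sigma)

        Z0 = likelihood(1.0)          # M0 has no free parameter

        lo, hi = 0.8, 1.2             # flat prior range for M1
        Z1, _ = quad(lambda ns: likelihood(ns) / (hi - lo), lo, hi)

        print(f"Bayes factor B10 = {Z1 / Z0:.2f}")  # >1 favors a free n_s

    Widening the prior range penalizes M1 (the Occam factor), which is why evidence-based selection can prefer the simpler Harrison–Zel'dovich model even when the best-fit $n_s$ differs from 1.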

    Laser Doppler technology applied to atmospheric environmental operating problems

    Carbon dioxide laser Doppler ground wind data compared very favorably with data from standard anemometers. As a result of these measurements, two breadboard systems were developed for taking research data: a continuous wave velocimeter and a pulsed Doppler system. The scanning continuous wave laser Doppler velocimeter developed for detecting, tracking and measuring aircraft wake vortices was successfully tested at an airport, where it located vortices to an accuracy of 3 meters at a range of 150 meters. The airborne pulsed laser Doppler system was developed to detect and measure clear air turbulence (CAT). This system was tested aboard an aircraft, but jet stream CAT was not encountered. However, low altitude turbulence in cumulus clouds near a mountain range was detected by the system and encountered by the aircraft at the predicted time.

    Laser Doppler dust devil measurements

    A scanning laser Doppler velocimeter (SLDV) system was used to detect, track, and measure the velocity flow field of naturally occurring tornado-like flows (dust devils) in the atmosphere. A general description of the dust devil phenomenon is given along with a description of the test program, measurement system, and data processing techniques used to collect information on the dust devil flow field. The general meteorological conditions occurring during the test program are also described, and the information collected on two selected dust devils is discussed in detail to show the type of information which can be obtained with an SLDV system. The results from these measurements agree well with those of other investigators and illustrate the potential for the SLDV in future endeavors.

    The length of time's arrow

    An unresolved problem in physics is how the thermodynamic arrow of time arises from an underlying time reversible dynamics. We contribute to this issue by developing a measure of time-symmetry breaking, and by using the work fluctuation relations, we determine the time asymmetry of recent single molecule RNA unfolding experiments. We define time asymmetry as the Jensen-Shannon divergence between trajectory probability distributions of an experiment and its time-reversed conjugate. Among other interesting properties, the length of time's arrow bounds the average dissipation and determines the difficulty of accurately estimating free energy differences in nonequilibrium experiments.
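    The Jensen-Shannon divergence between the forward and time-reversed trajectory distributions is symmetric and bounded (by $\ln 2$ in nats), which is what makes it usable as a length for time's arrow. A minimal sketch (Python with numpy; the discrete trajectory-class probabilities are illustrative, not from the RNA experiments):

        # Jensen-Shannon divergence between a forward and a time-reversed
        # trajectory distribution: JSD(P, Q) = (KL(P, M) + KL(Q, M)) / 2
        # with M = (P + Q) / 2.
        import numpy as np

        def jensen_shannon(p, q):
            """JS divergence in nats between two discrete distributions."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            m = 0.5 * (p + q)
            def kl(a, b):
                mask = a > 0  # 0 * log 0 = 0 by convention
                return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        forward = np.array([0.70, 0.20, 0.10])   # illustrative probabilities
        backward = np.array([0.10, 0.30, 0.60])
        d = jensen_shannon(forward, backward)
        print(f"time asymmetry = {d:.3f} nats (0 = symmetric, ln 2 = maximal)")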

    A Wind Driven Warping Instability in Accretion Disks

    A wind passing over a surface may cause an instability in the surface, such as the flapping seen when wind blows across a flag or waves when wind blows across water. We show that when a radially outflowing wind blows across a dense thin rotating disk, an initially flat disk is unstable to warping. When the wind is subsonic, the growth rate depends on the lift generated by the wind and the phase lag between the pressure perturbation and the vertical displacement in the disk caused by drag. When the wind is supersonic, the growth rate is primarily dependent on the form drag caused by the surface. While the radiative warping instability proposed by Pringle is promising for generating warps near luminous accreting objects, we expect the wind-driven instability introduced here would dominate in objects which generate energetic outflows.

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of selecting between the data being consistent with instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data from ground based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criteria are used for model selection. Future space based detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources are the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reverse Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches. Comment: 11 pages, 6 figures
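    For nested models the Savage-Dickey density ratio reduces the Bayes factor to a ratio of densities at the nested value, $B_{01} = p(\theta=\theta_0 \mid d)/\pi(\theta=\theta_0)$, which can be read off posterior samples. A minimal sketch (Python with scipy; the samples stand in for MCMC output, and all numbers are illustrative, not LISA results):

        # Savage-Dickey density ratio for a nested parameter, e.g. theta = a
        # binary's frequency derivative, with theta0 = 0 meaning "no measurable
        # frequency evolution". Illustrative stand-in for real MCMC samples.
        import numpy as np
        from scipy.stats import gaussian_kde, uniform

        rng = np.random.default_rng(0)
        prior = uniform(loc=-1.0, scale=2.0)  # flat prior on [-1, 1]
        posterior_samples = rng.normal(0.3, 0.15, 20_000)

        theta0 = 0.0
        post_at_0 = gaussian_kde(posterior_samples)(theta0)[0]  # posterior KDE
        bf01 = post_at_0 / prior.pdf(theta0)
        print(f"BF01 = {bf01:.3f}  (BF10 = {1 / bf01:.2f})")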