    Markov Chain Monte Carlo Method without Detailed Balance

    Full text link
    We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well. Comment: 5 pages, 5 figures
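
    To make the key distinction concrete, here is a minimal sketch (with a made-up three-state kernel, not the paper's algorithm) checking that a transition matrix can leave the target distribution invariant, i.e. satisfy global balance, while violating detailed balance and carrying a net probability flow around the states.

```python
# Illustrative sketch (not the paper's algorithm): a kernel that satisfies the
# global balance condition pi P = pi without detailed balance pi_i P_ij = pi_j P_ji.
import numpy as np

# Uniform target distribution over three states and a "rotating" kernel that
# pushes probability around a cycle: it leaves pi invariant but is irreversible.
pi = np.array([1/3, 1/3, 1/3])
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

assert np.allclose(P.sum(axis=1), 1.0)                 # rows are proper distributions
print("global balance:", np.allclose(pi @ P, pi))      # True: pi is stationary
flux = pi[:, None] * P                                 # probability flux i -> j
print("detailed balance:", np.allclose(flux, flux.T))  # False: a net stochastic flow exists
```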

    Estimating population cardinal health state valuation models from individual ordinal (rank) health state preference data

    Get PDF
    Ranking exercises have routinely been used as warm-up exercises within health state valuation surveys, yet very little use has been made of the information obtained in this process. Instead, research has focussed upon the analysis of health state valuation data obtained using the visual analogue scale, standard gamble and time trade-off methods. Thurstone’s law of comparative judgement postulates a stable relationship between ordinal and cardinal preferences, based upon the information provided by pairwise choices. McFadden proposed that this relationship could be modelled by estimating conditional logistic regression models where alternatives had been ranked. In this paper we report the estimation of such models for the Health Utilities Index Mark 2 (HUI2) and the SF-6D. The results are compared to the conventional regression models estimated from standard gamble data, and to the observed mean standard gamble health state valuations. For both the HUI2 and the SF-6D, the models estimated using rank data are broadly comparable to the models estimated on standard gamble data, and their predictive performance is close to that of the standard gamble models. Our research indicates that rank data have the potential to provide useful insights into community health state preferences; however, important questions remain. Keywords: health state valuation; HUI-2; SF-6D
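
    The modelling idea McFadden proposed can be illustrated with a rank-ordered ("exploded") conditional logit likelihood, in which each ranking is treated as a sequence of first-choice problems over the remaining alternatives. The sketch below fits such a model to simulated data; the attribute matrix, coefficients and sample sizes are made-up placeholders, not the HUI2 or SF-6D data.

```python
# Minimal sketch of a rank-ordered ("exploded") conditional logit likelihood.
# The attributes X and rankings below are simulated placeholders, not survey data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sets, n_alts, n_attr = 200, 4, 3
X = rng.normal(size=(n_sets, n_alts, n_attr))       # X[i, j, :]: attributes of alternative j
true_beta = np.array([1.0, -0.5, 0.25])
util = X @ true_beta + rng.gumbel(size=(n_sets, n_alts))
rankings = np.argsort(-util, axis=1)                # simulated best-to-worst rankings

def neg_log_likelihood(beta):
    v = X @ beta                                    # systematic utilities
    ll = 0.0
    for i in range(n_sets):
        remaining = list(rankings[i])
        # "Explode" the ranking: each position is a first choice among what is left.
        while len(remaining) > 1:
            top = remaining[0]
            ll += v[i, top] - np.log(np.sum(np.exp(v[i, remaining])))
            remaining = remaining[1:]
    return -ll

fit = minimize(neg_log_likelihood, x0=np.zeros(n_attr), method="BFGS")
print("estimated coefficients:", fit.x)             # should be close to true_beta
```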

    Mountain trail formation and the active walker model

    Full text link
    We extend the active walker model to address the formation of paths on gradients, which have been observed to take a zigzag form. Our extension includes a new rule that prohibits direct descent or ascent on steep inclines, simulating aversion to falling. A further augmentation of the model stops walkers from changing direction very rapidly, as that would also likely lead to a fall. The extended model predicts paths with forms qualitatively similar to the observed trails, but only if the terms suppressing sudden direction changes are included. The need to include terms that stop rapid direction change when simulating mountain trails indicates that a similar rule should also be included in the standard active walker model. Comment: Introduction improved. Analysis of discretization errors added. Calculations from alternative scheme included.
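
    The two added rules can be mimicked in a toy walker, sketched below: candidate headings whose climb rate exceeds a grade limit are forbidden, and a quadratic penalty discourages sharp turns. This is an illustrative caricature, not the paper's model; the slope, grade limit, penalty weight and noise level are all arbitrary assumptions.

```python
# Toy sketch (not the paper's exact model) of a walker on a uniform incline that
# (i) avoids moving straight up or down a steep slope and (ii) is penalised for
# sharp changes of heading -- the two rules discussed in the abstract.
import numpy as np

slope = 0.6                      # terrain height h(x, y) = slope * y (assumed)
max_grade = 0.3                  # steepest climb per unit step the walker accepts (assumed)
turn_penalty = 2.0               # cost weight on changes of heading (assumed)
rng = np.random.default_rng(1)

pos = np.array([0.0, 0.0])
heading = 0.0                    # radians; the walker starts moving across the slope
path = [pos.copy()]

for _ in range(400):
    candidates = heading + np.linspace(-np.pi / 3, np.pi / 3, 25)
    steps = np.stack([np.cos(candidates), np.sin(candidates)], axis=1)
    climb = slope * steps[:, 1]                          # height gained per unit step
    cost = -steps[:, 1]                                  # the walker wants to gain altitude
    cost += turn_penalty * (candidates - heading) ** 2   # suppress sudden direction changes
    cost[np.abs(climb) > max_grade] = np.inf             # forbid direct steep ascent/descent
    cost += 0.1 * rng.normal(size=cost.size)             # stochastic "active walker" noise
    best = int(np.argmin(cost))
    heading = candidates[best]
    pos = pos + steps[best]
    path.append(pos.copy())

path = np.array(path)            # the walker ascends obliquely rather than straight up the fall line
```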

    A Bayesian approach to the follow-up of candidate gravitational wave signals

    Full text link
    Ground-based gravitational wave laser interferometers (LIGO, GEO-600, Virgo and Tama-300) have now reached high sensitivity and duty cycle. We present a Bayesian evidence-based approach to the search for gravitational waves, in particular aimed at the follow-up of candidate events generated by the analysis pipeline. We introduce and demonstrate an efficient method to compute the evidence and the odds ratio between different models, and illustrate this approach using the specific case of the gravitational wave signal generated during the inspiral phase of binary systems, modelled at the leading quadrupole Newtonian order, in synthetic noise. We show that the method is effective in detecting signals at the detection threshold and is robust against (some types of) instrumental artefacts. The computational efficiency of this method makes it scalable to the analysis of all the triggers generated by the analysis pipelines to search for coalescing binaries in surveys with ground-based interferometers, and to a whole variety of signal waveforms characterised by a larger number of parameters. Comment: 9 pages
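
    The evidence/odds-ratio idea can be shown on a deliberately simple toy problem, sketched below: data consisting of Gaussian noise plus a sinusoid of unknown amplitude, with the likelihood integrated over a flat amplitude prior and compared against a noise-only model. The waveform, prior range and noise level are illustrative assumptions, not the settings of the paper's pipeline.

```python
# Toy evidence / odds-ratio sketch: "signal" model (unknown amplitude of a known
# template) versus "noise only", in white Gaussian noise. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256)
sigma = 1.0                                     # known noise standard deviation (assumed)
template = np.sin(2 * np.pi * 8 * t)            # known waveform shape (assumed)
data = 0.6 * template + sigma * rng.normal(size=t.size)

def log_likelihood(amplitude):
    resid = data - amplitude * template
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Evidence for the signal model: integrate the likelihood over a flat amplitude
# prior on [0, 2] with a trapezoidal rule (adequate for a single parameter).
amps = np.linspace(0.0, 2.0, 2001)
log_l = np.array([log_likelihood(a) for a in amps])
ref = log_l.max()
evidence_signal = np.trapz(np.exp(log_l - ref), amps) / (amps[-1] - amps[0])
log_evidence_signal = np.log(evidence_signal) + ref
log_evidence_noise = log_likelihood(0.0)        # the noise-only model has no free parameter

log_odds = log_evidence_signal - log_evidence_noise   # log Bayes factor (equal model priors)
print("log odds, signal vs noise:", log_odds)
```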

    Using Markov chain Monte Carlo methods for estimating parameters with gravitational radiation data

    Get PDF
    We present a Bayesian approach to the problem of determining parameters for coalescing binary systems observed with laser interferometric detectors. By applying a Markov Chain Monte Carlo (MCMC) algorithm, specifically the Gibbs sampler, we demonstrate the potential that MCMC techniques hold for the computation of posterior distributions of the parameters of the binary system that created the gravitational radiation signal. We describe the use of the Gibbs sampler method, and present examples whereby signals are detected and analyzed from within noisy data. Comment: 21 pages, 10 figures
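
    The Gibbs sampler alternates draws from the full conditional distribution of each parameter given the others. The sketch below applies it to a toy model (normal data with unknown mean and variance, where the conditionals are available in closed form) rather than to the binary-inspiral model of the paper; the priors and data are illustrative assumptions.

```python
# Minimal Gibbs-sampler sketch on a toy model: normal data, unknown mean and
# variance, flat prior on the mean and a 1/var prior on the variance.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=1.5, scale=0.7, size=100)
n, xbar = data.size, data.mean()

n_steps = 5000
mu, var = 0.0, 1.0                       # initial state of the chain
samples = np.empty((n_steps, 2))

for i in range(n_steps):
    # Conditional of mu given var (flat prior on mu): Normal(xbar, var / n).
    mu = rng.normal(xbar, np.sqrt(var / n))
    # Conditional of var given mu (prior 1/var): scaled inverse chi-square with n d.o.f.,
    # drawn as the residual sum of squares divided by a chi-square variate.
    ss = np.sum((data - mu) ** 2)
    var = ss / rng.chisquare(n)
    samples[i] = mu, var

burn = 1000
print("posterior mean of mu:", samples[burn:, 0].mean())
print("posterior mean of sigma:", np.sqrt(samples[burn:, 1]).mean())
```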

    A Bayesian approach to discrete object detection in astronomical datasets

    Full text link
    A Bayesian approach is presented for detecting and characterising the signal from discrete objects embedded in a diffuse background. The approach centres around the evaluation of the posterior distribution for the parameters of the discrete objects, given the observed data, and defines the theoretically-optimal procedure for parametrised object detection. Two alternative strategies are investigated: the simultaneous detection of all the discrete objects in the dataset, and the iterative detection of objects. In both cases, the parameter space characterising the object(s) is explored using Markov-Chain Monte-Carlo sampling. For the iterative detection of objects, another approach is to locate the global maximum of the posterior at each iteration using a simulated annealing downhill simplex algorithm. The techniques are applied to a two-dimensional toy problem consisting of Gaussian objects embedded in uncorrelated pixel noise. A cosmological illustration of the iterative approach is also presented, in which the thermal and kinetic Sunyaev-Zel'dovich effects from clusters of galaxies are detected in microwave maps dominated by emission from primordial cosmic microwave background anisotropies. Comment: 20 pages, 12 figures, accepted by MNRAS; contains some additional material in response to referee's comments
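
    The toy problem described in the abstract (a Gaussian object in uncorrelated pixel noise) is easy to set up. The sketch below only locates the posterior maximum with a plain downhill-simplex (Nelder-Mead) search under flat priors, whereas the paper explores the full posterior with MCMC and combines the simplex with simulated annealing; the object parameters and noise level are illustrative.

```python
# Sketch of the toy problem: a circular Gaussian object in uncorrelated pixel noise,
# with the posterior maximum located by a downhill-simplex (Nelder-Mead) search.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
ny, nx, sigma_noise = 64, 64, 1.0
yy, xx = np.mgrid[0:ny, 0:nx]

def model(params):
    x0, y0, amp, width = params
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * width ** 2))

truth = np.array([30.0, 22.0, 3.0, 4.0])          # x0, y0, amplitude, width (assumed)
image = model(truth) + sigma_noise * rng.normal(size=(ny, nx))

def neg_log_posterior(params):
    if params[2] <= 0 or params[3] <= 0:           # flat priors, positivity only
        return np.inf
    resid = image - model(params)
    return 0.5 * np.sum(resid ** 2) / sigma_noise ** 2

start = np.array([32.0, 32.0, 1.0, 3.0])
fit = minimize(neg_log_posterior, start, method="Nelder-Mead")
print("recovered object parameters:", fit.x)       # close to `truth` for a bright object
```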

    Bayesian coherent analysis of in-spiral gravitational wave signals with a detector network

    Full text link
    The present operation of the ground-based network of gravitational-wave laser interferometers in "enhanced" configuration brings the search for gravitational waves into a regime where detection is highly plausible. The development of techniques that allow us to discriminate a signal of astrophysical origin from instrumental artefacts in the interferometer data, and to extract the full range of information, is one of the primary goals of current work. Here we report the details of a Bayesian approach to the problem of inference for gravitational wave observations using a network of instruments, for the computation of the Bayes factor between two hypotheses and the evaluation of the marginalised posterior density functions of the unknown model parameters. The numerical algorithm to tackle the notoriously difficult problem of the evaluation of large multi-dimensional integrals is based on a technique known as Nested Sampling, which provides an attractive alternative to more traditional Markov-chain Monte Carlo (MCMC) methods. We discuss the details of the implementation of this algorithm and its performance against a Gaussian model of the background noise, considering the specific case of the signal produced by the in-spiral of binary systems of black holes and/or neutron stars, although the method is completely general and can be applied to other classes of sources. We also demonstrate the utility of this approach by introducing a new coherence test to distinguish between the presence of a coherent signal of astrophysical origin in the data of multiple instruments and the presence of incoherent accidental artefacts, and by examining the effects on the estimation of the source parameters as a function of the number of instruments in the network. Comment: 22 pages
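
    The core of nested sampling is easy to state: keep a set of "live" points drawn from the prior, repeatedly discard the lowest-likelihood one, accumulate its contribution to the evidence, and replace it with a new prior draw constrained to higher likelihood. The sketch below does this for a one-parameter toy likelihood, using brute-force rejection for the constrained draw; production codes replace that step with smarter constrained sampling, and every number here is an illustrative choice.

```python
# Minimal nested-sampling sketch on a one-parameter toy problem (Gaussian
# likelihood, flat prior on [-10, 10]); the analytic log-evidence is log(1/20) ~ -3.0.
import numpy as np

rng = np.random.default_rng(5)

def log_like(theta):
    return -0.5 * ((theta - 2.0) / 0.5) ** 2 - 0.5 * np.log(2 * np.pi * 0.5 ** 2)

def prior_sample(size=None):
    return rng.uniform(-10.0, 10.0, size)

n_live, n_iter = 200, 1200
live = prior_sample(n_live)
live_logl = log_like(live)

log_z = -np.inf                           # running log-evidence
log_x = 0.0                               # log of the prior volume still enclosed

for i in range(n_iter):
    worst = int(np.argmin(live_logl))
    log_x_new = -(i + 1) / n_live         # expected log-volume after this iteration
    log_w = np.log(np.exp(log_x) - np.exp(log_x_new))       # width of the discarded shell
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    log_x = log_x_new
    # Replace the worst point with a prior draw above the likelihood threshold
    # (brute-force rejection; real implementations do this more cleverly).
    while True:
        candidate = prior_sample()
        if log_like(candidate) > live_logl[worst]:
            break
    live[worst], live_logl[worst] = candidate, log_like(candidate)

# Add the contribution of the remaining live points.
log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_logl - live_logl.max())))
                     + live_logl.max() + log_x)
print("log evidence:", log_z)             # should land near -3.0
```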

    Relic gravitational waves in the light of 7-year Wilkinson Microwave Anisotropy Probe data and improved prospects for the Planck mission

    Full text link
    The new release of data from Wilkinson Microwave Anisotropy Probe improves the observational status of relic gravitational waves. The 7-year results enhance the indications of relic gravitational waves in the existing data and change for the better the prospects of confident detection of relic gravitational waves by the currently operating Planck satellite. We apply to WMAP7 data the same methods of analysis that we used earlier [W. Zhao, D. Baskaran, and L.P. Grishchuk, Phys. Rev. D 80, 083005 (2009)] with WMAP5 data. We also revised by the same methods our previous analysis of WMAP3 data. It follows from the examination of consecutive WMAP data releases that the maximum likelihood value of the quadrupole ratio $R$, which characterizes the amount of relic gravitational waves, increases up to $R = 0.264$, and the interval separating this value from the point $R = 0$ (the hypothesis of no gravitational waves) increases up to a $2\sigma$ level. The primordial spectra of density perturbations and gravitational waves remain blue in the relevant interval of wavelengths, but the spectral indices increase up to $n_s = 1.111$ and $n_t = 0.111$. Assuming that the maximum likelihood estimates of the perturbation parameters that we found from WMAP7 data are the true values of the parameters, we find that the signal-to-noise ratio $S/N$ for the detection of relic gravitational waves by the Planck experiment increases up to $S/N = 4.04$, even under pessimistic assumptions with regard to residual foreground contamination and instrumental noises. We comment on theoretical frameworks that, in the case of success, will be accepted or decisively rejected by the Planck observations. Comment: 27 pages, 12 (colour) figures. Published in Phys. Rev. D. V.3: modifications made to reflect the published version.