
    Catching Supermassive Black Hole Binaries Without a Net

    The gravitational wave signals from coalescing supermassive black hole binaries are prime targets for the Laser Interferometer Space Antenna (LISA). With optimal data-processing techniques, the LISA observatory should be able to detect black hole mergers anywhere in the Universe. The challenge is to find ways to dig the signals out of a combination of instrument noise and the large foreground from stellar-mass binaries in our own galaxy. The standard procedure of matched filtering against a grid of templates can be computationally prohibitive, especially when the black holes are spinning or the mass ratio is large. Here we develop an alternative approach based on Metropolis-Hastings sampling and simulated annealing that is orders of magnitude cheaper than a grid search. We demonstrate our approach on simulated LISA data streams that contain the signals from binary systems of Schwarzschild black holes, embedded in instrument noise and a foreground containing 26 million galactic binaries. The search algorithm is able to accurately recover the 9 parameters that describe the black hole binary without first having to remove any of the bright foreground sources, even when the black hole system has low signal-to-noise. Comment: 4 pages, 3 figures; refined search algorithm, added low-SNR example
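The combination of Metropolis-Hastings sampling with simulated annealing described above can be illustrated with a minimal sketch. This is not the authors' LISA pipeline; the annealing schedule, step size, and the toy two-parameter likelihood (a noisy linear signal) are all illustrative assumptions.

```python
import numpy as np

def annealed_metropolis(log_like, x0, n_steps=5000, step=0.05,
                        t_start=10.0, t_end=1.0, seed=0):
    """Metropolis-Hastings with a geometric annealing schedule.

    The log-likelihood is divided by a temperature T decaying from
    t_start to t_end; the flattened surface early on lets the chain
    escape secondary maxima before settling into the global peak.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    ll = log_like(x)
    best_x, best_ll = x.copy(), ll
    for i in range(n_steps):
        t = t_start * (t_end / t_start) ** (i / (n_steps - 1))
        prop = x + step * rng.standard_normal(x.shape)
        ll_prop = log_like(prop)
        # Metropolis accept/reject on the tempered likelihood
        if np.log(rng.random()) < (ll_prop - ll) / t:
            x, ll = prop, ll_prop
            if ll > best_ll:
                best_x, best_ll = x.copy(), ll
    return best_x

# Toy problem: recover slope and intercept of a noisy linear "signal".
rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 100)
data = 2.0 * xs - 1.0 + 0.2 * rng.standard_normal(xs.size)
log_like = lambda p: -0.5 * np.sum((data - (p[0] * xs + p[1])) ** 2) / 0.04

fit = annealed_metropolis(log_like, x0=[0.0, 0.0])
```

The same stochastic-search structure scales to the 9-dimensional binary parameter space far more cheaply than a template grid, since cost grows with chain length rather than with the grid's exponential volume.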

    Mining Frequent Graph Patterns with Differential Privacy

    Discovering frequent graph patterns in a graph database offers valuable information in a variety of applications. However, if the graph dataset contains sensitive data of individuals, such as mobile phone-call graphs and web-click graphs, releasing discovered frequent patterns may present a threat to the privacy of individuals. Differential privacy has recently emerged as the de facto standard for private data analysis due to its provable privacy guarantee. In this paper we propose the first differentially private algorithm for mining frequent graph patterns. We first show that previous techniques on differentially private discovery of frequent itemsets cannot apply to mining frequent graph patterns, due to the inherent complexity of handling structural information in graphs. We then address this challenge by proposing a Markov chain Monte Carlo (MCMC) sampling based algorithm. Unlike previous work on frequent itemset mining, our techniques do not rely on the output of a non-private mining algorithm. Instead, we observe that both frequent graph pattern mining and the guarantee of differential privacy can be unified into an MCMC sampling framework. In addition, we establish the privacy and utility guarantees of our algorithm and propose an efficient neighboring pattern counting technique as well. Experimental results show that the proposed algorithm is able to output frequent patterns with good precision.
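The paper's graph-pattern sampler is considerably more involved, but the underlying idea of unifying private selection with MCMC can be sketched as a Metropolis chain whose stationary distribution is the exponential mechanism, P(c) ∝ exp(ε·u(c)/2Δu). Everything here is a toy assumption: the candidate "patterns", their support counts as utility, and sensitivity Δu = 1.

```python
import math
import random

def private_select(candidates, utility, epsilon, n_steps=100, rng=None):
    """Draw one candidate approximately from the exponential mechanism,
    P(c) proportional to exp(epsilon * u(c) / 2), using a Metropolis
    chain with uniform proposals (utility sensitivity assumed to be 1)."""
    rng = rng or random.Random()
    current = rng.choice(candidates)
    for _ in range(n_steps):
        prop = rng.choice(candidates)
        delta = utility[prop] - utility[current]
        # Accept with probability min(1, exp(eps * delta / 2)).
        if math.log(rng.random()) < epsilon * delta / 2.0:
            current = prop
    return current

# Toy candidate "patterns" with support counts as utility.
support = {"A-B": 12, "B-C": 9, "A-C": 4, "C-D": 3}
patterns = list(support)

rng = random.Random(0)
draws = [private_select(patterns, support, epsilon=2.0, rng=rng)
         for _ in range(300)]
```

Note two caveats the paper itself must handle: a finite-length chain only approximates the mechanism (affecting the formal ε accounting), and for graphs the candidate space cannot be enumerated, which is exactly why a neighboring-pattern proposal and counting technique is needed.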

    Using Markov chain Monte Carlo methods for estimating parameters with gravitational radiation data

    We present a Bayesian approach to the problem of determining parameters for coalescing binary systems observed with laser interferometric detectors. By applying a Markov chain Monte Carlo (MCMC) algorithm, specifically the Gibbs sampler, we demonstrate the potential that MCMC techniques hold for the computation of posterior distributions of the parameters of the binary system that created the gravitational radiation signal. We describe the use of the Gibbs sampler method, and present examples whereby signals are detected and analyzed from within noisy data. Comment: 21 pages, 10 figures
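The Gibbs sampler alternates draws from each parameter's full conditional distribution. A generic illustration (not the detectors' waveform model, and assuming flat priors) is the mean and variance of Gaussian data, whose conditionals are Normal and Inverse-Gamma:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=200)   # synthetic "data"
n, ybar = y.size, y.mean()

mu, sig2 = 0.0, 1.0                  # arbitrary starting point
mu_trace, sig2_trace = [], []
for it in range(3000):
    # mu | sig2, y  ~  Normal(ybar, sig2 / n)
    mu = rng.normal(ybar, np.sqrt(sig2 / n))
    # sig2 | mu, y  ~  Inverse-Gamma(n/2, sum((y - mu)^2)/2)
    ss = np.sum((y - mu) ** 2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / ss)
    if it >= 500:                    # discard burn-in
        mu_trace.append(mu)
        sig2_trace.append(sig2)

mu_hat = np.mean(mu_trace)
sig2_hat = np.mean(sig2_trace)
```

The retained traces are (correlated) samples from the joint posterior; their histograms approximate exactly the marginal posterior distributions the abstract refers to.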

    Editorial: Special issue on statistical bioinformatics


    Markov Chain Monte Carlo Method without Detailed Balance

    We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well. Comment: 5 pages, 5 figures
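The key point, global balance with a net flow, can be seen in a much simpler example than the paper's rejection-minimizing construction: a biased walk on a ring. Its transition matrix is doubly stochastic, so the uniform distribution satisfies global balance, yet π_i P(i→i+1) = 0.8/5 ≠ 0.2/5 = π_{i+1} P(i+1→i), so detailed balance is violated and the chain carries a persistent clockwise flow.

```python
import random

N_STATES, P_CW = 5, 0.8          # ring size; clockwise move probability
rng = random.Random(0)

counts = [0] * N_STATES
state = 0
n_steps = 200_000
for _ in range(n_steps):
    # Clockwise with prob 0.8, counter-clockwise with prob 0.2.
    # Rows and columns of the transition matrix both sum to 1
    # (doubly stochastic), so the stationary distribution is uniform
    # even though the chain is not reversible.
    if rng.random() < P_CW:
        state = (state + 1) % N_STATES
    else:
        state = (state - 1) % N_STATES
    counts[state] += 1

freqs = [c / n_steps for c in counts]
```

The occupancy frequencies converge to 1/5 for every state, confirming that reversibility is sufficient but not necessary for sampling the target distribution; the directed flow is what shortens autocorrelation times.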

    A Bayesian approach to the follow-up of candidate gravitational wave signals

    Ground-based gravitational wave laser interferometers (LIGO, GEO-600, Virgo and TAMA-300) have now reached high sensitivity and duty cycle. We present a Bayesian evidence-based approach to the search for gravitational waves, in particular aimed at the follow-up of candidate events generated by the analysis pipeline. We introduce and demonstrate an efficient method to compute the evidence and odds ratio between different models, and illustrate this approach using the specific case of the gravitational wave signal generated during the inspiral phase of binary systems, modelled at the leading quadrupole Newtonian order, in synthetic noise. We show that the method is effective in detecting signals at the detection threshold and is robust against (some types of) instrumental artefacts. The computational efficiency of this method makes it scalable to the analysis of all the triggers generated by the analysis pipelines to search for coalescing binaries in surveys with ground-based interferometers, and to a whole variety of signal waveforms characterised by a larger number of parameters. Comment: 9 pages
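The structure of an evidence/odds-ratio calculation can be sketched with a brute-force grid marginalisation (the paper's own method is far more efficient; the sinusoidal template, noise level, and uniform amplitude prior below are illustrative assumptions). H1 marginalises the unknown amplitude over its prior; H0 is the noise-only model.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
template = np.sin(2.0 * np.pi * 5.0 * t)
sigma = 0.5
data = 1.0 * template + sigma * rng.standard_normal(t.size)  # injected signal

def log_like(a):
    """Gaussian log-likelihood of the data given signal amplitude a."""
    r = data - a * template
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Evidence for H1: marginalise the amplitude over a uniform prior on
# [-2, 2] with a simple Riemann sum (log-sum-exp for stability).
a_grid = np.linspace(-2.0, 2.0, 2001)
log_l = np.array([log_like(a) for a in a_grid])
m = log_l.max()
log_z1 = m + np.log(np.sum(np.exp(log_l - m)) * (a_grid[1] - a_grid[0]) / 4.0)

# Evidence for H0 (noise only): no free parameters, a = 0.
log_z0 = log_like(0.0)

log_odds = log_z1 - log_z0
```

The marginalisation automatically builds in an Occam penalty for the extra parameter, which is why the odds ratio, rather than the maximum likelihood, is the right follow-up statistic for threshold candidates.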

    Quality determination and the repair of poor quality spots in array experiments.

    BACKGROUND: A common feature of microarray experiments is the occurrence of missing gene expression data. These missing values occur for a variety of reasons, in particular because of the filtering of poor quality spots and the removal of undefined values when a logarithmic transformation is applied to negative background-corrected intensities. The efficiency and power of any analysis performed can be substantially reduced by an incomplete matrix of gene intensities. Additionally, most statistical methods require a complete intensity matrix. Furthermore, biases may be introduced into analyses through missing information on some genes. Thus methods for appropriately replacing (imputing) missing data and/or weighting poor quality spots are required. RESULTS: We present a likelihood-based method for imputing missing data or weighting poor quality spots that requires a number of biological or technical replicates. This likelihood-based approach assumes that the data for a given spot arising from each channel of a two-dye (two-channel) cDNA microarray comparison experiment independently come from a three-component mixture distribution, the parameters of which are estimated through use of a constrained EM algorithm. Posterior probabilities of belonging to each component of the mixture distributions are calculated and used to decide whether imputation is required. These posterior probabilities may also be used to construct quality weights that can down-weight poor quality spots in any subsequent analysis. The approach is illustrated using data from an experiment to observe gene expression changes with 24 hr paclitaxel (Taxol) treatment of a human cervical cancer derived cell line (HeLa). CONCLUSION: As the quality of microarray experiments affects downstream processes, it is important to have a reliable and automatic method of identifying poor quality spots and arrays. We propose a method of identifying poor quality spots, and suggest a method of repairing the arrays by either imputing or assigning quality weights to the spots. This repaired data set would be less biased and can be analysed using any of the appropriate statistical methods found in the microarray literature.
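The paper fits a constrained three-component mixture to two-channel intensities; the mechanics of the EM step and the posterior membership probabilities can be seen in this generic sketch for a univariate two-component Gaussian mixture (the synthetic data and initialisation are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "spot intensities": a low (poor-quality) and a high component.
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(6.0, 1.0, 150)])

# Initialise two components from the data quartiles.
mu = np.percentile(x, [25, 75]).astype(float)
var = np.array([x.var(), x.var()])
w = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: posterior probability of each component for each point.
    dens = (w / np.sqrt(2 * np.pi * var)
            * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted parameter updates.
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

post = resp   # posterior membership probabilities, one row per point
```

In the imputation setting, rows of `post` play exactly the role described in the abstract: a spot with high posterior probability of belonging to the "poor quality" component is flagged for imputation or down-weighted in downstream analysis.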

    Testing for double inflation with WMAP

    With the WMAP data we can now begin to test realistic models of inflation involving multiple scalar fields. These naturally lead to correlated adiabatic and isocurvature (entropy) perturbations with a running spectral index. We present the first full (9-parameter) likelihood analysis of double inflation with WMAP data and find that, despite the extra freedom, supersymmetric hybrid potentials are strongly constrained, with less than a 7% correlated isocurvature component allowed when standard priors are imposed on the cosmological parameters. As a result we also find that the Akaike and Bayesian model selection criteria rather strongly prefer single-field inflation, just as equivalent analysis prefers a cosmological constant over dynamical dark energy in the late universe. It appears that simplicity is the best guide to our universe. Comment: 7 pages, 6 figures
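The shape of an upper-limit statement like "less than 7% isocurvature allowed" is a one-dimensional slice of the likelihood analysis. This toy sketch (not the WMAP likelihood; the templates, noise level, and flat prior are invented stand-ins) grids the likelihood over a correlated-component fraction f and reads the 95% credible upper limit off the cumulative posterior:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
s_adi = np.sin(np.linspace(0.0, 20.0, n))     # stand-in "adiabatic" template
s_iso = np.cos(np.linspace(0.0, 20.0, n))     # stand-in "isocurvature" template
sigma = 0.5
data = s_adi + sigma * rng.standard_normal(n)  # no isocurvature injected

def log_like(f):
    """Gaussian log-likelihood for an isocurvature fraction f."""
    r = data - (s_adi + f * s_iso)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Flat prior on f in [0, 1]; normalise the posterior on a grid and read
# the 95% upper limit from the cumulative distribution.
f_grid = np.linspace(0.0, 1.0, 1001)
log_l = np.array([log_like(f) for f in f_grid])
post = np.exp(log_l - log_l.max())
post /= post.sum()
upper95 = f_grid[np.searchsorted(np.cumsum(post), 0.95)]
```

Because the data were generated with no isocurvature component, the posterior piles up against f = 0 and the upper limit comes out small, mirroring the structure (though not the numbers) of the constraint quoted in the abstract.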

    Direct reconstruction of the quintessence potential

    We describe an algorithm which directly determines the quintessence potential from observational data, without using an equation-of-state parametrisation. The strategy is to numerically determine observational quantities as a function of the expansion coefficients of the quintessence potential, which are then constrained using a likelihood approach. We further impose a model selection criterion, the Bayesian Information Criterion, to determine the appropriate level of the potential expansion. In addition to the potential parameters, the present-day quintessence field velocity is kept as a free parameter. Our investigation contains unusual model types, including a scalar field moving on a flat potential, or in an uphill direction, and is general enough to permit oscillating quintessence field models. We apply our method to the 'gold' Type Ia supernovae sample of Riess et al. (2004), confirming the pure cosmological constant model as the best description of current supernovae luminosity-redshift data. Our method is optimal for extracting quintessence parameters from future data. Comment: 9 pages, RevTeX4, with many incorporated figures
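Using the BIC to truncate a potential expansion can be sketched schematically (this is not the paper's cosmological fit; the mock "potential" data and polynomial basis are illustrative assumptions). Each extra expansion coefficient pays a log N penalty, so the criterion stops the expansion at the order the data actually support:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
phi = np.linspace(-1.0, 1.0, n)
# Mock data drawn from a genuinely linear "potential" V(phi) = 1 + 0.8 phi.
v_obs = 1.0 + 0.8 * phi + 0.05 * rng.standard_normal(n)

bic = {}
for deg in range(5):
    coef = np.polyfit(phi, v_obs, deg)
    rss = np.sum((v_obs - np.polyval(coef, phi)) ** 2)
    k = deg + 1                         # number of expansion coefficients
    # BIC for Gaussian errors: n*ln(RSS/n) + k*ln(n)
    bic[deg] = n * np.log(rss / n) + k * np.log(n)

best_deg = min(bic, key=bic.get)        # BIC-preferred expansion order
```

The constant model underfits badly while higher orders buy only noise-level improvements at a log N cost per coefficient, so the linear expansion wins; this is the same logic by which the paper's analysis settles on the cosmological-constant-like description.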

    Bayesian coherent analysis of in-spiral gravitational wave signals with a detector network

    The present operation of the ground-based network of gravitational-wave laser interferometers in "enhanced" configuration brings the search for gravitational waves into a regime where detection is highly plausible. The development of techniques that allow us to discriminate a signal of astrophysical origin from instrumental artefacts in the interferometer data and to extract the full range of information are among the primary goals of the current work. Here we report the details of a Bayesian approach to the problem of inference for gravitational wave observations using a network of instruments, for the computation of the Bayes factor between two hypotheses and the evaluation of the marginalised posterior density functions of the unknown model parameters. The numerical algorithm to tackle the notoriously difficult problem of the evaluation of large multi-dimensional integrals is based on a technique known as Nested Sampling, which provides an attractive alternative to more traditional Markov chain Monte Carlo (MCMC) methods. We discuss the details of the implementation of this algorithm and its performance against a Gaussian model of the background noise, considering the specific case of the signal produced by the in-spiral of binary systems of black holes and/or neutron stars, although the method is completely general and can be applied to other classes of sources. We also demonstrate the utility of this approach by introducing a new coherence test to distinguish between the presence of a coherent signal of astrophysical origin in the data of multiple instruments and the presence of incoherent accidental artefacts, and we examine the effects on the estimation of the source parameters as a function of the number of instruments in the network. Comment: 22 pages
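Nested Sampling itself is compact enough to sketch on a one-dimensional toy (uniform prior on [-5, 5], Gaussian likelihood), where the evidence is known analytically. This is a minimal illustration, not the paper's implementation; in particular the naive rejection step used here to sample the prior above the likelihood constraint is replaced by much smarter constrained samplers in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
LO, HI = -5.0, 5.0                      # uniform prior support

def log_like(x):
    return -0.5 * x ** 2                # unnormalised Gaussian likelihood

# Analytic evidence: (1/10) * integral exp(-x^2/2) dx = sqrt(2*pi)/10.
n_live, n_iter = 100, 600
live = rng.uniform(LO, HI, n_live)
live_ll = log_like(live)

log_z = -np.inf
x_prev = 1.0                            # remaining prior mass
for i in range(1, n_iter + 1):
    worst = np.argmin(live_ll)
    ll_min = live_ll[worst]
    x_i = np.exp(-i / n_live)           # deterministic shrinkage estimate
    # Accumulate Z += L_min * (X_{i-1} - X_i) in log space.
    log_z = np.logaddexp(log_z, ll_min + np.log(x_prev - x_i))
    x_prev = x_i
    # Replace the worst point: rejection-sample the prior above ll_min.
    while True:
        cand = rng.uniform(LO, HI)
        if log_like(cand) > ll_min:
            live[worst], live_ll[worst] = cand, log_like(cand)
            break

# Add the contribution of the remaining live points.
log_z = np.logaddexp(log_z, np.log(x_prev) + np.log(np.mean(np.exp(live_ll))))
```

Each iteration compresses the remaining prior mass by a factor of roughly e^{-1/N}, so the algorithm climbs the likelihood in geometric prior-volume steps, which is what makes it effective on the sharply peaked, multi-dimensional integrals the abstract describes.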