
    Random template banks and relaxed lattice coverings

    Template-based searches for gravitational waves are often limited by the computational cost associated with searching large parameter spaces. The study of efficient template banks, in the sense of using the smallest number of templates, is therefore of great practical interest. The "traditional" approach to template-bank construction requires every point in parameter space to be covered by at least one template, which rapidly becomes inefficient at higher dimensions. Here we study an alternative approach, where any point in parameter space is covered only with a given probability < 1. We find that by giving up complete coverage in this way, large reductions in the number of templates are possible, especially at higher dimensions. The prime examples studied here are "random template banks", in which templates are placed randomly with uniform probability over the parameter space. In addition to its obvious simplicity, this method turns out to be surprisingly efficient. We analyze the statistical properties of such random template banks, and compare their efficiency to traditional lattice coverings. We further study "relaxed" lattice coverings (using Z^n and A_n^* lattices), which similarly cover any signal location only with probability < 1. The relaxed A_n^* lattice is found to yield the most efficient template banks at low dimensions (n < 10), while random template banks increasingly outperform any other method at higher dimensions.
    Comment: 13 pages, 10 figures, submitted to PR
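    The random-bank construction lends itself to a quick numerical check. The sketch below (our illustration, not the paper's code) places N templates uniformly at random in an n-dimensional unit cube with a flat Euclidean metric and measures the fraction of random signal locations that fall within a mismatch radius R of at least one template. N is chosen from the standard prediction eta = 1 - exp(-N v) for uniform random placement, where v is the volume of a mismatch ball; the metric, the radius R, and the target coverage eta = 0.9 are all illustrative assumptions.

```python
import math
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def ball_volume(n, R):
    """Volume of an n-dimensional Euclidean ball of radius R."""
    return math.pi ** (n / 2) * R ** n / math.gamma(n / 2 + 1)

def measured_coverage(n, N, R, trials=20000):
    """Monte Carlo estimate of the covered fraction of the unit cube."""
    # boxsize=1.0 makes the cube periodic, sidestepping boundary effects
    bank = cKDTree(rng.uniform(size=(N, n)), boxsize=1.0)
    dists, _ = bank.query(rng.uniform(size=(trials, n)))
    return float((dists <= R).mean())

eta, R = 0.9, 0.1  # illustrative target coverage and mismatch radius
for n in (2, 3, 4):
    # invert eta = 1 - exp(-N v) for the required number of random templates
    N = math.ceil(-math.log(1.0 - eta) / ball_volume(n, R))
    print(f"n={n}: N={N} templates, measured coverage ~ {measured_coverage(n, N, R):.3f}")
```

    With periodic boundaries the measured values should track the 1 - exp(-N v) prediction closely, which is the mechanism behind the efficiency gain: the required N grows only with the log of the tolerated miss probability, not with a full-covering requirement.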

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm for modelling and solving difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields, such as reliable computation, offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions.
    Comment: Revised version
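    As a concrete (and entirely hypothetical) illustration of the closure idea, consider a toy CSP in which one datum a is known only to lie in a small uncertainty set. The sketch below enumerates the realisations of a, solves each realised CSP, and takes the union of the solution sets, so no solution of the user's genuine problem can be lost; the intersection gives the solutions that survive every realisation. The constraint, domains, and names are our own toy choices, not the paper's framework or case study.

```python
# Toy uncertain CSP: coefficient `a` is only known to lie in {1, 2, 3}.
X = range(0, 6)        # domain of the decision variable x
A = [1, 2, 3]          # uncertainty set for the erroneous datum a

def solutions(a):
    """Solution set of the realised (certain) CSP for a fixed value of a."""
    return {x for x in X if x + a <= 5 and x % a == 0}

# Union over realisations: a reliable enclosure of the true solution set.
closure = set().union(*(solutions(a) for a in A))
# Intersection: solutions valid whatever value a turns out to have.
robust = set.intersection(*(solutions(a) for a in A))

print("closure of solutions:", sorted(closure))   # {0, 1, 2, 3, 4}
print("robust solutions:", sorted(robust))        # {0}
```

    The point of the enclosure is reliability: unlike a model that commits to one guessed value of a, the closure never excludes a solution of the problem the user actually has.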

    Learning from networked examples

    Many machine learning algorithms are based on the assumption that training examples are drawn independently. This assumption no longer holds when learning from a networked sample, because two or more training examples may share some common objects, and hence share the features of those objects. We show that the classic approach of ignoring this problem can have a harmful effect on the accuracy of the resulting statistics, and then consider alternatives. One of these is to use only independent examples, discarding all other information; this, however, is clearly suboptimal. We analyze sample error bounds in this networked setting, providing significantly improved results. An important component of our approach is a set of efficient sample weighting schemes, which lead to novel concentration inequalities.
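    One way to picture such a weighting scheme (the toy data and names below are ours, not the paper's) is as a fractional-matching linear program: each example receives a weight in [0, 1], and for every shared object the weights of the examples touching it must sum to at most 1. The optimal total weight then behaves like an effective number of independent examples.

```python
import numpy as np
from scipy.optimize import linprog

# Toy networked sample: 5 examples, each built from shared objects 0..3.
examples = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]
n_objects = 4

# Constraint matrix: one row per object, one column per example.
A = np.array([[1.0 if obj in ex else 0.0 for ex in examples]
              for obj in range(n_objects)])

res = linprog(
    c=-np.ones(len(examples)),         # maximise total weight = minimise its negation
    A_ub=A, b_ub=np.ones(n_objects),   # per-object budget: touching weights sum <= 1
    bounds=[(0.0, 1.0)] * len(examples),
)

print("example weights:", np.round(res.x, 3))
print("effective sample size:", round(-res.fun, 3))
```

    In this toy instance the optimum picks two object-disjoint examples at full weight (effective sample size 2), whereas discarding all but pairwise-independent examples can throw away strictly more information than a well-chosen fractional weighting.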