
    Random template banks and relaxed lattice coverings

    Template-based searches for gravitational waves are often limited by the computational cost associated with searching large parameter spaces. The study of efficient template banks, in the sense of using the smallest number of templates, is therefore of great practical interest. The "traditional" approach to template-bank construction requires every point in parameter space to be covered by at least one template, which rapidly becomes inefficient at higher dimensions. Here we study an alternative approach, where any point in parameter space is covered only with a given probability < 1. We find that by giving up complete coverage in this way, large reductions in the number of templates are possible, especially at higher dimensions. The prime examples studied here are "random template banks", in which templates are placed randomly with uniform probability over the parameter space. In addition to its obvious simplicity, this method turns out to be surprisingly efficient. We analyze the statistical properties of such random template banks, and compare their efficiency to traditional lattice coverings. We further study "relaxed" lattice coverings (using $Z_n$ and $A_n^*$ lattices), which similarly cover any signal location only with probability < 1. The relaxed $A_n^*$ lattice is found to yield the most efficient template banks at low dimensions ($n < 10$), while random template banks increasingly outperform any other method at higher dimensions. Comment: 13 pages, 10 figures, submitted to PR
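    A minimal Monte Carlo sketch of the random-bank idea described above: templates are drawn uniformly over a unit-cube parameter space and the achieved covering probability is estimated empirically. The flat Euclidean metric, the covering radius, and all function and parameter names are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

def coverage_fraction(n_dim, n_templates, radius, n_signals=2000):
    """Fraction of random signal locations lying within `radius`
    (in an assumed flat Euclidean metric) of at least one template."""
    templates = rng.random((n_templates, n_dim))   # uniformly random bank
    signals = rng.random((n_signals, n_dim))       # injected signal points
    nearest = cdist(signals, templates).min(axis=1)
    return float((nearest <= radius).mean())

# Coverage probability achieved by 2000 random templates in 4 dimensions,
# for an illustrative covering radius of 0.1:
print(coverage_fraction(n_dim=4, n_templates=2000, radius=0.1))
```

    Increasing the number of templates drives the estimated coverage toward 1; accepting a coverage probability below 1 is what lets the bank stay much smaller.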

    Exact value for the average optimal cost of bipartite traveling-salesman and 2-factor problems in two dimensions

    We show that, in the bipartite case, the average cost of the traveling-salesman problem in two dimensions, the archetypal problem in combinatorial optimization, is simply related to the average cost of the assignment problem with the same Euclidean, increasing, convex weights. In this way we extend a result already known in one dimension, where exact solutions are available. The recently determined average cost of the assignment problem, when the cost function is the square of the distance between the points, therefore provides the exact prediction $\overline{E_N} = \frac{1}{\pi}\,\log N$ for a large number of points $2N$. As a byproduct of our analysis, the loop-covering problem has the same optimal average cost. We also explain why this result cannot be extended to higher dimensions. We numerically check the exact predictions. Comment: 5 pages, 3 figures
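    A small numerical sketch related to the final sentence: it estimates the average optimal bipartite assignment cost with squared Euclidean weights between two sets of $N$ uniform points, and compares twice that value against the quoted prediction $\frac{1}{\pi}\log N$. The unit-square domain, the trial count, and the function name are assumptions; the doubling reflects the fact that a bipartite tour on $2N$ points decomposes into two perfect matchings, which is how the assignment result feeds the TSP prediction, and agreement is only asymptotic (finite-size corrections are visible at these $N$).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

def avg_assignment_cost(N, trials=20):
    """Monte Carlo average of the optimal bipartite assignment cost with
    squared Euclidean weights between two sets of N uniform points."""
    total = 0.0
    for _ in range(trials):
        red = rng.random((N, 2))      # N "red" points in the unit square
        blue = rng.random((N, 2))     # N "blue" points in the unit square
        cost = cdist(red, blue, metric="sqeuclidean")
        rows, cols = linear_sum_assignment(cost)
        total += cost[rows, cols].sum()
    return total / trials

for N in (100, 200, 400):
    matching = avg_assignment_cost(N)
    # twice the matching cost vs. the predicted growth (1/pi) log N
    print(N, 2 * matching, np.log(N) / np.pi)
```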

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm to model and solve difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both models and solutions, to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, with associated operational semantics, from which practical solution methods for reliable solutions can be derived. Comment: Revised version
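    An illustrative toy, not the paper's formalism: one way to read "reliable" here is that the answer must enclose every assignment that solves at least one realization of the uncertain data, so no solution of the user's genuine problem is lost. The constraint, the domains, and the set of possible values of the datum below are all invented for illustration.

```python
from itertools import product

# Toy uncertain constraint: x + a == y over small finite domains, where the
# datum `a` is known only to lie in a set of possible values (incomplete or
# erroneous measurement). Everything here is invented for illustration.
X = range(5)
Y = range(5)
POSSIBLE_A = {1, 2, 3}   # realizations the uncertain datum may take

def solutions(a):
    """Solution set of one fully determined realization of the problem."""
    return {(x, y) for x, y in product(X, Y) if x + a == y}

# Enclose every assignment that solves at least one realization, so the
# answer remains valid whichever realization is the true one.
closure = set().union(*(solutions(a) for a in POSSIBLE_A))
print(sorted(closure))
```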

    Amorphous Placement and Informed Diffusion for Timely Monitoring by Autonomous, Resource-Constrained, Mobile Sensors

    Personal communication devices are increasingly equipped with sensors for passive monitoring of encounters and surroundings. We envision the emergence of services that enable a community of mobile users carrying such resource-limited devices to query such information at remote locations in the field in which they collectively roam. One approach to implement such a service is directed placement and retrieval (DPR), whereby readings/queries about a specific location are routed to a node responsible for that location. In a mobile, potentially sparse setting, where end-to-end paths are unavailable, DPR is not an attractive solution as it would require the use of delay-tolerant (flooding-based store-carry-forward) routing of both readings and queries, which is inappropriate for applications with data freshness constraints, and which is incompatible with stringent device power/memory constraints. Alternatively, we propose the use of amorphous placement and retrieval (APR), in which routing and field monitoring are integrated through the use of a cache management scheme coupled with an informed exchange of cached samples to diffuse sensory data throughout the network, in such a way that a query answer is likely to be found close to the query origin. We argue that knowledge of the distribution of query targets could be used effectively by an informed cache management policy to maximize the utility of the collective storage of all devices. Using a simple analytical model, we show that the use of informed cache management is particularly important when the mobility model results in a non-uniform distribution of users over the field. We present results from extensive simulations which show that in sparsely connected networks, APR is more cost-effective than DPR, that it provides extra resilience to node failure and packet losses, and that its use of informed cache management yields superior performance.
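    A toy sketch of the informed cache management idea: cached samples are ranked by the (assumed known) probability that their location will be queried, discounted by staleness, and the least useful samples are evicted first. The zones, query-probability table, decay constant, and function names are hypothetical, not the paper's policy.

```python
import math

# Hypothetical knowledge of where queries tend to target.
QUERY_PROB = {"zoneA": 0.6, "zoneB": 0.3, "zoneC": 0.1}
CACHE_SIZE = 4

def utility(zone, age_s, decay=0.01):
    """Expected usefulness of a cached sample: how likely its zone is to
    be queried, discounted by how stale the reading already is."""
    return QUERY_PROB.get(zone, 0.0) * math.exp(-decay * age_s)

def admit(cache, zone, reading, age_s):
    """Insert a sample and evict the least useful entries beyond the budget."""
    cache.append((utility(zone, age_s), zone, reading))
    cache.sort(key=lambda entry: entry[0], reverse=True)  # most useful first
    del cache[CACHE_SIZE:]                                # evict the rest

cache = []
for zone, reading, age in [("zoneA", 21.5, 10), ("zoneC", 19.0, 5),
                           ("zoneB", 20.2, 60), ("zoneA", 22.1, 300),
                           ("zoneC", 18.7, 2)]:
    admit(cache, zone, reading, age)
print(cache)
```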

    Restricted Isometries for Partial Random Circulant Matrices

    In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the $s$-th order restricted isometry constant is small when the number $m$ of samples satisfies $m \gtrsim (s \log n)^{3/2}$, where $n$ is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
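    A minimal numpy sketch of the measurement model described above: the signal is circularly convolved with a random pulse (via the FFT) and only $m$ of the resulting entries are kept. The Rademacher pulse, the randomly chosen kept indices (the abstract's subsampling is nonrandom), and the sample budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_circulant_measure(x, pulse, keep_idx):
    """Circularly convolve x with the pulse (via FFT), then subsample."""
    conv = np.fft.ifft(np.fft.fft(x) * np.fft.fft(pulse)).real
    return conv[keep_idx]

n, s = 1024, 10                                   # pulse length and sparsity
m = int((s * np.log(n)) ** 1.5)                   # budget ~ (s log n)^(3/2)

pulse = rng.choice([-1.0, 1.0], size=n)           # illustrative +/-1 pulse
keep_idx = rng.choice(n, size=m, replace=False)   # kept sample positions

x = np.zeros(n)                                   # an s-sparse test signal
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

y = partial_circulant_measure(x, pulse, keep_idx)
print(m, y.shape)
```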