    Evaluating testing methods by delivered reliability

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the resulting reliability. Operational methods, on the other hand, provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects, as in debug testing, or to assess reliability directly, as in operational testing? Testing methods are compared in a model where program failures are detected and the software is changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
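
    A minimal Monte Carlo sketch of the kind of comparison described above (the model, parameter values, and names below are illustrative assumptions, not the paper's): each defect is detected per test with some probability, detected defects are removed, and the delivered failure rate is whatever survives. Operational testing detects a defect as often as users would trigger it; debug testing is modeled as equally sensitive to every defect.

        import random

        def delivered_failure_rate(detect_probs, fail_rates, tests, rng):
            """Run one testing campaign: each test independently detects
            defect i with probability detect_probs[i]; detected defects are
            removed perfectly. Return the operational failure rate left over."""
            remaining = 0.0
            for d, theta in zip(detect_probs, fail_rates):
                if all(rng.random() >= d for _ in range(tests)):  # never detected
                    remaining += theta
            return remaining

        def main():
            rng = random.Random(1)
            # Operational failure rates: one frequent defect, four rare ones
            # (a skewed profile chosen purely for illustration).
            theta = [0.05, 0.001, 0.001, 0.001, 0.001]
            oper = theta              # operational testing hits defects at user rates
            debug = [0.01] * 5        # debug testing: assumed uniform detectability

            trials, tests = 5000, 100
            def mean_delivered(probs):
                return sum(delivered_failure_rate(probs, theta, tests, rng)
                           for _ in range(trials)) / trials

            print(f"operational: mean delivered failure rate {mean_delivered(oper):.4f}")
            print(f"debug:       mean delivered failure rate {mean_delivered(debug):.4f}")

        if __name__ == "__main__":
            main()

    With this skewed profile, operational testing delivers the lower failure rate because it concentrates effort on the defect users would actually hit; raising the assumed debug detectability tips the comparison the other way, echoing the abstract's point that special cases favor each method.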

    Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs

    In his 1947 paper that inaugurated the probabilistic method, Erdős proved the existence of $2\log n$-Ramsey graphs on $n$ vertices. Matching Erdős' result with a constructive proof is a central problem in combinatorics that has gained significant attention in the literature. The state-of-the-art result was obtained in the celebrated paper by Barak, Rao, Shaltiel and Wigderson [Ann. Math. '12], who constructed a $2^{2^{(\log\log n)^{1-\alpha}}}$-Ramsey graph for some small universal constant $\alpha > 0$. In this work, we significantly improve on the result of Barak et al. and construct $2^{(\log\log n)^c}$-Ramsey graphs for some universal constant $c$. In the language of theoretical computer science, our work resolves the problem of explicitly constructing two-source dispersers for polylogarithmic entropy.
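
    For scale (a back-of-the-envelope comparison, not part of the abstract): writing $m = \log\log n$, the two constructive bounds read

        \[
        \text{Barak et al.:}\quad 2^{2^{m^{1-\alpha}}}
        \qquad\text{vs.}\qquad
        \text{this work:}\quad 2^{m^{c}},
        \]

    so the exponent of the Ramsey parameter drops from $2^{m^{1-\alpha}}$, exponential in a power of $m$, to $m^c$, polynomial in $m$; Erdős' existential bound $2\log n$ remains far smaller still.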

    How (Not) to Cut Your Cheese

    It is well known that a line can intersect at most $2n-1$ unit squares of the $n \times n$ chessboard. Here we consider the three-dimensional version: how many unit cubes of the 3-dimensional cube $[0,n]^3$ can a hyperplane intersect?
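
    The standard argument behind the two-dimensional bound (included here as a reminder, not quoted from the paper): a line enters a new unit square only when it crosses one of the $n-1$ interior vertical or $n-1$ interior horizontal grid lines, so

        \[
        \#\{\text{squares intersected}\} \;\le\; 1 + (n-1) + (n-1) \;=\; 2n-1 .
        \]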

    The Erdös-Ko-Rado theorem for vector spaces

    Let $V$ be an $n$-dimensional vector space over $GF(q)$ and for integers $k \geq t > 0$ let $m_q(n,k,t)$ denote the maximum possible number of subspaces in a $t$-intersecting family $\mathcal{F}$ of $k$-dimensional subspaces of $V$, i.e., $\dim(F \cap F') \geq t$ holds for all $F, F' \in \mathcal{F}$. It is shown that $m_q(n,k,t) = \max\left\{\binom{n-t}{k-t}_q,\, \binom{2k-t}{k}_q\right\}$ for $n \geq 2k-t$, while for $n \leq 2k-t$ trivially $m_q(n,k,t) = \binom{n}{k}_q$ holds.
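
    Here $\binom{n}{k}_q$ denotes the Gaussian binomial coefficient, which counts the $k$-dimensional subspaces of an $n$-dimensional space over $GF(q)$ (a standard definition, added for context):

        \[
        \binom{n}{k}_q \;=\; \prod_{i=0}^{k-1} \frac{q^{\,n-i}-1}{q^{\,k-i}-1},
        \qquad\text{e.g.}\quad
        \binom{4}{2}_2 = \frac{(2^4-1)(2^3-1)}{(2^2-1)(2^1-1)} = \frac{15\cdot 7}{3\cdot 1} = 35,
        \]

    so a $4$-dimensional space over $GF(2)$ has exactly $35$ two-dimensional subspaces.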

    Induced restricted Ramsey theorems for spaces

    The induced restricted versions of the vector space Ramsey theorem and of the Graham-Rothschild parameter set theorem are proved.

    A Characterization of Mixed Unit Interval Graphs

    We give a complete characterization of mixed unit interval graphs, the intersection graphs of closed, open, and half-open unit intervals of the real line. This is a proper superclass of the well-known unit interval graphs. Our result solves a problem posed by Dourado, Le, Protti, Rautenbach and Szwarcfiter (Mixed unit interval graphs, Discrete Math. 312 (2012), 3357-3363). Comment: 17 pages, referees' comments added.

    Non locality, closing the detection loophole and communication complexity

    It is shown that the detection loophole, which arises when trying to rule out local realistic theories as alternatives to quantum mechanics, can be closed if the detection efficiency $\eta$ satisfies $\eta \geq d^{1/2}\, 2^{-0.0035\, d}$, where $d$ is the dimension of the entangled system. Furthermore, it is argued that this exponential decrease of the detector efficiency required to close the detection loophole is almost optimal. The argument is based on a close connection between closing the detection loophole and the amount of classical communication required to simulate quantum correlations when the detectors are perfect. Comment: 4 pages, LaTeX; minor typos corrected.
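
    To get a feel for the exponential decrease (a worked number, assuming the bound as stated): already at dimension $d = 4000$,

        \[
        \eta_{\min} \;=\; d^{1/2}\, 2^{-0.0035\, d}
        \;=\; \sqrt{4000}\cdot 2^{-14}
        \;\approx\; \frac{63.2}{16384}
        \;\approx\; 3.9\times 10^{-3},
        \]

    so a detection efficiency of only about $0.4\%$ suffices to close the loophole at that dimension.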