36 research outputs found

    Group testing with Random Pools: Phase Transitions and Optimal Strategy

    Full text link
    The problem of Group Testing (GT) is to identify the defective items in a set of objects by means of pool queries of the form "Does the pool contain at least one defective item?". The aim is of course to perform detection with the fewest possible queries, a problem with relevant practical applications in different fields, including molecular biology and computer science. Here we study GT in the probabilistic setting, focusing on the regime of small defective probability and a large number of objects, p → 0 and N → ∞. We construct and analyze one-stage algorithms for which we establish the occurrence of a non-detection/detection phase transition resulting in a sharp threshold, M̄, for the number of tests. By optimizing the pool design we construct algorithms whose detection threshold follows the optimal scaling M̄ ∝ Np|log p|. We then consider two-stage algorithms and analyze their performance for different choices of the first-stage pools. In particular, via a proper random choice of the pools, we construct algorithms which attain the optimal value (previously determined in Ref. [16]) for the mean number of tests required for complete detection. We finally discuss the optimal pool design in the case of finite p.
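    The non-adaptive (one-stage) setting described above can be illustrated with a minimal simulation. This sketch uses random pools of fixed size and the standard COMP decoder (an item is cleared if it appears in any negative pool); it is a generic illustration of random-pool group testing, not the specific algorithm or pool design analyzed in the paper.

    ```python
    import random

    def one_stage_group_testing(defective, n, num_pools, pool_size, seed=0):
        """One-stage group testing with random pools and COMP decoding.

        A pool tests positive iff it contains at least one defective item.
        COMP clears every item that appears in some negative pool; the
        returned suspect set is always a superset of the true defectives.
        """
        rng = random.Random(seed)
        pools = [set(rng.sample(range(n), pool_size)) for _ in range(num_pools)]
        results = [bool(pool & defective) for pool in pools]
        suspected = set(range(n))
        for pool, positive in zip(pools, results):
            if not positive:
                suspected -= pool  # items in a negative pool are definitely good
        return suspected

    n = 1000
    defective = {3, 57, 512}
    estimate = one_stage_group_testing(defective, n, num_pools=120, pool_size=100)
    assert defective <= estimate  # COMP never misses a defective
    ```

    With enough pools, the suspect set shrinks to exactly the defective set; the abstract's threshold M̄ quantifies how many tests are needed for this to happen with high probability.
    
    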

    Superselectors: Efficient Constructions and Applications

    Full text link
    We introduce a new combinatorial structure: the superselector. We show that superselectors subsume several important combinatorial structures used in the past few years to solve problems in group testing, compressed sensing, multi-channel conflict resolution and data security. We prove close upper and lower bounds on the size of superselectors and we provide efficient algorithms for their construction. Although our bounds are very general, when they are instantiated on the combinatorial structures that are particular cases of superselectors (e.g., (p,k,n)-selectors, (d,\ell)-list-disjunct matrices, MUT_k(r)-families, FUT(k, a)-families, etc.) they match the best known bounds in terms of the size of the structures (the relevant parameter in the applications). For appropriate values of parameters, our results also provide the first efficient deterministic algorithms for the construction of such structures.

    On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation

    Full text link
    We study classic streaming and sparse recovery problems using deterministic linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the latter also being known as l1-heavy hitters), norm estimation, and approximate inner product. We focus on devising a fixed matrix A in R^{m x n} and a deterministic recovery/estimation procedure which work for all possible input vectors simultaneously. Our results improve upon existing work, the following being our main contributions:
    * A proof that linf/l1 sparse recovery and inner product estimation are equivalent, and that incoherent matrices can be used to solve both problems. Our upper bound for the number of measurements is m = O(eps^{-2} * min{log n, (log n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms by making use of the Fast Johnson-Lindenstrauss transform. Both our running times and number of measurements improve upon previous work. We can also obtain better error guarantees than previous work in terms of a smaller tail of the input vector.
    * A new lower bound for the number of linear measurements required to solve l1/l1 sparse recovery. We show Omega(k/eps^2 + k log(n/k)/eps) measurements are required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.
    * A tight bound of m = Theta(eps^{-2} log(eps^2 n)) on the number of measurements required to solve deterministic norm estimation, i.e., to recover |x|_2 +/- eps|x|_1.
    For all the problems we study, tight bounds are already known for the randomized complexity from previous work, except in the case of l1/l1 sparse recovery, where a nearly tight bound is known. Our work thus aims to study the deterministic complexities of these problems.
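    The tail notation x_{tail(k)} used in the recovery guarantee above is easy to make concrete. This is a small illustrative sketch of the definition only (the vector values are made up for the example), not of the paper's sketching algorithms.

    ```python
    import numpy as np

    def tail(x, k):
        """x projected onto all but its k largest-magnitude coordinates."""
        idx = np.argsort(np.abs(x))[::-1][:k]  # indices of the k heaviest entries
        t = x.copy()
        t[idx] = 0.0
        return t

    x = np.array([10.0, -8.0, 0.5, -0.25, 0.25])
    t = tail(x, 2)            # zeroes out 10.0 and -8.0
    tail_l1 = np.abs(t).sum() # |x_{tail(2)}|_1 = 1.0
    ```

    The l1/l1 guarantee then says the recovered x' satisfies |x - x'|_1 <= (1 + eps) * tail_l1, i.e., recovery error is measured against the mass outside the k heaviest coordinates.
    
    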

    A Specific Test Methodology for Symmetric SRAM-Based FPGAs

    No full text

    Cellular arrays for the solution of graph problems

    No full text

    Dynamic heterogeneity in hydrogen-bonded polymers

    Get PDF
    We report on neutron spin echo experiments on hydrogen-bonded polymers and compare the experimentally found dynamical structure factor with theoretical predictions. Surprisingly, we find that in the melt phase the expected scaling of the Rouse dynamics is not satisfied. We propose an explanation based upon the large spatial volume occupied by the connecting groups. When the effects of these bulky groups on the local friction are taken into account, the usual scaling behavior is restored.

    Sociomateriality and information systems success and failure

    Get PDF
    The aim of this essay is to put forward a performative, sociomaterial perspective on Information Systems (IS) success and failure in organisations by focusing intently upon the discursive-material nature of IS development and use in practice. Through the application of Actor Network Theory (ANT) to the case of an IS that transacts insurance products, we demonstrate the contribution of such a perspective to the understanding of how IS success and failure occur in practice. The manuscript puts our argument forward by first critiquing the existing perspectives on IS success and failure in the literature for their inadequate consideration of the materiality of IS, of its underlying technologies, and of the entanglement of the social and material aspects of IS development and use. From a sociomaterial perspective, IS are not seen as objects that impact organisations one way or another, but instead as relational effects continually enacted in practice. As enactments in practice, IS development and use produce realities of IS success and failure.

    On the wake-up problem in radio networks

    No full text
    Abstract. Radio networks model wireless communication when processing units communicate using one wave frequency. This is captured by the property that multiple messages arriving simultaneously at a node interfere with one another and none of them can be read reliably. We present improved solutions to the problem of waking up such a network. This requires activating all nodes in a scenario where some nodes start to be active spontaneously, while every sleeping node needs to be awakened by successfully receiving a message from a neighbor. Our contributions concern the existence and efficient construction of universal radio synchronizers, which are combinatorial structures introduced in [6] as building blocks of efficient wake-up algorithms. First we show by counting that there are (n, g)-universal synchronizers for g(k) = O(k log k log n). Next we show an explicit construction of (n, g)-universal synchronizers for g(k) = O(k² polylog n). By way of applications, we obtain an existential wake-up algorithm which works in time O(n log² n) and an explicitly instantiated algorithm that works in time O(n ∆ polylog n), where n is the number of nodes and ∆ is the maximum in-degree in the network. Algorithms for leader election and synchronization can be developed on top of wake-up ones, as shown in [7], such that they work in time slower by a factor of O(log n) than the underlying wake-up ones.