
    Computationally Tractable Algorithms for Finding a Subset of Non-defective Items from a Large Population

    In the classical non-adaptive group testing setup, pools of items are tested together, and the main goal of a recovery algorithm is to identify the "complete defective set" given the outcomes of the different group tests. In contrast, the main goal of a "non-defective subset recovery" algorithm is to identify a "subset" of the non-defective items given the test outcomes. In this paper, we present a suite of computationally efficient and analytically tractable non-defective subset recovery algorithms. By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. Our analysis accounts for the impact of both additive noise (false positives) and dilution noise (false negatives). By comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a log² K factor, where K is the number of defective items. We also provide simulation results that compare the relative performance of the different algorithms and provide further insights into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one by one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate. (Comment: In this revision: unified some proofs and reorganized the paper, corrected a small mistake in one of the proofs, added more references.)
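    The abstract's setup can be illustrated with a minimal NumPy sketch. This is not one of the paper's proposed algorithms; it shows only the simplest noiseless baseline rule, under the assumption that an item appearing in at least one negative pool must be non-defective. The pool size and matrix construction are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, T = 100, 5, 40                 # items, defectives, tests
x = np.zeros(N, dtype=bool)          # x[i] = True iff item i is defective
x[rng.choice(N, size=K, replace=False)] = True

# Random pooling matrix: each test includes each item with probability 1/K
A = rng.random((T, N)) < 1.0 / K

# Noiseless test outcomes: a pool is positive iff it contains a defective item
y = (A & x).any(axis=1)

# Baseline rule: declare an item non-defective if it appears in any negative pool
in_negative_pool = A[~y].any(axis=0)
declared_nondefective = np.flatnonzero(in_negative_pool)

# In the noiseless case this rule never mislabels a defective item
assert not x[declared_nondefective].any()
print(f"{len(declared_nondefective)} items declared non-defective from {T} tests")
```

    Under additive or dilution noise this rule breaks down, which is exactly why the paper analyzes noise-robust variants and the number of tests they require.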

    A smart contract system for decentralized borda count voting

    In this article, we propose the first self-tallying decentralized e-voting protocol for a ranked-choice voting system based on Borda count. Our protocol does not need any trusted setup or tallying authority to compute the tally. The voters interact through a publicly accessible bulletin board for executing the protocol in a way that is publicly verifiable. Our main protocol consists of two rounds. In the first round, the voters publish their public keys, and in the second round they publish their randomized ballots. All voters provide Non-interactive Zero-Knowledge (NIZK) proofs to show that they have been following the protocol specification honestly without revealing their secret votes. At the end of the election, anyone, including a third-party observer, will be able to compute the tally without needing any tallying authority. We provide security proofs to show that our protocol guarantees the maximum privacy for each voter. We have implemented our protocol using Ethereum's blockchain as a public bulletin board to record voting operations as publicly verifiable transactions. The experimental data obtained from our tests show the protocol's potential for real-world deployment.
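    Setting the cryptographic machinery aside, the Borda count computed at the end of the protocol can be sketched in a few lines of Python. The function name and the scoring convention (a candidate ranked i-th out of m earns m-1-i points) are illustrative assumptions, not taken from the paper.

```python
def borda_tally(ballots, candidates):
    """Borda count: each ballot is a full ranking, best first.
    A candidate ranked i-th among m candidates earns m-1-i points."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        for i, c in enumerate(ballot):
            scores[c] += m - 1 - i
    return scores

ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_tally(ballots, ["A", "B", "C"]))  # {'A': 5, 'B': 3, 'C': 1}
```

    The point of the protocol is that this tally emerges from the randomized ballots on the bulletin board without any party ever seeing an individual vote in the clear.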

    Deriving Proved Equality Tests in Coq-Elpi: Stronger Induction Principles for Containers in Coq

    We describe a procedure to derive equality tests and their correctness proofs from inductive type declarations in Coq. Programs and proofs are derived compositionally, reusing code and proofs derived previously. There are two key steps. First, we design appropriate induction principles for data types defined using parametric containers. Second, we develop a technique to work around the modularity limitations imposed by the purely syntactic termination check Coq performs on recursive proofs. The unary parametricity translation of inductive data types turns out to be the key to both steps. Last but not least, we provide an implementation of the procedure for the Coq proof assistant based on the Elpi [Dunchev et al., 2015] extension language.

    On the Power of Invariant Tests for Hypotheses on a Covariance Matrix

    The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, Krämer (1985) (for the Durbin-Watson test) and Krämer (2005) (for the Cliff-Ord test) have shown that the power can be very low, in fact can converge to zero, under certain circumstances. Motivated by these results, Martellosio (2010) set out to build a general theory that would explain these findings. Unfortunately, Martellosio (2010) does not achieve this goal, as a substantial portion of his results and proofs suffer from serious flaws. The present paper now builds a theory as envisioned in Martellosio (2010) in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests.

    Testing the Pauli Exclusion Principle for electrons at LNGS

    High-precision experiments have been performed to test the Pauli exclusion principle (PEP) for electrons by searching for anomalous K-series X-rays from a Cu target supplied with electric current. With the highest sensitivity to date, the VIP (VIolation of Pauli Exclusion Principle) experiment set an upper limit at the level of 10⁻²⁹ on the probability that an external electron captured by a Cu atom makes the transition from the 2p state to a 1s state already occupied by two electrons. In a follow-up experiment at Gran Sasso, we aim to increase the sensitivity by two orders of magnitude. We show that the proposed improvement factor is realistic, based on results from recent performance tests of the detectors carried out at the Laboratori Nazionali di Frascati (LNF). (Comment: 8 pages, 5 figures, conference proceedings on TAUP 201)

    Bootstrap Unit Root Tests: Comparison and Extensions

    In this paper, we study and compare the properties of several bootstrap unit root tests recently proposed in the literature. The tests are Dickey-Fuller (DF) or Augmented DF tests, based either on residuals from an autoregression and the block bootstrap (Paparoditis & Politis, 2003), or on first-differenced data and the stationary bootstrap (Swensen, 2003a) or sieve bootstrap (Psaradakis, 2001; Chang & Park, 2003). We extend the analysis by interchanging the data transformations (differences versus residuals), the types of bootstrap, and the presence or absence of a correction for autocorrelation in the tests. We prove that two sieve bootstrap tests based on residuals remain asymptotically valid, thereby completing the proofs of validity for all the types of DF bootstrap tests. In contrast to the literature, which mostly compares the bootstrap tests with an asymptotic test, we compare the bootstrap tests with each other using response surfaces for their size and power in a simulation study. We also investigate how the tests behave when a deterministic trend is accounted for, even in the absence of such a trend in the data. This study leads to the following conclusions: (i) augmented DF tests are always preferred to standard DF tests; (ii) the sieve bootstrap performs slightly better than the block bootstrap; (iii) difference-based and residual-based tests behave similarly in terms of size, although the latter appear more powerful. The results for the response surfaces allow us to make statements about the behaviour of the bootstrap tests as the sample size increases.
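    To make the idea of a difference-based sieve bootstrap DF test concrete, here is a minimal NumPy sketch in the spirit of Psaradakis (2001). The function names, the plain (non-augmented) DF statistic without a constant, and the fixed AR order are illustrative simplifications, not the implementations compared in the paper.

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller t-statistic: regress dy_t on y_{t-1} (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt((resid @ resid / (len(dy) - 1)) / (ylag @ ylag))
    return rho / se

def sieve_bootstrap_df(y, p=4, B=499, seed=0):
    """Difference-based sieve bootstrap: fit AR(p) to the differences,
    rebuild unit-root series from resampled residuals, and return the
    observed DF statistic with its bootstrap p-value."""
    rng = np.random.default_rng(seed)
    dy = np.diff(y)
    n = len(dy)
    # Fit AR(p) to the differences by least squares
    X = np.column_stack([dy[p - j - 1:n - j - 1] for j in range(p)])
    target = dy[p:]
    phi, *_ = np.linalg.lstsq(X, target, rcond=None)
    eps = target - X @ phi
    eps = eps - eps.mean()           # recenter residuals before resampling
    stat = df_stat(y)
    count = 0
    for _ in range(B):
        e = rng.choice(eps, size=n + p)
        d = np.zeros(n + p)
        for t in range(p, n + p):    # regenerate differences from the AR fit
            d[t] = phi @ d[t - p:t][::-1] + e[t]
        # Integrate the differences: the bootstrap series has a unit root
        ystar = np.concatenate([[y[0]], y[0] + np.cumsum(d[p:])])
        if df_stat(ystar) <= stat:   # DF rejects for large negative values
            count += 1
    return stat, (count + 1) / (B + 1)

# Usage on a simulated random walk (the null is true, so rejection is rare)
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))
stat, pval = sieve_bootstrap_df(y, p=2, B=199)
print(f"DF statistic {stat:.2f}, bootstrap p-value {pval:.2f}")
```

    The residual-based variants studied in the paper differ in that the autoregression is fitted to regression residuals rather than to first differences, and the block or stationary bootstrap replaces the i.i.d. residual resampling above.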