
    The Random-Query Model and the Memory-Bounded Coupon Collector


    A Generalized Coupon Collector Problem

    This paper provides an analysis of a generalized version of the coupon collector problem, in which the collector gets $d$ distinct coupons each run and she chooses the one of which she has the fewest so far. In the asymptotic case where the number of coupons $n$ goes to infinity, we show that on average $\frac{n\log n}{d} + \frac{n}{d}(m-1)\log\log n + O(mn)$ runs are needed to collect $m$ sets of coupons. An efficient exact algorithm is also developed for any finite case to compute the average number of needed runs exactly. Numerical examples are provided to verify our theoretical predictions. Comment: 20 pages, 6 figures, preprint
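
    A minimal Monte Carlo sketch of the process described in this abstract may help make the result concrete: each run offers d coupons drawn uniformly without replacement, the collector keeps the one she currently has the fewest of, and the simulated mean number of runs can be compared with the leading terms of the stated asymptotic. The uniform-offer assumption and arbitrary tie-breaking are my own choices, since the abstract does not fix them.

```python
# Monte Carlo sketch of the generalized coupon collector described above.
# Assumptions (not fixed by the abstract): the d coupons per run are drawn
# uniformly without replacement, and "fewest so far" ties break arbitrarily.
import math
import random

def runs_to_collect(n, d, m, rng):
    """Simulate one collection; return runs until every one of the n coupon
    types has been kept at least m times."""
    counts = [0] * n
    incomplete = n                  # coupon types still below m copies
    runs = 0
    while incomplete > 0:
        runs += 1
        offered = rng.sample(range(n), d)             # d distinct coupons this run
        pick = min(offered, key=lambda c: counts[c])  # keep the rarest one so far
        counts[pick] += 1
        if counts[pick] == m:
            incomplete -= 1
    return runs

def asymptotic_estimate(n, d, m):
    """Leading terms of the stated mean: n*log(n)/d + (n/d)*(m-1)*log(log(n))."""
    return n * math.log(n) / d + (n / d) * (m - 1) * math.log(math.log(n))

if __name__ == "__main__":
    rng = random.Random(0)
    n, d, m, trials = 1000, 3, 2, 200
    avg = sum(runs_to_collect(n, d, m, rng) for _ in range(trials)) / trials
    print(f"simulated mean runs: {avg:.1f}")
    print(f"asymptotic estimate (without the O(mn) term): {asymptotic_estimate(n, d, m):.1f}")
```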

    Non-adaptive Group Testing: Explicit bounds and novel algorithms

    We consider some computationally efficient and provably correct algorithms with near-optimal sample-complexity for the problem of noisy non-adaptive group testing. Group testing involves grouping arbitrary subsets of items into pools. Each pool is then tested to identify the defective items, which are usually assumed to be "sparse". We consider non-adaptive randomly pooled measurements, where pools are selected randomly and independently of the test outcomes. We also consider a model where noisy measurements allow for both false negative and false positive test outcomes (and also allow for asymmetric noise and activation noise). We consider three classes of algorithms for the group testing problem (we call them the "Coupon Collector Algorithm", the "Column Matching Algorithms", and the "LP Decoding Algorithms"); the last two classes of algorithms (versions of some of which had been considered before in the literature) were inspired by corresponding algorithms in the Compressive Sensing literature. The second and third of these algorithms have several flavours, dealing separately with the noiseless and noisy measurement scenarios. Our contribution is a novel analysis to derive explicit sample-complexity bounds -- with all constants expressly computed -- for these algorithms as a function of the desired error probability; the noise parameters; the number of items; and the size of the defective set (or an upper bound on it). We also compare these bounds to information-theoretic lower bounds on sample complexity based on Fano's inequality and show that the upper and lower bounds are equal up to an explicitly computable universal constant factor (independent of the problem parameters). Comment: Accepted for publication in the IEEE Transactions on Information Theory; current version, Oct. 9, 2012. Main change from v4 to v5: fixed some typos, corrected details of the LP decoding algorithm
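
    As a rough illustration of the non-adaptive setting, the sketch below uses a random Bernoulli pooling matrix and a noiseless column-matching style decoder: an item is declared defective only if it never appears in a negative pool. The matrix density, test count, and decoding rule here are simplified assumptions of mine; the paper's algorithms, noise handling, and explicit constants are considerably more detailed.

```python
# Minimal noiseless non-adaptive group testing sketch with a column-matching
# style decoder. Illustrative only; not the paper's exact algorithms.
import numpy as np

def random_pooling_matrix(T, n, p, rng):
    """T x n Bernoulli(p) test matrix: entry (t, i) = 1 iff item i is in pool t."""
    return (rng.random((T, n)) < p).astype(int)

def test_outcomes(A, defective):
    """Noiseless outcomes: a pool is positive iff it contains a defective item."""
    x = np.zeros(A.shape[1], dtype=int)
    x[list(defective)] = 1
    return (A @ x > 0).astype(int)

def column_matching_decode(A, y):
    """Declare item i defective unless it appears in some negative pool."""
    negative_pools = A[y == 0]
    hidden_by_negative = negative_pools.sum(axis=0) > 0
    return {i for i in range(A.shape[1]) if not hidden_by_negative[i]}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, T = 200, 5, 80                 # items, defectives, tests (illustrative sizes)
    defective = set(rng.choice(n, size=k, replace=False).tolist())
    A = random_pooling_matrix(T, n, p=1.0 / k, rng=rng)
    y = test_outcomes(A, defective)
    estimate = column_matching_decode(A, y)
    print("true defectives:", sorted(defective))
    print("decoded        :", sorted(estimate))
```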

    Bounded Model Checking for Probabilistic Programs

    In this paper we investigate the applicability of standard model checking approaches to verifying properties in probabilistic programming. As the operational model for a standard probabilistic program is a potentially infinite parametric Markov decision process, no direct adaption of existing techniques is possible. Therefore, we propose an on-the-fly approach where the operational model is successively created and verified via a step-wise execution of the program. This approach enables to take key features of many probabilistic programs into account: nondeterminism and conditioning. We discuss the restrictions and demonstrate the scalability on several benchmarks
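
    A toy sketch of the on-the-fly idea: unroll the program's probabilistic state space one step at a time up to a depth bound and accumulate the probability mass that reaches a target. The example program and the `step`/`bounded_reachability` helpers below are hypothetical, and nondeterminism and conditioning, which the paper handles, are omitted here.

```python
# Toy bounded, on-the-fly exploration of a probabilistic program's state space.
# The paper's setting (parametric MDPs with nondeterminism and conditioning)
# is richer; this sketch handles a plain Markov chain given by `step`.
from collections import defaultdict

def bounded_reachability(initial, step, is_target, depth):
    """Probability of hitting a target state within `depth` program steps.
    `step(state)` returns a list of (probability, successor) pairs."""
    frontier = {initial: 1.0}            # probability mass per live state
    reached = 0.0
    for _ in range(depth):
        next_frontier = defaultdict(float)
        for state, mass in frontier.items():
            for prob, succ in step(state):
                if is_target(succ):
                    reached += mass * prob          # absorb target mass
                else:
                    next_frontier[succ] += mass * prob
        frontier = dict(next_frontier)
    return reached                        # lower bound; grows with deeper unrolling

# Hypothetical program: flip a fair coin each step, count heads.
def step(heads):
    return [(0.5, heads + 1), (0.5, heads)]

if __name__ == "__main__":
    # Probability of seeing at least three heads within ten steps.
    print(bounded_reachability(0, step, lambda h: h >= 3, depth=10))
```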