367 research outputs found

    Probabilistic Counting in Generalized Turnstile Models

    Full text link
    Traditionally in the turnstile model of data streams, there is a state vector $x=(x_1,x_2,\ldots,x_n)$ which is updated through a stream of pairs $(i,k)$ where $i\in[n]$ and $k\in\mathbb{Z}$. Upon receiving $(i,k)$, $x_i \gets x_i + k$. A distinct count algorithm in the turnstile model takes one pass of the stream and then estimates $\|x\|_0 = |\{i\in[n]\mid x_i\neq 0\}|$ (a.k.a. $L_0$, the Hamming norm). In this paper, we define a finite-field version of the turnstile model. Let $F$ be any finite field. Then in the $F$-turnstile model, for each $i\in[n]$, $x_i\in F$; for each update $(i,k)$, $k\in F$. The update $x_i \gets x_i + k$ is then computed in the field $F$. A distinct count algorithm in the $F$-turnstile model takes one pass of the stream and estimates $\|x\|_{0;F} = |\{i\in[n]\mid x_i\neq 0_F\}|$. We present a simple distinct count algorithm, called $F$-PCSA, in the $F$-turnstile model for any finite field $F$. The new $F$-PCSA algorithm takes $m\log(n)\log(|F|)$ bits of memory and estimates $\|x\|_{0;F}$ with $O(\frac{1}{\sqrt{m}})$ relative error, where the hidden constant depends on the order of the field. $F$-PCSA is straightforward to implement and has several applications in the real world with different choices of $F$. Most notably, it makes distinct count with deletions as simple as distinct count without deletions.
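
    To make the $F$-turnstile model concrete, the toy Python sketch below maintains an m × levels table of GF(2) cells (taking F = GF(2), so field addition is XOR), routes each update (i, k) to one cell by hashing i, and estimates the number of nonzero coordinates from the position of the first zero cell in each row, in the spirit of classic PCSA. This is only a hedged illustration of the model, not the paper's F-PCSA algorithm; the hashing scheme, the bias-correction constant, and the class name F2PCSA are assumptions made for the example.

```python
import hashlib


class F2PCSA:
    """Toy PCSA-style distinct counter in the F-turnstile model with F = GF(2).

    Each of the m buckets holds `levels` one-bit cells, so the table uses
    roughly m * levels bits, in line with the m * log(n) * log(|F|) space
    bound quoted above.  Names and constants here are illustrative only.
    """

    PHI = 0.77351  # bias-correction constant of classic PCSA (assumed here)

    def __init__(self, m=64, levels=32):
        self.m = m
        self.levels = levels
        self.cells = [[0] * levels for _ in range(m)]

    def _cell(self, i):
        """Hash coordinate i to a (bucket, geometric level) pair."""
        h = int.from_bytes(hashlib.sha256(str(i).encode()).digest()[:8], "big")
        bucket = h % self.m
        rest = h // self.m
        # Level = number of trailing zero bits of the remaining hash.
        level = (rest & -rest).bit_length() - 1 if rest else self.levels - 1
        return bucket, min(level, self.levels - 1)

    def update(self, i, k):
        """F-turnstile update x_i <- x_i + k, computed in GF(2) (i.e. XOR)."""
        bucket, level = self._cell(i)
        self.cells[bucket][level] ^= (k & 1)

    def estimate(self):
        """Estimate ||x||_{0;F}, the number of coordinates with x_i != 0_F."""
        total = 0
        for row in self.cells:
            r = 0
            while r < self.levels and row[r] != 0:
                r += 1
            total += r
        return self.m / self.PHI * 2 ** (total / self.m)
```

    Because +1 is its own inverse in GF(2), a second update(i, 1) cancels the first and coordinate i stops contributing to the estimate, which is how the field arithmetic absorbs deletions without any extra machinery.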

    Non-Mergeable Sketching for Cardinality Estimation

    Get PDF
    Cardinality estimation is perhaps the simplest non-trivial statistical problem that can be solved via sketching. Industrially deployed sketches like HyperLogLog, MinHash, and PCSA are mergeable, which means that large data sets can be sketched in a distributed environment and then merged into a single sketch of the whole data set. In the last decade a variety of sketches have been developed that are non-mergeable, but attractive for other reasons: they are simpler, their cardinality estimates are strictly unbiased, and they have substantially lower variance. We evaluate sketching schemes on a reasonably level playing field, in terms of their memory-variance product (MVP). E.g., a sketch that occupies $5m$ bits and whose relative variance is $2/m$ (standard error $\sqrt{2/m}$) has an MVP of 10. Our contributions are as follows.
    - Cohen [Edith Cohen, 2015] and Ting [Daniel Ting, 2014] independently discovered what we call the Martingale transform for converting a mergeable sketch into a non-mergeable sketch. We present a simpler way to analyze the limiting MVP of Martingale-type sketches (see the sketch after this list).
    - Pettie and Wang proved that the Fishmonger sketch [Seth Pettie and Dingyu Wang, 2021] has the best MVP, $H_0/I_0 \approx 1.98$, among a class of mergeable sketches called "linearizable" sketches. ($H_0$ and $I_0$ are precisely defined constants.) We prove that the Martingale transform is optimal in the non-mergeable world, and that Martingale Fishmonger in particular is optimal among linearizable sketches, with an MVP of $H_0/2 \approx 1.63$. E.g., this is circumstantial evidence that to achieve 1% standard error, we cannot do better than a 2 kilobyte sketch.
    - Martingale Fishmonger is neither simple nor practical. We develop a new mergeable sketch called Curtain that strikes a nice balance between simplicity and efficiency, and prove that Martingale Curtain has limiting MVP $\approx 2.31$. It can be updated with O(1) memory accesses and it has lower empirical variance than Martingale LogLog, a practical non-mergeable version of HyperLogLog.
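
    The Martingale transform mentioned in the first contribution has a simple mechanical form: keep the underlying mergeable sketch as-is, and maintain a running cardinality estimate that is incremented by 1/p whenever an element actually changes the sketch, where p is the probability that a brand-new element would change the sketch in its current state. The Python code below applies this idea to plain LogLog registers; it is a hedged toy version of what the abstract calls Martingale LogLog, not the paper's Martingale Curtain or Martingale Fishmonger, and the class name, register count, and hash choice are assumptions for illustration.

```python
import hashlib


class MartingaleLogLog:
    """Toy Martingale (HIP-style) estimator layered on LogLog registers.

    The registers themselves stay mergeable; the running estimate is what
    makes the combined estimator non-mergeable but unbiased, which is the
    trade-off discussed in the abstract.  Names here are illustrative only.
    """

    def __init__(self, m=256):
        self.m = m
        self.registers = [0] * m
        self.estimate = 0.0
        # p starts at 1: any new element changes an empty sketch.
        self.change_prob = 1.0

    def _hash(self, item):
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
        bucket = h % self.m
        rest = h // self.m
        # rho = 1 + number of trailing zero bits of the remaining hash.
        rho = (rest & -rest).bit_length() if rest else 64
        return bucket, rho

    def add(self, item):
        bucket, rho = self._hash(item)
        if rho > self.registers[bucket]:
            # The sketch changes: credit 1/p to the running estimate, then
            # update the register and adjust p = (1/m) * sum_j 2^{-reg[j]}.
            self.estimate += 1.0 / self.change_prob
            self.change_prob += (2.0 ** -rho - 2.0 ** -self.registers[bucket]) / self.m
            self.registers[bucket] = rho

    def cardinality(self):
        return self.estimate
```

    Note that two such estimators cannot simply be merged after the fact: the running estimate depends on the order in which elements arrived, which is exactly the mergeability trade-off the abstract discusses.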