
    Streaming Euclidean Max-Cut: Dimension vs Data Reduction

    Max-Cut is a fundamental problem that has been studied extensively in various settings. We design an algorithm for Euclidean Max-Cut, where the input is a set of points in $\mathbb{R}^d$, in the model of dynamic geometric streams, where the input $X \subseteq [\Delta]^d$ is presented as a sequence of point insertions and deletions. Previously, Frahling and Sohler [STOC 2005] designed a $(1+\epsilon)$-approximation algorithm for the low-dimensional regime, i.e., it uses space $\exp(d)$. To tackle this problem in the high-dimensional regime, which is of growing interest, one must improve the dependence on the dimension $d$, ideally to space complexity $\mathrm{poly}(\epsilon^{-1} d \log\Delta)$. Lammersen, Sidiropoulos, and Sohler [WADS 2009] proved that Euclidean Max-Cut admits dimension reduction with target dimension $d' = \mathrm{poly}(\epsilon^{-1})$. Combining this with the aforementioned algorithm that uses space $\exp(d')$, they obtain an algorithm whose overall space complexity is indeed polynomial in $d$, but unfortunately exponential in $\epsilon^{-1}$. We devise an alternative approach of \emph{data reduction}, based on importance sampling, and achieve space bound $\mathrm{poly}(\epsilon^{-1} d \log\Delta)$, which is exponentially better (in $\epsilon$) than the dimension-reduction approach. To implement this scheme in the streaming model, we employ a randomly-shifted quadtree to construct a tree embedding. While this is a well-known method, a key feature of our algorithm is that the embedding's distortion $O(d \log\Delta)$ affects only the space complexity, and the approximation ratio remains $1+\epsilon$.

    Tight Bounds for Adversarially Robust Streams and Sliding Windows via Difference Estimators

    In the adversarially robust streaming model, a stream of elements is presented to an algorithm and is allowed to depend on the output of the algorithm at earlier times during the stream. In the classic insertion-only model of data streams, Ben-Eliezer et al. (PODS 2020, best paper award) show how to convert a non-robust algorithm into a robust one with a roughly $1/\varepsilon$ factor overhead. This was subsequently improved to a $1/\sqrt{\varepsilon}$ factor overhead by Hassidim et al. (NeurIPS 2020, oral presentation), suppressing logarithmic factors. For general functions the latter is known to be best possible, by a result of Kaplan et al. (CRYPTO 2021). We show how to bypass this impossibility result by developing data stream algorithms for a large class of streaming problems, with no overhead in the approximation factor. Our class of streaming problems includes the most well-studied problems, such as the $L_2$-heavy hitters problem and $F_p$-moment estimation, as well as empirical entropy estimation. We substantially improve upon all prior work on these problems, giving the first optimal dependence on the approximation factor. As in previous work, we obtain a general transformation that applies to any non-robust streaming algorithm and depends on the so-called flip number. However, the key technical innovation is that we apply the transformation to what we call a difference estimator for the streaming problem, rather than an estimator for the streaming problem itself. We then develop the first difference estimators for a wide range of problems. Our difference estimator methodology is not only applicable to the adversarially robust model, but to other streaming models where temporal properties of the data play a central role. (Abstract shortened to meet arXiv limit.)
    Comment: FOCS 202
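The flip-number idea underlying such transformations can be illustrated with a toy wrapper (a hypothetical sketch, not the paper's difference-estimator construction): for a quantity that only grows over the stream, the published output is changed only when the internal estimate has drifted by a $(1+\varepsilon)$ factor, so an adversary observes few distinct outputs.

```python
class LazyOutput:
    """Toy illustration of the flip-number bound (an assumed sketch,
    not the difference-estimator construction from the paper).

    For a monotonically growing quantity, the published value is frozen
    until the internal estimate exceeds it by a (1 + eps) factor, so the
    number of output changes ("flips") over a stream whose values lie in
    [1, n] is at most log(n) / log(1 + eps).
    """

    def __init__(self, eps):
        self.eps = eps
        self.current = None  # last value released to the adversary
        self.flips = 0       # how many times the output has changed

    def publish(self, estimate):
        if self.current is None or estimate > (1 + self.eps) * self.current:
            self.current = estimate
            self.flips += 1
        return self.current
```

Bounding the number of output changes is what lets a non-robust estimator be reused against an adaptive adversary; the difference-estimator approach in the paper refines this by estimating changes between checkpoints rather than the quantity itself.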