
    Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates

    In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous constructions required \Omega(n^4) running time [BSS14, Zou12], our sparsification routine can be implemented in almost-quadratic running time O(n^{2+\varepsilon}). The fundamental conceptual novelty of our work is the leveraging of a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an interpretation of the randomized sparsifiers of Spielman and Srivastava [SS11] via the application of matrix multiplicative weight updates (MWU) [CHS11, Vis14]. In this paper, we explain how matrix MWU naturally arises as an instance of the Follow-the-Regularized-Leader framework and generalize this approach to yield a larger class of updates. This new class allows us to accelerate the construction of linear-sized spectral sparsifiers, and give novel insights on the motivation behind Batson, Spielman and Srivastava [BSS14].
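    As a rough illustration of the regret-minimization view, here is a minimal sketch of matrix multiplicative weight updates over density matrices, viewed as Follow-the-Regularized-Leader with the von Neumann entropy regularizer. The step size, loss matrices, and function names are illustrative placeholders, not taken from the paper, and the sketch omits the sparsification-specific rule for choosing which rank-one matrix to add at each step.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu_density(losses, eta=0.1):
    """Matrix MWU over density matrices: given symmetric loss matrices
    L_1, ..., L_T, produce rho_t proportional to exp(-eta * sum_{s<t} L_s),
    i.e. FTRL with the von Neumann entropy as regularizer."""
    n = losses[0].shape[0]
    cumulative = np.zeros((n, n))
    iterates = []
    for L in losses:
        W = expm(-eta * cumulative)   # unnormalized weights (identity at t = 0)
        rho = W / np.trace(W)         # normalize to a density matrix (PSD, unit trace)
        iterates.append(rho)
        cumulative += L               # accumulate losses: "follow the regularized leader"
    return iterates

# Toy usage: rank-one losses v v^T / |v|^2, mimicking the normalized edge
# matrices that arise in spectral sparsification.
rng = np.random.default_rng(0)
losses = [np.outer(v, v) / (v @ v) for v in (rng.standard_normal(5) for _ in range(20))]
rho_T = matrix_mwu_density(losses)[-1]
print(np.trace(rho_T))  # ~1.0: every iterate stays on the density-matrix simplex
```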

    Privacy-Preserving Distance Computation and Proximity Testing on Earth, Done Right

    In recent years, the availability of GPS-enabled smartphones has made location-based services extremely popular. A multitude of applications rely on location information to provide a wide range of services. Location information is, however, extremely sensitive and can be easily abused. In this paper, we introduce the first protocols for secure computation of distance and for proximity testing over a sphere. Our secure distance protocols allow two parties, Alice and Bob, to determine their mutual distance without disclosing any additional information about their location. Through our secure proximity testing protocols, Alice only learns whether Bob is in close proximity, i.e., within some arbitrary distance. Our techniques rely on three different representations of Earth, which provide different trade-offs between accuracy and performance. We show, via experiments on a prototype implementation, that our protocols are practical on resource-constrained smartphone devices. Our distance computation protocols run, in fact, in 54 to 78 ms on a commodity Android smartphone. Similarly, our proximity tests require between 1.2 s and 2.8 s on the same platform. The imprecision introduced by our protocols is very small, i.e., between 0.1% and 3% on average, depending on the distance.
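    For orientation, the plaintext functionality that such protocols compute jointly is ordinary great-circle distance on a spherical model of Earth. The sketch below shows only that functionality (via the haversine formula) and a threshold-based proximity test; the cryptographic layer and the paper's three Earth representations are not reproduced, and the names and constants are illustrative.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres (spherical approximation)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (latitude, longitude)
    points given in degrees, on a spherical Earth model."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_proximity(lat1, lon1, lat2, lon2, threshold_m):
    """Proximity test: is the second point within threshold_m metres of the first?"""
    return haversine_m(lat1, lon1, lat2, lon2) <= threshold_m

print(round(haversine_m(48.8566, 2.3522, 51.5074, -0.1278)))  # Paris -> London, ~344 km
```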

    Limitations to Frechet's Metric Embedding Method

    Frechet's classical isometric embedding argument has evolved to become a major tool in the study of metric spaces. An important example of a Frechet embedding is Bourgain's embedding. The authors have recently shown that for every e>0 any n-point metric space contains a subset of size at least n^(1-e) which embeds into l_2 with distortion O(\log(2/e)/e). The embedding we used is non-Frechet, and the purpose of this note is to show that this is not coincidental. Specifically, for every e>0, we construct arbitrarily large n-point metric spaces such that the distortion of any Frechet embedding into l_p on subsets of size at least n^{1/2+e} is \Omega((\log n)^{1/p}). Comment: 10 pages, 1 figure.
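    For concreteness, a Frechet embedding maps each point to its vector of distances to a fixed family of subsets. The sketch below builds such an embedding for an arbitrary finite metric, using Bourgain-style random subsets; the sampling constants are illustrative, not those of the original proof.

```python
import math
import random

def frechet_embedding(points, dist, subsets):
    """Map each point x to the vector (d(x, A_1), ..., d(x, A_k)), where
    d(x, A) = min_{a in A} dist(x, a).  Any embedding of this coordinate
    form is a Frechet embedding in the sense of the note."""
    return {x: [min(dist(x, a) for a in A) for A in subsets] for x in points}

def bourgain_style_subsets(points, rng=random):
    """Random subsets in the spirit of Bourgain's embedding: for each scale j,
    sample a handful of sets that keep each point with probability 2^{-j}.
    (The number of scales and repetitions is illustrative.)"""
    scales = max(1, int(math.log2(len(points))))
    subsets = []
    for j in range(1, scales + 1):
        for _ in range(scales + 1):
            A = [x for x in points if rng.random() < 2.0 ** -j]
            if A:
                subsets.append(A)
    return subsets

# Toy usage on the path metric over {0, ..., 7}.
pts = list(range(8))
emb = frechet_embedding(pts, lambda x, y: abs(x - y), bourgain_style_subsets(pts))
print(emb[0][:5])
```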

    Nonlinear spectral calculus and super-expanders

    Nonlinear spectral gaps with respect to uniformly convex normed spaces are shown to satisfy a spectral calculus inequality that establishes their decay along Cesaro averages. Nonlinear spectral gaps of graphs are also shown to behave sub-multiplicatively under zigzag products. These results yield a combinatorial construction of super-expanders, i.e., a sequence of 3-regular graphs that does not admit a coarse embedding into any uniformly convex normed space. Comment: Typos fixed based on referee comments. Some of the results of this paper were announced in arXiv:0910.2041. The corresponding parts of arXiv:0910.2041 are subsumed by the current paper.
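    For reference, one standard formulation of the nonlinear spectral gap (notation may differ slightly from the paper) is the following: for a symmetric stochastic matrix A = (a_{ij}) and a metric space (X, d), the gap \gamma(A, d^p) is the least \gamma such that every choice of points f_1, ..., f_n in X satisfies

```latex
\frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} d(f_i, f_j)^p
\;\le\;
\frac{\gamma}{n} \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}\, d(f_i, f_j)^p .
```

    In this normalization, taking X = \mathbb{R} with its usual metric and p = 2 recovers the reciprocal of the ordinary spectral gap 1 - \lambda_2(A), which is why \gamma serves as a nonlinear analogue of the spectral gap.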

    The central limit problem for random vectors with symmetries

    Motivated by the central limit problem for convex bodies, we study normal approximation of linear functionals of high-dimensional random vectors with various types of symmetries. In particular, we obtain results for distributions which are coordinatewise symmetric, uniform in a regular simplex, or spherically symmetric. Our proofs are based on Stein's method of exchangeable pairs; as far as we know, this approach has not previously been used in convex geometry, and we give a brief introduction to the classical method. The spherically symmetric case is treated by a variation of Stein's method which is adapted for continuous symmetries. Comment: AMS-LaTeX, uses xy-pic, 23 pages; v3: added new corollary to Theorem
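    For readers unfamiliar with the technique, the classical exchangeable-pairs setup (stated schematically here; the paper's precise statements and constants differ) constructs a second copy W' of the statistic of interest W so that (W, W') is an exchangeable pair satisfying a linear regression condition,

```latex
E\bigl[\,W' - W \mid W\,\bigr] = -\lambda W \qquad \text{for some small } \lambda > 0,
```

    and the distance from W to a standard Gaussian is then bounded in terms of the conditional fluctuations of (W'-W)^2 and higher moments of W'-W, each scaled by 1/\lambda.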

    (Pseudo) Random Quantum States with Binary Phase

    We prove a quantum information-theoretic conjecture due to Ji, Liu and Song (CRYPTO 2018) which suggested that a uniform superposition with random \emph{binary} phase is statistically indistinguishable from a Haar random state. That is, any polynomial number of copies of the aforementioned state is within exponentially small trace distance from the same number of copies of a Haar random state. As a consequence, we get a provable elementary construction of \emph{pseudorandom} quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction for \emph{quantum state t-designs} for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for t > 2), this observation shows that polylogarithmic depth suffices for all t. We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires a quantum circuit of a restricted form: applying one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
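    As a tiny numerical illustration of the object in question, the following sketch builds the binary-phase state |psi_f> = 2^{-n/2} \sum_x (-1)^{f(x)} |x> as a plain state vector for a random Boolean f. Function names are illustrative, and no pseudorandomness claim about the sampled f is intended.

```python
import numpy as np

def binary_phase_state(f_bits):
    """Return |psi_f> = 2^{-n/2} * sum_x (-1)^{f(x)} |x> as a length-2^n vector,
    where f_bits[x] in {0, 1} gives f(x).  All amplitudes are real, +-2^{-n/2}."""
    f_bits = np.asarray(f_bits)
    return ((-1.0) ** f_bits) / np.sqrt(f_bits.size)

# Circuit structure from the abstract: H^{\otimes n} on |0...0> gives the uniform
# superposition, then a phase oracle for f (classical Toffoli-based logic)
# applies the (-1)^{f(x)} signs.  Here f is sampled uniformly for illustration.
rng = np.random.default_rng(1)
psi = binary_phase_state(rng.integers(0, 2, size=2 ** 3))
print(np.round(psi, 4), np.isclose(np.linalg.norm(psi), 1.0))
```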

    Efficient semi-static secure broadcast encryption scheme

    In this paper, we propose a semi-static secure broadcast encryption scheme with constant-sized private keys and ciphertexts. Our result improves the semi-static secure broadcast encryption scheme introduced by Gentry and Waters. Specifically, we reduce the private key and ciphertext size by half. By applying the generic transformation proposed by Gentry and Waters, our scheme also achieves adaptive security. Finally, we present an improved implementation idea which can reduce the ciphertext size in the aforementioned generic transformation.

    Leakage-resilient coin tossing

    Proceedings of the 25th International Symposium, DISC 2011, Rome, Italy, September 20-22, 2011. The ability to collectively toss a common coin among n parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than 1/r bias in O(r) rounds (Cleve STOC ’86). In the case of honest majority, in contrast, unconditionally secure O(1)-round protocols for generating common unbiased coins follow from general completeness theorems on multi-party secure protocols in the secure channels model (e.g., BGW, CCD STOC ’88). However, in the O(1)-round protocols with honest majority, parties generate and hold secret values which are assumed to be perfectly hidden from malicious parties: an assumption which is crucial to proving the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets. In this work, we present an O(1)-round protocol for collectively generating an unbiased common coin, in the presence of leakage on the local state of the honest parties. We tolerate t ≤ (1/3 − ε)n computationally unbounded Byzantine faults and in addition an Ω(1)-fraction leakage on each (honest) party’s secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan ’08) adapted to the distributed setting. Additional contributions of our work are the tools we introduce to achieve the collective coin toss: a procedure for disjoint committee election, and leakage-resilient verifiable secret sharing. National Defense Science and Engineering Graduate Fellowship; National Science Foundation (U.S.) (CCF-1018064).
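    For contrast with the leakage-resilient construction, here is a plaintext simulation of the classical honest-majority template the abstract alludes to: each party contributes a random bit via secret sharing, and the common coin is the XOR of the reconstructed bits. This is only an illustrative baseline, not the paper's protocol; it uses additive rather than verifiable secret sharing, runs entirely in the clear, and offers no leakage resilience.

```python
import secrets

P = 2 ** 61 - 1  # prime modulus for additive secret sharing (illustrative choice)

def share(value, n):
    """Additively share `value` among n parties: the shares sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def coin_toss(n):
    """Classical honest-majority template: each party i contributes a random
    bit b_i, secret-shares it, and the common coin is the XOR of all
    reconstructed bits.  A real protocol would use verifiable secret sharing
    and never reveal any b_i before all contributions are committed."""
    contributions = [secrets.randbelow(2) for _ in range(n)]
    all_shares = [share(b, n) for b in contributions]
    reconstructed = [sum(s) % P for s in all_shares]
    coin = 0
    for b in reconstructed:
        coin ^= b
    return coin

print(coin_toss(7))  # an unbiased bit as long as one contributor is honest
```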

    Public-Key Encryption Schemes with Auxiliary Inputs

    7th Theory of Cryptography Conference, TCC 2010, Zurich, Switzerland, February 9-11, 2010. Proceedings. We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman (DDH) and the Learning with Errors (LWE) problems. As an independent technical contribution, we extend the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields. National Science Foundation (U.S.) (Grant CCF-0514167); National Science Foundation (U.S.) (Grant CCF-0635297); National Science Foundation (U.S.) (Grant NSF-0729011); Israel Science Foundation (700/08); Chais Family Fellows Program.
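    For background on the technical contribution mentioned last, here is the classical Goldreich-Levin hard-core predicate over GF(2); the paper's extension replaces the binary inner product with an inner product over a large field to obtain a hard-core value, which is not reproduced here. The sketch is illustrative only.

```python
import secrets

def inner_product_bit(x_bits, r_bits):
    """Classical Goldreich-Levin hard-core predicate over GF(2):
    hc(x, r) = <x, r> mod 2."""
    assert len(x_bits) == len(r_bits)
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

# Usage: for a one-way function f and secret x, the triple (f(x), r, hc(x, r))
# should be computationally indistinguishable from (f(x), r, random bit)
# for a uniformly random r.
n = 128
x = [secrets.randbelow(2) for _ in range(n)]
r = [secrets.randbelow(2) for _ in range(n)]
print(inner_product_bit(x, r))
```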

    A Framework for Adversarially Robust Streaming Algorithms

    We investigate the adversarial robustness of streaming algorithms. In this context, an algorithm is considered robust if its performance guarantees hold even if the stream is chosen adaptively by an adversary that observes the outputs of the algorithm along the stream and can react in an online manner. While deterministic streaming algorithms are inherently robust, many central problems in the streaming literature do not admit sublinear-space deterministic algorithms; on the other hand, classical space-efficient randomized algorithms for these problems are generally not adversarially robust. This raises the natural question of whether there exist efficient adversarially robust (randomized) streaming algorithms for these problems. In this work, we show that the answer is positive for various important streaming problems in the insertion-only model, including distinct elements and more generally F_p-estimation, F_p-heavy hitters, entropy estimation, and others. For all of these problems, we develop adversarially robust (1+\varepsilon)-approximation algorithms whose required space matches that of the best known non-robust algorithms up to a \text{poly}(\log n, 1/\varepsilon) multiplicative factor (and in some cases even up to a constant factor). Towards this end, we develop several generic tools allowing one to efficiently transform a non-robust streaming algorithm into a robust one in various scenarios. Comment: Conference version in PODS 2020. Version 3 addressing journal referees' comments; improved exposition of sketch switching.
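    The version comment mentions "sketch switching"; the following is an illustrative wrapper in that spirit, not the paper's construction: run several independent copies of a non-robust estimator, publish a frozen output, and only when the active estimate has grown by a (1+\varepsilon) factor release the new value and switch to a fresh copy. The number of copies, the threshold rule, and the placeholder exact-counting estimator are all assumptions made for the sake of the example.

```python
import random

class SketchSwitcher:
    """Illustrative 'sketch switching' wrapper: maintain k independent copies
    of a non-robust streaming estimator; the published output stays frozen
    until the active copy's estimate exceeds it by a (1 + eps) factor, at
    which point the new value is released and a fresh copy becomes active."""

    def __init__(self, make_estimator, k, eps):
        self.copies = [make_estimator() for _ in range(k)]
        self.active = 0
        self.eps = eps
        self.published = 0.0

    def update(self, item):
        for est in self.copies:          # every copy processes the whole stream
            est.update(item)
        current = self.copies[self.active].estimate()
        if current > (1 + self.eps) * max(self.published, 1e-9):
            self.published = current                                  # release new estimate
            self.active = min(self.active + 1, len(self.copies) - 1)  # switch to a fresh copy
        return self.published

# Placeholder non-robust estimator: exact distinct counting stands in for a
# randomized sketch; in a real instantiation this would be a space-efficient
# (1 + eps)-approximate distinct-elements sketch.
class ExactDistinct:
    def __init__(self):
        self.seen = set()
    def update(self, item):
        self.seen.add(item)
    def estimate(self):
        return float(len(self.seen))

s = SketchSwitcher(ExactDistinct, k=8, eps=0.1)
for x in (random.randint(0, 50) for _ in range(200)):
    out = s.update(x)
print(out)
```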