
    On Simultaneous Two-player Combinatorial Auctions

    We consider the following communication problem: Alice and Bob each have valuation functions $v_1(\cdot)$ and $v_2(\cdot)$ over subsets of $m$ items, and their goal is to partition the items into $S, \bar{S}$ in a way that maximizes the welfare $v_1(S) + v_2(\bar{S})$. We study both the allocation problem, which asks for a welfare-maximizing partition, and the decision problem, which asks whether there exists a partition guaranteeing a certain welfare, for binary XOS valuations. For interactive protocols with $\mathrm{poly}(m)$ communication, a tight $3/4$-approximation is known for both [Fei06, DS06]. For interactive protocols, the allocation problem is provably harder than the decision problem: any solution to the allocation problem implies a solution to the decision problem with one additional round and $\log m$ additional bits of communication, via a trivial reduction. Surprisingly, the allocation problem is provably easier for simultaneous protocols. Specifically, we show:
    1) There exists a simultaneous, randomized protocol with polynomial communication that selects a partition whose expected welfare is at least $3/4$ of the optimum. This matches the guarantee of the best interactive, randomized protocol with polynomial communication.
    2) For all $\varepsilon > 0$, any simultaneous, randomized protocol that decides whether the welfare of the optimal partition is $\geq 1$ or $\leq 3/4 - 1/108 + \varepsilon$ correctly with probability $> 1/2 + 1/\mathrm{poly}(m)$ requires exponential communication.
    This provides a separation between the attainable approximation guarantees via interactive ($3/4$) versus simultaneous ($\leq 3/4 - 1/108$) protocols with polynomial communication. In other words, the trivial reduction from the decision problem to the allocation problem provably requires the extra round of communication.
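
    As a concrete illustration of the setup, here is a minimal brute-force sketch in Python (an expository assumption, not any of the paper's protocols): binary XOS valuations are represented as collections of clauses, each clause being the set of items to which it assigns weight 1, and a welfare-maximizing partition is found by enumerating all $2^m$ subsets, which is feasible only for tiny $m$.

    ```python
    from itertools import combinations

    def xos_value(clauses, S):
        """Binary XOS valuation: the max over additive clauses, where each
        clause assigns weight 1 to exactly the items it contains."""
        return max(len(S & clause) for clause in clauses)

    def optimal_partition(m, v1_clauses, v2_clauses):
        """Brute-force the allocation problem: try every bundle S for Alice,
        give Bob the complement, and keep the best welfare seen."""
        items = set(range(m))
        best = (-1, ())
        for r in range(m + 1):
            for S in map(set, combinations(items, r)):
                welfare = xos_value(v1_clauses, S) + xos_value(v2_clauses, items - S)
                best = max(best, (welfare, tuple(sorted(S))))
        return best  # (optimal welfare, Alice's bundle)

    # Toy instance with m = 4 items (hypothetical valuations).
    v1 = [{0, 1}, {2}]
    v2 = [{2, 3}, {0}]
    print(optimal_partition(4, v1, v2))  # -> (4, (0, 1))
    ```

    In these terms, the decision problem corresponds to checking `optimal_partition(...)[0] >= t` for a threshold `t`, rather than outputting the partition itself.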

    On the Power of Multiple Anonymous Messages

    An exciting new development in differential privacy is the shuffled model, in which an anonymous channel enables non-interactive, differentially private protocols with error much smaller than what is possible in the local model, while relying on weaker trust assumptions than in the central model. In this paper, we study basic counting problems in the shuffled model and establish separations between the error achievable in the single-message shuffled model and in the shuffled model with multiple messages per user. For the problem of frequency estimation for $n$ users and a domain of size $B$, we obtain:
    - A nearly tight lower bound of $\tilde{\Omega}(\min(\sqrt[4]{n}, \sqrt{B}))$ on the error in the single-message shuffled model. This implies that the protocols obtained from the amplification-via-shuffling work of Erlingsson et al. (SODA 2019) and Balle et al. (CRYPTO 2019) are essentially optimal for single-message protocols. A key ingredient in the proof is a lower bound on the error of locally private frequency estimation in the low-privacy (aka high-$\epsilon$) regime.
    - Protocols in the multi-message shuffled model with $\mathrm{poly}(\log B, \log n)$ bits of communication per user and $\mathrm{polylog}(B)$ error, which provide an exponential improvement in the error compared to what is possible with single-message algorithms.
    For the related selection problem on a domain of size $B$, we prove:
    - A nearly tight lower bound of $\Omega(B)$ on the number of users in the single-message shuffled model. This significantly improves on the $\Omega(B^{1/17})$ lower bound obtained by Cheu et al. (EUROCRYPT 2019), and, when combined with their $\tilde{O}(\sqrt{B})$-error multi-message protocol, implies the first separation between single-message and multi-message protocols for this problem.
    Comment: 70 pages, 2 figures, 3 tables
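
    For intuition about the single-message setting, here is a minimal Python sketch (an illustrative assumption, not any of the paper's protocols) of a standard local randomizer, $B$-ary randomized response, together with its debiased frequency estimator. A shuffler would simply forward the multiset of reports in random order; that anonymity is what the amplification-via-shuffling results exploit. The parameter choices below are arbitrary.

    ```python
    import numpy as np

    def randomize(x, B, eps, rng):
        """B-ary randomized response: report the true value with probability
        e^eps / (e^eps + B - 1), otherwise a uniformly random *other* value."""
        p_true = np.exp(eps) / (np.exp(eps) + B - 1)
        if rng.random() < p_true:
            return x
        other = int(rng.integers(B - 1))  # uniform over the B - 1 other values
        return other if other < x else other + 1

    def estimate_frequencies(reports, B, eps, n):
        """Debiased histogram: E[count_v] = n_v * p + (n - n_v) * q, so invert."""
        p = np.exp(eps) / (np.exp(eps) + B - 1)
        q = (1.0 - p) / (B - 1)
        counts = np.bincount(reports, minlength=B)
        return (counts - n * q) / (p - q)

    rng = np.random.default_rng(0)
    B, eps, n = 8, 1.0, 10_000
    data = rng.integers(B, size=n)  # users' true values
    reports = np.array([randomize(x, B, eps, rng) for x in data])
    print(np.round(estimate_frequencies(reports, B, eps, n)))
    ```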

    A Survey on Differential Privacy with Machine Learning and Future Outlook

    Nowadays, machine learning models and applications have become increasingly pervasive. With this rapid increase in the development and deployment of machine learning models, concerns about privacy have grown, and there is a legitimate need to protect data from leakage and from attacks. One of the strongest and most prevalent privacy models that can be used to protect machine learning models against attacks and vulnerabilities is differential privacy (DP). DP is a strict, rigorous definition of privacy: it guarantees that an adversary cannot reliably determine whether a specific participant is included in the dataset. It works by injecting noise into the data, whether into the inputs, the outputs, the ground-truth labels, the objective function, or even the gradients, in order to mitigate privacy risks and protect the data. To this end, this survey presents differentially private machine learning algorithms categorized into two main families (traditional machine learning models vs. deep learning models), and outlines future research directions for differential privacy with machine learning.
    Comment: 12 pages, 3 figures
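
    As one concrete instance of injecting noise into the gradients, here is a minimal sketch in the style of DP-SGD (Abadi et al., 2016): per-example gradients are clipped to bound each example's sensitivity, and Gaussian noise scaled to the clipping norm is added to their sum. The clipping norm and noise multiplier below are illustrative assumptions; a real deployment would calibrate them to a target $(\epsilon, \delta)$ with a privacy accountant.

    ```python
    import numpy as np

    def dp_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
        """Clip each per-example gradient to bound its sensitivity, sum the
        clipped gradients, add Gaussian noise scaled to the clipping norm,
        and average -- the noisy gradient used for one private SGD step."""
        clipped = [
            g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
            for g in per_example_grads
        ]
        total = np.sum(clipped, axis=0)
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
        return (total + noise) / len(per_example_grads)

    rng = np.random.default_rng(0)
    grads = [rng.normal(size=5) for _ in range(32)]  # toy per-example gradients
    print(dp_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
    ```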

    Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols

    In recent years, the polynomial method from circuit complexity has been applied to several fundamental problems, obtaining state-of-the-art running times (e.g., R. Williams's $n^3 / 2^{\Omega(\sqrt{\log n})}$-time algorithm for APSP). As observed in [Alman and Williams, STOC 2017], almost all applications of the polynomial method in algorithm design ultimately rely on certain (probabilistic) low-rank decompositions of the computation matrices corresponding to key subroutines. They suggest that making use of low-rank decompositions directly could lead to more powerful algorithms, as the polynomial method is just one way to derive such a decomposition. Inspired by their observation, in this paper we study another way of systematically constructing low-rank decompositions of matrices that can be used by algorithms: communication protocols. It has long been known that various types of communication protocols lead to certain low-rank decompositions (e.g., P protocols/rank, BQP protocols/approximate rank). These are usually interpreted as approaches for proving communication lower bounds, while in this work we explore the other direction. We give the following two generic algorithmic applications of communication protocols:
    - Quantum communication protocols and deterministic approximate counting. Our first connection is that a fast BQP communication protocol for a function $f$ implies a fast deterministic additive approximate counting algorithm for a related pair-counting problem. Applying known BQP communication protocols, we get fast deterministic additive approximate counting algorithms for Count-OV (#OV), Sparse Count-OV, and Formula-of-SYM circuits. In particular, our approximate counting algorithm for #OV runs in near-linear time for all dimensions $d = o(\log^2 n)$. Previously, not even a truly subquadratic-time algorithm was known for $d = \omega(\log n)$.
    - Arthur-Merlin communication protocols and faster Satisfying-Pair algorithms. Our second connection is that a fast $\mathsf{AM}^{cc}$ protocol for a function $f$ implies a faster-than-brute-force algorithm for $f$-Satisfying-Pair. Using the classical Goldwasser-Sipser AM protocol for approximating set size, we obtain a new algorithm for approximate Max-IP$_{n, c\log n}$ in time $n^{2 - 1/O(\log c)}$, matching the state-of-the-art algorithms in [Chen, CCC 2018].
    We also apply our second connection to shed some light on long-standing open problems in communication complexity. We show that if the Longest Common Subsequence (LCS) problem admits a fast (computationally efficient) $\mathsf{AM}^{cc}$ protocol ($\mathrm{polylog}(n)$ complexity), then polynomial-size Formula-SAT admits a $2^{n - n^{1-\delta}}$-time algorithm for any constant $\delta > 0$, which is conjectured to be unlikely by a recent work [Abboud and Bringmann, ICALP 2018]. The same holds even for a fast (computationally efficient) $\mathsf{PH}^{cc}$ protocol.
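
    The algorithmic use of a low-rank decomposition can be seen in a toy calculation: if the $n \times n$ computation matrix $M[i,j] = f(x_i, y_j)$ factors as $M = AB^{\top}$ with inner dimension $r \ll n$, then the sum of all entries, $\sum_{i,j} M[i,j] = \mathbf{1}^{\top} A B^{\top} \mathbf{1}$, is computable in $O(nr)$ time without ever forming $M$. The Python sketch below assumes the factorization is given; the point of the paper is that communication protocols supply such factorizations (probabilistically and approximately, in which case the count is recovered only up to additive error).

    ```python
    import numpy as np

    def pair_count_from_factorization(A, B):
        """Sum of all entries of M = A @ B.T without forming M: evaluated
        left to right, 1^T A and (1^T A) @ B.T each cost O(n r), and the
        final dot product costs O(n) -- versus O(n^2) for the dense sum.
        For a 0/1 matrix M, this sum is exactly the number of satisfying pairs."""
        return float(np.ones(A.shape[0]) @ A @ B.T @ np.ones(B.shape[0]))

    rng = np.random.default_rng(0)
    n, r = 1000, 8
    A = rng.integers(0, 2, size=(n, r)).astype(float)
    B = rng.integers(0, 2, size=(n, r)).astype(float)
    print(pair_count_from_factorization(A, B))  # O(n r) count
    print((A @ B.T).sum())                      # dense check, agrees
    ```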

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.