
    Communication Complexity of Discrete Fair Division

    We initiate the study of the communication complexity of fair division with indivisible goods. We focus on some of the most well-studied fairness notions (envy-freeness, proportionality, and approximations thereof) and valuation classes (submodular, subadditive and unrestricted). Within these parameters, our results completely resolve whether the communication complexity of computing a fair allocation (or determining that none exist) is polynomial or exponential (in the number of goods), for every combination of fairness notion, valuation class, and number of players, for both deterministic and randomized protocols. Comment: Accepted to SODA 201
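    To make the fairness notions above concrete, here is a small sketch (not from the paper) that checks envy-freeness and proportionality of a fixed allocation under additive valuations; the valuation matrix and allocation below are hypothetical, and the paper's setting also covers submodular, subadditive, and unrestricted valuations, where bundle values are not simply sums.

```python
# Minimal sketch: checking fairness of an allocation of indivisible goods.
# Assumes additive valuations for simplicity; the paper also covers
# submodular, subadditive, and unrestricted valuation classes.

def bundle_value(valuation, bundle):
    """Additive value that one player assigns to a set of goods."""
    return sum(valuation[g] for g in bundle)

def is_envy_free(valuations, allocation):
    """No player values another player's bundle more than her own."""
    n = len(allocation)
    return all(
        bundle_value(valuations[i], allocation[i]) >= bundle_value(valuations[i], allocation[j])
        for i in range(n) for j in range(n)
    )

def is_proportional(valuations, allocation):
    """Every player gets at least a 1/n fraction of her value for all goods."""
    n = len(allocation)
    all_goods = [g for bundle in allocation for g in bundle]
    return all(
        n * bundle_value(valuations[i], allocation[i]) >= bundle_value(valuations[i], all_goods)
        for i in range(n)
    )

# Hypothetical instance: 2 players, 4 goods, additive value per good.
valuations = [
    {0: 5, 1: 1, 2: 3, 3: 1},   # player 0
    {0: 2, 1: 4, 2: 2, 3: 4},   # player 1
]
allocation = [{0, 2}, {1, 3}]    # player 0 gets goods {0,2}, player 1 gets {1,3}
print(is_envy_free(valuations, allocation), is_proportional(valuations, allocation))  # True True
```

    For additive valuations, any complete envy-free allocation is also proportional, which the example illustrates; the question studied in the paper is how much communication the players need in order to find such an allocation (or certify that none exists).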

    Worst-Case Optimal Algorithms for Parallel Query Processing

    In this paper, we study the communication complexity for the problem of computing a conjunctive query on a large database in a parallel setting with $p$ servers. In contrast to previous work, where upper and lower bounds on the communication were specified for particular structures of data (either data without skew, or data with specific types of skew), in this work we focus on worst-case analysis of the communication cost. The goal is to find worst-case optimal parallel algorithms, similar to the work of [18] for sequential algorithms. We first show that for a single round we can obtain an optimal worst-case algorithm. The optimal load for a conjunctive query $q$ when all relations have size equal to $M$ is $O(M/p^{1/\psi^*})$, where $\psi^*$ is a new query-related quantity called the edge quasi-packing number, which is different from both the edge packing number and edge cover number of the query hypergraph. For multiple rounds, we present algorithms that are optimal for several classes of queries. Finally, we show a surprising connection to the external memory model, which allows us to translate parallel algorithms to external memory algorithms. This technique allows us to recover (within a polylogarithmic factor) several recent results on the I/O complexity for computing join queries, and also obtain optimal algorithms for other classes of queries.
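    For intuition about the one-round model, the toy sketch below uses the well-known HyperCube/Shares style of hash partitioning for the triangle query Q(x,y,z) :- R(x,y), S(y,z), T(z,x): servers form a grid, each tuple is replicated along its missing dimension, and every server joins its local fragments. This is only a generic illustration of single-round communication; it is not the paper's worst-case optimal algorithm, which must also cope with skew and whose load is governed by the edge quasi-packing number $\psi^*$.

```python
# Toy sketch of one-round HyperCube-style partitioning for the triangle query
# Q(x,y,z) :- R(x,y), S(y,z), T(z,x).  Generic illustration only; not the
# paper's worst-case optimal algorithm (which also has to handle skew).
from itertools import product

def hypercube_triangle(R, S, T, shares=(2, 2, 2)):
    px, py, pz = shares                       # p = px*py*pz servers in a grid
    h = lambda v, m: hash(v) % m              # per-attribute hash functions
    servers = {c: {"R": [], "S": [], "T": []}
               for c in product(range(px), range(py), range(pz))}

    # Round 1: every tuple is replicated to all servers consistent with its hashes.
    for (x, y) in R:
        for z in range(pz):
            servers[(h(x, px), h(y, py), z)]["R"].append((x, y))
    for (y, z) in S:
        for x in range(px):
            servers[(x, h(y, py), h(z, pz))]["S"].append((y, z))
    for (z, x) in T:
        for y in range(py):
            servers[(h(x, px), y, h(z, pz))]["T"].append((z, x))

    # Local computation: each server joins the fragments it received.
    out = set()
    for frag in servers.values():
        for (x, y) in frag["R"]:
            for (y2, z) in frag["S"]:
                if y2 == y and (z, x) in frag["T"]:
                    out.add((x, y, z))
    return out

# Small hypothetical instance.
R = {(1, 2), (2, 3), (1, 4)}
S = {(2, 3), (4, 1)}
T = {(3, 1), (1, 1)}
print(hypercube_triangle(R, S, T))   # -> {(1, 2, 3), (1, 4, 1)} (in some order)
```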

    NP-hardness of circuit minimization for multi-output functions

    Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
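    As a reminder of what quantity is being minimized, the sketch below brute-forces the minimum number of 2-input gates for a tiny multi-output function given by its truth tables. The gate basis and the gate-count measure are simplifying assumptions of mine, and exhaustive search is of course unrelated to the paper's reduction; the point is only to make "minimum circuit size of a multi-output function" concrete.

```python
from itertools import product

# Brute-force illustration of minimum circuit size for a multi-output Boolean
# function given by its truth tables.  The 2-input gate basis and gate-count
# measure are simplifying assumptions for illustration only.
OPS = [
    lambda a, b: a & b,          # AND
    lambda a, b: a | b,          # OR
    lambda a, b: a ^ b,          # XOR
    lambda a, b: 1 - (a & b),    # NAND (negation via NAND(a, a))
]

def min_circuit_size(n, outputs, max_gates=4):
    """Minimum number of 2-input gates computing f: {0,1}^n -> {0,1}^m,
    where `outputs` lists the m output truth tables (tuples of 2^n bits).
    Exponential search; only sensible for very small n, m, and max_gates."""
    points = list(product((0, 1), repeat=n))
    # Each wire is represented by its full truth table over all 2^n inputs.
    input_wires = [tuple(x[i] for x in points) for i in range(n)]
    targets = set(outputs)

    def reachable(wires, gates_left):
        if targets <= set(wires):
            return True
        if gates_left == 0:
            return False
        for i in range(len(wires)):
            for j in range(len(wires)):
                for op in OPS:
                    new = tuple(op(a, b) for a, b in zip(wires[i], wires[j]))
                    if new not in wires and reachable(wires + [new], gates_left - 1):
                        return True
        return False

    for size in range(max_gates + 1):
        if reachable(input_wires, size):
            return size
    return None   # no circuit found within the gate budget

# Example: f(x1, x2) = (x1 AND x2, x1 XOR x2) needs 2 gates in this basis.
print(min_circuit_size(2, [(0, 0, 0, 1), (0, 1, 1, 0)]))   # -> 2
```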

    Sign rank versus VC dimension

    This work studies the maximum possible sign rank of $N \times N$ sign matrices with a given VC dimension $d$. For $d=1$, this maximum is three. For $d=2$, this maximum is $\tilde{\Theta}(N^{1/2})$. For $d>2$, similar but slightly less accurate statements hold. The lower bounds improve over previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given VC dimension, and the number of maximum classes of a given VC dimension -- answering a question of Frankl from '89, and (ii) design an efficient algorithm that provides an $O(N/\log(N))$ multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the $N \times N$ adjacency matrix of a $\Delta$-regular graph with a second eigenvalue of absolute value $\lambda$ and $\Delta \leq N/2$. We show that the sign rank of the signed version of this matrix is at least $\Delta/\lambda$. We use this connection to prove the existence of a maximum class $C \subseteq \{\pm 1\}^N$ with VC dimension $2$ and sign rank $\tilde{\Theta}(N^{1/2})$. This answers a question of Ben-David et al. regarding the sign rank of large VC classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Comment: 33 pages. This is a revised version of the paper "Sign rank versus VC dimension". Additional results in this version: (i) Estimates on the number of maximum VC classes (answering a question of Frankl from '89). (ii) Estimates on the sign rank of large VC classes (answering a question of Ben-David et al. from '03). (iii) A discussion on the computational complexity of computing the sign-rank.
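    For reference, the VC dimension parameter $d$ in the statement can be computed by brute force for tiny sign matrices, as in the sketch below (rows play the role of concepts, columns of points). This is only a definitional aid and has nothing to do with the paper's probabilistic and algebraic techniques.

```python
import itertools
import numpy as np

def vc_dimension(M):
    """Brute-force VC dimension of a +/-1 sign matrix M (rows = concepts,
    columns = points): the largest d such that some d columns are shattered,
    i.e. the rows restricted to them realize all 2^d sign patterns.
    Exponential; intended only for tiny matrices."""
    n_cols = M.shape[1]
    best = 0
    for d in range(1, n_cols + 1):
        shattered = False
        for cols in itertools.combinations(range(n_cols), d):
            patterns = {tuple(row) for row in M[:, list(cols)]}
            if len(patterns) == 2 ** d:
                shattered = True
                break
        if not shattered:
            break   # subsets of shattered sets are shattered, so we can stop
        best = d
    return best

# The 4 x 2 sign matrix realizing every pattern on its 2 columns has VC dimension 2.
M = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
print(vc_dimension(M))   # -> 2
```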

    Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

    In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000), Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x,y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Ω(n) or Ω(n^{1-ε}), providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even AM ∩ coAM - a well-known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
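    As a tiny concrete instance of the model (a folklore construction, not one from this paper), the sketch below implements a perfect one-bit-per-party CDS for the equality predicate: with shared randomness (r, s), Alice sends ⟨r,x⟩ ⊕ s ⊕ z and Bob sends ⟨r,y⟩ ⊕ s, so Carol recovers z exactly when x = y and otherwise sees two uniformly random bits.

```python
import secrets

def inner(r, x):
    """GF(2) inner product of two bit-vectors."""
    return sum(ri & xi for ri, xi in zip(r, x)) % 2

def cds_equality(x, y, z):
    """Folklore CDS for the equality predicate: Carol (who knows x and y)
    learns the secret bit z iff x == y; otherwise (a, b) are two uniform bits.
    Shared randomness: a random bit-vector r and a random mask bit s."""
    n = len(x)
    r = [secrets.randbelow(2) for _ in range(n)]
    s = secrets.randbelow(2)
    a = inner(r, x) ^ s ^ z      # Alice's single-bit message
    b = inner(r, y) ^ s          # Bob's single-bit message
    return a, b

def carol_decode(a, b):
    """Carol's reconstruction; meaningful only when x == y."""
    return a ^ b

x = [1, 0, 1, 1]
z = 1
a, b = cds_equality(x, x, z)
print(carol_decode(a, b))        # recovers z = 1 since the inputs are equal
a, b = cds_equality(x, [1, 0, 1, 0], z)
print(a, b)                      # two uniform bits; z stays perfectly hidden
```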

    Performance of distributed mechanisms for flow admission in wireless ad hoc networks

    Given a wireless network where some pairs of communication links interfere with each other, we study sufficient conditions for determining whether a given set of minimum bandwidth quality-of-service (QoS) requirements can be satisfied. We are especially interested in algorithms which have low communication overhead and low processing complexity. The interference in the network is modeled using a conflict graph whose vertices correspond to the communication links in the network. Two links are adjacent in this graph if and only if they interfere with each other due to being in the same vicinity and hence cannot be simultaneously active. The problem of scheduling the transmission of the various links is then essentially a fractional, weighted vertex coloring problem, for which upper bounds on the fractional chromatic number are sought using only localized information. We recall some distributed algorithms for this problem, and then assess their worst-case performance. Our results on this fundamental problem imply that for some well-known classes of networks and interference models, the performance of these distributed algorithms is within a bounded factor of that of an optimal, centralized algorithm. The performance bounds are simple expressions in terms of graph invariants. It is seen that the induced star number of a network plays an important role in the design and performance of such networks. Comment: 21 pages, submitted. Journal version of arXiv:0906.378
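    To illustrate the flavor of a test that uses only localized information, here is a toy admission check (my own simplification, not one of the algorithms assessed in the paper): accept the requested rates if, for every link, the total demand over its closed neighborhood in the conflict graph is at most 1. This condition is sufficient for schedulability but generally conservative.

```python
# Toy sketch of a locally checkable sufficient condition for flow admission:
# admit the demands if, for every link v, the total demand over v's closed
# neighborhood in the conflict graph is at most 1.  Purely an illustration of
# "localized information"; not the distributed algorithms analyzed in the paper.

def admissible(conflict_graph, demand):
    """conflict_graph: dict mapping each link to the set of links it
    interferes with; demand: dict of requested rates in [0, 1]."""
    for v, neighbors in conflict_graph.items():
        load = demand[v] + sum(demand[u] for u in neighbors)
        if load > 1:
            return False        # the local test at link v fails
    return True

# Hypothetical 5-link network whose conflict graph is a 5-cycle.
conflict_graph = {
    0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0},
}
print(admissible(conflict_graph, {v: 0.3 for v in conflict_graph}))   # True  (0.9 per neighborhood)
print(admissible(conflict_graph, {v: 0.4 for v in conflict_graph}))   # False (1.2 per neighborhood)
```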

    Automated Mechanism Design

    Mechanism design has traditionally been a manual endeavor. In 2002, Conitzer and Sandholm introduced the automated mechanism design (AMD) approach, where the mechanism is computationally created for the specific problem instance at hand. This has several advantages: 1) it can yield better mechanisms than the ones known to date, 2) it applies beyond the problem classes studied manually to date, 3) it can circumvent seminal economic impossibility results that hold for classes of problems but not all instances, and 4) it shifts the burden of design from man to machine. In this talk I will overview our results on AMD to date. I will cover problem representations and the computational complexity of different variants of the design problem. Initial applications include revenue-maximizing combinatorial auctions and (combinatorial) public goods problems. Algorithms for AMD will be discussed. To reduce the computational complexity of designing optimal combinatorial auctions, I introduce an incentive compatible, individually rational subfamily called Virtual Valuations Combinatorial Auctions. The auction mechanism's revenue can be boosted (starting, for example, from the VCG mechanism) by hill-climbing in this subspace. I will also present computational complexity and communication complexity results that motivate multi-stage and non-truth-promoting mechanisms. Finally, I present our first steps toward automatically designing multi-stage mechanisms.
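    To give a sense of what computationally creating a mechanism for a specific instance can look like, the sketch below encodes a tiny single-agent, single-item instance (two equally likely types with hypothetical values) as a linear program over randomized allocations and payments, with incentive-compatibility and individual-rationality constraints, and maximizes expected revenue with scipy. It is an illustration in the spirit of AMD, not code from the talk.

```python
# Minimal AMD-style sketch: revenue-maximizing single-item mechanism for one
# agent with two equally likely types, solved as a linear program over
# randomized allocations and payments.  Hypothetical instance, for illustration.
import numpy as np
from scipy.optimize import linprog

values = [1.0, 2.0]          # value of the item for type 0 and type 1
prior = [0.5, 0.5]           # probability of each type
T = len(values)

# Variables: x = [q_0, q_1, pay_0, pay_1], where q_t is the probability of
# allocating the item to a reported type t and pay_t is that type's payment.
c = np.zeros(2 * T)
c[T:] = -np.array(prior)     # minimize the negative of expected revenue

A_ub, b_ub = [], []
for t in range(T):
    # Individual rationality: q_t * v_t - pay_t >= 0
    row = np.zeros(2 * T); row[t] = -values[t]; row[T + t] = 1.0
    A_ub.append(row); b_ub.append(0.0)
    # Incentive compatibility: truth-telling beats reporting any other type t2
    for t2 in range(T):
        if t2 == t:
            continue
        row = np.zeros(2 * T)
        row[t] = -values[t]; row[T + t] = 1.0       # -(utility of truth)
        row[t2] = values[t]; row[T + t2] = -1.0     # +(utility of misreport)
        A_ub.append(row); b_ub.append(0.0)

bounds = [(0.0, 1.0)] * T + [(None, None)] * T      # probabilities in [0, 1]
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
print("optimal expected revenue:", -res.fun)        # 1.0 for this instance
print("allocation probabilities and payments:", res.x)
```

    The same template extends, at a cost in problem size, to multiple agents and combinatorial outcomes, which is where the computational complexity questions mentioned in the talk abstract arise.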