
    On the Hardness of Signaling

    There has been a recent surge of interest in the role of information in strategic interactions. Much of this work seeks to understand how the realized equilibrium of a game is influenced by uncertainty in the environment and the information available to players in the game. Lurking beneath this literature is a fundamental, yet largely unexplored, algorithmic question: how should a "market maker" who is privy to additional information, and equipped with a specified objective, inform the players in the game? This is an informational analogue of the mechanism design question, and views the information structure of a game as a mathematical object to be designed, rather than an exogenous variable. We initiate a complexity-theoretic examination of the design of optimal information structures in general Bayesian games, a task often referred to as signaling. We focus on one of the simplest instantiations of the signaling question: Bayesian zero-sum games, and a principal who must choose an information structure maximizing the equilibrium payoff of one of the players. In this setting, we show that optimal signaling is computationally intractable, and in some cases hard to approximate, assuming that it is hard to recover a planted clique from an Erdős-Rényi random graph. This is despite the fact that equilibria in these games are computable in polynomial time, and therefore suggests that the hardness of optimal signaling is a distinct phenomenon from the hardness of equilibrium computation. Necessitated by the non-local nature of information structures, en route to our results we prove an "amplification lemma" for the planted clique problem which may be of independent interest.
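
    To make the hardness assumption concrete, here is a minimal sketch (our illustration; the function name and parameters are ours, not the paper's) of the planted clique distribution: sample an Erdős-Rényi graph G(n, 1/2) and force a uniformly random k-subset of vertices into a clique. The conjecture is that no polynomial-time algorithm can recover the planted set when k = o(√n).

```python
import random

def planted_clique_graph(n, k, p=0.5, seed=None):
    """Sample G(n, p) and plant a clique on k uniformly random vertices.

    Returns the edge set and the planted clique.  Recovering the clique
    from a single sample is conjectured to be hard when k = o(sqrt(n));
    this is the assumption the paper's reductions rely on.
    """
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            # Clique pairs are always connected; other pairs with prob. p.
            if (u in clique and v in clique) or rng.random() < p:
                edges.add((u, v))
    return edges, clique

edges, clique = planted_clique_graph(n=200, k=10, seed=0)
```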

    Mixture Selection, Mechanism Design, and Signaling

    We pose and study a fundamental algorithmic problem which we term mixture selection, arising as a building block in a number of game-theoretic applications: Given a function g from the n-dimensional hypercube to the bounded interval [-1,1], and an n × m matrix A with bounded entries, maximize g(Ax) over x in the m-dimensional simplex. This problem arises naturally when one seeks to design a lottery over items for sale in an auction, or craft the posterior beliefs for agents in a Bayesian game through the provision of information (a.k.a. signaling). We present an approximation algorithm for this problem when g simultaneously satisfies two smoothness properties: Lipschitz continuity with respect to the L∞ norm, and noise stability. The latter notion, which we define and tailor to our setting, controls the degree to which low-probability errors in the inputs of g can impact its output. When g is both O(1)-Lipschitz continuous and O(1)-stable, we obtain an (additive) PTAS for mixture selection. We also show that neither assumption suffices by itself for an additive PTAS, and both assumptions together do not suffice for an additive FPTAS. We apply our algorithm to different game-theoretic applications from mechanism design and optimal signaling. We make progress on a number of open problems suggested in prior work by easily reducing them to mixture selection: we resolve an important special case of the small-menu lottery design problem posed by Dughmi, Han, and Nisan; we resolve the problem of revenue-maximizing signaling in Bayesian second-price auctions posed by Emek et al. and Miltersen and Sheffet; we design a quasipolynomial-time approximation scheme for the optimal signaling problem in normal form games suggested by Dughmi; and we design an approximation algorithm for the optimal signaling problem in the voting model of Alonso and Câmara.
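
    To fix notation, the following sketch (our illustration; the particular g, A, and x are hypothetical choices, not instances from the paper) evaluates the mixture-selection objective at a point of the simplex.

```python
import numpy as np

def mixture_objective(g, A, x):
    """Evaluate the mixture-selection objective g(Ax).

    A is an (n, m) matrix with bounded entries and x lies in the
    m-dimensional simplex, so Ax lies in the n-dimensional hypercube
    and g maps it into [-1, 1].
    """
    assert np.all(x >= 0) and abs(x.sum() - 1.0) < 1e-9, "x must lie in the simplex"
    return g(A @ x)

# Illustrative instance: this g is 1-Lipschitz w.r.t. the L-infinity norm,
# since |mean(y) - mean(y')| <= max_i |y_i - y'_i|.
g = lambda y: float(np.clip(y.mean() - 0.5, -1.0, 1.0))
A = np.random.default_rng(0).random((4, 3))  # bounded entries in [0, 1]
x = np.array([0.2, 0.5, 0.3])                # a point in the simplex
print(mixture_objective(g, A, x))
```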

    Optimal detection of sparse principal components in high dimension

    We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. Alas, computing this statistic is known to be NP-complete in general, and we describe a computationally efficient alternative test using convex relaxations. Our relaxation is also proved to detect sparse principal components at near optimal detection levels, and it performs well on simulated datasets. Moreover, using polynomial time reductions from theoretical computer science, we provide significant evidence that our results cannot be improved, thus revealing an inherent trade-off between statistical and computational performance. (Published in the Annals of Statistics, http://dx.doi.org/10.1214/13-AOS1127, by the Institute of Mathematical Statistics, http://www.imstat.org.)
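
    As a concrete reference point, here is a brute-force sketch (our illustration, with hypothetical names) of the k-sparse largest eigenvalue statistic: the maximum, over all k-subsets of coordinates, of the top eigenvalue of the corresponding principal submatrix. The exhaustive subset search is exactly what makes the exact statistic intractable in general, and what the paper's convex relaxation is designed to avoid.

```python
import itertools
import numpy as np

def sparse_eigenvalue(sigma_hat, k):
    """Brute-force k-sparse largest eigenvalue of a covariance matrix.

    Maximizes the top eigenvalue of sigma_hat[S, S] over all k-subsets
    S of coordinates.  Only feasible for small dimensions; the paper's
    test thresholds this quantity, and its relaxation approximates it.
    """
    p = sigma_hat.shape[0]
    best = -np.inf
    for S in itertools.combinations(range(p), k):
        sub = sigma_hat[np.ix_(S, S)]
        best = max(best, float(np.linalg.eigvalsh(sub)[-1]))
    return best

# Toy usage on a sample covariance matrix (dimensions kept tiny on purpose).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))
print(sparse_eigenvalue(np.cov(X, rowvar=False), k=3))
```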

    Graph Algorithms and Applications

    Much real-life data exhibits inherent structure or connectivity. Typical examples include biological data, communication network data, and image data. Graphs provide a natural way to represent and analyze these types of data and their relationships. Unfortunately, the associated algorithms often suffer from high computational complexity, since many of the underlying problems are NP-hard. Therefore, in recent years, many graph models and optimization algorithms have been proposed to achieve a better balance between efficacy and efficiency. This book contains papers reporting recent achievements regarding graph models, algorithms, and applications to real-world problems, with some focus on optimization and computational complexity.

    Detecting Communities is Hard (And Counting Them is Even Harder)

    We consider the algorithmic problem of community detection in networks. Given an undirected friendship graph G, a subset S of vertices is an (a,b)-community if:
    * every member of the community is friends with at least an a-fraction of the community; and
    * every non-member is friends with at most a b-fraction of the community.
    [Arora, Ge, Sachdeva, Schoenebeck 2012] gave a quasi-polynomial time algorithm for enumerating all the (a,b)-communities for any constants a > b. Here, we prove that, assuming the Exponential Time Hypothesis (ETH), quasi-polynomial time is in fact necessary, even for a much weaker approximation desideratum: distinguishing between
    * G contains a (1,o(1))-community; and
    * G does not contain a (b,b+o(1))-community for any b.
    We also prove that counting the number of (1,o(1))-communities requires quasi-polynomial time, assuming the weaker #ETH.
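
    While finding such communities is hard, verifying one is straightforward. Below is a minimal verification sketch (our illustration, under one natural reading of the definition; the paper's exact convention on whether a member counts itself may differ).

```python
def is_ab_community(adj, S, a, b):
    """Check whether vertex set S is an (a, b)-community in graph adj.

    adj maps each vertex to the set of its friends (undirected graph).
    Every member of S must be friends with at least an a-fraction of S,
    and every non-member with at most a b-fraction of S.
    """
    S = set(S)
    size = len(S)
    for v in adj:
        inside = len(adj[v] & S)  # friends of v inside the community
        if v in S and inside < a * size:
            return False
        if v not in S and inside > b * size:
            return False
    return True

# Toy usage: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(is_ab_community(adj, {0, 1, 2}, a=0.6, b=0.5))  # True
```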

    Average-case Hardness of RIP Certification

    The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning. This property is particularly important for computationally efficient recovery methods. As a consequence, even though it is in general NP-hard to check that RIP holds, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the polynomial-time verification of RIP given an arbitrary matrix. We consider the framework of average-case certifiers, which never wrongly declare that a matrix is RIP, while often being correct on random instances. While such certifiers exist and are tractable in a suboptimal parameter regime, we show that certification in any better regime is a computationally hard task. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs.
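
    For concreteness, the following brute-force sketch (our illustration, with hypothetical names) computes the restricted isometry constant δ_k exactly; an average-case certifier must avoid precisely this exponential subset enumeration while never over-declaring RIP.

```python
import itertools
import numpy as np

def rip_constant(A, k):
    """Exhaustive restricted isometry constant delta_k of matrix A.

    delta_k is the smallest delta such that every k-column submatrix
    A_S satisfies (1-delta)||x||^2 <= ||A_S x||^2 <= (1+delta)||x||^2,
    i.e. the eigenvalues of A_S^T A_S lie in [1 - delta, 1 + delta].
    Enumerating all k-subsets is the computation that is NP-hard in
    general, which motivates tractable proxies and certifiers.
    """
    delta = 0.0
    for S in itertools.combinations(range(A.shape[1]), k):
        cols = list(S)
        eig = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
        delta = max(delta, abs(eig[-1] - 1.0), abs(1.0 - eig[0]))
    return delta

# Toy usage: a Gaussian design scaled so the Gram matrix is the
# identity in expectation, a standard RIP-friendly ensemble.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10)) / np.sqrt(30)
print(rip_constant(A, k=2))
```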