
    Designing Networks with Good Equilibria under Uncertainty

    We consider the problem of designing network cost-sharing protocols with good equilibria under uncertainty. The underlying game is a multicast game in a rooted undirected graph with nonnegative edge costs. A set of k terminal vertices or players need to establish connectivity with the root. The social optimum is the minimum Steiner tree. We are interested in situations where the designer has incomplete information about the input. We propose two different models, the adversarial and the stochastic. In both models, the designer has prior knowledge of the underlying metric, but the requested subset of the players is not known and is activated either in an adversarial manner (adversarial model) or is drawn from a known probability distribution (stochastic model). In the adversarial model, the designer's goal is to choose a single, universal protocol that has low Price of Anarchy (PoA) for all possible requested subsets of players. The main question we address is: to what extent can prior knowledge of the underlying metric help in the design? We first demonstrate that there exist graphs (outerplanar) where knowledge of the underlying metric can dramatically improve the performance of network cost-sharing design. Then, in our main technical result, we show that there exist graph metrics for which knowing the underlying metric does not help and any universal protocol has PoA of $\Omega(\log k)$, which is tight. We attack this problem by developing new techniques that employ powerful tools from extremal combinatorics, and more specifically Ramsey theory in high-dimensional hypercubes. Then we switch to the stochastic model, where each player is independently activated. We show that there exists a randomized ordered protocol that achieves constant PoA. By using standard derandomization techniques, we produce a deterministic ordered protocol with constant PoA.
    Comment: This version has additional results about stochastic input
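    To make the objective concrete, here is a minimal sketch (not the paper's construction) of computing the Price of Anarchy of a tiny multicast cost-sharing game by brute force, assuming the fair (Shapley) protocol in which each edge's cost is split equally among its users; the instance, edge names, and costs are invented for illustration.

```python
# Toy Price-of-Anarchy computation for a multicast cost-sharing game.
# Each player picks one of its candidate edge sets reaching the root;
# under Shapley sharing, an edge's cost is split equally among its users.
from itertools import product

edge_cost = {"shared": 3.0, "own1": 2.0, "own2": 2.0}

# One strategy per line below is a frozenset of edges connecting that player to the root.
strategies = [
    [frozenset({"shared"}), frozenset({"own1"})],  # player 1
    [frozenset({"shared"}), frozenset({"own2"})],  # player 2
]

def player_cost(profile, i):
    """Shapley share: player i pays cost(e)/#users(e) for each edge it uses."""
    return sum(edge_cost[e] / sum(e in s for s in profile) for e in profile[i])

def social_cost(profile):
    used = set().union(*profile)
    return sum(edge_cost[e] for e in used)

def is_nash(profile):
    """Pure Nash: no player has a strictly improving unilateral deviation."""
    for i, strats in enumerate(strategies):
        for alt in strats:
            dev = list(profile)
            dev[i] = alt
            if player_cost(dev, i) < player_cost(profile, i) - 1e-12:
                return False
    return True

profiles = list(product(*strategies))
opt = min(social_cost(p) for p in profiles)
eq_costs = [social_cost(p) for p in profiles if is_nash(p)]
print("PoA =", max(eq_costs) / opt)
```

    On this instance the profile where both players buy their private edges is a Nash equilibrium of cost 4, while the optimum shares the common edge at cost 3, so the sketch reports PoA = 4/3.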

    Decoherence in Discrete Quantum Walks

    We present an introduction to coined quantum walks on regular graphs, which have been developed in the past few years as an alternative to quantum Fourier transforms for underpinning algorithms for quantum computation. We then describe our results on the effects of decoherence on these quantum walks on a line, cycle and hypercube. We find high sensitivity to decoherence, increasing with the number of steps in the walk, as the particle becomes more delocalised with each step. However, a small amount of decoherence can enhance the properties of the quantum walk that are desirable for the development of quantum algorithms, such as fast mixing times to uniform distributions.
    Comment: 15 pages, Springer LNP latex style, submitted to Proceedings of DICE 200
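    A minimal numpy sketch of one common decoherence model for the walk on a line (the paper treats several): a Hadamard-coined walk where, with probability p per step, the walker is projectively measured in the position-coin basis. The parameters and the Monte Carlo averaging over trajectories are our choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, runs = 50, 0.05, 200           # steps, per-step decoherence probability, trajectories
N = 2 * T + 1                        # positions -T..T, origin at index T
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

def trajectory():
    psi = np.zeros((N, 2), dtype=complex)      # amplitudes psi[position, coin]
    psi[T] = np.array([1, 1j]) / np.sqrt(2)    # symmetric initial coin state
    for _ in range(T):
        psi = psi @ H.T                        # coin flip at every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]           # coin 0 moves right
        shifted[:-1, 1] = psi[1:, 1]           # coin 1 moves left
        psi = shifted
        if rng.random() < p:                   # decoherence event: full collapse
            probs = np.abs(psi.ravel()) ** 2
            k = rng.choice(psi.size, p=probs / probs.sum())
            psi = np.zeros_like(psi)
            psi.flat[k] = 1.0
    return (np.abs(psi) ** 2).sum(axis=1)      # position distribution

dist = sum(trajectory() for _ in range(runs)) / runs
x = np.arange(N) - T
print("position standard deviation:", np.sqrt((dist * x ** 2).sum()))
```

    With p = 0 the spread grows linearly in the number of steps (ballistic); increasing p pushes it back toward the diffusive sqrt(T) behaviour of the classical random walk.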

    On the relation between Differential Privacy and Quantitative Information Flow

    Differential privacy is a notion that has emerged in the community of statistical databases as a response to the problem of protecting the privacy of a database's participants when performing statistical queries. The idea is that a randomized query satisfies differential privacy if the likelihood of obtaining a certain answer for a database $x$ is not too different from the likelihood of obtaining the same answer on adjacent databases, i.e. databases which differ from $x$ in only one individual. Information flow is an area of security concerned with the problem of controlling the leakage of confidential information in programs and protocols. Nowadays, one of the most established approaches to quantify and to reason about leakage is based on the Rényi min-entropy version of information theory. In this paper, we critically analyze the notion of differential privacy in light of the conceptual framework provided by Rényi min-entropy information theory. We show that there is a close relation between differential privacy and leakage, due to the graph symmetries induced by the adjacency relation. Furthermore, we consider the utility of the randomized answer, which measures its expected degree of accuracy. We focus on certain kinds of utility functions, called "binary", which have a close correspondence with Rényi min mutual information. Again, it turns out that there can be a tight correspondence between differential privacy and utility, depending on the symmetries induced by the adjacency relation and by the query. Depending on these symmetries, we can also build an optimal-utility randomization mechanism while preserving the required level of differential privacy. Our main contribution is a study of the kinds of structures that can be induced by the adjacency relation and the query, and of how to use them to derive bounds on the leakage and achieve optimal utility.
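    As a sketch of the two quantities being related, the following computes, for a small invented channel matrix (rows: databases, columns: answers), the smallest ε for which the mechanism is ε-differentially private over an assumed adjacency relation, and its Rényi min-entropy leakage under a uniform prior.

```python
import math

# Channel matrix: C[x][y] = p(answer y | database x). Entries are hypothetical.
C = [
    [0.6, 0.2, 0.2],
    [0.2, 0.6, 0.2],
    [0.2, 0.2, 0.6],
]
adjacent = [(0, 1), (1, 2)]   # assumed adjacency relation: differ in one individual

def dp_epsilon(C, adjacent):
    """Smallest eps with p(y|x) <= e^eps * p(y|x') for all adjacent x, x' and all y."""
    ratios = [max(C[x][y] / C[xp][y], C[xp][y] / C[x][y])
              for (x, xp) in adjacent for y in range(len(C[0]))]
    return math.log(max(ratios))

def min_entropy_leakage(C):
    """Min-entropy leakage for a uniform prior: log2( sum_y max_x p(y|x) )."""
    return math.log2(sum(max(col) for col in zip(*C)))

print(f"mechanism is {dp_epsilon(C, adjacent):.3f}-differentially private")
print(f"min-entropy leakage = {min_entropy_leakage(C):.3f} bits")
```

    The leakage formula log2(Σ_y max_x p(y|x)) is the uniform-prior special case; the paper's bounds exploit the symmetries of the adjacency graph rather than this direct computation.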

    A survey of statistical network models

    Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, and a host of more specialized professional network communities, has intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then we introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions, and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics.
    Comment: 96 pages, 14 figures, 333 references
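    The 1959 starting point mentioned above is the Erdős-Rényi/Gilbert random graph; as a toy entry point to the model class, here is a sketch of sampling G(n, p) and checking its mean degree against the binomial expectation (parameters are arbitrary).

```python
import random
from collections import Counter

def sample_gnp(n, p, seed=0):
    """Gilbert's G(n, p): include each of the n*(n-1)/2 possible edges w.p. p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

n, p = 200, 0.05
edges = sample_gnp(n, p)
deg = Counter()
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

mean_deg = sum(deg.values()) / n
print(f"{len(edges)} edges; mean degree {mean_deg:.2f} (expected {(n - 1) * p:.2f})")
```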

    High-dimensional asymptotics for percolation of Gaussian free field level sets

    We consider the Gaussian free field on $\mathbb{Z}^d$, $d \geq 3$, and prove that the critical density for percolation of its level sets behaves like $1/d^{1+o(1)}$ as $d$ tends to infinity. Our proof gives the principal asymptotic behavior of the corresponding critical level $h_*(d)$. Moreover, it shows that a related parameter $h_{**}(d) \geq h_*(d)$ introduced by Rodriguez and Sznitman in arXiv:1202.5172 is in fact asymptotically equivalent to $h_*(d)$.
    Comment: 39 pages, 2 figures
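    A heuristic gloss on how the two asymptotics fit together (our reading, not part of the abstract): since the variance $g(0,0)$ of the field at a point tends to 1 in high dimension, the Gaussian tail ties a critical density of $d^{-(1+o(1))}$ to a critical level of order $\sqrt{2\log d}$:

```latex
% Heuristic only: assumes \operatorname{Var}(\varphi_0) = g(0,0) \to 1 as d \to \infty,
% and ignores polynomial corrections to the Gaussian tail.
\mathbb{P}(\varphi_0 > h) \approx e^{-h^2/(2+o(1))}
  \overset{!}{=} d^{-(1+o(1))}
  \quad\Longrightarrow\quad
  h_*(d) = \sqrt{(2+o(1))\,\log d}.
```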

    Ising spin glass models versus Ising models: an effective mapping at high temperature III. Rigorous formulation and detailed proof for general graphs

    Recently, it has been shown that, when a graph is infinite-dimensional in a broad sense, the upper critical surface and the corresponding critical behavior of an arbitrary Ising spin glass model defined over the graph can be exactly mapped onto the critical surface and behavior of a non-random Ising model. A graph can be infinite-dimensional in a strict sense, like the fully connected graph, or in a broad sense, as happens on a Bethe lattice and in many random graphs. In this paper, we first introduce our definition of dimensionality, which is compared to the standard definition and readily applied to test the infinite dimensionality of a large class of graphs which, remarkably enough, includes even graphs where the tree-like approximation (in other words, the Bethe-Peierls approach) may in general be wrong. Then, we derive a detailed proof of the mapping for all graphs satisfying this condition. As a byproduct, the mapping immediately provides a very general Nishimori law.
    Comment: 25 pages, 5 figures, made statements in Sec. 10 clearer
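    For orientation, the classical Nishimori law for the ±J Ising spin glass (each bond ferromagnetic with probability p), of which the paper's byproduct is a generalization, reads in standard notation:

```latex
% Classical +-J Nishimori condition; the paper derives a more general law.
e^{-2\beta J} = \frac{1-p}{p},
\qquad\text{i.e.}\qquad
\beta_N = \frac{1}{2J}\,\ln\frac{p}{1-p}.
```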

    Effect of connectivity in an associative memory model

    We investigate how geometric properties translate into functional properties in sparse networks of computing elements. Specifically, we determine how the eigenvalues of the interconnection graph (which in turn reflect connectivity properties) relate to the number of items stored, the amount of error correction, the radius of attraction, and the rate of convergence in an associative memory model consisting of a sparse network of threshold elements or neurons.
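    A minimal sketch of the model class in question, with our own parameter choices: a Hebbian associative memory whose weights are masked by a sparse random interconnection graph, recalling a stored pattern from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, degree = 400, 3, 40        # neurons, stored patterns, edges per neuron (assumed)

# Sparse symmetric interconnection graph as a 0/1 adjacency mask.
A = np.zeros((n, n))
for i in range(n):
    nbrs = rng.choice(n, size=degree, replace=False)
    A[i, nbrs] = 1
    A[nbrs, i] = 1
np.fill_diagonal(A, 0)

patterns = rng.choice([-1, 1], size=(m, n))
W = A * (patterns.T @ patterns) / n      # Hebbian weights, masked by the graph

def recall(cue, steps=20):
    """Synchronous threshold (sign) updates from a noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s + 1e-9)        # tiny offset avoids sign(0)
    return s

target = patterns[0]
cue = target * rng.choice([1, -1], size=n, p=[0.9, 0.1])   # ~10% bits flipped
overlap = (recall(cue) @ target) / n
print(f"overlap with stored pattern after recall: {overlap:.3f}")
```

    Denser or better-expanding interconnection graphs (larger spectral gap) support more stored patterns and larger radii of attraction, which is the kind of eigenvalue/function relationship the paper quantifies.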

    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault tolerance issues and relate to various application modes in the hypercube. Solutions discussed in the text fall into probabilistic and deterministic models. The probabilistic measure assumes a stochastic graph of the hypercube where PEs and/or CEs may fail with certain probabilities, while the deterministic model considers that some system components have already failed and aims to determine the system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented; a proof of its correctness is included in an appendix. Using the shelling ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially in the hypercube dimension. This dissertation, then, aims to obtain lower and upper bounds on the measures more efficiently. The algorithms presented in the text generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. To evaluate deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed, combining the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube $C_n$ with faulty PEs. The first, called LOS1, requires a list of faulty components and uses the CMB operator of CAREL to solve the problem. When the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes the m available nodes in O(mn) time. The proposed techniques are simple and efficient.
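    Not LOS1/LOS2 themselves, but a brute-force reference for the problem they solve: every subcube of an n-cube can be written as a ternary mask over {0, 1, *}, and the largest operational subcube is the fault-free mask with the most *'s. The fault set below is hypothetical.

```python
from itertools import product

def largest_fault_free_subcube(n, faulty):
    """Brute force over the 3^n subcubes of an n-cube, each written as a
    ternary mask over {'0','1','*'}; a node (bit tuple) lies in a subcube
    iff it matches every fixed coordinate. Exponential -- reference only."""
    faulty = {tuple(f) for f in faulty}
    best = None
    for mask in product("01*", repeat=n):
        if any(all(c == "*" or c == str(b) for c, b in zip(mask, node))
               for node in faulty):
            continue                      # this subcube contains a faulty node
        if best is None or mask.count("*") > best.count("*"):
            best = mask
    return "".join(best)

# 4-cube with two faulty nodes (hypothetical fault set):
print(largest_fault_free_subcube(4, [(0, 0, 0, 0), (1, 1, 1, 1)]))
```

    LOS1/LOS2 avoid this 3^n enumeration; the sketch is only useful for checking small cases.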

    Property testing for distributions on partially ordered sets

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 24).
    We survey the results of Rubinfeld, Batu et al. ([2], [3]) on testing distributions for monotonicity, and testing distributions known to be monotone for uniformity. We extend some of their results to new partial orders, and provide evidence for some new conjectural lower bounds. Our results apply to various partial orders: bipartite graphs, lines, trees, grids, and hypercubes.
    by Punyashloka Biswal.
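    A sketch of the collision-count statistic that underlies much of this testing literature (the surveyed papers [2], [3] use more refined testers): the empirical collision probability of samples from a uniform distribution concentrates near 1/n, while distributions far from uniform inflate it.

```python
import random
from collections import Counter

def collision_stat(samples):
    """Empirical collision probability: fraction of sample pairs that agree.
    Uniform over n elements gives ~1/n; a distribution eps-far from uniform
    (in L1) gives at least (1 + eps^2)/n."""
    counts = Counter(samples)
    m = len(samples)
    return sum(c * (c - 1) for c in counts.values()) / (m * (m - 1))

rng = random.Random(0)
n, m = 100, 20000
uniform = [rng.randrange(n) for _ in range(m)]
skewed = [rng.randrange(n // 2) if rng.random() < 0.75 else rng.randrange(n)
          for _ in range(m)]

print(f"1/n = {1 / n:.4f}")
print(f"uniform collision stat: {collision_stat(uniform):.4f}")
print(f"skewed  collision stat: {collision_stat(skewed):.4f}")
```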