
    Fast construction on a restricted budget

    We introduce a model of a controlled random graph process. In this model, the edges of the complete graph $K_n$ are ordered randomly and then revealed, one by one, to a player called Builder. He must decide, immediately and irrevocably, whether to purchase each observed edge. The observation time is bounded by a parameter $t$, and the total budget of purchased edges is bounded by a parameter $b$. Builder's goal is to devise a strategy that, with high probability, allows him to construct a graph of purchased edges possessing a target graph property $\mathcal{P}$, all within the limitations of observation time and total budget. We show the following: (a) Builder has a strategy to achieve minimum degree $k$ at the hitting time for this property by purchasing at most $c_k n$ edges for an explicit $c_k < k$; and a strategy to achieve it (slightly) after the threshold for minimum degree $k$ by purchasing at most $(1+\varepsilon)kn/2$ edges (which is optimal); (b) Builder has a strategy to create a Hamilton cycle if either $t \ge (1+\varepsilon)n\log n/2$ and $b \ge Cn$, or $t \ge Cn\log n$ and $b \ge (1+\varepsilon)n$, for some $C = C(\varepsilon)$; similar results hold for perfect matchings; (c) Builder has a strategy to create a copy of a given $k$-vertex tree if $t \ge b \gg \max\{(n/t)^{k-2}, 1\}$, and this is optimal; and (d) for $\ell = 2k+1$ or $\ell = 2k+2$, Builder has a strategy to create a copy of a cycle of length $\ell$ if $b \gg \max\{n^{k+2}/t^{k+1}, n/\sqrt{t}\}$, and this is optimal. Comment: 20 pages, 2 figures
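    A minimal simulation sketch of the model (not the paper's optimized strategy; the constants $c_k < k$ come from a more careful analysis): the natural greedy rule for minimum degree $k$ buys an observed edge exactly when an endpoint is still deficient. The function name and parameters below are illustrative.

```python
import random
from itertools import combinations

def greedy_min_degree(n, k, t=None, b=None, seed=0):
    """Builder simulation on a uniformly random ordering of E(K_n).

    Greedy rule (an illustration, not the paper's strategy): buy an
    observed edge iff at least one endpoint still has degree < k.
    Stops when minimum degree k is reached, or when the observation
    bound t / purchase budget b (if given) runs out."""
    rng = random.Random(seed)
    edges = list(combinations(range(n), 2))
    rng.shuffle(edges)                  # random arrival order of K_n's edges
    deg = [0] * n
    deficient = n                       # number of vertices with degree < k
    bought = 0
    for time, (u, v) in enumerate(edges, start=1):
        if t is not None and time > t:
            break                       # observation time exhausted
        if deg[u] < k or deg[v] < k:
            for w in (u, v):
                deg[w] += 1
                if deg[w] == k:
                    deficient -= 1
            bought += 1
            if deficient == 0 or (b is not None and bought >= b):
                break
    return bought, deficient == 0

bought, reached = greedy_min_degree(n=1000, k=2)
print(f"min degree 2 reached: {reached}, edges bought: {bought}")
```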

    Avoiding small subgraphs in Achlioptas processes

    For a fixed integer r, consider the following random process. At each round, one is presented with r random edges from the edge set of the complete graph on n vertices, and is asked to choose one of them. The selected edges are collected into a graph, which thus grows at the rate of one edge per round. This is a natural generalization of what is known in the literature as an Achlioptas process (the original version has r=2), which has been studied by many researchers, mainly in the context of delaying or accelerating the appearance of the giant component. In this paper, we investigate the small subgraph problem for Achlioptas processes. That is, given a fixed graph H, we study whether there is an online algorithm that substantially delays or accelerates a typical appearance of H, compared to its threshold of appearance in the random graph G(n, M). It is easy to see that one cannot accelerate the appearance of any fixed graph by more than the constant factor r, so we concentrate on the task of avoiding H. We determine thresholds for the avoidance of all cycles C_t, cliques K_t, and complete bipartite graphs K_{t,t}, in every Achlioptas process with parameter r >= 2. Comment: 43 pages; reorganized and shortened
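    A toy simulation of the process (the rule below is the obvious greedy avoidance heuristic, not the optimal strategy analyzed in the paper): among the r offered edges, keep one that does not close a triangle whenever possible, and record the round at which a triangle is first forced. All names are illustrative.

```python
import random

def avoid_triangle_rounds(n, r=2, max_rounds=None, seed=0):
    """Achlioptas process avoiding C_3: each round we see r random
    non-edges of K_n and must keep one.  Greedy rule (illustrative):
    prefer any offered edge whose endpoints share no common neighbour.
    Returns the first round at which a triangle is unavoidable."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    max_rounds = max_rounds or n * n
    for rnd in range(1, max_rounds + 1):
        offers = []
        while len(offers) < r:          # sample r distinct-looking non-edges
            u, v = rng.randrange(n), rng.randrange(n)
            if u != v and v not in adj[u]:
                offers.append((u, v))
        # an edge closes a triangle iff its endpoints share a neighbour
        safe = [(u, v) for u, v in offers if not (adj[u] & adj[v])]
        if not safe:
            return rnd                  # every offer completes a triangle
        u, v = safe[0]
        adj[u].add(v)
        adj[v].add(u)
    return None                         # no triangle forced within max_rounds

print(avoid_triangle_rounds(n=1000, r=2, seed=1))
```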

    Memoryless Algorithms for the Generalized $k$-server Problem on Uniform Metrics

    We consider the generalized $k$-server problem on uniform metrics. We study the power of memoryless algorithms and show tight bounds of $\Theta(k!)$ on their competitive ratio. In particular, we show that the Harmonic Algorithm achieves this competitive ratio and provide matching lower bounds. This improves on the $\approx 2^{2^k}$ doubly exponential bound of Chiplunkar and Vishwanathan for the more general setting of uniform metrics with different weights.
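    A sketch of the rule itself (the analysis is the paper's contribution; the setup below is the standard one): in the generalized $k$-server problem each server $i$ lives in its own metric space, and a request $(r_1, \dots, r_k)$ is served once some server $i$ stands at $r_i$. On a uniform metric every single move costs 1, so "probability inversely proportional to cost" degenerates to a uniform choice of which server to move. The request data below is hypothetical.

```python
import random

def harmonic_uniform(requests, k, seed=0):
    """Memoryless harmonic rule for the generalized k-server problem
    on uniform metrics (sketch).  Since every single-server move costs 1,
    the harmonic distribution is just uniform: on an unserved request,
    move one uniformly random server i to its coordinate r_i, which
    serves the request immediately."""
    rng = random.Random(seed)
    pos = [0] * k                   # each server starts at point 0 of its space
    cost = 0
    for req in requests:
        if any(pos[i] == req[i] for i in range(k)):
            continue                # already served; a lazy algorithm stays put
        i = rng.randrange(k)        # harmonic = uniform on a uniform metric
        pos[i] = req[i]
        cost += 1
    return cost

# hypothetical request sequence: 3 servers, 5-point uniform spaces
rng = random.Random(42)
reqs = [tuple(rng.randrange(5) for _ in range(3)) for _ in range(10_000)]
print(harmonic_uniform(reqs, k=3))
```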

    A Local Stochastic Algorithm for Separation in Heterogeneous Self-Organizing Particle Systems

    We present and rigorously analyze the behavior of a distributed, stochastic algorithm for separation and integration in self-organizing particle systems, an abstraction of programmable matter. Such systems are composed of individual computational particles with limited memory, strictly local communication abilities, and modest computational power. We consider heterogeneous particle systems of two different colors and prove that these systems can collectively separate into different color classes or integrate, indifferent to color. We accomplish both behaviors with the same fully distributed, local, stochastic algorithm. Achieving separation or integration depends only on a single global parameter determining whether particles prefer to be next to other particles of the same color or not; this parameter is meant to represent external, environmental influences on the particle system. The algorithm is a generalization of a previous distributed, stochastic algorithm for compression (PODC '16) that can be viewed as a special case of separation where all particles have the same color. It is significantly more challenging to prove that the desired behavior is achieved in the heterogeneous setting, however, even in the bichromatic case we focus on. This requires combining several new techniques, including the cluster expansion from statistical physics, a new variant of the bridging argument of Miracle, Pascoe and Randall (RANDOM '11), the high-temperature expansion of the Ising model, and careful probabilistic arguments.
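    A drastically simplified sketch of the bias mechanism only (the actual algorithm moves particles on a triangular lattice under strictly local rules; here positions are fixed on a torus and only colors move): Metropolis swaps weighted by a single global parameter $\gamma$ per same-color neighboring pair, with $\gamma > 1$ driving separation and $\gamma$ near 1 yielding integration. The grid setup and all names are illustrative.

```python
import random

def separation_dynamics(n=32, gamma=4.0, steps=200_000, seed=0):
    """Simplified biased stochastic dynamics: two colors on an n x n
    torus, Metropolis color swaps with stationary weight
    gamma^(# same-color neighbouring pairs)."""
    rng = random.Random(seed)
    color = [[rng.randrange(2) for _ in range(n)] for _ in range(n)]

    def same_color_neighbors(x, y, c):
        return sum(color[(x + dx) % n][(y + dy) % n] == c
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(steps):
        # pick two random sites and propose swapping their colors
        x1, y1 = rng.randrange(n), rng.randrange(n)
        x2, y2 = rng.randrange(n), rng.randrange(n)
        c1, c2 = color[x1][y1], color[x2][y2]
        if c1 == c2:
            continue
        before = same_color_neighbors(x1, y1, c1) + same_color_neighbors(x2, y2, c2)
        color[x1][y1], color[x2][y2] = c2, c1
        after = same_color_neighbors(x1, y1, c2) + same_color_neighbors(x2, y2, c1)
        if rng.random() >= gamma ** (after - before):    # Metropolis accept/reject
            color[x1][y1], color[x2][y2] = c1, c2        # reject: undo the swap
    return color

n = 32
grid = separation_dynamics(n=n, gamma=4.0)
mono = sum(grid[x][y] == grid[(x + 1) % n][y] for x in range(n) for y in range(n)) + \
       sum(grid[x][y] == grid[x][(y + 1) % n] for x in range(n) for y in range(n))
print(f"monochromatic neighbouring pairs: {mono} / {2 * n * n}")
```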

    Online Paging with Heterogeneous Cache Slots


    Community detection and stochastic block models: recent developments

    The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model for studying clustering and community detection, and generally provides fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a. detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters, and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest to achieve these limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, and classical and nonbacktracking spectral methods. A few open problems are also discussed.
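    For concreteness, a small sketch of the symmetric two-community SBM together with closed forms of the two thresholds the survey centers on, in their standard two-community special case; the sampler and function names are illustrative.

```python
import random

def sample_sbm(n, p, q, seed=0):
    """Symmetric two-community SBM: first half of the vertices form
    community 0, the rest community 1; edges appear independently with
    probability p inside a community and q across."""
    rng = random.Random(seed)
    label = [0] * (n // 2) + [1] * (n - n // 2)
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < (p if label[u] == label[v] else q)]
    return label, edges

def exact_recovery_solvable(a, b):
    """Logarithmic regime p = a*log(n)/n, q = b*log(n)/n: exact recovery
    is solvable iff (sqrt(a) - sqrt(b))^2 > 2 (Chernoff-Hellinger)."""
    return (a ** 0.5 - b ** 0.5) ** 2 > 2

def weak_recovery_solvable(a, b):
    """Constant-degree regime p = a/n, q = b/n: detection is possible
    iff (a - b)^2 > 2*(a + b) (Kesten-Stigum)."""
    return (a - b) ** 2 > 2 * (a + b)

labels, edges = sample_sbm(n=300, p=0.10, q=0.02, seed=1)
print(len(edges), "edges sampled")
print(exact_recovery_solvable(8, 1))   # True:  (sqrt(8)-1)^2 ~ 3.34 > 2
print(weak_recovery_solvable(3, 1))    # False: (3-1)^2 = 4 < 2*(3+1) = 8
```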

    Polarization and Spatial Coupling: Two Techniques to Boost Performance

    During the last two decades we have witnessed considerable activity in building bridges between the fields of information theory/communications, computer science, and statistical physics. This is due to the realization that many fundamental concepts and notions in these fields are in fact related and that each field can benefit from the insight and techniques developed in the others. For instance, the notion of channel capacity in information theory, threshold phenomena in computer science, and phase transitions in statistical physics are all expressions of the same concept. It would therefore be beneficial to develop a common framework that unifies these notions and helps leverage knowledge in one field to make progress in the others. A particularly striking example is the celebrated belief propagation algorithm: it was independently invented in each of these fields, for very different purposes, and the realization of the commonality has benefited each of the areas.

    We investigate polarization and spatial coupling: two techniques that were originally invented in the context of channel coding (communications), thus resulting for the first time in efficient capacity-achieving codes for a wide range of channels. As we will discuss, both techniques also play a fundamental role in computer science and statistical physics, so they can be seen as further fundamental building blocks that unite all three areas. We demonstrate applications of these techniques, as well as the fundamental phenomena behind them.

    In more detail, this thesis consists of two parts. In the first part, we consider the technique of polarization and its resultant class of channel codes, called polar codes. Our main focus is the analysis and improvement of the behavior of polarization with respect to the most significant aspects of modern channel-coding theory: scaling laws, universality, and complexity (quantization). For each of these aspects, we derive fundamental laws that govern the behavior of polarization and polar codes. Even though we concentrate on applications in communications, the analysis we provide is general and carries over to applications of polarization in computer science and statistical physics. As we will show, our investigations confirm some of the inherent strengths of polar codes, such as their robustness with respect to quantization. But they also make clear where further improvement is needed: for example, the scaling behavior of polar codes is quite slow compared to the optimal one, so further research is required to bring it toward optimality.

    In the second part of this thesis, we investigate spatial coupling. Since there already exists a considerable literature on spatial coupling in the realm of information theory and communications, we investigate mainly its impact on statistical physics and computer science. We consider two well-known models. The first is the Curie-Weiss model, which provides the simplest setting for understanding the mechanism of spatial coupling from the perspective of statistical physics. Many fundamental features of spatial coupling can be explained simply here. In particular, we show how the well-known Maxwell construction of statistical physics manifests itself through spatial coupling.
We then focus on a much richer class of graphical models called constraint satisfaction problems (CSPs), e.g., K-SAT and Q-COL, which are central to computer science. We follow a general framework: first, we introduce interpolation procedures for proving that the coupled and standard (uncoupled) models are fundamentally related, in that their static properties (such as their SAT/UNSAT threshold) are the same. We then use tools from spin glass theory (the cavity method) to demonstrate the phenomenon of threshold saturation in these coupled models. Finally, we present the algorithmic implications and argue that all these features provide a new avenue for obtaining better, provable, algorithmic lower bounds on the static thresholds of the individual standard CSP models. We consider simple decimation algorithms (e.g., the unit clause propagation algorithm) for the coupled CSP models and provide machinery to analyze them. These analyses show that the algorithmic thresholds on the coupled models improve significantly over the standard ones. For some models (e.g., 3-SAT, 3-COL), these coupled algorithmic thresholds surpass the best lower bounds on the SAT/UNSAT threshold in the literature and thus provide new lower bounds. We conclude by pointing out that although we considered only some specific graphical models, our results are general in nature and hence applicable to a broad set of models. In particular, a main contribution of this thesis is to firmly establish both polarization and spatial coupling in the common toolbox of information theory/communications, statistical physics, and computer science.
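    The polar transform on the binary erasure channel gives the cleanest picture of the polarization discussed in the first part; the recursion below is the standard textbook computation, not the thesis's experiments. A BEC with erasure probability $z$ splits into a worse synthetic channel with parameter $2z - z^2$ and a better one with $z^2$; iterating drives almost every synthetic channel toward $0$ or $1$, with the good fraction tending to the capacity $1 - z$. The slow convergence visible here is exactly the scaling-law issue analyzed in the thesis.

```python
def polarize_bec(z=0.5, levels=14):
    """Channel polarization on the BEC: one transform step maps erasure
    probability z to the pair (2z - z^2, z^2).  After `levels` steps we
    hold 2**levels synthetic channels; almost all end up near-perfect
    (z ~ 0) or useless (z ~ 1), and the near-perfect fraction tends to
    the capacity 1 - z as the number of levels grows."""
    zs = [z]
    for _ in range(levels):
        zs = [w for v in zs for w in (2 * v - v * v, v * v)]
    good = sum(v < 1e-3 for v in zs) / len(zs)
    bad = sum(v > 1 - 1e-3 for v in zs) / len(zs)
    return good, bad

good, bad = polarize_bec(z=0.5, levels=14)
print(f"near-perfect: {good:.3f}, useless: {bad:.3f} (capacity 0.5)")
```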