
    From Balls and Bins to Points and Vertices

    Given a graph G = (V, E) with |V| = n, we consider the following problem. Place m = n points on the vertices of G independently and uniformly at random. Once the points are placed, relocate them using a bijection from the points to the vertices that minimizes the maximum distance between each point's initial random placement and its target vertex. We look for an upper bound on this maximum relocation distance that holds with high probability (over the initial placements of the points). For general graphs and in the case m ≤ n, we prove the #P-hardness of the problem and that the maximum relocation distance is O(√n) with high probability. We present a Fully Polynomial Randomized Approximation Scheme when the input graph admits a polynomial-size family of witness cuts, while for trees we provide a 2-approximation algorithm. Many applications concern the variation in which m = (1 − ε)n for some 0 < ε < 1. We provide several bounds on the maximum relocation distance for different graph topologies.
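    To make the objective concrete, here is a brute-force sketch of the relocation problem: points are placed uniformly at random and the best bijection is found by trying every assignment. The cycle graph and the exhaustive search are illustrative choices, not the paper's algorithms (which give an FPRAS and a tree 2-approximation rather than enumeration).

```python
# Brute-force min-max relocation on a tiny graph (illustrative only).
import itertools
import random
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def min_max_relocation(adj, points):
    """Minimum over all bijections of the maximum point-to-target distance.

    Tries all n! bijections, so only viable for very small graphs.
    """
    vertices = sorted(adj)
    dist = {v: bfs_distances(adj, v) for v in vertices}
    best = float("inf")
    for targets in itertools.permutations(vertices):
        worst = max(dist[p][t] for p, t in zip(points, targets))
        best = min(best, worst)
    return best

# n-cycle: vertex i is adjacent to (i-1) mod n and (i+1) mod n.
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
points = [random.randrange(n) for _ in range(n)]  # m = n uniform placements
print(points, "->", min_max_relocation(adj, points))
```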

    Budgeted Dominating Sets in Uncertain Graphs

    We study the Budgeted Dominating Set (BDS) problem on uncertain graphs, namely, graphs with a probability distribution p associated with the edges, such that an edge e exists in the graph with probability p(e). The input to the problem consists of a vertex-weighted uncertain graph G = (V, E, p, w) and an integer budget (or solution size) k, and the objective is to compute a vertex set S of size k that maximizes the expected total domination (or total weight) of vertices in the closed neighborhood of S. We refer to the problem as the Probabilistic Budgeted Dominating Set (PBDS) problem. In this article, we present the following results on the complexity of the PBDS problem. 1) We show that the PBDS problem is NP-complete even when restricted to uncertain trees of diameter at most four. This is in sharp contrast with the well-known fact that the BDS problem is solvable in polynomial time in trees. We further show that PBDS is W[1]-hard for the budget parameter k, and under the Exponential Time Hypothesis it cannot be solved in n^{o(k)} time. 2) We show that if one is willing to settle for a (1 − ε) approximation, then there exists a PTAS for PBDS on trees. Moreover, for the scenario of uniform edge probabilities, the problem can be solved optimally in polynomial time. 3) We consider the parameterized complexity of the PBDS problem, and show that Uni-PBDS (where all edge probabilities are identical) is W[1]-hard for the parameter pathwidth. On the other hand, we show that it is FPT in the combined parameters of the budget k and the treewidth. 4) Finally, we extend some of our parameterized results to planar and apex-minor-free graphs. Our first hardness proof (Thm. 1) makes use of the new problem of k-Subset Σ-Π Maximization (k-SPM), which we believe is of independent interest. We prove its NP-hardness by a reduction from the well-known k-SUM problem, establishing a close relationship between the two problems.
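    Under the usual assumption that edges are realized independently, the PBDS objective has a closed form: a vertex v outside S is dominated unless every edge from S to v is absent. The sketch below evaluates this expectation and searches all k-subsets by brute force; the toy star graph, weights, and probabilities are illustrative, not from the article.

```python
# Expected domination of a set S in an uncertain graph, edges independent.
import itertools

def expected_domination(vertices, weights, edge_prob, S):
    """E[total weight of the closed neighborhood of S].

    edge_prob maps frozenset({u, v}) -> probability the edge exists.
    """
    S = set(S)
    total = 0.0
    for v in vertices:
        if v in S:
            total += weights[v]  # v always dominates itself
            continue
        miss = 1.0  # probability that no edge from S reaches v
        for u in S:
            miss *= 1.0 - edge_prob.get(frozenset((u, v)), 0.0)
        total += weights[v] * (1.0 - miss)
    return total

def best_k_subset(vertices, weights, edge_prob, k):
    """Exhaustive search over all k-subsets (exponential in k)."""
    return max(itertools.combinations(vertices, k),
               key=lambda S: expected_domination(vertices, weights, edge_prob, S))

# A toy uncertain star: center 0, leaves 1..4, every edge present w.p. 0.5.
vertices = [0, 1, 2, 3, 4]
weights = {0: 1.0, 1: 2.0, 2: 2.0, 3: 1.0, 4: 1.0}
edge_prob = {frozenset((0, i)): 0.5 for i in range(1, 5)}
S = best_k_subset(vertices, weights, edge_prob, k=1)
print(S, expected_domination(vertices, weights, edge_prob, S))
```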

    Improving Network Reliability: Analysis, Methodology, and Algorithms

    The reliability of networking and communication systems is vital for the nation's economy and security. Optical and cellular networks have become a critical infrastructure and are indispensable in emergency situations. This dissertation outlines methods for analyzing such infrastructures in the presence of catastrophic failures, such as a hurricane, as well as accidental failures of one or more components. Additionally, it presents a method for protecting against the loss of a single link in a multicast network, along with a technique that enables wireless clients to efficiently recover lost data sent by their source through collaborative information exchange. A network's reliability during a natural disaster can be assessed by simulating the conditions in which it is expected to perform. This dissertation conducts such an analysis of a cellular infrastructure in the aftermath of a hurricane through Monte-Carlo sampling and presents alternative topologies which reduce the resulting loss of calls. While previous research on restoration mechanisms for large-scale networks has mostly focused on handling the failures of single network elements, this dissertation examines the sampling methods used for simulating multiple failures. We present a quick method of finding a lower bound on a network's data loss through enumeration of possible cuts, as well as an efficient method of finding a tighter lower bound through genetic algorithms leveraging the niching technique. Mitigation of data losses in a multicast network can be achieved by adding redundancy and employing advanced coding techniques. By using Maximum Rank Distance (MRD) codes at the source, a provider can create a parity packet which is effectively linearly independent from the source packets, such that all packets may be transmitted through the network using the network coding technique. This allows all sinks to recover all of the original data even with the failure of an edge within the network. Furthermore, this dissertation presents a method that allows a group of wireless clients to cooperatively recover from erasures (e.g., due to failures) by using index coding techniques.
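    As a minimal illustration of Monte-Carlo simulation of multiple simultaneous failures, the sketch below estimates the probability that a small network stays connected when each link fails independently. The topology and failure probability are stand-ins; the dissertation's cellular model, cut enumeration, and genetic-algorithm bounds are not reproduced here.

```python
# Monte-Carlo estimate of connectivity under independent link failures.
import random

def is_connected(nodes, edges):
    """BFS/DFS connectivity check over the surviving edge set."""
    if not nodes:
        return True
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def mc_connectivity(nodes, edges, fail_prob, trials=100_000):
    """Fraction of sampled failure scenarios that leave the network connected."""
    hits = 0
    for _ in range(trials):
        surviving = [e for e in edges if random.random() >= fail_prob]
        hits += is_connected(nodes, surviving)
    return hits / trials

# A 2x2 grid with one diagonal brace; every link fails w.p. 0.1.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 3), (3, 2), (2, 0), (0, 3)]
print(mc_connectivity(nodes, edges, fail_prob=0.1))
```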

    Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems

    Algebraic Approaches to Stochastic Optimization

    The dissertation presents algebraic approaches to the shortest path and maximum flow problems in stochastic networks. The goal of the stochastic shortest path problem is to find the distribution of the shortest path length, while the goal of the stochastic maximum flow problem is to find the distribution of the maximum flow value. In stochastic networks it is common to model arc values (lengths, capacities) as random variables. In this dissertation, we model arc values with discrete non-negative random variables and show how each arc value can be represented as a polynomial. We then define two algebraic operations and use these operations to develop both exact and approximating algorithms for each problem in acyclic networks. Using majorization concepts, we show that the approximating algorithms produce bounds on the distribution of interest; we obtain both lower and upper bounding distributions. We also obtain bounds on the expected shortest path length and expected maximum flow value. In addition, we use fixed-point iteration techniques to extend these approaches to general networks. Finally, we present a modified version of the Quine-McCluskey method for simplification of Boolean expressions in order to simplify the polynomials used in our work.
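    One plausible reading of the polynomial representation is this: the PMF of a discrete non-negative arc length is a polynomial whose x^k coefficient is P(length = k); summing independent arc lengths along a path is then polynomial multiplication, and taking the minimum over independent parallel paths can be done through survival functions. The two operations below are a hedged reconstruction of that idea for a tiny series-parallel network, not the dissertation's exact algebra.

```python
# PMFs over non-negative integers as coefficient lists: p[k] = P(value = k).

def series(p, q):
    """PMF of X + Y: polynomial multiplication (discrete convolution)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def parallel_min(p, q):
    """PMF of min(X, Y), X and Y independent: P(min > k) = P(X>k) P(Y>k)."""
    n = max(len(p), len(q))
    def survival(r):  # returns sr with sr[k] = P(R > k - 1), k = 0..n
        s, acc = [], 1.0
        for k in range(n):
            acc -= r[k] if k < len(r) else 0.0
            s.append(acc)
        return [1.0] + s
    sp, sq = survival(p), survival(q)
    return [sp[k] * sq[k] - sp[k + 1] * sq[k + 1] for k in range(n)]

# Two arcs in series (each of length 1 or 2, equally likely), in parallel
# with a direct arc of length 2 or 3: distribution of the shortest path.
arc = [0.0, 0.5, 0.5]            # P(len=1) = P(len=2) = 0.5
path = series(arc, arc)           # lengths 2, 3, 4 w.p. 0.25, 0.5, 0.25
direct = [0.0, 0.0, 0.5, 0.5]     # P(len=2) = P(len=3) = 0.5
print(parallel_min(path, direct))  # [0, 0, 0.625, 0.375, 0]
```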

    Quantization in acquisition and computation networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165).

    In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems.

    Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation.

    by John Z. Sun, Ph.D.
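    The expected relative error criterion mentioned above can be illustrated numerically. The sketch below compares a quantizer with evenly spaced points against one spaced evenly in the log domain, under ERE = E[((X − Q(X))/X)^2]; the log-normal source and both quantizer designs are illustrative assumptions, not the thesis's derivations. The observation that relative-error criteria favor codebooks denser near zero is the intuition connected to the Weber-Fechner law.

```python
# Compare expected relative error of two quantizer designs by Monte Carlo.
import math
import random

def make_uniform_quantizer(lo, hi, levels):
    """Round to the nearest of `levels` points evenly spaced on [lo, hi]."""
    step = (hi - lo) / (levels - 1)
    return lambda x: lo + step * min(levels - 1, max(0, round((x - lo) / step)))

def make_log_quantizer(lo, hi, levels):
    """Quantize log(x) uniformly: points evenly spaced in the log domain."""
    q = make_uniform_quantizer(math.log(lo), math.log(hi), levels)
    return lambda x: math.exp(q(math.log(x)))

def expected_relative_error(quantize, samples):
    return sum(((x - quantize(x)) / x) ** 2 for x in samples) / len(samples)

random.seed(0)
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(200_000)]  # log-normal
lo, hi = min(samples), max(samples)
for name, q in [("uniform", make_uniform_quantizer(lo, hi, 64)),
                ("log-uniform", make_log_quantizer(lo, hi, 64))]:
    print(name, expected_relative_error(q, samples))
```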

    Survivability in layered networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 195-204).

    In layered networks, a single failure at the lower (physical) layer may cause multiple failures at the upper (logical) layer. As a result, traditional schemes that protect against single failures may not be effective in layered networks. This thesis studies the problem of maximizing network survivability in the layered setting, with a focus on optimizing the embedding of the logical network onto the physical network. In the first part of the thesis, we start with an investigation of the fundamental properties of layered networks, and show that basic network connectivity structures, such as cuts, paths and spanning trees, exhibit fundamentally different characteristics from their single-layer counterparts. This leads to our development of a new cross-layer survivability metric that properly quantifies the resilience of the layered network against physical failures. Using this new metric, we design algorithms to embed the logical network onto the physical network based on multi-commodity flows, to maximize the cross-layer survivability.

    In the second part of the thesis, we extend our model to a random failure setting and study the cross-layer reliability of the networks, defined to be the probability that the upper layer network stays connected under the random failure events. We generalize the classical polynomial expression for network reliability to the layered setting. Using Monte-Carlo techniques, we develop efficient algorithms to compute an approximate polynomial expression for reliability, as a function of the link failure probability. The construction of the polynomial eliminates the need to resample when the cross-layer reliability under different link failure probabilities is assessed. Furthermore, the polynomial expression provides important insight into the connection between the link failure probability, the cross-layer reliability and the structure of a layered network. We show that in general the optimal embedding depends on the link failure probability, and characterize the properties of embeddings that maximize the reliability under different failure probability regimes. Based on these results, we propose new iterative approaches to improve the reliability of the layered networks. We demonstrate via extensive simulations that these new approaches result in embeddings with significantly higher reliability than existing algorithms.

    by Kayi Lee, Ph.D.
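    For intuition on the reliability polynomial, here is a toy single-layer version small enough to compute exactly: count, for each number k of failed links, how many failure sets leave the graph connected, and assemble R(p) = Σ_k c_k p^k (1 − p)^(m−k). The layered (logical-over-physical) embedding studied in the thesis is not modeled; the graph is an arbitrary small example.

```python
# Exact reliability polynomial of a small graph by enumerating failure sets.
from itertools import combinations

def connected(n, edges):
    """Depth-first connectivity check on vertices 0..n-1."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def reliability_coefficients(n, edges):
    """c[k] = number of k-subsets of links whose failure keeps G connected."""
    m = len(edges)
    c = [0] * (m + 1)
    for k in range(m + 1):
        for failed in combinations(range(m), k):
            surviving = [e for i, e in enumerate(edges) if i not in set(failed)]
            c[k] += connected(n, surviving)
    return c

def reliability(c, m, p):
    """Evaluate R(p) = sum_k c_k p^k (1-p)^(m-k)."""
    return sum(ck * p**k * (1 - p)**(m - k) for k, ck in enumerate(c))

# A 4-cycle with a chord: survives the failure of any single link.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
c = reliability_coefficients(4, edges)
print(c, [round(reliability(c, len(edges), p), 4) for p in (0.05, 0.1, 0.3)])
```

    Because the coefficients are computed once, R(p) can be re-evaluated for any link failure probability without resampling, which is the practical appeal of the polynomial form noted in the abstract.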