
    Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation

    Sensor networks potentially feature large numbers of nodes that can sense their environment over time, communicate with each other over a wireless network, and process information. They differ from data networks in that the network as a whole may be designed for a specific application. We study the theoretical foundations of such large-scale sensor networks, addressing four fundamental issues: connectivity, capacity, clocks, and function computation. To begin with, a sensor network must be connected so that information can indeed be exchanged between nodes. The connectivity graph of an ad hoc network is modeled as a random graph, and we determine the critical range for asymptotic connectivity as well as the critical number of neighbors that a node needs to connect to. Next, given connectivity, we address how much data can be transported over the sensor network. We present fundamental bounds on capacity under several models, as well as architectural implications for how wireless communication should be organized. Temporal information is important both for the applications of sensor networks and for their operation. We present fundamental bounds on the synchronizability of clocks in networks, and we present and analyze algorithms for clock synchronization. Finally, we turn to the issue of gathering the relevant information that sensor networks are designed to collect. One needs to study optimal strategies for in-network aggregation of data in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. We address how such computation can be performed efficiently in a sensor network, and we give algorithms for doing so for some classes of functions. Comment: 10 pages, 3 figures, Submitted to the Proceedings of the IEE
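
    To make the critical-range statement concrete, here is a minimal Python sketch (an illustration, not taken from the paper) that estimates the probability that a random geometric graph on n uniform points in the unit square is connected, as the radius is scaled around the commonly quoted critical value sqrt(log n / (pi n)). All function names and parameter choices are assumptions for illustration only.

```python
# Hedged sketch: Monte-Carlo estimate of connectivity probability for a random
# geometric graph, around the critical range r_crit = sqrt(log n / (pi * n))
# that the abstract's connectivity result refers to.  Names are illustrative.
import math
import random

def is_connected_rgg(n, r, rng):
    """Drop n points uniformly in the unit square, join pairs within distance r,
    and report whether the resulting graph is connected (via union-find)."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    r2 = r * r
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= r2:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

def connectivity_probability(n=200, scale=1.0, trials=50, seed=0):
    """Estimate P(connected) when the radius is `scale` times the critical value."""
    rng = random.Random(seed)
    r_crit = math.sqrt(math.log(n) / (math.pi * n))
    hits = sum(is_connected_rgg(n, scale * r_crit, rng) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for scale in (0.8, 1.0, 1.2, 1.5):
        p = connectivity_probability(scale=scale)
        print(f"r = {scale:.1f} * r_crit -> P(connected) ~ {p:.2f}")
```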

    Strategic Network Formation with Attack and Immunization

    Strategic network formation arises where agents receive benefit from connections to other agents, but also incur costs for forming links. We consider a new network formation game that incorporates an adversarial attack, as well as immunization against attack. An agent's benefit is the expected size of her connected component post-attack, and agents may also choose to immunize themselves from attack at some additional cost. Our framework is a stylized model of settings where reachability rather than centrality is the primary concern and vertices vulnerable to attacks may reduce risk via costly measures. In the reachability benefit model without attack or immunization, the set of equilibria is the empty graph and any tree. The introduction of attack and immunization changes the game dramatically; new equilibrium topologies emerge, some more sparse and some more dense than trees. We show that, under a mild assumption on the adversary, every equilibrium network with $n$ agents contains at most $2n-4$ edges for $n \geq 4$. So despite permitting topologies denser than trees, the amount of overbuilding is limited. We also show that attack and immunization do not significantly erode social welfare: every non-trivial equilibrium with respect to several adversaries has welfare at least that of any equilibrium in the attack-free model. We complement our theory with simulations demonstrating fast convergence of a new bounded-rationality dynamic which generalizes linkstable best response but is considerably more powerful in our game. The simulations further elucidate the wide variety of asymmetric equilibria and demonstrate topological consequences of the dynamics, e.g. heavy-tailed degree distributions. Finally, we report on a behavioral experiment on our game with over 100 participants, where despite the complexity of the game, the resulting network was surprisingly close to equilibrium. Comment: The short version of this paper appears in the proceedings of WINE-1
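
    As an illustration of the benefit/cost trade-off described above, the following Python sketch computes one agent's expected utility under an assumed adversary that attacks a uniformly random non-immunized vertex and destroys that vertex's entire vulnerable region. The adversary model, the cost parameters edge_cost and immune_cost, and all function names are assumptions for illustration, not necessarily the paper's exact model.

```python
# Hedged sketch of an expected-utility calculation in a reachability-based
# formation game with attack and immunization.  Assumed adversary: a uniformly
# random vulnerable (non-immunized) target whose vulnerable region is destroyed.
from collections import deque

def component(adj, start, removed):
    """BFS component of `start` in the graph minus the `removed` vertices."""
    if start in removed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def expected_utility(adj, bought_edges, immunized, agent,
                     edge_cost=1.0, immune_cost=2.0):
    """Expected post-attack component size of `agent`, minus link-purchase and
    immunization costs, under a uniformly random attack on a vulnerable vertex."""
    vulnerable = [v for v in adj if v not in immunized]
    if not vulnerable:
        benefit = float(len(component(adj, agent, set())))
    else:
        # Restrict the graph to vulnerable vertices to find vulnerable regions.
        vuln_adj = {v: [w for w in adj[v] if w not in immunized] for v in vulnerable}
        total = 0.0
        for target in vulnerable:
            region = component(vuln_adj, target, set())   # wiped out by the attack
            total += len(component(adj, agent, region))
        benefit = total / len(vulnerable)
    cost = edge_cost * len(bought_edges.get(agent, ())) \
         + (immune_cost if agent in immunized else 0.0)
    return benefit - cost

if __name__ == "__main__":
    # A 4-vertex path 0-1-2-3 where vertex 1 is immunized; vertex 0 bought its only edge.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(expected_utility(adj, bought_edges={0: [(0, 1)]}, immunized={1}, agent=0))
```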

    Random graphs from a block-stable class

    A class of graphs is called block-stable when a graph is in the class if and only if each of its blocks is. We show that, as for trees, for most $n$-vertex graphs in such a class, each vertex is in at most $(1+o(1)) \log n / \log\log n$ blocks, and each path passes through at most $5 (n \log n)^{1/2}$ blocks. These results extend to `weakly block-stable' classes of graphs.
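
    The per-vertex block count bounded above can be measured directly from the block (biconnected-component) decomposition. The sketch below uses networkx.biconnected_components; the sparse G(n, m) random graph is purely an example input and is not the specific random model studied in the paper.

```python
# Hedged sketch: count how many blocks (biconnected components, with bridges
# counted as two-vertex blocks) each vertex lies in -- the quantity bounded by
# (1 + o(1)) log n / log log n in the abstract.  Example graph is illustrative.
import math
import networkx as nx

def blocks_per_vertex(G):
    """Map each vertex of G to the number of blocks of G that contain it."""
    counts = {v: 0 for v in G}
    for block in nx.biconnected_components(G):
        for v in block:
            counts[v] += 1
    return counts

if __name__ == "__main__":
    n = 10_000
    G = nx.gnm_random_graph(n, n, seed=0)   # sparse example graph, not the paper's model
    counts = blocks_per_vertex(G)
    print(f"max blocks through a vertex: {max(counts.values())}")
    print(f"log n / log log n scale: {math.log(n) / math.log(math.log(n)):.1f}")
```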

    Core percolation in random graphs: a critical phenomena analysis

    We study both numerically and analytically what happens to a random graph of average connectivity "alpha" when its leaves and their neighbors are removed iteratively, up to the point when no leaf remains. The remnant is made of isolated vertices plus an induced subgraph we call the "core". In the thermodynamic limit of an infinite random graph, we compute analytically the dynamics of leaf removal, the number of isolated vertices, and the number of vertices and edges in the core. We show that a second-order phase transition occurs at "alpha = e = 2.718...": below the transition the core is small, but above the transition it occupies a finite fraction of the initial graph. The finite-size scaling properties are then studied numerically in detail in the critical region, and we propose a consistent set of critical exponents, which does not coincide with the set of standard percolation exponents for this model. We also clarify several related issues in combinatorial optimization and in the spectral properties of the adjacency matrix of random graphs. Key words: random graphs, leaf removal, core percolation, critical exponents, combinatorial optimization, finite size scaling, Monte-Carlo. Comment: 15 pages, 9 figures (color eps) [v2: published text with a new title and addition of an appendix, a ref. and a fig.]
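
    The leaf-removal dynamics described in the abstract are straightforward to simulate. The following Python sketch (an illustration, not the authors' code) repeatedly deletes a leaf together with its neighbor on an Erdos-Renyi graph G(n, alpha/n) and reports the fraction of vertices left in the core, which should become noticeable only for alpha above e = 2.718...

```python
# Hedged sketch of the leaf-removal dynamics: repeatedly delete a degree-1
# vertex together with its unique neighbour; what survives with degree >= 2
# is the core.  Run on G(n, alpha/n) to see the transition near alpha = e.
import math
import random

def random_graph(n, alpha, rng):
    """Adjacency sets of an Erdos-Renyi graph with average degree ~ alpha."""
    p = alpha / n
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def core_fraction(adj):
    """Apply leaf removal until no degree-1 vertex remains; return |core| / n."""
    alive = set(adj)
    leaves = {v for v in alive if len(adj[v]) == 1}
    while leaves:
        v = leaves.pop()
        if v not in alive or len(adj[v]) != 1:
            continue
        (u,) = adj[v]                      # the unique neighbour of the leaf
        for x in (v, u):                   # remove the leaf and its neighbour
            alive.discard(x)
            for w in adj[x]:
                adj[w].discard(x)
                if w in alive and len(adj[w]) == 1:
                    leaves.add(w)
            adj[x] = set()
    core = [v for v in alive if len(adj[v]) >= 2]
    return len(core) / len(adj)

if __name__ == "__main__":
    rng = random.Random(0)
    n = 2000
    for alpha in (2.0, math.e, 3.5):
        adj = random_graph(n, alpha, rng)
        print(f"alpha = {alpha:.3f}: core fraction ~ {core_fraction(adj):.3f}")
```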