
    On Topological Properties of Wireless Sensor Networks under the q-Composite Key Predistribution Scheme with On/Off Channels

    The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1, with unreliable communication links modeled as independent on/off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on/off channel model. Our results apply to general q and are stronger than the node-degree results reported in prior work, even for the case q = 1. Specifically, we show that the number of nodes with a certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings. Comment: Best Student Paper Finalist at the IEEE International Symposium on Information Theory (ISIT) 201
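
    As a rough illustration of the setting (not the paper's analysis), the Monte Carlo sketch below simulates the q-composite scheme with independent on/off channels and estimates the distribution of the minimum node degree; all parameter values (pool size, key ring size, q, channel on-probability) are arbitrary assumptions.

```python
# Monte Carlo sketch of the q-composite key predistribution scheme with
# independent on/off channels; parameter values are illustrative assumptions.
import random
from collections import Counter

def simulate_min_degree(n=100, pool_size=1000, ring_size=40, q=2, p_on=0.6, trials=200):
    """Estimate the distribution of the minimum node degree of the WSN graph."""
    pool = range(pool_size)
    min_degrees = Counter()
    for _ in range(trials):
        # Each sensor independently draws a random key ring from the pool.
        rings = [frozenset(random.sample(pool, ring_size)) for _ in range(n)]
        degree = [0] * n
        for i in range(n):
            for j in range(i + 1, n):
                # A secure link exists iff the two nodes share at least q keys
                # and the independent channel between them is "on".
                if len(rings[i] & rings[j]) >= q and random.random() < p_on:
                    degree[i] += 1
                    degree[j] += 1
        min_degrees[min(degree)] += 1
    return {d: c / trials for d, c in sorted(min_degrees.items())}

if __name__ == "__main__":
    print(simulate_min_degree())
```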

    A New Framework for Network Disruption

    Traditional network disruption approaches focus on disconnecting or lengthening paths in the network. We present a new framework for network disruption that attempts to reroute flow through critical vertices via vertex deletion, under the assumption that this will render those vertices vulnerable to future attacks. We define the load on a critical vertex to be the number of paths in the network that must flow through the vertex. We present graph-theoretic and computational techniques to maximize this load, first by removing a single vertex from the network, and second by removing a subset of vertices. Comment: Submitted for peer review on September 13, 201
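
    The single-vertex-removal variant can be illustrated with a brute-force sketch. The "load" measure used below (the number of node pairs whose every connecting path must pass through the critical vertex) and the networkx-based search are assumptions made for illustration, not the authors' algorithm.

```python
# Brute-force sketch: delete one vertex so as to maximize the load on a chosen
# critical vertex, where load is taken here as the number of node pairs that
# are separated whenever the critical vertex is removed.
import itertools
import networkx as nx

def load(G, v):
    """Count pairs {s, t} whose every s-t path must flow through v."""
    H = G.copy()
    H.remove_node(v)
    count = 0
    for s, t in itertools.combinations(H.nodes, 2):
        if nx.has_path(G, s, t) and not nx.has_path(H, s, t):
            count += 1
    return count

def best_single_deletion(G, critical):
    """Try every single-vertex deletion and keep the one that maximizes
    the load on the critical vertex."""
    best_vertex, best_load = None, load(G, critical)
    for u in list(G.nodes):
        if u == critical:
            continue
        H = G.copy()
        H.remove_node(u)
        new_load = load(H, critical)
        if new_load > best_load:
            best_vertex, best_load = u, new_load
    return best_vertex, best_load

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(best_single_deletion(G, critical=0))
```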

    Using The Software Adapter To Connect Legacy Simulation Models To The RTI

    The establishment of a network of persistent shared simulations depends on the presence of a robust standard for communicating state information between those simulations. The High Level Architecture (HLA) can serve as the basis for such a standard. While the HLA is an architecture, not software, Run Time Infrastructure (RTI) software is required to support the operation of a federation execution. Integrating the RTI with existing simulation models is complex and requires considerable expertise. This thesis implements a simpler yet effective interaction between a legacy simulation model and the RTI using a middleware tool known as the Distributed Manufacturing Simulation (DMS) adapter. The Shuttle Model, an Arena-based discrete-event simulation model for shuttle operations, is connected to the RTI using the DMS adapter. The adapter provides a set of functions that are to be incorporated within the Shuttle Model, in a procedural manner, in order to connect to the RTI. This thesis presents the procedure by which the Shuttle Model connects to the RTI and communicates with the Scrub Model to obtain approval for its shuttle's launch.
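
    Since the adapter exposes its functionality as a set of procedural calls that the legacy model invokes, the general shape of such an integration can be sketched as below. Every class and method name here (SimAdapter, connect, publish_state, request_approval) is hypothetical and does not reflect the actual DMS adapter or RTI API.

```python
# Purely illustrative sketch of a procedural adapter that hides RTI calls
# from a legacy simulation model; all names are hypothetical placeholders.
class SimAdapter:
    """Hypothetical middleware wrapper between a legacy model and the RTI."""

    def __init__(self, federation, federate_name):
        self.federation = federation
        self.federate_name = federate_name
        self.connected = False

    def connect(self):
        # A real federate would create/join the federation execution here.
        self.connected = True

    def publish_state(self, attributes):
        # Forward the legacy model's state (e.g. shuttle status) to the federation.
        assert self.connected, "connect() must be called first"
        print(f"[{self.federate_name}] publishing {attributes}")

    def request_approval(self, target_federate, request):
        # Send an interaction, e.g. a launch-approval request to the Scrub Model.
        assert self.connected
        print(f"[{self.federate_name}] -> {target_federate}: {request}")
        return True  # stub; a real reply would arrive via a callback

    def disconnect(self):
        self.connected = False

# Typical procedural use from inside the legacy Shuttle Model:
adapter = SimAdapter("SpacePortFederation", "ShuttleModel")
adapter.connect()
adapter.publish_state({"shuttle": "on_pad"})
if adapter.request_approval("ScrubModel", "launch?"):
    adapter.publish_state({"shuttle": "launched"})
adapter.disconnect()
```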

    Analyzing The Community Structure Of Web-like Networks: Models And Algorithms

    This dissertation investigates the community structure of web-like networks (i.e., large, random, real-life networks such as the World Wide Web and the Internet). Recently, it has been shown that many such networks have a locally dense and globally sparse structure, with certain small, dense subgraphs occurring much more frequently than they do in classical Erdős-Rényi random graphs. This peculiarity--commonly referred to as community structure--has been observed in seemingly unrelated networks such as the Web, email networks, citation networks, biological networks, etc. The pervasiveness of this phenomenon has led many researchers to believe that such cohesive groups of nodes might represent meaningful entities. For example, in the Web such tightly-knit groups of nodes might represent pages with a common topic, geographical location, etc., while in neural networks they might represent evolved computational units. The notion of community has emerged in an effort to formalize the empirical observation of the locally dense, globally sparse structure of web-like networks. In the broadest sense, a community in a web-like network is defined as a group of nodes that induces a dense subgraph which is sparsely linked with the rest of the network. Due to a wide array of envisioned applications, ranging from crawlers and search engines to network security and network compression, there has recently been widespread interest in finding efficient community-mining algorithms. In this dissertation, the community structure of web-like networks is investigated by a combination of analytical and computational techniques.

    First, we consider the problem of modeling web-like networks. In recent years, many new random graph models have been proposed to account for recently discovered properties of web-like networks that distinguish them from classical random graphs. The vast majority of these models take into account only the addition of new nodes and edges. Yet, several empirical observations indicate that deletion of nodes and edges occurs frequently in web-like networks. Inspired by such observations, we propose and analyze two dynamic random graph models that combine node and edge addition with a uniform and a preferential deletion of nodes, respectively. In both cases, we find that the random graphs generated by these models follow power-law degree distributions, in agreement with the degree distribution of many web-like networks.

    Second, we analyze the expected density of certain small subgraphs--such as defensive alliances on three and four nodes--in various random graph models. Our findings show that while in the binomial random graph the expected density of such subgraphs is very close to zero, in some dynamic random graph models it is much larger. These findings are consistent with our results obtained by counting the communities in some Web crawls.

    Next, we investigate the computational complexity of the community-mining problem under various definitions of community. Assuming the definition of a community as a global defensive alliance or a global offensive alliance, we prove--using transformations from the dominating set problem--that finding optimal communities is an NP-complete problem. These and other similar complexity results, coupled with the fact that many web-like networks are huge, indicate that it is unlikely that fast, exact sequential algorithms for mining communities can be found.
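
    The alliance-based community definitions used in these complexity results can be checked directly. The sketch below assumes the standard textbook formulation (every member of S has, counting itself, at least as many neighbors inside S as outside, and a global alliance additionally dominates the graph), which may differ in detail from the dissertation's exact definitions.

```python
# Minimal checks for defensive and global defensive alliances,
# assuming the standard formulation of these notions.
import networkx as nx

def is_defensive_alliance(G, S):
    S = set(S)
    for v in S:
        inside = sum(1 for u in G.neighbors(v) if u in S)
        outside = G.degree(v) - inside
        if inside + 1 < outside:  # v itself counts as a defender
            return False
    return True

def is_global_defensive_alliance(G, S):
    S = set(S)
    dominates = all(v in S or any(u in S for u in G.neighbors(v)) for v in G)
    return dominates and is_defensive_alliance(G, S)

if __name__ == "__main__":
    G = nx.karate_club_graph()
    S = {4, 5, 6, 10, 16}  # an illustrative candidate community
    print(is_defensive_alliance(G, S), is_global_defensive_alliance(G, S))
```
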
    To handle this difficulty, we adopt an algorithmic definition of community and a simpler version of the community-mining problem, namely: find the largest community to which a given set of seed nodes belongs. We propose several greedy algorithms for this problem. The first proposed algorithm starts out with a set of seed nodes--the initial community--and then repeatedly selects nodes from the community's neighborhood and pulls them into the community. In each step, the algorithm uses the clustering coefficient--a parameter that measures the fraction of the neighbors of a node that are neighbors themselves--to decide which nodes from the neighborhood should be pulled into the community. This algorithm has time complexity of order O(n·Δ²), where n denotes the number of nodes visited by the algorithm and Δ is the maximum degree encountered. Thus, assuming a power-law degree distribution, this algorithm is expected to run in near-linear time. The proposed algorithm achieved good accuracy when tested on some real and computer-generated networks: the fraction of community nodes classified correctly is generally above 80% and often above 90%. A second algorithm, based on a generalized clustering coefficient that takes into account not only the first neighborhood but also the second, the third, and so on, is also proposed. This algorithm achieves better accuracy than the first but also runs slower. Finally, a randomized version of the second algorithm, which improves the time complexity without significantly affecting the accuracy, is proposed. The main target application of the proposed algorithms is focused crawling--the selective search for web pages that are relevant to a pre-defined topic.
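
    A simplified rendition of the first seed-expansion algorithm might look like the sketch below; the acceptance rule (a fixed clustering-coefficient threshold and a size cap) is an illustrative assumption rather than the dissertation's exact stopping criterion.

```python
# Greedy seed expansion guided by the clustering coefficient (simplified sketch).
import networkx as nx

def expand_community(G, seeds, threshold=0.3, max_size=50):
    community = set(seeds)
    while len(community) < max_size:
        # Nodes adjacent to the current community but not yet inside it.
        frontier = {u for v in community for u in G.neighbors(v)} - community
        if not frontier:
            break
        # Pull in the frontier node whose neighborhood is most tightly knit.
        cc = nx.clustering(G, frontier)
        best = max(frontier, key=cc.get)
        if cc[best] < threshold:
            break
        community.add(best)
    return community

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(sorted(expand_community(G, seeds={0, 1})))
```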

    Adversarial Deletion in a Scale Free Random Graph Process

    We study a dynamically evolving random graph which adds vertices and edges using preferential attachment and is “attacked by an adversary”. At time t, we add a new vertex x_t and m random edges incident with x_t, where m is a constant. The neighbors of x_t are chosen with probability proportional to degree. After adding the edges, the adversary is allowed to delete vertices. The only constraint on the adversarial deletions is that the total number of vertices deleted by time n must be no larger than δn, where δ is a constant. We show that if δ is sufficiently small and m is sufficiently large, then with high probability at time n the generated graph has a component of size at least n/30.
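
    The process is easy to experiment with numerically. The sketch below instantiates one particular adversary (occasionally deleting the current highest-degree vertex) purely for illustration, whereas the result above holds for any adversary deleting at most δn vertices; all parameter values are assumptions.

```python
# Preferential attachment with an illustrative adversary that sometimes
# deletes the current highest-degree vertex; reports the largest component.
import random
import networkx as nx

def adversarial_pa(n=3000, m=5, delta=0.02, seed=1):
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)                              # small seed graph
    targets = [v for v, d in G.degree() for _ in range(d)]    # degree-weighted multiset
    for t in range(m + 1, n):
        # Add vertex x_t with m edges whose endpoints are degree-proportional.
        wanted = min(m, len(set(targets)))
        endpoints = set()
        while len(endpoints) < wanted:
            endpoints.add(rng.choice(targets))
        G.add_node(t)
        for u in endpoints:
            G.add_edge(t, u)
            targets.extend([t, u])
        # Adversary: with probability delta, delete the highest-degree vertex,
        # so roughly delta * n vertices are removed in total.
        if rng.random() < delta and G.number_of_nodes() > m + 1:
            victim = max(G.degree(), key=lambda vd: vd[1])[0]
            G.remove_node(victim)
            targets = [v for v, d in G.degree() for _ in range(d)]
    giant = max(nx.connected_components(G), key=len)
    return len(giant), G.number_of_nodes()

if __name__ == "__main__":
    giant, total = adversarial_pa()
    print(f"largest component: {giant} of {total} vertices")
```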