
    Self-similarity of complex networks

    Complex networks have been studied extensively due to their relevance to many real systems as diverse as the World-Wide-Web (WWW), the Internet, energy landscapes, biological and social networks \cite{ab-review,mendes,vespignani,newman,amaral}. A large number of real networks are called ``scale-free'' because they show a power-law distribution of the number of links per node \cite{ab-review,barabasi1999,faloutsos}. However, it is widely believed that complex networks are not {\it length-scale} invariant or self-similar. This conclusion originates from the ``small-world'' property of these networks, which implies that the number of nodes increases exponentially with the ``diameter'' of the network \cite{erdos,bollobas,milgram,watts}, rather than following the power-law relation expected for a self-similar structure. Nevertheless, here we present a novel approach to the analysis of such networks, revealing that their structure is indeed self-similar. This result is achieved by applying a renormalization procedure which coarse-grains the system into boxes containing nodes within a given ``size''. Concurrently, we identify a power-law relation between the number of boxes needed to cover the network and the size of the box, defining a finite self-similar exponent. These fundamental properties, which are shown for the WWW, social, cellular and protein-protein interaction networks, help to explain the emergence of the scale-free property in complex networks. They suggest a common self-organization dynamics of diverse networks at different scales into a critical state, and in turn bring together previously unrelated fields: the statistical physics of complex networks and the renormalization group, fractals and critical phenomena. Comment: 28 pages, 12 figures, more information at http://www.jamlab.or
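    The box-counting idea in this abstract can be illustrated with a short sketch. The snippet below uses a simple cluster-growing approximation (grow balls of radius r around uncovered seed nodes and count them), assuming networkx; the paper's exact box-covering algorithm and parameters may differ, so this is only a minimal illustration of how a finite self-similar exponent would be probed.

```python
# Illustrative cluster-growing approximation to box covering (not the paper's
# exact algorithm): grow balls of radius r around uncovered seeds and count them.
import networkx as nx

def cluster_growing_count(G, r):
    """Cover G with balls of radius r and return the number of balls used."""
    uncovered = set(G.nodes())
    n_boxes = 0
    while uncovered:
        seed = next(iter(uncovered))
        ball = nx.single_source_shortest_path_length(G, seed, cutoff=r)
        uncovered -= set(ball)
        n_boxes += 1
    return n_boxes

# A roughly straight line on a log-log plot of the box count N_B against the
# box size suggests a finite self-similar (fractal) exponent d_B.
G = nx.barabasi_albert_graph(500, 3)
for r in (1, 2, 3, 4):
    print(r, cluster_growing_count(G, r))
```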

    Using The Software Adapter To Connect Legacy Simulation Models To The RTI

    The establishment of a network of persistent shared simulations depends on the presence of a robust standard for communicating state information between those simulations. The High Level Architecture (HLA) can serve as the basis for such a standard. While the HLA is an architecture, not software, Run Time Infrastructure (RTI) software is required to support the operations of a federation execution. Integrating the RTI with existing simulation models is complex and requires considerable expertise. This thesis implements a simpler yet effective interaction between a legacy simulation model and the RTI using a middleware tool known as the Distributed Manufacturing Simulation (DMS) adapter. The Shuttle Model, an Arena-based discrete-event simulation model of shuttle operations, is connected to the RTI using the DMS adapter. The adapter provides a set of functions that are incorporated within the Shuttle Model, in a procedural manner, in order to connect to the RTI. This thesis presents the procedure by which the Shuttle Model connects to the RTI and communicates with the Scrub Model to obtain approval for its shuttle's launch
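    The abstract does not show the DMS adapter's actual interface, so the sketch below only illustrates the general adapter pattern it describes: a legacy model delegating federation calls to a middleware layer. All class and method names here are hypothetical and do not reflect the real DMS or RTI APIs.

```python
# Hypothetical adapter-pattern sketch; names are illustrative only and are not
# the actual DMS adapter or RTI API.
class RtiAdapter:
    """Middleware layer hiding RTI federation calls from a legacy model."""

    def __init__(self, federation, federate_name):
        self.federation = federation
        self.federate_name = federate_name

    def connect(self):
        # In a real adapter this would create/join the HLA federation execution.
        print(f"{self.federate_name} joined federation {self.federation}")

    def publish_state(self, attributes):
        # Forward the legacy model's state variables as attribute updates.
        print(f"update: {attributes}")

    def request_approval(self, event):
        # Exchange an interaction with another federate (e.g. a scrub/approval model).
        print(f"interaction sent: {event}")
        return True  # placeholder for the approval received back

# Usage from within the legacy model's event logic:
adapter = RtiAdapter("LaunchFederation", "ShuttleModel")
adapter.connect()
if adapter.request_approval({"event": "launch", "shuttle": "demo"}):
    adapter.publish_state({"status": "launched"})
```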

    Analyzing The Community Structure Of Web-like Networks: Models And Algorithms

    This dissertation investigates the community structure of web-like networks (i.e., large, random, real-life networks such as the World Wide Web and the Internet). Recently, it has been shown that many such networks have a locally dense and globally sparse structure, with certain small, dense subgraphs occurring much more frequently than they do in the classical Erdős–Rényi random graphs. This peculiarity, commonly referred to as community structure, has been observed in seemingly unrelated networks such as the Web, email networks, citation networks, biological networks, etc. The pervasiveness of this phenomenon has led many researchers to believe that such cohesive groups of nodes might represent meaningful entities. For example, in the Web such tightly-knit groups of nodes might represent pages with a common topic or geographical location, while in neural networks they might represent evolved computational units. The notion of community has emerged in an effort to formalize the empirical observation of the locally dense, globally sparse structure of web-like networks. In the broadest sense, a community in a web-like network is defined as a group of nodes that induces a dense subgraph which is sparsely linked with the rest of the network. Due to a wide array of envisioned applications, ranging from crawlers and search engines to network security and network compression, there has recently been widespread interest in finding efficient community-mining algorithms. In this dissertation, the community structure of web-like networks is investigated by a combination of analytical and computational techniques. First, we consider the problem of modeling web-like networks. In recent years, many new random graph models have been proposed to account for recently discovered properties of web-like networks that distinguish them from the classical random graphs. The vast majority of these random graph models take into account only the addition of new nodes and edges. Yet, several empirical observations indicate that deletion of nodes and edges occurs frequently in web-like networks. Inspired by such observations, we propose and analyze two dynamic random graph models that combine node and edge addition with a uniform and a preferential deletion of nodes, respectively. In both cases, we find that the random graphs generated by such models follow power-law degree distributions, in agreement with the degree distribution of many web-like networks. Second, we analyze the expected density of certain small subgraphs, such as defensive alliances on three and four nodes, in various random graph models. Our findings show that while in the binomial random graph the expected density of such subgraphs is very close to zero, in some dynamic random graph models it is much larger. These findings agree with our results obtained by counting communities in several Web crawls. Next, we investigate the computational complexity of the community-mining problem under various definitions of community. Defining a community as a global defensive alliance or a global offensive alliance, we prove, using reductions from the dominating set problem, that finding optimal communities is NP-complete. These and similar complexity results, coupled with the fact that many web-like networks are huge, indicate that fast, exact sequential algorithms for mining communities are unlikely to be found.
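    A minimal simulation sketch of the growth-with-deletion mechanism mentioned above (new nodes attach preferentially; existing nodes are occasionally deleted uniformly at random) is given below. The parameters, deletion rule, and structure are assumptions for illustration, not the dissertation's exact model definitions.

```python
# Illustrative growth model: at each step a new node attaches preferentially to
# m existing nodes, and with probability p_del a uniformly chosen existing node
# is deleted. Generic sketch only, not the dissertation's exact models.
import random
import networkx as nx

def grow_with_deletion(steps=2000, m=2, p_del=0.1, seed=0):
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)
    for _ in range(steps):
        new = max(G.nodes()) + 1
        # Preferential attachment: sample targets proportionally to degree.
        nodes, degs = zip(*G.degree())
        targets = set()
        while len(targets) < min(m, len(nodes)):
            targets.add(rng.choices(nodes, weights=degs, k=1)[0])
        G.add_node(new)
        G.add_edges_from((new, v) for v in targets)
        # Uniform deletion of an existing (non-new) node with probability p_del.
        if rng.random() < p_del and G.number_of_nodes() > m + 1:
            G.remove_node(rng.choice([v for v in G.nodes() if v != new]))
    return G

G = grow_with_deletion()
# Heavy-tailed degree sequences tend to survive moderate deletion rates.
print(sorted((d for _, d in G.degree()), reverse=True)[:10])
```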
To handle this difficulty we adopt an algorithmic definition of community and a simpler version of the community-mining problem, namely: find the largest community to which a given set of seed nodes belongs. We propose several greedy algorithms for this problem. The first proposed algorithm starts out with a set of seed nodes (the initial community) and then repeatedly selects nodes from the community's neighborhood and pulls them into the community. In each step, the algorithm uses the clustering coefficient, a parameter that measures the fraction of a node's neighbors that are themselves neighbors, to decide which nodes from the neighborhood should be pulled into the community. This algorithm has time complexity of order $O(n \cdot \Delta)$, where $n$ denotes the number of nodes visited by the algorithm and $\Delta$ is the maximum degree encountered. Thus, assuming a power-law degree distribution, this algorithm is expected to run in near-linear time. The proposed algorithm achieved good accuracy when tested on real and computer-generated networks: the fraction of community nodes classified correctly is generally above 80% and often above 90%. A second algorithm, based on a generalized clustering coefficient that takes into account not only the first neighborhood but also the second, the third, etc., is also proposed. This algorithm achieves better accuracy than the first one but also runs slower. Finally, a randomized version of the second algorithm, which improves the time complexity without significantly affecting the accuracy, is proposed. The main target application of the proposed algorithms is focused crawling, the selective search for web pages that are relevant to a pre-defined topic
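    A minimal sketch of the seed-expansion idea follows: repeatedly pull into the community the frontier node with the highest clustering coefficient. The fixed-size stopping rule and the scoring are simplified assumptions; the dissertation's algorithms use additional criteria.

```python
# Greedy seed expansion guided by the clustering coefficient, as a simplified
# illustration of the algorithm family described above (assumed stopping rule).
import networkx as nx

def expand_community(G, seeds, max_size=50):
    community = set(seeds)
    while len(community) < max_size:
        # Candidate nodes: the neighborhood of the current community.
        frontier = {v for u in community for v in G[u]} - community
        if not frontier:
            break
        # Pull in the frontier node with the highest clustering coefficient.
        best = max(frontier, key=lambda v: nx.clustering(G, v))
        community.add(best)
    return community

# Example on a graph with planted dense groups.
G = nx.connected_caveman_graph(5, 8)   # 5 cliques of 8 nodes, weakly linked
print(sorted(expand_community(G, seeds={0}, max_size=8)))
```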

    Determining the dynamics of collaboration in EU Framework Programmes under a network perspective

    Collaborative networks have gained attention in the economics of innovation in recent years. One of the main interests concerns the temporal analysis of such networks, in both a scientific and a European policy context. At the European level, the objective is indeed to promote strong and durable partnerships among research institutions and with industry, going beyond the usual project-based cooperation. The purpose of this study is to investigate these long-lasting collaborative relationships between the organizations that received funds from the first eight European Framework Programmes (EU FPs). EU FPs are multi-annual programmes providing funds mainly to EU member states, but also to associated countries, in order to promote long-term investments in several areas. Considering participation in European projects funded by all of the first eight EU FPs allows us to analyze the dynamics of collaborations in the context of European research projects over a long time span. In more detail, we adopt a novel approach to model the dynamics of participation in EU FPs by means of Social Network Analysis (SNA) and statistical tools. The main objective is to estimate the probabilities of moving from one position to another, in terms of centrality measures, across different FPs, and to understand whether the position within subsequent collaborative research networks is affected by a certain path dependency. Our results confirm the existence of a path dependency, in the sense that participating in previous FPs provides a competitive advantage to organizations due to several network benefits, such as growing experience, competencies, and popularity. Phenomena of "preferential attachment" are also evident. Finally, we find that the estimated probability transition matrices are able to highlight relevant events that affected the European Union and its strategies in the field of research, namely the Treaty of Maastricht and the adoption of the European Research Area (ERA)
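    A rough sketch of the transition-matrix idea is shown below: compute a centrality measure for each organization in two consecutive FP networks, bin organizations into coarse position classes, and count moves between classes. The toy data, the degree-centrality choice, the bin thresholds, and the class labels are all assumptions, not the study's actual specification.

```python
# Illustrative estimate of a position-transition matrix across two consecutive
# framework-programme networks; data, binning and labels are assumptions.
import numpy as np
import networkx as nx

CLASSES = ["peripheral", "intermediate", "central"]

def positions(G, bins=(0.1, 0.4)):
    """Map each node's degree centrality to a coarse position class (assumed bins)."""
    cent = nx.degree_centrality(G)
    def cls(c):
        return "peripheral" if c < bins[0] else ("intermediate" if c < bins[1] else "central")
    return {v: cls(c) for v, c in cent.items()}

def transition_matrix(G_prev, G_next):
    """Row-normalised counts of moves between position classes across two FPs."""
    prev, nxt = positions(G_prev), positions(G_next)
    counts = np.zeros((len(CLASSES), len(CLASSES)))
    for org in set(prev) & set(nxt):             # organizations active in both FPs
        counts[CLASSES.index(prev[org]), CLASSES.index(nxt[org])] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1                  # avoid division by zero for empty classes
    return counts / row_sums

# Toy example: two small random "collaboration" networks on shared node labels.
G1 = nx.gnp_random_graph(30, 0.10, seed=1)
G2 = nx.gnp_random_graph(30, 0.15, seed=2)
print(transition_matrix(G1, G2))
```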

    The role of grammar in transition-probabilities of subsequent words in English text

    Sentence formation is a highly structured, history-dependent, and sample-space reducing (SSR) process. While the first word in a sentence can be chosen from the entire vocabulary, the freedom of choosing subsequent words typically becomes more and more constrained by grammar and context as the sentence progresses. This sample-space reducing property offers a natural explanation of Zipf's law in word frequencies; however, it fails to capture the structure of the word-to-word transition probability matrices of English text. Here we adopt the view that grammatical constraints (such as subject–predicate–object) locally re-order the words in sentences that are sampled by the word generation process. We demonstrate that superimposing grammatical structure, as a local word re-ordering (permutation) process, on a sample-space reducing word generation process is sufficient to explain both word frequencies and word-to-word transition probabilities. We compare the performance of the grammatically ordered SSR model in reproducing several test statistics of real texts with other text generation models, such as the Bernoulli model, the Simon model, and the random typewriting model
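    A minimal simulation of an SSR word-generation process, with a toy local permutation standing in for grammar, is sketched below. The block-reversal permutation, the vocabulary size, and the sentence restart rule are illustrative assumptions, not the paper's model.

```python
# Minimal sample-space-reducing (SSR) sketch: each word rank is drawn uniformly
# from ranks below the previous one, restarting when rank 1 is reached; an
# assumed block-reversal permutation stands in for grammatical re-ordering.
import random
from collections import Counter

def ssr_sentence(vocab_size, rng):
    """Generate one sentence of word ranks by sample-space reduction."""
    current = rng.randint(1, vocab_size)       # first word: full vocabulary
    sentence = [current]
    while current > 1:
        current = rng.randint(1, current - 1)  # sample space shrinks each step
        sentence.append(current)
    return sentence

def permute_locally(sentence, block=3):
    """Toy stand-in for grammar: reverse each block of `block` consecutive words."""
    return [w for i in range(0, len(sentence), block)
            for w in reversed(sentence[i:i + block])]

rng = random.Random(42)
counts = Counter()
for _ in range(20000):
    counts.update(permute_locally(ssr_sentence(1000, rng)))

# The rank-frequency output stays close to Zipf's law, f(r) ~ 1/r.
for rank, (word, freq) in enumerate(counts.most_common(5), start=1):
    print(rank, word, freq)
```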

    visone - Software for the Analysis and Visualization of Social Networks

    We present the software tool visone, which combines graph-theoretic methods for the analysis of social networks with tailored means of visualization. Our main contribution is the design of novel graph-layout algorithms which accurately reflect computed analysis results in well-arranged drawings of the networks under consideration. In addition, we give a detailed description of the design of the software tool and the provided analysis methods
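    The abstract does not detail the layouts themselves; the sketch below only illustrates one common analysis-driven layout idea, placing nodes at radii inversely related to their centrality, with assumed details. It is not visone's actual algorithm.

```python
# Illustrative centrality-driven radial layout (assumed details, not visone's
# actual layout algorithms): more central nodes are placed closer to the centre.
import math
import networkx as nx

def radial_layout(G):
    centrality = nx.closeness_centrality(G)
    c_max = max(centrality.values())
    pos = {}
    for i, (v, c) in enumerate(sorted(centrality.items(), key=lambda x: -x[1])):
        radius = 1.0 - c / c_max          # radius reflects the analysis result
        angle = 2 * math.pi * i / len(G)  # spread nodes around the circle
        pos[v] = (radius * math.cos(angle), radius * math.sin(angle))
    return pos

G = nx.karate_club_graph()
print(radial_layout(G)[0])  # coordinates of node 0
```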