
    Application of Group Testing for Analyzing Noisy Networks

    My dissertation focuses on developing scalable algorithms for analyzing large complex networks and evaluating how the results change as the network changes. Network analysis has become a ubiquitous and very effective tool in big data analysis, particularly for understanding the mechanisms of complex systems that arise in diverse disciplines such as cybersecurity [83], biology [15], sociology [5], and epidemiology [7]. However, data from real-world systems are inherently noisy because they are influenced by fluctuations in experiments, subjective interpretation of data, and limitations of computing resources. Therefore, the corresponding networks are also approximate. This research addresses the problem of efficiently obtaining accurate results from large noisy networks. My dissertation has four main components. The first component consists of developing efficient and scalable algorithms for centrality computations that produce reliable results on noisy networks. Two novel contributions I made in this area are the development of a group testing [16] based algorithm for identifying high-centrality vertices, which is significantly faster than current methods, and an algorithm for computing the betweenness centrality of a specific vertex. The second component consists of developing quantitative metrics to measure how different noise models affect the analysis results. We implemented a uniform perturbation model based on random addition/deletion of the edges of a network. To quantify the stability of a network, we investigated the effect that perturbations have on the top-k ranked vertices and on the local structural properties of the top-ranked vertices. The third component consists of developing efficient software for network analysis. I have been part of the development of a software package, ESSENS (Extensible, Scalable Software for Evolving NetworkS) [76], that effectively supports our algorithms on large networks. The fourth component is a literature review of the various noise models that researchers have applied to networks and the methods they have used to quantify the stability, sensitivity, robustness, and reliability of networks. Together, these four components lead to efficient, accurate, and highly scalable algorithms for analyzing noisy networks.
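    As an illustration of the uniform perturbation model and the top-k stability question described above, here is a minimal Python sketch. It is not the dissertation's code: the function names, the perturbation probabilities, and the use of networkx with its karate-club example graph are our own assumptions.

```python
import random
import networkx as nx

def perturb_uniform(G, p_add=0.05, p_del=0.05, seed=None):
    # Uniform noise model (assumed form): delete each existing edge with
    # probability p_del, then add roughly p_add * |E| edges between
    # randomly chosen non-adjacent vertex pairs.
    rng = random.Random(seed)
    H = G.copy()
    for u, v in list(H.edges()):
        if rng.random() < p_del:
            H.remove_edge(u, v)
    nodes = list(H.nodes())
    to_add = int(p_add * G.number_of_edges())
    while to_add > 0:
        u, v = rng.sample(nodes, 2)
        if not H.has_edge(u, v):
            H.add_edge(u, v)
            to_add -= 1
    return H

def top_k(G, k=10):
    # Top-k vertices ranked by betweenness centrality.
    bc = nx.betweenness_centrality(G)
    return set(sorted(bc, key=bc.get, reverse=True)[:k])

# Stability of the top-10 betweenness ranking under 5% edge noise,
# measured as the Jaccard overlap before and after perturbation.
G = nx.karate_club_graph()
a, b = top_k(G), top_k(perturb_uniform(G, seed=1))
print("top-10 overlap:", len(a & b) / len(a | b))
```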

    Learning Reputation in an Authorship Network

    The problem of searching for experts in a given academic field is important in both industry and academia. We study exactly this problem with respect to a database of authors and their publications. The idea is to use Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform topic modelling in order to find authors who have worked in a query field. We then construct a coauthorship graph and motivate the use of influence maximisation and a variety of graph centrality measures to obtain a ranked list of experts. The ranked lists are further improved using a Markov chain-based rank aggregation approach. The complete method is readily scalable to large datasets. To demonstrate the efficacy of the approach, we report on an extensive set of computational simulations using the Arnetminer dataset. An improvement in mean average precision is demonstrated over the baseline case of simply using the ordering of authors produced by the topic models.
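    A compressed sketch of such a pipeline, not the authors' implementation: the toy corpus, the product-based score that stands in for their Markov chain rank aggregation, and the choice of scikit-learn's LDA and networkx's PageRank are all illustrative assumptions.

```python
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: one "document" per author (their concatenated abstracts).
authors = ["alice", "bob", "carol"]
docs = [
    "graph centrality betweenness networks algorithms",
    "topic models latent dirichlet allocation text",
    "graph networks community detection centrality",
]
coauthorships = [("alice", "carol"), ("bob", "carol")]

# 1. Topic-model the corpus and score each author's relevance to a query topic.
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)        # rows: authors, cols: topic weights
query_topic = 0                          # hypothetical topic matched to the query
relevance = dict(zip(authors, doc_topics[:, query_topic]))

# 2. Centrality on the coauthorship graph (PageRank as one example measure).
G = nx.Graph(coauthorships)
centrality = nx.pagerank(G)

# 3. Combine into one ranked list (a simple product in place of the paper's
#    Markov chain rank aggregation, purely for brevity).
ranking = sorted(authors, key=lambda a: relevance[a] * centrality[a], reverse=True)
print(ranking)
```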

    Observer Placement for Source Localization: The Effect of Budgets and Transmission Variance

    When an epidemic spreads in a network, a key question is where its source was, i.e., which node started the epidemic. If we know the times at which various nodes were infected, we can attempt to use this information to identify the source. However, maintaining observer nodes that can report their infection time may be costly, and we may have a budget k on the number of observer nodes we can maintain. Moreover, some nodes are more informative than others due to their location in the network. Hence, a pertinent question arises: which nodes should we select as observers in order to maximize the probability that we can accurately identify the source? Inspired by the simple setting in which the node-to-node delays in the transmission of the epidemic are deterministic, we develop a principled approach for addressing the problem even when transmission delays are random. We show that the optimal observer placement differs depending on the variance of the transmission delays, and we propose approaches for both low- and high-variance settings. We validate our methods by comparing them against state-of-the-art observer placements and show that, in both settings, our approach identifies the source with higher accuracy. Comment: Accepted for presentation at the 54th Annual Allerton Conference on Communication, Control, and Computing.
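    The deterministic-delay intuition that inspires the paper can be made concrete in a few lines. Below is a minimal sketch (our own toy example and function names, not the paper's algorithm): with fixed unit delays, every observer o satisfies t_o = t_0 + d(s, o) for the true source s, so the candidate vertex whose offsets t_o - d(s, o) are most nearly constant is the best guess.

```python
import networkx as nx
from statistics import pvariance

def localize_source(G, observer_times):
    # For the true source, t_o - d(s, o) equals the (unknown) start time
    # t_0 for every observer, so its offsets have zero variance.
    best, best_score = None, float("inf")
    for s in G.nodes():
        d = nx.single_source_shortest_path_length(G, s)
        offsets = [t - d[o] for o, t in observer_times.items()]
        score = pvariance(offsets)
        if score < best_score:
            best, best_score = s, score
    return best

# Epidemic started at vertex 1 with unit node-to-node delays; we only
# observe infection times at the three observer vertices 2, 4, and 5.
G = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (0, 5)])
dist = nx.single_source_shortest_path_length(G, 1)
observers = {2: dist[2], 4: dist[4], 5: dist[5]}
print(localize_source(G, observers))  # recovers vertex 1
```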

    Recurrence-based time series analysis by means of complex network methods

    Complex networks are an important paradigm of modern complex systems science that allows the structural properties of systems composed of different interacting entities to be assessed quantitatively. In recent years, intensive efforts have been devoted to applying network-based concepts to the analysis of dynamically relevant higher-order statistical properties of time series. Notably, many of the corresponding approaches are closely related to the concept of recurrence in phase space. In this paper, we review recent methodological advances in time series analysis based on complex networks, with a special emphasis on methods founded on recurrence plots. The potentials and limitations of the individual methods are discussed and illustrated for paradigmatic examples of dynamical systems as well as for real-world time series. Complex network measures are shown to provide information about structural features of dynamical systems that are complementary to those characterized by other methods of time series analysis and, hence, substantially enrich the knowledge gathered from other existing (linear as well as nonlinear) approaches. Comment: To be published in the International Journal of Bifurcation and Chaos (2011).
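    As a minimal illustration of the recurrence-network idea reviewed here (our own sketch; the embedding parameters and threshold are arbitrary choices): delay-embed the scalar series, link two state vectors whenever they are closer than a recurrence threshold ε, and read network measures off the resulting graph.

```python
import numpy as np
import networkx as nx

def recurrence_network(x, dim=3, tau=2, eps=0.2):
    # Vertices are delay-embedded state vectors; two vertices are linked
    # when their states are closer than the recurrence threshold eps.
    n = len(x) - (dim - 1) * tau
    states = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    # Recurrence matrix R_ij = Theta(eps - ||x_i - x_j||), self-loops removed.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    R = (dists < eps).astype(int)
    np.fill_diagonal(R, 0)
    return nx.from_numpy_array(R)

# Example: recurrence network of a noisy sine wave; its transitivity and
# component structure reflect the underlying (periodic) dynamics.
t = np.linspace(0, 20 * np.pi, 600)
x = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
G = recurrence_network(x)
print(nx.transitivity(G), nx.number_connected_components(G))
```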

    Graph Theory and Networks in Biology

    In this paper, we present a survey of the use of graph-theoretical techniques in biology. In particular, we discuss recent work on identifying and modelling the structure of bio-molecular networks, as well as the application of centrality measures to interaction networks and research on the hierarchical structure of such networks and network motifs. Work on the link between structural network properties and dynamics is also described, with emphasis on synchronization and disease propagation. Comment: 52 pages, 5 figures, survey paper.
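    For instance, one of the motifs commonly studied in biological networks, the feed-forward loop, can be counted with off-the-shelf tools. This is an illustrative sketch with a hypothetical four-gene regulatory network, not code from the survey.

```python
import networkx as nx

# Toy directed regulatory network with one feed-forward loop
# (a -> b -> c together with the shortcut a -> c).
G = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])

# networkx's triadic census counts all 16 directed three-vertex subgraph
# classes; class "030T" is the transitive triad, i.e. the feed-forward loop.
census = nx.triadic_census(G)
print("feed-forward loops:", census["030T"])

# Centrality measures of the kind surveyed for interaction networks.
print("degree:     ", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
```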

    Kirchhoff Index As a Measure of Edge Centrality in Weighted Networks: Nearly Linear Time Algorithms

    Most previous work on centrality focuses on metrics of vertex importance and methods for identifying powerful vertices, while related work on edges is much scarcer, especially for weighted networks, due to the computational challenge. In this paper, we propose to use the well-known Kirchhoff index as the measure of edge centrality in weighted networks, called θ-Kirchhoff edge centrality. The Kirchhoff index of a network is defined as the sum of effective resistances over all vertex pairs. The centrality of an edge e is reflected in the increase of the Kirchhoff index of the network when the edge e is partially deactivated, characterized by a parameter θ. We define two equivalent measures for θ-Kirchhoff edge centrality. Both are global metrics and have better discriminating power than commonly used measures based on local or partial structural information of networks, e.g., edge betweenness and spanning edge centrality. Despite the strong advantages of the Kirchhoff index as a centrality measure and its wide applications, computing the exact value of Kirchhoff edge centrality for each edge in a graph is computationally demanding. To solve this problem, for each of the θ-Kirchhoff edge centrality metrics, we present an efficient algorithm that computes an ε-approximation for all m edges in nearly linear time in m. The proposed θ-Kirchhoff edge centrality is the first global metric of edge importance that can be provably approximated in nearly linear time. Moreover, based on the θ-Kirchhoff edge centrality, we present a θ-Kirchhoff vertex centrality measure, as well as a fast algorithm that can compute ε-approximate Kirchhoff vertex centralities for all n vertices in nearly linear time in m.
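    On small graphs, the exact quantity (as opposed to the paper's nearly linear time ε-approximation) is straightforward to compute from the Laplacian pseudoinverse, using the identity Kf(G) = n · tr(L⁺). The sketch below is under an assumption: we read "partially deactivated" as scaling the edge weight by 1 − θ, which may differ from the paper's precise operator.

```python
import numpy as np
import networkx as nx

def kirchhoff_index(G):
    # Kf(G) = n * trace(L^+) = sum of effective resistances over all
    # vertex pairs, with L^+ the pseudoinverse of the graph Laplacian.
    L = nx.laplacian_matrix(G, weight="weight").toarray().astype(float)
    return G.number_of_nodes() * np.trace(np.linalg.pinv(L))

def edge_centrality(G, e, theta=0.5):
    # Increase in the Kirchhoff index when edge e is partially deactivated;
    # here "deactivated" means its weight is scaled by (1 - theta), which is
    # one plausible reading of the paper's theta-parameterized operator.
    H = G.copy()
    u, v = e
    H[u][v]["weight"] = H[u][v].get("weight", 1.0) * (1.0 - theta)
    return kirchhoff_index(H) - kirchhoff_index(G)

G = nx.cycle_graph(6)  # unweighted: every edge has implicit weight 1
scores = {e: edge_centrality(G, e) for e in G.edges()}
print(scores)  # the cycle is edge-transitive, so all edges score equally
```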