8,273 research outputs found

    Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles

    Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or expensive to acquire. On the other hand, observational data of the organism in steady state (e.g. wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach that appropriately utilizes both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm that uses both perturbation screens and steady-state gene expression data as input to estimate the underlying directed but cyclic network. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized-likelihood-based method, while in the third step a consensus network is constructed from the highest-scoring ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely on only a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network.
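    For concreteness, here is a toy Python sketch of the three-step pipeline the abstract outlines; the ordering score, the ridge-style regression standing in for the penalized likelihood, and all function names are illustrative assumptions rather than the paper's actual procedures.

```python
# Toy sketch of the three-step estimation pipeline described in the abstract.
# The ordering search and the "penalized likelihood" fit below are simplistic
# stand-ins, not the paper's procedures.
import itertools
import numpy as np

def candidate_orderings(perturbation_effects, n_keep=10):
    """Step 1 (toy): score gene orderings by how well they agree with the
    perturbation screen (effects[i, j] True means perturbing gene i changed
    gene j), and keep the best few."""
    p = perturbation_effects.shape[0]
    scored = []
    for order in itertools.permutations(range(p)):
        pos = {g: k for k, g in enumerate(order)}
        # an ordering is good if most observed effects point "downstream"
        agree = sum(1 for i in range(p) for j in range(p)
                    if perturbation_effects[i, j] and pos[i] < pos[j])
        scored.append((agree, order))
    scored.sort(reverse=True)
    return [order for _, order in scored[:n_keep]]

def fit_network(X, order, lam=0.1):
    """Step 2 (toy): regress each gene on its predecessors in the ordering,
    with ridge shrinkage standing in for the penalized likelihood."""
    p = X.shape[1]
    B = np.zeros((p, p))
    for k, j in enumerate(order):
        parents = list(order[:k])
        if parents:
            A = X[:, parents]
            coef = np.linalg.solve(A.T @ A + lam * np.eye(len(parents)),
                                   A.T @ X[:, j])
            B[parents, j] = coef
    return B

def consensus(networks, thresh=0.5):
    """Step 3: keep edges appearing in at least `thresh` of the top networks."""
    freq = np.mean([np.abs(B) > 1e-6 for B in networks], axis=0)
    return freq >= thresh

rng = np.random.default_rng(0)
p, n = 5, 200
X = rng.normal(size=(n, p))                 # fake steady-state expression data
effects = rng.random((p, p)) < 0.3          # fake knockout screen
orders = candidate_orderings(effects, n_keep=5)
nets = [fit_network(X, o) for o in orders]
print(consensus(nets).astype(int))
```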

    Scalable Approximation Algorithm for Network Immunization

    The problem of identifying important players in a given network is of pivotal importance for viral marketing, public health management, network security and various other fields of social network analysis. In this work we find the most important vertices in a graph G = (V, E) to immunize so that the chance of an epidemic outbreak is minimized. This problem is directly relevant to minimizing the impact of a contagion spread (e.g. a flu virus, a computer virus or a rumor) in a graph (e.g. a social network or a computer network) with a limited budget (e.g. the number of available vaccines, antivirus licenses or filters). It is well known that this problem is computationally intractable (it is NP-hard). In this work we reformulate the problem as a budgeted combinatorial optimization problem and use techniques from spectral graph theory to design an efficient greedy algorithm that finds a subset of vertices to immunize. We show that our algorithm takes less time than the state-of-the-art algorithm, and is therefore scalable to networks of much larger sizes than the best known solutions proposed earlier. We also give analytical bounds on the quality of our algorithm. Furthermore, we evaluate the efficacy of our algorithm on a number of real-world networks and demonstrate that its empirical performance supports the theoretical bounds we present, both in terms of approximation guarantees and computational efficiency.
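    As a rough illustration of the spectral-greedy idea (picking vertices whose removal most reduces the adjacency spectral radius, a standard proxy for the epidemic threshold), the sketch below uses an eigenvector-based score as an assumed stand-in; it is not the paper's algorithm.

```python
# Illustrative greedy immunization: repeatedly remove the vertex with the
# largest principal-eigenvector score, approximating the drop in spectral
# radius (largest adjacency eigenvalue) caused by its removal.
import networkx as nx
import numpy as np

def greedy_immunize(G, budget):
    """Return `budget` vertices to immunize (remove) from graph G."""
    H = G.copy()
    chosen = []
    for _ in range(budget):
        A = nx.to_numpy_array(H)
        nodes = list(H.nodes())
        eigvals, eigvecs = np.linalg.eigh(A)
        u = np.abs(eigvecs[:, -1])          # principal eigenvector
        v = nodes[int(np.argmax(u ** 2))]   # vertex with the largest eigenscore
        chosen.append(v)
        H.remove_node(v)
    return chosen

G = nx.barabasi_albert_graph(200, 3, seed=1)
before = max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
picks = greedy_immunize(G, budget=5)
G.remove_nodes_from(picks)
after = max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
print(picks, round(before, 2), "->", round(after, 2))
```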

    Robustness of scale-free spatial networks

    A growing family of random graphs is called robust if it retains a giant component after percolation with arbitrary positive retention probability. We study robustness for graphs in which new vertices are given a spatial position on the d-dimensional torus and are connected to existing vertices with a probability favouring short spatial distances and high degrees. In this model of a scale-free network with clustering we can independently tune the power-law exponent τ of the degree distribution and the rate δd at which the connection probability decreases with the distance between two vertices. We show that the network is robust if τ < 2 + 1/δ, but fails to be robust if τ > 3. In the case of one-dimensional space we also show that the network is not robust if τ > 2 + 1/(δ − 1). This implies that robustness of a scale-free network depends not only on its power-law exponent but also on its clustering features. Unlike the classical models of scale-free networks, our model is not locally tree-like, and hence we need to develop novel methods for its study, including, for example, a surprising application of the BK inequality.
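    As a small worked check of the thresholds quoted above, the function below classifies a pair (τ, δ) using exactly the stated conditions and reports anything outside them as undecided; it illustrates the statement rather than extending it.

```python
# Classify (tau, delta) according to the robustness conditions quoted above.
# Parameter pairs not covered by those conditions are reported as undecided.

def robustness_regime(tau: float, delta: float, dim: int = 2) -> str:
    """delta > 1 is the spatial decay parameter, tau the power-law exponent."""
    if tau < 2 + 1 / delta:
        return "robust"
    if tau > 3:
        return "not robust"
    if dim == 1 and tau > 2 + 1 / (delta - 1):
        return "not robust (one-dimensional refinement)"
    return "undecided by the stated conditions"

for tau, delta in [(2.2, 2.0), (3.5, 2.0), (2.8, 3.0)]:
    print(tau, delta, "->", robustness_regime(tau, delta, dim=1))
```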

    Greedy methods for approximate graph matching with applications for social network analysis

    In this thesis, we study greedy algorithms for approximate sub-graph matching with attributed graphs. Such algorithms find one or more copies of a sub-graph pattern in a larger data graph through approximate matching. One intended application of sub-graph matching is in social network analysis, for detecting potential terrorist groups from known terrorist activity patterns. We propose a new method for approximate sub-graph matching which utilizes degree information to reduce the search space within the incremental greedy search framework. In addition, we introduce the notion of a “seed” in the incremental greedy method, which aims to find a good initial partial match. Simulated data based on a terrorist-profiles database are used in our experiments, which compare the computational efficiency and matching accuracy of the various methods. The experimental results suggest that, as the size of the data graph increases, the efficiency advantage of the degree-based method becomes more significant, while it remains as accurate as incremental greedy. Using a “seed” significantly improves matching accuracy (at the cost of decreased efficiency) when the attribute values in the graphs are deceptively noisy. We have also investigated a method that expands a matched sub-graph within the data graph to include those nodes strongly connected to the current match.
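    The sketch below illustrates an incremental greedy matcher with degree-based pruning and an optional seed, in the spirit of the methods described; the attribute-agreement score and the synthetic data graph are assumptions for illustration, not the thesis's exact procedure.

```python
# Illustrative incremental greedy matching of a small attributed pattern
# graph against a larger attributed data graph. The partial match grows by
# one (pattern node, data node) pair per step, chosen by attribute agreement,
# with a degree filter pruning the candidate pool.
import random
import networkx as nx

def greedy_match(pattern, data, seed=None):
    """Return a dict mapping pattern nodes to data nodes (possibly partial).
    seed, if given, is a (pattern node, data node) pair fixing the start."""
    match = dict([seed]) if seed else {}
    while len(match) < len(pattern):
        best = None
        for p in pattern:
            if p in match:
                continue
            # candidates: unmatched data nodes, adjacent to the images of
            # p's already-matched pattern neighbours (if there are any)
            anchors = [match[q] for q in pattern[p] if q in match]
            pool = set(data) - set(match.values())
            if anchors:
                pool &= set.union(*(set(data[v]) for v in anchors))
            for d in pool:
                if data.degree(d) < pattern.degree(p):
                    continue  # degree-based pruning
                score = sum(pattern.nodes[p].get(k) == v
                            for k, v in data.nodes[d].items())
                if best is None or score > best[0]:
                    best = (score, p, d)
        if best is None:
            break  # nothing extendable: return the partial match found so far
        match[best[1]] = best[2]
    return match

random.seed(0)
data = nx.erdos_renyi_graph(50, 0.1, seed=0)
roles = ["leader", "courier", "financier", "other"]
nx.set_node_attributes(data, {v: random.choice(roles) for v in data}, "role")

pattern = nx.Graph([(0, 1), (1, 2)])          # a three-node chain pattern
nx.set_node_attributes(pattern, {0: "leader", 1: "courier", 2: "financier"}, "role")

print(greedy_match(pattern, data, seed=(1, 7)))
```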

    Nash Networks with Heterogeneous Agents

    A non-cooperative model of network formation is developed. Agents form links with others based on the cost of the link and its assessed benefit. Link formation is one-sided, i.e., agents can initiate links with other agents without their consent, provided the agent forming the link makes the appropriate investment. Information flow is two-way. The model builds on the work of Bala and Goyal, but allows for agent heterogeneity. Whereas they permit links to fail with a certain common probability, in our model the probability of failure can differ across links. We investigate Nash networks that exhibit connectedness and super-connectedness. We provide an explicit characterization of certain star networks. Efficiency and Pareto-optimality issues are discussed through examples. We explore alternative model specifications to address potential shortcomings.
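    A toy sketch of how expected payoffs could be evaluated in such a model, assuming a simple illustrative specification (unit benefit per agent reachable through surviving links under two-way flow, and a linear cost for each link an agent initiates); the functional form, parameters and Monte Carlo evaluation are assumptions, not the paper's.

```python
# Toy Monte Carlo evaluation of expected payoffs in a one-sided link-formation
# game with two-way information flow and link-specific failure probabilities.
import random

def expected_payoff(i, links, fail, cost, n, trials=2000):
    """links: set of directed pairs (a, b), meaning agent a pays for link a-b.
    fail[(a, b)]: probability that link a-b fails; information flow is two-way."""
    total = 0.0
    for _ in range(trials):
        alive = {frozenset(e) for e in links if random.random() > fail[e]}
        # traverse the surviving (undirected) links reachable from agent i
        reached, frontier = {i}, [i]
        while frontier:
            v = frontier.pop()
            for u in range(n):
                if u not in reached and frozenset((v, u)) in alive:
                    reached.add(u)
                    frontier.append(u)
        total += len(reached) - 1            # unit benefit per agent reached
    benefit = total / trials
    return benefit - cost * sum(1 for a, _ in links if a == i)

random.seed(1)
n = 4
links = {(0, 1), (1, 2), (3, 2)}                               # who pays for which link
fail = {e: p for e, p in zip(sorted(links), [0.1, 0.3, 0.5])}  # heterogeneous failure
for i in range(n):
    print(i, round(expected_payoff(i, links, fail, cost=0.5, n=n), 2))
```

    A Nash network in this setting is a configuration of links in which no agent can increase its expected payoff by unilaterally changing the set of links it initiates.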