
    Evolutionary conservation and over-representation of functionally enriched network patterns in the yeast regulatory network

    BACKGROUND: Localized network patterns are assumed to represent an optimal design principle in different biological networks. A widely used method for identifying functional components in biological networks is to look for network motifs – over-represented network patterns. A number of recent studies have undermined the claim that these over-represented patterns are indicative of optimal design principles and have questioned whether localized network patterns are indeed of functional significance. This paper examines the functional significance of regulatory network patterns via their biological annotation and evolutionary conservation. RESULTS: We enumerate all 3-node network patterns in the regulatory network of the yeast S. cerevisiae and examine the biological GO annotation and evolutionary conservation of their constituent genes. Specific 3-node patterns are found to be functionally enriched under different exogenous cellular conditions and thus may represent significant functional components. These functionally enriched patterns are composed mainly of recently evolved genes, suggesting that there is no evolutionary pressure acting to preserve them. No correlation is found between over-representation of network patterns and functional enrichment. CONCLUSION: The findings of functional enrichment support the view that network patterns constitute an important design principle in regulatory networks. However, the widely used method of over-representation for detecting motifs is not suitable for identifying functionally enriched patterns.
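
    The core computation described in this abstract is straightforward to sketch: enumerate every connected 3-node subgraph of a directed regulatory network and group the instances by isomorphism class. Below is a minimal Python illustration, assuming the network is given as (regulator, target) edge pairs; the canonical-form trick and all names are our own, not the authors' code, and the brute-force triple loop is only practical for small networks.

```python
from collections import defaultdict
from itertools import combinations, permutations

def pattern_id(triple, sub_edges):
    """Canonical id of the directed subgraph induced on `triple`:
    the lexicographically smallest adjacency bit-string over all
    6 orderings of the three nodes (an isomorphism invariant)."""
    best = None
    for order in permutations(triple):
        bits = "".join(
            "1" if (u, v) in sub_edges else "0"
            for u in order for v in order if u != v
        )
        if best is None or bits < best:
            best = bits
    return best

def count_3node_patterns(edges):
    """Group all connected 3-node subgraphs by pattern id.
    Brute force over node triples: O(n^3), fine for toy networks."""
    edges = set(edges)
    nodes = {n for e in edges for n in e}
    patterns = defaultdict(list)
    for triple in combinations(sorted(nodes), 3):
        sub = {(u, v) for (u, v) in edges
               if u in triple and v in triple}
        # with 3 nodes, every node touching an internal edge
        # already implies weak connectivity
        if len({n for e in sub for n in e}) == 3:
            patterns[pattern_id(triple, sub)].append(triple)
    return patterns

# toy network containing one feed-forward loop: A -> B, A -> C, B -> C
demo_edges = [("A", "B"), ("A", "C"), ("B", "C")]
for pid, instances in count_3node_patterns(demo_edges).items():
    print(pid, instances)
```

    Motif-detection pipelines then compare these per-pattern counts against randomized networks to decide over-representation; the paper's point is that this over-representation test, not the enumeration itself, fails to track functional enrichment.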

    More data means less inference: A pseudo-max approach to structured learning

    The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference in this setting are intractable. Here we show that it is possible to circumvent this difficulty when the input distribution is rich enough, via a method similar in spirit to pseudo-likelihood. We show how our new method achieves consistency, and illustrate empirically that it indeed performs as well as exact methods when sufficiently large training sets are used.
    Funding: United States-Israel Binational Science Foundation (Grant 2008303); Google (Firm) (Research Grant); Google (Firm) (PhD Fellowship)
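
    As a rough illustration of the pseudo-max idea, the sketch below replaces the intractable joint argmax with per-variable maximizations in which every other variable is clamped to its gold label, wrapped in a perceptron-style update. Everything here (the feature map phi, the margin term, the learning rate) is an illustrative simplification under our own assumptions, not the paper's exact max-margin formulation or its consistency conditions.

```python
import numpy as np

def pseudo_max_step(w, x, y_gold, labels, phi, lr=0.1):
    """One perceptron-style pass of pseudo-max updates.
    For each position i, maximize over the label at i alone,
    clamping every other position to its gold label."""
    y_gold = list(y_gold)
    for i in range(len(y_gold)):
        def score(lab):
            y = y_gold[:i] + [lab] + y_gold[i + 1:]
            margin = 0.0 if lab == y_gold[i] else 1.0
            return w @ phi(x, y) + margin
        y_hat = max(labels, key=score)
        if y_hat != y_gold[i]:
            y_bad = y_gold[:i] + [y_hat] + y_gold[i + 1:]
            w = w + lr * (phi(x, y_gold) - phi(x, y_bad))
    return w

# toy demo: binary chain with emission and transition-agreement features
def phi(x, y):
    f = np.zeros(4)
    for xi, yi in zip(x, y):
        f[yi] += xi                    # emission feature per label
    for a, b in zip(y, y[1:]):
        f[2 + int(a == b)] += 1.0      # transition (dis)agreement
    return f

w = np.zeros(4)
w = pseudo_max_step(w, x=[1.0, -1.0, 1.0], y_gold=[1, 0, 1],
                    labels=[0, 1], phi=phi)
print(w)   # weights nudged toward the gold labeling
```

    Each position costs only |labels| scoring calls instead of a joint maximization over all labelings, which is what makes the approach attractive when exact inference is intractable.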

    Learning efficiently with approximate inference via dual losses

    Many structured prediction tasks involve complex models where inference is computationally intractable, but where it can be well approximated using a linear programming relaxation. Previous approaches to learning for structured prediction (e.g., cutting-plane, subgradient methods, perceptron) repeatedly make predictions for some of the data points. These approaches are computationally demanding because each prediction involves solving a linear program to optimality. We present a scalable algorithm for learning for structured prediction. The main idea is to instead solve the dual of the structured prediction loss. We formulate the learning task as a convex minimization over both the weights and the dual variables corresponding to each data point. As a result, we can begin to optimize the weights even before completely solving any of the individual prediction problems. We show how the dual variables can be efficiently optimized using coordinate descent. Our algorithm is competitive with state-of-the-art methods such as stochastic subgradient and cutting-plane.
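
    The inference-side building block here is coordinate descent on the dual of a decomposed model. The toy below splits a 3-variable chain into two overlapping factors coupled by dual variables lam on the shared variable, then performs one exact block-coordinate update on lam; because the chain is a tree, the dual becomes tight and matches the brute-force maximum. All symbols (theta1, theta12, lam, ...) are illustrative assumptions. The paper's actual contribution, interleaving such dual updates with weight updates so that no per-example problem is ever solved to optimality, is not shown in this snippet.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3                                              # labels per variable
theta1, theta2, theta3 = rng.normal(size=(3, K))   # unary scores
theta12, theta23 = rng.normal(size=(2, K, K))      # pairwise scores

def dual_value(lam):
    """Dual bound: each factor is maximized independently, with the
    shared variable y2 coupled only through the dual variables lam."""
    a = (theta1[:, None] + theta12
         + 0.5 * theta2[None, :] + lam[None, :]).max()
    b = (0.5 * theta2[:, None] - lam[:, None]
         + theta23 + theta3[None, :]).max()
    return a + b

lam = np.zeros(K)
print("dual before update:", dual_value(lam))

# max-marginals of y2 in each factor, then the exact minimizing block step
mA = (theta1[:, None] + theta12).max(axis=0) + 0.5 * theta2
mB = 0.5 * theta2 + (theta23 + theta3[None, :]).max(axis=1)
lam = (mB - mA) / 2.0
print("dual after update: ", dual_value(lam))

# brute-force MAP value for comparison (the dual is now tight)
exact = max(theta1[y1] + theta12[y1, y2] + theta2[y2]
            + theta23[y2, y3] + theta3[y3]
            for y1 in range(K) for y2 in range(K) for y3 in range(K))
print("exact maximum:     ", exact)
```

    Each such block update is cheap and monotonically decreases the dual, which is why weight optimization can proceed usefully on partially optimized duals rather than waiting for each LP to be solved to optimality.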