
    Application of new probabilistic graphical models in the genetic regulatory networks studies

    This paper introduces two new probabilistic graphical models for reconstructing genetic regulatory networks from DNA microarray data. One is an Independence Graph (IG) model with either a forward or a backward search algorithm; the other is a Gaussian Network (GN) model with a novel greedy search method. The performance of both models was evaluated on four MAPK pathways in yeast and on three simulated data sets. In general, the IG model produces a sparse graph, whereas the GN model produces a dense graph in which more information about gene-gene interactions is preserved. We also identify two key limitations in the prediction of genetic regulatory networks from DNA microarray data: first, the sample size may be insufficient; second, the complexity of network structures may not be captured without additional data at the protein level. These limitations apply to all prediction methods that use only DNA microarray data.
    Comment: 38 pages, 3 figures
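    The independence-graph idea can be sketched generically: in a Gaussian graphical model, two genes are conditionally independent given the rest exactly when the corresponding partial correlation (read off the inverted covariance matrix) is zero. The snippet below is a minimal illustration of that principle, not the paper's IG or GN search algorithms; the threshold and function names are this sketch's own.

    ```python
    import numpy as np

    def partial_correlations(data):
        """Partial correlations between columns of `data` (samples x genes).

        Obtained by inverting the sample covariance matrix and rescaling
        its off-diagonal entries; a zero partial correlation corresponds
        to conditional independence in a Gaussian graphical model.
        """
        cov = np.cov(data, rowvar=False)
        prec = np.linalg.inv(cov)           # precision matrix
        d = np.sqrt(np.diag(prec))
        pcor = -prec / np.outer(d, d)       # rescale to [-1, 1]
        np.fill_diagonal(pcor, 1.0)
        return pcor

    def independence_graph(data, threshold=0.3):
        """Adjacency matrix: edge iff |partial correlation| > threshold."""
        pcor = partial_correlations(data)
        adj = np.abs(pcor) > threshold
        np.fill_diagonal(adj, False)
        return adj
    ```

    With few samples the covariance matrix is poorly conditioned or singular, which is one concrete face of the sample-size limitation the abstract points out.
    
    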

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph, with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the studies that do use the Hyperlink graph rely on distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly-available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly-available as the Graph-Based Benchmark Suite (GBBS).
    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
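    Shared-memory graph frameworks of this kind typically express algorithms as level-synchronous rounds that map over the edges of a current frontier. The sketch below shows that frontier structure for BFS in plain sequential Python, assuming an adjacency-dict graph; in a real implementation such as GBBS, each round's edge map would run in parallel across cores.

    ```python
    def bfs_frontier(adj, source):
        """Level-synchronous BFS over an adjacency dict {vertex: neighbours}.

        Each round visits the out-edges of the current frontier and
        collects the newly discovered vertices into the next frontier,
        recording their distance from the source.
        """
        dist = {source: 0}
        frontier = [source]
        level = 0
        while frontier:
            level += 1
            next_frontier = []
            for u in frontier:           # in a parallel framework, this loop is the edge map
                for v in adj.get(u, ()):
                    if v not in dist:
                        dist[v] = level
                        next_frontier.append(v)
            frontier = next_frontier
        return dist
    ```
    
    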

    Free energy landscapes, dynamics and the edge of chaos in mean-field models of spin glasses

    Metastable states in Ising spin-glass models are studied by finding iterative solutions of mean-field equations for the local magnetizations. Two different sets of equations are studied: the TAP equations, which are exact for the SK model, and the simpler `naive-mean-field' (NMF) equations. The free-energy landscapes that emerge are very different. For the TAP equations, the numerical studies confirm the analytical results of Aspelmeier et al., which predict that TAP states consist of close pairs of minima and index-one (one unstable direction) saddle points, while for the NMF equations saddle points with large indices are found. For TAP, the barrier height between a minimum and its nearby saddle point scales as (f-f_0)^{-1/3}, where f is the free energy per spin of the solution and f_0 is the equilibrium free energy per spin. This means that for `pure states', for which f-f_0 is of order 1/N, the barriers scale as N^{1/3}, but between states for which f-f_0 is of order one the barriers are finite and small, so such metastable states will be of limited physical significance. For the NMF equations there are saddles of index K, and we can demonstrate that their complexity Sigma_K scales as a function of K/N. We have also employed an iterative scheme with a free parameter that can be adjusted to bring the system of equations close to the `edge of chaos'. Both for the TAP and the NMF equations it is possible with this approach to find metastable states whose free energy per spin is close to f_0. As N increases, it becomes harder and harder to find solutions near the edge of chaos, but nevertheless the results which can be obtained are competitive with those achieved by more time-consuming computing methods, suggesting that this method may be of general utility.
    Comment: 13 pages
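    A minimal sketch of the NMF side of this setup, assuming SK-like Gaussian couplings: the fixed-point equations m_i = tanh(beta * sum_j J_ij m_j) are iterated with a damping parameter, which stands in for the free parameter of the iterative scheme mentioned above (the paper tunes it toward the edge of chaos; here it simply stabilizes the map).

    ```python
    import numpy as np

    def sk_couplings(n, seed=0):
        """Symmetric Gaussian couplings with zero diagonal, SK-style scaling."""
        rng = np.random.default_rng(seed)
        A = rng.normal(scale=1 / np.sqrt(n), size=(n, n))
        J = (A + A.T) / np.sqrt(2)
        np.fill_diagonal(J, 0.0)
        return J

    def nmf_iterate(J, beta=0.3, damping=0.5, steps=500, seed=0):
        """Damped iteration of the naive-mean-field equations.

        Fixed points satisfy m_i = tanh(beta * sum_j J_ij m_j); the
        damping parameter mixes old and new magnetizations each step.
        """
        rng = np.random.default_rng(seed)
        m = rng.uniform(-1, 1, size=J.shape[0])
        for _ in range(steps):
            m_new = np.tanh(beta * (J @ m))
            m = (1 - damping) * m + damping * m_new
        return m
    ```

    At high temperature (small beta) the iteration is a contraction and converges to the paramagnetic solution; finding the nontrivial low-temperature solutions near f_0 is exactly where the edge-of-chaos tuning in the paper matters.
    
    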

    Assessing Simulations of Imperial Dynamics and Conflict in the Ancient World

    The development of models to capture large-scale dynamics in human history is one of the core contributions of cliodynamics. Most often, these models are assessed by their predictive capability on some macro-scale and aggregated measure and compared to manually curated historical data. In this report, we consider the model from Turchin et al. (2013), where the evaluation is done on the prediction of "imperial density": the relative frequency with which a geographical area belonged to large-scale polities over a certain time window. We implement the model and release both code and data for reproducibility. We then assess its behaviour against three historical data sets: the relative size of simulated polities vs historical ones; the spatial correlation of simulated imperial density with historical population density; the spatial correlation of simulated conflict vs historical conflict. At the global level, we show good agreement with population density (R^2 < 0.75), and some agreement with historical conflict in Europe (R^2 < 0.42). The model instead fails to reproduce the historical shape of individual polities. Finally, we tweak the model to behave greedily by having polities preferentially attack weaker neighbours. Results significantly degrade, suggesting that random attacks are a key trait of the original model. We conclude by proposing a way forward by matching the probabilistic imperial strength from simulations to inferred networked communities from real settlement data.
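    The random-versus-greedy tweak can be illustrated with a toy conflict model: polities on a ring hold a power value, and each round a random polity attacks a neighbour, either chosen at random or, in the greedy variant, always the weaker one. This is a deliberately simplified sketch for intuition only, not the Turchin et al. (2013) model or the report's implementation; all names and rules here are this sketch's own.

    ```python
    import random

    def simulate(powers, rounds=1000, greedy=False, seed=0):
        """Toy conflict dynamics on a ring of polities.

        Each round a random polity attacks one of its two ring
        neighbours (the weaker one if `greedy`, a random one otherwise)
        and absorbs its power when strictly stronger. Total power is
        conserved; eliminated polities keep power 0.
        """
        rng = random.Random(seed)
        p = list(powers)
        n = len(p)
        for _ in range(rounds):
            i = rng.randrange(n)
            if p[i] == 0:
                continue                      # eliminated polities cannot attack
            nbrs = [(i - 1) % n, (i + 1) % n]
            j = min(nbrs, key=lambda k: p[k]) if greedy else rng.choice(nbrs)
            if p[i] > p[j]:
                p[i] += p[j]
                p[j] = 0
        return p
    ```

    Comparing the distribution of surviving polity sizes under the two attack rules is the kind of aggregate statistic the report degrades under greedy behaviour.
    
    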

    A Framework for Reinforcement Learning and Planning

    Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are planning and reinforcement learning, yet the two fields largely have their own research communities. If both fields solve the same problem, however, we should be able to disentangle the common factors in their solution approaches. Therefore, this paper presents a unifying framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which any planning or learning algorithm has to decide. At the end of the paper, we compare - in a single table - a variety of well-known planning, model-free and model-based RL algorithms along the dimensions of our framework, illustrating the validity of the framework. Altogether, FRAP provides deeper insight into the algorithmic space of planning and reinforcement learning, and also suggests new approaches to the integration of both fields.
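    The shared problem underlying both fields can be made concrete with tabular value iteration, the textbook dynamic-programming solution to MDP optimization. This is a standard illustration of the problem FRAP frames, not an algorithm from the paper; the data layout (nested lists of transition pairs) is this sketch's own convention.

    ```python
    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        """Tabular value iteration for an MDP.

        P[s][a] is a list of (probability, next_state) pairs and R[s][a]
        the immediate reward for taking action a in state s. Repeatedly
        applies the Bellman optimality backup until values stop changing
        by more than `tol`, then returns the optimal state values.
        """
        n = len(P)
        V = [0.0] * n
        while True:
            V_new = [
                max(R[s][a] + gamma * sum(pr * V[s2] for pr, s2 in P[s][a])
                    for a in range(len(P[s])))
                for s in range(n)
            ]
            if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
                return V_new
            V = V_new
    ```

    Planning methods apply such backups using a known model P and R, while reinforcement learning estimates the same quantities from sampled experience; FRAP's dimensions sort algorithms by exactly these kinds of choices.
    
    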