
    Implicit Decomposition for Write-Efficient Connectivity Algorithms

    The future of main memory appears to lie in the direction of new technologies that provide strong capacity-to-performance ratios, but have write operations that are much more expensive than reads in terms of latency, bandwidth, and energy. Motivated by this trend, we propose sequential and parallel algorithms to solve graph connectivity problems using significantly fewer writes than conventional algorithms. Our primary algorithmic tool is the construction of an o(n)-sized "implicit decomposition" of a bounded-degree graph G on n nodes, which, combined with read-only access to G, enables fast answers to connectivity and biconnectivity queries on G. The construction breaks the linear-write "barrier", resulting in costs that are asymptotically lower than conventional algorithms while adding only a modest cost to query time. For general non-sparse graphs on m edges, we also provide the first parallel algorithms for connectivity and biconnectivity that use o(m) writes and O(m) operations. These algorithms provide insight into how applications can efficiently process computations on large graphs in systems with read-write asymmetry.
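
    The abstract above describes a small decomposition that, together with read-only access to the graph, answers connectivity queries while keeping the persistent (written) state small. The following is a minimal sketch of that general idea only, not the paper's construction: it samples hypothetical "center" vertices, keeps a small union-find over the centers as the only persistent state, and answers queries by searching the read-only graph until a center is reached. The function names (`build_centers`, `query_connected`) and the sampling scheme are assumptions, and this toy version does not achieve the paper's o(n) write bound (the BFS visited sets are temporary O(n) state).

```python
import random
from collections import deque

def bfs_reach(adj, src, stop_at=None):
    """Breadth-first search over a read-only adjacency list.
    Returns (visited_set, first_vertex_found_in_stop_at_or_None)."""
    seen = {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if stop_at is not None and u in stop_at:
            return seen, u
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return seen, None

def build_centers(adj, k, seed=0):
    """Sample k 'center' vertices and record which centers share a component.
    Only the small union-find over centers is kept as written state."""
    rng = random.Random(seed)
    centers = set(rng.sample(range(len(adj)), k))
    parent = {c: c for c in centers}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for c in centers:
        reachable, _ = bfs_reach(adj, c)    # reads of the graph only
        for d in centers & reachable:
            parent[find(d)] = find(c)
    return centers, parent, find

def query_connected(adj, centers, find, u, v):
    """Answer a connectivity query using reads of `adj` plus the small
    center structure; fall back to direct search if a component contains
    no sampled center."""
    seen_u, cu = bfs_reach(adj, u, stop_at=centers)
    seen_v, cv = bfs_reach(adj, v, stop_at=centers)
    if cu is not None and cv is not None:
        return find(cu) == find(cv)
    # A search that found no center has seen its entire component.
    return v in seen_u or u in seen_v
```

    For example, with `adj` given as a list of neighbor lists, `centers, parent, find = build_centers(adj, k=8)` followed by `query_connected(adj, centers, find, u, v)` answers a query using only reads of `adj` and the small center structure.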

    Randomized Search of Graphs in Log Space and Probabilistic Computation

    Reingold has shown that L = SL, that s-t connectivity in a poly-mixing digraph is complete for promise-RL, and that s-t connectivity for a poly-mixing out-regular digraph with known stationary distribution is in L. Several properties that bound the mixing times of random walks on digraphs have been identified, including the digraph conductance and the digraph spectral expansion. However, rapidly mixing digraphs can still have exponential cover time, so it is important to identify the structural properties of digraphs that affect cover times. We examine the complexity of random walks on a basic parameterized family of unbalanced digraphs called Strong Chains (which model weakly symmetric logspace computations), and on a special family of Strong Chains called Harps. We show that the worst-case hitting times of Strong Chain families vary smoothly with the number of asymmetric vertices and identify the necessary condition for non-polynomial cover time. This analysis also yields bounds on the cover times of general digraphs. Next we relate random walks on graphs to the random walks that arise in Monte Carlo methods applied to optimization problems. We introduce the notion of the asymmetric states of Markov chains and use this definition to obtain results about Markov chains, as well as results on the mixing times of Markov chain Monte Carlo methods. We then consider whether a single long random walk or many short walks, which reset to the start after a fixed number of steps, is a better strategy for exploration. We exhibit digraph families for which a few short walks are far superior to a single long walk, and we introduce an iterative deepening random search, which we use to estimate the cover time of poly-mixing subgraphs. Finally, we discuss complexity-theoretic implications and future work.
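
    The comparison between one long walk and many short restarting walks lends itself to a small simulation. The sketch below is an illustrative experiment under assumed parameters, not the digraph families or analysis from the text: `star_of_paths` is a hypothetical digraph in which a long walk is absorbed in one branch while restarting walks keep resampling fresh branches, loosely echoing the claim that some digraph families favor a few short walks.

```python
import random

def random_walk_coverage(adj, start, total_steps, restart_every=None, seed=0):
    """Count distinct vertices visited within a fixed step budget.
    If `restart_every` is set, the walk resets to `start` after that many steps."""
    rng = random.Random(seed)
    visited = {start}
    v, steps_since_restart = start, 0
    for _ in range(total_steps):
        if restart_every is not None and steps_since_restart == restart_every:
            v, steps_since_restart = start, 0
        if not adj[v]:                        # dead end: restart
            v, steps_since_restart = start, 0
            continue
        v = rng.choice(adj[v])
        visited.add(v)
        steps_since_restart += 1
    return len(visited)

def star_of_paths(num_paths, path_len):
    """Start vertex 0 fans out into `num_paths` disjoint one-way paths whose
    endpoints are absorbing. A single long walk commits to one path and is
    trapped there; restarting walks keep sampling different paths."""
    adj = [[] for _ in range(1 + num_paths * path_len)]
    for p in range(num_paths):
        first = 1 + p * path_len
        adj[0].append(first)
        for i in range(path_len - 1):
            adj[first + i] = [first + i + 1]
        adj[first + path_len - 1] = [first + path_len - 1]   # absorbing endpoint
    return adj

if __name__ == "__main__":
    g = star_of_paths(num_paths=50, path_len=20)
    budget = 5000
    print("single long walk :", random_walk_coverage(g, 0, budget))
    print("restarting walks :", random_walk_coverage(g, 0, budget, restart_every=25))
```

    With the same step budget, the restarting walks cover far more of this particular digraph than the single long walk, because each restart escapes the absorbing branch.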

    Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data

    We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic inference (CSI): making inferences from causal interventions on stable behavior in strategic settings. Applications include the identification of the most influential individuals in large (social) networks; such tasks can also support policy-making analysis. Motivated by the computational work on LIGs, we cast the learning problem as maximum-likelihood estimation (MLE) of a generative model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation uncovers the fundamental interplay between goodness-of-fit and model complexity: good models capture equilibrium behavior within the data while controlling the true number of equilibria, including those unobserved. We provide a generalization bound establishing the sample complexity for MLE in our framework. We propose several algorithms, including convex loss minimization (CLM) and sigmoidal approximations. We prove that the number of exact PSNE in LIGs is small, with high probability; thus, CLM is sound. We illustrate our approach on synthetic data and real-world U.S. congressional voting records. We briefly discuss our learning framework's generality and potential applicability to general graphical games. Comment: Journal of Machine Learning Research (accepted, pending publication). Last conference version: submitted March 30, 2012 to UAI 2012. First conference version, entitled Learning Influence Games, initially submitted on June 1, 2010 to NIPS 2010.
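
    As a rough illustration of the convex-loss idea, the sketch below fits, for each player, a sparse logistic model of that player's action given the other players' actions, mirroring the linear best-response condition x_i (w_i . x_{-i} - b_i) >= 0 of an LIG. This is an assumed simplification, not the estimator from the paper; the function names, the L1 penalty, and all hyperparameters are hypothetical.

```python
import numpy as np

def fit_influence_weights(X, lam=0.05, lr=0.1, iters=2000):
    """X: (m, n) array of observed joint actions in {-1, +1}.
    Returns (W, b) where W[i] are player i's estimated influence weights
    over the other players (W[i, i] is forced to 0) and b[i] is a threshold."""
    m, n = X.shape
    W = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        y = X[:, i].astype(float)        # player i's own action
        Z = X.astype(float).copy()
        Z[:, i] = 0.0                    # exclude self-influence
        w, bi = np.zeros(n), 0.0
        for _ in range(iters):
            margin = np.clip(y * (Z @ w - bi), -30, 30)
            s = -y / (1.0 + np.exp(margin))          # per-sample loss derivative
            grad_w = Z.T @ s / m + lam * np.sign(w)  # L1-penalized logistic loss
            grad_b = -s.mean()
            w -= lr * grad_w
            bi -= lr * grad_b
            w[i] = 0.0
        W[i], b[i] = w, bi
    return W, b

def is_psne(x, W, b):
    """Check whether joint action x (in {-1,+1}^n) is a pure-strategy Nash
    equilibrium of the estimated linear influence game."""
    return bool(np.all(x * (W @ x - b) >= 0))
```

    Given a matrix of joint votes or actions, `W, b = fit_influence_weights(X)` yields a sparse influence structure, and `is_psne(x, W, b)` tests whether a joint action is an equilibrium of the learned game.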