
    Sparsity-Certifying Graph Decompositions

    We describe a new algorithm, the (k, ℓ)-pebble game with colors, and use it to obtain a characterization of the family of (k, ℓ)-sparse graphs and algorithmic solutions to a family of problems concerning tree decompositions of graphs. Special instances of sparse graphs appear in rigidity theory and have received increased attention in recent years. In particular, our colored pebbles generalize and strengthen the previous results of Lee and Streinu [12] and give a new proof of the Tutte-Nash-Williams characterization of arboricity. We also present a new decomposition that certifies sparsity based on the (k, ℓ)-pebble game with colors. Our work also exposes connections between pebble game algorithms and previous sparse graph algorithms by Gabow [5], Gabow and Westermann [6], and Hendrickson [9].
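
    As a concrete illustration of the underlying machinery, the following Python sketch implements the basic (k, ℓ)-pebble game of Lee and Streinu for simple graphs (valid for 0 ≤ ℓ < 2k), without the colored-pebble extension that is the subject of the paper. The function name and data layout are assumptions made for the sketch, not the authors' implementation.

        # Minimal sketch of the basic (k, l)-pebble game sparsity test.
        from collections import defaultdict

        def pebble_game_sparse(n, edges, k, l):
            """Return True iff the simple graph on vertices 0..n-1 is (k, l)-sparse,
            i.e. the pebble game accepts every edge (assumes 0 <= l < 2k)."""
            pebbles = [k] * n                    # each vertex starts with k pebbles
            out = defaultdict(set)               # orientation of the accepted edges

            def collect(root, avoid):
                """Bring one pebble to `root` by finding a directed path to a free
                pebble and reversing it; never raid the other endpoint `avoid`."""
                seen, stack, parent = {root, avoid}, [root], {}
                while stack:
                    u = stack.pop()
                    for w in list(out[u]):
                        if w in seen:
                            continue
                        if pebbles[w] > 0:       # free pebble found: reverse the path
                            pebbles[w] -= 1
                            pebbles[root] += 1
                            prev, cur = u, w
                            while True:
                                out[prev].remove(cur)
                                out[cur].add(prev)
                                if prev == root:
                                    return True
                                cur, prev = prev, parent[prev]
                        seen.add(w)
                        parent[w] = u
                        stack.append(w)
                return False

            for (u, v) in edges:
                # an edge may be added only when u and v together hold >= l+1 pebbles
                while pebbles[u] + pebbles[v] < l + 1:
                    if not (collect(u, v) or collect(v, u)):
                        return False             # edge rejected: not (k, l)-sparse
                if pebbles[u] == 0:              # orient away from an endpoint with a pebble
                    u, v = v, u
                pebbles[u] -= 1
                out[u].add(v)
            return True

        # K4 violates the (2,3) count (6 edges on 4 vertices); a path does not.
        print(pebble_game_sparse(4, [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)], 2, 3))  # False
        print(pebble_game_sparse(4, [(0,1),(1,2),(2,3)], 2, 3))                    # True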

    Sparse Hypergraphs and Pebble Game Algorithms

    A hypergraph G=(V,E) is (k,ℓ)-sparse if no subset V′⊂V spans more than k|V′|−ℓ hyperedges. We characterize (k,ℓ)-sparse hypergraphs in terms of graph theoretic, matroidal and algorithmic properties. We extend several well-known theorems of Haas, Lovász, Nash-Williams, Tutte, and White and Whiteley, linking arboricity of graphs to certain counts on the number of edges. We also address the problem of finding lower-dimensional representations of sparse hypergraphs, and identify a critical behavior in terms of the sparsity parameters k and ℓ. Our constructions extend the pebble games of Lee and Streinu [A. Lee, I. Streinu, Pebble game algorithms and sparse graphs, Discrete Math. 308 (8) (2008) 1425–1437] from graphs to hypergraphs.
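
    For small instances the sparsity count in the first sentence can be checked directly by enumerating vertex subsets, which makes the definition concrete; the brute-force Python sketch below is purely illustrative (the paper's pebble-game algorithms decide sparsity efficiently), and it assumes the usual convention that only subsets spanning at least one hyperedge are constrained.

        from itertools import combinations

        def is_kl_sparse_hypergraph(vertices, hyperedges, k, l):
            """True iff no subset V' of the vertices spans more than k|V'| - l
            hyperedges; a hyperedge is spanned when all its vertices lie in V'."""
            vertices = list(vertices)
            for size in range(1, len(vertices) + 1):
                for subset in combinations(vertices, size):
                    s = set(subset)
                    spanned = sum(1 for e in hyperedges if set(e) <= s)
                    if spanned >= 1 and spanned > k * len(s) - l:
                        return False
            return True

        # Ordinary graphs are the 2-uniform case: the triangle is (1,0)-sparse
        # (every subgraph has at most as many edges as vertices) but is not
        # (1,1)-sparse, since it is not a forest.
        triangle = [{1, 2}, {2, 3}, {1, 3}]
        print(is_kl_sparse_hypergraph({1, 2, 3}, triangle, 1, 0))  # True
        print(is_kl_sparse_hypergraph({1, 2, 3}, triangle, 1, 1))  # False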

    Graded Sparse Graphs and Matroids

    Sparse graphs and their associated matroids play an important role in rigidity theory, where they capture the combinatorics of generically rigid structures. We define a new family, called graded sparse graphs, arising from generically pinned (completely immobilized) bar-and-joint frameworks, and prove that they also form matroids. We address five problems on graded sparse graphs: Decision, Extraction, Components, Optimization, and Extension. We extend our pebble game algorithms to solve them. Comment: 9 pages, 1 figure; improved presentation and fixed typo.

    Algorithms for detecting dependencies and rigid subsystems for CAD

    Geometric constraint systems underlie popular Computer Aided Design software. Automated approaches for detecting dependencies in a design are critical for developing robust solvers and providing informative user feedback, and we provide algorithms for two types of dependencies. First, we give a pebble game algorithm for detecting generic dependencies. Then, we focus on identifying the "special positions" of a design, in which generically independent constraints become dependent. We present combinatorial algorithms for identifying subgraphs associated to factors of a particular polynomial, whose vanishing indicates a special position and resulting dependency. Further factoring in the Grassmann-Cayley algebra may allow a geometric interpretation giving conditions (e.g., "these two lines being parallel causes a dependency") determining the special position. Comment: 37 pages, 14 figures (v2 is an expanded version of an AGD'14 abstract based on v1).

    On Characterizing the Data Movement Complexity of Computational DAGs for Parallel Execution

    Get PDF
    Technology trends are making the cost of data movement increasingly dominant, both in terms of energy and time, over the cost of performing arithmetic operations in computer systems. The fundamental ratio of aggregate data movement bandwidth to total computational power (also referred to as the machine balance parameter) in parallel computer systems is decreasing. It is therefore of considerable importance to characterize the inherent data movement requirements of parallel algorithms, so that the minimal architectural balance parameters required to support them on future systems can be well understood. In this paper, we develop an extension of the well-known red-blue pebble game to derive lower bounds on the data movement complexity for the parallel execution of computational directed acyclic graphs (CDAGs) on parallel systems. We model multi-node multi-core parallel systems, with the total physical memory distributed across the nodes (which are connected through some interconnection network) and with a multi-level shared cache hierarchy for the processors within a node. We also develop new techniques for lower bound characterization of non-homogeneous CDAGs. We demonstrate the use of the methodology by analyzing the CDAGs of several numerical algorithms, to develop lower bounds on data movement for their parallel execution.
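
    The starting point is the classic red-blue pebble game of Hong and Kung: red pebbles model a bounded fast memory, blue pebbles model unbounded slow memory, and every conversion between the two is one data transfer. The Python sketch below replays a given pebbling strategy on a small CDAG and counts its I/O cost; the sequential, single-level setting, the move encoding, and the function name are assumptions for illustration, whereas the paper's contribution is extending such games to multi-node, multi-core, multi-level systems.

        def replay_red_blue(dag, sources, sinks, moves, S):
            """Replay a red-blue pebbling strategy and return its I/O cost.
            dag: {node: [predecessors]}; sources start with blue pebbles.
            moves: list of (op, node), op in {'load', 'store', 'compute', 'evict'}.
            S: number of red pebbles, i.e. the fast-memory capacity."""
            red, blue, io = set(), set(sources), 0
            for op, v in moves:
                if op == 'load':                  # blue -> red costs one transfer
                    assert v in blue
                    red.add(v); io += 1
                elif op == 'store':               # red -> blue costs one transfer
                    assert v in red
                    blue.add(v); io += 1
                elif op == 'compute':             # all operands must sit in fast memory
                    assert v not in sources and all(p in red for p in dag.get(v, ()))
                    red.add(v)
                elif op == 'evict':               # dropping a red pebble is free
                    red.discard(v)
                assert len(red) <= S, "fast-memory capacity exceeded"
            assert all(v in blue for v in sinks), "every output must end in slow memory"
            return io

        # Tiny example: c = f(a, b) with three red pebbles costs two loads and one store.
        dag = {'c': ['a', 'b']}
        io = replay_red_blue(dag, sources={'a', 'b'}, sinks={'c'}, S=3,
                             moves=[('load', 'a'), ('load', 'b'),
                                    ('compute', 'c'), ('store', 'c')])
        print(io)  # 3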

    An Incidence Geometry approach to Dictionary Learning

    We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a sparse representation of data points, by learning dictionary vectors upon which the data points can be written as sparse linear combinations. We view this problem from a geometry perspective as the spanning set of a subspace arrangement, and focus on understanding the case when the underlying hypergraph of the subspace arrangement is specified. For this Fitted Dictionary Learning problem, we completely characterize the combinatorics of the associated subspace arrangements (i.e., their underlying hypergraphs). Specifically, a combinatorial rigidity-type theorem is proven for a type of geometric incidence system. The theorem characterizes the hypergraphs of subspace arrangements that generically yield (a) at least one dictionary, or (b) a locally unique dictionary (i.e., at most a finite number of isolated dictionaries) of the specified size. We are unaware of prior application of combinatorial rigidity techniques in the setting of Dictionary Learning, or even in machine learning. We also provide a systematic classification of problems related to Dictionary Learning, together with various algorithms, their assumptions and performance.
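
    As a purely numerical illustration of the fitted setting (not the combinatorial rigidity characterization proved in the paper), one can fix which dictionary vectors each data point is allowed to use, exactly what the underlying hypergraph prescribes, and fit the coefficients by least squares. The function name and data layout below are assumptions made for this sketch.

        import numpy as np

        def fitted_residuals(Y, D, supports):
            """Y: (m, n) data points as rows; D: (s, n) dictionary vectors as rows;
            supports[i]: indices of the dictionary vectors data point i may use
            (its hyperedge). Returns the least-squares residual of each point on
            its prescribed support."""
            residuals = []
            for y, sup in zip(Y, supports):
                A = D[list(sup)].T                              # n x |sup| design matrix
                coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # best combination on the support
                residuals.append(np.linalg.norm(A @ coeffs - y))
            return residuals

        # Two points in R^3, each written over two of the three dictionary vectors.
        D = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        Y = np.array([[2.0, 3.0, 0.0], [0.0, 1.0, 4.0]])
        print(fitted_residuals(Y, D, supports=[(0, 1), (1, 2)]))  # both ~ 0.0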