
    The Linear Complexity of a Graph

    The linear complexity of a matrix is a measure of the number of additions, subtractions, and scalar multiplications required to multiply that matrix and an arbitrary vector. In this paper, we define the linear complexity of a graph to be the linear complexity of any one of its associated adjacency matrices. We then compute or give upper bounds for the linear complexity of several classes of graphs.
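
    As a concrete reading of the definition above, the sketch below counts scalar multiplications and additions in the straightforward evaluation of Ax for the adjacency matrix A of a small graph, skipping zero entries and unit coefficients. The helper naive_matvec_cost is a hypothetical name and only gives a crude upper bound on the linear complexity, since the true measure allows reusing intermediate sums.

        import numpy as np

        def naive_matvec_cost(A):
            """Count scalar multiplications and additions in the naive
            evaluation of A @ x for an arbitrary vector x, skipping zeros
            and +/-1 coefficients (an upper bound only, not the exact
            linear complexity, which may reuse intermediate results)."""
            mults = adds = 0
            for row in A:
                nonzero = [a for a in row if a != 0]
                mults += sum(1 for a in nonzero if a not in (1, -1))  # non-unit coefficients
                adds += max(len(nonzero) - 1, 0)                      # summing k terms costs k-1 additions
            return mults, adds

        # adjacency matrix of the 4-cycle C4: only 0/1 entries, so no multiplications
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]])
        print(naive_matvec_cost(A))   # (0, 4)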

    Constraint Complexity of Realizations of Linear Codes on Arbitrary Graphs

    A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear "local constraint" codes to be associated with the edges and vertices, respectively, of the graph. The κ-complexity of a graphical realization is defined to be the largest dimension of any of its local constraint codes. κ-complexity is a reasonable measure of the computational complexity of a sum-product decoding algorithm specified by a graphical realization. The main focus of this paper is on the following problem: given a linear code C and a graph G, how small can the κ-complexity of a realization of C on G be? As useful tools for attacking this problem, we introduce the Vertex-Cut Bound, and the notion of "vc-treewidth" for a graph, which is closely related to the well-known graph-theoretic notion of treewidth. Using these tools, we derive tight lower bounds on the κ-complexity of any realization of C on G. Our bounds enable us to conclude that good error-correcting codes can have low-complexity realizations only on graphs with large vc-treewidth. Along the way, we also prove the interesting result that the ratio of the κ-complexity of the best conventional trellis realization of a length-n code C to the κ-complexity of the best cycle-free realization of C grows at most logarithmically with codelength n. Such a logarithmic growth rate is, in fact, achievable.
    Comment: Submitted to IEEE Transactions on Information Theory
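
    To make the κ-complexity concrete, here is a toy sketch in which each vertex of a hypothetical realization carries a binary local constraint code given by a generator matrix; the κ-complexity is then the largest GF(2) dimension among them. The vertex names and matrices are illustrative assumptions, not a realization of any particular code, and nothing here reproduces the Vertex-Cut Bound or the vc-treewidth machinery of the paper.

        import numpy as np

        def gf2_rank(M):
            """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
            M = [list(row) for row in (M.astype(int) % 2)]
            rank, rows, cols = 0, len(M), len(M[0])
            for c in range(cols):
                pivot = next((r for r in range(rank, rows) if M[r][c]), None)
                if pivot is None:
                    continue
                M[rank], M[pivot] = M[pivot], M[rank]
                for r in range(rows):
                    if r != rank and M[r][c]:
                        M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
                rank += 1
            return rank

        # illustrative local constraint codes attached to three vertices
        local_constraints = {
            "v1": np.array([[1, 0, 1], [0, 1, 1]]),                 # dimension 2
            "v2": np.array([[1, 1, 0, 1]]),                         # dimension 1
            "v3": np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),      # dimension 3
        }

        kappa = max(gf2_rank(G) for G in local_constraints.values())
        print(kappa)   # 3: the kappa-complexity of this toy realization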

    Learning flexible representations of stochastic processes on graphs

    Graph convolutional networks adapt the architecture of convolutional neural networks to learn rich representations of data supported on arbitrary graphs by replacing the convolution operations of convolutional neural networks with graph-dependent linear operations. However, these graph-dependent linear operations are developed for scalar functions supported on undirected graphs. We propose a class of linear operations for stochastic (time-varying) processes on directed (or undirected) graphs to be used in graph convolutional networks. We propose a parameterization of such linear operations using functional calculus to achieve arbitrarily low learning complexity. The proposed approach is shown to model richer behaviors and display greater flexibility in learning representations than product graph methods.
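
    One common way to build such graph-dependent linear operations is as polynomials of a graph shift operator; the sketch below applies a polynomial filter of a directed graph's adjacency matrix to a time-varying signal. This is only a generic illustration of a parameterized linear operation on a directed graph, not the functional-calculus parameterization proposed in the paper.

        import numpy as np

        def polynomial_graph_filter(S, coeffs):
            """H = sum_k coeffs[k] * S**k: a linear operation parameterized by
            a polynomial of the graph shift operator S (here an adjacency
            matrix, possibly non-symmetric for a directed graph)."""
            H = np.zeros_like(S, dtype=float)
            P = np.eye(S.shape[0])
            for c in coeffs:
                H += c * P
                P = P @ S
            return H

        # directed 3-cycle as the shift operator
        S = np.array([[0., 1., 0.],
                      [0., 0., 1.],
                      [1., 0., 0.]])
        H = polynomial_graph_filter(S, coeffs=[0.5, 0.3, 0.2])

        # a time-varying process on the graph: 3 nodes observed over 4 time steps
        rng = np.random.default_rng(0)
        X = rng.standard_normal((3, 4))
        Y = H @ X                        # the same linear operation acts on every time step
        print(Y.shape)                   # (3, 4)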

    Optimal Morphs of Convex Drawings

    We give an algorithm to compute a morph between any two convex drawings of the same plane graph. The morph preserves the convexity of the drawing at any time instant and moves each vertex along a piecewise linear curve with linear complexity. The linear bound is asymptotically optimal in the worst case.
    Comment: To appear in SoCG 201
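
    The sketch below only illustrates what "piecewise linear curve with linear complexity" means for a single vertex: the vertex moves along straight segments between a small number of waypoints. It does not implement the convexity-preserving morphing algorithm of the paper; the waypoints are arbitrary placeholders.

        def morph_position(waypoints, t):
            """Position at time t in [0, 1] along a polyline of waypoints,
            traversed at uniform speed per segment (one linear piece each)."""
            n = len(waypoints) - 1
            if n == 0:
                return waypoints[0]
            s = min(int(t * n), n - 1)      # index of the current segment
            local = t * n - s               # parameter within that segment
            (x0, y0), (x1, y1) = waypoints[s], waypoints[s + 1]
            return (x0 + local * (x1 - x0), y0 + local * (y1 - y0))

        # one vertex moving along three linear pieces (four waypoints)
        path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
        print(morph_position(path, 0.0))    # (0.0, 0.0)
        print(morph_position(path, 0.5))    # (1.0, 0.5)
        print(morph_position(path, 1.0))    # (0.0, 1.0)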

    Large-scale inference and graph theoretical analysis of gene-regulatory networks in B. subtilis

    We present the methods and results of a two-stage modeling process that generates candidate gene-regulatory networks of the bacterium B. subtilis from experimentally obtained, yet mathematically underdetermined microchip array data. By employing a computational, linear correlative procedure to generate these networks, and by analyzing the networks from a graph theoretical perspective, we are able to verify the biological viability of our inferred networks, and we demonstrate that our networks' graph theoretical properties are remarkably similar to those of other biological systems. In addition, by comparing our inferred networks to those of a previous, noisier implementation of the linear inference process [17], we are able to identify trends in graph theoretical behavior that occur both in our networks as well as in their perturbed counterparts. These commonalities in behavior at multiple levels of complexity allow us to ascertain the level of complexity to which our process is robust to noise.
    Comment: 22 pages, 4 figures, accepted for publication in Physica A (2006)
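
    As a generic sketch of the kind of pipeline described, the code below infers a network from an expression matrix by thresholding pairwise Pearson correlations and then reports a few graph-theoretic summaries. The threshold, the synthetic data, and the use of plain correlation are illustrative assumptions; the paper's two-stage procedure and its biological validation are not reproduced here.

        import numpy as np
        import networkx as nx

        def infer_network(expression, threshold=0.8):
            """expression: genes x conditions matrix.  Connect two genes when
            the absolute Pearson correlation of their profiles exceeds the
            threshold (a crude stand-in for a linear correlative inference)."""
            corr = np.corrcoef(expression)
            n = expression.shape[0]
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for i in range(n):
                for j in range(i + 1, n):
                    if abs(corr[i, j]) > threshold:
                        G.add_edge(i, j)
            return G

        rng = np.random.default_rng(1)
        data = rng.standard_normal((20, 8))             # 20 genes, 8 conditions (synthetic)
        G = infer_network(data, threshold=0.8)

        # graph-theoretic properties of the sort examined in the paper
        print(G.number_of_edges())
        print(sorted(d for _, d in G.degree())[-5:])    # five largest degrees
        print(nx.average_clustering(G))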