
    Largest sparse subgraphs of random graphs

    For the Erdős–Rényi random graph G(n,p), we give a precise asymptotic formula for the size of a largest vertex subset in G(n,p) that induces a subgraph with average degree at most t, provided that p = p(n) is not too small and t = t(n) is not too large. In the case of fixed t and p, we find that this value is asymptotically almost surely concentrated on at most two explicitly given points. This generalises a result on the independence number of random graphs. For both the upper and lower bounds, we rely on large deviation inequalities for the binomial distribution.
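
    The large-deviation machinery mentioned in the last sentence is, at its core, the Chernoff bound for the binomial distribution; the following is a minimal Python sketch (the function name and example parameters are ours, not the paper's):

        import math

        def binomial_upper_tail_bound(n, p, a):
            """Chernoff bound: P(Bin(n, p) >= a*n) <= exp(-n * KL(a || p))
            for p < a < 1, where KL is the binary relative entropy."""
            kl = a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))
            return math.exp(-n * kl)

        # e.g. the chance that Bin(1000, 0.5) yields at least 600 successes
        print(binomial_upper_tail_bound(1000, 0.5, 0.6))  # about 1.8e-9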

    Spectral and Dynamical Properties in Classes of Sparse Networks with Mesoscopic Inhomogeneities

    We study the structure, eigenvalue spectra and diffusion dynamics in a wide class of networks with subgraphs (modules) at the mesoscopic scale. The networks are grown within a model with three parameters controlling the number of modules, their internal structure as scale-free and correlated subgraphs, and the topology of the connecting network. Through an exhaustive spectral analysis of both the adjacency matrix and the normalized Laplacian matrix, we identify the spectral properties that characterize the mesoscopic structure of sparse cyclic graphs and trees. The minimally connected nodes, clustering, and the average connectivity affect the central part of the spectrum. The number of distinct modules leads to an extra peak at the lower part of the Laplacian spectrum in cyclic graphs; no such peak occurs when topologically distinct tree-subgraphs are connected on a tree, although the associated eigenvectors remain localized on the subgraphs both in trees and cyclic graphs. We also find a characteristic pattern of periodic localization along the chains on the tree for the eigenvector components associated with the largest Laplacian eigenvalue, equal to 2. We corroborate these results with simulations of random walks on several types of networks. Our results for the distribution of the return time of the walk to the origin (the autocorrelator) agree well with a recent analytical solution for trees, and this distribution appears to be independent of their mesoscopic and global structure. For cyclic graphs we find a new result: the stretching exponent of the tail of the distribution is twice as large and is virtually independent of the size of the cycles. Modularity and clustering contribute to a power-law decay at short return times.
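
    The low-lying Laplacian peak that counts the modules is easy to observe on a toy modular graph; a minimal sketch with networkx and numpy (the planted-partition graph here is a stand-in with mesoscopic structure, not the paper's growth model):

        import networkx as nx
        import numpy as np

        # A stand-in modular network: 4 dense modules, sparse links between them.
        G = nx.planted_partition_graph(l=4, k=50, p_in=0.2, p_out=0.002, seed=1)
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

        # Spectrum of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
        lam = np.sort(np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray()))

        # With m well-separated modules, m - 1 eigenvalues sit near 0 in addition
        # to the trivial zero eigenvalue -- the extra low-lying peak.
        print("smallest eigenvalues:", lam[:6])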

    On rigidity, orientability and cores of random graphs with sliders

    Suppose that you add rigid bars between points in the plane, and suppose that a constant fraction q of the points moves freely in the whole plane; the remaining fraction is constrained to move on fixed lines called sliders. When does a giant rigid cluster emerge? Under a genericity condition, the answer depends only on the graph formed by the points (vertices) and the bars (edges). For the random graph G ∈ G(n, c/n) we find the threshold value of c for the appearance of a linear-sized rigid component as a function of q, generalizing results of Kasiviswanathan et al. We show that this appearance of a giant component undergoes a continuous transition for q ≤ 1/2 and a discontinuous transition for q > 1/2. In our proofs, we introduce a generalized notion of orientability interpolating between 1- and 2-orientability, of cores interpolating between the 2-core and the 3-core, and of extended cores interpolating between the 2+1-core and the 3+2-core; we find the precise expressions for the respective thresholds and the sizes of the different cores above the threshold. In particular, this proves a conjecture of Kasiviswanathan et al. about the size of the 3+2-core. We also derive some structural properties of rigidity with sliders (a matroid structure and a decomposition into components) which may be of independent interest.
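
    The generalized cores in the abstract interpolate between the classical 2-core and 3-core; at the classical endpoint, the discontinuous emergence of the 3-core in G(n, c/n), which is known to occur around c ≈ 3.35, can be watched directly with networkx. A toy illustration of the classical core, not the paper's interpolated cores:

        import networkx as nx

        n = 20000
        for c in [3.0, 3.2, 3.35, 3.5]:
            G = nx.gnp_random_graph(n, c / n, seed=0)
            # The 3-core jumps from empty to a linear fraction near c = 3.35.
            frac = nx.k_core(G, k=3).number_of_nodes() / n
            print(f"c = {c}: the 3-core holds {frac:.3f} of the vertices")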

    Combinatorial theorems relative to a random set

    We describe recent advances in the study of random analogues of combinatorial theorems.

    Syntactic Separation of Subset Satisfiability Problems

    Variants of the Exponential Time Hypothesis (ETH) have been used to derive lower bounds on the time complexity of certain problems, so that the hardness results match long-standing algorithmic results. In this paper, we consider a syntactically defined class of problems, and give conditions for when problems in this class require strongly exponential time to approximate to within a factor of (1-epsilon) for some constant epsilon > 0, assuming the Gap Exponential Time Hypothesis (Gap-ETH), versus when they admit a PTAS. Our class includes a rich set of problems from additive combinatorics, computational geometry, and graph theory. Our hardness results also match the best known algorithmic results for these problems.

    Sparse Learning over Infinite Subgraph Features

    We present a supervised learning algorithm for graph data (a set of graphs) that handles arbitrary twice-differentiable loss functions and sparse linear models over all possible subgraph features. To date, it has been shown that several types of sparse learning, such as AdaBoost, LPBoost, LARS/LASSO, and sparse PLS regression, can be performed over all possible subgraph features. Particular emphasis is placed on the simultaneous learning of relevant features from an infinite set of candidates. We first generalize the techniques used in these preceding studies to derive a unifying bounding technique for arbitrary separable functions. We then carefully use this bound to make block coordinate gradient descent feasible over infinite subgraph features, resulting in a fast-converging algorithm that can solve a wider class of sparse learning problems over graph data. We also empirically study the differences from existing approaches in convergence properties, selected subgraph features, and search-space sizes. We further discuss several previously unnoticed issues in sparse learning over all possible subgraph features.
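
    The bounding step — pruning whole subtrees of the (conceptually infinite) subgraph-feature search space whenever an upper bound shows no descendant can improve the current best candidate — can be sketched in the abstract. Everything below (the synthetic tree, scores, and bound) is our illustration of that pruning pattern, not the paper's actual subgraph miner or bound:

        import heapq

        def best_feature(root, score, upper_bound, children):
            """Best-first search that prunes subtrees whose upper bound
            cannot beat the best score found so far."""
            best, best_node = float("-inf"), None
            heap = [(-upper_bound(root), root)]
            while heap:
                neg_bound, node = heapq.heappop(heap)
                if -neg_bound <= best:      # bound says: prune this subtree
                    continue
                if score(node) > best:
                    best, best_node = score(node), node
                for child in children(node):
                    if upper_bound(child) > best:
                        heapq.heappush(heap, (-upper_bound(child), child))
            return best_node, best

        # Synthetic stand-in: features are 0/1 tuples grown to depth 10;
        # the bound adds the best possible gain of all remaining extensions.
        MAX_DEPTH = 10
        children = lambda v: [v + (b,) for b in (0, 1)] if len(v) < MAX_DEPTH else []
        score = lambda v: sum(v) - 0.3 * len(v)
        bound = lambda v: score(v) + 0.7 * (MAX_DEPTH - len(v))
        print(best_feature((), score, bound, children))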