25,505 research outputs found

    Labeling of graphs, sumset of squares of units modulo n and resonance varieties of matroids

    Get PDF
    This thesis investigates problems in a number of different areas of graph theory and its applications in other areas of mathematics. Motivated by the 1-2-3-Conjecture, we consider the closed distinguishing number of a graph G, denoted by dis[G]. We provide new upper bounds for dis[G] by using the Combinatorial Nullstellensatz. We prove that it is NP-complete to decide, for a given planar subcubic graph G, whether dis[G] = 2. We show that for each integer t there is a bipartite graph G such that dis[G] > t. We then present polynomial-time algorithms and NP-hardness results for the problem of partitioning the edges of a graph into regular and/or locally irregular subgraphs. We then move on to consider Johnson graphs in order to find resonance varieties of some classes of sparse paving matroids. The last application we consider is in number theory, where we find the number of solutions of the equation $x_1^2 + \cdots + x_k^2 = c$, where $c \in \mathbb{Z}_n$ and the $x_i$ are all units in the ring $\mathbb{Z}_n$. Our approach is combinatorial, using spectral graph theory.
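    The number-theoretic result above counts representations of a residue as a sum of squares of units modulo n. The sketch below is a brute-force check of that count for small parameters, not the thesis's combinatorial/spectral method; the values of n, k and c are illustrative.

```python
from itertools import product
from math import gcd

def count_unit_square_solutions(n, k, c):
    """Count tuples (x_1, ..., x_k) of units in Z_n with x_1^2 + ... + x_k^2 = c (mod n).

    Brute force over all unit tuples, so only suitable for small n and k.
    """
    units = [x for x in range(n) if gcd(x, n) == 1]
    return sum(1 for xs in product(units, repeat=k)
               if sum(x * x for x in xs) % n == c % n)

# Example: solutions of x_1^2 + x_2^2 = 2 over the units of Z_8
# (every unit mod 8 squares to 1, so all 16 pairs work).
print(count_unit_square_solutions(8, 2, 2))
```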

    Discovery of low-dimensional structure in high-dimensional inference problems

    Full text link
    Many learning and inference problems involve high-dimensional data such as images, video or genomic data, which cannot be processed efficiently using conventional methods due to their dimensionality. However, high-dimensional data often exhibit an inherent low-dimensional structure; for instance, they can often be represented sparsely in some basis or domain. The discovery of an underlying low-dimensional structure is important for developing more robust and efficient analysis and processing algorithms. The first part of the dissertation investigates the statistical complexity of sparse recovery problems, including sparse linear and nonlinear regression models, feature selection and graph estimation. We present a framework that unifies sparse recovery problems and construct an analogy to channel coding in classical information theory. We perform an information-theoretic analysis to derive bounds on the number of samples required to reliably recover sparsity patterns independent of any specific recovery algorithm. In particular, we show that sample complexity can be tightly characterized using a mutual information formula similar to channel coding results. Next, we derive major extensions to this framework, including dependent input variables and a lower bound for sequential adaptive recovery schemes, which helps determine whether adaptivity provides performance gains. We compute statistical complexity bounds for various sparse recovery problems, showing that our analysis improves upon the existing bounds and leads to intuitive results for new applications. In the second part, we investigate methods for improving the computational complexity of subgraph detection in graph-structured data, where we aim to discover anomalous patterns present in a connected subgraph of a given graph. This problem arises in many applications such as detection of network intrusions, community detection, detection of anomalous events in surveillance videos or disease outbreaks. Since optimization over connected subgraphs is a combinatorial and computationally difficult problem, we propose a convex relaxation that offers a principled approach to incorporating connectivity and conductance constraints on candidate subgraphs. We develop a novel nearly-linear time algorithm to solve the relaxed problem, establish convergence and consistency guarantees and demonstrate its feasibility and performance with experiments on real networks.
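    As a concrete illustration of the sparse recovery setting analyzed in the first part, the sketch below recovers a synthetic k-sparse vector from a small number of random linear measurements using orthogonal matching pursuit. This is a generic illustration under made-up parameters, not the dissertation's information-theoretic analysis.

```python
import numpy as np

# Minimal sparse-recovery sketch: a k-sparse vector in dimension d is recovered
# from m << d random linear measurements via orthogonal matching pursuit.
rng = np.random.default_rng(0)
d, m, k = 200, 60, 5                          # ambient dimension, samples, sparsity
A = rng.standard_normal((m, d)) / np.sqrt(m)  # random measurement matrix
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

support, residual = [], y.copy()
for _ in range(k):
    # Greedily add the column most correlated with the current residual,
    # then refit on the selected support.
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(d)
x_hat[support] = coef
print("support recovered:", set(support) == set(np.flatnonzero(x_true)))
```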

    On combinatorial optimisation in analysis of protein-protein interaction and protein folding networks

    Get PDF
    Abstract: Protein-protein interaction networks and protein folding networks represent prominent research topics at the intersection of bioinformatics and network science. In this paper, we present a study of these networks from a combinatorial optimisation point of view. Using a combination of classical heuristics and stochastic optimisation techniques, we were able to identify several interesting combinatorial properties of the biological networks of the COSIN project. We obtained optimal or near-optimal solutions to the maximum clique and chromatic number problems for these networks. We also explore patterns of both non-overlapping and overlapping cliques in these networks. Optimal or near-optimal solutions to the partitioning of these networks into non-overlapping cliques and to the maximum independent set problem were also found. Maximal cliques are explored by enumerative techniques, and domination in these networks is briefly studied. Applications and extensions of our findings are discussed.
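    The quantities studied above (clique number, chromatic number, independence number, maximal clique enumeration) can be bounded with off-the-shelf routines. The sketch below uses networkx on a small toy graph rather than the COSIN protein networks, which are not reproduced here, and heuristics in place of the paper's optimisation techniques.

```python
import networkx as nx

# Toy graph standing in for a biological network.
G = nx.karate_club_graph()

cliques = list(nx.find_cliques(G))                        # enumerate maximal cliques
max_clique = max(cliques, key=len)
coloring = nx.greedy_color(G, strategy="largest_first")   # heuristic upper bound on chromatic number
mis = nx.maximal_independent_set(G, seed=0)               # maximal (not maximum) independent set

print("clique number >=", len(max_clique))
print("chromatic number <=", max(coloring.values()) + 1)
print("independence number >=", len(mis))
```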

    A Faster Combinatorial Algorithm for Maximum Bipartite Matching

    Full text link
    The maximum bipartite matching problem is among the most fundamental and well-studied problems in combinatorial optimization. A beautiful and celebrated combinatorial algorithm of Hopcroft and Karp (1973) shows that maximum bipartite matching can be solved in $O(m \sqrt{n})$ time on a graph with $n$ vertices and $m$ edges. For the case of very dense graphs, a fast matrix multiplication based approach gives a running time of $O(n^{2.371})$. These results represented the fastest known algorithms for the problem until 2013, when Madry introduced a new approach based on continuous techniques achieving a much faster runtime on sparse graphs. This line of research has culminated in a spectacular recent breakthrough due to Chen et al. (2022) that gives an $m^{1+o(1)}$ time algorithm for maximum bipartite matching (and, more generally, for min-cost flows). This raises a natural question: are continuous techniques essential to obtaining fast algorithms for the bipartite matching problem? Our work makes progress on this question by presenting a new, purely combinatorial algorithm for bipartite matching that runs in $\tilde{O}(m^{1/3}n^{5/3})$ time, and hence outperforms both Hopcroft-Karp and the fast matrix multiplication based algorithms on moderately dense graphs. Using a standard reduction, we also obtain an $\tilde{O}(m^{1/3}n^{5/3})$ time deterministic algorithm for maximum vertex-capacitated $s$-$t$ flow in directed graphs when all vertex capacities are identical.
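    For reference, the classical $O(m\sqrt{n})$ Hopcroft-Karp baseline mentioned above can be run as follows using networkx's implementation on a small random bipartite graph; the new $\tilde{O}(m^{1/3}n^{5/3})$ algorithm of the paper is not implemented here, and the graph parameters are arbitrary.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Random bipartite graph with 50 + 50 vertices and edge probability 0.1.
G = bipartite.random_graph(50, 50, 0.1, seed=1)
left = {v for v, d in G.nodes(data=True) if d["bipartite"] == 0}

# Hopcroft-Karp maximum matching; the returned dict lists each matched edge
# in both directions, so the matching size is half the dict length.
matching = bipartite.hopcroft_karp_matching(G, top_nodes=left)
print("maximum matching size:", len(matching) // 2)
```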

    Combinatorial approach to the interpolation method and scaling limits in sparse random graphs

    Full text link
    We establish the existence of free energy limits for several combinatorial models on the Erdős-Rényi graph $\mathbb{G}(N,\lfloor cN\rfloor)$ and the random $r$-regular graph $\mathbb{G}(N,r)$. For a variety of models, including independent sets, MAX-CUT, coloring and K-SAT, we prove that the free energy, both at positive and zero temperature and appropriately rescaled, converges to a limit as the size of the underlying graph diverges to infinity. In the zero-temperature case, this is interpreted as the existence of the scaling limit for the corresponding combinatorial optimization problem. For example, as a special case we prove that the size of a largest independent set in these graphs, normalized by the number of nodes, converges to a limit w.h.p. This resolves an open problem which was proposed by Aldous (Some open problems) as one of his six favorite open problems. It was also mentioned as an open problem in several other places: Conjecture 2.20 in Wormald [In Surveys in Combinatorics, 1999 (Canterbury) (1999) 239-298, Cambridge Univ. Press]; Bollobás and Riordan [Random Structures Algorithms 39 (2011) 1-38]; Janson and Thomason [Combin. Probab. Comput. 17 (2008) 259-264]; and Aldous and Steele [In Probability on Discrete Structures (2004) 1-72, Springer]. Comment: Published at http://dx.doi.org/10.1214/12-AOP816 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org).
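    The concentration described above can be glimpsed numerically (this is an illustration, not a proof, and a greedy maximal independent set only lower-bounds the true maximum): the normalized size of a greedy independent set in $\mathbb{G}(N,\lfloor cN\rfloor)$ stabilizes as N grows. The value of c below is arbitrary.

```python
import networkx as nx

# Greedy (maximal) independent sets in G(N, floor(c*N)) for growing N;
# the normalized size settles down as N increases.
c = 2.0
for N in (1000, 10000, 100000):
    G = nx.gnm_random_graph(N, int(c * N), seed=0)
    indep = nx.maximal_independent_set(G, seed=0)
    print(N, round(len(indep) / N, 4))
```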

    Structured Sparsity: Discrete and Convex approaches

    Full text link
    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures.
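    A minimal sketch of the convex side of the group sparse model discussed above: the proximal operator of the group-lasso penalty (block soft-thresholding), which keeps or zeroes out an entire group of coefficients at once. The group structure and threshold below are illustrative, not taken from the chapter.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Prox of lam * sum_g ||x_g||_2, applied group by group."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]   # shrink the whole group toward zero
        # otherwise the entire group is set to zero
    return out

x = np.array([3.0, 4.0, 0.1, -0.2, 2.0, -2.0])
groups = [[0, 1], [2, 3], [4, 5]]
print(group_soft_threshold(x, groups, lam=1.0))  # middle group is zeroed out
```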

    Combinatorial theorems relative to a random set

    Get PDF
    We describe recent advances in the study of random analogues of combinatorial theorems. Comment: 26 pages. Submitted to Proceedings of the ICM 201