
    Three ways to cover a graph

    We consider the problem of covering an input graph H with graphs from a fixed covering class G. The classical covering number of H with respect to G is the minimum number of graphs from G needed to cover the edges of H without covering non-edges of H. We introduce a unifying notion of three covering parameters with respect to G, two of which are novel concepts only considered in special cases before: the local and the folded covering number. Each parameter measures "how far" H is from G in a different way. Whereas the folded covering number has been investigated thoroughly for some covering classes, e.g., interval graphs and planar graphs, the local covering number has received little attention. We provide new bounds on each covering number with respect to the following covering classes: linear forests, star forests, caterpillar forests, and interval graphs. The classical graph parameters that result this way are interval number, track number, linear arboricity, star arboricity, and caterpillar arboricity. As input graphs we consider graphs of bounded degeneracy, bounded degree, bounded tree-width or bounded simple tree-width, as well as outerplanar, planar bipartite, and planar graphs. For several pairs of an input class and a covering class we determine exactly the maximum ordinary, local, and folded covering number of an input graph with respect to that covering class. Comment: 20 pages, 4 figures
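
    As a concrete illustration of the classical covering number (a hypothetical brute-force sketch, not code from the paper), the following takes star forests as the covering class G. It uses the fact that a graph is a star forest exactly when every edge has an endpoint of degree 1, and then searches for the fewest such subgraphs of H whose edges together are exactly E(H). All function names are invented for this example.

```python
from itertools import combinations

def is_star_forest(edges):
    # A graph is a star forest (every component a star) exactly when
    # every edge has an endpoint of degree 1; this also excludes cycles.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return all(deg[u] == 1 or deg[v] == 1 for u, v in edges)

def covering_number(h_edges, in_class):
    # Minimum number of subgraphs of H that lie in the covering class
    # and whose edges together are exactly E(H).  Since candidates use
    # only edges of H, no non-edge of H is ever covered.
    # Exponential brute force: suitable for tiny graphs only.
    h_edges = [tuple(sorted(e)) for e in h_edges]
    members = [set(s) for r in range(1, len(h_edges) + 1)
               for s in combinations(h_edges, r) if in_class(s)]
    for k in range(1, len(h_edges) + 1):
        for cover in combinations(members, k):
            if set().union(*cover) == set(h_edges):
                return k

# The triangle needs two star forests, e.g. {ab, ac} and {bc}:
print(covering_number([("a", "b"), ("b", "c"), ("a", "c")], is_star_forest))  # 2
```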

    Polynomial growth of concept lattices, canonical bases and generators: extremal set theory in Formal Concept Analysis

    We prove that there exist three distinct, comprehensive classes of (formal) contexts with polynomially many concepts, namely: contexts which are nowhere dense, of bounded breadth, or highly convex. The notion of breadth of a lattice is already present in G. Birkhoff's classic monograph; it equals the number of atoms of a largest Boolean suborder. Even though it is natural to define the breadth of a context as that of its concept lattice, this idea had not been exploited before. We do so and establish many equivalences. Amongst them, it is shown that the breadth of a context equals the size of its largest minimal generator, the size of its largest contranominal-scale subcontext, as well as the Vapnik-Chervonenkis dimension of both its system of extents and its system of intents. The polynomiality of the aforementioned classes is proven via upper bounds (also known as majorants) for the number of maximal bipartite cliques in bipartite graphs, results obtained by various authors over the last decades. The fact that they yield statements about formal contexts is a reward for investigating how two established fields, Formal Concept Analysis (FCA) and graph theory, interact.

    We considerably improve the breadth bound. The improvement is twofold: besides giving a much tighter expression, we prove that it limits the number of minimal generators. This is strictly more general than upper bounding the quantity of concepts: it automatically implies a bound on the number of concepts, as well as on the number of proper premises. A corollary is that this improved result also bounds the number of implications in the canonical basis. With respect to the quantity of concepts, this sharper majorant is shown to be best possible, which is established by constructing contexts whose concept lattices exhibit exactly that many elements. These structures are termed, respectively, extremal contexts and extremal lattices. The usual procedure of taking the standard context allows one to work interchangeably with either of these two extremal structures. Extremal lattices are equivalently defined as finite lattices which have as many elements as possible subject to two upper limits: one on the number of join-irreducibles, the other on the breadth.

    Subsequently, these structures are characterized in two ways. Our first characterization takes the lattice perspective. Initially, we construct extremal lattices by the iterated operation of finding smaller extremal subsemilattices and duplicating their elements. Then, it is shown that every extremal lattice must be obtained through a recursive application of this construction principle. A byproduct of this contribution is that extremal lattices are always meet-distributive. Although this approach is revealing, its vicinity contains relevant combinatorial questions that remain unanswered. Most notably, the number of meet-irreducibles of extremal lattices escapes control when this construction is conducted. Aiming to get a grip on the number of meet-irreducibles, we prove an alternative characterization of these structures. This second approach is based on implication logic and exposes an interesting link between the numbers of proper premises, pseudo-extents, and concepts. A guiding idea in this scenario is to use implications to construct lattices. It turns out that constructing extremal structures with this method is simpler, in the sense that a recursive application of the construction principle is not needed. Moreover, we obtain with ease a general, explicit formula for the Whitney numbers of extremal lattices, which reveals that they are unimodal, too. Like the first, this second construction method is shown to be characteristic. A particular case of the construction is able to force, with precision, a high number (in the sense of "exponentially many") of meet-irreducibles.

    This occasional explosion of meet-irreducibles motivates a generalization of the notion of extremal lattices, achieved by considering a more refined partition of the class of all finite lattices. In this finer-grained setting, each extremal class consists of lattices with bounded breadth, number of join-irreducibles, and number of meet-irreducibles. The generalized problem of finding the maximum number of concepts reveals itself to be challenging. Instead of attempting to classify these structures completely, we pose questions inspired by Turán's seminal result in extremal combinatorics. Most prominently: do extremal lattices (in this more general sense) have the maximum permitted breadth? We show a general statement in this setting: for every choice of limits (breadth, number of join-irreducibles, and number of meet-irreducibles), we produce some extremal lattice with the maximum permitted breadth. The tools which underpin the intuitions in this scenario are hypergraphs and exact set covers. In a rather unexpected turn of events, we also obtain for free a simple but interesting theorem about the general existence of "rich" subcontexts; precisely: every context contains an object/attribute pair whose removal results in a context with at least half the original number of concepts.
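
    To make the basic objects concrete, here is a minimal, hypothetical sketch (not from the thesis) that enumerates the formal concepts of a tiny context by closing every attribute subset. The example context is a contranominal scale of dimension 3, so its concept lattice is Boolean with 2^3 = 8 elements and its breadth is 3; all names are illustrative.

```python
from itertools import combinations

# A tiny context (cross table): each object mapped to its attributes.
# Every object lacks exactly one attribute: a contranominal scale.
context = {"g1": {"a", "b"}, "g2": {"a", "c"}, "g3": {"b", "c"}}
attributes = sorted(set().union(*context.values()))

def extent(attrs):
    # All objects possessing every attribute in attrs.
    return frozenset(g for g, row in context.items() if attrs <= row)

def intent(objs):
    # All attributes shared by every object in objs.
    rows = [context[g] for g in objs]
    return frozenset(set.intersection(*rows)) if rows else frozenset(attributes)

# Every intent is the closure of some attribute set, so enumerating the
# closures of all attribute subsets yields all formal concepts.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        e = extent(set(attrs))
        concepts.add((e, intent(e)))
print(len(concepts))  # 8 = 2^3 concepts, matching breadth 3
```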

    Graph Decompositions


    Extremal Problems For Transversals In Graphs With Bounded Degree

    We introduce and discuss generalizations of the problem of independent transversals. Given a graph property ℛ, we investigate whether any graph of maximum degree at most d with a vertex partition into classes of size at least p admits a transversal having property ℛ. In this paper we study this problem for the following properties ℛ: "acyclic", "H-free", and "having connected components of order at most r". We strengthen a result of [13]. We prove that if the vertex set of a d-regular graph is partitioned into classes of size d + ⌊d/r⌋, then it is possible to select a transversal inducing vertex-disjoint trees on at most r vertices. Our approach applies appropriate triangulations of the simplex and Sperner's Lemma. We also establish some limitations on the power of this topological method. We give constructions of vertex-partitioned graphs admitting no independent transversal, which partially settle an old question of Bollobás, Erdős and Szemerédi. An extension of this construction provides vertex-partitioned graphs of small degree such that every transversal contains a fixed graph H as a subgraph. Finally, we pose several open questions.
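
    The underlying search problem can be stated very concretely. Below is a minimal, hypothetical brute-force sketch (not the paper's topological method, which relies on Sperner's Lemma): it tries every transversal of a vertex partition and tests a supplied property, here independence. All names are illustrative.

```python
from itertools import product

def transversal_with_property(parts, edges, has_property):
    # Exhaustive search over all transversals (one vertex per class);
    # exponential in the number of classes, illustration only.
    for choice in product(*parts):
        chosen = set(choice)
        induced = [e for e in edges if set(e) <= chosen]
        if has_property(chosen, induced):
            return choice
    return None

# "Independent transversal": the induced subgraph has no edges at all.
independent = lambda vertices, induced_edges: not induced_edges

# The 4-cycle v0-v1-v2-v3-v0, partitioned into {v0, v1} and {v2, v3}:
parts = [["v0", "v1"], ["v2", "v3"]]
edges = [("v0", "v1"), ("v1", "v2"), ("v2", "v3"), ("v3", "v0")]
print(transversal_with_property(parts, edges, independent))  # ('v0', 'v2')
```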

    Decomposing cubic graphs into isomorphic linear forests

    A common problem in graph colouring seeks to decompose the edge set of a given graph into few similar and simple subgraphs, under certain divisibility conditions. In 1987 Wormald conjectured that the edges of every cubic graph on 4n vertices can be partitioned into two isomorphic linear forests. We prove this conjecture for large connected cubic graphs. Our proof uses a wide range of probabilistic tools in conjunction with intricate structural analysis, and introduces a variety of local recolouring techniques. Comment: 49 pages, many figures
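
    For intuition, here is a minimal brute-force sketch (illustrative only, not the paper's proof technique). On K4, the smallest cubic graph with 4n vertices (n = 1), it finds a partition of the edge set into two linear forests with the same multiset of path lengths, i.e., two isomorphic linear forests.

```python
from itertools import combinations

def linear_forest_profile(edges):
    # Returns the sorted component edge-counts if `edges` forms a
    # linear forest (max degree 2, acyclic), otherwise None.
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    deg = {v: 0 for v in verts}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if deg[u] > 2 or deg[v] > 2 or find(u) == find(v):
            return None  # a vertex of degree 3+, or a closed cycle
        parent[find(u)] = find(v)
    sizes = {}
    for u, v in edges:
        sizes[find(u)] = sizes.get(find(u), 0) + 1
    return sorted(sizes.values())

k4 = list(combinations(range(4), 2))  # 6 edges, 3-regular
for half in combinations(k4, 3):
    rest = [e for e in k4 if e not in half]
    p, q = linear_forest_profile(half), linear_forest_profile(rest)
    if p is not None and p == q:  # isomorphic: equal path-length profiles
        print(half, rest)         # e.g. two paths on 4 vertices each
        break
```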

    Graph Theory

    Graph theory is a rapidly developing area of mathematics. Recent years have seen the development of deep theories, and the increasing importance of methods from other parts of mathematics. The workshop on Graph Theory brought together a broad range of researchers to discuss some of the major new developments. There were three central themes, each of which has seen striking recent progress: the structure of graphs with forbidden subgraphs; graph minor theory; and applications of the entropy compression method. The workshop featured major talks on current work in these areas, as well as presentations of recent breakthroughs and connections to other areas. There was a particularly exciting selection of longer talks, including presentations on the structure of graphs with forbidden induced subgraphs, embedding simply connected 2-complexes in 3-space, and an announcement of the solution of the well-known Oberwolfach Problem.

    Dependent k-Set Packing on Polynomoids

    Specialized hereditary systems, e.g., matroids, are known to have many applications in algorithm design. We define a new notion called d-polynomoid as a hereditary system (E, ℱ ⊆ 2^E) such that every two maximal sets in ℱ have fewer than d elements in common. We study the problem that asks, given a d-polynomoid (E, ℱ), whether the ground set E contains ℓ disjoint k-subsets that are not in ℱ, and we obtain a complexity trichotomy for all pairs of k ≥ 1 and d ≥ 0. Our algorithmic result yields a necessary and sufficient condition deciding whether each hypergraph in certain classes of r-uniform hypergraphs has a perfect matching, which has a number of algorithmic applications.
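
    As a reading aid, here is a small, hypothetical checker for the stated definition (assuming "hereditary" means closed under taking subsets, as for matroids). It is illustrative only and exponential in the size of the family.

```python
from itertools import combinations

def is_d_polynomoid(family, d):
    # Checks the definition as stated in the abstract: the family is
    # hereditary (closed under taking subsets) and every two distinct
    # maximal members share fewer than d elements.  Brute force only.
    fam = {frozenset(s) for s in family}
    for s in fam:
        for r in range(len(s)):
            if any(frozenset(t) not in fam for t in combinations(s, r)):
                return False  # not hereditary
    maximal = [s for s in fam if not any(s < t for t in fam)]
    return all(len(a & b) < d for a, b in combinations(maximal, 2))

# Subsets of {1, 2} together with subsets of {3, 4}: the two maximal
# members are disjoint, so this family is a 1-polynomoid.
family = [set(), {1}, {2}, {3}, {4}, {1, 2}, {3, 4}]
print(is_d_polynomoid(family, 1))  # True
```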