    Maximum matching width: new characterizations and a fast algorithm for dominating set

    We give alternative definitions for maximum matching width, e.g. a graph $G$ has $\operatorname{mmw}(G) \leq k$ if and only if it is a subgraph of a chordal graph $H$ and for every maximal clique $X$ of $H$ there exist $A, B, C \subseteq X$ with $A \cup B \cup C = X$ and $|A|, |B|, |C| \leq k$ such that any subset of $X$ that is a minimal separator of $H$ is a subset of either $A$, $B$ or $C$. Treewidth and branchwidth have alternative definitions through intersections of subtrees, where treewidth focuses on nodes and branchwidth focuses on edges. We show that mm-width combines both aspects, focusing on nodes and on edges. Based on this we prove that given a graph $G$ and a branch decomposition of mm-width $k$ we can solve Dominating Set in time $O^*(8^k)$, thereby beating $O^*(3^{\operatorname{tw}(G)})$ whenever $\operatorname{tw}(G) > \log_3 8 \times k \approx 1.893k$. Note that $\operatorname{mmw}(G) \leq \operatorname{tw}(G)+1 \leq 3\operatorname{mmw}(G)$ and these inequalities are tight. Given only the graph $G$ and using the best known algorithms to find decompositions, maximum matching width will be better for solving Dominating Set whenever $\operatorname{tw}(G) > 1.549 \times \operatorname{mmw}(G)$.
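
    To see where the stated threshold comes from, one can compare the two running-time bounds directly; this one-line derivation (not part of the abstract) reproduces the $\approx 1.893k$ crossover: $O^*(8^k)$ beats $O^*(3^{\operatorname{tw}(G)})$ exactly when

        $8^k < 3^{\operatorname{tw}(G)} \iff k \log 8 < \operatorname{tw}(G) \log 3 \iff \operatorname{tw}(G) > (\log_3 8)\, k \approx 1.893\, k.$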

    Finding branch-decompositions of matroids, hypergraphs, and more

    Given $n$ subspaces of a finite-dimensional vector space over a fixed finite field $\mathcal F$, we wish to find a "branch-decomposition" of these subspaces of width at most $k$, that is, a subcubic tree $T$ with $n$ leaves mapped bijectively to the subspaces such that for every edge $e$ of $T$, the sum of subspaces associated with leaves in one component of $T-e$ and the sum of subspaces associated with leaves in the other component have an intersection of dimension at most $k$. This problem includes the problems of computing branch-width of $\mathcal F$-represented matroids, rank-width of graphs, branch-width of hypergraphs, and carving-width of graphs. We present a fixed-parameter algorithm to construct such a branch-decomposition of width at most $k$, if it exists, for input subspaces of a finite-dimensional vector space over $\mathcal F$. Our algorithm is analogous to the algorithm of Bodlaender and Kloks (1996) on tree-width of graphs. To extend their framework to branch-decompositions of vector spaces, we developed highly generic tools for branch-decompositions on vector spaces. The only previously known fixed-parameter algorithm for branch-width of $\mathcal F$-represented matroids, due to Hliněný and Oum (2008), runs in time $O(n^3)$, where $n$ is the number of elements of the input $\mathcal F$-represented matroid, but their method is highly indirect: it uses the non-trivial fact by Geelen et al. (2003) that the number of forbidden minors is finite, together with the algorithm of Hliněný (2005) for checking monadic second-order formulas on $\mathcal F$-represented matroids of small branch-width. Our result does not depend on such a fact and is completely self-contained, and yet matches their asymptotic running time for each fixed $k$. Comment: 73 pages, 10 figures.
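
    While finding an optimal decomposition is the hard part, the width of a given decomposition is easy to evaluate. The following sketch (not the paper's fixed-parameter algorithm) computes the width of a given branch-decomposition over $\mathcal F = \mathrm{GF}(2)$, with subspaces given by spanning vectors encoded as bitmask integers; the helper names (gf2_rank, intersection_dim, width) are illustrative.

        from itertools import chain

        def gf2_rank(vectors):
            # Gaussian elimination over GF(2); vectors are coordinate bitmasks.
            basis = {}                      # pivot position -> reduced vector
            for v in vectors:
                while v:
                    p = v.bit_length() - 1  # leading coordinate of v
                    if p not in basis:
                        basis[p] = v
                        break
                    v ^= basis[p]
            return len(basis)

        def intersection_dim(span_u, span_w):
            # dim(U ∩ W) = dim U + dim W - dim(U + W) for subspaces U, W.
            return gf2_rank(span_u) + gf2_rank(span_w) - gf2_rank(span_u + span_w)

        def width(tree_adj, leaf_subspace):
            # tree_adj: adjacency dict of a subcubic tree whose leaves are the
            # keys of leaf_subspace (leaf -> list of spanning vectors).
            def side_vectors(root, blocked):
                # Spanning vectors of all leaves in the component of T - e that
                # contains `root`, where e is the edge {root, blocked}.
                stack, seen, vecs = [root], {root, blocked}, []
                while stack:
                    x = stack.pop()
                    vecs.extend(leaf_subspace.get(x, []))
                    for y in tree_adj[x]:
                        if y not in seen:
                            seen.add(y)
                            stack.append(y)
                return vecs

            return max(intersection_dim(side_vectors(a, b), side_vectors(b, a))
                       for a in tree_adj for b in tree_adj[a] if a < b)

        # Tiny example: four 1-dimensional subspaces of GF(2)^3 at leaves 0..3,
        # internal nodes 4 and 5; the resulting width is 1.
        tree = {0: [4], 1: [4], 2: [5], 3: [5], 4: [0, 1, 5], 5: [2, 3, 4]}
        subs = {0: [0b001], 1: [0b010], 2: [0b011], 3: [0b100]}
        print(width(tree, subs))  # -> 1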

    Deciding whether there are infinitely many prime graphs with forbidden induced subgraphs

    A homogeneous set of a graph G is a set X of vertices such that 2 ≤ |X| < |V(G)| and no vertex in V(G)−X has both a neighbor and a non-neighbor in X. A graph is prime if it has no homogeneous set. We present an algorithm to decide whether a class of graphs given by a finite set of forbidden induced subgraphs contains infinitely many non-isomorphic prime graphs.
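
    A brute-force rendering of these definitions (not the paper's algorithm, which reasons about whole graph classes rather than single graphs) may help fix them: a graph is prime exactly when no vertex subset X with 2 ≤ |X| < |V(G)| is homogeneous.

        from itertools import combinations

        def is_homogeneous(adj, X):
            # X is homogeneous if every vertex outside X is adjacent to either
            # all of X or none of X.
            return all(len(adj[v] & X) in (0, len(X)) for v in set(adj) - X)

        def is_prime(adj):
            # adj: dict mapping each vertex to its set of neighbors.
            vertices = list(adj)
            return not any(is_homogeneous(adj, set(X))
                           for size in range(2, len(vertices))
                           for X in combinations(vertices, size))

        # The path a-b-c-d is prime; the 4-cycle is not, because each pair of
        # non-adjacent vertices of C4 forms a homogeneous set.
        p4 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
        c4 = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
        print(is_prime(p4), is_prime(c4))  # -> True False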

    Scaling Law for Recommendation Models: Towards General-purpose User Representations

    Recent advances in large-scale pretrained models such as BERT, GPT-3, CLIP, and Gopher have shown astonishing achievements across various task domains. Unlike vision recognition and language models, general-purpose user representation at scale remains underexplored. Here we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scales. We demonstrate that the scaling law is present in user representation learning areas, where the training error scales as a power-law with the amount of computation. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings stretch our expectations of what is possible in various downstream tasks. CLUE also shows great transferability to other domains and companies, as an online experiment shows significant improvements in Click-Through Rate (CTR). Furthermore, we investigate how the model performance is influenced by scale factors such as training data size, model capacity, sequence length, and batch size. Finally, we discuss the broader impacts of CLUE in general. Comment: Accepted at AAAI 2023. This version includes the technical appendix.
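
    The power-law claim can be illustrated with a tiny curve-fitting sketch: a power law $L(C) = a \cdot C^{-b}$ is a straight line in log-log coordinates, so ordinary least squares recovers the exponent. The compute/error numbers below are made up for illustration and are not the paper's measurements.

        import numpy as np

        # Hypothetical (compute, training-error) pairs, NOT data from the paper.
        compute = np.array([1e16, 1e17, 1e18, 1e19, 1e20])
        error = np.array([2.10, 1.55, 1.17, 0.88, 0.66])

        # log L = log a - b * log C, so a degree-1 fit gives slope = -b.
        slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
        a, b = np.exp(intercept), -slope
        print(f"L(C) ≈ {a:.3g} * C^(-{b:.3f})")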

    Deformable Graph Transformer

    Full text link
    Transformer-based models have recently shown success in representation learning on graph-structured data beyond natural language processing and computer vision. However, this success is limited to small-scale graphs due to the drawbacks of full dot-product attention on graphs, such as quadratic complexity with respect to the number of nodes and message aggregation from an enormous number of irrelevant nodes. To address these issues, we propose the Deformable Graph Transformer (DGT), which performs sparse attention via dynamically sampled relevant nodes to efficiently handle large-scale graphs with complexity linear in the number of nodes. Specifically, our framework first constructs multiple node sequences with various criteria to consider both structural and semantic proximity. Then, combined with our learnable Katz Positional Encodings, sparse attention is applied to the node sequences to learn node representations at significantly reduced computational cost. Extensive experiments demonstrate that DGT achieves state-of-the-art performance on 7 graph benchmark datasets with 2.5 to 449 times less computational cost than transformer-based graph models with full attention. Comment: 16 pages, 3 figures.
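
    A toy numpy sketch of the core idea, sparse attention over a small sampled set of candidate nodes per query, is given below. It is not the DGT architecture: the sampling rule and projection sizes are simplified, the learnable Katz Positional Encodings are omitted, and every name in it is illustrative.

        import numpy as np

        def sampled_sparse_attention(X, candidates, d_k=16, m=4, seed=0):
            # X: (n, d) node features; candidates[i]: ids of nodes deemed relevant to node i.
            # Each query attends to at most m sampled candidates instead of all n
            # nodes, so the overall cost is O(n * m * d) rather than O(n^2 * d).
            rng = np.random.default_rng(seed)
            n, d = X.shape
            Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d_k)) for _ in range(3))
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            out = np.zeros((n, d_k))
            for i in range(n):
                cand = np.asarray(candidates[i])
                idx = rng.choice(cand, size=min(m, len(cand)), replace=False)
                scores = Q[i] @ K[idx].T / np.sqrt(d_k)
                weights = np.exp(scores - scores.max())
                weights /= weights.sum()
                out[i] = weights @ V[idx]
            return out

        # Toy usage: 1000 nodes, each with 8 pre-selected candidate nodes.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 32))
        candidates = [rng.choice(1000, size=8, replace=False) for _ in range(1000)]
        print(sampled_sparse_attention(X, candidates).shape)  # -> (1000, 16)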