7 research outputs found

    Nearly-linear monotone paths in edge-ordered graphs

    How long a monotone path can one always find in any edge-ordering of the complete graph K_n? This appealing question was first asked by Chvátal and Komlós in 1971, and has since attracted the attention of many researchers, inspiring a variety of related problems. The prevailing conjecture is that one can always find a monotone path of linear length, but until now the best known lower bound was n^{2/3-o(1)}. In this paper we almost close this gap, proving that any edge-ordering of the complete graph contains a monotone path of length n^{1-o(1)}.
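
    To make the terminology concrete: an edge-ordering of K_n assigns a distinct rank to every edge, and a monotone path is a path whose consecutive edges have strictly increasing ranks. The Python sketch below is unrelated to the paper's proof technique; it only illustrates the definition by brute-forcing the longest monotone path in a random edge-ordering of a small complete graph (the function names and the choice of n = 7 are ours, purely for illustration).

    import random
    from itertools import combinations

    def random_edge_ordering(n, seed=0):
        # Assign the ranks 1..C(n, 2) to the edges of K_n uniformly at random.
        edges = list(combinations(range(n), 2))
        random.Random(seed).shuffle(edges)
        return {frozenset(e): rank for rank, e in enumerate(edges, start=1)}

    def longest_monotone_path(n, order):
        # Brute force over simple paths: exponential in n, intended only for
        # small complete graphs, to show what "monotone path" means.
        best = 0

        def extend(v, last_rank, visited, length):
            nonlocal best
            best = max(best, length)
            for u in range(n):
                if u in visited:
                    continue
                rank = order[frozenset((v, u))]
                if rank > last_rank:
                    extend(u, rank, visited | {u}, length + 1)

        for start in range(n):
            extend(start, 0, {start}, 0)
        return best

    n = 7
    order = random_edge_ordering(n)
    print("longest monotone path:", longest_monotone_path(n, order), "edges")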

    Inverse Graphs Associated with Finite Groups

    A theory of relation learning and cross-domain generalization

    People readily generalize knowledge to novel domains and stimuli. We present a theory, instantiated in a computational model, based on the idea that cross-domain generalization in humans is a case of analogical inference over structured (i.e., symbolic) relational representations. The model is an extension of the LISA and DORA models of relational inference and learning. The resulting model learns both the content and format (i.e., structure) of relational representations from non-relational inputs without supervision; when augmented with the capacity for reinforcement learning, it leverages these representations to learn individual domains and then generalizes to new domains on first exposure (i.e., zero-shot learning) via analogical inference. We demonstrate the capacity of the model to learn structured relational representations from a variety of simple visual stimuli, and to perform cross-domain generalization between video games (Breakout and Pong) and between several psychological tasks. We demonstrate that the model's trajectory closely mirrors the trajectory of children as they learn about relations, accounting for phenomena from the literature on the development of children's reasoning and analogy making. The model's ability to generalize between domains demonstrates the flexibility afforded by representing domains in terms of their underlying relational structure, rather than simply in terms of the statistical relations between their inputs and outputs. Comment: Includes supplemental material.
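
    The abstract's central design point (rules stated over relational structure transfer across domains whose surface features differ) can be illustrated with a toy sketch. This is not the authors' LISA/DORA-based model; all predicate, domain, and function names below are invented solely to show why a policy defined over relations applies unchanged in a new domain.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Relation:
        # A symbolic relational proposition, e.g. left_of(ball, paddle).
        name: str
        subject: str
        obj: str

    def relational_policy(state):
        # The rule mentions only relations, never raw inputs, so it applies in
        # any domain that exposes the same relational structure.
        if Relation("left_of", "ball", "paddle") in state:
            return "move_left"
        if Relation("right_of", "ball", "paddle") in state:
            return "move_right"
        return "stay"

    # Two domains with different raw appearances but shared relational structure.
    breakout_state = {Relation("left_of", "ball", "paddle")}
    pong_state = {Relation("right_of", "ball", "paddle")}

    print(relational_policy(breakout_state))  # move_left
    print(relational_policy(pong_state))      # move_right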