194 research outputs found

    On Frankl and Furedi's conjecture for 3-uniform hypergraphs

    The Lagrangian of a hypergraph has been a useful tool in hypergraph extremal problems. In most applications, we need an upper bound for the Lagrangian of a hypergraph. Frankl and Füredi in \cite{FF} conjectured that the $r$-graph with $m$ edges formed by taking the first $m$ sets in the colex ordering of $\mathbb{N}^{(r)}$ has the largest Lagrangian of all $r$-graphs with $m$ edges. In this paper, we give some partial results for this conjecture.
    Comment: 19 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:1211.650
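    For reference (a standard definition, not quoted from the abstract above), the Lagrangian of an $r$-uniform hypergraph $G$ on vertex set $\{1,\dots,n\}$ with edge set $E(G)$ is
        \lambda(G) = \max\Big\{ \sum_{e\in E(G)} \prod_{i\in e} x_i \;:\; x_i \ge 0,\ \sum_{i=1}^{n} x_i = 1 \Big\},
    the maximum of the edge polynomial over the standard simplex; the conjecture says that, among all $r$-graphs with $m$ edges, this maximum is largest for the colex-initial family.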

    On hypergraph Lagrangians

    It is conjectured by Frankl and Füredi in \cite{FF} that the $r$-uniform hypergraph with $m$ edges formed by taking the first $m$ sets in the colex ordering of $\mathbb{N}^{(r)}$ has the largest Lagrangian of all $r$-uniform hypergraphs with $m$ edges. Motzkin and Straus' theorem confirms this conjecture when $r=2$. For $r=3$, Talbot showed in \cite{T} that this conjecture is true when $m$ is in certain ranges. In this paper, we explore the connection between the clique number and Lagrangians for $r$-uniform hypergraphs. As an implication of this connection, we prove that the $r$-uniform hypergraph with $m$ edges formed by taking the first $m$ sets in the colex ordering of $\mathbb{N}^{(r)}$ has the largest Lagrangian of all $r$-uniform hypergraphs with $t$ vertices and $m$ edges satisfying ${t-1\choose r}\leq m \leq {t-1\choose r}+{t-2\choose r-1}-\left[(2r-6)\times 2^{r-1}+2^{r-3}+(r-4)(2r-7)-1\right]\left({t-2\choose r-2}-1\right)$ for $r\geq 4$.
    Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1312.7529, arXiv:1211.7057, arXiv:1211.6508, arXiv:1311.140
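    For context (a classical result, not part of the abstract), the Motzkin–Straus theorem states that for an ordinary graph $G$ (the case $r=2$) with clique number $\omega(G)$,
        \lambda(G) = \frac{1}{2}\Big(1 - \frac{1}{\omega(G)}\Big),
    attained by distributing the weight uniformly over a maximum clique. Since a graph with ${k\choose 2}\leq m<{k+1\choose 2}$ edges cannot contain $K_{k+1}$, its Lagrangian is at most $\frac{1}{2}\big(1-\frac{1}{k}\big)$, and the colex-initial graph with $m$ edges attains this bound through its copy of $K_k$; this is how the theorem settles the $r=2$ case mentioned above.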

    Organically Structured Carbon Nanotubes for Fluorescence


    Parameter-Efficient Tuning Makes a Good Classification Head

    In recent years, pretrained models have revolutionized the paradigm of natural language understanding (NLU): we append a randomly initialized classification head to the pretrained backbone, e.g. BERT, and finetune the whole model. Since the pretrained backbone accounts for most of the improvement, we naturally expect that a good pretrained classification head can also benefit training. However, the final-layer output of the backbone, i.e. the input of the classification head, changes greatly during finetuning, making the usual head-only pretraining (LP-FT) ineffective. In this paper, we find that parameter-efficient tuning makes a good classification head, with which we can simply replace the randomly initialized head for a stable performance gain. Our experiments demonstrate that a classification head jointly pretrained with parameter-efficient tuning consistently improves performance on 9 tasks in GLUE and SuperGLUE.
    Comment: Accepted as a long paper to EMNLP 2022 Main Conference
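    A minimal PyTorch sketch of the two-stage recipe described above (an illustrative toy, not the authors' implementation; the toy backbone, the bottleneck adapter, the hidden size, and the training loops are all assumptions made for the example):

    import copy
    import torch
    import torch.nn as nn

    HIDDEN, NUM_LABELS = 768, 2                   # assumed sizes, chosen only for illustration

    class ToyBackbone(nn.Module):
        """Stand-in for a pretrained encoder such as BERT; maps a pooled feature vector to a pooled feature vector."""
        def __init__(self, hidden=HIDDEN):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                         nn.Linear(hidden, hidden))
        def forward(self, x):
            return self.encoder(x)

    class Adapter(nn.Module):
        """Tiny residual bottleneck adapter: the only backbone-side module trained in stage 1."""
        def __init__(self, hidden=HIDDEN, bottleneck=64):
            super().__init__()
            self.down = nn.Linear(hidden, bottleneck)
            self.up = nn.Linear(bottleneck, hidden)
        def forward(self, h):
            return h + self.up(torch.relu(self.down(h)))

    backbone, adapter, head = ToyBackbone(), Adapter(), nn.Linear(HIDDEN, NUM_LABELS)
    x = torch.randn(8, HIDDEN)                    # dummy batch of pooled features
    y = torch.randint(0, NUM_LABELS, (8,))        # dummy labels
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: parameter-efficient tuning -- backbone frozen, adapter and head trained together.
    for p in backbone.parameters():
        p.requires_grad = False
    opt1 = torch.optim.AdamW(list(adapter.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(10):
        loss = loss_fn(head(adapter(backbone(x))), y)
        opt1.zero_grad(); loss.backward(); opt1.step()

    # Stage 2: full finetuning, initialized from the stage-1 head instead of a random head.
    ft_head = copy.deepcopy(head)                 # the "good classification head"
    for p in backbone.parameters():
        p.requires_grad = True
    opt2 = torch.optim.AdamW(list(backbone.parameters()) + list(ft_head.parameters()), lr=2e-5)
    for _ in range(10):
        loss = loss_fn(ft_head(backbone(x)), y)
        opt2.zero_grad(); loss.backward(); opt2.step()

    The choice of parameter-efficient method (an adapter here) and whether it is kept or discarded in the second stage are details the abstract does not spell out; the sketch only illustrates reusing the jointly pretrained head in place of a randomly initialized one.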