On Frankl and F\"uredi's conjecture for 3-uniform hypergraphs
The Lagrangian of a hypergraph has been a useful tool in hypergraph extremal
problems. In most applications, we need an upper bound for the Lagrangian of a
hypergraph. Frankl and F\"uredi conjectured in \cite{FF} that the $r$-graph with
$m$ edges formed by taking the first $m$ sets in the colex ordering of
$\mathbb{N}^{(r)}$ has the largest Lagrangian of all $r$-graphs with $m$
edges. In this paper, we give some partial results for this conjecture.
Comment: 19 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1211.650
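For context (standard background definitions, not specific to this paper): for an $r$-graph $G$ on vertex set $[n]$, the Lagrangian is

\[
\lambda(G) = \max\Big\{ \sum_{\{i_1,\dots,i_r\}\in E(G)} x_{i_1}\cdots x_{i_r} \;:\; x_i \ge 0,\ \sum_{i=1}^{n} x_i = 1 \Big\},
\]

and the colex (colexicographic) order on $r$-sets puts $A$ before $B$ exactly when $\max(A \,\triangle\, B) \in B$.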
On hypergraph Lagrangians
It is conjectured by Frankl and F\"uredi in \cite{FF} that the $r$-uniform
hypergraph with $m$ edges formed by taking the first $m$ sets in the colex
ordering of $\mathbb{N}^{(r)}$ has the largest Lagrangian of all $r$-uniform
hypergraphs with $m$ edges. Motzkin and Straus' theorem confirms this
conjecture when $r=2$. For $r=3$, it is shown by Talbot in \cite{T} that this
conjecture is true when $m$ is in certain ranges. In this paper, we explore the
connection between the clique number and Lagrangians for 3-uniform
hypergraphs. As an implication of this connection, we prove that the
3-uniform hypergraph with $m$ edges formed by taking the first $m$ sets in
the colex ordering of $\mathbb{N}^{(3)}$ has the largest Lagrangian of all
3-uniform graphs with $t$ vertices and $m$ edges, for $m$ and $t$ in a
suitable range.
Comment: 10 pages. arXiv admin note: substantial text overlap with
arXiv:1312.7529, arXiv:1211.7057, arXiv:1211.6508, arXiv:1311.140
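The $r=2$ case cited above is the classical Motzkin--Straus theorem, which makes the clique-number connection explicit (standard background, not a claim of this paper): for any graph $G$ with clique number $\omega(G)$,

\[
\lambda(G) = \frac{1}{2}\Big(1 - \frac{1}{\omega(G)}\Big),
\]

so for graphs the Lagrangian is determined by the clique number alone; the paper pursues an analogue of this link for 3-uniform hypergraphs.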
Parameter-Efficient Tuning Makes a Good Classification Head
In recent years, pretrained models have revolutionized the paradigm of natural
language understanding (NLU): we append a randomly initialized classification
head after the pretrained backbone, e.g. BERT, and finetune the whole model.
Since the pretrained backbone contributes most of the improvement, we
naturally expect that a good pretrained classification head can also benefit
training. However, the final-layer output of the backbone, i.e. the input of
the classification head, changes greatly during finetuning, making the usual
head-only pretraining (LP-FT) ineffective. In this paper, we find that
parameter-efficient tuning makes a good classification head, with which we
can simply replace the randomly initialized head for a stable performance
gain. Our experiments demonstrate that a classification head jointly
pretrained with parameter-efficient tuning consistently improves performance
on 9 tasks in GLUE and SuperGLUE.
Comment: Accepted as a long paper to EMNLP 2022 Main Conference
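A minimal PyTorch sketch of the two-stage recipe the abstract describes; the backbone stand-in, dimensions, and checkpoint path are hypothetical placeholders, not the authors' released code:

import torch
import torch.nn as nn

# Stand-in for a pretrained backbone such as BERT (768-d final-layer output).
backbone = nn.Sequential(nn.Linear(768, 768), nn.Tanh())

# Stage 1 (assumed): freeze the backbone and train the head jointly with a
# parameter-efficient module (adapters, prompts, ...); only the head and the
# small module receive gradients.
head = nn.Linear(768, 2)  # 2 labels, e.g. a binary GLUE task
for p in backbone.parameters():
    p.requires_grad = False
# ... train head (+ parameter-efficient module) here, then save the head:
torch.save(head.state_dict(), "peft_head.pt")  # hypothetical path

# Stage 2: ordinary full finetuning, but load the stage-1 head instead of
# starting from a randomly initialized one.
model_head = nn.Linear(768, 2)
model_head.load_state_dict(torch.load("peft_head.pt"))
for p in backbone.parameters():
    p.requires_grad = True  # unfreeze: the whole model is finetuned now
# ... proceed with standard finetuning of backbone + model_head.

The idea, as the abstract presents it, is that a head trained this way transfers better to full finetuning than one from plain linear probing (the LP step of LP-FT), since the parameter-efficient module lets the features it was fit to move closer to those that finetuning will produce.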