On the Friedlander-Nadirashvili invariants of surfaces
Let M be a closed smooth manifold. In 1999, L. Friedlander and N.
Nadirashvili introduced a new differential invariant I_1(M) using the first
normalized nonzero eigenvalue of the Laplace-Beltrami operator Δ_g of a
Riemannian metric g. They defined it by taking the supremum of this quantity
over all Riemannian metrics in each conformal class, and then taking the
infimum over all conformal classes. By analogy we use the k-th eigenvalues of
Δ_g to define the invariants I_k(M), indexed by positive integers k.
In the present paper the values of these invariants on surfaces are
investigated. We show that I_k(M) = I_k(S²) unless M is a
non-orientable surface of even genus. For orientable surfaces and k = 1 this
was earlier shown by R. Petrides. In fact, L. Friedlander and N. Nadirashvili
suggested that I_1(M) = I_1(S²) for any surface M different from RP².
We show that, surprisingly enough, this is not true for
non-orientable surfaces of even genus: for such surfaces one has
I_k(M) > I_k(S²). We also discuss the connection between the
Friedlander-Nadirashvili invariants and the theory of cobordisms, and
conjecture that I_k(M) is a cobordism invariant. Comment: 34 pages, 2 figures
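Written out as a formula, the min-max definition sketched above takes the
following form on a surface; normalizing the eigenvalue by the area is the
standard convention for surfaces, and this rendering is my reading of the
abstract rather than a quotation from the paper:

```latex
% Normalized k-th nonzero eigenvalue of a metric h on a closed surface M:
%   \bar{\lambda}_k(M, h) = \lambda_k(h) \cdot \mathrm{Area}(M, h)
% Friedlander-Nadirashvili-type invariant: supremum within each conformal
% class [g], then infimum over all conformal classes:
\[
  I_k(M) \;=\; \inf_{[g]} \; \sup_{h \in [g]} \; \lambda_k(h)\,\mathrm{Area}(M, h)
\]
```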
Discourse-Aware Soft Prompting for Text Generation
Current efficient fine-tuning methods (e.g., adapters, prefix-tuning, etc.)
optimize conditional text generation by training a small set of extra
parameters of the neural language model while freezing the rest for
efficiency. While they show strong performance on some generation tasks, they
do not generalize across all generation tasks. We show that soft-prompt-based
conditional text generation can be improved with simple and efficient methods
that simulate modeling the discourse structure of human-written text. We
investigate two design choices: first, we apply hierarchical blocking
on the prefix parameters to simulate a higher-level discourse structure of
human-written text; second, we apply attention sparsity on the prefix
parameters at different layers of the network and learn sparse transformations
on the softmax function. We show that structured design of prefix parameters
yields more coherent, faithful, and relevant generations than baseline
prefix-tuning on all generation tasks.
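To make the first design choice concrete, here is a minimal PyTorch sketch of
the hierarchical-blocking idea; the block sizes and masking scheme are my own
illustration of the abstract's description, not the authors' code:

```python
import torch

# Sketch: hierarchical blocking of prefix parameters. The prefix is split
# into contiguous blocks, and each generated segment may attend only to its
# own block, imitating a higher-level discourse structure (e.g., one block
# per sentence or paragraph).

n_blocks, prefix_len, d_model = 4, 8, 512   # 4 discourse units, 8 prefix tokens each
seq_len = 128                               # target length, split evenly into blocks

# Learnable prefix parameters, one block per discourse unit.
prefix = torch.nn.Parameter(torch.randn(n_blocks * prefix_len, d_model))

# Block-diagonal visibility mask: position i attends only to the prefix
# block covering its segment. True = visible.
mask = torch.zeros(seq_len, n_blocks * prefix_len, dtype=torch.bool)
seg = seq_len // n_blocks
for b in range(n_blocks):
    mask[b * seg:(b + 1) * seg, b * prefix_len:(b + 1) * prefix_len] = True

# In attention, the mask is applied as an additive bias before the softmax:
queries = torch.randn(seq_len, d_model)
scores = queries @ prefix.T / d_model ** 0.5
scores = scores.masked_fill(~mask, float("-inf"))
attn = scores.softmax(dim=-1)   # each position sees exactly one prefix block
```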
UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering
We study open-domain question answering with structured, unstructured and
semi-structured knowledge sources, including text, tables, lists and knowledge
bases. Departing from prior work, we propose a unifying approach that
homogenizes all sources by reducing them to text and applies the
retriever-reader model, which has so far been limited to text sources only.
Our approach greatly improves results on knowledge-base QA tasks, by 11
points compared to the latest graph-based methods. More importantly, we
demonstrate that our unified knowledge (UniK-QA) model is a simple yet
effective way to combine heterogeneous sources of knowledge, advancing the
state-of-the-art results on two popular question answering benchmarks,
NaturalQuestions and WebQuestions, by 3.5 and 2.6 points, respectively.
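The core move, reducing every source to text so that one retriever-reader
pipeline can serve all of them, is easy to picture with a small linearization
sketch. The templates below are my own illustration of the idea, not the
paper's actual linearization scheme:

```python
# Sketch: homogenize heterogeneous knowledge sources by linearizing KB
# triples and table rows into short text passages. The resulting passages
# can be indexed alongside ordinary text for a standard retriever-reader.

def triple_to_text(subj: str, rel: str, obj: str) -> str:
    # e.g., ("Barack Obama", "place of birth", "Honolulu")
    return f"{subj} {rel} {obj}."

def table_row_to_text(title: str, header: list[str], row: list[str]) -> str:
    # Pair each header cell with its value and prepend the table title.
    cells = ", ".join(f"{h} is {v}" for h, v in zip(header, row))
    return f"{title}: {cells}."

passages = [
    triple_to_text("Barack Obama", "place of birth", "Honolulu"),
    table_row_to_text("US Presidents", ["name", "term"], ["Barack Obama", "2009-2017"]),
]
# `passages` now feeds the same dense retriever as plain-text documents.
```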
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Large pre-trained language models have been shown to store factual knowledge
in their parameters, and achieve state-of-the-art results when fine-tuned on
downstream NLP tasks. However, their ability to access and precisely manipulate
knowledge is still limited, and hence on knowledge-intensive tasks, their
performance lags behind task-specific architectures. Additionally, providing
provenance for their decisions and updating their world knowledge remain open
research problems. Pre-trained models with a differentiable access mechanism to
explicit non-parametric memory can overcome this issue, but have so far been
only investigated for extractive downstream tasks. We explore a general-purpose
fine-tuning recipe for retrieval-augmented generation (RAG) -- models which
combine pre-trained parametric and non-parametric memory for language
generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index
of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG
formulations: one conditions on the same retrieved passages across the
whole generated sequence, while the other can use different passages per token. We
fine-tune and evaluate our models on a wide range of knowledge-intensive NLP
tasks and set the state of the art on three open-domain QA tasks, outperforming
parametric seq2seq models and task-specific retrieve-and-extract architectures.
For language generation tasks, we find that RAG models generate more specific,
diverse and factual language than a state-of-the-art parametric-only seq2seq
baseline. Comment: Accepted at NeurIPS 2020
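The two formulations are the RAG-Sequence and RAG-Token models. For readers
who want to try them, the Hugging Face transformers library ships an
implementation; below is a minimal usage sketch (my example, not the paper's
code), using the public facebook/rag-token-nq checkpoint:

```python
# RAG-Token marginalizes over retrieved passages per generated token;
# RagSequenceForGeneration instead conditions on the same passages for the
# whole output sequence. Requires the `datasets` and faiss dependencies.
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
# use_dummy_dataset avoids downloading the full Wikipedia index for a demo.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever
)

inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```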