17 research outputs found

    On the Friedlander-Nadirashvili invariants of surfaces

    Get PDF
    Let M be a closed smooth manifold. In 1999, L. Friedlander and N. Nadirashvili introduced a new differential invariant I_1(M) using the first normalized nonzero eigenvalue of the Laplace-Beltrami operator Δ_g of a Riemannian metric g. They defined it by taking the supremum of this quantity over all Riemannian metrics in each conformal class, and then taking the infimum over all conformal classes. By analogy we use the k-th eigenvalues of Δ_g to define the invariants I_k(M), indexed by positive integers k. In the present paper the values of these invariants on surfaces are investigated. We show that I_k(M) = I_k(S^2) unless M is a non-orientable surface of even genus. For orientable surfaces and k = 1 this was earlier shown by R. Petrides. In fact, L. Friedlander and N. Nadirashvili suggested that I_1(M) = I_1(S^2) for any surface M different from RP^2. We show that, surprisingly enough, this is not true for non-orientable surfaces of even genus: for such surfaces one has I_k(M) > I_k(S^2). We also discuss the connection between the Friedlander-Nadirashvili invariants and the theory of cobordisms, and conjecture that I_k(M) is a cobordism invariant. Comment: 34 pages, 2 figures
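
    For reference, the inf-sup definition described in the abstract can be written out as follows. This is a reconstruction from the abstract's wording, assuming the standard normalization of eigenvalues by area for surfaces, and is not a quotation of the paper's own notation.

        % Reconstruction from the abstract (normalization by area assumed, as is standard for surfaces)
        \[
          \bar\lambda_k(M, g) = \lambda_k(g)\,\operatorname{Area}(M, g),
          \qquad
          I_k(M) = \inf_{[g]} \; \sup_{\tilde g \in [g]} \bar\lambda_k(M, \tilde g),
        \]
        % where \lambda_k(g) is the k-th nonzero eigenvalue of \Delta_g, the supremum runs over all
        % metrics \tilde g in a conformal class [g], and the infimum runs over all conformal classes on M.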

    Discourse-Aware Soft Prompting for Text Generation

    Full text link
    Current efficient fine-tuning methods (e.g., adapters, prefix-tuning) optimize conditional text generation by training a small set of extra parameters of the neural language model while freezing the rest for efficiency. While showing strong performance on some generation tasks, they do not generalize across all generation tasks. We show that soft-prompt-based conditional text generation can be improved with simple and efficient methods that simulate modeling the discourse structure of human-written text. We investigate two design choices: first, we apply hierarchical blocking to the prefix parameters to simulate a higher-level discourse structure of human-written text; second, we apply attention sparsity to the prefix parameters at different layers of the network and learn sparse transformations on the softmax function. We show that this structured design of prefix parameters yields more coherent, faithful, and relevant generations than baseline prefix-tuning on all generation tasks.
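
    As a rough illustration of the first design choice, hierarchical blocking can be realized as a block-structured attention mask between input tokens and prefix vectors. The sketch below is a minimal reconstruction of that idea; the function name, block sizes, and the optional shared block are illustrative assumptions, not the authors' code.

        # Minimal sketch of hierarchical blocking for prefix-tuning (illustrative, not the authors' code).
        # The prefix is split into one block per discourse section; an attention mask lets the tokens of
        # section s attend only to prefix block s, plus an optional shared "global" block.
        import torch

        def blocked_prefix_mask(section_ids, n_blocks, block_len, global_len=0):
            """section_ids: (seq_len,) tensor assigning each input token to a discourse section.
            Returns a (seq_len, prefix_len) boolean mask; True means attention is allowed."""
            prefix_len = global_len + n_blocks * block_len
            mask = torch.zeros(section_ids.size(0), prefix_len, dtype=torch.bool)
            mask[:, :global_len] = True  # every token may attend to the shared block
            for s in range(n_blocks):
                start = global_len + s * block_len
                mask[section_ids == s, start:start + block_len] = True
            return mask

        # Example: 6 input tokens spanning 3 discourse sections, prefix of 3 blocks of 2 vectors each.
        sections = torch.tensor([0, 0, 1, 1, 2, 2])
        print(blocked_prefix_mask(sections, n_blocks=3, block_len=2))

    Such a mask would then be combined with the model's usual self-attention mask wherever the prefix key/value vectors are prepended.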

    UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering

    Full text link
    We study open-domain question answering with structured, unstructured, and semi-structured knowledge sources, including text, tables, lists, and knowledge bases. Departing from prior work, we propose a unifying approach that homogenizes all sources by reducing them to text and applies the retriever-reader model, which has so far been limited to text sources only. Our approach greatly improves results on knowledge-base QA tasks, by 11 points compared to the latest graph-based methods. More importantly, we demonstrate that our unified knowledge (UniK-QA) model is a simple yet effective way to combine heterogeneous sources of knowledge, advancing the state-of-the-art results on two popular question answering benchmarks, NaturalQuestions and WebQuestions, by 3.5 and 2.6 points, respectively.
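
    The key step is the verbalization that reduces non-textual sources to plain passages a text retriever and reader can consume. The sketch below shows one simple way to do this for table rows and knowledge-base triples; the templates and example data are assumptions for illustration, not necessarily the exact verbalization used by UniK-QA.

        # Minimal sketch of flattening heterogeneous knowledge into text
        # (templates are assumptions, not necessarily UniK-QA's exact scheme).

        def linearize_table(title, header, rows):
            """Render each table row as a short sentence-like passage."""
            lines = []
            for row in rows:
                cells = ", ".join(f"{h} is {v}" for h, v in zip(header, row))
                lines.append(f"{title}: {cells}.")
            return " ".join(lines)

        def linearize_triples(triples):
            """Render knowledge-base triples (subject, relation, object) as text."""
            return " ".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

        passage = linearize_table(
            "Olympic 100m champions",
            ["Year", "Winner"],
            [["2008", "Usain Bolt"], ["2012", "Usain Bolt"]],
        ) + " " + linearize_triples([("Usain Bolt", "country_of_citizenship", "Jamaica")])
        print(passage)  # the resulting text can be indexed by a dense retriever and fed to a reader

    Once every source is text, a single retriever index and a single reader can be shared across all source types, which is the point of the unification.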

    Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

    Get PDF
    Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been investigated only for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations: one conditions on the same retrieved passages across the whole generated sequence, while the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open-domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse, and factual language than a state-of-the-art parametric-only seq2seq baseline. Comment: Accepted at NeurIPS 2020
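
    The difference between the two formulations mentioned above is where the marginalization over retrieved passages happens. The toy calculation below uses made-up probabilities for two passages and a two-token output to make that concrete; it is a didactic sketch, not code from the paper.

        # Toy comparison of the two RAG formulations (numbers are made up for illustration).
        # RAG-Sequence: marginalize over passages once for the whole output sequence.
        # RAG-Token: marginalize over passages independently at every generation step.
        p_doc = {"z1": 0.7, "z2": 0.3}            # retriever probabilities p(z | x)
        p_tok = {"z1": [0.9, 0.8],                # generator probabilities p(y_i | x, z, y_<i)
                 "z2": [0.2, 0.6]}                # for each passage and output position i

        # RAG-Sequence: p(y | x) = sum_z p(z | x) * prod_i p(y_i | x, z, y_<i)
        rag_sequence = sum(pz * p_tok[z][0] * p_tok[z][1] for z, pz in p_doc.items())

        # RAG-Token: p(y | x) = prod_i sum_z p(z | x) * p(y_i | x, z, y_<i)
        rag_token = 1.0
        for i in range(2):
            rag_token *= sum(pz * p_tok[z][i] for z, pz in p_doc.items())

        print(f"RAG-Sequence: {rag_sequence:.4f}, RAG-Token: {rag_token:.4f}")  # 0.5400 vs 0.5106

    The per-token formulation lets different passages support different parts of the answer, while the per-sequence formulation commits to one set of passage weights for the whole output.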