
    A direct proof that $\ell_\infty^{(3)}$ has generalized roundness zero

    Metric spaces of generalized roundness zero have interesting non-embedding properties. For instance, we note that no metric space of generalized roundness zero is isometric to any metric subspace of any $L_p$-space for which $0 < p \leq 2$. Lennard, Tonge and Weston gave an indirect proof that $\ell_\infty^{(3)}$ has generalized roundness zero by appealing to highly non-trivial isometric embedding theorems of Bretagnolle, Dacunha-Castelle and Krivine, and of Misiewicz. In this paper we give a direct proof that $\ell_\infty^{(3)}$ has generalized roundness zero. This provides insight into the combinatorial geometry of $\ell_\infty^{(3)}$ that causes the generalized roundness inequalities to fail. We complete the paper by noting a characterization of real quasi-normed spaces of generalized roundness zero.

    Comment: The first version of this paper had the title "The generalized roundness of $\ell_\infty^{(3)}$ revisited". This version includes some minor modifications of the text and corrections to several typographic errors.
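    For context, a sketch of the inequality at issue: in one standard formulation (as used by Lennard, Tonge and Weston), a metric space $(X,d)$ has generalized roundness $p \geq 0$ if, for every $n \geq 2$ and all points $a_1, \dots, a_n, b_1, \dots, b_n \in X$,
    \[
        \sum_{1 \le i < j \le n} \bigl( d(a_i, a_j)^p + d(b_i, b_j)^p \bigr)
        \;\le\; \sum_{1 \le i,\, j \le n} d(a_i, b_j)^p .
    \]
    Generalized roundness zero then means that this family of inequalities fails for every $p > 0$.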

    Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion

    Language models (LMs) have recently been shown to generate more factual responses by employing modularity (Zhou et al., 2021) in combination with retrieval (Adolphs et al., 2021). We extend the recent approach of Adolphs et al. (2021) to include internet search as a module. Our SeeKeR (Search engine->Knowledge->Response) method thus applies a single LM to three modular tasks in succession: searching, generating knowledge, and generating a final response. We show that, when used as a dialogue model, SeeKeR outperforms the state-of-the-art model BlenderBot 2 (Chen et al., 2021) on open-domain knowledge-grounded conversations at the same number of parameters, in terms of consistency, knowledge and per-turn engagingness. Applied to topical prompt completions as a standard language model, SeeKeR outperforms GPT2 (Radford et al., 2019) and GPT3 (Brown et al., 2020) in terms of factuality and topicality, despite GPT3 being a vastly larger model. Our code and models are made publicly available.
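    A minimal sketch of how the three modular stages described above could chain through a single model. Here `lm.generate` and `web_search` are hypothetical helpers standing in for the model and the search module, and the bracketed stage markers are illustrative placeholders, not the paper's actual control tokens.

        # Hypothetical sketch of the SeeKeR search -> knowledge -> response chain.
        # One language model handles all three modular tasks in succession.
        def seeker_respond(lm, web_search, dialogue_history: str) -> str:
            # Stage 1 (search): generate a search query from the dialogue context.
            query = lm.generate(dialogue_history + " [generate-query]")
            # Stage 2 (knowledge): distill a relevant knowledge sentence from
            # the documents returned by the search module.
            documents = web_search(query)
            knowledge = lm.generate(documents + " " + dialogue_history + " [generate-knowledge]")
            # Stage 3 (response): condition on the dialogue context and the
            # generated knowledge to produce the final grounded reply.
            return lm.generate(dialogue_history + " " + knowledge + " [generate-response]")

    The design choice the abstract emphasizes is that the same set of weights serves all three stages; only the input framing changes between calls.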