4,549 research outputs found

    Towards Ontologically Grounded and Language-Agnostic Knowledge Graphs

    Knowledge graphs (KGs) have become the standard technology for representing factual information in applications such as recommendation engines, search, and question-answering systems. However, the continual updating of KGs, as well as the integration of KGs from different domains and in different languages, remains a major challenge. We suggest that by reifying abstract objects and by acknowledging the ontological distinction between concepts and types, we arrive at an ontologically grounded and language-agnostic representation that can alleviate the difficulties of KG integration. Comment: 7 pages, conference paper
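    To make the idea of a language-agnostic, reified representation concrete, here is a minimal Python sketch (the identifier scheme, class names, and example facts are invented for illustration and are not taken from the paper): concepts and types are kept as distinct kinds of nodes with language-independent identifiers, facts are stated once against those identifiers, and per-language labels are attached separately, so integrating KGs in different languages reduces to aligning identifiers rather than reconciling surface forms.

        from dataclasses import dataclass, field

        # Hypothetical, language-independent node kinds: concepts and types
        # are deliberately kept apart, per the ontological distinction above.
        @dataclass(frozen=True)
        class Concept:
            cid: str            # e.g. "C:paris" (invented ID scheme)

        @dataclass(frozen=True)
        class TypeNode:
            tid: str            # e.g. "T:city"

        @dataclass
        class KG:
            triples: set = field(default_factory=set)
            labels: dict = field(default_factory=dict)   # (node, lang) -> label

            def add(self, s, p, o):
                self.triples.add((s, p, o))

            def label(self, node, lang, text):
                self.labels[(node, lang)] = text

        kg = KG()
        paris = Concept("C:paris")
        city = TypeNode("T:city")

        # Factual content is stated once, against language-neutral identifiers...
        kg.add(paris, "instance_of", city)

        # ...while labels in any language hang off the same nodes, so merging
        # an English KG with a French one only touches the label table.
        kg.label(paris, "en", "Paris")
        kg.label(city, "en", "city")
        kg.label(city, "fr", "ville")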

    Towards Explainable and Language-Agnostic LLMs: Symbolic Reverse Engineering of Language at Scale

    Large language models (LLMs) have achieved a milestone that undeniably changed many held beliefs in artificial intelligence (AI). However, there remain many limitations of these LLMs when it comes to true language understanding, limitations that are a byproduct of the underlying architecture of deep neural networks. Moreover, due to their subsymbolic nature, whatever knowledge these models acquire about how language works will always be buried in billions of microfeatures (weights), none of which is meaningful on its own, making such models hopelessly unexplainable. To address these limitations, we suggest combining the strength of symbolic representations with what we believe to be the key to the success of LLMs, namely a successful bottom-up reverse engineering of language at scale. As such, we argue for a bottom-up reverse engineering of language in a symbolic setting. Hints on what this project amounts to have been suggested by several authors, and we discuss in some detail here how this project could be accomplished. Comment: Draft, preprint

    Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs

    In our opinion the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or non-factual) was created equal; (ii) due to their subsymbolic nature, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own; and (iii) LLMs will often fail to make the correct inferences in several linguistic contexts (e.g., nominal compounds, copredication, quantifier scope ambiguities, intensional contexts). Since we believe the relative success of data-driven LLMs is not a reflection on the symbolic vs. subsymbolic debate but a reflection of applying the successful strategy of a bottom-up reverse engineering of language at scale, we suggest in this paper applying the same effective bottom-up strategy in a symbolic setting, resulting in symbolic, explainable, and ontologically grounded language models. Comment: 17 pages
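    To make the point about quantifier scope concrete, a standard textbook example (not drawn from the paper) such as "Every student read some book" admits two logical forms, and a purely statistical model gives no explicit representation of which reading it has committed to:

        % Wide-scope "every": the books may vary with the students
        \forall x\, \bigl( \mathit{Student}(x) \rightarrow \exists y\, ( \mathit{Book}(y) \wedge \mathit{Read}(x,y) ) \bigr)

        % Wide-scope "some": one particular book was read by every student
        \exists y\, \bigl( \mathit{Book}(y) \wedge \forall x\, ( \mathit{Student}(x) \rightarrow \mathit{Read}(x,y) ) \bigr)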

    The use of world knowledge in resolving semantic ambiguities

    This thesis investigates a central problem in natural language processing, namely semantic ambiguity. The types of semantic ambiguity considered are those that speakers of a given language generally resolve by relying on common knowledge; pronoun resolution is typical of this problem. We investigate the thesis that the semantic theory of Richard Montague can be extended to accommodate the use of world knowledge. We propose an extension to Montague's notion of contexts of use, and to the meaning representation. Meanings are represented as complex structures containing several features, including the denotation. The method uses these structures to build contexts and discourse structures that are then used in the dialogue to resolve certain types of ambiguities. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1991 .S222. Source: Masters Abstracts International, Volume: 31-01, page: 0352. Thesis (M.C.Sc.)--University of Windsor (Canada), 1991
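    As a rough illustration of the kind of machinery described (the class names, fields, and resolution heuristic below are invented for illustration; the thesis's actual formalization extends Montague's contexts of use), a meaning can be packaged with its denotation plus additional features, and a discourse context can resolve a pronoun by scanning recently introduced referents for one whose features are compatible:

        from dataclasses import dataclass, field
        from typing import Any, Optional

        # Illustrative meaning structure: the denotation together with extra
        # features (gender, number, ...) that discourse knowledge can consult.
        @dataclass
        class Meaning:
            denotation: Any
            features: dict = field(default_factory=dict)

        @dataclass
        class Context:
            referents: list = field(default_factory=list)   # most recent last

            def introduce(self, m: Meaning) -> None:
                self.referents.append(m)

            def resolve_pronoun(self, constraints: dict) -> Optional[Meaning]:
                # Walk back from the most recently mentioned entity and return
                # the first one compatible with the pronoun's features.
                for m in reversed(self.referents):
                    if all(m.features.get(k) == v for k, v in constraints.items()):
                        return m
                return None

        ctx = Context()
        ctx.introduce(Meaning("John", {"gender": "m", "number": "sg"}))
        ctx.introduce(Meaning("Mary", {"gender": "f", "number": "sg"}))

        # "She" -> the most salient feminine singular referent, i.e. Mary.
        print(ctx.resolve_pronoun({"gender": "f", "number": "sg"}).denotation)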