
    MultiSubs: A Large-scale Multimodal and Multilingual Dataset

    This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource because (i) the images are aligned to text fragments rather than whole sentences; (ii) multiple images are possible for a text fragment and a sentence; (iii) the sentences are free-form and real-world-like; (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the-blank; (ii) lexical translation. Results of the human evaluation and automatic models demonstrate that images can be a useful complement to the textual context. The dataset will benefit research on visual grounding of words, especially in the context of free-form sentences, and can be obtained from https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
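The fill-in-the-blank task can be illustrated with a minimal sketch of a bigram baseline (the toy corpus, blank and candidate words below are invented for illustration and are not taken from MultiSubs):

```python
from collections import Counter

def bigram_baseline(corpus, left_context, candidates):
    """Pick the candidate that most often follows the word left of the blank."""
    bigrams = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        bigrams.update(zip(tokens, tokens[1:]))
    prev = left_context.lower().split()[-1]  # word immediately before the blank
    return max(candidates, key=lambda c: bigrams[(prev, c.lower())])

corpus = [
    "he opened the door slowly",
    "she closed the door behind her",
    "the window was open",
]
# Toy instance: "he opened the ____ slowly"
print(bigram_baseline(corpus, "he opened the", ["window", "door"]))  # door
```

A text-only baseline like this is exactly what the paper's multimodal models are compared against: the image aligned to the blanked fragment supplies context that the surrounding words alone may not.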

    Advanced Semantics for Commonsense Knowledge Extraction

    Commonsense knowledge (CSK) about concepts and their properties is useful for AI applications such as robust chatbots. Prior works like ConceptNet, TupleKB and others compiled large CSK collections, but are restricted in their expressiveness to subject-predicate-object (SPO) triples with simple concepts for S and monolithic strings for P and O. Also, these projects have prioritized either precision or recall, but hardly reconcile these complementary goals. This paper presents a methodology, called Ascent, to automatically build a large-scale knowledge base (KB) of CSK assertions, with advanced expressiveness and both better precision and recall than prior works. Ascent goes beyond triples by capturing composite concepts with subgroups and aspects, and by refining assertions with semantic facets. The latter are important to express the temporal and spatial validity of assertions and further qualifiers. Ascent combines open information extraction with judicious cleaning using language models. Intrinsic evaluation shows the superior size and quality of the Ascent KB, and an extrinsic evaluation on QA-support tasks underlines the benefits of Ascent.
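To make the contrast with plain SPO triples concrete, here is a hypothetical Python representation of a faceted assertion. The field names and the example are my own illustration, not Ascent's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Concept:
    name: str
    subgroup: Optional[str] = None  # e.g. "baby elephant" as a subgroup of "elephant"
    aspect: Optional[str] = None    # e.g. "trunk" as an aspect of "elephant"

@dataclass
class Assertion:
    subject: Concept
    predicate: str
    obj: str
    facets: dict = field(default_factory=dict)  # semantic facets, e.g. temporal/spatial

# A plain triple would only keep ("elephant", "drinks", "milk");
# the faceted form records which subgroup, and when, the assertion holds for.
a = Assertion(
    subject=Concept("elephant", subgroup="baby elephant"),
    predicate="drinks",
    obj="milk",
    facets={"temporal": "during infancy"},
)
print(a.subject.subgroup, "-", a.facets["temporal"])
```

The point of the extra structure is precisely the one the abstract makes: the facet dictionary carries temporal and spatial validity that a monolithic O string cannot express.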


    English WordNet Taxonomic Random Walk Pseudo-Corpora

    This is a resource description paper that describes the creation and properties of a set of pseudo-corpora generated artificially from a random walk over the English WordNet taxonomy. Our WordNet taxonomic random walk implementation allows the exploration of different random walk hyperparameters and the generation of a variety of different pseudo-corpora. We find that different combinations of the walk’s hyperparameters result in varying statistical properties of the generated pseudo-corpora. We have published a total of 81 pseudo-corpora that we have used in our previous research, but have not exhausted all possible combinations of hyperparameters, which is why we have also published a codebase that allows the generation of additional WordNet taxonomic pseudo-corpora as needed. Ultimately, such pseudo-corpora can be used to train taxonomic word embeddings, as a way of transferring taxonomic knowledge into a word embedding space.
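A taxonomic random walk of this kind can be sketched as follows. The toy hypernym-to-hyponym taxonomy and the hyperparameter names (`max_len`, `seed`) are invented stand-ins for the WordNet hierarchy and the paper's actual settings:

```python
import random

# Toy hypernym -> hyponym taxonomy standing in for the WordNet noun hierarchy
TAXONOMY = {
    "animal": ["dog", "cat", "bird"],
    "dog": ["poodle", "beagle"],
    "bird": ["sparrow", "eagle"],
}

def random_walk(taxonomy, start, max_len, rng):
    """Walk downward from `start`, emitting one pseudo-sentence of node names."""
    walk = [start]
    node = start
    while node in taxonomy and len(walk) < max_len:
        node = rng.choice(taxonomy[node])
        walk.append(node)
    return walk

def pseudo_corpus(taxonomy, root, n_sentences, max_len, seed=0):
    rng = random.Random(seed)  # fixed seed -> reproducible pseudo-corpus
    return [" ".join(random_walk(taxonomy, root, max_len, rng))
            for _ in range(n_sentences)]

for line in pseudo_corpus(TAXONOMY, "animal", n_sentences=4, max_len=3):
    print(line)
```

Varying `max_len`, the choice distribution, or the set of start nodes changes the token statistics of the output, which mirrors the paper's observation that different hyperparameter combinations yield pseudo-corpora with different statistical properties.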

    Uni- and Multimodal and Structured Representations for Modeling Frame Semantics

    Language is the most complex kind of shared knowledge evolved by humankind, and it is the foundation of communication between humans. At the same time, one of the most challenging problems in Artificial Intelligence is to grasp the meaning conveyed by language. Humans use language to communicate knowledge and information about the world and to exchange their thoughts. In order to understand the meaning of words in a sentence, single words are interpreted in the context of the sentence and of the situation, together with a large background of commonsense knowledge and experience in the world. The research field of Natural Language Processing aims at automatically understanding language as humans do naturally. In this thesis, the overall challenge of understanding meaning in language by capturing world knowledge is examined from two branches: (a) knowledge about situations and actions as expressed in texts, and (b) structured relational knowledge as stored in knowledge bases. Both branches can be studied with different kinds of vector representations, so-called embeddings, for operationalizing different aspects of knowledge: textual, structured, and visual or multimodal embeddings. This poses the challenge of determining the suitability of different embeddings for automatic language understanding with respect to the two branches. To approach these challenges, we closely rely upon the lexical-semantic knowledge base FrameNet. It addresses both branches of capturing world knowledge while taking into account the linguistic theory of frame semantics, which is oriented toward human language understanding. FrameNet provides frames, which are categories for knowledge of meaning, and frame-to-frame relations, which are structured meta-knowledge of interactions between frames. These frames and relations are central to the tasks of Frame Identification and Frame-to-Frame Relation Prediction. 
Concerning branch (a), the task of Frame Identification was introduced to advance the understanding of contextual knowledge about situations, actions and participants. The task is to label predicates with frames in order to identify the meaning of the predicate in the context of the sentence. We use textual embeddings to model the semantics of words in the sentential context and develop a state-of-the-art system for Frame Identification. Our Frame Identification system can be used to automatically annotate frames on English or German texts. Furthermore, in our multimodal approach to Frame Identification, we combine textual embeddings for words with visual embeddings for entities depicted in images. We find that visual information is especially useful in difficult settings with rare frames. To further advance the performance of the multimodal approach, we suggest developing verb-specific embeddings that incorporate multimodal information. Concerning branch (b), we introduce the task of Frame-to-Frame Relation Prediction to advance the understanding of relational knowledge about interactions between frames. The task is to label connections between frames with relations in order to complete the meta-knowledge stored in FrameNet. We train textual and structured embeddings for frames and explore the limitations of textual frame embeddings with respect to recovering relations between frames. Moreover, we contrast textual frame embeddings with structured frame embeddings and develop the first system for Frame-to-Frame Relation Prediction. We find that textual and structured frame embeddings differ with respect to predicting relations; thus, when applied as features in further tasks, they can provide different kinds of frame knowledge. Our structured prediction system can be used to generate recommendations for annotations with relations. 
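Frame-to-Frame Relation Prediction can be sketched as link prediction over structured frame embeddings. The two-dimensional vectors below are invented, and the TransE-style translation scoring is one plausible choice rather than necessarily the thesis's model; Motion and Self_motion are real FrameNet frames, and Inheritance and Using are real FrameNet relation types:

```python
import math

relations = {  # invented relation translation vectors
    "Inheritance": [1.0, 0.0],
    "Using":       [0.0, 1.0],
}
frames = {  # invented structured frame embeddings
    "Motion":      [0.0, 0.0],
    "Self_motion": [1.0, 0.1],
}

def predict_relation(head, tail):
    """TransE-style scoring: relation r is plausible if head + r is close to tail."""
    def dist(r):
        translated = [h + ri for h, ri in zip(frames[head], relations[r])]
        return math.dist(translated, frames[tail])
    return min(relations, key=dist)

print(predict_relation("Motion", "Self_motion"))  # Inheritance
```

The predicted label for an unannotated frame pair is exactly the kind of relation recommendation the structured prediction system described above could surface to annotators.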
To further advance the performance of Frame-to-Frame Relation Prediction, and also the induction of new frames and relations, we suggest developing approaches that incorporate visual information. The two kinds of frame knowledge from both branches, our Frame Identification system and our pre-trained frame embeddings, are combined in an extrinsic evaluation in the context of higher-level applications. Across these applications, we see a trend that frame knowledge is particularly beneficial in ambiguous and short sentences. Taken together, in this thesis, we approach semantic language understanding from the two branches of knowledge about situations and actions and structured relational knowledge, and investigate different embeddings for textual, structured and multimodal language understanding.
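The multimodal Frame Identification idea from branch (a), combining a textual embedding of the predicate in context with a visual embedding, can likewise be sketched as nearest-prototype classification over concatenated vectors. All vectors below are invented toy values; Ingestion and Motion are real FrameNet frames:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identify_frame(textual, visual, frame_prototypes):
    """Concatenate textual and visual embeddings, return the nearest frame."""
    combined = textual + visual  # list concatenation = multimodal fusion by concat
    return max(frame_prototypes, key=lambda f: cosine(combined, frame_prototypes[f]))

# Invented 2-d textual + 2-d visual prototype embeddings per frame
frames = {
    "Ingestion": [1.0, 0.0, 0.9, 0.1],
    "Motion":    [0.0, 1.0, 0.1, 0.9],
}
print(identify_frame([0.9, 0.1], [0.8, 0.2], frames))  # Ingestion
```

When the textual context is short or ambiguous, the visual half of the concatenated vector can tip the decision, which matches the finding that visual information helps most for rare frames and short sentences.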