
    Table and Image Generation for Investigating Knowledge of Entities in Pre-trained Vision and Language Models

    In this paper, we propose a table and image generation task to examine how knowledge about entities acquired from natural language is retained in Vision & Language (V&L) models. The task consists of two parts: the first is to generate a table containing knowledge about an entity and its related image, and the second is to generate an image from an entity together with a caption and a table containing related knowledge of the entity. In both parts, the model must have knowledge of the entities involved to perform the generation properly. To support these tasks, we created the Wikipedia Table and Image Generation (WikiTIG) dataset from about 200,000 infoboxes in English Wikipedia articles. We evaluated performance on the tasks with respect to the above research question using the V&L model OFA, which has achieved state-of-the-art results on multiple tasks. Experimental results show that OFA forgets part of its entity knowledge during pre-training, as a trade-off for improving performance on image-related tasks.
    Comment: Accepted at ACL 202
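    As a rough illustration of what one WikiTIG-style example might contain, the sketch below uses hypothetical field names and values, not the actual dataset schema:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class WikiTIGExample:
    """Hypothetical WikiTIG-style example built from one Wikipedia infobox."""
    entity: str            # article title, e.g. "Eiffel Tower"
    caption: str           # caption of the infobox image
    table: Dict[str, str]  # infobox attribute -> value pairs
    image_path: str        # path to the infobox image

# Part 1 (table generation): given the entity and its image, generate `table`.
# Part 2 (image generation): given the entity, `caption`, and `table`, generate the image.
example = WikiTIGExample(
    entity="Eiffel Tower",
    caption="The tower seen from the Champ de Mars",
    table={"Location": "Paris, France", "Height": "330 m", "Opened": "1889"},
    image_path="images/eiffel_tower.jpg",
)
```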

    Does Pre-trained Language Model Actually Infer Unseen Links in Knowledge Graph Completion?

    Knowledge graphs (KGs) consist of links that describe relationships between entities. Because manually enumerating all relationships between entities is impractical, automatically completing them is essential for KGs. Knowledge Graph Completion (KGC) is the task of inferring unseen relationships between entities in a KG. Traditional embedding-based KGC methods, such as RESCAL, TransE, DistMult, ComplEx, RotatE, HAKE, and HousE, infer missing links using only the knowledge contained in the training data. In contrast, recent Pre-trained Language Model (PLM)-based KGC utilizes knowledge obtained during pre-training, so it can estimate missing links between entities by reusing memorized knowledge from pre-training without performing inference. This is problematic because the goal of building KGC models is to infer unseen links between entities, yet conventional KGC evaluations do not consider inference and memorization abilities separately. Thus, a PLM-based KGC method that achieves high performance under current KGC evaluations may be ineffective in practical applications. To address this issue, we analyze whether PLM-based KGC methods make inferences or merely access memorized knowledge. For this purpose, we propose a method for constructing synthetic datasets tailored to this analysis, and we conclude that PLMs acquire the inference abilities required for KGC through pre-training, even though their performance improvements mostly come from the textual information of entities and relations.
    Comment: 15 pages, 10 figures
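    For context, an embedding-based KGC method such as TransE scores a candidate triple purely from vectors learned on the training graph, whereas a PLM-based method scores the triple's textual form and can therefore draw on knowledge memorized during pre-training. A minimal sketch of the embedding side (illustrative toy code, not taken from the paper):

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """TransE plausibility score: higher (less negative) means more plausible."""
    return -float(np.linalg.norm(h + r - t))

# Toy 4-dimensional embeddings; in practice these are learned from the training KG.
rng = np.random.default_rng(0)
emb = {name: rng.normal(size=4) for name in ["Tokyo", "Japan", "capital_of"]}
print(transe_score(emb["Tokyo"], emb["capital_of"], emb["Japan"]))
```

    A PLM-based scorer would instead feed the names or descriptions of the head, relation, and tail into a pre-trained language model, which is exactly why memorization and inference need to be evaluated separately.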

    Model-based Subsampling for Knowledge Graph Completion

    Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing the overfitting caused by the sparsity of Knowledge Graph (KG) datasets. However, current subsampling approaches consider only the frequencies of queries consisting of entities and their relations. As a result, existing subsampling can underestimate the appearance probabilities of infrequent queries even when the frequencies of their entities or relations are high. To address this problem, we propose Model-based Subsampling (MBS) and Mixed Subsampling (MIX), which estimate these appearance probabilities through the predictions of KGE models. Evaluation results on the FB15k-237, WN18RR, and YAGO3-10 datasets show that our proposed subsampling methods improve KG completion performance for the popular KGE models RotatE, TransE, HAKE, ComplEx, and DistMult.
    Comment: Accepted by AACL 2023; 9 pages, 3 figures, 5 tables
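    As a rough sketch of the contrast, the snippet below shows a count-based subsampling weight of the kind commonly used in KGE training and a model-based variant in the spirit of MBS; the `model_prob` callback and the exact weighting formulas are assumptions for illustration, not the paper's implementation:

```python
from collections import Counter
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def count_based_weights(triples: List[Triple]) -> List[float]:
    """Frequency-based subsampling: weight a triple by the inverse frequency
    of its queries (head, relation) and (relation, tail)."""
    hr = Counter((h, r) for h, r, _ in triples)
    rt = Counter((r, t) for _, r, t in triples)
    return [1.0 / (hr[(h, r)] + rt[(r, t)]) for h, r, t in triples]

def model_based_weights(triples: List[Triple],
                        model_prob: Callable[[Triple], float]) -> List[float]:
    """Model-based idea: replace raw counts with a trained KGE model's
    estimated appearance probability, so infrequent but plausible queries
    are not underestimated."""
    return [1.0 / max(model_prob(triple), 1e-6) for triple in triples]
```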

    Bioluminescence Microscopy: Design and Applications

    Bioluminescence imaging by microscopy is performed using an ultra-low-light imaging camera. Although imaging devices such as sensors and cameras have improved greatly over time, such improvements have not yet reached the microscope systems that are commercially available today. We previously optimized the optical system of a microscope for bioluminescence imaging using a short-focal-length imaging lens and evaluated this system with a conventional color charge-coupled device (CCD) camera. Here, we describe the concept of a bioluminescence microscope design based on a short-focal-length imaging lens and some representative applications, including intracellular calcium imaging, imaging of clock gene promoter assays, and three-dimensional reconstruction of a Drosophila larva. This system facilitates the acquisition of bioluminescence images of single live cells using luciferase, much as fluorescence microscopy does using a fluorescent protein.