
    Syntactically Look-Ahead Attention Network for Sentence Compression

    Sentence compression is the task of compressing a long sentence into a short one by deleting redundant words. In sequence-to-sequence (Seq2Seq) based models, the decoder unidirectionally decides whether to retain or delete each word. It therefore usually cannot explicitly capture the relationships between already decoded words and words that will be decoded in future time steps, and to avoid generating ungrammatical sentences it sometimes drops important words when compressing. To solve this problem, we propose a novel Seq2Seq model, the syntactically look-ahead attention network (SLAHAN), which generates informative summaries by explicitly tracking both dependency parent and child words during decoding and capturing important words that will be decoded in the future. Automatic evaluation on the Google sentence compression dataset showed that SLAHAN achieved the best kept-token-based F1, ROUGE-1, ROUGE-2 and ROUGE-L scores of 85.5, 79.3, 71.3 and 79.1, respectively. SLAHAN also improved summarization performance on longer sentences. Furthermore, in the human evaluation, SLAHAN improved informativeness without losing readability. Comment: AAAI 2020
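
    As a reading aid only, the following is a minimal, hypothetical sketch of the kind of look-ahead mechanism the abstract describes: attention computed separately over the dependency parents and children of a source word and combined with a gate. It is not the SLAHAN implementation; all function names, shapes, and the gating scheme are assumptions.

```python
# Minimal, hypothetical sketch of combining attention over dependency
# parents and children when scoring a source word during decoding.
# This is NOT the SLAHAN implementation; shapes and gating are assumed.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, mask):
    """Dot-product attention restricted by a 0/1 mask (e.g. parent or child links)."""
    scores = keys @ query                      # (n_words,)
    scores = np.where(mask > 0, scores, -1e9)  # attend only to linked words
    return softmax(scores)

def look_ahead_context(dec_state, enc_states, parent_mask, child_mask, w_gate):
    """Combine parent- and child-side context vectors with a learned gate."""
    a_parent = attend(dec_state, enc_states, parent_mask)
    a_child = attend(dec_state, enc_states, child_mask)
    ctx_parent = a_parent @ enc_states
    ctx_child = a_child @ enc_states
    gate = 1 / (1 + np.exp(-(w_gate @ np.concatenate([ctx_parent, ctx_child]))))
    return gate * ctx_parent + (1 - gate) * ctx_child

# toy example: 5 source words, hidden size 4
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 4))
dec = rng.normal(size=4)
parents = np.array([0, 1, 0, 1, 0])   # words that are dependency parents of the current word
children = np.array([1, 0, 1, 0, 0])  # words that are its dependency children
w = rng.normal(size=8)
print(look_ahead_context(dec, enc, parents, children, w))
```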

    Table and Image Generation for Investigating Knowledge of Entities in Pre-trained Vision and Language Models

    In this paper, we propose a table and image generation task to verify how knowledge about entities acquired from natural language is retained in Vision & Language (V&L) models. The task consists of two parts: the first is to generate a table containing knowledge about an entity and its related image, and the second is to generate an image from an entity, a caption, and a table containing related knowledge of the entity. In both tasks, the model must have knowledge of the entities to perform the generation properly. We created the Wikipedia Table and Image Generation (WikiTIG) dataset from about 200,000 infoboxes in English Wikipedia articles for the proposed tasks. We evaluated task performance with respect to the above research question using the V&L model OFA, which has achieved state-of-the-art results on multiple tasks. Experimental results show that OFA forgets part of its entity knowledge during pre-training intended to complement performance on image-related tasks. Comment: Accepted at ACL 2023
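
    For orientation, a hypothetical sketch of how one WikiTIG-style example might be represented is shown below: an entity paired with an infobox-like attribute/value table, a caption, and an image. The field names and the linearization convention are illustrative assumptions, not taken from the released dataset.

```python
# Hypothetical sketch of how one WikiTIG-style example could be represented:
# an entity with an infobox-like table, a caption, and an associated image.
# Field names are illustrative and not taken from the released dataset.
from dataclasses import dataclass, field

@dataclass
class WikiTIGExample:
    entity: str
    caption: str
    table: dict[str, str] = field(default_factory=dict)  # attribute -> value pairs from the infobox
    image_path: str = ""

    def table_to_text(self) -> str:
        """Linearize the table for a text-to-text model (one common convention)."""
        return " | ".join(f"{k}: {v}" for k, v in self.table.items())

example = WikiTIGExample(
    entity="Eiffel Tower",
    caption="The Eiffel Tower seen from the Champ de Mars.",
    table={"Location": "Paris, France", "Height": "330 m", "Opened": "1889"},
    image_path="images/eiffel_tower.jpg",
)
print(example.table_to_text())
```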

    Model-based Subsampling for Knowledge Graph Completion

    Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing the overfitting caused by the sparsity of Knowledge Graph (KG) datasets. However, current subsampling approaches consider only the frequencies of queries consisting of entities and their relations. Existing subsampling therefore potentially underestimates the appearance probabilities of infrequent queries even when the frequencies of their entities or relations are high. To address this problem, we propose Model-based Subsampling (MBS) and Mixed Subsampling (MIX), which estimate these appearance probabilities through the predictions of KGE models. Evaluation results on the FB15k-237, WN18RR, and YAGO3-10 datasets showed that our proposed subsampling methods improved KG completion performance for the popular KGE models RotatE, TransE, HAKE, ComplEx, and DistMult. Comment: Accepted by AACL 2023; 9 pages, 3 figures, 5 tables
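
    To illustrate the general idea, the sketch below contrasts frequency-based appearance probabilities with model-estimated ones when deriving subsampling weights. The weighting formula (an inverse power of the probability, renormalized) is an assumption in the spirit of word2vec-style subsampling, not the paper's exact equation.

```python
# Hedged sketch of the general idea behind model-based subsampling:
# replace frequency-based appearance probabilities of (head, relation) /
# (relation, tail) queries with probabilities estimated by a trained KGE
# model, then derive per-triple subsampling weights from them.
import numpy as np

def frequency_probs(queries):
    """Frequency-based estimate: count each query and normalize."""
    uniq, counts = np.unique(queries, return_counts=True, axis=0)
    probs = counts / counts.sum()
    lookup = {tuple(q): p for q, p in zip(uniq, probs)}
    return np.array([lookup[tuple(q)] for q in queries])

def subsampling_weights(probs, alpha=0.5):
    """Rarer queries get larger weights; alpha tempers the effect (assumed formula)."""
    w = probs ** -alpha
    return w / w.sum()

# toy training triples encoded as (head, relation) query ids
queries = np.array([[0, 1], [0, 1], [2, 1], [3, 4]])
freq_p = frequency_probs(queries)

# model-based alternative: probabilities predicted by a trained KGE model
# (here just a stand-in vector; in MBS these would come from model scores)
model_p = np.array([0.40, 0.40, 0.15, 0.05])

print("frequency-based weights:", subsampling_weights(freq_p))
print("model-based weights:   ", subsampling_weights(model_p))
```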

    Does Pre-trained Language Model Actually Infer Unseen Links in Knowledge Graph Completion?

    Knowledge graphs (KGs) consist of links that describe relationships between entities. Because manually enumerating all relationships between entities is difficult, automatically completing them is essential for KGs. Knowledge Graph Completion (KGC) is the task of inferring unseen relationships between entities in a KG. Traditional embedding-based KGC methods, such as RESCAL, TransE, DistMult, ComplEx, RotatE, HAKE, and HousE, infer missing links using only the knowledge in the training data. In contrast, recent Pre-trained Language Model (PLM)-based KGC also utilizes knowledge obtained during pre-training, so it can estimate missing links between entities by reusing memorized knowledge from pre-training without performing inference. This is problematic because the purpose of building KGC models is to infer unseen links between entities, yet conventional KGC evaluations do not consider inference and memorization abilities separately. A PLM-based KGC method that scores highly under current KGC evaluations may therefore be ineffective in practical applications. To address this issue, we analyze whether PLM-based KGC methods make inferences or merely access memorized knowledge. For this purpose, we propose a method for constructing synthetic datasets designed for this analysis, and we conclude that PLMs acquire the inference abilities required for KGC through pre-training, even though the performance improvements come mostly from the textual information of entities and relations. Comment: 15 pages, 10 figures
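
    As an illustration of how inference can be separated from memorization, the sketch below builds a toy synthetic KG in which entities have opaque names (so nothing can be recalled from pre-training) and every held-out triple follows a composition rule observable in training. The construction is illustrative and not the paper's exact procedure.

```python
# Hedged sketch of synthetic data that separates memorization from inference:
# entities get meaningless names (nothing to recall from pre-training), and
# every test triple is derivable from training triples via a simple
# composition rule (r1(a,b) and r2(b,c) imply r3(a,c)).
import random

def build_synthetic_kg(n_chains=100, seed=0):
    random.seed(seed)
    train, test = [], []
    for i in range(n_chains):
        a, b, c = f"E{3*i}", f"E{3*i+1}", f"E{3*i+2}"  # opaque entity names
        train.append((a, "r1", b))
        train.append((b, "r2", c))
        triple = (a, "r3", c)  # follows from the composition rule
        # keep the rule observable in training, hold out the rest for testing
        (train if i < n_chains // 2 else test).append(triple)
    return train, test

train, test = build_synthetic_kg()
print(len(train), "training triples,", len(test), "test triples")
print("sample test triple:", test[0])
```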

    Emittance measurements and operation optimization for ECR ion sources

    Electron Cyclotron Resonance (ECR) ion sources supply a broad range of ions for post-acceleration in cyclotrons. Here, an effort to improve the beam transfer from RIKEN's 18 GHz superconducting ECR ion source (SC ECRIS) to the Low Energy Beam Transport (LEBT) line and to optimize the performance of the ion source is presented. Simulation studies have shown that less than 20% of the beam is currently transferred. The first goal is to measure the transverse beam emittance in real time. The emittance monitor designed and fabricated for this purpose uses a pepper-pot plate followed by a transparent scintillator and a CMOS camera for image capture. The second goal is to investigate the dependence of the beam emittance on various operating parameters. To this end, the ion source was modified and the magnetic field inside it was measured. In this contribution, the design details of the instrument and a description of the analysis algorithm are presented, together with a typical emittance measurement.
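
    For context, the standard pepper-pot analysis reconstructs the geometric RMS emittance as eps_rms = sqrt(<x^2><x'^2> - <x x'>^2), where x is the hole position on the plate and x' is the beamlet centroid displacement on the screen divided by the drift distance. The sketch below illustrates this with made-up numbers; the drift length, intensities, and weighting are assumptions, and the real analysis also handles background subtraction and calibration.

```python
# Hedged sketch of the standard RMS-emittance reconstruction used with a
# pepper-pot: each hole at position x produces a beamlet whose centroid on
# the screen, a drift distance L downstream, gives the divergence
# x' = (x_screen - x_hole) / L.  Numbers below are made up.
import numpy as np

def rms_emittance(x, xp, weights=None):
    """Geometric RMS emittance from positions x [mm] and angles xp [mrad]."""
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    x0 = x - np.average(x, weights=w)
    xp0 = xp - np.average(xp, weights=w)
    x2 = np.average(x0**2, weights=w)
    xp2 = np.average(xp0**2, weights=w)
    xxp = np.average(x0 * xp0, weights=w)
    return np.sqrt(x2 * xp2 - xxp**2)   # mm.mrad

# toy data: hole positions, beamlet centroids on the screen, drift length
L = 100.0                                          # mm, plate-to-scintillator distance (assumed)
x_holes = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])    # mm
x_screen = np.array([-5.1, -2.4, 0.1, 2.6, 5.0])   # mm
xp = (x_screen - x_holes) / L * 1e3                # mrad
intensity = np.array([0.6, 0.9, 1.0, 0.8, 0.5])    # relative beamlet intensity
print("rms emittance ~", round(rms_emittance(x_holes, xp, intensity), 2), "mm.mrad")
```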

    Nakarai Tosui's Novel "Kosa Fuku Kaze" (The Wind Blowing Yellow Sand)

    Nakarai Tosui was one of the most popular newspaper novelists in Tokyo. He became a novel journalist (shosetsukisha) for the Tokyo Asahi Shinbun in 1888 and wrote The Wind Blowing Yellow Sand for the Asahi Shinbun from November 1890 to April 1891. The main setting of the novel is the Korean peninsula, and the hero is a half-Japanese, half-Korean boy. Before becoming a novel journalist for the Tokyo Asahi Shinbun, Tosui worked from 1880 to 1887 as a correspondent for the Osaka Asahi Shinbun in Pusan. Tosui put into The Wind Blowing Yellow Sand all of his knowledge about Korean culture and all of his experience as a journalist specializing in Korea, weaving into the novel many incidents that took place between Japan and Korea in the late 19th century. The hero, Hayashi Masamoto, becomes friends with the progressive aristocrats in Seoul, finally succeeds in concluding an alliance among the three East Asian countries, and prevents the Russian Empire's interference in Korea. The novel was enjoyed by newspaper readers and was published as a book in two volumes in 1893. Tosui was asked to write a sequel to The Wind Blowing Yellow Sand at the time of the Sino-Japanese War in 1894, but the sequel was suspended, presumably because the ideal of harmonious coexistence among the three East Asian countries that Tosui had expressed in The Wind Blowing Yellow Sand ran against readers' taste. After the failure of The Wind Blowing Yellow Sand and its sequel, Tosui wrote very little about Korea. Japan's policy toward Asia was by then moving in a direction completely opposite to Tosui's ideal, toward the invasion of Asian countries by force.