333 research outputs found

    The Formation of the First and Second Sentences of Article VI of the Outer Space Treaty (Part 1)

    Get PDF

    Freeze-Drying of Copper-Oxide-Based Porous Bulk Catalysts for Diesel Soot Combustion

    Get PDF
    Porous copper oxide particles were prepared by freeze-drying to serve as bulk catalysts for diesel soot combustion. Frozen particles of aqueous copper sulfate solution mixed with other metal sulfate solutions were dried in a vacuum chamber, and the dried particles were calcined into metal oxide particles. The open porosity induced by sublimation of ice crystals in the freeze-dried particles was retained during calcination and subsequent sintering. These porous particles were directly utilized as a bulk catalyst packed in a diesel soot trap. TG-DTA thermal analysis was employed to evaluate the catalyst activity in the oxidation of diesel soot. In order to achieve intimate contact between catalyst and soot, the oxide particles were impregnated with metal chlorides, which are expected to form a liquid phase during soot combustion. Among the prepared catalysts, CuO/Co3O4 with KCl/LiCl exhibited noticeable activity and durability, exhibiting an ignition temperature of 300°C in soot combustion.

    The Formation of the First and Second Sentences of Article VI of the Outer Space Treaty (Part 2)

    Get PDF

    The Purpose and Aim of the First Half of the First Sentence of Article VI of the Outer Space Treaty

    Get PDF

    Descartes: Generating Short Descriptions of Wikipedia Articles

    Full text link
    Wikipedia is one of the richest knowledge sources on the Web today. In order to facilitate navigating, searching, and maintaining its content, Wikipedia's guidelines state that all articles should be annotated with a so-called short description indicating the article's topic (e.g., the short description of beer is "Alcoholic drink made from fermented cereal grains"). Nonetheless, a large fraction of articles (ranging from 10.2% in Dutch to 99.7% in Kazakh) have no short description yet, with detrimental effects for millions of Wikipedia users. Motivated by this problem, we introduce the novel task of automatically generating short descriptions for Wikipedia articles and propose Descartes, a multilingual model for tackling it. Descartes integrates three sources of information to generate an article description in a target language: the text of the article in all its language versions, the already-existing descriptions (if any) of the article in other languages, and semantic type information obtained from a knowledge graph. We evaluate a Descartes model trained to handle 25 languages simultaneously, showing that it beats baselines (including a strong translation-based baseline) and performs on par with monolingual models tailored for specific languages. A human evaluation on three languages further shows that the quality of Descartes's descriptions is largely indistinguishable from that of human-written descriptions; e.g., 91.3% of our English descriptions (vs. 92.1% of human-written descriptions) pass the bar for inclusion in Wikipedia, suggesting that Descartes is ready for production, with the potential to support human editors in filling a major gap in today's Wikipedia across languages.
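    The abstract above lists three input sources that Descartes combines. As an illustration only, the sketch below shows one plausible way such evidence could be packed into a single tagged sequence for a seq2seq model; the tag format, class names, and function are assumptions, not the paper's actual implementation.

    ```python
    # Illustrative sketch (not the paper's code) of assembling the three
    # evidence sources named in the abstract into one model input.
    from dataclasses import dataclass

    @dataclass
    class ArticleInputs:
        texts_by_lang: dict         # article text per language version
        descriptions_by_lang: dict  # existing short descriptions, if any
        semantic_types: list        # type labels from a knowledge graph

    def build_model_input(inputs: ArticleInputs, target_lang: str) -> str:
        """Concatenate the three evidence sources into a tagged sequence.

        The special-token format is a hypothetical convention for a
        multilingual encoder-decoder; Descartes's real input format
        may differ."""
        parts = [f"<target:{target_lang}>"]
        parts += [f"<desc:{l}> {d}" for l, d in inputs.descriptions_by_lang.items()]
        parts += [f"<type> {t}" for t in inputs.semantic_types]
        parts += [f"<text:{l}> {t}" for l, t in inputs.texts_by_lang.items()]
        return " ".join(parts)

    example = ArticleInputs(
        texts_by_lang={"en": "Beer is an alcoholic drink brewed from cereal grains."},
        descriptions_by_lang={"fr": "Boisson alcoolisée"},
        semantic_types=["beverage"],
    )
    sequence = build_model_input(example, "en")
    ```
    
    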

    The Formation of the First and Second Sentences of Article VI of the Outer Space Treaty (Part 3, Concluded)

    Get PDF

    Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction

    Full text link
    Large language models (LLMs) have great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by LLMs: for problems with structured outputs, it is possible to prompt an LLM to perform the task in the reverse direction, by generating plausible input text for a target output structure. Leveraging this asymmetry in task difficulty makes it possible to produce large-scale, high-quality data for complex tasks. We demonstrate the effectiveness of this approach on closed information extraction, where collecting ground-truth data is challenging, and no satisfactory dataset exists to date. We synthetically generate a dataset of 1.8M data points, establish its superior quality compared to existing datasets in a human evaluation, and use it to finetune small models (220M and 770M parameters), termed SynthIE, that outperform the prior state of the art (with equal model size) by a substantial margin of 57 absolute points in micro-F1 and 79 points in macro-F1. Code, data, and models are available at https://github.com/epfl-dlab/SynthIE. Comment: Accepted at EMNLP 202
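    The "reverse direction" idea in the abstract above can be made concrete with a small sketch: instead of extracting structured triplets from text (the hard direction), one prompts an LLM to write text that expresses a given target set of triplets (the easy direction), yielding a (text, structure) training pair. The prompt template, triplet format, and example facts below are illustrative assumptions, not SynthIE's actual prompts or data.

    ```python
    # Sketch of reverse-direction data generation for closed information
    # extraction: render target triplets into an instruction asking an
    # LLM to write a passage that expresses exactly those facts.
    # (Prompt wording and triplet format are assumptions for illustration.)

    def build_reverse_prompt(triplets):
        """Turn target (subject, relation, object) triplets into a
        generation instruction for an LLM."""
        facts = "\n".join(f"- {s} | {r} | {o}" for s, r, o in triplets)
        return (
            "Write a short, natural paragraph that expresses exactly "
            "the following facts and nothing else:\n" + facts
        )

    target_structure = [
        ("Marie Curie", "award received", "Nobel Prize in Physics"),
        ("Marie Curie", "field of work", "radioactivity"),
    ]
    prompt = build_reverse_prompt(target_structure)
    # The LLM's generated paragraph is then paired with `target_structure`
    # as one synthetic training example for the extraction model.
    ```
    
    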

    My war, your war: understanding conflict in Africa and the Middle East through fiction film: Hotel Rwanda and The Kingdom

    Get PDF
    ABSTRACT This research focuses on how we understand conflict through fiction film. The thesis analyses two case studies, Hotel Rwanda (Terry George, 2004) and The Kingdom (Peter Berg, 2007), by focusing on three areas of study, namely globalisation, fictional narratives, and how we remember conflict. The discussion begins with globalisation with reference to narrative content and the economic and distributive authority of Hollywood. This is linked to film as a commodity and to how popular culture (through fiction film) intersects with the 'real', historical world and promotes ideological perceptions of events. Through an analysis of narrative structure, this research investigates how each narrative creates 'preferred' readings around ethnic groups and how it assumes a truthful depiction of its referents. The discussion focuses on how the Classic Hollywood narrative, voice, and rhetoric emerge within the two films. The investigation also examines how the films are located within the memory of conflict and how they create 'othering' through their representation and 'voice', a message that frames each film within the global environment. The research shows that although the films are fictional, their global message closely mirrors what is emerging within global media regarding the mainstream as opposed to the marginalised 'other', whether this relates to cultural imperialism, fantasy others, mythical others, or cultural and political associations of others.