115,554 research outputs found

    Book Review: “ASIA ON TOUR: Exploring the rise of Asian tourism”

    A review of the book "Asia on Tour: Exploring the Rise of Asian Tourism," edited by Tim Winter, Peggy Teo, and T. C. Chang, is presented.

    Stephen Chang Kim MFA Thesis Statement

    Stephen Chang Kim Thesis

    Publication List of PD Dr. Heide Hoffmann - Publications on Organic Farming

    Publications by Heide Hoffmann, C. Stroemel, S. Müller, G. Marx, N. Künkel, Ch.-L. Chang, W. Hübner, K. Reute

    A note on the phonetic evolution of yod-pa-red in Central Tibet.

    Despite currently inconsistent spellings such as yod-red (Tournadre 1996: 229-231 et passim, 2003), yog-red (Denwood 1999: 158 et passim), and yoḥo-red (Hu et al. 1989: 64 et passim) for the existential copula and auxiliary verb pronounced yɔ̀ɔ rèe (Chang and Shefts 1964: 15) or yo:re' (Tournadre 1996: 229-231), there is widespread agreement that yod-pa-red is the etymological origin of this morpheme (Chang and Chang 1968: 106ff, Tournadre 1996: 229). It is regularly spelled yod-pa-red in the newspaper articles collected from the Mi dmaṅs brñan par (人民畫報, People's Pictorial) by Kamil Sedláček (1972, e.g. p. 27, bsam-gyi yod-pa-red ‘he was thinking’). The pronunciation of this auxiliary is not what one would predict from the spelling. In all likelihood it is the frequency and unstressed syntactic position of the word that led to this deviant phonetic development. The existence of studies and handbooks of the Lhasa language spanning more than a century permits us to trace the phonetic development of yod-pa-red with surprising precision.

    Text to 3D Scene Generation with Rich Lexical Grounding

    The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments. Comment: 10 pages, 7 figures, 3 tables. To appear in ACL-IJCNLP 2015
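    The core step in this abstract is grounding lexical terms to concrete object referents. As a rough illustration only, the sketch below ranks candidate object categories for a query term by cosine similarity over toy embedding vectors; the vectors, categories, and function names are invented for this example and are not taken from the paper, which learns its groundings from annotated scenes.

    ```python
    # Minimal sketch of lexical grounding via embedding similarity.
    # The embeddings below are hand-made toy values, not learned ones.
    import math

    EMBEDDINGS = {
        "couch": [0.9, 0.1, 0.0, 0.2],
        "sofa":  [0.85, 0.15, 0.05, 0.25],
        "lamp":  [0.1, 0.9, 0.3, 0.0],
        "desk":  [0.2, 0.1, 0.9, 0.4],
    }

    def cosine(u, v):
        """Cosine similarity between two vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def ground(term, categories):
        """Rank candidate object categories by similarity to the query term."""
        q = EMBEDDINGS[term]
        return sorted(categories, key=lambda c: cosine(q, EMBEDDINGS[c]), reverse=True)

    if __name__ == "__main__":
        print(ground("sofa", ["couch", "lamp", "desk"]))  # -> ['couch', 'desk', 'lamp']
    ```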

    Local Visual Microphones: Improved Sound Extraction from Silent Video

    Sound waves cause small vibrations in nearby objects. A few techniques in the literature can extract sound from video. In this paper we study local vibration patterns at different image locations. We show that different locations in the image vibrate differently. We carefully aggregate local vibrations and produce sound whose quality improves on the state of the art. We show that local vibrations can exhibit a time delay because sound waves take time to travel through the air. We use this phenomenon to estimate sound direction. We also present a novel algorithm that speeds up sound extraction by two to three orders of magnitude and reaches real-time performance on 20 kHz video. Comment: Accepted to BMVC 201
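    As a hedged sketch of the align-then-aggregate idea described in the abstract (not the authors' code), the snippet below estimates each patch signal's delay by cross-correlation against a reference patch, undoes that delay, and takes a weighted mean; the toy sinusoid, uniform weights, and function names are assumptions made for illustration.

    ```python
    # Schematic recovery of a sound signal from delayed, noisy per-patch
    # vibration signals: align by estimated delay, then average.
    import numpy as np

    def estimate_delay(ref, sig):
        """Integer sample delay of `sig` relative to `ref` via cross-correlation."""
        corr = np.correlate(sig, ref, mode="full")
        return int(np.argmax(corr)) - (len(ref) - 1)

    def aggregate(patch_signals, weights=None):
        """Align each patch signal to the first one, then weighted-average them."""
        ref = patch_signals[0]
        if weights is None:
            weights = np.ones(len(patch_signals))
        aligned = []
        for sig in patch_signals:
            d = estimate_delay(ref, sig)
            aligned.append(np.roll(sig, -d))  # undo the patch's time delay
        aligned = np.asarray(aligned)
        w = np.asarray(weights, dtype=float)[:, None]
        return (w * aligned).sum(axis=0) / w.sum()

    if __name__ == "__main__":
        t = np.linspace(0, 1, 2000)
        clean = np.sin(2 * np.pi * 440 * t)            # a 440 Hz "sound"
        patches = [np.roll(clean, d) + 0.1 * np.random.randn(t.size)
                   for d in (0, 3, 7)]                 # delayed, noisy observations
        recovered = aggregate(patches)
        print(np.corrcoef(recovered, clean)[0, 1])     # close to 1.0
    ```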

    RED: Reinforced Encoder-Decoder Networks for Action Anticipation

    Action anticipation aims to detect an action before it happens. Many real-world applications in robotics and surveillance depend on this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations into actions. However, anticipation is based on a single past frame's representation, which ignores the historical trend. Moreover, it can anticipate only a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on the TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all of them.
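    The distinctive ingredient here is a sequence-level reward that favors early correct predictions. The toy function below is one plausible reading of that idea, not the paper's actual reward: it weights the log-probability assigned to the gold action by a decay factor so that earlier correct anticipations earn more; the decay parameter and inputs are assumptions.

    ```python
    # Hypothetical sequence-level reward: earlier correct anticipations
    # contribute more because their weight alpha**t decays with step t.
    import math

    def anticipation_reward(step_probs, gold_labels, alpha=0.9):
        """Discounted sum of log-probabilities of the gold action per step."""
        total = 0.0
        for t, (probs, gold) in enumerate(zip(step_probs, gold_labels)):
            total += (alpha ** t) * math.log(probs[gold])
        return total

    if __name__ == "__main__":
        # Two anticipated steps over 3 action classes; gold action is class 1.
        confident_early = [[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]]
        confident_late  = [[0.3, 0.4, 0.3], [0.1, 0.8, 0.1]]
        gold = [1, 1]
        print(anticipation_reward(confident_early, gold))  # higher (less negative)
        print(anticipation_reward(confident_late, gold))
    ```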

    MI 745 Seminar In Missiology

    Get PDF
    Curtis Chang, Engaging Unbelief: A Captivating Strategy from Augustine and Aquinas. Downers Grove, Illinois: InterVarsity Press, 2000.