133,644 research outputs found

    Chang, Ji-Mei

    Ph.D., University of Southern California, Department of Curriculum, Teaching, & Special Education, 1989
    M.S., University of Southern California, School of Education, 1978
    B.A., National Chengchi University, Department of Education, 1970
    https://scholarworks.sjsu.edu/erfa_bios/1002/thumbnail.jp

    Review: Christian Intercultural Communication

    A review of: Chang, C. Tim, and Ashley E. Chang. Christian Intercultural Communication: Sharing God’s Love with People of Other Cultures. Dubuque, IA: Kendall Hunt Publishing Company, 2021. 277 pages. $82.11.

    Stephen Chang Kim MFA Thesis Statement

    Stephen Chang Kim Thesis

    Reviews

    Index of the works reviewed: S. FENSTERMAKER; C. WEST (eds.), Doing Gender, Doing Difference: Inequality, Power, and Institutional Change

    Publication list of PD Dr. Heide Hoffmann - Publications on organic farming

    Publications by Heide Hoffmann, C. Stroemel, S. Müller, G. Marx, N. Künkel, Ch.-L. Chang, W. Hübner, K. Reute

    A Deep Primal-Dual Network for Guided Depth Super-Resolution

    In this paper, we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, high-resolution estimate from a noisy, low-resolution input depth map. Additionally, a high-resolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state of the art on multiple benchmarks. Comment: BMVC 2016
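    The unrolling idea in this abstract is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration of unrolling a first-order (Chambolle-Pock-style) primal-dual scheme with learnable per-iteration step sizes. It uses a plain TV regularizer and omits the paper's intensity guidance and non-local term; all names (UnrolledPrimalDual, iterations, lam, ...) are illustrative assumptions, not the authors' code.

    import math
    import torch
    import torch.nn as nn

    def grad(u):
        # Forward differences of a (B, 1, H, W) image; zero at the far border.
        dx = torch.zeros_like(u)
        dy = torch.zeros_like(u)
        dx[..., :, :-1] = u[..., :, 1:] - u[..., :, :-1]
        dy[..., :-1, :] = u[..., 1:, :] - u[..., :-1, :]
        return torch.cat((dx, dy), dim=1)  # (B, 2, H, W)

    def div(p):
        # Discrete divergence, the negative adjoint of grad.
        px, py = p[:, 0:1], p[:, 1:2]
        dx = torch.zeros_like(px)
        dy = torch.zeros_like(py)
        dx[..., :, 0] = px[..., :, 0]
        dx[..., :, 1:-1] = px[..., :, 1:-1] - px[..., :, :-2]
        dx[..., :, -1] = -px[..., :, -2]
        dy[..., 0, :] = py[..., 0, :]
        dy[..., 1:-1, :] = py[..., 1:-1, :] - py[..., :-2, :]
        dy[..., -1, :] = -py[..., -2, :]
        return dx + dy

    class UnrolledPrimalDual(nn.Module):
        # T unrolled Chambolle-Pock iterations for the TV-regularized model
        #   min_u |grad u|_1 + (lam / 2) * |u - f|^2,
        # where f is a noisy depth estimate (e.g. a CNN output) to refine.
        def __init__(self, iterations=10, lam=2.0):
            super().__init__()
            self.iterations = iterations
            init = math.log(0.25)
            self.log_tau = nn.Parameter(torch.full((iterations,), init))
            self.log_sigma = nn.Parameter(torch.full((iterations,), init))
            self.log_lam = nn.Parameter(torch.tensor(math.log(lam)))

        def forward(self, f):
            u, u_bar = f, f
            p = torch.zeros(f.size(0), 2, f.size(2), f.size(3), device=f.device)
            lam = self.log_lam.exp()
            for k in range(self.iterations):
                tau, sigma = self.log_tau[k].exp(), self.log_sigma[k].exp()
                # Dual ascent step, then projection onto the unit ball.
                p = p + sigma * grad(u_bar)
                p = p / p.pow(2).sum(dim=1, keepdim=True).sqrt().clamp(min=1.0)
                # Proximal (primal) descent step on the quadratic data term.
                u_new = (u + tau * (div(p) + lam * f)) / (1.0 + tau * lam)
                u_bar = 2.0 * u_new - u  # over-relaxation with theta = 1
                u = u_new
            return u

    Because every iteration is differentiable, a training loop can minimize, say, an L1 loss between UnrolledPrimalDual()(noisy_depth) and ground-truth depth, back-propagating through the step sizes and the data weight exactly as the abstract describes for the full model.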

    A note on the phonetic evolution of yod-pa-red in Central Tibet.

    Despite the current inconsistent spellings, such as yod-red (Tournadre 1996: 229-231 et passim, 2003), yog-red (Denwood 1999: 158 et passim), and yoḥo-red (Hu et al. 1989: 64 et passim), of the existential copula and auxiliary verb pronounced yɔɔ̀ reè (Chang and Shefts 1964: 15) or yo:re' (Tournadre 1996: 229-231), there is widespread agreement that yod-pa-red is the etymological origin of this morpheme (Chang and Chang 1968: 106ff, Tournadre 1996: 229). It is regularly spelled yod-pa-red in the newspaper articles collected from the Mi dmaṅs brñan par (人民畫報, People's Pictorial) by Kamil Sedláček (1972, e.g. p. 27, bsam-gyi yod-pa-red ‘he was thinking’). The pronunciation of this auxiliary is not what one would predict from the spelling. In all likelihood, it is the frequency and unstressed syntactic position of the word that led to this deviant phonetic development. The existence of studies and handbooks for the language of Lhasa over more than a century permits us to trace the phonetic development of yod-pa-red with surprising precision.

    Quantifying Facial Age by Posterior of Age Comparisons

    We introduce a novel approach for annotating a large quantity of in-the-wild facial images with high-quality posterior age distributions as labels. Each posterior provides a probability distribution of estimated ages for a face. Our approach is motivated by the observation that it is easier to distinguish who is the older of two people than to determine a person's actual age. Given a reference database with samples of known ages and a dataset to label, we can transfer reliable annotations from the former to the latter via human-in-the-loop comparisons. We show an effective way to transform such comparisons into posteriors via fully connected and softmax layers, so as to permit end-to-end training in a deep network. Thanks to this efficient and effective annotation approach, we collect a new large-scale facial age dataset, dubbed `MegaAge', which consists of 41,941 images. Data can be downloaded from our project page mmlab.ie.cuhk.edu.hk/projects/MegaAge and github.com/zyx2012/Age_estimation_BMVC2017. With the dataset, we train a network that jointly performs ordinal hyperplane classification and posterior distribution learning. Our approach achieves state-of-the-art results on popular benchmarks such as MORPH2, Adience, and the newly proposed MegaAge. Comment: To appear at BMVC 2017 (oral), revised version
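    The comparison-to-posterior step lends itself to a small sketch. The following is a minimal, hypothetical PyTorch illustration, assuming a backbone has already produced a vector of "older than reference r" scores per face; the class name, bin count, reference count, and KL-divergence objective are assumptions for illustration, not the paper's exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ComparisonsToPosterior(nn.Module):
        # Map pairwise "is this face older than reference r?" scores to a
        # posterior over discrete age bins with one fully connected layer
        # followed by a softmax, so the mapping trains end-to-end.
        def __init__(self, num_references=64, num_age_bins=70):
            super().__init__()
            self.fc = nn.Linear(num_references, num_age_bins)

        def forward(self, comparison_scores):
            # comparison_scores: (batch, num_references), each in [0, 1].
            return F.log_softmax(self.fc(comparison_scores), dim=-1)

    # Fitting predicted posteriors to posterior labels with KL divergence:
    model = ComparisonsToPosterior()
    scores = torch.rand(8, 64)                          # dummy comparison scores
    labels = torch.softmax(torch.randn(8, 70), dim=-1)  # dummy posterior labels
    loss = F.kl_div(model(scores), labels, reduction='batchmean')
    loss.backward()

    The softmax is what turns raw comparison evidence into a proper probability distribution over ages, which is exactly why a distribution-matching loss such as KL divergence is a natural fit here.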