1,555 research outputs found

    Doctor of Education Newsletter 2020

    WSU Doctor of Education Cohort 2020. This newsletter was created by the second Education Doctorate graduate student cohort in 2020.

    The powers in PowerPoint: Embedded authorities, documentary tastes, and institutional (second) orders in corporate Korea

    Microsoft PowerPoint is both the bane and the banality of contemporary South Korean office work. Corporate workers spend countless hours refining and crafting plans, proposals, and reports in PowerPoint, work that often leads to conflicts with coworkers and to overtime. This article theorizes the excessive attention to documents in modern office contexts. Whereas scholars have assumed that institutional documents align with institutional purposes, I describe a context in which making documents for individual purposes and making them for work exist in basic tension. Based on fieldwork in corporate Korea between 2013 and 2015, I describe how Korean office workers calibrate documents to the tastes of the superiors who populate the managerial chain. These practices leave little trace of real "work" on paper, but they are productive for navigating complex internal labor markets and for demonstrating a higher-order value of attention toward others. These findings suggest that institutional and individual authorities are not competing projects inside organizations but become entangled in increasingly complex participatory encounters, even as they are channeled through seemingly simple software like PowerPoint. [documents, expertise, authority, technology, South Korea]

    Comparison against 186 canid whole genome sequences reveals survival strategies of an ancient clonally transmissible canine tumor.

    Canine transmissible venereal tumor (CTVT) is a parasitic cancer clone that has propagated for thousands of years via sexual transfer of malignant cells. Little is understood about the mechanisms that converted an ancient tumor into the world's oldest known continuously propagating somatic cell lineage. We created the largest existing catalog of canine genome-wide variation and compared it against two CTVT genome sequences, thereby separating alleles derived from the founder's genome from somatic drivers of clonal transmissibility. We show that CTVT has undergone continuous adaptation to its transmissible allograft niche, with overlapping mutations at every step of immunosurveillance, particularly self-antigen presentation and apoptosis. We also identified chronologically early somatic mutations in oncogenesis- and immune-related genes that may represent key initiators of clonal transmissibility. Thus, we provide the first insights into the specific genomic aberrations that underlie CTVT's dogged perseverance in canids around the world.
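    At its core, the comparison the abstract describes is a set operation over variant calls: tumor variants shared with the population catalog are presumed founder germline alleles, while tumor-private variants are candidate somatic drivers. Below is a minimal Python sketch of that logic, assuming variants are modeled as (chrom, pos, ref, alt) tuples; all names and coordinates are hypothetical, not drawn from the study.

    # Illustrative sketch (not the paper's pipeline): separating founder
    # germline alleles from candidate somatic mutations by set comparison
    # against a population catalog of canid variants.

    def partition_variants(tumor_calls, catalog):
        """Split tumor variant calls into likely founder alleles and
        candidate somatic mutations.

        A tumor variant also present in the canid population catalog is
        presumed inherited from the founder dog's germline; a variant
        absent from the catalog is a candidate somatic mutation.
        """
        founder = tumor_calls & catalog   # shared with the population: germline
        somatic = tumor_calls - catalog   # tumor-private: candidate somatic
        return founder, somatic

    # Toy usage with made-up (chrom, pos, ref, alt) coordinates
    catalog = {("chr5", 1234, "A", "G"), ("chr12", 9876, "C", "T")}
    tumor = {("chr5", 1234, "A", "G"), ("chr3", 42, "G", "A")}
    founder, somatic = partition_variants(tumor, catalog)
    print(len(founder), len(somatic))  # 1 founder allele, 1 candidate somatic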

    PaLM: Scaling Language Modeling with Pathways

    Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call the Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis of bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and potential mitigation strategies.
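    Few-shot learning here means conditioning the model on a handful of labeled in-context examples at inference time, with no gradient updates. A minimal Python sketch of prompt construction follows, assuming a generic text-completion interface; the model call and all example text are hypothetical, not PaLM's actual API.

    # Illustrative sketch of few-shot prompting: the model adapts to a
    # task from a few labeled demonstrations placed in its context.

    EXAMPLES = [
        ("Review: The plot dragged badly.", "Sentiment: negative"),
        ("Review: A warm, funny surprise.", "Sentiment: positive"),
    ]

    def build_few_shot_prompt(examples, query):
        """Concatenate labeled demonstrations, then the unlabeled query."""
        shots = "\n\n".join(f"{x}\n{y}" for x, y in examples)
        return f"{shots}\n\nReview: {query}\nSentiment:"

    prompt = build_few_shot_prompt(EXAMPLES, "Quietly devastating and beautiful.")
    # completion = model.generate(prompt)  # hypothetical call; the abstract
    #                                      # does not describe PaLM's interface
    print(prompt)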
