149 research outputs found

    Long-Term Relations Among Prosocial-Media Use, Empathy, and Prosocial Behavior

    Despite recent growth of research on the effects of prosocial media, processes underlying these effects are not well understood. Two studies explored theoretically relevant mediators and moderators of the effects of prosocial media on helping. Study 1 examined associations among prosocial- and violent-media use, empathy, and helping in samples from seven countries. Prosocial-media use was positively associated with helping. This effect was mediated by empathy and was similar across cultures. Study 2 explored longitudinal relations among prosocial-video-game use, violent-video-game use, empathy, and helping in a large sample of Singaporean children and adolescents measured three times across 2 years. Path analyses showed significant longitudinal effects of prosocial- and violent-video-game use on prosocial behavior through empathy. Latent-growth-curve modeling for the 2-year period revealed that change in video-game use significantly affected change in helping, and that this relationship was mediated by change in empathy.
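
    The mediated pathway (media use → empathy → helping) is the product-of-coefficients idea from mediation analysis. Below is a minimal sketch on synthetic data; the variable names and effect sizes are illustrative, and the studies themselves used path analysis and latent-growth-curve models rather than this simplified two-regression version.

        # Product-of-coefficients mediation sketch on synthetic data.
        # All names and effect sizes are illustrative, not the study's.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        media_use = rng.normal(size=n)
        empathy = 0.4 * media_use + rng.normal(size=n)                   # path a
        helping = 0.5 * empathy + 0.1 * media_use + rng.normal(size=n)   # paths b, c'

        def ols_slopes(y, *xs):
            """Slopes from an ordinary-least-squares fit of y on xs (with intercept)."""
            X = np.column_stack([np.ones(len(y)), *xs])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta[1:]

        a = ols_slopes(empathy, media_use)[0]                  # media use -> empathy
        b, c_direct = ols_slopes(helping, empathy, media_use)  # empathy -> helping, direct
        print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_direct:.3f}")

    With these synthetic coefficients the indirect effect comes out near 0.4 x 0.5 = 0.2, mirroring the qualitative pattern reported: media use shifts empathy, which in turn shifts helping.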

    Grouping practices in the primary school: what influences change?

    During the 1990s, there was considerable emphasis on promoting particular kinds of pupil grouping as a means of raising educational standards. This survey of 2000 primary schools explored the extent to which schools had changed their grouping practices in response to this, the nature of the changes made and the reasons for those changes. Forty-eight percent of responding schools reported that they had made no change. Twenty-two percent reported changes because of the literacy hour, 2% because of the numeracy hour, 7% because of a combination of these and 21% for other reasons. Important influences on decisions about the types of grouping adopted were related to pupil learning and differentiation, teaching, the implementation of the National Literacy Strategy, practical issues and school self-evaluation.

    TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings

    In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is the fifth Google domain-specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than InfiniBand, OCSes and underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar-sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse-scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premise data center.
    Comment: 15 pages; 16 figures; to be published at ISCA 2023 (the International Symposium on Computer Architecture).
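
    For a sense of the interconnect scale: a plain 3D torus with 16 nodes per side gives the paper's 4096 chips, each with six wraparound links. Below is a sketch of the neighbor arithmetic; the 16x16x16 shape and the untwisted wiring are assumptions for illustration, since the OCSes let TPU v4 realize other shapes, including the twisted variant.

        # Neighbor arithmetic in a plain k-ary 3-cube (3D torus).
        # The 16x16x16 shape (16**3 == 4096 chips) and the untwisted wiring
        # are assumptions for illustration; TPU v4's OCSes can configure
        # other topologies, including a twisted 3D torus.
        from itertools import product

        K = 16  # nodes per dimension; K**3 == 4096

        def torus_neighbors(x, y, z, k=K):
            """Six wraparound neighbors of node (x, y, z)."""
            steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
            return [((x+dx) % k, (y+dy) % k, (z+dz) % k) for dx, dy, dz in steps]

        assert all(len(set(torus_neighbors(*p))) == 6 for p in product(range(K), repeat=3))
        print(f"chips: {K**3}, bidirectional links: {K**3 * 6 // 2}")

    The point of the OCS layer is that this wiring is not fixed: the same 4096 chips can be rewired into different torus variants without physical recabling.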

    In-Datacenter Performance Analysis of a Tensor Processing Unit

    Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU) and deployed in datacenters since 2015, that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X-30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X-80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
    Comment: 17 pages, 11 figures, 8 tables. To appear at the 44th International Symposium on Computer Architecture (ISCA), Toronto, Canada, June 24-28, 2017.
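
    The 92 TOPS peak is easy to sanity-check from the stated MAC count (65,536 = 256 x 256), counting each multiply-accumulate as two operations; the ~700 MHz clock below is inferred from the quoted figures rather than stated in this abstract.

        # Back-of-the-envelope check of the quoted 92 TOPS peak.
        # The ~700 MHz clock is inferred from the stated figures, not
        # given in the abstract; each MAC counts as two ops (multiply + add).
        macs = 256 * 256      # 65,536 8-bit multiply-accumulate units
        ops_per_mac = 2       # one multiply + one accumulate
        clock_hz = 700e6      # ~700 MHz (assumed)

        peak_tops = macs * ops_per_mac * clock_hz / 1e12
        print(f"peak ~= {peak_tops:.1f} TOPS")  # ~= 91.8, i.e. the quoted 92 TOPS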

    A user's guide to the Encyclopedia of DNA elements (ENCODE)

    The mission of the Encyclopedia of DNA Elements (ENCODE) Project is to enable the scientific and medical communities to interpret the human genome sequence and apply it to understand human biology and improve health. The ENCODE Consortium is integrating multiple technologies and approaches in a collective effort to discover and define the functional elements encoded in the human genome, including genes, transcripts, and transcriptional regulatory regions, together with their attendant chromatin states and DNA methylation patterns. In the process, standards to ensure high-quality data have been implemented, and novel algorithms have been developed to facilitate analysis. Data and derived results are made available through a freely accessible database. Here we provide an overview of the project and the resources it is generating and illustrate the application of ENCODE data to interpret the human genome.
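
    As an illustration of the "freely accessible database": the present-day ENCODE portal exposes a JSON search interface. The endpoint and field names below reflect that portal at encodeproject.org, which postdates this user's guide, so treat them as assumptions rather than the interface the paper describes.

        # Sketch of querying the ENCODE portal's JSON search interface.
        # Endpoint and field names reflect the present-day portal
        # (encodeproject.org), which postdates this paper; treat them
        # as assumptions rather than the interface described here.
        import json
        import urllib.request

        url = "https://www.encodeproject.org/search/?type=Experiment&format=json&limit=5"
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as resp:
            results = json.load(resp)

        for hit in results.get("@graph", []):    # search hits live under "@graph"
            print(hit.get("accession"), hit.get("assay_title"))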

    Reproduction and Breeding

    No full text available