
    A Survey on Design Methodologies for Accelerating Deep Learning on Heterogeneous Architectures

    In recent years, the field of Deep Learning has seen many disruptive and impactful advancements. Given the increasing complexity of deep neural networks, the need for efficient hardware accelerators in the design of heterogeneous HPC platforms has become increasingly pressing. The design of Deep Learning accelerators requires a multidisciplinary approach, combining expertise from several areas, spanning from computer architecture to approximate computing, computational models, and machine learning algorithms. Several methodologies and tools have been proposed to design accelerators for Deep Learning, including hardware-software co-design approaches, high-level synthesis methods, customized compilers, and methodologies for design space exploration, modeling, and simulation. These methodologies aim to maximize the exploitable parallelism and minimize data movement to achieve high performance and energy efficiency. This survey provides a holistic review of the most influential design methodologies and EDA tools proposed in recent years to implement Deep Learning accelerators, offering the reader a wide perspective on this rapidly evolving field. In particular, this work complements the previous survey by the same authors [203], which focuses on Deep Learning hardware accelerators for heterogeneous HPC platforms.
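
    A minimal sketch of what such a design space exploration can look like, assuming a matrix-multiply accelerator with a single on-chip buffer: the loop below enumerates tilings and keeps the one that minimizes off-chip data movement. The workload dimensions, buffer size, and cost model are invented for the illustration and are not taken from the survey.

        // Hypothetical DSE sketch: enumerate loop tilings for C[M][N] = A[M][K] * B[K][N]
        // and pick the tiling that fits on chip and minimizes off-chip transfers.
        #include <cstdint>
        #include <iostream>
        #include <limits>

        int main() {
            const int64_t M = 1024, N = 1024, K = 1024; // assumed workload dimensions
            const int64_t BUF = 256 * 1024;             // assumed on-chip buffer (elements)

            int64_t bestTm = 0, bestTn = 0, bestTk = 0;
            int64_t bestMoves = std::numeric_limits<int64_t>::max();

            for (int64_t Tm = 8; Tm <= M; Tm *= 2)
                for (int64_t Tn = 8; Tn <= N; Tn *= 2)
                    for (int64_t Tk = 8; Tk <= K; Tk *= 2) {
                        // The A, B, and C tiles must fit in the buffer simultaneously.
                        if (Tm * Tk + Tk * Tn + Tm * Tn > BUF) continue;
                        // Off-chip element transfers under an output-stationary schedule:
                        // A and B tiles are streamed in for every tile iteration, while
                        // each C tile stays on chip and is written back exactly once.
                        int64_t moves =
                            (M / Tm) * (N / Tn) * (K / Tk) * (Tm * Tk + Tk * Tn) + M * N;
                        if (moves < bestMoves) {
                            bestMoves = moves;
                            bestTm = Tm; bestTn = Tn; bestTk = Tk;
                        }
                    }

            std::cout << "best tiling " << bestTm << "x" << bestTn << "x" << bestTk
                      << ", off-chip transfers: " << bestMoves << " elements\n";
        }

    Real EDA flows replace this brute-force loop and toy cost model with analytical or simulation-based estimates of latency and energy, but the overall structure is the same: enumerate candidate designs, prune infeasible points, and rank the rest with a cost model.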

    Coaching parents of children with ADHD: A Western Australian study

    Parents of children with Attention Deficit/Hyperactivity Disorder (ADHD) often experience emotional and behavioural difficulties that contribute to stress and conflict in their family relationships. ADHD Parent Coaching is a promising intervention for these families; however, little is known about its effectiveness. This study explored the effects of parent coaching on parents of children with ADHD using a descriptive case study methodology. A secondary purpose was to measure any reduction in stress and homework problems. A workshop offering solutions to homework-related issues was conducted over two consecutive weeks. Parents who attended (N=10) were offered parent coaching, and five parents were subsequently coached over a period of six to eleven weeks. Parents’ experiences of engaging with coaching were explored using thematic analysis of interviews conducted following the intervention (N=4). Parents also completed a Parent Stress Index (PSI) and Homework Problem Checklist (HPC) before and after the intervention. Themes relating to mindfulness in parenting, changed parental cognitions, awareness of parenting styles, improved parent-child relationships, impacts on the wider family, and improved self-efficacy emerged from the interviews. The PSI results indicated significantly lower total parent stress scores following the intervention, and HPC scores also improved significantly. The results suggest that parent coaching may produce positive outcomes, including reduced parental stress and increased self-efficacy and parental mindfulness.

    Workshop Report: Campus Bridging: Reducing Obstacles on the Path to Big Answers 2015

    For researchers whose experiments require large-scale cyberinfrastructure, there exist significant challenges to successful completion. These challenges are broad and go far beyond the simple fact that there are not enough large-scale resources available: the solvable issues range from a lack of documentation written for a non-technical audience to the need for greater consistency in system configuration, software configuration, and software availability on the large-scale resources at national-tier supercomputing centers, among a number of other challenges. Campus Bridging is a relatively young discipline that aims to mitigate these issues for the academic end user, for whom the entire process can feel like a path comprised entirely of obstacles. The solutions to these problems must by necessity include multiple approaches, with focus not only on the end users but also on the system administrators responsible for supporting these resources, as well as on the systems themselves. These resources include not only those at the supercomputing centers but also those at the campus or departmental level, and even the personal computing devices researchers use to complete their work. This workshop report compiles the results of a half-day workshop held in conjunction with IEEE Cluster 2015 in Chicago, IL. (Funding: NSF XSEDE)

    CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    This report documents the results of a study to address the long-range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study aims to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment in which physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently, in cooperation with other physics-based simulations, to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development toward achieving the CFD vision in 2030.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Parallel Processes in HPX: Designing an Infrastructure for Adaptive Resource Management

    Advancements in cutting-edge technologies have enabled better energy efficiency as well as scaling of computational power for the latest High Performance Computing (HPC) systems. However, the complexity introduced by hybrid architectures, as well as by emerging classes of applications, has led to poor computational scalability under conventional execution models. Alternative means of computation that address these bottlenecks are therefore warranted. More precisely, dynamic adaptive resource management, from both the system's and the application's perspective, is essential for better computational scalability and efficiency. This research presents and expands the notion of Parallel Processes as a placeholder for procedure definitions targeted at one or more synchronous domains, as metadata for computation and resource management, and as infrastructure for dynamic policy deployment. In addition, the research presents guidelines for a resource management framework in the HPX runtime system. Further, it lists design principles for the scalability of the Active Global Address Space (AGAS), a necessary feature for Parallel Processes. To verify the usefulness of Parallel Processes, a preliminary performance evaluation of different task scheduling policies is carried out using two different applications: Unbalanced Tree Search, a reference dynamic graph application implemented in HPX as part of this research, and MiniGhost, a reference stencil-based application using the bulk synchronous parallel model. The results show that different scheduling policies provide better performance for different classes of applications; moreover, for the same application class, one policy fared better than the others in certain instances and worse in others, supporting the hypothesis that a dynamic adaptive resource management infrastructure, capable of deploying different policies and task granularities, is needed for scalable distributed computing.
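
    To see why such irregular workloads favor dynamic scheduling, consider the sketch below: it counts the nodes of a reproducible but irregularly branching tree, in the spirit of Unbalanced Tree Search, using plain C++ tasks. The branching rule, depth bound, and task-spawn cutoff are invented for the illustration; the original study used HPX rather than std::async.

        // Hypothetical UTS-style sketch: the tree shape is unknown until traversal,
        // so static work partitioning is ineffective and tasks must be spawned
        // dynamically as the tree unfolds.
        #include <cstdint>
        #include <future>
        #include <iostream>
        #include <vector>

        // Deterministic per-node branching factor derived from the node id, so the
        // tree is irregular but reproducible across runs.
        int numChildren(uint64_t id, int depth) {
            if (depth >= 12) return 0;               // bound the tree depth
            uint64_t h = id * 0x9E3779B97F4A7C15ULL; // cheap multiplicative hash
            return static_cast<int>(h >> 61);        // 0..7 children
        }

        uint64_t countNodes(uint64_t id, int depth) {
            uint64_t total = 1; // count this node
            int kids = numChildren(id, depth);
            std::vector<std::future<uint64_t>> tasks;
            for (int c = 0; c < kids; ++c) {
                uint64_t child = id * 8 + c + 1;
                if (depth < 2) // spawn tasks near the root, recurse serially below
                    tasks.push_back(std::async(std::launch::async,
                                               countNodes, child, depth + 1));
                else
                    total += countNodes(child, depth + 1);
            }
            for (auto& t : tasks) total += t.get();
            return total;
        }

        int main() {
            std::cout << "nodes visited: " << countNodes(1, 0) << "\n";
        }

    Because subtree sizes vary wildly and are unknown in advance, the scheduling policy that performs best for a workload like this can differ from the one that suits a regular stencil code like MiniGhost, which is consistent with the evaluation summarized above.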