3 research outputs found

    Exploring the performance and mapping of HPC applications to platforms in the cloud


    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of open questions in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to which services are essential to make its usage easier. Moreover, the discussion of the right pricing and contractual models to fit both small and large users is relevant for the sustainability of HPC clouds. This paper presents a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant given the rapidly increasing wave of new HPC applications coming from big data and artificial intelligence.
    Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR)
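    The hybrid on-premise/cloud trade-off described in this abstract can be illustrated with a toy cost comparison. The sketch below is not taken from the paper; the function names, prices, utilization figures, and node counts are all hypothetical illustrations of the pay-as-you-go versus amortized-capex reasoning.

```python
# Toy sketch (hypothetical numbers only) of the on-premise vs. pay-as-you-go
# trade-off: amortize cluster purchase/operation cost per hour, and price a
# cloud burst for peak demand separately.

def amortized_onprem_cost_per_hour(capex, lifetime_years, opex_per_year):
    """Spread purchase cost over the cluster lifetime and add yearly operations."""
    hours = lifetime_years * 365 * 24
    return (capex + opex_per_year * lifetime_years) / hours

def cloud_burst_cost(hours, price_per_node_hour, nodes):
    """Pay-as-you-go: pay only for the node-hours actually consumed."""
    return hours * price_per_node_hour * nodes

if __name__ == "__main__":
    onprem_rate = amortized_onprem_cost_per_hour(
        capex=500_000, lifetime_years=5, opex_per_year=50_000)
    # Hypothetical peak demand: burst 64 extra nodes to the cloud for
    # 200 hours per year over the same 5-year period.
    burst = cloud_burst_cost(hours=200 * 5, price_per_node_hour=3.0, nodes=64)
    print(f"on-premise cluster cost per hour (amortized): ${onprem_rate:.2f}")
    print(f"cloud burst cost over 5 years: ${burst:,.0f}")
```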

    A Framework for Efficient Cluster Computing Services in a Collaborative University Environment

    Parallel computing techniques have become more important, especially now that we have effectively reached the limit on individual processor speeds due to unacceptable levels of heat generation. Multi-core processors are already the norm and core counts will continue to rise in the near future. However, clusters of machines remain the next major step up in system performance, effectively allowing vast numbers of cores to be devoted to any given problem. It is in that context that this Professional Doctorate thesis and Portfolio exist. Most parallel or cluster-based software is custom built for an application using techniques such as OpenMP or MPI. But what if the capability to write such software does not exist, or what if the very act of writing a new piece of software compromises the integrity of an industry-standard piece of software currently being used in a research project? The first outcome was to explore how grid/cluster computing teaching and learning facilities could be made accessible to students and teaching staff alike within the Department of Computing, Engineering & Technology in order to enhance the student experience. This was achieved through the development of VCNet, a virtual technology cluster solution, based on the design of the University of Sunderland Cluster Computer (USCC) and capable of running behind a dual-boot arrangement on standard teaching machines. The second outcome of this Professional Doctorate was to produce a framework for efficient cluster computing services in a collaborative university environment. Although small by national and international standards, the USCC, with its forty machines and 160 cores, packs a mighty punch in computing terms. Through the work of this doctorate, ‘supercomputer class’ performance has been successfully used in cross-disciplinary research through the development and use of the Application Framework for Computational Chemistry (AFCC). In addition, I will also discuss the contribution this doctorate has made within the context of my community of practice, by enhancing both my teaching and learning contribution as well as cross-disciplinary research and application.
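    The abstract names MPI as a typical technique for custom cluster software. As a minimal illustration only (not code from the thesis, VCNet, or AFCC), the sketch below uses the mpi4py binding to split a trivial sum across ranks; the availability of mpi4py and the mpirun launch command are assumptions.

```python
# Minimal MPI illustration with mpi4py (assumed installed): each rank computes
# a partial sum of a range and rank 0 collects the total via a reduction.
# Example launch (launcher name is an assumption): mpirun -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Split the range 0..999 across ranks; each rank sums its own stride of values.
n = 1000
local = sum(range(rank, n, size))

# Combine the partial results on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"total computed across {size} ranks: {total}")
```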