113 research outputs found

    The Future of Supercomputing at NASA

    Get PDF
    Computers are a growing part of everyday life in many ways. For some of the biggest and most interesting problems, people have turned to some of the biggest and most interesting computers. This is the general area of High Performance Computing (HPC), or supercomputing. One organization with a long and storied history in supercomputing is the National Aeronautics and Space Administration (NASA). NASA's supercomputers serve a variety of missions, from weather forecasting to helping astronauts at the International Space Station. As problems continue to grow more complex, the growth of supercomputing seems inevitable.

    Star Formation with Adaptive Mesh Refinement Radiation Hydrodynamics

    Full text link
    I provide a pedagogic review of adaptive mesh refinement (AMR) radiation hydrodynamics (RHD) methods and codes used in simulations of star formation, at a level suitable for researchers who are not computational experts. I begin with a brief overview of the types of RHD processes that are most important to star formation, and then I formally introduce the equations of RHD and the approximations one uses to render them computationally tractable. I discuss strategies for solving these approximate equations on adaptive grids, with particular emphasis on identifying the main advantages and disadvantages of various approximations and numerical approaches. Finally, I conclude by discussing areas ripe for improvement. Comment: 8 pages, to appear in the Proceedings of IAU Symposium 270: Computational Star Formation
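
    As a toy illustration of the refinement step at the heart of AMR (not code from the review, and with radiation transport omitted entirely): a one-dimensional sweep that flags cells where the solution jumps sharply and splits only those, so resolution concentrates around steep fronts. The threshold rule and the piecewise-constant re-fill are simplifying assumptions.

```python
# Toy 1D AMR refinement sweep: split only the cells near sharp jumps.
import numpy as np

def refine(x_edges, u, threshold=0.1, max_levels=4):
    """One sweep per level: split any cell adjacent to an interface whose
    jump exceeds `threshold` times the global range of u."""
    for _ in range(max_levels):
        jump = np.abs(np.diff(u))
        flag = jump > threshold * (u.max() - u.min())
        if not flag.any():
            break
        split = np.zeros(u.size, dtype=bool)
        split[:-1] |= flag          # cell left of a flagged interface
        split[1:] |= flag           # cell right of a flagged interface
        new_edges, new_u = [x_edges[0]], []
        for i in range(u.size):
            left, right = x_edges[i], x_edges[i + 1]
            if split[i]:
                new_edges += [(left + right) / 2, right]
                new_u += [u[i], u[i]]   # real codes re-interpolate here
            else:
                new_edges.append(right)
                new_u.append(u[i])
        x_edges, u = np.array(new_edges), np.array(new_u)
    return x_edges, u

# A steep front at x = 0.5 attracts all the refinement.
edges = np.linspace(0.0, 1.0, 17)
centers = 0.5 * (edges[:-1] + edges[1:])
u = np.tanh((centers - 0.5) / 0.02)
edges, u = refine(edges, u)
print(f"{u.size} cells after refinement")   # clustered near the front
```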

    Minimizing synchronizations in sparse iterative solvers for distributed supercomputers

    Get PDF
    Eliminating synchronizations is one of the important techniques for minimizing communications in modern high performance computing. This paper discusses principles for reducing the communications due to global synchronizations in sparse iterative solvers on distributed supercomputers. We demonstrate how to minimize global synchronizations by rescheduling a typical Krylov subspace method. The benefit of minimizing synchronizations is shown in a theoretical analysis and verified by numerical experiments using up to 900 processors. The experiments also show that the communication complexities of some structured sparse matrix-vector multiplications and of the global communications on the underlying supercomputer are on the order of P^(1/2.5) and P^(4/5) respectively, where P is the number of processors. The experiments were carried out on a Dawning 5000A.
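
    To make the rescheduling idea concrete, here is a minimal sketch, assuming the Krylov method behaves like conjugate gradients, of the classic single-reduction reformulation due to Chronopoulos and Gear: both inner products of an iteration are formed at the same dependency point, so a distributed implementation needs one global allreduce per iteration instead of two. This illustrates the general technique, not the paper's code.

```python
# Single-reduction CG (Chronopoulos-Gear style): the two dot products per
# iteration share one dependency point, so on P processes they could be
# summed in a single MPI_Allreduce of length 2 instead of two allreduces.
import numpy as np

def cg_single_reduction(A, b, x0=None, tol=1e-10, maxiter=200):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    w = A @ r
    gamma = r @ r                  # (r, r)
    delta = w @ r                  # (Ar, r) -- fused with gamma
    alpha = gamma / delta
    p, s = r.copy(), w.copy()      # s tracks A @ p without an extra matvec
    for _ in range(maxiter):
        x += alpha * p
        r -= alpha * s
        w = A @ r
        gamma_new = r @ r          # both dots computed back to back:
        delta = w @ r              # one fused reduction in a parallel run
        if np.sqrt(gamma_new) < tol:
            break
        beta = gamma_new / gamma
        alpha = gamma_new / (delta - beta * gamma_new / alpha)
        gamma = gamma_new
        p = r + beta * p
        s = w + beta * s
    return x

# Usage on a small SPD system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = cg_single_reduction(A, b)
print(np.linalg.norm(A @ x - b))   # ~1e-10
```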

    Ethical Reflections of Human Brain Research and Smart Information Systems

    Get PDF
    This case study explores ethical issues that relate to the use of Smart Information Systems (SIS) in human brain research. The case study is based on the Human Brain Project (HBP), a European Union funded project. The project uses SIS to build a research infrastructure aimed at the advancement of neuroscience, medicine and computing. The case study was conducted to assess how the HBP recognises and deals with ethical concerns relating to the use of SIS in human brain research. To understand some of the ethical implications of using SIS in human brain research, data was collected through a document review and three semi-structured interviews with participants from the HBP. Results from the case study indicate that the main ethical concerns with the use of SIS in human brain research include privacy and confidentiality, the security of personal data, discrimination arising from bias, and access to the SIS and their outcomes. Furthermore, there is an issue with the transparency of the processes involved in human brain research. In response to these issues, the HBP has put in place different mechanisms to ensure responsible research and innovation through a dedicated program. The paper provides lessons for the responsible implementation of SIS in research, including human brain research, and extends some of the mechanisms that could be employed by researchers and developers of SIS in addressing such issues.

    Inner product computation for sparse iterative solvers on distributed supercomputer

    Get PDF
    Recent years have witnessed that iterative Krylov methods, without re-design, are not suitable for distributed supercomputers because of their intensive global communications. It is well accepted that re-engineering Krylov methods for a prescribed computer architecture is necessary and important for achieving higher performance and scalability. This paper focuses on simple and practical ways to re-organize Krylov methods and improve their performance on current heterogeneous distributed supercomputers. In contrast with most current software development for Krylov methods, which usually focuses on efficient matrix-vector multiplication, this paper focuses on how inner products are computed on supercomputers and explains why inner product computation on current heterogeneous distributed supercomputers is crucial for scalable Krylov methods. A communication complexity analysis shows how inner product computation can become the performance bottleneck of (inner) product-type iterative solvers on distributed supercomputers due to global communications. Principles for reducing such global communications are discussed. The importance of minimizing communications is demonstrated by experiments using up to 900 processors, carried out on a Dawning 5000A, one of the fastest and earliest heterogeneous supercomputers in the world. Both the analysis and the experiments indicate that inner product computation is very likely to be the most challenging kernel for inner-product-based iterative solvers to reach exascale.
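
    A minimal sketch of the kernel in question, assuming mpi4py and an MPI runtime (illustrative, not the paper's code): each distributed inner product ends in a global allreduce that every rank must wait for, and MPI-3 nonblocking reductions are one standard way to hide part of that latency behind independent local work.

```python
# Distributed inner product: local partial dot, then a global reduction.
# Run with e.g.: mpirun -n 4 python dot.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1_000_000                      # each rank owns a slice of x and y
rng = np.random.default_rng(rank)
x = rng.standard_normal(n_local)
y = rng.standard_normal(n_local)

# Blocking version: every rank stalls here until the reduction completes.
dot = comm.allreduce(x @ y, op=MPI.SUM)

# Nonblocking version: start the reduction, overlap independent local work
# (e.g. part of the next sparse matvec), then wait for the result.
local = np.array([x @ y])
glob = np.empty(1)
req = comm.Iallreduce(local, glob, op=MPI.SUM)
z = np.abs(x) + np.abs(y)                # stand-in for overlappable work
req.Wait()

if rank == 0:
    print(f"{size} ranks, global dot = {glob[0]:.6e}")
```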

    OverSketch: Approximate Matrix Multiplication for the Cloud

    Full text link
    We propose OverSketch, an approximate algorithm for distributed matrix multiplication in serverless computing. OverSketch leverages ideas from matrix sketching and high-performance computing to enable cost-efficient multiplication that is resilient to faults and straggling nodes pervasive in low-cost serverless architectures. We establish statistical guarantees on the accuracy of OverSketch and empirically validate our results by solving a large-scale linear program using interior-point methods and demonstrate a 34% reduction in compute time on AWS Lambda. Comment: Published in Proc. IEEE Big Data 2018. Updated version provides details of distributed sketching and highlights other advantages of OverSketch
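
    A simplified sketch of the underlying idea, with a dense Gaussian sketch standing in for the paper's count sketches and none of OverSketch's straggler redundancy: A @ B is approximated by (A S)(S^T B), where S is a d x k sketch matrix with E[S S^T] = I, trading a controlled O(1/sqrt(k)) error for a much cheaper product.

```python
# Sketched approximate matrix multiplication (simplified, Gaussian sketch).
import numpy as np

def sketched_matmul(A, B, k, seed=0):
    """Approximate A @ B as (A S)(S^T B) with a d x k Gaussian sketch S."""
    d = A.shape[1]
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((d, k)) / np.sqrt(k)   # E[S S^T] = I_d
    return (A @ S) @ (S.T @ B)    # cost O((n + m) d k) instead of O(n d m)

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 2000))
B = rng.standard_normal((2000, 300))
err = np.linalg.norm(A @ B - sketched_matmul(A, B, k=500), "fro")
# Standard sketching guarantee: err <= O(1/sqrt(k)) * ||A||_F * ||B||_F.
print(err / (np.linalg.norm(A) * np.linalg.norm(B)))   # ~0.06 here
```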

    04-13-2020 NASA Oklahoma Space Grant Funds SWOSU Student to Research Supercomputer Education

    Get PDF
    Southwestern Oklahoma State University student Ezgi Gursel of Weatherford is one of four SWOSU students receiving funding to conduct National Aeronautics and Space Administration (NASA)-related research.

    The next-generation of NASA Supercomputers

    Get PDF
    For some of the biggest and most interesting problems, people have turned to some of the biggest and most interesting computers. This is the general area of High Performance Computing (HPC), or supercomputing. One organization with a long and storied history in supercomputing is the National Aeronautics and Space Administration (NASA). NASA's supercomputers serve a variety of missions, from weather forecasting to helping astronauts at the International Space Station. As problems continue to grow more complex, the growth of supercomputing seems inevitable.