
    Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation - in particular a very fine-grained task parallelism. We also discuss the software management, code peer review and continuous integration testing required for a project of this complexity. Comment: EASC 2014 conference proceedings
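    As an illustration of the neighbor-searching concept the abstract refers to, the sketch below builds a pair list with a classic cell-list scheme in NumPy. It is not the GROMACS cluster pair-search kernel; the box size, cutoff, and particle count are arbitrary assumptions for illustration.

```python
# Minimal cell-list neighbour search sketch (not the GROMACS kernel).
import numpy as np

def neighbour_pairs(positions, box, cutoff):
    """Return pairs (i, j), i < j, closer than `cutoff` in a cubic periodic box."""
    n_cells = max(1, int(box // cutoff))                 # cells per dimension
    cell_size = box / n_cells
    cell_of = [tuple(c) for c in (np.floor(positions / cell_size).astype(int) % n_cells)]

    cells = {}                                           # cell -> particle indices
    for i, c in enumerate(cell_of):
        cells.setdefault(c, []).append(i)

    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    pairs = []
    for c, members in cells.items():
        # Deduplicate wrapped neighbour cells so small boxes are not double-counted.
        neighbours = {tuple((c[d] + o[d]) % n_cells for d in range(3)) for o in offsets}
        for nb in neighbours:
            for i in members:
                for j in cells.get(nb, []):
                    if j <= i:
                        continue
                    d = positions[i] - positions[j]
                    d -= box * np.round(d / box)         # minimum-image convention
                    if d @ d < cutoff * cutoff:
                        pairs.append((i, j))
    return pairs

# Example: 200 random particles in a 5 nm box with a 1.0 nm cutoff.
rng = np.random.default_rng(0)
print(len(neighbour_pairs(rng.uniform(0.0, 5.0, (200, 3)), 5.0, 1.0)))
```

    Production codes replace the per-particle loop with blocked, SIMD- and GPU-friendly cluster lists, but the cell decomposition above is the idea they start from.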

    Computer Simulations of Cosmic Reionization

    The cosmic reionization of hydrogen was the last major phase transition in the evolution of the universe, which drastically changed the ionization and thermal conditions in the cosmic gas. To the best of our knowledge today, this process was driven by the ultraviolet radiation from young, star-forming galaxies and from the first quasars. We review the current observational constraints on cosmic reionization, as well as the dominant physical effects that control the ionization of intergalactic gas. We then focus on numerical modeling of this process with computer simulations. Over the past decade, significant progress has been made in solving the radiative transfer of ionizing photons from many sources through the highly inhomogeneous distribution of cosmic gas in the expanding universe. With modern simulations, we have finally converged on a general picture of the reionization process, but many unsolved problems remain in this young and exciting field of numerical cosmology. Comment: Invited review to appear in Advanced Science Letters (ASL), Special Issue on Computational Astrophysics, edited by Lucio Mayer
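    To make the radiative-transfer idea concrete, here is a deliberately simplified, photon-conserving 1D sketch of an ionization front expanding from a single source into uniform gas. The source luminosity, gas density, grid, and time step are illustrative assumptions, not values from the review, and real simulations track many sources through an inhomogeneous, expanding medium.

```python
# Toy 1D photon-conserving ionization-front sketch (monochromatic photons,
# uniform density, case-B recombinations only; all numbers are assumptions).
import numpy as np

N_DOT   = 5e48        # ionizing photons / s from the source
N_H     = 1e-3        # hydrogen number density [cm^-3]
ALPHA_B = 2.6e-13     # case-B recombination coefficient [cm^3 s^-1]
DR      = 3.086e21    # radial cell size: 1 kpc in cm
N_CELLS = 300
DT      = 1e13        # time step [s] (~0.3 Myr)
N_STEPS = 500

x = np.zeros(N_CELLS)                       # ionized fraction per cell
r_out = np.arange(1, N_CELLS + 1) * DR
r_in  = np.arange(N_CELLS) * DR
shell_vol = 4.0 / 3.0 * np.pi * (r_out**3 - r_in**3)

for _ in range(N_STEPS):
    photons = N_DOT * DT                    # photon budget for this step
    for i in range(N_CELLS):
        neutral_atoms = (1.0 - x[i]) * N_H * shell_vol[i]
        absorbed = min(photons, neutral_atoms)
        x[i] += absorbed / (N_H * shell_vol[i])
        photons -= absorbed
        # recombinations push the cell back toward neutral
        x[i] -= ALPHA_B * N_H * x[i]**2 * DT
        x[i] = np.clip(x[i], 0.0, 1.0)
        if photons <= 0.0:
            break

front_kpc = np.argmax(x < 0.5) * DR / 3.086e21
print(f"ionization front after {N_STEPS * DT / 3.15e13:.1f} Myr: ~{front_kpc:.0f} kpc")
```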

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for the three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. These approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced, including the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics and of representative studies. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
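    The importance-sampling technique mentioned above can be illustrated with a short sketch: a small failure probability is estimated both by plain Monte Carlo and by sampling lifetimes at an inflated failure rate and reweighting by the likelihood ratio. The exponential-lifetime model and all rates are assumptions for illustration, not a case study from the paper.

```python
# Importance sampling to accelerate a Monte Carlo dependability estimate.
import numpy as np

rng = np.random.default_rng(42)
LAMBDA  = 1e-4      # assumed true failure rate [1/h]
MISSION = 10.0      # mission time [h]; P(failure) = 1 - exp(-1e-3) ~ 1e-3
N       = 100_000

# Plain Monte Carlo: sample lifetimes at the true rate, count early failures.
plain = (rng.exponential(1.0 / LAMBDA, N) < MISSION).mean()

# Importance sampling: sample at an inflated rate so failures are common,
# then reweight each sample by the likelihood ratio f_true(t) / f_biased(t).
LAMBDA_IS = 0.1
t = rng.exponential(1.0 / LAMBDA_IS, N)
weights = (LAMBDA / LAMBDA_IS) * np.exp(-(LAMBDA - LAMBDA_IS) * t)
is_est = np.mean((t < MISSION) * weights)

exact = 1.0 - np.exp(-LAMBDA * MISSION)
print(f"exact {exact:.3e}  plain MC {plain:.3e}  importance sampling {is_est:.3e}")
```

    With only 1e5 samples, the plain estimator sees on the order of a hundred failures while the biased estimator sees tens of thousands, which is why importance sampling is used for rare-event dependability studies.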

    From Big Data to Big Displays: High-Performance Visualization at Blue Brain

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy was accelerated to develop innovative visualization solutions through increased funding and strategic partnerships with other research institutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We show how our strategy of transforming visualization engines into services enables a variety of use cases: not only integration with high-fidelity displays, but also building service-oriented architectures, linking into web applications, and providing remote services to Python applications. Comment: ISC 2017 Visualization at Scale workshop
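    As a rough illustration of the "visualization engine as a service" idea, the sketch below wraps a stand-in render function behind a tiny HTTP endpoint that a web or Python client could call. The endpoint path, parameters, and fake renderer are assumptions for illustration only, not the Blue Brain software stack.

```python
# Minimal sketch: expose a renderer as an HTTP service (standard library only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def render_frame(width, height, camera_angle):
    """Stand-in for a C++ rendering engine; returns image metadata only."""
    return {"width": width, "height": height, "camera_angle": camera_angle,
            "format": "jpeg", "status": "rendered"}

class RenderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/render":
            self.send_error(404)
            return
        q = parse_qs(url.query)
        frame = render_frame(int(q.get("width", ["800"])[0]),
                             int(q.get("height", ["600"])[0]),
                             float(q.get("angle", ["0"])[0]))
        body = json.dumps(frame).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. curl "http://localhost:8000/render?width=1920&height=1080&angle=30"
    HTTPServer(("localhost", 8000), RenderHandler).serve_forever()
```

    The same pattern lets a display wall, a browser, or a remote Python script drive the engine without linking against it.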

    Solving Task Scheduling Problem in Cloud Computing Environment Using Orthogonal Taguchi-Cat Algorithm

    In cloud computing datacenters, task execution delay is no longer accidental. In recent times, a number of artificial intelligence scheduling techniques have been proposed and applied to reduce task execution delay. In this study, we propose an algorithm called Orthogonal Taguchi-Based Cat Swarm Optimization (OTB-CSO) to minimize total task execution time. In the proposed algorithm, the Taguchi orthogonal approach is incorporated into the CSO tracing mode to find the best mapping of tasks onto VMs with minimum execution time. The algorithm was implemented in the CloudSim toolkit and evaluated using the makespan metric. Experimental results showed that, with 20 VMs, the proposed OTB-CSO minimized the makespan of the tasks scheduled across the VMs, with improvements of 42.86%, 34.57% and 2.58% over the Minimum and Maximum Job First (Min-Max), Particle Swarm Optimization with Linear Descending Inertia Weight (PSO-LDIW) and Hybrid Particle Swarm Optimization with Simulated Annealing (HPSO-SA) algorithms. These results show that OTB-CSO is effective at optimizing task scheduling and improving overall cloud computing performance with better system utilization.
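    For context, the makespan objective that such schedulers minimize can be written down in a few lines: given task lengths, VM speeds, and a task-to-VM mapping, the makespan is the finish time of the most loaded VM. The sketch below evaluates it and improves it with a naive random search, which merely stands in for the Taguchi/Cat Swarm optimizer described in the abstract; task lengths and VM speeds are made-up values.

```python
# Makespan objective for task-to-VM scheduling, with a naive search baseline.
import numpy as np

rng = np.random.default_rng(1)
task_len = rng.uniform(1e3, 1e4, size=100)   # task lengths (e.g. million instructions)
vm_speed = rng.uniform(500, 2000, size=20)   # VM capacities (e.g. MIPS)

def makespan(mapping):
    """Finish time of the most loaded VM for a task->VM assignment vector."""
    load = np.zeros(len(vm_speed))
    np.add.at(load, mapping, task_len)       # total work assigned to each VM
    return np.max(load / vm_speed)

best_map = rng.integers(0, len(vm_speed), size=len(task_len))
best = makespan(best_map)
for _ in range(5000):                        # naive random-search baseline
    cand = best_map.copy()
    cand[rng.integers(len(task_len))] = rng.integers(len(vm_speed))
    if makespan(cand) < best:
        best_map, best = cand, makespan(cand)

print(f"best makespan found: {best:.1f} s")
```

    Swarm methods such as CSO explore this same assignment space with a population of candidate mappings instead of a single mutated solution.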