
    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend line, it is important for the HEP community to develop an effective response to a series of expected challenges. To help shape that response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and presented along with introductory material.

    Computing models in high energy physics

    High Energy Physics experiments (HEP experiments in the following) have been, for at least the last three decades, at the forefront of technology, in aspects such as detector design and construction, the number of collaborators, and the complexity of data analyses. Unlike in earlier particle physics experiments, the computing and data handling aspects have not been marginal in their design and operations; the cost of the IT-related components, from software development to storage systems and complex distributed e-Infrastructures, has risen to a level that demands proper understanding and planning from the earliest moments in the lifetime of an experiment. In the following sections we first explore the computing and software solutions developed and operated in the most relevant past and present experiments, with a focus on the technologies deployed; a technology tracking section is then presented in order to pave the way towards possible solutions for next-decade experiments, and beyond. While the focus of this review is on offline computing models, the distinction is a blurred one, and some experiments have already seen cross-contamination between trigger selection and offline workflows; it is anticipated that this trend will continue in the future.

    Development of a high-order parallel solver for direct and large eddy simulations of turbulent flows

    Turbulence is inherent in fluid dynamics, laminar flows being the exception rather than the rule; hence the longstanding interest in the subject, both within the academic community and in industrial R&D laboratories. Since 1883 (the year of Reynolds' pipe-flow experiments) much progress has been made: statistics applied to turbulence have provided an understanding of the scaling laws peculiar to several model flows, and experiments have given insight into the structure of real-world flows; but numerical approaches soon became the most promising ones, since they lay the ground for solving the unsteady Navier-Stokes equations at high Reynolds number by means of computer systems. Nevertheless, despite the exponential rise in computational capability over the last few decades, the more computer technology advances, the higher the Reynolds number sought for test cases of industrial interest: there is a natural tendency to perform simulations as large as possible, a habit that leaves no room for wasting resources. Indeed, as the scale separation grows with Re, reducing the wall-clock time to a high-fidelity solution of the desired accuracy becomes increasingly important. To achieve this, a CFD solver should rely on appropriate physical models, consistent numerical methods to discretize the equations, accurate non-dissipative numerical schemes, efficient algorithms to solve the numerics, and fast routines implementing those algorithms. Two archetypal approaches to CFD are direct and large-eddy simulation (DNS and LES respectively), which differ profoundly in several aspects but are both “eddy-resolving” methods, meant to resolve the structures of the flow field with the highest possible accuracy while introducing as little spurious dissipation as possible. These two requirements, accurate resolution of scales and energy conservation, should be addressed by any numerical method, since they are essential to many real-world fluid flows of industrial interest. As a consequence, high-order numerical schemes, and compact schemes among them, have received much consideration, since they address both goals, at the cost of a less straightforward treatment of boundary conditions and a higher computational cost. The latter problem is tackled with parallel computing, which also makes it possible to exploit currently available computer power to the fullest extent.
    The research activity conducted by the present author has concerned the development, from scratch, of a three-dimensional, unsteady, incompressible Navier-Stokes parallel solver, which uses an advanced algorithm for the process-wise solution of the linear systems arising from the application of high-order compact finite difference schemes, and hinges upon a three-dimensional decomposition of the Cartesian computational space. The code is written in modern Fortran 2003, plus a few features unique to the 2008 standard, and is parallelized through the MPI 3.1 standard's advanced routines, as implemented by the OpenMPI library project. The coding was carried out with the objective of creating an original high-order parallel CFD solver that is maintainable and extendable, of course within a well-defined range of possibilities.
    With this main priority outlined, particular attention was paid to several key concepts: modularity and readability of the source code and, in turn, its reusability; ease of implementation of virtually any new explicit or implicit finite difference scheme; a modern programming style that avoids deprecated legacy Fortran constructs and features, so that the World Wide Web remains a reliable and active means of quickly solving the coding problems arising from the implementation of new modules; and, last but not least, thorough comments, especially in critical sections of the code, explaining motives and possible expected weak links. The design, production, and documentation of a program written from scratch are almost never complete, and this is certainly true of the present effort. The method and the code are verified against the full three-dimensional Lid-Driven Cavity and Taylor-Green Vortex flows; the latter test is also used to assess scalability and parallel efficiency.
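
    To make the numerics concrete, below is a minimal Python sketch (not the thesis's Fortran/MPI code) of the classical fourth-order Padé compact finite difference scheme for the first derivative on a periodic grid, the kind of scheme whose tridiagonal systems the solver described above must handle at scale. The dense solve and all names are illustrative only; a production solver would use a specialised parallel tridiagonal algorithm.

```python
import numpy as np

def compact_first_derivative_periodic(f, h):
    """Classical 4th-order Pade compact scheme on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2) (f_{i+1} - f_{i-1}) / (2h).
    The cyclic tridiagonal system is solved densely here purely for clarity.
    """
    n = f.size
    alpha, a = 0.25, 1.5
    # Cyclic tridiagonal left-hand side.
    A = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = alpha
    # Right-hand side uses periodic neighbours via np.roll.
    rhs = a * (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
    return np.linalg.solve(A, rhs)

# Quick check: the derivative of sin(x) on [0, 2*pi) should be close to cos(x).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
df = compact_first_derivative_periodic(np.sin(x), h)
print(np.max(np.abs(df - np.cos(x))))  # small, 4th-order error
```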

    Maximin Designs for Computer Experiments

    Decision processes are nowadays often facilitated by simulation tools. In the field of engineering, for example, such tools are used to simulate the behavior of products and processes. Simulation runs, however, are often very time-consuming, and hence the number of simulation runs allowed in practice is limited. The problem then is to determine which simulation runs to perform so that the maximal amount of information about the product or process is obtained. This problem is addressed in the first part of the thesis. It is proposed to use so-called maximin Latin hypercube designs, and many new results for this class of designs are obtained. In the second part, the case of multiple interrelated simulation tools is considered and a framework to deal with such tools is introduced. Important steps in this framework are the construction and use of coordination methods and of nested designs in order to control the dependencies present between the various simulation tools.
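
    As an illustration of the maximin criterion underlying these designs, here is a small Python sketch: each column of a Latin hypercube design is a permutation of equally spaced levels, and a design is scored by its minimum pairwise distance, which is to be maximised. The random search shown is purely illustrative; the thesis derives far stronger constructions.

```python
import numpy as np

def random_lhd(n, d, rng):
    """Random Latin hypercube design: each column is a permutation of
    the n equally spaced levels 0, 1/(n-1), ..., 1."""
    levels = np.linspace(0.0, 1.0, n)
    return np.column_stack([rng.permutation(levels) for _ in range(d)])

def min_pairwise_distance(X):
    """Maximin criterion: the smallest Euclidean distance between any
    two design points (larger is better)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min()

# Illustrative random search: keep the best of many random designs.
rng = np.random.default_rng(0)
best = max((random_lhd(10, 2, rng) for _ in range(1000)),
           key=min_pairwise_distance)
print(min_pairwise_distance(best))
```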

    A trivariate interpolation algorithm using a cube-partition searching procedure

    In this paper we propose a fast algorithm for trivariate interpolation, based on the partition of unity method for constructing a global interpolant by blending local radial basis function interpolants with locally supported weight functions. The partition of unity algorithm is efficiently implemented and optimized by coupling the method with an effective cube-partition searching procedure. More precisely, we construct a cube structure that partitions the domain and strictly depends on the size of its subdomains, so that the new searching procedure, and accordingly the resulting algorithm, enables us to efficiently deal with a large number of nodes. Complexity analysis and numerical experiments show the high efficiency and accuracy of the proposed interpolation algorithm.
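
    The paper's exact data structure is not reproduced here, but the general idea of a cube-partition search can be sketched as follows: nodes are hashed into the integer indices of their containing cubes, so the candidates for a local RBF interpolant around an evaluation point are found by scanning only the containing cube and its 26 neighbours, rather than all nodes. All names in this Python sketch are illustrative.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def build_cube_index(points, cube_size):
    """Hash each 3D point into the integer index of its containing cube."""
    index = defaultdict(list)
    for i, p in enumerate(points):
        index[tuple((p // cube_size).astype(int))].append(i)
    return index

def neighbourhood_points(index, q, cube_size):
    """Indices of all points in the cube containing q and its 26 neighbours:
    the candidate set for a local interpolant around q."""
    cx, cy, cz = (q // cube_size).astype(int)
    hits = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        hits.extend(index.get((cx + dx, cy + dy, cz + dz), []))
    return hits

rng = np.random.default_rng(1)
pts = rng.random((10000, 3))          # nodes in the unit cube
idx = build_cube_index(pts, 0.1)      # cube side matched to subdomain size
print(len(neighbourhood_points(idx, np.array([0.5, 0.5, 0.5]), 0.1)))
```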

    GHOST: Building blocks for high performance sparse linear algebra on heterogeneous systems

    While many of the architectural details of future exascale-class high performance computer systems are still a matter of intense research, there appears to be a general consensus that they will be strongly heterogeneous, featuring "standard" as well as "accelerated" resources. Today, such resources are available as multicore processors, graphics processing units (GPUs), and other accelerators such as the Intel Xeon Phi. Any software infrastructure that claims usefulness for such environments must be able to meet their inherent challenges: massive multi-level parallelism, topology, asynchronicity, and abstraction. The "General, Hybrid, and Optimized Sparse Toolkit" (GHOST) is a collection of building blocks that targets algorithms dealing with sparse matrix representations on current and future large-scale systems. It implements the "MPI+X" paradigm, has a pure C interface, and provides hybrid-parallel numerical kernels, intelligent resource management, and truly heterogeneous parallelism for multicore CPUs, Nvidia GPUs, and the Intel Xeon Phi. We describe the details of its design with respect to the challenges posed by modern heterogeneous supercomputers and recent algorithmic developments. Implementation details which are indispensable for achieving high efficiency are pointed out and their necessity is justified by performance measurements or predictions based on performance models. The library code and several applications are available as open source. We also provide instructions on how to make use of GHOST in existing software packages, together with a case study which demonstrates the applicability and performance of GHOST as a component within a larger software stack.
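
    GHOST itself has a pure C interface with architecture-specific kernels; as a language-neutral sketch of the central building block such a toolkit provides, here is a sparse matrix-vector product over the common CSR storage format in Python. This is not GHOST's API: it is the serial kernel that a library of this kind parallelises across threads, GPUs, and MPI ranks (typically by partitioning the row range per process).

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix stored in CSR (compressed sparse row) form:
    values[row_ptr[i]:row_ptr[i+1]] holds row i's nonzeros, with their
    column positions in col_idx[row_ptr[i]:row_ptr[i+1]]."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example:  [[2, 0, 1],
#                [0, 3, 0],
#                [4, 0, 5]]
values  = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))
# -> [3. 3. 9.]
```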

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    A Three-Level Parallelisation Scheme and Application to the Nelder-Mead Algorithm

    We consider a three-level parallelisation scheme. The second and third levels define a classical two-level parallelisation scheme, and a load balancing algorithm is used to distribute tasks among processes. It is well known that for many applications the efficiency of parallel algorithms at the second and third levels starts to drop after some critical degree of parallelisation is reached. This weakness of the two-level template is addressed by introducing one additional parallelisation level, on which new or modified algorithms are considered as alternatives to the basic solver. The idea of the proposed methodology is to increase the degree of parallelisation by using algorithms that are less efficient than the basic solver. As an example we investigate two modified Nelder-Mead methods. For the selected application, a few partial differential equations are solved numerically on the second level, and on the third level the parallel Wang's algorithm is used to solve systems of linear equations with tridiagonal matrices. A greedy workload balancing heuristic is proposed, oriented to the case of a large number of available processors. The complexity estimates of the computational tasks are model-based, i.e. they use empirical computational data.
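
    The abstract does not spell out the heuristic's details, so the following Python sketch shows a standard greedy scheme of the same flavour, assuming tasks carry model-based cost estimates: sort the tasks by decreasing cost and repeatedly assign the largest remaining task to the currently least-loaded process.

```python
import heapq

def greedy_balance(task_costs, n_procs):
    """Longest-processing-time-first greedy balancing: assign each task,
    in decreasing order of its (model-based) cost estimate, to the
    currently least-loaded process. Returns per-process task lists."""
    # Min-heap of (current load, process id); all loads start at zero.
    heap = [(0.0, p) for p in range(n_procs)]
    assignment = [[] for _ in range(n_procs)]
    for task, cost in sorted(enumerate(task_costs),
                             key=lambda tc: tc[1], reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(task)
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Example: empirical cost estimates for 7 tasks balanced over 3 processes.
print(greedy_balance([5.0, 3.0, 8.0, 2.0, 7.0, 4.0, 1.0], 3))
```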

    Simulation modelling and visualisation: toolkits for building artificial worlds

    Simulation users at all levels make heavy use of compute resources to drive computational simulations for widely varying application areas of research, using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised general models that run in proprietary software packages to ad hoc hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. Many different software libraries and methods are available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present here a breakdown of the main simulation paradigms, and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.