91 research outputs found

    High-Energy-Physics Event Generation with PYTHIA 6.1

    PYTHIA version 6 represents a merger of the PYTHIA 5, JETSET 7 and SPYTHIA programs, with many improvements. It can be used to generate high-energy-physics `events', i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions. The underlying physics is not understood well enough to give an exact description; the programs therefore contain a combination of analytical results and various models. The emphasis in this article is on new aspects, but a few words of general introduction are included. Further documentation is available on the web. Comment: 1 + 27 pages, submitted to Computer Physics Communications.

    An Initial Evaluation of the NAG f90 Compiler


    Climate Models: A Software Engineering Approach

    Climate simulation and weather forecasting are amongst the most representative examples of scientific software, which has evolved throughout the past sixty years. In this paper, a set of Global Climate Models (GCMs) have been analysed from a Software Engineering perspective, examining the composition of their internal structure and the programming constructs used in the building process. We have implemented a set of software metrics such as Cyclomatic Complexity, Lines of Code, and Number of Fortran Obsolete Language Features, among others. We have followed a compiler-like approach, collecting information by traversing the Abstract Syntax Tree (AST). The obtained data can be used for different purposes at different stages of the software life cycle, such as maintenance tasks, parallelization, and optimization. The results suggest that some programming techniques used for building scientific software have fallen into disuse because they are now considered obsolete and error-prone. In addition, GCMs' internal structure seems to evolve at a slower pace than programming techniques. The analysis methodology can be used to update and enhance scientific software in order to simplify other tasks such as optimization and parallelization for specific new hardware such as multi/many-core processors and co-processors, distributed-memory parallel hardware, etc.
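    The compiler-like metric pipeline this abstract describes can be sketched briefly. Python's ast module stands in here for a Fortran front end, and the helper names (cyclomatic_complexity, lines_of_code) are hypothetical, not the paper's implementation:

```python
import ast

# Decision-point node types that contribute to McCabe's cyclomatic
# complexity (V = decision points + 1).
_DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
              ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points by walking the AST, as a compiler front end would."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _DECISIONS) for node in ast.walk(tree))

def lines_of_code(source: str) -> int:
    """Non-blank, non-comment lines."""
    return sum(1 for ln in source.splitlines()
               if ln.strip() and not ln.strip().startswith("#"))

example = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2 == 0:
            pass
    return "ok"
"""
print(cyclomatic_complexity(example), lines_of_code(example))  # 4 7
```

    The same traversal can collect any per-node statistic in a single pass, which is what makes the AST-based approach cheap enough to run over entire model codebases.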

    Legacy Fortran software: applying syntactic metrics to global climate models

    It is difficult to maintain legacy Fortran programs that use outdated programming constructs, especially when this maintenance requires a detailed understanding of the code (e.g., for parallelization). Initially, we want to gauge the prevalence of such constructs by applying straightforward syntactic metrics to some well-known global climate models. Detailed information regarding files, subroutines, and loops has been collected from each model by applying a lightweight source-code static analysis based on ASTs (Abstract Syntax Trees) for a posterior analysis. Modernizing legacy Fortran programs is still a challenge. Our objective has been to collect relevant information on these programs to help us approach parallelizing legacy scientific programs in a shared-memory environment (e.g. using multi-core processors). The data we collected indicate that old Fortran features are still being used in these models to this day. Furthermore, we propose some metrics to be used as a guide to determine how many changes a program needs in order to be modernized, optimized, and eventually parallelized. Track: Workshop Ingeniería de Software (WIS).
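    A lightweight scan for outdated constructs of the kind this abstract measures can be sketched with a line-based pass. The construct list and helper names below are assumptions for illustration, not the authors' metric suite, which works on full ASTs:

```python
import re

# A few legacy Fortran constructs to flag; this list is illustrative,
# not the paper's exact feature set.
LEGACY_PATTERNS = {
    "goto":        re.compile(r"^\s*(\d+\s+)?go\s*to\b", re.IGNORECASE),
    "common":      re.compile(r"^\s*common\b", re.IGNORECASE),
    "equivalence": re.compile(r"^\s*equivalence\b", re.IGNORECASE),
}

def count_legacy_features(fortran_source: str) -> dict:
    """Tally occurrences of each flagged construct, line by line."""
    counts = {name: 0 for name in LEGACY_PATTERNS}
    for line in fortran_source.splitlines():
        for name, pattern in LEGACY_PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

legacy = """\
      COMMON /BLK/ A, B
      EQUIVALENCE (A, C)
      GO TO 100
  100 CONTINUE
"""
print(count_legacy_features(legacy))
```

    Counts like these give the "how many changes are needed" guide the abstract proposes: a file dense in COMMON blocks and GOTOs needs far more modernization work before parallelization than one already using modules.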

    SINGINT: Automatic numerical integration of singular integrands

    We explore the combination of deterministic and Monte Carlo methods to facilitate efficient automatic numerical computation of multidimensional integrals with singular integrands. Two adaptive algorithms are presented that employ recursion and are runtime- and memory-optimised, respectively. SINGINT, a C implementation of the algorithms, is introduced and its utilisation in the calculation of particle scattering amplitudes is exemplified.
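    The general idea of pairing deterministic subdivision with Monte Carlo estimates can be sketched in a few lines. This is not SINGINT's actual algorithm (which is in C and far more carefully optimised); the function names, tolerances and sample counts are illustrative:

```python
import math
import random

def adaptive_mc(f, a, b, n=2000, tol=1e-2, depth=0, max_depth=20, rng=None):
    """Deterministically bisect [a, b]; estimate each piece with plain
    Monte Carlo and recurse wherever the two half-interval estimates
    disagree with the whole-interval estimate."""
    rng = rng or random.Random(0)
    width = b - a
    whole = width * sum(f(a + rng.random() * width) for _ in range(n)) / n
    mid = 0.5 * (a + b)
    left = (mid - a) * sum(f(a + rng.random() * (mid - a)) for _ in range(n)) / n
    right = (b - mid) * sum(f(mid + rng.random() * (b - mid)) for _ in range(n)) / n
    if depth >= max_depth or abs(left + right - whole) < tol * abs(whole):
        return left + right
    return (adaptive_mc(f, a, mid, n, tol, depth + 1, max_depth, rng)
            + adaptive_mc(f, mid, b, n, tol, depth + 1, max_depth, rng))

# Integrable logarithmic singularity at x = 0; the exact value of
# the integral of -ln(x) over (0, 1] is 1.
estimate = adaptive_mc(lambda x: -math.log(x), 0.0, 1.0)
print(round(estimate, 3))
```

    The recursion concentrates samples near the singularity, where the subdivision criterion keeps failing, which is the stratification effect that makes such hybrids more efficient than uniform sampling.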

    Parallel Computers and Complex Systems

    We present an overview of the state of the art and future trends in high performance parallel and distributed computing, and discuss techniques for using such computers in the simulation of complex problems in computational science. The use of high performance parallel computers can help improve our understanding of complex systems, and the converse is also true --- we can apply techniques used for the study of complex systems to improve our understanding of parallel computing. We consider parallel computing as the mapping of one complex system --- typically a model of the world --- into another complex system --- the parallel computer. We study static, dynamic, spatial and temporal properties of both the complex systems and the map between them. The result is a better understanding of which computer architectures are good for which problems, and of software structure, automatic partitioning of data, and the performance of parallel machines.

    The seven ages of Fortran

    When IBM's John Backus first developed the Fortran programming language, back in 1957, he certainly never dreamt that it would become a world-wide success and still be going strong many years later. Given the oft-repeated predictions of its imminent demise, starting around 1968, it is a surprise, even to some of its most devoted users, that this much-maligned language is not only still with us, but is being further developed for the demanding applications of the future. What has made this programming language succeed where most slip into oblivion? One reason is certainly that the language has been regularly standardized. In this paper we will trace the evolution of the language from its first version and through six cycles of formal revision, and speculate on how this might continue. Now, modern Fortran is a procedural, imperative, compiled language with a syntax well suited to a direct representation of mathematical formulas. Individual procedures may be compiled separately or grouped into modules, either way allowing the convenient construction of very large programs and procedure libraries. Procedures communicate via global data areas or by argument association. The language now contains features for array processing, abstract data types, dynamic data structures, object-oriented programming and parallel processing. Facultad de Informática.

    Sensitivity computations by algorithmic differentiation of a high-order CFD code based on spectral differences

    We compute flow sensitivities by differentiating a high-order computational fluid dynamics code. Our fully discrete approach relies on automatic differentiation (AD) of the original source code. We obtain two transformed codes by using the AD tool Tapenade (INRIA), one for each differentiation mode: tangent and adjoint. Both differentiated codes are tested against each other by computing sensitivities in an unsteady test case. The results from both codes agree to within machine accuracy, and compare well with those approximated by finite differences. We compare execution times and discuss the technical difficulties encountered due to (1) the code's parallelism and (2) the memory overhead caused by unsteady problems.
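    The tangent-vs-adjoint cross-check described in this abstract can be illustrated on a toy function. The paper applies Tapenade to a full CFD code; this hand-written Python sketch (with a hypothetical f) only mirrors the validation idea of comparing the two AD modes against each other and against finite differences:

```python
import math

def f(x, y):
    return math.sin(x) * y + x * y * y

def tangent(x, y, dx, dy):
    """Forward (tangent) mode: propagate directional derivatives alongside values."""
    v1, d1 = math.sin(x), math.cos(x) * dx           # v1 = sin(x)
    v2, d2 = v1 * y, d1 * y + v1 * dy                # v2 = sin(x)*y
    v3, d3 = x * y * y, dx * y * y + 2 * x * y * dy  # v3 = x*y^2
    return v2 + v3, d2 + d3

def adjoint(x, y):
    """Reverse (adjoint) mode: one backward sweep yields the full gradient."""
    v1 = math.sin(x)                 # forward sweep stores intermediates
    gx = math.cos(x) * y + y * y     # backward sweep, output adjoint seeded to 1
    gy = v1 + 2 * x * y
    return gx, gy

x, y, h = 0.7, 1.3, 1e-6
_, dfdx = tangent(x, y, 1.0, 0.0)          # directional derivative along e_x
gx, gy = adjoint(x, y)                     # full gradient in one sweep
fd = (f(x + h, y) - f(x - h, y)) / (2 * h) # central finite difference
print(abs(dfdx - gx) < 1e-12, abs(dfdx - fd) < 1e-6)  # True True
```

    Tangent and adjoint values agree to machine accuracy because both differentiate the same discrete operations exactly, while the finite-difference check only agrees to truncation accuracy, which is the same pattern of agreement the abstract reports.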