
    Performance awareness: execution performance of HEP codes on RISC platforms, issues and solutions

    The work described in this paper was started during the migration of Aleph's production jobs from the IBM mainframe/CRAY supercomputer to several RISC/Unix workstation platforms. The aim was to understand why Aleph did not obtain the performance on the RISC platforms that was "promised" by a CERN Unit comparison between these RISC platforms and the IBM mainframe, and to find remedies. Since the work with the Aleph jobs led in turn to the related tasks of understanding compilers and their options, the conditions under which the CERN benchmarks (and other benchmarks) were run, and kernel routines and frequently used CERNLIB routines, the undertaking expanded into a review of all the factors that influence the performance of High Energy Physics (HEP) jobs in general. Finally, the key performance issues were reviewed against the programs of one of the LHC collaborations (Atlas), in the hope that the conclusions will be of long-term interest during the establishment of its simulation, reconstruction and analysis codes.
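
    As a concrete, if simplified, illustration of what "understanding compilers and their options" involves in practice, the sketch below (hypothetical C code, not the CERN benchmark suite or an actual CERNLIB routine) times a small numeric kernel so that builds with different optimization flags, or runs on different platforms, can be compared directly:

        /* Minimal timing harness for a hypothetical HEP-style kernel.   */
        /* Rebuild with e.g. -O0, -O2, -O3 (or the vendor flags of the   */
        /* day) and compare the CPU time printed at the end.             */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Stand-in kernel: a dense dot product, typical of HEP inner loops. */
        static double dot(const double *a, const double *b, size_t n)
        {
            double s = 0.0;
            for (size_t i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }

        int main(void)
        {
            const size_t n = 1u << 20;
            double *a = malloc(n * sizeof *a);
            double *b = malloc(n * sizeof *b);
            if (!a || !b)
                return 1;
            for (size_t i = 0; i < n; i++) {
                a[i] = (double)i;
                b[i] = 1.0 / (double)(i + 1);
            }

            clock_t t0 = clock();
            double s = 0.0;
            for (int rep = 0; rep < 100; rep++)
                s += dot(a, b, n);
            clock_t t1 = clock();

            printf("checksum %.6f, cpu time %.3f s\n",
                   s, (double)(t1 - t0) / CLOCKS_PER_SEC);
            free(a);
            free(b);
            return 0;
        }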

    PC as physics computer for LHC?

    In the last five years we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we argue that the same phenomenon might happen again. We describe a project, active since March this year in the Physics Data Processing group of CERN's CN division, in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results compared with the existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of the commodity computing power that will be available in the future is touched upon, to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.
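
    The extrapolation touched upon in the last sentence can be sketched with a few lines of arithmetic. All figures in the C snippet below (CERN Units per PC, price per box, performance doubling period, required farm capacity) are illustrative placeholders, not numbers taken from the paper:

        /* Back-of-the-envelope extrapolation of commodity computing power.  */
        /* Every input value here is an assumed placeholder, not a figure    */
        /* from the paper.                                                   */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double perf_per_box  = 10.0;    /* assumed CERN Units per PC today      */
            double price_per_box = 2000.0;  /* assumed price per PC in USD          */
            double doubling_yrs  = 1.5;     /* assumed performance doubling period  */
            double farm_capacity = 5000.0;  /* assumed CERN Units needed for a farm */

            for (int year = 0; year <= 6; year += 2) {
                double perf  = perf_per_box * pow(2.0, year / doubling_yrs);
                double boxes = farm_capacity / perf;
                printf("year +%d: %.1f CU/box, %.0f boxes, ~%.0f kUSD\n",
                       year, perf, boxes, boxes * price_per_box / 1000.0);
            }
            return 0;
        }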

    The ALICE Data Challenges

    Since 1998, the ALICE experiment and the CERN/IT division have jointly executed several large-scale, high-throughput distributed computing exercises: the ALICE Data Challenges. The goals of these regular exercises are to test hardware and software components of the data acquisition and computing systems in realistic conditions and to carry out an early integration of the overall ALICE computing infrastructure. This paper reports on the third ALICE Data Challenge (ADC III), which was performed at CERN from January to March 2001. The data used during ADC III are simulated physics raw data of the ALICE TPC, produced with the ALICE simulation program AliRoot. Data acquisition was based on the ALICE online framework, the ALICE Data Acquisition Test Environment (DATE) system. After event building, the data were formatted with the ROOT I/O package and a data catalogue based on MySQL was established. The mass storage system used during ADC III was CASTOR. Several software tools were used to monitor performance. DATE demonstrated a performance of more than 500 MByte/s, and an aggregate data throughput of 85 MByte/s was sustained in CASTOR over several days. The total collected data amounts to 100 TBytes in 100,000 files.
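
    A quick arithmetic cross-check of the quoted figures (assuming decimal prefixes, i.e. 1 TByte = 10^12 bytes) gives the average file size and the time needed to write the full data set at the sustained CASTOR rate:

        /* Sanity-check arithmetic on the ADC III figures quoted above:      */
        /* 100 TBytes in 100,000 files, sustained at 85 MByte/s into CASTOR. */
        #include <stdio.h>

        int main(void)
        {
            double total_bytes = 100e12;   /* 100 TBytes (decimal prefix assumed) */
            double n_files     = 100000.0; /* number of files                     */
            double sustained   = 85e6;     /* 85 MByte/s aggregate throughput     */

            printf("average file size : %.1f GByte\n", total_bytes / n_files / 1e9);
            printf("time at 85 MByte/s: %.1f days\n",
                   total_bytes / sustained / 86400.0);
            return 0;   /* roughly 1 GByte per file and about two weeks */
        }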

    Signatures of arithmetic simplicity in metabolic network architecture

    Metabolic networks perform some of the most fundamental functions in living cells, including energy transduction and building-block biosynthesis. While these are the best characterized networks in living systems, understanding their evolutionary history and complex wiring constitutes one of the most fascinating open questions in biology, intimately related to the enigma of life's origin itself. Is the evolution of metabolism subject to general principles, beyond the unpredictable accumulation of multiple historical accidents? Here we search for such principles by applying to an artificial chemical universe some of the methodologies developed for the study of genome-scale models of cellular metabolism. In particular, we use metabolic flux constraint-based models to exhaustively search for artificial chemistry pathways that can optimally perform an array of elementary metabolic functions. Despite the simplicity of the model employed, we find that the ensuing pathways display a surprisingly rich set of properties, including the existence of autocatalytic cycles and hierarchical modules, the appearance of universally preferable metabolites and reactions, and a logarithmic trend of pathway length as a function of input/output molecule size. Some of these properties can be derived analytically, borrowing methods previously used in cryptography. In addition, by mapping biochemical networks onto a simplified carbon-atom reaction backbone, we find that several of the properties predicted by the artificial chemistry model hold for real metabolic networks. These findings suggest that optimality principles and arithmetic simplicity might lie beneath some aspects of biochemical complexity.
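
    A toy calculation, not the paper's constraint-based model, hints at why pathway length can grow only logarithmically with molecule size: if the only allowed reaction is condensation of two already-available molecules (their carbon counts add), a C_n backbone can be built from C_1 in roughly log2(n) steps by repeated doubling, as the hypothetical C sketch below shows:

        /* Toy illustration of the logarithmic pathway-length trend.         */
        /* Model assumption (ours, not the paper's): the only reaction is    */
        /* condensation of two already-produced molecules, so carbon counts  */
        /* add and intermediates can be reused.                              */
        #include <math.h>
        #include <stdio.h>

        /* Steps used by a simple doubling ("binary") strategy:              */
        /* floor(log2 n) doublings plus one extra condensation per           */
        /* additional set bit of n.                                          */
        static int doubling_pathway_steps(unsigned n)
        {
            int doublings = 0, extras = -1;
            for (unsigned m = n; m > 1; m >>= 1)
                doublings++;
            for (unsigned m = n; m > 0; m >>= 1)
                extras += (int)(m & 1u);
            return doublings + extras;
        }

        int main(void)
        {
            unsigned sizes[] = { 2, 4, 8, 16, 32, 64, 100, 1000 };
            for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
                unsigned n = sizes[i];
                printf("C_%-4u  lower bound %2.0f steps  doubling pathway %2d steps\n",
                       n, ceil(log2((double)n)), doubling_pathway_steps(n));
            }
            return 0;
        }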

    Treatment of mastitis during lactation

    Treatment of mastitis should be based on bacteriological diagnosis and should take national and international guidelines on the prudent use of antimicrobials into account. In acute mastitis, where a bacteriological diagnosis is not available, treatment should be initiated on the basis of herd data and personal experience. Rapid bacteriological diagnosis would facilitate proper selection of the antimicrobial. Treating subclinical mastitis with antimicrobials during lactation is seldom economical because of the high treatment costs and generally poor efficacy. All mastitis treatment should be evidence-based, i.e. the efficacy of each product and treatment length should be demonstrated by scientific studies. The use of written on-farm protocols for mastitis treatment promotes judicious use of antimicrobials and reduces their overall use.

    ATLAS detector and physics performance: Technical Design Report, 1


    Optimizing IA-64 performance

    This article examines key features of the Itanium processor architecture and microarchitecture. The Itanium, originally known as the IA-64, is a 64-bit processor designed by Hewlett-Packard and Intel. In addition to the obvious performance gains that 64-bit addressing brings, the Itanium supports performance-enhancing techniques such as predication, speculation, rotating registers, a wide parallel execution core, a high clock speed, a fast bus architecture and multiple execution units. Moreover, the Itanium is designed from the ground up around parallelism and uses a new kind of instruction set based on the Explicit Parallel Instruction Computing (EPIC) specification, which, among other features, allows it to run both Windows-based and UNIX-based applications. Operating-system support for the IA-64 has been announced for 64-bit Windows, HP-UX, varieties of Linux, and AIX 5L. The author shows how to achieve optimal code generation by a compiler, or how to write optimized sequences of IA-64 assembly code, to ensure top speed.
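
    As a small illustration of why predication matters, the hypothetical C fragment below contrasts a branchy clamp function with a branch-free one; on the Itanium the compiler can if-convert the former into predicated instructions, which behave much like the latter but without a branch to mispredict:

        /* Illustrative example only: what if-conversion/predication buys.   */
        #include <stdio.h>

        /* Branchy form: compiled naively, this needs a conditional branch.  */
        static int clamp_branchy(int x, int limit)
        {
            if (x > limit)
                return limit;
            return x;
        }

        /* Branch-free form: both outcomes are computed and one is selected, */
        /* which is essentially what predicated IA-64 code does.             */
        static int clamp_predicated(int x, int limit)
        {
            int over = x > limit;   /* corresponds to a compare that sets a predicate */
            return over * limit + (1 - over) * x;
        }

        int main(void)
        {
            for (int x = 8; x <= 12; x++)
                printf("%d -> %d / %d\n",
                       x, clamp_branchy(x, 10), clamp_predicated(x, 10));
            return 0;
        }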

    Quo vadis code optimisation in High Energy Physics


    A review of Japan and Japanese high-end computers
