
    Warp-X: a new exascale computing platform for beam-plasma simulations

    Turning the current state of the art in experimental plasma accelerators from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the code, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. The code structure, status, early examples of applications, and plans are discussed.
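
    At its core, WarpX advances a particle-in-cell (PIC) cycle: deposit particle charge onto a mesh, solve the field equations, gather the fields back to the particles, and push them. The sketch below is a deliberately minimal 1D electrostatic version of that cycle, illustrative only; the grid size, particle count, and normalizations are assumptions, not WarpX code.

```python
# Minimal 1D electrostatic particle-in-cell (PIC) step: the
# deposit -> field-solve -> gather -> push cycle that production codes
# such as WarpX implement at exascale. All parameters are illustrative.
import numpy as np

nx, L, np_part, dt = 64, 1.0, 10_000, 1e-3
dx = L / nx
rng = np.random.default_rng(0)
x = rng.uniform(0, L, np_part)       # particle positions
v = rng.normal(0, 0.1, np_part)      # particle velocities
q_over_m = -1.0                      # normalized charge-to-mass ratio

def pic_step(x, v):
    # 1) Deposit particle charge onto the grid (nearest-grid-point weighting).
    rho, _ = np.histogram(x, bins=nx, range=(0, L))
    rho = rho / (np_part * dx) - 1.0          # subtract neutralizing background

    # 2) Solve Poisson's equation via FFT (periodic boundary conditions).
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    k[0] = 1.0                                # avoid division by zero (mean mode)
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_hat))   # E = -d(phi)/dx

    # 3) Gather the field at particle positions and push (leapfrog).
    idx = (x / dx).astype(int) % nx
    v = v + q_over_m * E[idx] * dt
    x = (x + v * dt) % L                      # periodic boundary
    return x, v

for _ in range(100):
    x, v = pic_step(x, v)
```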

    Structured Performance Analysis for Component Based Systems

    The Component Based System (CBS) paradigm is now widely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and construction of efficient systems. This is especially the case for CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed Nets (SWN), a high-level model of stochastic Petri nets widely used for dependability analysis of concurrent systems. Starting from the definition of the system, given in a suitable Architecture Description Language, and from the definitions of the elementary components, we build an SWN of the global system together with a set of SWNs modeling the components of the CBS and their connections. From these models, we derive the performance of the system through a structured analysis induced by the structure of the CBS. We describe the application of our method through an example designed in the framework of the CORBA Component Model.
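
    To make the stochastic Petri net machinery concrete, the sketch below simulates a tiny net with exponentially distributed firing times and estimates a utilization metric from it. The net (a single token cycling through idle/busy/queue places) and all rates are illustrative assumptions; the paper's SWNs are far richer and are solved by structured analysis rather than simulation.

```python
# A minimal stochastic Petri net simulator with exponential firing rates.
# Net structure and rates are illustrative, not taken from the paper.
import numpy as np

places = {"idle": 1, "busy": 0, "queue": 0}
# transition name -> (rate, preconditions, postconditions)
transitions = {
    "arrive":  (2.0, {"idle": 1},  {"busy": 1}),
    "process": (3.0, {"busy": 1},  {"queue": 1}),
    "done":    (5.0, {"queue": 1}, {"idle": 1}),
}

def simulate(t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, busy_time = 0.0, 0.0
    while t < t_end:
        enabled = [(name, rate) for name, (rate, pre, _) in transitions.items()
                   if all(places[p] >= n for p, n in pre.items())]
        if not enabled:
            break
        total = sum(r for _, r in enabled)
        dt = rng.exponential(1.0 / total)      # time to the next firing
        busy_time += dt * places["busy"]       # accumulate a performance metric
        t += dt
        # Choose which enabled transition fires, proportionally to its rate.
        name = rng.choice([n for n, _ in enabled],
                          p=[r / total for _, r in enabled])
        _, pre, post = transitions[name]
        for p, n in pre.items():
            places[p] -= n
        for p, n in post.items():
            places[p] += n
    return busy_time / t                       # time-averaged utilization

print(f"estimated 'busy' utilization: {simulate(10_000):.3f}")
```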

    Developing Model-Based Design Evaluation for Pipelined A/D Converters

    This paper deals with a prospective approach to modeling, design evaluation, and error determination applied to pipelined A/D converter architecture. In contrast with conventional ADC modeling algorithms targeted at extracting the maximum ADC non-linearity error, the innovative approach presented here makes it possible to decompose the magnitudes of individual error sources from a measured or simulated response of an ADC device. The design evaluation methodology was successfully applied to Nyquist-rate cyclic converters in our previous work [13]. Here, we extend its principles to the pipelined architecture. This qualitative decomposition can significantly contribute to the ADC calibration procedure performed on the production line in terms of integral and differential nonlinearity. This rests on the fact that the knowledge of ADC performance contributors provided by the proposed method helps to adjust the values of on-chip converter components so as to equalize (and possibly minimize) the total non-linearity error. In this paper, the design evaluation procedure is demonstrated on a system design example of a pipelined A/D converter. Significant simulation results for each stage of the design evaluation process are given, starting from the INL performance extraction performed in a powerful Virtual Testing Environment implemented in Maple™ software and finishing with error source simulation, modeling of the pipelined ADC structure, and determination of error source contributions, suitable for a generic process flow.
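
    The idea of relating per-stage error sources to the converter's INL can be illustrated with a small behavioral model: a 1-bit-per-stage pipeline where each stage's residue amplifier carries its own gain error, from which an INL curve is extracted against the ideal transfer characteristic. Stage count and error values below are illustrative assumptions, not the paper's model.

```python
# Behavioral pipelined ADC model with per-stage error sources (residue
# amplifier gain errors, comparator offsets), from which INL is extracted.
import numpy as np

N_STAGES, VREF = 8, 1.0
gain_err = np.array([1.0, 0.995, 1.002, 1.0, 1.0, 0.998, 1.0, 1.0])  # ideal: 1.0
cmp_off = np.zeros(N_STAGES)                                         # comparator offsets

def convert(vin):
    """Convert vin in [0, VREF) through a 1-bit-per-stage pipeline."""
    code, residue = 0, vin
    for s in range(N_STAGES):
        bit = 1 if residue >= VREF / 2 + cmp_off[s] else 0
        code = (code << 1) | bit
        # Residue amplifier: ideally 2*(residue - bit*VREF/2); gain error
        # in this multiply-by-two is a dominant INL contributor.
        residue = 2.0 * gain_err[s] * (residue - bit * VREF / 2)
    return code

# Extract INL: compare the actual transfer curve to the ideal staircase.
vin = np.linspace(0, VREF, 2**12, endpoint=False)
codes = np.array([convert(v) for v in vin])
ideal = np.floor(vin / VREF * 2**N_STAGES)
inl_lsb = codes - ideal
print("peak INL (LSB):", np.abs(inl_lsb).max())
```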

    A simulation model for wind energy storage systems. Volume 1: Technical report

    A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic) was developed. The level of detail of the Simulation Model for Wind Energy Storage (SIMWEST) is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source/storage application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for convergence of variables. The SIMWEST program, as described, runs on UNIVAC 1100 series computers.
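
    The flavor of such a simulation can be conveyed by a toy hourly energy-balance loop with one wind source, one load, and one battery component; SIMWEST's generated FORTRAN models are of course far more detailed. All capacities, efficiencies, and profiles below are illustrative assumptions.

```python
# Toy hourly wind + battery-storage energy balance. All numbers are
# illustrative assumptions, not SIMWEST library components.
import numpy as np

hours = 24 * 7
rng = np.random.default_rng(1)
wind_kw = np.clip(rng.normal(40, 20, hours), 0, 100)             # wind plant output
load_kw = 50 + 15 * np.sin(2 * np.pi * np.arange(hours) / 24)    # daily load cycle

CAP_KWH, ETA = 200.0, 0.85       # battery capacity, round-trip efficiency
soc, unserved, spilled = 100.0, 0.0, 0.0

for w, l in zip(wind_kw, load_kw):
    surplus = w - l                        # kWh over one hour
    if surplus >= 0:                       # charge; losses taken on the way in
        stored = min(surplus * ETA, CAP_KWH - soc)
        soc += stored
        spilled += surplus - stored / ETA  # energy the battery could not absorb
    else:                                  # discharge to cover the deficit
        drawn = min(-surplus, soc)
        soc -= drawn
        unserved += -surplus - drawn       # load the system failed to serve

print(f"unserved energy: {unserved:.0f} kWh, spilled: {spilled:.0f} kWh")
```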

    A Holistic Approach to Log Data Analysis in High-Performance Computing Systems: The Case of IBM Blue Gene/Q

    The complexity and cost of managing high-performance computing infrastructures are on the rise. Automating management and repair through predictive models to minimize human intervention is an attempt to increase system availability and contain these costs. Building predictive models that are accurate enough to be useful in automatic management cannot be based on restricted log data from subsystems but requires a holistic approach to data analysis from disparate sources. Here we provide a detailed multi-scale characterization study based on four datasets reporting power consumption, temperature, workload, and hardware/software events for an IBM Blue Gene/Q installation. We show that the system runs a rich parallel workload, with low correlation among its components in terms of temperature and power, but higher correlation in terms of events. As expected, power and temperature correlate strongly, while events display negative correlations with load and power. Power and workload show moderate correlations, and only at the scale of components. The aim of the study is a systematic, integrated characterization of the computing infrastructure and the discovery of correlation sources and levels to serve as a basis for future predictive modeling efforts.
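
    The core of such a characterization is aligning time series from the disparate sources and computing pairwise correlations. The sketch below does this on synthetic data shaped to mimic the reported findings (power and temperature track load; events anti-correlate with load); column names and data are illustrative assumptions, not the paper's datasets.

```python
# Pairwise correlation across aligned power/temperature/load/event series.
# Data is synthetic and illustrative, shaped to mimic the reported trends.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1_000                         # e.g. hourly samples
load = rng.uniform(0, 1, n)
df = pd.DataFrame({
    "load":   load,
    "power":  200 + 300 * load + rng.normal(0, 10, n),  # power tracks load
    "temp":   25 + 20 * load + rng.normal(0, 2, n),     # temperature tracks power
    "events": rng.poisson(2 * (1 - load)),              # more events when idle
})
# Rank correlation is robust to the non-Gaussian event counts.
print(df.corr(method="spearman").round(2))
```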

    Autonomous Integrated Receive System (AIRS) requirements definition. Volume 3: Performance and simulation

    The autonomous and integrated aspects of the operation of the AIRS (Autonomous Integrated Receive System) are discussed from a system operation point of view. The advantages of AIRS compared to the existing SSA receive chain equipment are highlighted. The three modes of AIRS operation are addressed in detail. The configurations of the AIRS are defined as a function of the operating modes and the user signal characteristics. Each AIRS configuration selection is made up of three components: the hardware, the software algorithms, and the parameters used by these algorithms. A comparison between AIRS and the wide dynamics demodulation (WDD) is provided. The organization of the AIRS analytical/simulation software is described. The modeling and analysis for simulating the performance of the PN subsystem is documented. The frequency acquisition technique using a frequency-locked loop is also documented. Doppler compensation implementation is described. The technological aspects of employing CCDs for PN acquisition are addressed.
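
    As an illustration of the frequency acquisition technique mentioned, the sketch below runs a first-order frequency-locked loop on synthetic complex baseband samples, using the classic delay-and-conjugate-product discriminator. Loop gain, sample rate, and signal parameters are illustrative assumptions, not taken from the AIRS design.

```python
# First-order frequency-locked loop (FLL) acquiring a carrier offset on
# complex baseband samples. All parameters are illustrative assumptions.
import numpy as np

fs, f_true, n = 1e4, 123.0, 5_000
rng = np.random.default_rng(3)
t = np.arange(n) / fs
rx = (np.exp(2j * np.pi * f_true * t)
      + 0.02 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

f_hat, phase, gain = 0.0, 0.0, 0.005
prev = 1 + 0j
for k in range(n):
    phase += 2 * np.pi * f_hat / fs     # NCO advances by the current estimate
    y = rx[k] * np.exp(-1j * phase)     # mix the input down by f_hat
    if k:
        # Discriminator: the angle of y[k]*conj(y[k-1]) measures the
        # residual frequency left after mixing, in Hz.
        err_hz = np.angle(y * np.conj(prev)) * fs / (2 * np.pi)
        f_hat += gain * err_hz          # first-order loop update
    prev = y

print(f"acquired offset: {f_hat:.1f} Hz (true: {f_true:.1f} Hz)")
```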

    A Simplified Crossing Fiber Model in Diffusion Weighted Imaging

    Diffusion MRI (dMRI) is a vital source of imaging data for identifying anatomical connections in the living human brain that form the substrate for information transfer between brain regions. dMRI can thus play a central role in our understanding of brain function. The quantitative modeling and analysis of dMRI data deduces the features of neural fibers at the voxel level, such as direction and density. The modeling methods that have been developed range from deterministic to probabilistic approaches. Currently, the Ball-and-Stick model serves as a widely implemented probabilistic approach in the tractography toolboxes of the popular FSL and FreeSurfer/TRACULA software packages. However, estimating the features of neural fibers is complex under the scenario of two crossing neural fibers, which occurs in a sizeable proportion of voxels within the brain. A Bayesian non-linear regression is adopted, comprising a mixture of multiple non-linear components. Such models can pose a computationally difficult statistical estimation problem. To make the Ball-and-Stick approach more feasible and accurate, we propose a simplified version of the Ball-and-Stick model that reduces the dimensionality of the parameter space. This simplified model is vastly more efficient in terms of the computation time required to estimate parameters pertaining to two crossing neural fibers through Bayesian simulation approaches. Moreover, the performance of this new model is comparable or better in terms of bias and estimation variance compared to existing models.
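
    For reference, the two-fiber Ball-and-Stick forward model predicts the diffusion-weighted signal as an isotropic "ball" compartment plus one anisotropic "stick" per fiber. The sketch below evaluates that signal equation for a voxel with two perpendicular fibers; the diffusivity, volume fractions, and fiber directions are illustrative values.

```python
# Two-fiber Ball-and-Stick signal model:
#   S(g, b) = S0 * [ (1 - f1 - f2) * exp(-b d)
#                    + f1 * exp(-b d (g.v1)^2) + f2 * exp(-b d (g.v2)^2) ]
# where v1, v2 are fiber directions and f1, f2 their volume fractions.
# Parameter values are illustrative assumptions.
import numpy as np

def ball_and_stick_2(g, b, S0=1.0, d=1.5e-3, f1=0.4, f2=0.3,
                     v1=(1, 0, 0), v2=(0, 1, 0)):
    """Predicted dMRI signal for one unit gradient direction g."""
    g, v1, v2 = map(np.asarray, (g, v1, v2))
    ball = (1 - f1 - f2) * np.exp(-b * d)                 # isotropic compartment
    stick1 = f1 * np.exp(-b * d * np.dot(g, v1) ** 2)     # fiber 1
    stick2 = f2 * np.exp(-b * d * np.dot(g, v2) ** 2)     # fiber 2
    return S0 * (ball + stick1 + stick2)

# The signal is lowest along a fiber direction (strong diffusion) and
# highest perpendicular to both fibers.
b = 1000.0  # s/mm^2
for g in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    print(g, round(ball_and_stick_2(g, b), 3))
```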