
    OSIRIS.FRAMEWORK: an integrated tool for modeling astrophysical and laboratory plasmas

    We describe the osiris.framework [1], a general-purpose, three-dimensional, fully relativistic, massively parallel, object-oriented particle-in-cell code for the numerical simulation of astrophysical and laboratory plasmas, complemented by a set of specially designed visualization tools [2]. Developed in Fortran 95, the code runs on multiple platforms (Cray T3E, IBM SP, Beowulf, Mac clusters) and can be easily ported to new ones. Details of the code's capabilities are given. We discuss the object-oriented design of the code, the encapsulation of system-dependent code, and the parallelization of the algorithms involved. We also discuss the implementation of communications as a boundary-condition problem and load balancing, as well as other key characteristics of the code, such as the moving window, open-space and thermal-bath boundaries, arbitrary domain decomposition, 1D (Cartesian), 2D (Cartesian and cylindrical) and 3D geometry, ion sub-cycling, tunnel and impact ionization, and diagnostics. Finally, results from three-dimensional simulations are presented, in connection with the data analysis and visualization infrastructure developed to post-process the scalar and vector results from PIC simulations.
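
For readers unfamiliar with the particle-in-cell method the abstract describes, the core of any PIC code is the particle push. The sketch below shows the standard non-relativistic Boris rotation in Python; it is an illustrative sketch only, not OSIRIS code, which is written in Fortran 95 and uses a fully relativistic pusher.

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """Advance one particle's velocity by dt with the Boris scheme.

    v, E, B are 3-vectors (numpy arrays); non-relativistic for brevity.
    """
    # Half acceleration from the electric field
    v_minus = v + 0.5 * q_over_m * dt * E
    # Rotation around the magnetic field
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration
    return v_plus + 0.5 * q_over_m * dt * E
```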

    Large Scale Earth's Bow Shock with Northern IMF as simulated by PIC code in parallel with MHD model

    In this paper, we propose a 3D kinetic (particle-in-cell, PIC) model for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ~14.8 RE along the Sun-Earth line and ~29 RE on the dusk side. These findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ~2 c/\omega_{pi} for \Theta_{Bn} = 90^{\circ} and M_{MS} = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/\omega_{pi}. In the foreshock region, the thermal velocity is found to be 213 km s^{-1} at 15 RE, and 63 km s^{-1} at 12 RE (magnetosheath region). Despite the large cell size of the current version of the PIC code, it captures the macrostructure of planetary magnetospheres in a very short run time and can thus be used for pedagogical test purposes. It is also likely complementary to MHD in deepening our understanding of the large-scale magnetosphere. Comment: 26 pages, 8 figures, 1 table, 66 references, JOAA-D-16-00005/201
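
The "theoretical jump conditions" both codes reproduce are the Rankine-Hugoniot relations. As a rough illustration (not the paper's code, and ignoring the magnetic field that a full perpendicular-shock treatment would include), the Python snippet below evaluates the hydrodynamic density compression ratio at the magnetosonic Mach number quoted in the abstract:

```python
def density_jump(M, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density compression ratio rho2/rho1 for a
    hydrodynamic shock with upstream Mach number M."""
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

# M_MS = 4.7 as quoted in the abstract
print(density_jump(4.7))  # ~3.5, approaching the gamma = 5/3 limit of 4
```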

    An Application Perspective on High-Performance Computing and Communications

    We review possible and probable industrial applications of HPCC, focusing on software and hardware issues. Thirty-three separate categories are illustrated by detailed descriptions of five areas: computational chemistry; Monte Carlo methods from physics to economics; manufacturing and computational fluid dynamics; command and control, or crisis management; and multimedia services to client computers and settop boxes. The hardware varies from tightly coupled parallel supercomputers to heterogeneous distributed systems. The software models span HPF and data parallelism to distributed information systems and object/dataflow parallelism on the Web. We find that in each case it is reasonably clear that HPCC works in principle, and postulate that this knowledge can be used in a new generation of software infrastructure based on the WebWindows approach, discussed in an accompanying paper.

    Index to 1986 NASA Tech Briefs, volume 11, numbers 1-4

    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for the 1986 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    Towards exascale simulations of the ICM dynamo with WENO-WOMBAT

    In galaxy clusters, modern radio interferometers observe non-thermal radio sources with unprecedented spatial and spectral resolution. For the first time, the new data allow us to infer the structure of the intra-cluster magnetic fields on small scales via Faraday tomography. This leap forward demands new numerical models for the amplification of magnetic fields in cosmic structure formation: the cosmological magnetic dynamo. Here we present a novel numerical approach to astrophysical MHD simulations aimed at resolving this small-scale dynamo in future cosmological simulations. As a first step, we implement a fifth-order WENO scheme in the new code WOMBAT. We show that this scheme doubles the effective resolution of the simulation and is thus less expensive than common second-order schemes. WOMBAT uses a novel approach to parallelization and load balancing developed in collaboration with performance engineers at Cray Inc. This will allow us to scale simulations to the exaflop regime and achieve kpc resolution in future cosmological simulations of galaxy clusters. Here we demonstrate the excellent scaling properties of the code and argue that resolved simulations of the cosmological small-scale dynamo within the whole virial radius are possible in the coming years.
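
For orientation, the sketch below shows a scalar version of the classic fifth-order Jiang-Shu WENO reconstruction of an interface value from five cell averages. It is an illustrative Python sketch under that standard formulation, not WOMBAT's Fortran implementation, which applies such a reconstruction within a vectorized MHD solver.

```python
def weno5(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Fifth-order WENO reconstruction of the left-biased interface value
    f_{i+1/2} from five cell averages (classic Jiang-Shu weights)."""
    # Candidate third-order reconstructions on the three sub-stencils
    p0 = (2.0 * fm2 - 7.0 * fm1 + 11.0 * f0) / 6.0
    p1 = (-fm1 + 5.0 * f0 + 2.0 * fp1) / 6.0
    p2 = (2.0 * f0 + 5.0 * fp1 - fp2) / 6.0
    # Smoothness indicators penalizing oscillatory sub-stencils
    b0 = 13.0/12.0 * (fm2 - 2.0*fm1 + f0)**2 + 0.25 * (fm2 - 4.0*fm1 + 3.0*f0)**2
    b1 = 13.0/12.0 * (fm1 - 2.0*f0 + fp1)**2 + 0.25 * (fm1 - fp1)**2
    b2 = 13.0/12.0 * (f0 - 2.0*fp1 + fp2)**2 + 0.25 * (3.0*f0 - 4.0*fp1 + fp2)**2
    # Nonlinear weights from the optimal linear weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```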

    The MEG detector for $\mu^+ \to e^+\gamma$ decay search

    The MEG (Mu to Electron Gamma) experiment has been running at the Paul Scherrer Institut (PSI), Switzerland, since 2008 to search for the decay $\mu^+ \to e^+\gamma$ by using one of the most intense continuous $\mu^+$ beams in the world. This paper presents the MEG components: the positron spectrometer, including a thin target, a superconducting magnet, a set of drift chambers for measuring the muon decay vertex and the positron momentum, a timing counter for measuring the positron time, and a liquid xenon detector for measuring the photon energy, position and time. The trigger system, the read-out electronics and the data acquisition system are also presented in detail. The paper is completed with a description of the equipment and techniques developed for the calibration in time and energy and the simulation of the whole apparatus. Comment: 59 pages, 90 figures

    A Study of the Performance of the Topological Clustering and Anti-kT Algorithms Using Graphical Processing Units

    The advent and proliferation of modern graphical processing units (GPUs) containing hundreds to thousands of cores opens up the new possibility of rewriting programs to execute on GPUs and achieve significant gains in speed over the original implementation. We examine two algorithms taken from high-energy physics: topological clustering and the anti-k_{T} jet-finding algorithm. While no performance gains were attained for topological clustering, the execution time of the anti-k_{T} algorithm is nearly halved.
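
For context, the anti-k_{T} algorithm repeatedly clusters the pair of objects with the smallest pairwise distance, or promotes an object to a jet when its beam distance is smallest. The Python sketch below shows the two standard distance measures (the function names are hypothetical, and this is not the paper's GPU code, which parallelizes exactly this O(N^2) pairwise search):

```python
import math

def antikt_dij(pt_i, pt_j, rap_i, rap_j, phi_i, phi_j, R=0.4):
    """Pairwise anti-kT distance: d_ij = min(1/pt_i^2, 1/pt_j^2) * dR^2 / R^2."""
    dphi = abs(phi_i - phi_j)
    if dphi > math.pi:                      # wrap the azimuthal difference
        dphi = 2.0 * math.pi - dphi
    dr2 = (rap_i - rap_j) ** 2 + dphi ** 2  # rapidity-azimuth separation
    return min(pt_i ** -2, pt_j ** -2) * dr2 / R ** 2

def antikt_diB(pt_i):
    """Beam distance: d_iB = 1/pt_i^2."""
    return pt_i ** -2
```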

    Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and leaves some questions unanswered, so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about 1 TeV^{-1}, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of some of these massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the couplings between heavy gauge bosons and quarks. The events were generated using the ATLAS fast simulation and reconstruction MC program Atlfast coupled to the Monte Carlo generator PYTHIA. We found that for an integrated luminosity of 3 × 10^{5} pb^{-1} and a heavy gauge boson mass of 2 TeV, the channels Z*->bb and Z*->tt would be difficult to detect because the signal would be very small compared with the expected background, although the significance in the case of Z*->tt is larger. In the channel W*->tb, the decay might yield a signal separable from the background and a significance larger than 5, so we conclude that it would be possible to detect this particular mode at the LHC. The analysis was also performed for masses of 1 TeV, and we conclude that the observability decreases with the mass. In particular, a significance higher than 5 may be achieved below approximately 1.4, 1.9 and 2.2 TeV for Z*->bb, Z*->tt and W*->tb, respectively. The LHC will start to operate in 2008 and collect data in 2009. It will produce roughly 15 Petabytes of data per year. Access to this experimental data has to be provided for some 5,000 scientists working in 500 research institutes and universities. In addition, all data need to be available over the estimated 15-year lifetime of the LHC. The analysis of the data, including comparison with theoretical simulations, requires enormous computing power. The computing challenges that scientists have to face are the huge amounts of data, calculations and collaborators. The Grid has been proposed as a solution to those challenges. The LHC Computing Grid project (LCG) is the Grid used by ATLAS and the other LHC experiments, and it is analysed in depth with the aim of studying its possible complementary use with another Grid project: the Berkeley Open Infrastructure for Network Computing (BOINC) middleware, developed for the SETI@home project, a Grid specialised in high-CPU tasks and in using volunteer computing resources. Several important packages of physics software used by ATLAS and other LHC experiments have been successfully adapted/ported to this platform with the aim of integrating them into the LHC@home project at CERN: Atlfast, PYTHIA, Geant4 and Garfield. The events used in our physics analysis with Atlfast were reproduced using BOINC, obtaining exactly the same results.
The LCG software, in particular SEAL, ROOT and the external software, was also ported to the Solaris/SPARC platform to study its portability in general. A testbed was set up involving a large amount of heterogeneous hardware and software: a farm of 100 computers at CERN's computing center (lxboinc) together with 30 PCs from CIEMAT and 45 from schools in Extremadura (Spain). This required a preliminary study and the development of components for the Quattor software and configuration management tool to install and manage the lxboinc farm, and it also involved setting up a collaboration between the Spanish research centers and government and CERN. The testbed was successful, and 26,597 Grid jobs were delivered, executed and received successfully. We conclude that BOINC and LCG are complementary and useful kinds of Grid that can be used by ATLAS and the other LHC experiments. LCG has very good data distribution, management and storage capabilities that BOINC does not have. On the other hand, BOINC does not need high bandwidth or Internet speed, and it can also provide a huge and inexpensive amount of computing power coming from volunteers. In addition, it is possible to send jobs from LCG to BOINC and vice versa. Possible complementary uses are thus to employ volunteer BOINC nodes when the LCG nodes have too many jobs to do, or to use BOINC for CPU-intensive tasks such as event generation and reconstruction while reserving LCG for data analysis.
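
As a back-of-the-envelope illustration of the 5-sigma criterion used in the physics analysis above, the Python sketch below converts assumed post-cut cross-sections (hypothetical numbers, not the thesis's) and the quoted integrated luminosity into a simple counting significance S/sqrt(B):

```python
import math

# Expected event counts follow N = sigma * integrated luminosity (after cuts).
lumi_pb = 3e5          # integrated luminosity quoted in the abstract, pb^-1
sigma_sig_pb = 0.01    # assumed signal cross-section after cuts, pb (hypothetical)
sigma_bkg_pb = 0.30    # assumed background cross-section after cuts, pb (hypothetical)

n_sig = sigma_sig_pb * lumi_pb   # 3,000 expected signal events
n_bkg = sigma_bkg_pb * lumi_pb   # 90,000 expected background events

# Simple counting significance; > 5 is the usual discovery criterion
print(n_sig / math.sqrt(n_bkg))  # = 10.0 for these assumed numbers
```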

    Electromagnetic properties of metal-dielectric media and their applications

    The main objective of this dissertation is to investigate nano-structured random composite materials, which exhibit anomalous phenomena such as the extraordinary enhancement of linear and non-linear optical processes due to the excitation of collective electronic states, surface plasmons (SPs). The main goal is to develop a novel, time- and memory-efficient numerical method to study the properties of these random media in three dimensions (3D), exploiting multi-core processing and packages such as MPI for parallel execution. The developed numerical tools are then used to provide a comprehensive characterization and optimization of a surface-plasmon-enhanced solar cell (SPESC) and to serve as a test bed for enhanced bio- and chemical sensing. In this context, this thesis develops an efficient and exact numerical algorithm, here referred to as the Block Elimination (BE) method, which provides the unique capability of modeling extremely large-scale composite materials (with up to 1 million strongly interacting metal or dielectric particles). This capability is crucial for studying the electromagnetic response of large-scale inhomogeneous (fractal) films and bulk composites at critical concentrations (percolation). The developed numerical method is used to accurately estimate parameters that describe the composite materials, including the effective conductivity and correlation-length scaling exponents, as well as the density of states and localization-length exponents at the band center. This work reveals, for the first time, a unique delocalization mechanism that plays an important role in the excitation of charge-density waves, i.e. surface plasmons, in metal-dielectric composites. It also shows that in 3D metal-dielectric percolation systems the local-field distribution function for frequencies close to the single-particle plasmon resonance is log-normal, which is a signature of a metal-dielectric phase transition manifested in the optical response of the composites. Based on the obtained numerical data, a scaling theory for the higher-order electric field moments is developed. Distinct evidence of singularities in the surface plasmon density of states and localization length is obtained, correlating with results previously obtained for two-dimensional systems. This leads to the main finding of this work: the delocalization of surface plasmon states in percolating metal-dielectric composite materials is universally present regardless of the dimensionality of the problem. This dissertation also proposes a new approach toward developing highly efficient inorganic/organic solar cells, presenting a method for enhancing the optical absorption and overall cell efficiency. Specifically, the approach improves the operating characteristics of inorganic semiconductor (e.g. Si and a-Si) and organic (P3HT:PCBM) thin-film solar cells by integrating a thin, inhomogeneous, metal-dielectric composite (MDC) electrode at the interface between the transparent electrode and the active layer. Through numerical simulations, we show that under solar illumination, surface plasmons are excited within the fractal MDC electrode across an extremely broad range of optical frequencies, trapping the incoming light and ensuring optimal absorption into the active layer of the solar cells. An analytical model is developed to study the I-V characteristics of the cells, providing a pathway toward achieving optimal efficiency and a better understanding of the behavior of the charge carriers.
Using this model, it is shown that including gold MDC electrodes can increase the solar cell power conversion efficiency by up to 33% compared with the benchmark device.
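
To illustrate the kind of scaling analysis behind the quoted conductivity exponents, the Python sketch below fits the percolation scaling law sigma_eff ~ (p - p_c)^t on synthetic data; the threshold, exponent and data are illustrative assumptions, not the dissertation's results.

```python
import numpy as np

p_c = 0.2488                      # bond-percolation threshold, simple cubic lattice (approx.)
t_true = 2.0                      # assumed 3D conductivity exponent
p = np.linspace(0.27, 0.40, 12)   # metal filling fractions above threshold
sigma = (p - p_c) ** t_true       # synthetic "measured" effective conductivities

# On log-log axes the power law is a straight line with slope t
slope, _ = np.polyfit(np.log(p - p_c), np.log(sigma), 1)
print(f"fitted conductivity exponent t ~ {slope:.2f}")  # recovers 2.00
```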

    Optimization Framework for a Radio Frequency Gun Based Injector

    Linear accelerator based light sources are used to produce coherent x-ray beams with unprecedented peak intensity. In these devices, the key parameters of the photon beam, such as brilliance and coherence, are directly dependent on the electron beam parameters. This leads to stringent beam quality requirements for the electron beam source. Radio frequency (RF) guns are used in such light sources since they accelerate electrons to relativistic energies over a very short distance, thus minimizing the beam quality degradation due to space charge effects within the particle bunch. Designing such sources, including optimizing their beam parameters, is a complex process in which many requirements must be met simultaneously, so it is useful to have a tool that automates the design optimization in the context of the injector's beam dynamics performance. Evolutionary and genetic algorithms are powerful tools for nonlinear multi-objective optimization problems, and they have been successfully used in injector optimizations where the electric field profiles of the accelerating devices are fixed. Here, the genetic algorithm based approach is extended to modify and optimize the electric field profile of an RF gun concurrently with the injector performance. Two field modification methods are used. This dissertation presents an overview of the optimization system and examples of its application to a state-of-the-art RF gun. Results indicate that improved injector performance is possible with unbalanced electric field profiles in which the peak field in the cathode cell is larger than in subsequent cells.
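
As an illustration of the genetic-algorithm loop described above, the Python sketch below evolves a per-cell peak-field profile against a placeholder objective. The cell count, field bounds and merit function are invented stand-ins; the real figure of merit would come from a beam-dynamics simulation (emittance, bunch length, etc.), not this toy expression.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CELLS, POP, GENS = 3, 40, 60   # assumed gun cells, population, generations

def objective(fields):
    # Placeholder merit echoing the abstract's "unbalanced profile" finding:
    # favor a cathode cell hotter than the mean of the later cells, with a
    # mild penalty on total field. NOT the dissertation's actual objective.
    return -(fields[0] - 1.5 * fields[1:].mean()) ** 2 - 0.1 * fields.sum()

pop = rng.uniform(20.0, 120.0, size=(POP, N_CELLS))  # peak fields in MV/m (assumed bounds)
for _ in range(GENS):
    scores = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]             # keep the best half
    children = parents[rng.integers(0, len(parents), POP - len(parents))]
    children = children + rng.normal(0.0, 2.0, children.shape)  # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, 20.0, 120.0)])

best = pop[np.argmax([objective(ind) for ind in pop])]
print("best field profile (MV/m):", np.round(best, 1))
```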