    Atmospheric neutrino flux at INO, South Pole and Pyhäsalmi

    We present the calculation of the atmospheric neutrino fluxes for the neutrino experiments proposed at INO, the South Pole, and Pyhäsalmi. The neutrino fluxes have been obtained using ATMNC, a simulation code for cosmic rays in the atmosphere. Even with the same primary flux model and interaction model, the calculated atmospheric neutrino fluxes differ between sites because of the geomagnetic field. The flux predictions presented here should be useful in the analysis of these experiments. (12 pages, 9 figures)
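
    The site dependence enters through the geomagnetic cutoff: the field deflects low-rigidity primary cosmic rays, and the cutoff is strongest near the geomagnetic equator. As rough orientation (a textbook dipole-field estimate, not a result from this paper), the vertical Størmer cutoff rigidity at geomagnetic latitude \lambda_{\mathrm{m}} is

        % Vertical Stormer cutoff in the dipole approximation (standard
        % textbook estimate; illustrative only, not taken from the paper):
        \[
          R_{\mathrm{c}} \;\simeq\; 14.9\,\mathrm{GV}\,\cos^{4}\lambda_{\mathrm{m}}
        \]
        % A near-equatorial site such as INO therefore has a high cutoff
        % (fewer low-energy primaries, hence fewer low-energy neutrinos),
        % while at the South Pole R_c is close to zero.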

    Calculation of atmospheric neutrino flux using the interaction model calibrated with atmospheric muon data

    Using the "modified DPMJET-III" model explained in the previous paper, we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as in HKKM04 (HKKM2004), but the use of the "virtual detector" is improved to reduce the error it introduces. We then study the uncertainty of the calculated atmospheric neutrino flux by summarizing the uncertainties of the individual components of the simulation. The uncertainty due to K (kaon) production in the interaction model is estimated by modifying FLUKA'97 and Fritiof 7.02 so that they also reproduce the atmospheric muon flux data correctly, and by recalculating the atmospheric neutrino flux with those modified interaction models. The uncertainties of the flux ratio and of the zenith-angle dependence of the atmospheric neutrino flux are also studied.
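
    For orientation on the "flux ratio" studied here (a standard counting argument, not specific to this paper): at low energies, where essentially all muons decay in flight, each charged-pion decay chain yields two muon-type neutrinos and one electron-type neutrino,

        % Decay chain behind the canonical low-energy flux ratio
        % (standard argument, independent of the interaction model):
        \[
          \pi^{+} \to \mu^{+}\nu_{\mu}, \qquad
          \mu^{+} \to e^{+}\nu_{e}\bar{\nu}_{\mu}
          \;\Longrightarrow\;
          \frac{\phi(\nu_{\mu}) + \phi(\bar{\nu}_{\mu})}
               {\phi(\nu_{e}) + \phi(\bar{\nu}_{e})} \approx 2
        \]

    so this ratio's deviation from 2, and its uncertainty, are sensitive probes of the simulation inputs.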

    Communion: a new strategy for memory management in high-performance computer systems

    Modern computers present a big gap between peak performance and sustained performance. There are many reasons for this situation, most of them involving the inefficient use of computational resources. Nowadays the memory system is the most critical component because of its growing inability to keep up with processor requests: technological trends have produced a large and growing gap between CPU speeds and DRAM speeds. Much research has focused on this memory-system problem, including program optimization techniques, data-locality enhancement, hardware and software prefetching, decoupled architectures, multithreading, and speculative loads and execution. These techniques have had relative success, but each focuses on only one component of the hardware or software system. We present here a new strategy for memory management in high-performance computer systems, named COMMUNION. The basic idea behind this strategy is cooperation: we introduce interaction possibilities among the system programs that are responsible for generating and executing application programs. Specifically, we investigate two interactions: between the compiler and the operating system, and among the components of the compiling system. The experimental results show that it is possible to obtain improvements of about 10 times in execution time and about 5 times in memory demand. In the interaction between the compiler and the operating system, named Compiler-Aided Page Replacement (CAPR), we achieved a reduction of about 10% in the space-time product with an increase of only 0.5% in total execution time. All these results show that it is possible to manage main memory more efficiently than current systems do.
    Track: Distributed and parallel processing; signal processing. Red de Universidades con Carreras en Informática (RedUNCI).
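
    To make the compiler/operating-system interaction concrete, here is a minimal sketch of the idea behind CAPR, assuming a hypothetical hint interface (the abstract does not specify the actual mechanism): the compiler marks pages whose data it knows will not be referenced again, and the page-replacement policy evicts those dead pages before falling back to plain LRU.

        # Illustrative sketch of compiler-aided page replacement (CAPR).
        # The hint interface is hypothetical, assumed for illustration only.
        from collections import OrderedDict

        class CAPRSimulator:
            def __init__(self, num_frames):
                self.num_frames = num_frames
                self.frames = OrderedDict()   # resident pages, LRU order
                self.dead_hints = set()       # pages the compiler marked dead
                self.faults = 0

            def hint_dead(self, page):
                """Compiler-supplied hint: `page` will not be referenced again."""
                self.dead_hints.add(page)

            def access(self, page):
                if page in self.frames:
                    self.frames.move_to_end(page)   # hit: refresh LRU position
                    return
                self.faults += 1
                if len(self.frames) >= self.num_frames:
                    # Prefer evicting a resident page the compiler declared
                    # dead; otherwise fall back to the least recently used.
                    victim = next((p for p in self.frames
                                   if p in self.dead_hints), None)
                    if victim is None:
                        self.frames.popitem(last=False)
                    else:
                        del self.frames[victim]
                self.frames[page] = None

        # Reuse pages 10-11, stream pages 0-3 exactly once (dead after use),
        # then come back to 10-11.  With 3 frames the hints keep the reused
        # pages resident: 6 faults here versus 8 under plain LRU, which
        # would evict 10 and 11 while streaming.
        sim = CAPRSimulator(num_frames=3)
        for p in (10, 11):
            sim.access(p)
        for p in range(4):
            sim.access(p)
            sim.hint_dead(p)
        for p in (10, 11):
            sim.access(p)
        print("page faults:", sim.faults)   # -> 6

    The space-time product cited in the abstract is the integral of resident memory over execution time, so evicting dead pages earlier shrinks it even when execution time barely changes.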

    Improvement of low energy atmospheric neutrino flux calculation using the JAM nuclear interaction model

    We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System). The JAM interaction model agrees with the HARP experiment a little better than DPMJET-III does. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III that we used for the calculation of the atmospheric neutrino flux in previous works. Some improvements in the calculation of the atmospheric neutrino flux are also reported. (46 pages, 28 figures)