
    Production of π+ and K+ mesons in argon-nucleus interactions at 3.2 AGeV

    First physics results of the BM@N experiment at the Nuclotron/NICA complex are presented on π+ and K+ meson production in interactions of an argon beam with fixed targets of C, Al, Cu, Sn and Pb at 3.2 AGeV. Transverse momentum distributions, rapidity spectra and multiplicities of π+ and K+ mesons are measured. The results are compared with predictions of theoretical models and with other measurements at lower energies. Comment: 29 pages, 20 figures.
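    As a quick illustration of the rapidity variable used in the spectra above, here is a minimal sketch in Python (assuming natural units in GeV and an approximate proton mass; the beam-rapidity figure is a back-of-envelope estimate, not a number taken from the paper):

    ```python
    import math

    PROTON_MASS = 0.938  # GeV, approximate

    def rapidity(E, pz):
        """Longitudinal rapidity y = 0.5 * ln((E + pz) / (E - pz))
        for a particle with total energy E and beam-axis momentum pz (GeV)."""
        return 0.5 * math.log((E + pz) / (E - pz))

    # Back-of-envelope: a beam nucleon with 3.2 GeV kinetic energy per nucleon.
    E = 3.2 + PROTON_MASS                   # total energy
    pz = math.sqrt(E**2 - PROTON_MASS**2)   # momentum along the beam axis
    y_beam = rapidity(E, pz)                # roughly 2.16
    ```

    Rapidity is preferred over velocity in such analyses because it is additive under longitudinal Lorentz boosts.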

    DIRAC Interware as a Service for High-Throughput Computing in JINR

    DIRAC Interware is an open-source development platform for the integration of heterogeneous computing and storage resources. A service based on this platform was deployed and configured at the Joint Institute for Nuclear Research (JINR) in 2016 and is now actively used by the MPD, Baikal-GVD, and BM@N experiments. JINR operates five large computing resources with uniform access via the DIRAC service: Tier1, Tier2, the Govorun supercomputer, the cloud, and the NICA cluster. In particular, the DIRAC service was used as a tool for integrating the cloud resources of JINR member states. The overall performance of the united system is at least three times higher than that of any single computing resource. Using the united system does add complexity for users and requires additional effort to reach high performance, but over the last three years of active use of DIRAC, approaches have been elaborated to simplify the use of the system, and many tools and components have been developed to allow fast construction of new workflows. The total number of completed jobs exceeds 1 million, and the total amount of computing work is around 4.5 million HS06·days.
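    The quoted totals can be sanity-checked with trivial arithmetic; the per-job figure below is derived from the abstract's numbers, not reported in the source:

    ```python
    total_jobs = 1_000_000   # completed jobs, per the abstract
    total_work = 4_500_000   # HS06*days of computing work, per the abstract

    work_per_job = total_work / total_jobs  # HS06*days per average job
    # 4.5 HS06*days/job, i.e. about 108 HS06*hours of work per job
    ```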

    Integration of Distributed Heterogeneous Computing Resources for the MPD Experiment with DIRAC Interware

    Computing and storage resources are essential for efficient Monte Carlo generation. JINR provides several types of computing resources: Tier1 and Tier2 grid clusters, the Govorun supercomputer, the JINR Cloud, and the NICA cluster, as well as EOS disk and dCache tape storage systems. To use them directly, users have to be aware of many details and differences between the resources and keep track of the load on all of them. The DIRAC Interware was adopted, configured, and extended to fulfill the requirements of massive centralized Monte Carlo generation for the multipurpose detector. Over one year, the entire infrastructure was used via DIRAC to successfully run around 500,000 jobs with an average duration of 5 hours each. The use of DIRAC allowed for unified data access, performance estimation, and accounting across all resources.
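    The scale of that workload translates into CPU time as follows (a rough estimate from the abstract's figures, assuming one core per job):

    ```python
    jobs = 500_000       # jobs run via DIRAC, per the abstract
    hours_per_job = 5    # average job duration, per the abstract

    core_hours = jobs * hours_per_job     # 2,500,000 core-hours
    core_years = core_hours / (24 * 365)  # roughly 285 core-years delivered in one year
    ```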

    BES-III distributed computing status

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements in e+e– annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized to distributed data processing. This report summarizes the current design of the BES-III distributed computing system, some key decisions, and the experience gained during two years of operations.