3,333 research outputs found

    Multiparticle production and quantum chromodynamics

    The theory of strong interactions, quantum chromodynamics (QCD), is quite successful in predicting and describing the main features of multiparticle production processes at high energies. The general perturbative QCD approach to these processes (mainly to e+e- annihilation) is briefly formulated and its problems are discussed. It is shown that analytical calculations at the parton level with a low-momentum cut-off reproduce experimental data on the hadronic final state in multiparticle production processes at high energies surprisingly accurately, even though the perturbative expansion parameter is not very small. Moreover, it is important that perturbative QCD has been able not only to describe the existing data but also to predict many striking, qualitatively new phenomena.
    Comment: 30 pages, LaTeX, 12 figures available at www.ufn.ru; review paper to be published in Physics-Uspekhi 45 (5) (2002)

    How I got to work with Feynman on the covariant quark model

    In the period 1968 - 1974 I was a graduate student and then a postdoc at Caltech and was involved with the development of the quark and parton models. Most of this time I worked in close contact with Richard Feynman and was thus present from the time the parton model was proposed until QCD was formulated. A personal account is presented of how the collaboration took place and how the various stages of this development looked from the inside, until QCD was established as the theory of strong interactions with the partons being quarks and gluons.
    Comment: LaTeX, 20 pages, 2 figures. Contribution to "50 Years of Quarks", to be published by World Scientific

    Summary: Working Group on QCD and Strong Interactions

    In this summary of the considerations of the QCD working group at Snowmass 2001, the roles of quantum chromodynamics in the Standard Model and in the search for new physics are reviewed, with emphasis on frontier areas in the field. We discuss the importance of, and prospects for, precision QCD in perturbative and lattice calculations. We describe new ideas in the analysis of parton distribution functions and jet structure, and review progress in small-x physics and in polarization.
    Comment: Snowmass 2001. RevTeX4, 34 pages, 4 figures, revised to include additional references on jets and lattice QCD

    A performance portable, fully implicit Landau collision operator with batched linear solvers

    Modern accelerators use hierarchically parallel programming models that enable massive multithreading within a processing element (PE), with multiple PEs per device driven by traditional processes. Batching is a technique for exposing PE-level parallelism in algorithms that previously ran on entire processes or on multiple threads within a single MPI process. Kinetic discretizations of magnetized plasmas, for example, advance the Vlasov-Maxwell system, which is then followed by a fully implicit time advance of a collision operator. These collision advances are independent at each spatial point and are well suited to batch processing. This paper builds on previous work on a high-performance, fully nonlinear Landau collision operator by batching the linear solver as well as the spatial-point problems, and by adding new support for multiple grids for highly multiscale, multi-species problems. An anisotropic relaxation verification test that agrees well with previously published results and analytical solutions is presented. Performance is evaluated on NVIDIA A100 and AMD MI250X nodes, with a detailed hardware utilization analysis on the A100. For portability, the entire Landau operator time advance is implemented in Kokkos and is available in the PETSc numerical library
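    As a rough illustration of the batching idea, here is a minimal sketch, not the authors' code: it uses Kokkos (which the paper names) to launch one work item per spatial point, each solving its own small dense system. The system size N, the data layout, and the naive per-thread Gaussian elimination (standing in for the batched solvers the paper actually uses) are all illustrative assumptions.

    ```cpp
    // Minimal sketch of batching independent per-spatial-point solves with Kokkos.
    // Not the paper's implementation; N and the naive solver are placeholders.
    #include <Kokkos_Core.hpp>

    constexpr int N = 8;  // illustrative size of each per-point linear system

    int main(int argc, char* argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int nbatch = 1024;  // number of independent spatial-point systems
        // One small dense system A x = b per batch entry.
        Kokkos::View<double***> A("A", nbatch, N, N);
        Kokkos::View<double**>  b("b", nbatch, N);
        // ... fill A and b from the per-point collision operator ...

        // One PE-level work item per spatial point: each solves its own
        // N x N system, exposing the batch-level parallelism described above.
        Kokkos::parallel_for("batched_solve", nbatch, KOKKOS_LAMBDA(const int p) {
          // Naive in-place Gaussian elimination without pivoting (sketch only).
          for (int k = 0; k < N; ++k) {
            for (int i = k + 1; i < N; ++i) {
              const double f = A(p, i, k) / A(p, k, k);
              for (int j = k; j < N; ++j) A(p, i, j) -= f * A(p, k, j);
              b(p, i) -= f * b(p, k);
            }
          }
          for (int i = N - 1; i >= 0; --i) {  // back substitution; solution lands in b
            double s = b(p, i);
            for (int j = i + 1; j < N; ++j) s -= A(p, i, j) * b(p, j);
            b(p, i) = s / A(p, i, i);
          }
        });
        Kokkos::fence();
      }
      Kokkos::finalize();
      return 0;
    }
    ```

    In a production code the per-thread elimination would be replaced by the library's batched solvers, but the structure, many independent per-point systems exposed through a single batched launch, is the point of the technique.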

    TransPimLib: A Library for Efficient Transcendental Functions on Processing-in-Memory Systems

    Processing-in-memory (PIM) promises to alleviate the data movement bottleneck in modern computing systems. However, current real-world PIM systems have the inherent disadvantage that their hardware is more constrained than in conventional processors (CPU, GPU), due to the difficulty and cost of building processing elements near or inside the memory. As a result, general-purpose PIM architectures support fairly limited instruction sets and struggle to execute complex operations such as transcendental functions and other hard-to-calculate operations (e.g., square root). These operations are particularly important for some modern workloads, e.g., activation functions in machine learning applications. In order to provide support for transcendental (and other hard-to-calculate) functions in general-purpose PIM systems, we present TransPimLib, a library that provides CORDIC-based and LUT-based methods for trigonometric functions, hyperbolic functions, exponentiation, logarithm, square root, etc. We develop an implementation of TransPimLib for the UPMEM PIM architecture and perform a thorough evaluation of TransPimLib's methods in terms of performance and accuracy, using microbenchmarks and three full workloads (Blackscholes, Sigmoid, Softmax). We open-source all our code and datasets at https://github.com/CMU-SAFARI/transpimlib.
    Comment: Our open-source software is available at https://github.com/CMU-SAFARI/transpimlib
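    To give a flavor of the CORDIC-based approach the abstract mentions, here is a minimal, self-contained sketch, not TransPimLib's API: rotation-mode circular CORDIC computes sine and cosine from shift-and-add style iterations plus a small angle table, the kind of operation mix that suits constrained PIM processing elements. The double-precision types and iteration count are illustrative assumptions; a real PIM deployment would use a fixed-point variant.

    ```cpp
    // Rotation-mode circular CORDIC for sin/cos (illustrative sketch).
    // Valid for |theta| up to about 1.74 rad; range reduction handles the rest.
    #include <cmath>
    #include <cstdio>

    void cordic_sincos(double theta, int iters, double* s, double* c) {
      double x = 1.0, y = 0.0, z = theta, k = 1.0;
      // Precompute the CORDIC gain compensation K = prod 1/sqrt(1 + 2^(-2i)).
      for (int i = 0; i < iters; ++i)
        k *= 1.0 / std::sqrt(1.0 + std::ldexp(1.0, -2 * i));
      for (int i = 0; i < iters; ++i) {
        const double d  = (z >= 0.0) ? 1.0 : -1.0;   // rotate toward z = 0
        const double dx = std::ldexp(y, -i);         // y * 2^-i (a shift in fixed point)
        const double dy = std::ldexp(x, -i);         // x * 2^-i
        x -= d * dx;
        y += d * dy;
        z -= d * std::atan(std::ldexp(1.0, -i));     // angle table entry atan(2^-i)
      }
      *c = k * x;  // cos(theta)
      *s = k * y;  // sin(theta)
    }

    int main() {
      double s, c;
      cordic_sincos(0.5, 32, &s, &c);
      std::printf("sin=%.9f cos=%.9f (ref %.9f %.9f)\n", s, c, std::sin(0.5), std::cos(0.5));
      return 0;
    }
    ```

    The appeal for PIM is that, in fixed point, each iteration needs only shifts, adds, and a table lookup; no multiplier or floating-point unit is required.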

    Neural network computing using on-chip accelerators

    The use of neural networks, machine learning, or artificial intelligence, in its broadest and most controversial sense, has been a tumultuous journey involving three distinct hype cycles and a history dating back to the 1960s. Resurgent, enthusiastic interest in machine learning and its applications bolsters the case for machine learning as a fundamental computational kernel. Furthermore, researchers have demonstrated that machine learning can be utilized as an auxiliary component of applications to enhance or enable new types of computation, such as approximate computing or automatic parallelization. In our view, machine learning becomes not the underlying application, but a ubiquitous component of applications. This view necessitates a different approach to the deployment of machine learning computation that spans not only the hardware design of accelerator architectures, but also the user and supervisor software needed to enable the safe, simultaneous use of machine learning accelerator resources. In this dissertation, we propose a multi-transaction model of neural network computation to meet the needs of future machine learning applications. We demonstrate that this model, which decouples a backend accelerator for inference and learning from the hardware and software that manage neural network transactions, can be achieved with low overhead and integrated with a modern RISC-V microprocessor. Our extensions span user and supervisor software and data structures and, coupled with our hardware, enable multiple transactions from different address spaces to execute simultaneously, yet safely. Together, our system demonstrates the utility of a multi-transaction model in improving energy efficiency and overall accelerator throughput for machine learning applications

    Detecting chaos, determining the dimensions of tori and predicting slow diffusion in Fermi-Pasta-Ulam lattices by the Generalized Alignment Index method

    The recently introduced GALI method is used for rapidly detecting chaos, determining the dimensionality of regular motion, and predicting slow diffusion in multi-dimensional Hamiltonian systems. We propose an efficient computation of the GALI_k indices, which represent volume elements of k randomly chosen deviation vectors from a given orbit, based on the Singular Value Decomposition (SVD) algorithm. We obtain theoretically, and verify numerically, asymptotic estimates of the long-time behavior of the GALIs in the case of regular orbits lying on low-dimensional tori. The GALI_k indices are applied to rapidly detect chaotic oscillations, identify low-dimensional tori of Fermi-Pasta-Ulam (FPU) lattices at low energies, and predict weak diffusion away from quasiperiodic motion long before it is actually observed in the oscillations.
    Comment: 10 pages, 5 figures, submitted for publication in European Physical Journal - Special Topics. Revised version: small explanatory additions to the text and addition of some references; a small figure change
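    The SVD-based evaluation the abstract describes can be stated compactly: GALI_k is the volume of the parallelepiped spanned by the k unit-normalized deviation vectors, which equals the product of the singular values of the matrix having those vectors as rows. A minimal sketch of that computation follows, using Eigen (the choice of library is an assumption; the authors do not specify one):

    ```cpp
    // Sketch of the SVD-based GALI_k evaluation (not the authors' code).
    #include <Eigen/Dense>

    // devs: k x 2N matrix, one deviation vector of the 2N-dimensional
    // phase space per row, evolved along the orbit by the variational equations.
    double gali_k(const Eigen::MatrixXd& devs) {
      const Eigen::MatrixXd unit = devs.rowwise().normalized();  // unit deviation vectors
      Eigen::JacobiSVD<Eigen::MatrixXd> svd(unit);               // singular values only
      return svd.singularValues().prod();  // volume of the spanned parallelepiped
    }
    ```

    For a chaotic orbit the deviation vectors align and some singular values tend to zero, so GALI_k decays exponentially; on a regular torus the decay follows the power laws the paper estimates.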

    Domain Specific Language for Magnetic Measurements at CERN

    CERN, the European Organization for Nuclear Research, is one of the world’s largest and most respected centres for scientific research. Founded in 1954, the CERN laboratory sits astride the Franco–Swiss border near Geneva. It was one of Europe’s first joint ventures and now has 20 Member States. Its main purpose is fundamental research in particle physics, namely investigating what the Universe is made of and how it works. At CERN, the design and realization of the new particle accelerator, the Large Hadron Collider (LHC), has required a remarkable technological effort in many areas of engineering. In particular, the tests of the LHC superconducting magnets opened new horizons in magnetic measurements. The large R&D effort of the Technology Department/Magnets, Superconductors and Cryostats (TE/MSC) group at CERN identified areas where further work is required in order to assist the LHC commissioning and start-up, to provide continuity in the instrumentation for LHC magnet maintenance, and to achieve more accurate magnet models for LHC exploitation. In view of future projects, a wide range of software requirements has recently been satisfied by the Flexible Framework for Magnetic Measurements (FFMM), designed also to integrate more flexible, higher-performance hardware. FFMM software applications control several devices, such as encoder boards, digital integrators, motor controllers, and transducers. In addition, they synchronize and coordinate different measurement tasks and actions