Renormalization group improved predictions for production at hadron colliders
We study the factorization and resummation for the associated production of a heavy quark pair with a colorless particle at hadron colliders. In the threshold limit, the cross section can be factorized into a convolution of hard and soft functions with parton distribution functions within soft-collinear effective theory. We calculate the next-to-leading order soft function for the associated production of a heavy quark pair and a colorless particle, and we perform the resummation at next-to-next-to-leading logarithmic (NNLL) accuracy. Our results show that the resummation effects significantly reduce the scale dependence of the cross section and increase the total cross section compared with the NLO QCD results.
Comment: 23 pages, 7 figures and 2 tables; final version in PR
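As a rough guide to the structure referred to above, the soft-collinear effective theory threshold factorization for this kind of process can be written schematically as (a hedged sketch; the exact convolution variables, color structure, and kinematic definitions depend on the process and are given in the paper):

\sigma^{\mathrm{thresh}} \;\simeq\; \sum_{ij}\int \mathrm{d}x_1\,\mathrm{d}x_2\; f_{i/p}(x_1,\mu_f)\, f_{j/p}(x_2,\mu_f)\;\mathrm{Tr}\!\left[\mathbf{H}_{ij}(M,\mu)\,\mathbf{S}_{ij}\big(\sqrt{\hat s}\,(1-z),\mu\big)\right], \qquad z=\frac{M^2}{\hat s}\to 1,

where f denotes the parton distribution functions, H and S are the hard and soft functions (matrices in color space when a heavy-quark pair is in the final state), and M is the invariant mass of the heavy-quark pair plus the colorless particle.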
Signature of the +jet and dijet production mediated by an excited quark with QCD next-to-leading order accuracy at the LHC
We present a detailed study of the production and decay of an excited quark at QCD next-to-leading order (NLO) at the Large Hadron Collider, using the narrow width approximation and the helicity amplitude method. We find that the QCD NLO corrections can tighten the constraints on the model parameters and reduce the scale dependence of the total cross sections. We discuss the signals of excited quark production in its decay channels and present several important kinematic distributions. Moreover, we give the upper limits of the excluded excited quark mass range and the allowed parameter space for the coupling constants and the excited quark mass.
Comment: 20 pages, 13 figures; version published in PR
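For orientation, the narrow width approximation used above factorizes the full process into on-shell production times a branching ratio; schematically (a generic sketch, not the paper's exact expressions):

\sigma(pp \to q^* \to X) \;\approx\; \sigma(pp \to q^*)\times \mathrm{BR}(q^* \to X), \qquad \Gamma_{q^*} \ll m_{q^*},

which is what allows production and decay to be computed separately at NLO and then combined.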
Threshold resummation for the production of a color sextet (antitriplet) scalar at the LHC
We investigate threshold resummation effects in the production of a color sextet (antitriplet) scalar at next-to-next-to-leading logarithmic (NNLL) order at the LHC within the framework of soft-collinear effective theory. We present the total cross section and the rapidity distribution at NLO+NNLL accuracy and compare them with the NLO results. In addition, we use recent dijet data from the LHC to constrain the couplings between the colored scalars and quarks.
Comment: 21 pages, 9 figures, 3 tables; version published in EPJ
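For readers unfamiliar with the NLO+NNLL label, the matched prediction quoted in such analyses is usually built by the standard additive matching (a schematic sketch, not necessarily the paper's exact convention):

\sigma^{\mathrm{NLO+NNLL}} \;=\; \sigma^{\mathrm{NNLL}} \;+\; \sigma^{\mathrm{NLO}} \;-\; \sigma^{\mathrm{NNLL}}\big|_{\mathrm{expanded\ to\ NLO}},

where the last term removes the overlap between the fixed-order and resummed results so that no logarithmic contribution is double counted.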
Analytical Model of Electromagnetic Performance for Permanent-Magnet Vernier Machines Using Nonlinear Exact Conformal Model
This article investigates the air-gap field distribution of the permanent-magnet Vernier machine (PMVM) using a nonlinear exact conformal model (NECM) that accounts for the slotting effect, the flux modulation effect, and iron nonlinearity. An exact conformal model (ECM) based on the region of one slot and one flux-modulation pole (OSECM) is introduced to show the effectiveness of the linear analytical model for the PMVM; it maintains high calculation accuracy while significantly reducing the computational burden. The NECM is then developed from the OSECM by introducing equivalent saturation currents into the air and coil regions. A lumped-parameter magnetic circuit model (LPMCM) is used to obtain the magnetic potential of the iron region and thereby calculate the equivalent saturation currents. The NECM, which combines the LPMCM and the OSECM, substantially improves the accuracy of the linear analytical model. A harmonic analysis of the air-gap field is performed to explain the components of the electromagnetic torque theoretically. Both finite element model (FEM) simulations and test results are presented to validate the NECM.
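The coupling between the linear conformal model and the magnetic circuit model described above amounts, in practice, to a fixed-point iteration: solve the linear field problem, deduce the iron potential drops, translate them into equivalent saturation currents, and repeat. The toy script below is only a sketch of that scheme with invented stand-in functions and constants; it is not the article's NECM/LPMCM implementation.

# Toy sketch of the coupling scheme described above: a linear field model is
# iterated with a magnetic-circuit correction ("equivalent saturation
# currents") until a fixed point is reached.  Every function below is a
# hypothetical stand-in, not the article's NECM/LPMCM implementation.

def solve_linear_ecm(i_sat):
    # Stand-in for the linear exact conformal model (OSECM): air-gap flux
    # density from the magnets, reduced by the equivalent saturation current.
    return 1.0 - 0.3 * i_sat

def solve_lpmcm(b_airgap):
    # Stand-in for the lumped-parameter magnetic circuit model: a crude
    # saturation law giving the iron magnetic-potential drop.
    return 0.5 * b_airgap ** 3

def saturation_current(iron_potential):
    # Convert the iron potential drop into an equivalent saturation current.
    return 0.8 * iron_potential

def nonlinear_conformal_model(tol=1e-6, max_iter=100):
    i_sat = 0.0
    for _ in range(max_iter):
        b_airgap = solve_linear_ecm(i_sat)
        i_new = saturation_current(solve_lpmcm(b_airgap))
        if abs(i_new - i_sat) < tol:
            break
        i_sat = i_new
    return b_airgap, i_sat

if __name__ == "__main__":
    b, i_sat = nonlinear_conformal_model()
    print(f"converged air-gap field {b:.4f} (arb. units), "
          f"equivalent saturation current {i_sat:.4f}")

In the actual NECM the field solve is the conformal mapping over one slot and one flux-modulation pole and the LPMCM supplies the iron potentials; the scalar laws above merely stand in for those steps.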
ROAM: memory-efficient large DNN training via optimized operator ordering and memory layout
As deep learning models continue to grow in size, the memory required for training has surged. High-level techniques such as offloading, recomputation, and compression can alleviate memory pressure, but they also introduce overheads. A memory-efficient execution plan, comprising a reasonable operator execution order and tensor memory layout, can significantly improve a model's memory efficiency and reduce the overheads of these high-level techniques. In this paper, we propose ROAM, which operates at the computation-graph level to derive a memory-efficient execution plan with an optimized operator order and tensor memory layout. We first establish theoretical results that carefully account for model structure and training memory load, supporting optimization for large, complex graphs that previous work has not handled well. We further propose an efficient tree-based algorithm that automatically searches for task divisions, solving the problem both effectively and efficiently. Experiments show that ROAM achieves substantial memory reductions of 35.7%, 13.3%, and 27.2% compared with PyTorch and two state-of-the-art methods, respectively, and offers a remarkable 53.7x speedup. An evaluation on the large GPT2-XL model further validates ROAM's scalability.
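To make concrete why the operator execution order alone can change peak training memory, the toy script below (a didactic sketch with an invented two-branch graph and made-up tensor sizes; it is not ROAM's planner) enumerates the topological orders of a small computation graph and compares their peak live-tensor memory. Finishing one branch before starting the other roughly halves the peak relative to interleaving the two large intermediates.

# Toy illustration of why operator execution order matters for peak memory:
# enumerate the topological orders of a tiny computation graph and report the
# peak of live tensor sizes.  The graph and sizes are invented; this is a
# didactic sketch, not ROAM's planner.
from itertools import permutations

# op -> (output size in MB, input ops); two branches from a shared input.
GRAPH = {
    "x":   (10,  []),             # shared input activation
    "b1":  (500, ["x"]),          # large intermediate on branch 1
    "r1":  (10,  ["b1"]),         # reduction of branch 1
    "b2":  (500, ["x"]),          # large intermediate on branch 2
    "r2":  (10,  ["b2"]),         # reduction of branch 2
    "out": (10,  ["r1", "r2"]),   # combine both branches
}

def is_topological(order):
    seen = set()
    for op in order:
        if any(src not in seen for src in GRAPH[op][1]):
            return False
        seen.add(op)
    return True

def peak_memory(order):
    # Remaining consumers of each tensor; a tensor is freed once all are done.
    remaining = {op: {c for c, (_, ins) in GRAPH.items() if op in ins}
                 for op in GRAPH}
    live, peak = {}, 0
    for op in order:
        live[op] = GRAPH[op][0]                 # allocate op's output
        peak = max(peak, sum(live.values()))    # inputs are still live here
        for src in GRAPH[op][1]:
            remaining[src].discard(op)
            if not remaining[src]:
                live.pop(src, None)             # all consumers done: free it
    return peak

if __name__ == "__main__":
    orders = [o for o in permutations(GRAPH) if is_topological(o)]
    peaks = {o: peak_memory(o) for o in orders}
    best, worst = min(peaks, key=peaks.get), max(peaks, key=peaks.get)
    print("best :", best, peaks[best], "MB")    # one branch finished first
    print("worst:", worst, peaks[worst], "MB")  # both large tensors live at once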
Optimal Synthesis of Stabilizer Codes via MaxSAT
Quantum Error Correction (QEC) codes are crucial for achieving fault-tolerant quantum computing in the long term. However, efficiently implementing these codes on hardware poses significant challenges, including hardware connectivity matching, efficient circuit scheduling, and fault-tolerance enforcement. In this study, we present an optimal synthesizer that stitches generic stabilizer codes onto diverse hardware structures via MaxSAT. Our evaluation demonstrates (1) that our approach applies to a variety of codes and devices and (2) that it is consistently more efficient than the best prior heuristic approaches, which target only specific QEC codes. By bridging the gap between high-level QEC code design and low-level hardware constraints, this work paves the way toward achieving long-term fault-tolerant quantum computing.
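As a rough illustration of casting code-to-hardware stitching as MaxSAT (a toy sketch, not the paper's encoding: the 4-qubit code fragment, the 5-node device, and the adjacency-preference objective are invented, and the python-sat package is assumed to be available), the script below places qubits on hardware nodes with hard placement constraints and soft adjacency preferences for qubit pairs that share a stabilizer.

# Toy MaxSAT encoding of a qubit-to-hardware placement problem, in the spirit
# of stitching a stabilizer code onto a device.  The code fragment, the device
# graph, and the objective are invented for illustration; requires the
# python-sat package (pip install python-sat).
from itertools import combinations
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

QUBITS = [0, 1, 2, 3]                      # data qubits of a tiny code fragment
NODES = [0, 1, 2, 3, 4]                    # hardware qubits
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4)}   # device coupling map (a line)
PAIRS = [(0, 1), (1, 2), (2, 3), (0, 3)]   # qubit pairs sharing a stabilizer

def adjacent(m, n):
    return (m, n) in EDGES or (n, m) in EDGES

def var(q, n):
    # Boolean variable "qubit q is placed on hardware node n" (1-based ids).
    return q * len(NODES) + n + 1

wcnf = WCNF()
for q in QUBITS:
    wcnf.append([var(q, n) for n in NODES])            # hard: at least one node
    for n1, n2 in combinations(NODES, 2):
        wcnf.append([-var(q, n1), -var(q, n2)])         # hard: at most one node
for n in NODES:
    for q1, q2 in combinations(QUBITS, 2):
        wcnf.append([-var(q1, n), -var(q2, n)])         # hard: one qubit per node
for q1, q2 in PAIRS:                                    # soft: keep interacting
    for n1 in NODES:                                    # qubits on adjacent nodes
        for n2 in NODES:
            if n1 != n2 and not adjacent(n1, n2):
                wcnf.append([-var(q1, n1), -var(q2, n2)], weight=1)

with RC2(wcnf) as solver:
    model = set(solver.compute())
    placement = {q: n for q in QUBITS for n in NODES if var(q, n) in model}
    print("placement:", placement, "| non-adjacent interacting pairs:", solver.cost)

Hard clauses enforce a valid one-to-one placement, while each violated soft clause costs one unit, so the MaxSAT optimum minimizes the number of interacting pairs that end up on non-adjacent hardware qubits.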