
    pQCD at the Kinematical Boundary of its Applicability

    I examine the applicability of, and the possible need for a re-formulation of, pQCD as we know it in the DIS limit of small $Q^2$ and $x$. Gluon saturation, implied by $s$-channel unitarity, and its possible experimental signatures are critically assessed. Comment: 6 pages, 4 figures (in ps); talk given at the XXXI International Symposium on Multiparticle Dynamics, Sep. 1-7, 2001, Datong, China. URL: http://ismd31.ccnu.edu.cn
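    As background to the saturation discussion, the onset of gluon saturation is commonly characterized by a saturation scale $Q_s(x)$; the power-law parametrization below is a standard illustration rather than a result quoted in this abstract (the scale $Q_0$, the reference point $x_0$ and the exponent $\lambda$ are fit parameters of such models):

```latex
% Common parametrization of the gluon saturation scale (illustrative only);
% saturation effects become important once Q^2 \lesssim Q_s^2(x).
Q_s^2(x) = Q_0^2 \left( \frac{x_0}{x} \right)^{\lambda},
\qquad \lambda \simeq 0.2\text{--}0.3 .
```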

    Soft Scattering Re-Visited

    An updated formulation of soft diffraction, compatible with $s$- and $t$-channel unitarity, is presented. Its consequent general features of soft scattering at high energies are explored. The critical interplay between theory and data analysis, and its implications for the theoretical foundations of soft scattering theory, are discussed.
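    For reference, the $s$-channel unitarity constraint invoked above is usually written in impact-parameter space as the relation below; this is the textbook form, not an equation taken from the paper itself ($a_{el}(s,b)$ is the elastic amplitude and $G_{in}(s,b)$ the summed contribution of the inelastic channels):

```latex
% s-channel unitarity in the impact-parameter representation (textbook form):
2\,\mathrm{Im}\, a_{el}(s,b) \;=\; \left| a_{el}(s,b) \right|^2 + G_{in}(s,b),
\qquad G_{in}(s,b) \ge 0 .
```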

    Efficient Long-Text Understanding with Short-Text Models

    Transformer-based pretrained language models (LMs) are ubiquitous across natural language understanding, but cannot be applied to long sequences such as stories, scientific articles and long documents, due to their quadratic complexity. While a myriad of efficient transformer variants have been proposed, they are typically based on custom implementations that require expensive pretraining from scratch. In this work, we propose SLED: SLiding-Encoder and Decoder, a simple approach for processing long sequences that re-uses and leverages battle-tested short-text pretrained LMs. Specifically, we partition the input into overlapping chunks, encode each with a short-text LM encoder, and use the pretrained decoder to fuse information across chunks (fusion-in-decoder). We illustrate through controlled experiments that SLED offers a viable strategy for long text understanding and evaluate our approach on SCROLLS, a benchmark with seven datasets across a wide range of language understanding tasks. We find that SLED is competitive with specialized models that are up to 50x larger and require a dedicated and expensive pretraining step. Comment: Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023. Authors' final version (pre-MIT
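    The chunk-encode-then-fuse recipe described above can be sketched with any off-the-shelf short-text encoder-decoder; the snippet below is an illustrative approximation of the idea, not the authors' implementation (the checkpoint name, chunk length, stride and generation settings are arbitrary choices, and SLED details such as prepending the query to every chunk are omitted):

```python
# Minimal sketch of a SLED-style sliding encoder with fusion-in-decoder,
# assuming a Hugging Face BART checkpoint. Illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.modeling_outputs import BaseModelOutput

model_name = "facebook/bart-base"          # any short-text encoder-decoder LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def sled_generate(long_text: str, chunk_len: int = 256, stride: int = 128) -> str:
    # 1) Split the long input into overlapping token chunks.
    ids = tok(long_text, return_tensors="pt").input_ids[0]
    chunks = [ids[i:i + chunk_len] for i in range(0, len(ids), stride)]

    # 2) Encode each chunk independently with the short-text encoder.
    with torch.no_grad():
        states = [model.get_encoder()(input_ids=c.unsqueeze(0)).last_hidden_state
                  for c in chunks]

    # 3) Fusion-in-decoder: concatenate encoder states along the sequence axis
    #    so the decoder's cross-attention sees all chunks at once.
    memory = torch.cat(states, dim=1)
    out = model.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=memory),
        max_new_tokens=64,
    )
    return tok.decode(out[0], skip_special_tokens=True)
```

    Because only the encoder runs chunk-by-chunk, the quadratic self-attention cost is paid per chunk rather than over the whole document, while the decoder's cross-attention still spans the full concatenated input.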

    ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding

    We introduce ZeroSCROLLS, a zero-shot benchmark for natural language understanding over long texts, which contains only test and small validation sets, without training data. We adapt six tasks from the SCROLLS benchmark and add four new datasets, including two novel information-fusion tasks, such as aggregating the percentage of positive reviews. Using ZeroSCROLLS, we conduct a comprehensive evaluation of both open-source and closed large language models, finding that Claude outperforms ChatGPT, and that GPT-4 achieves the highest average score. However, there is still room for improvement on multiple open challenges in ZeroSCROLLS, such as aggregation tasks, where models struggle to pass the naive baseline. As the state of the art is a moving target, we invite researchers to evaluate their ideas on the live ZeroSCROLLS leaderboard. Comment: Findings of EMNLP 2023
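    The aggregation tasks mentioned above require a single numeric answer computed over the entire input; the snippet below is a hypothetical illustration of what such a task asks for, with made-up data and a simple absolute-error score rather than the benchmark's official metric:

```python
# Hypothetical illustration of a sentiment-aggregation task in the spirit of
# ZeroSCROLLS: the expected answer is one percentage over all reviews.
from typing import List, Tuple

def gold_positive_percentage(reviews: List[Tuple[str, str]]) -> float:
    """Reviews are (text, label) pairs with label in {'positive', 'negative'}."""
    positives = sum(1 for _, label in reviews if label == "positive")
    return 100.0 * positives / len(reviews)

def absolute_error_score(predicted_pct: float, gold_pct: float) -> float:
    """Toy score: 100 minus the absolute percentage error, floored at 0."""
    return max(0.0, 100.0 - abs(predicted_pct - gold_pct))

reviews = [("Great phone", "positive"), ("Battery died fast", "negative"),
           ("Love it", "positive"), ("Screen cracked", "negative"),
           ("Works as advertised", "positive")]
gold = gold_positive_percentage(reviews)   # 60.0
print(absolute_error_score(50.0, gold))    # a naive "always 50%" guess scores 90.0
```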

    An Investigation of the Hard Contribution to phi Photoproduction

    We investigate the possibility that the process of phi photoproduction may have a significant hard perturbative QCD component. This suggestion is based on a study of the energy dependence of the forward phi photoproduction cross section, followed by a calculation in which we show that a coherent sum of the pQCD and conventional soft Pomeron contributions provides an excellent reproduction of the experimental data. Our results suggest that the transition from the predominantly soft photoproduction of the light rho and omega vector mesons to the predominantly hard photoproduction of the heavy J/psi and Upsilon is smooth and gradual, similar to the transition observed in deep inelastic scattering studies of the proton structure function in the small-$x$ limit. Our predictions for higher HERA energies are presented. Comment: 14 pages, including 5 postscript figures
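    The "coherent sum" referred to above means adding the two contributions at the amplitude level before squaring; the schematic form below illustrates that structure only (the explicit energy powers, a soft effective intercept of roughly 0.08 and a larger effective power for the hard term, are typical Regge-phenomenology values, not numbers taken from this paper):

```latex
% Schematic coherent sum of soft-Pomeron and hard pQCD contributions to
% forward phi photoproduction (illustrative structure only):
\frac{d\sigma}{dt}\Big|_{t=0}(\gamma p \to \phi p)
  \;\propto\; \left| A_{soft}(W) + A_{hard}(W) \right|^2 ,
\qquad
A_{soft} \sim (W^2)^{\,\epsilon_{soft}}, \quad
A_{hard} \sim (W^2)^{\,\epsilon_{hard}}, \quad
\epsilon_{hard} > \epsilon_{soft} \approx 0.08 .
```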

    Inclusive production in a QCD and N=4 SYM motivated model for soft interactions

    The results presented in this paper differ from our previous, unsuccessful attempt to predict the rapidity distribution at $W = 7\,\mathrm{TeV}$. The original version of our model (GLMM) only summed a particular class of Pomeron diagrams (the enhanced diagrams); we believe this was the reason for our failure to describe the $7\,\mathrm{TeV}$ inclusive LHC data. We have developed a new approach (GLM) that also includes the summation of the semi-enhanced diagrams. This contribution is essential for the successful description of the inclusive distributions presented here. Comment: 4 pages, 3 figures