152 research outputs found

    Mellin-Barnes Integrals: A Primer on Particle Physics Applications

    Full text link
    We discuss the Mellin-Barnes representation of complex multidimensional integrals. Experiments at the High-Luminosity Large Hadron Collider at CERN and at future collider projects demand the development of computational methods that achieve the theoretical precision matching experimental accuracy. In this regard, performing higher-order calculations in perturbative quantum field theory is of paramount importance. The Mellin-Barnes integral technique has been successfully applied to the analytic and numerical analysis of integrals connected with virtual and real higher-order perturbative corrections to particle scattering. Easy-to-follow examples with supplemental online material introduce the reader to the construction and the analytic, approximate, and numeric solution of Mellin-Barnes integrals in Euclidean and Minkowskian kinematic regimes. The notes also include an overview of state-of-the-art software packages for manipulating and evaluating Mellin-Barnes integrals. These lecture notes are aimed at advanced students and young researchers mastering the theoretical background needed to perform perturbative quantum field theory calculations.
    Comment: This is a preprint of the following work: Ievgen Dubovyk, Janusz Gluza and Gabor Somogyi, Mellin-Barnes Integrals: A Primer on Particle Physics Applications, 2022, Springer; reproduced with permission of Springer Nature Switzerland AG. 280 pages
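    For orientation, the simplest one-dimensional Mellin-Barnes representation (a standard identity, quoted here as background rather than taken from the notes themselves) trades a sum raised to a power for a contour integral over gamma functions:

        \frac{1}{(A+B)^{\lambda}}
          = \frac{1}{2\pi i\,\Gamma(\lambda)}
            \int_{-i\infty}^{+i\infty} \mathrm{d}z\,
            \Gamma(-z)\,\Gamma(\lambda+z)\,\frac{B^{z}}{A^{\lambda+z}},

    where the integration contour separates the poles of \Gamma(-z) from those of \Gamma(\lambda+z). Iterating this identity on the denominators of a Feynman parameter integral is what produces the multidimensional Mellin-Barnes integrals the notes study.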

    Making Secure Real-Number Operations More Efficient

    Get PDF
    Nowadays data and its analysis are ubiquitous and very useful. Due to this popularity, the combinations in which data and computation on it can relate to each other proliferate. We focus on the cases where the owners of the data and the parties who compute on it coincide only partially or not at all. Examples are medical data, which the owners want to keep secret but which is useful to analyze collectively, and the outsourcing of computation to a more powerful but not fully trusted party. The discipline that studies these cases is called secure computation. This field has mostly worked on integer and bit data types, as they are easier to handle and the other cases can be reduced to integer and bit manipulations: adding and multiplying bits suffices to compute anything computable. That is true in theory, but applying these reductions bluntly gives inefficient results. This thesis therefore studies secure computation on real numbers and presents three methods for improving its efficiency. The first method concerns fixed-point and floating-point numbers. Fixed-point numbers are simple in construction but can lack precision and flexibility. Floating-point numbers, on the other hand, are precise and flexible but rather complicated in nature, which in the secure setting translates to expensive operations. The first method thus combines the two number types for greater efficiency. The second method is based on the fact that in the concrete paradigm we use, it makes little difference timewise whether we perform one operation or a million in parallel. Thus we perform many instances of a fast operation in parallel in order to evaluate a more complicated one. Thirdly, we introduce a new real number type: pairs of integers (a, b) that represent the real number a − φb, where φ = 1.618... is the golden ratio. This number type allows us to perform addition and multiplication relatively quickly and also achieves reasonable granularity.
    https://www.ester.ee/record=b522708
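    To make the third technique concrete, here is a minimal sketch (an illustration of the stated representation, not code from the thesis) of how the golden-ratio pairs add and multiply. The key fact is φ² = φ + 1, which keeps products of integer pairs within integer pairs:

        # Pair (a, b) of integers encodes the real number a - PHI*b,
        # where PHI = (1 + sqrt(5)) / 2 is the golden ratio.
        PHI = (1 + 5 ** 0.5) / 2  # only needed to decode back to a float

        def add(x, y):
            (a1, b1), (a2, b2) = x, y
            return (a1 + a2, b1 + b2)

        def mul(x, y):
            # (a1 - PHI*b1)(a2 - PHI*b2)
            #   = a1*a2 + PHI**2 * b1*b2 - PHI*(a1*b2 + a2*b1)
            #   = (a1*a2 + b1*b2) - PHI*(a1*b2 + a2*b1 - b1*b2),
            # using PHI**2 = PHI + 1.
            (a1, b1), (a2, b2) = x, y
            return (a1 * a2 + b1 * b2, a1 * b2 + a2 * b1 - b1 * b2)

        def to_float(x):
            a, b = x
            return a - PHI * b

        x, y = (3, 1), (0, -1)        # encode 3 - PHI and PHI
        print(to_float(mul(x, y)))    # (3 - PHI) * PHI = sqrt(5) = 2.236...

    Both operations are exact integer arithmetic on the pairs; only the final decoding step touches a floating-point value of φ.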

    On the Parallel Implementation of the Lehman Factoring Algorithm

    Get PDF
    Abstract not provided

    Statistical Modeling: Regression, Survival Analysis, and Time Series Analysis

    Get PDF
    Statistical Modeling provides an introduction to regression, survival analysis, and time series analysis for students who have completed calculus-based courses in probability and mathematical statistics. The book uses the R language to fit statistical models, conduct Monte Carlo simulation experiments, and generate graphics; a minimal sketch of the simplest such fit follows the table of contents below. Over 300 exercises at the end of the chapters make this an appropriate text for a class in statistical modeling.
    Part I: Regression
    Chapter 1: Simple Linear Regression
    Chapter 2: Inference in Simple Linear Regression
    Chapter 3: Topics in Regression
    Part II: Survival Analysis
    Chapter 4: Probability Models in Survival Analysis
    Chapter 5: Statistical Methods in Survival Analysis
    Chapter 6: Topics in Survival Analysis
    Part III: Time Series Analysis
    Chapter 7: Basic Methods in Time Series Analysis
    Chapter 8: Modeling in Time Series Analysis
    Chapter 9: Topics in Time Series Analysis
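    As an illustration of the Chapter 1 material (a sketch in Python for uniformity with the other examples on this page; the book itself uses R), simple linear regression fits a line by least squares:

        # Least-squares fit of y = beta0 + beta1 * x.
        def fit_simple_linear_regression(x, y):
            n = len(x)
            x_bar, y_bar = sum(x) / n, sum(y) / n
            sxx = sum((xi - x_bar) ** 2 for xi in x)
            sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
            beta1 = sxy / sxx              # slope
            beta0 = y_bar - beta1 * x_bar  # intercept
            return beta0, beta1

        # Hypothetical data generated as y ~= 2 + 3x plus noise
        x = [1, 2, 3, 4, 5]
        y = [5.1, 7.9, 11.2, 13.8, 17.1]
        print(fit_simple_linear_regression(x, y))  # roughly (2.05, 2.99)

    In R the same fit is a one-liner, lm(y ~ x); the sketch just spells out the closed-form estimates.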

    Knots and their related q-series

    Full text link
    We discuss a matrix of periodic holomorphic functions on the upper and lower half-planes, obtained from a factorization of an Andersen-Kashaev state integral of a knot complement, with remarkable analytic and asymptotic properties; it defines a PSL_2(Z)-cocycle on the space of matrix-valued piecewise analytic functions on the real numbers. We identify the corresponding cocycle with the one coming from the Kashaev invariant of a knot (and its matrix-valued extension) via the refined quantum modularity conjecture of [GZ:kashaev], and we also relate the matrix-valued invariant to the 3D-index of Dimofte-Gaiotto-Gukov. The cocycle also has an analytic extendability property that leads to the notion of a matrix-valued holomorphic quantum modular form. This is a tale of several independent discoveries, both empirical and theoretical, all illustrated by the three simplest hyperbolic knots.
    Comment: 41 pages, 6 figures

    Richard Dedekind and the Creation of an Ideal: Early Developments in Ring Theory

    Get PDF

    Evaluating Feynman Integrals Using D-modules and Tropical Geometry

    Full text link
    Feynman integrals play a central role in the modern scattering amplitudes research program. Advancing our methods for evaluating Feynman integrals will, therefore, strengthen our ability to compare theoretical predictions with data from particle accelerators such as the Large Hadron Collider. Motivated by this, the present manuscript studies mathematical concepts related to Feynman integrals. In particular, we present both numerical and analytical algorithms for the evaluation of Feynman integrals. The content is divided into three parts. Part I focuses on the method of differential equations (DEQs) for evaluating Feynman integrals. An otherwise daunting integral expression is thereby traded for the comparatively simpler task of solving a system of DEQs. We use this technique to evaluate a family of two-loop Feynman integrals of relevance for dark matter detection. Part II situates the study of DEQs for Feynman integrals within the framework of D-modules, a natural language for studying PDEs algebraically. Special emphasis is put on a particular D-module called the GKZ system, a set of higher-order PDEs that annihilate a generalized version of a Feynman integral. In the course of matching the generalized integral to a Feynman integral proper, we discover an algorithm for evaluating the latter in terms of logarithmic series. Part III develops a numerical integration algorithm. It combines Monte Carlo sampling with tropical geometry, a particular offspring of algebraic geometry that studies "piecewise-linear" polynomials. Feynman's iε-prescription is incorporated into the algorithm via contour deformation. We present an open-source program named Feyntrop that implements this algorithm, and use it to numerically evaluate Feynman integrals with 1-5 loops and 0-5 legs in physical regions of phase space.
    Comment: Ph.D. thesis. Defended on the 11th of December 202
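    To illustrate the tropical idea in Part III (a toy sketch of the general principle, not Feyntrop's actual implementation), a polynomial with positive coefficients is approximated on the positive orthant by the maximum of its monomials, and the approximation is tight up to a factor of the number of monomials, which is what makes it usable as a cheap sampling proposal:

        import random

        # p(x, y) = 1 + 2*x*y + 3*x**2*y, stored as (coeff, x-exponent, y-exponent)
        monomials = [(1.0, 0, 0), (2.0, 1, 1), (3.0, 2, 1)]

        def p(x, y):
            return sum(c * x**i * y**j for c, i, j in monomials)

        def p_tr(x, y):
            # tropical proxy: the sum of monomials replaced by their max
            return max(c * x**i * y**j for c, i, j in monomials)

        N = len(monomials)
        for _ in range(5):
            x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
            ratio = p(x, y) / p_tr(x, y)
            assert 1.0 <= ratio <= N  # p_tr <= p <= N * p_tr on the positive orthant
            print(f"p / p_tr = {ratio:.3f}")

    The bound holds because the largest monomial is one of the N positive terms in the sum; sampling proportionally to the tropical proxy and reweighting by p / p_tr then yields a Monte Carlo estimator with bounded weights.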