
    Renormalization scale uncertainty in the DIS 2+1 jet cross-section

    The deep inelastic scattering 2+1 jet cross-section is a useful observable for precision tests of QCD, e.g. for measuring the strong coupling constant $\alpha_s$. A consistent analysis requires a good understanding of the theoretical uncertainties, and one of the most fundamental ones in QCD is due to the renormalization scheme and scale ambiguity. Different methods that have been proposed to resolve the scale ambiguity are applied to the 2+1 jet cross-section and the resulting uncertainty is estimated. It is shown that the uncertainty can be made smaller by choosing the jet definition in a suitable way.
    Comment: 24 pages, uuencoded compressed tar file, DESY 94-082, TSL-ISV-94-009
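
    As a reminder of where the ambiguity enters (our gloss, not part of the abstract): for a quantity such as the 2+1 jet cross-section, whose leading order is $O(\alpha_s)$, a truncated NLO prediction retains an explicit logarithm of the renormalization scale $\mu$,

        \sigma(\mu) = A\,\alpha_s(\mu)
          + \left[ B + A\, b_0 \ln\frac{\mu^2}{Q^2} \right] \alpha_s^2(\mu)
          + O(\alpha_s^3),
        \qquad b_0 = \frac{33 - 2 n_f}{12\pi},

    so that $d\sigma/d\ln\mu^2 = O(\alpha_s^3)$: the scale dependence cancels only up to the order computed, and the residual variation under changes of $\mu$ is a conventional estimate of the truncation uncertainty.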

    Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators

    In this paper we introduce a generalized Sobolev space by defining a semi-inner product formulated in terms of a vector distributional operator $\mathbf{P}$ consisting of finitely or countably many distributional operators $P_n$, which are defined on the dual space of the Schwartz space. The types of operators we consider include not only differential operators, but also more general distributional operators such as pseudo-differential operators. We deduce that a certain appropriate full-space Green function $G$ with respect to $L := \mathbf{P}^{\ast T}\mathbf{P}$ now becomes a conditionally positive definite function. In order to support this claim we ensure that the distributional adjoint operator $\mathbf{P}^{\ast}$ of $\mathbf{P}$ is well-defined in the distributional sense. Under sufficient conditions, the native space (reproducing-kernel Hilbert space) associated with the Green function $G$ can be isometrically embedded into, or even be isometrically equivalent to, a generalized Sobolev space. As an application, we take linear combinations of translates of the Green function, with possibly added polynomial terms, and construct a multivariate minimum-norm interpolant $s_{f,X}$ to data values sampled from an unknown generalized Sobolev function $f$ at data sites located in some set $X \subset \mathbb{R}^d$. We provide several examples, such as Mat\'ern kernels or Gaussian kernels, that illustrate how many reproducing-kernel Hilbert spaces of well-known reproducing kernels are isometrically equivalent to a generalized Sobolev space. These examples further illustrate how we can rescale the Sobolev spaces by the vector distributional operator $\mathbf{P}$. Introducing the notion of scale as part of the definition of a generalized Sobolev space may help us to choose the "best" kernel function for kernel-based approximation methods.
    Comment: Updated version of the paper published in Numer. Math., close to Qi Ye's Ph.D. thesis (\url{http://mypages.iit.edu/~qye3/PhdThesis-2012-AMS-QiYe-IIT.pdf})
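
    A concrete instance of this correspondence (a sketch consistent with the framework above, with normalizing constants suppressed): for $L = (\varepsilon^2 I - \Delta)^m$ with $m > d/2$, the full-space Green function is the Sobolev-spline (Mat\'ern) kernel

        G(x) \;\propto\; \left( \varepsilon \|x\|_2 \right)^{m - d/2}
                         K_{m - d/2}\!\left( \varepsilon \|x\|_2 \right),

    where $K_\nu$ is the modified Bessel function of the second kind; its native space is norm-equivalent to the classical Sobolev space $H^m(\mathbb{R}^d)$, and the minimum-norm interpolant takes the form $s_{f,X}(x) = \sum_{j=1}^{N} c_j\, G(x - x_j)$ with the coefficients $c_j$ fixed by the interpolation conditions $s_{f,X}(x_j) = f(x_j)$.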

    The Surprising Transparency of the sQGP at LHC

    We present parameter-free predictions of the nuclear modification factor, $R^\pi_{AA}(p_T,\sqrt{s})$, of high-$p_T$ pions produced in Pb+Pb collisions at $\sqrt{s_{NN}} = 2.76$ and 5.5 ATeV based on the WHDG/DGLV (radiative+elastic+geometric fluctuation) jet energy loss model. The initial quark gluon plasma (QGP) density at LHC is constrained from a rigorous statistical analysis of PHENIX/RHIC $\pi^0$ quenching data at $\sqrt{s_{NN}} = 0.2$ ATeV and the charged particle multiplicity at ALICE/LHC at 2.76 ATeV. Our perturbative QCD tomographic theory predicts significant differences between jet quenching at RHIC and LHC energies, which are qualitatively consistent with the $p_T$-dependence and normalization (within the large systematic uncertainty) of the first charged hadron nuclear modification factor, $R^{ch}_{AA}$, data measured by ALICE. However, our constrained prediction of the central to peripheral pion modification, $R^\pi_{cp}(p_T)$, for which large systematic uncertainties associated with unmeasured p+p reference data cancel, is found to be over-quenched relative to the charged hadron ALICE $R^{ch}_{cp}$ data in the range $5 < p_T < 20$ GeV/c. The discrepancy challenges the two most basic jet tomographic assumptions: (1) that the energy loss scales linearly with the initial local comoving QGP density, $\rho_0$, and (2) that $\rho_0 \propto dN^{ch}(s,C)/dy$ is proportional to the observed global charged particle multiplicity per unit rapidity as a function of $\sqrt{s}$ and centrality class, $C$. Future LHC identified ($h = \pi, K, p$) hadron $R^h_{AA}$ data (together with precise p+p, p+Pb, and Z boson and direct photon Pb+Pb control data) are needed to assess if the QGP produced at LHC is indeed less opaque to jets than predicted by constrained extrapolations from RHIC.
    Comment: 13 pages, 8 figures
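
    For orientation (standard definitions, not specific to this paper): the nuclear modification factors compare A+A spectra to binary-collision-scaled references,

        R_{AA}(p_T) = \frac{dN^{AA}/dp_T}{\langle N_{\rm coll} \rangle \, dN^{pp}/dp_T},
        \qquad
        R_{cp}(p_T) = \frac{\left. dN^{AA}/dp_T \,/\, \langle N_{\rm coll} \rangle \right|_{\rm central}}
                           {\left. dN^{AA}/dp_T \,/\, \langle N_{\rm coll} \rangle \right|_{\rm peripheral}},

    so $R_{cp}$ trades the p+p reference (and its systematics) for a peripheral A+A baseline, which is why it is the more constraining observable here.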

    The Fluid Nature of Quark-Gluon Plasma

    Collisions of heavy nuclei at very high energies offer the exciting possibility of experimentally exploring the phase transformation from hadronic to partonic degrees of freedom, which is predicted to occur at several times normal nuclear density and/or for temperatures in excess of $\sim 170$ MeV. Such a state, often referred to as a quark-gluon plasma, is thought to have been the dominant form of matter in the universe in the first few microseconds after the Big Bang. Data from the first five years of heavy ion collisions at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) clearly demonstrate that these very high temperatures and densities have been achieved. While there are strong suggestions of the role of quark degrees of freedom in determining the final-state distributions of the produced matter, there is also compelling evidence that the matter does {\em not} behave as a quasi-ideal state of free quarks and gluons. Rather, its behavior is that of a dense fluid with very low kinematic viscosity exhibiting strong hydrodynamic flow and nearly complete absorption of high momentum probes. The current status of the RHIC experimental studies is presented, with a special emphasis on the fluid properties of the created matter, which may in fact be the most perfect fluid ever studied in the laboratory.
    Comment: 12 pages, 5 figures; to appear in Proceedings of the 2007 International Conference on Nuclear Physics; version posted as submitted on 27-Sep-0

    Concentration analysis and cocompactness

    Loss of compactness that occurs in many significant PDE settings can be expressed in a well-structured form of profile decomposition for sequences. Profile decompositions are formulated in relation to a triplet $(X,Y,D)$, where $X$ and $Y$ are Banach spaces, $X \hookrightarrow Y$, and $D$ is, typically, a set of surjective isometries on both $X$ and $Y$. A profile decomposition is a representation of a bounded sequence in $X$ as a sum of elementary concentrations of the form $g_k w$, $g_k \in D$, $w \in X$, and a remainder that vanishes in $Y$. A necessary requirement for $Y$ is, therefore, that any sequence in $X$ that develops no $D$-concentrations has a subsequence convergent in the norm of $Y$. An imbedding $X \hookrightarrow Y$ with this property is called $D$-cocompact, a property weaker than, but related to, compactness. We survey known cocompact imbeddings and their role in profile decompositions.
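
    Schematically (our paraphrase of the definition above), a profile decomposition of a bounded sequence $(u_k) \subset X$ reads

        u_k = \sum_{n} g_k^{(n)} w^{(n)} + r_k,
        \qquad g_k^{(n)} \in D, \quad w^{(n)} \in X, \quad \|r_k\|_Y \to 0.

    A standard example of cocompactness (not necessarily one treated in the survey) is the imbedding $H^1(\mathbb{R}^d) \hookrightarrow L^p(\mathbb{R}^d)$, $2 < p < 2^*$, which is not compact but is cocompact relative to the group $D$ of translations $u \mapsto u(\cdot - y)$, $y \in \mathbb{R}^d$.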

    Drag and jet quenching of heavy quarks in a strongly coupled N=2* plasma

    The drag of a heavy quark and the jet quenching parameter are studied in the strongly coupled N=2* plasma using the AdS/CFT correspondence. Both increase, in units of the spatial string tension, as the theory departs from conformal invariance. The description of heavy quark dynamics using a Langevin equation is also considered. It is found that the difference between the velocity-dependent factors of the transverse and longitudinal momentum broadening of the quark admits an interpretation in terms of relativistic effects, so that the distribution is spherical in the quark rest frame. When conformal invariance is broken there is a broadening of the longitudinal momentum distribution. This effect may be useful in understanding the jet distributions observed in experiments.
    Comment: 30 pages, 5 figures, references added, minor corrections. To be published in JHEP
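
    The generic form of such a description (a sketch of the standard relativistic Langevin setup, not the paper's precise equations): the heavy-quark momentum evolves as

        \frac{dp_i}{dt} = -\eta_D(p)\, p_i + \xi_i(t),
        \qquad
        \langle \xi_i(t)\, \xi_j(t') \rangle
          = \left[ \kappa_T \left( \delta_{ij} - \hat{p}_i \hat{p}_j \right)
                 + \kappa_L\, \hat{p}_i \hat{p}_j \right] \delta(t - t'),

    where $\eta_D$ is the drag coefficient and $\kappa_T$, $\kappa_L$ are the transverse and longitudinal momentum-broadening coefficients whose velocity dependence is compared in the text.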

    A Reaction Plane Detector for PHENIX at RHIC

    A plastic scintillator paddle detector with embedded fiber light guides and photomultiplier tube readout, referred to as the Reaction Plane Detector (RXNP), was designed and installed in the PHENIX experiment prior to the 2007 run of the Relativistic Heavy Ion Collider (RHIC). The RXNP's design is optimized to accurately measure the reaction plane (RP) angle of heavy-ion collisions; for mid-central $\sqrt{s_{NN}} = 200$ GeV Au+Au collisions it achieved a $2^{\rm nd}$ harmonic RP resolution of $\sim$0.75, which is a factor of $\sim$2 greater than PHENIX's previous capabilities. This improvement was accomplished by locating the RXNP in the central region of the PHENIX experiment, where, due to its large coverage in pseudorapidity ($1.0 < |\eta| < 2.8$) and $\phi$ ($2\pi$), it is exposed to the high particle multiplicities needed for an accurate RP measurement. To enhance the observed signal, a 2-cm Pb converter is located between the nominal collision region and the scintillator paddles, allowing neutral particles produced in the heavy-ion collisions to contribute to the signal through conversion electrons. This paper discusses the design, operation and performance of the RXNP during the 2007 RHIC run.
    Comment: 28 authors from 10 institutions, 24 pages, 16 figures and 3 tables. Published in Nuclear Instruments and Methods in Physics Research Section
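
    For reference (the standard event-plane method, summarized here rather than quoted from the paper): the $n$-th harmonic event-plane angle is estimated from the azimuthal hit distribution as

        \Psi_n = \frac{1}{n} \arctan\!\left(
            \frac{\sum_i w_i \sin(n\phi_i)}{\sum_i w_i \cos(n\phi_i)} \right),

    and the quoted resolution is the correction factor $\langle \cos[n(\Psi_n - \Psi_{\rm RP})] \rangle$, conventionally estimated by correlating the event-plane angles of independent sub-events. Larger multiplicity within the detector acceptance directly improves this factor, which is why the RXNP's $2\pi$ azimuthal coverage matters.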

    An Effective-Medium Tight-Binding Model for Silicon

    A new method for calculating the total energy of Si systems is presented. The method is based on the effective-medium theory concept of a reference system. Instead of calculating the energy of an atom in the system of interest, a reference system is introduced where the local surroundings are similar. The energy of the reference system can be calculated self-consistently once and for all, while the energy difference to the reference system can be obtained approximately. We propose to calculate it using the tight-binding LMTO scheme with the Atomic-Sphere Approximation (ASA) for the potential; by using the ASA with charge-conserving spheres we are able to treat open systems without introducing empty spheres. All steps in the calculational method are {\em ab initio} in the sense that all quantities entering are calculated from first principles without any fitting to experiment. A complete and detailed description of the method is given together with test calculations of the energies of phonons, elastic constants, different structures, surfaces and surface reconstructions. We compare the results to calculations using an empirical tight-binding scheme.
    Comment: 26 pages (11 uuencoded Postscript figures appended), LaTeX, CAMP-090594-
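
    Schematically (our condensed reading of the construction; the details are as in the text), the total energy splits as

        E_{\rm tot} \approx \sum_i E_{\rm ref}(n_i)
          + \left( \sum_{\alpha}^{\rm occ} \varepsilon_\alpha
                 - \sum_{\alpha}^{\rm occ} \varepsilon_\alpha^{\rm ref} \right),

    where $E_{\rm ref}(n_i)$ is the self-consistently precomputed energy of atom $i$ in a reference system with similar local surroundings (characterized by an embedding density $n_i$), and the bracketed one-electron energy difference between the real and reference systems is evaluated approximately with the tight-binding LMTO-ASA scheme.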

    On the Generation of Positivstellensatz Witnesses in Degenerate Cases

    One can reduce the problem of proving that a polynomial is nonnegative, or more generally that a system of polynomial inequalities has no solutions, to finding polynomials that are sums of squares of polynomials and satisfy some linear equality (Positivstellensatz). This produces a witness for the desired property, from which it is reasonably easy to obtain a formal proof of the property suitable for a proof assistant such as Coq. The problem of finding a witness reduces to a feasibility problem in semidefinite programming, for which there exist numerical solvers. Unfortunately, this problem is in general not strictly feasible, meaning the set of solutions can be a convex set with empty interior, in which case numerical optimization methods fail. Previously published methods thus assumed strict feasibility; we propose a workaround for this difficulty. We implemented our method and illustrate its use with examples, including extraction of proofs to Coq.
    Comment: To appear in ITP 201
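
    A toy instance of the degeneracy at issue (ours, not drawn from the paper): certifying $p(x) = x^2 \ge 0$ over the monomial vector $z = (1, x)^T$ means finding a positive semidefinite Gram matrix $Q$ with

        p(x) = z^T Q z, \qquad \text{which forces} \quad
        Q = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},

    since matching coefficients gives $Q_{11} = 0$, $Q_{12} = 0$, $Q_{22} = 1$. The feasible set is a single singular matrix, so the semidefinite program has no interior point and interior-point solvers can fail; this is exactly the strict-feasibility obstruction the proposed workaround addresses.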