
    Why the xE distribution triggered by a leading particle does not measure the fragmentation function but does measure the ratio of the transverse momenta of the away-side jet to the trigger-side jet

    Hard-scattering of point-like constituents (or partons) in p-p collisions was discovered at the CERN-ISR in 1972 by measurements utilizing inclusive single or pairs of hadrons with large transverse momentum ($p_T$). It was generally assumed, following Feynman, Field and Fox, and supported by data from the CERN-ISR experiments, that the $p_{T_a}$ distribution of away-side hadrons from a single-particle trigger [with $p_{T_t}$], corrected for fragmentation, would be the same as that from a jet trigger and follow the same fragmentation function as observed in $e^+e^-$ or DIS. PHENIX attempted to measure the fragmentation function from the away-side $x_E \sim p_{T_a}/p_{T_t}$ distribution of charged particles triggered by a $\pi^0$ in p-p collisions at RHIC and showed by explicit calculation that the $x_E$ distribution is actually quite insensitive to the fragmentation function. Illustrations of the original arguments and ISR results will be presented. Then the lack of sensitivity to the fragmentation function will be explained, and an analytic formula for the $x_E$ distribution given, in terms of incomplete Gamma functions, for the case where the fragmentation function is exponential. The away-side distribution in this formulation has the nice property that it both exhibits $x_E$ scaling and is directly sensitive to the ratio of the away-jet $\hat{p}_{T_a}$ to that of the trigger jet, $\hat{p}_{T_t}$, and thus can be used, for example, to measure the relative energy loss of the two jets from a hard-scattering which escape from the medium in A+A collisions. Comparisons of the analytical formula to RHIC measurements will be presented, including data from STAR and PHENIX, leading to some interesting conclusions.

    Comment: 6 pages, 5 figures, Proceedings of Poster Session, 19th International Conference on Ultra-Relativistic Nucleus-Nucleus Collisions (Quark Matter 2006), November 14-20, 2006, Shanghai, P. R. China
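    For orientation, a sketch of the shape this abstract describes (the exact incomplete-Gamma expression is in the proceedings; the large-$x_E$ power-law limit and the normalization written here are an assumption, quoted from memory of this line of work, not taken from the text):

    $$ \left.\frac{dP}{dx_E}\right|_{p_{T_t}} \approx \langle m\rangle\,(n-1)\,\frac{1}{\hat{x}_h}\,\frac{1}{\left(1+x_E/\hat{x}_h\right)^{n}}, \qquad \hat{x}_h \equiv \frac{\hat{p}_{T_a}}{\hat{p}_{T_t}}, $$

    where $n$ is the power of the invariant single-particle spectrum and $\langle m\rangle$ the mean away-side multiplicity. In this form the fragmentation slope has dropped out: the shape in $x_E$ is controlled by $\hat{x}_h$, the ratio of away-jet to trigger-jet transverse momenta, which is the point made in the title.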

    From the ISR to RHIC--measurements of hard-scattering and jets using inclusive single particle production and 2-particle correlations

    Hard scattering in p-p collisions, discovered at the CERN ISR in 1972 by the method of leading particles, proved that the partons of Deeply Inelastic Scattering strongly interacted with each other. Further ISR measurements utilizing inclusive single or pairs of hadrons established that high-$p_T$ particles are produced from states with two roughly back-to-back jets which are the result of scattering of constituents of the nucleons as described by Quantum Chromodynamics (QCD), which was developed during the course of these measurements. These techniques, which are the only practical method to study hard-scattering and jet phenomena in Au+Au central collisions at RHIC energies, are reviewed as an introduction to present RHIC measurements.

    Comment: To appear in the proceedings of the workshop on Correlations and Fluctuations in Relativistic Nuclear Collisions, MIT, Cambridge, MA, April 21-23, 2005, 10 pages, 9 figures, Journal of Physics: Conference Proceedings

    Results from RHIC with Implications for LHC

    Results from the PHENIX experiment at RHIC in p-p and Au+Au collisions are reviewed from the perspective of measurements in p-p collisions at the CERN-ISR, which serve as a basis for many of the techniques used. Issues such as J/$\psi$ suppression and hydrodynamical flow in A+A collisions require data from LHC-Ions for an improved understanding. Suppression of high-$p_T$ particles in Au+Au collisions, first observed at RHIC, also has unresolved mysteries, such as the equality of the suppression of inclusive $\pi^0$ (from light quarks and gluons) and direct-single electrons (from the decay of heavy quarks) in the transverse momentum range $4 < p_T < 9$ GeV/c. This disfavors a radiative explanation of suppression and leads to a fundamental question of whether the Higgs boson gives mass to fermions. Observation of an exponential distribution of direct photons in central Au+Au collisions for $1 < p_T < 2$ GeV/c, where hard processes are negligible, with no similar exponential distribution in p-p collisions, indicates thermal photon emission from the medium at RHIC, making PHENIX at the moment ``the hottest experiment in Physics''.

    Comment: Invited lectures at the International School of Subnuclear Physics, 47th Course, ``The most unexpected at LHC and the status of High Energy Frontier'', Erice, Sicily, Italy, August 29-September 7, 2009. 32 pages, 22 figures

    A fundamental test of the Higgs Yukawa coupling at RHIC in A+A collisions

    Searches for the intermediate boson, $W^{\pm}$, the heavy quantum of the Weak Interaction, via its semi-leptonic decay, $W \to e + \nu$, in the 1970's instead discovered unexpectedly large hadron production at high $p_T$, notably $\pi^0$, which provided a huge background of $e^{\pm}$ from internal and external conversions. Methods developed at the CERN ISR, which led to the discovery of direct-single-$e^{\pm}$ in 1974, later determined to be from the semi-leptonic decay of charm, which had not yet been discovered, were used by PHENIX at RHIC to make precision measurements of heavy quark production in p-p and Au+Au collisions, leading to the puzzle of apparently equal suppression of light and heavy quarks in the QGP. If the Higgs mechanism gives mass to gauge bosons but not to fermions, then a proposal that all 6 quarks are nearly massless in a QGP, which would resolve the puzzle, cannot be excluded. This proposal can be tested with future measurements of heavy quark correlations in A+A collisions.

    Comment: 12 pages, 16 figures, 26th Winter Workshop on Nuclear Dynamics, Ocho Rios, Jamaica WI, January 2-9, 2010. Corrected citation of 1974 direct single lepton discovery

    A compilation, classification, and comparison of lists of spontaneous speaking vocabulary of children in kindergarten, Grade I, Grade II, and Grade III

    Thesis (Ed.M.)--Boston University. Language development has been studied for many years. The beginning vocabularies are easy to count and record. As the child grows and moves about, his speaking vocabulary increases very rapidly. Some estimates suggest that a minimum speaking vocabulary at six years would include three thousand words. New words have come into children's speaking vocabularies as a result of modern technology since World War II. Lists of spontaneous vocabulary furnish material for teachers and textbook writers. The purpose of this study is to analyze two lists recorded in 1954 and 1955. An attempt has been made to classify the new list. The lists were compared with three existing lists: Rinsland, International Kindergarten Union, and Gates.

    100 Years of Training and Development Research: What We Know and Where We Should Go

    Training and development research has a long tradition within applied psychology dating back to the early 1900s. Over the years, not only has interest in the topic grown but there have been dramatic changes in both the science and practice of training and development. In the current article, we examine the evolution of training and development research using articles published in the Journal of Applied Psychology (JAP) as a primary lens to analyze what we have learned and to identify where future research is needed. We begin by reviewing the timeline of training and development research in JAP from 1918 to the present in order to elucidate the critical trends and advances that define each decade. These trends include the emergence of more theory-driven training research, greater consideration of the role of the trainee and training context, examination of learning that occurs outside the classroom, and understanding training's impact across different levels of analysis. We then examine in greater detail the evolution of four key research themes: training criteria, trainee characteristics, training design and delivery, and the training context. In each area, we describe how the focus of research has shifted over time and highlight important developments. We conclude by offering several ideas for future training and development research.

    Transverse momentum fluctuations and percolation of strings

    The behaviour of the transverse momentum fluctuations with the centrality of the collision shown by the Relativistic Heavy Ion Collider data is naturally explained by the clustering of color sources. In this framework, elementary color sources --strings-- overlap, forming clusters, so the number of effective sources is modified. These clusters decay into particles with a mean transverse momentum that depends on the number of elementary sources that form each cluster and on the area occupied by the cluster. The transverse momentum fluctuations in this approach correspond to the fluctuations of the transverse momentum of these clusters, and they behave essentially as the number of effective sources.

    Comment: 16 pages, RevTex, 4 postscript figures. Enhanced version. New figures
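    The geometric core of the clustering idea can be sketched with a toy Monte Carlo: scatter string "disks" in the transverse overlap area and count clusters of mutually overlapping strings. This is an illustration only; the disk geometry, string radius, and densities below are assumptions for the sketch, not parameters of the paper's model.

```python
import math
import random

def count_clusters(n_strings, disk_radius=1.0, string_radius=0.1, seed=0):
    """Scatter n_strings circular 'strings' uniformly in a disk and count
    clusters of overlapping strings (two strings overlap when their centers
    are closer than twice the string radius)."""
    rng = random.Random(seed)
    centers = []
    for _ in range(n_strings):
        # uniform point in a disk: r ~ sqrt(u) gives uniform area density
        r = disk_radius * math.sqrt(rng.random())
        phi = 2 * math.pi * rng.random()
        centers.append((r * math.cos(phi), r * math.sin(phi)))

    # union-find: merge every overlapping pair into one cluster
    parent = list(range(n_strings))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n_strings):
        for j in range(i + 1, n_strings):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy < (2 * string_radius) ** 2:
                parent[find(i)] = find(j)

    # number of effective sources = number of distinct cluster roots
    return len({find(i) for i in range(n_strings)})

few = count_clusters(20)    # dilute: most strings stay isolated
many = count_clusters(400)  # dense: heavy overlap, clusters merge
```

    At low density the number of effective sources stays close to the number of strings, while at high density (central collisions) clusters merge and the effective number drops sharply, which is the mechanism the abstract invokes for the centrality dependence of the fluctuations.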

    Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces

    The \emph{Chow parameters} of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$ are its $n+1$ degree-0 and degree-1 Fourier coefficients. It has been known since 1961 (Chow, Tannenbaum) that the (exact values of the) Chow parameters of any linear threshold function $f$ uniquely specify $f$ within the space of all Boolean functions, but until recently (O'Donnell and Servedio) nothing was known about efficient algorithms for \emph{reconstructing} $f$ (exactly or approximately) from exact or approximate values of its Chow parameters. We refer to this reconstruction problem as the \emph{Chow Parameters Problem}. Our main result is a new algorithm for the Chow Parameters Problem which, given (sufficiently accurate approximations to) the Chow parameters of any linear threshold function $f$, runs in time $\tilde{O}(n^2) \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}$ and with high probability outputs a representation of an LTF $f'$ that is $\epsilon$-close to $f$. The only previous algorithm (O'Donnell and Servedio) had running time $\mathrm{poly}(n) \cdot 2^{2^{\tilde{O}(1/\epsilon^2)}}$. As a byproduct of our approach, we show that for any linear threshold function $f$ over $\{-1,1\}^n$, there is a linear threshold function $f'$ which is $\epsilon$-close to $f$ and has all weights that are integers at most $\sqrt{n} \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}$. This significantly improves the best previous result of Diakonikolas and Servedio, which gave a $\mathrm{poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2/3})}$ weight bound, and is close to the known lower bound of $\max\{\sqrt{n},\, (1/\epsilon)^{\Omega(\log \log (1/\epsilon))}\}$ (Goldberg, Servedio). Our techniques also yield improved algorithms for related problems in learning theory.
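    The objects in this abstract are easy to state concretely: the Chow parameters of $f$ are the averages $\mathbf{E}[f(x)]$ and $\mathbf{E}[f(x)\,x_i]$ over uniform $x \in \{-1,1\}^n$. A brute-force sketch (exponential in $n$, for illustration only; the majority-function example is our choice, not from the text, and this is the definition, not the paper's reconstruction algorithm):

```python
from itertools import product

def chow_parameters(f, n):
    """Return the n+1 Chow parameters of f: {-1,1}^n -> {-1,1}:
    the degree-0 Fourier coefficient E[f(x)] followed by the n
    degree-1 coefficients E[f(x) * x_i], averaged over all 2^n inputs."""
    totals = [0.0] * (n + 1)
    for x in product((-1, 1), repeat=n):
        fx = f(x)
        totals[0] += fx                 # degree-0 coefficient
        for i in range(n):
            totals[i + 1] += fx * x[i]  # degree-1 coefficients
    return [t / 2 ** n for t in totals]

# Majority on 3 bits is a linear threshold function, so by the 1961
# theorem its Chow parameters pin it down among all Boolean functions.
maj3 = lambda x: 1 if sum(x) > 0 else -1
params = chow_parameters(maj3, 3)  # [0.0, 0.5, 0.5, 0.5]
```

    The Chow Parameters Problem is the inverse direction: given (approximations to) a vector like `params`, efficiently recover a threshold representation of the LTF that produced it.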