
    Analytical QCD and multiparticle production

    We review the perturbative approach to multiparticle production in hard collision processes and investigate to what extent parton-level analytical calculations with a low momentum cut-off can reproduce experimental data on the hadronic final state. Systematic results are available for various observables at next-to-leading logarithmic accuracy (the so-called modified leading logarithmic approximation, MLLA). We introduce the analytical formalism and then discuss recent applications concerning multiplicities, inclusive spectra, correlations, and angular flows in multi-jet events. In various cases the perturbative picture is surprisingly successful, even for very soft particle production.
    Comment: 97 pages, LaTeX, 22 figures, uses sprocl.sty (included)
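    For orientation, a textbook result behind this program (standard background, not quoted from the abstract): in the double-logarithmic approximation the mean parton multiplicity grows with the hard scale $Q$ as

        \langle n(Q) \rangle \propto \exp\sqrt{\frac{16 N_c}{\beta_0}\,\ln\frac{Q}{\Lambda_{\rm QCD}}}\,,
        \qquad \beta_0 = \frac{11}{3}N_c - \frac{2}{3}n_f\,,

    and the MLLA supplies the next-to-leading logarithmic corrections to this growth and to the shape of the inclusive spectra (the "hump-backed plateau").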

    QCD Constituent Counting Rules for Neutral Vector Mesons

    QCD constituent counting rules define the scaling behavior of exclusive hadronic and electromagnetic scattering amplitudes at high momentum transfer in terms of the total number of fundamental constituents in the initial and final states participating in the hard subprocess. The scaling laws reflect the twist of the leading Fock state for each hadron and hence the leading operator that creates the composite state from the vacuum. Thus, the constituent counting scaling laws can be used to identify the twist of exotic hadronic candidates such as tetraquarks and pentaquarks. Effective field theories must implement the scaling rules in order to be consistent with the fundamental theory. Here we examine how one can apply constituent counting rules to the exclusive production of one or two neutral vector mesons $V^0$ in $e^+e^-$ annihilation, processes in which the $V^0$ can couple via intermediate photons. In the case of a (narrow) real $V^0$, the photon virtuality is fixed to the precise value $s_1 = m_{V^0}^2$, in effect treating the $V^0$ as a single fundamental particle. Each real $V^0$ thus contributes to the constituent counting rules with $N_{V^0} = 1$; in effect, the leading operator underlying the $V^0$ has twist 1. Thus, in the specific physical case of single or double on-shell $V^0$ production via intermediate photons, the predicted scaling from counting rules coincides with vector meson dominance (VMD), an effective theory that treats the $V^0$ as an elementary field. However, the VMD prediction fails in the general case where the $V^0$ is not coupled through an elementary photon field, and then the leading-twist interpolating operator has twist $N_{V^0} = 2$. Analogous effects appear in $pp$ scattering processes.
    Comment: 15 pages
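    For reference, the counting rule being applied here reads, in its standard form (our summary, not a verbatim quote from the paper):

        \frac{d\sigma}{dt}(AB \to CD) \sim \frac{f(\theta_{\rm CM})}{s^{\,n-2}}\,,
        \qquad n = n_A + n_B + n_C + n_D\,,

    so for $e^+e^- \to \gamma V^0$ the assignment $N_{V^0} = 1$ gives $n = 1 + 1 + 1 + 1 = 4$ and $d\sigma/dt \sim s^{-2}$, while the twist-2 assignment $N_{V^0} = 2$ gives $n = 5$ and $d\sigma/dt \sim s^{-3}$.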

    QCD Compositeness as Revealed in Exclusive Vector Boson Reactions through Double-Photon Annihilation: $e^+e^- \to \gamma\gamma^\ast \to \gamma V^0$ and $e^+e^- \to \gamma^\ast\gamma^\ast \to V^0 V^0$

    We study the exclusive double-photon annihilation processes $e^+e^- \to \gamma\gamma^\ast \to \gamma V^0$ and $e^+e^- \to \gamma^\ast\gamma^\ast \to V^0_a V^0_b$, where $V^0_i$ is a neutral vector meson produced in the forward kinematical region $s \gg -t$ and $-t \gg \Lambda_{\rm QCD}^2$. We show how the differential cross sections $d\sigma/dt$, as predicted by QCD, have additional falloff in the momentum transfer squared $t$ due to the QCD compositeness of the hadrons, consistent with the leading-twist fixed-$\theta_{\rm CM}$ scaling laws. However, even though these are exclusive channels not associated with the conventional electron-positron annihilation process $e^+e^- \to \gamma^\ast \to q\bar{q}$, the total cross sections $\sigma(e^+e^- \to \gamma V^0)$ and $\sigma(e^+e^- \to V^0_a V^0_b)$, integrated over the dominant forward- and backward-$\theta_{\rm CM}$ angular domains, scale as $1/s$ and thus contribute to the leading-twist scaling behavior of the ratio $R_{e^+e^-}$. We generalize these results to exclusive double-electroweak vector-boson annihilation processes accompanied by the forward production of hadrons, such as $e^+e^- \to Z^0 V^0$ and $e^+e^- \to W^-\rho^+$. These results can also be applied to the exclusive production of exotic hadrons such as tetraquarks, where the cross-section scaling behavior can reveal their multiquark nature.
    Comment: 10 pages
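    To spell out the final step (using only the standard definition of $R_{e^+e^-}$, not material from the abstract): since

        R_{e^+e^-} \equiv \frac{\sigma(e^+e^- \to {\rm hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)}\,,
        \qquad \sigma(e^+e^- \to \mu^+\mu^-) = \frac{4\pi\alpha^2}{3s}\,,

    any exclusive channel whose total cross section falls as $1/s$ contributes a constant, i.e. leading-twist, piece to $R_{e^+e^-}$.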

    Models of neutron star atmospheres enriched with nuclear burning ashes

    Low-mass X-ray binaries hosting neutron stars (NS) exhibit thermonuclear (type-I) X-ray bursts, which are powered by unstable nuclear burning of helium and/or hydrogen into heavier elements deep in the NS "ocean". In some cases the burning ashes may rise from the burning depths up to the NS photosphere by convection, leading to the appearance of metal absorption edges in the spectra, which force the emergent X-ray burst spectra to shift toward lower energies. These effects may have a substantial impact on the color correction factor $f_c$ and the dilution factor $w$, the parameters of the diluted blackbody model $F_E \approx w B_E(f_c T_{\rm eff})$ that is commonly used to describe the emergent spectra of NSs. The aim of this paper is to quantify how much metal enrichment can change these factors. We have developed a new NS atmosphere modeling code with several important improvements over our previous code, required by the inclusion of metals. The opacities and the internal partition functions (used in the ionization fraction calculations) are now taken into account for all atomic species. In addition, the code is now parallelized to counter the increased computational load. We compute a detailed grid of atmosphere models with different exotic chemical compositions that mimic the presence of burning ashes. From the emergent model spectra we compute the color correction factors $f_c$ and the dilution factors $w$, which can then be compared to observations. We find that the metals may change $f_c$ by up to about 40%, which is enough to explain the scatter seen in blackbody radius measurements. The presented models open up the possibility of determining NS masses and radii more accurately, and may also act as a tool to probe the nuclear burning mechanisms of X-ray bursts.
    Comment: 14 pages, 7 figures, to be published in A&A
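    A minimal numerical sketch of the diluted-blackbody parametrization may clarify the roles of $f_c$ and $w$; the function names and the illustrative choice $w = 1/f_c^4$ (which preserves the bolometric flux of the underlying blackbody) are ours, not taken from the authors' code.

        import numpy as np

        def planck_E(E_keV, kT_keV):
            # Blackbody intensity B_E up to a constant prefactor
            # (only the spectral shape matters for f_c and w).
            return E_keV**3 / np.expm1(E_keV / kT_keV)

        def diluted_blackbody(E_keV, kT_eff_keV, f_c, w):
            # F_E ~ w * B_E(f_c * T_eff): f_c "hardens" the spectrum
            # relative to the effective temperature, w rescales its norm.
            return w * planck_E(E_keV, f_c * kT_eff_keV)

        # Example: f_c = 1.4 with w = 1/f_c**4 keeps the bolometric flux
        # while shifting the spectral peak to higher energies.
        E = np.logspace(-0.5, 1.5, 200)              # 0.3-30 keV grid
        spec = diluted_blackbody(E, kT_eff_keV=2.0, f_c=1.4, w=1.4**-4)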

    Lepton flavor violating decays of vector mesons

    We estimate the rates of the lepton flavor violating decays of vector mesons $\rho, \omega, \phi \to e\mu$. The theoretical tools are based on an effective Lagrangian approach, without referring to any specific realization of the physics beyond the standard model responsible for lepton flavor violation (LFV). The effective lepton-vector meson couplings are extracted from the existing experimental bounds on nuclear $\mu^- - e^-$ conversion. In particular, we derive the upper limit ${\rm Br}(\phi \to e\mu) \leq 1.3 \times 10^{-21}$ on the LFV branching ratio, which is much more stringent than the recent experimental result ${\rm Br}(\phi \to e\mu) < 2 \times 10^{-6}$ presented by the SND Collaboration. The very tiny limits on LFV decays of vector mesons derived in this letter make direct experimental observation of these processes unrealistic.
    Comment: 3 pages, 1 figure, accepted for publication in Phys. Rev.
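    Schematically (our notation, not necessarily the letter's), the effective Lagrangian approach amounts to a flavor-violating meson-lepton vertex of the form

        \mathcal{L}_{\rm eff} \supset g_V^{e\mu}\, V_\mu\, \bar{e}\,\gamma^\mu \mu + {\rm h.c.}\,,

    where the coupling $g_V^{e\mu}$ is bounded using the nuclear $\mu^- - e^-$ conversion data and then translated into ${\rm Br}(V \to e\mu)$.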

    BiofilmQuant: A Computer-Assisted Tool for Dental Biofilm Quantification

    Dental biofilm is the deposition of microbial material on a tooth substratum. Several methods have recently been reported in the literature for biofilm quantification; however, at best they provide a barely automated solution requiring significant input from the human expert. Conversely, state-of-the-art automatic biofilm methods fail to make their way into clinical practice for lack of an effective mechanism to incorporate human input to handle misclassified regions. Manual delineation, the current gold standard, is time-consuming and subject to expert bias. In this paper, we introduce a new semi-automated software tool, BiofilmQuant, for dental biofilm quantification in quantitative light-induced fluorescence (QLF) images. The software uses a robust statistical modeling approach to automatically segment the QLF image into three classes (background, biofilm, and tooth substratum) based on training data. This initial segmentation has shown a high degree of consistency and precision on more than 200 test QLF dental scans. Furthermore, the software gives clinicians full control to fix any misclassified areas with a single click. BiofilmQuant also provides a complete solution for the longitudinal quantitative analysis of biofilm over the full set of teeth, providing greater usability.
    Comment: 4 pages, 4 figures, 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2014)
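    As an illustration of the one-click correction described above, here is a minimal sketch (hypothetical names throughout; not the BiofilmQuant implementation): a click selects the underlying segment and reassigns its class.

        import numpy as np

        # Hypothetical class codes: 0 = background, 1 = biofilm, 2 = tooth.
        def reassign_on_click(segments, labels, click_rc, new_class):
            # `segments` holds a segment id per pixel; a single click at
            # (row, col) flips every pixel of that segment to `new_class`.
            seg_id = segments[click_rc]
            out = labels.copy()
            out[segments == seg_id] = new_class
            return out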

    A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm

    Biofilm is a formation of microbial material on tooth substrata. Several methods to quantify dental biofilm coverage have recently been reported in the literature, but at best they provide a semi-automated approach to quantification, with significant input from a human grader that carries the grader's bias about what constitutes foreground, background, biofilm, and tooth. Additionally, human assessment indices limit the resolution of the quantification scale; most commercial scales use five levels of quantification for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current state-of-the-art techniques in automatic plaque quantification fail to make their way into practical applications owing to their inability to incorporate human input to handle misclassifications. This paper proposes a new interactive method for biofilm quantification in quantitative light-induced fluorescence (QLF) images of canine teeth that is independent of the perceptual bias of the grader. The method partitions a QLF image into segments of uniform texture and intensity called superpixels; every superpixel is statistically modeled as a realization of a single 2D Gaussian Markov random field (GMRF) whose parameters are estimated; the superpixel is then assigned to one of three classes (background, biofilm, tooth substratum) based on the training set of data. The quantification results show a high degree of consistency and precision. At the same time, the proposed method gives pathologists full control to post-process the automatic quantification by flipping misclassified superpixels to a different state (background, tooth, biofilm) with a single click, providing greater usability than simply marking the boundaries of biofilm and tooth as done by current state-of-the-art methods.
    Comment: 10 pages, 7 figures, Journal of Biomedical and Health Informatics, 2014
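    The segment-and-classify step can be sketched as follows. This is a simplified stand-in, with hypothetical names: plain per-superpixel Gaussian intensity statistics replace the paper's 2D GMRF model, and SLIC stands in for whatever superpixel algorithm the authors use.

        import numpy as np
        from skimage.segmentation import slic

        def classify_superpixels(img, class_stats, n_segments=400):
            # Partition a grayscale image into superpixels, then label each
            # one by the class whose Gaussian (mean, var) best explains its
            # mean intensity. `class_stats` maps a class name, e.g.
            # "background" / "biofilm" / "tooth", to (mean, var) estimated
            # from training data.
            segments = slic(img, n_segments=n_segments, compactness=10,
                            channel_axis=None)       # grayscale input
            labels = {}
            for s in np.unique(segments):
                x = img[segments == s].mean()
                # Gaussian log-likelihood up to an additive constant.
                ll = {c: -0.5 * ((x - m)**2 / v + np.log(v))
                      for c, (m, v) in class_stats.items()}
                labels[s] = max(ll, key=ll.get)
            return segments, labels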