
    Measurements of inclusive J/psi production in Pb-Pb collisions at sqrt(s_NN) = 2.76 TeV with the ALICE experiment

    Charmonium is a prominent probe of the Quark-Gluon Plasma (QGP), expected to be formed in ultrarelativistic heavy-ion (A-A) collisions. It has been predicted that the J/psi (a c-cbar bound state) dissolves in the deconfined medium created in A-A systems. However, this suppression can be counterbalanced by regeneration of the charm/anti-charm bound state in the QGP or by statistical production at the phase boundary. At LHC energies, the latter mechanisms are expected to play a more important role, owing to a charm production cross section significantly larger than at lower energies. Measurements of inclusive J/psi production obtained by the ALICE experiment are shown, using Pb-Pb data at sqrt(s_NN) = 2.76 TeV collected in 2010 and 2011. In particular, the focus is on the nuclear modification factor, R_AA, measured at forward rapidity (2.5 < y < 4) and at mid-rapidity (|y| < 0.9), in both cases down to zero transverse momentum (pT). The centrality, y and pT dependences of R_AA are presented and discussed in the context of theoretical models, together with PHENIX and CMS results.
    Comment: 8 pages, 7 figures. To be published in PoS. Proceedings of the Xth QCHS conference (Quark Confinement and the Hadron Spectrum), 8-12 October 2012, Munich. See the corresponding presentation on the TUM Indico: http://intern.universe-cluster.de/indico/contributionDisplay.py?contribId=246&sessionId=36&confId=229
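
    For reference, the nuclear modification factor is not defined in the abstract; the standard definition used for such measurements is
        R_AA(pT, y) = [d^2 N_AA / dpT dy] / [<T_AA> * d^2 sigma_pp / dpT dy],
    where <T_AA> is the average nuclear overlap function of the centrality class considered, so that R_AA = 1 corresponds to the absence of nuclear effects and R_AA < 1 signals suppression.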

    Monte Carlo approximations of the Neumann problem

    We introduce Monte Carlo methods to compute the solution of elliptic equations with pure Neumann boundary conditions. We first prove that the solution obtained by the stochastic representation has zero mean value with respect to the invariant measure of the stochastic process associated with the equation. Pointwise approximations are computed by means of standard and new simulation schemes devised specifically for approximating the local time on the boundary of the domain. Global approximations are computed using a stochastic spectral formulation that takes into account the zero-mean property of the solution. This stochastic formulation is asymptotically perfect in terms of conditioning. Numerical examples are given for the Laplace operator on a square domain, with both pure Neumann and mixed Dirichlet-Neumann boundary conditions. A more general convection-diffusion equation is also studied numerically.
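
    As a rough illustration of the simulation ingredients mentioned above (this sketch is not taken from the paper; the reflected Euler scheme, the unit-square domain and the use of reflection overshoots as a local-time proxy are all assumptions made for illustration), the Python code below simulates Brownian motion reflected in the unit square and accumulates the overshoots removed by each reflection:

        import numpy as np

        def reflect(x):
            """Fold a coordinate back into [0, 1] by mirror reflection.

            Returns the folded coordinate and the overshoot distance,
            used here as a crude proxy for the boundary local time.
            """
            overshoot = 0.0
            if x < 0.0:
                overshoot, x = -x, -x
            elif x > 1.0:
                overshoot, x = x - 1.0, 2.0 - x
            return x, overshoot

        def reflected_brownian_path(x0, dt=1e-4, n_steps=50_000, rng=None):
            """Euler scheme for Brownian motion reflected in the unit square."""
            rng = np.random.default_rng() if rng is None else rng
            x = np.array(x0, dtype=float)
            local_time = 0.0
            for _ in range(n_steps):
                x = x + np.sqrt(dt) * rng.standard_normal(2)
                for d in range(2):
                    x[d], over = reflect(x[d])
                    local_time += over
            return x, local_time

        if __name__ == "__main__":
            x_final, lt = reflected_brownian_path([0.5, 0.5])
            print("final position:", x_final, "accumulated overshoot:", lt)

    Pointwise estimates of the solution would then be formed by averaging functionals of such paths (and of the local time) over many independent replications; the finer local-time schemes and the zero-mean spectral formulation that are the actual subject of the paper are not attempted here.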

    Production of multi-strange baryons in 7 TeV proton-proton collisions with ALICE

    With a view to comparisons between proton-proton and heavy-ion physics, understanding the production mechanisms (soft and hard) that lead to strange particles in pp collisions is important. Measurements of charged multi-strange (anti-)baryons (Omega and Xi) are presented for pp collisions at sqrt(s) = 7 TeV. This report is based on results obtained by ALICE (A Large Ion Collider Experiment) from the 2010 data-taking. Taking advantage of the characteristic cascade-decay topology, Xi-, anti-Xi+, Omega- and anti-Omega+ can be identified over a wide range of momenta (e.g. from 0.6 to 8.5 GeV/c for Xi- with the statistics analysed so far). The production at central rapidity (|y| < 0.5) is presented as a function of transverse momentum, dN/dpTdy. These results are compared to PYTHIA Perugia 2011 predictions.
    Comment: 6 pages, 3 figures, 1 table. Strangeness in Quark Matter (SQM 2011), 18-24 Sept. 2011, Krakow. To be published in Acta Physica Polonica B (APPB).
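
    For context (these decay channels are standard and are not spelled out in the abstract), the cascade topology refers to the two-step weak decays
        Xi-    -> Lambda + pi-,  followed by  Lambda -> p + pi-
        Omega- -> Lambda + K-,   followed by  Lambda -> p + pi-
    (plus the charge-conjugate channels for the anti-baryons), so that each candidate is reconstructed from three charged daughter tracks sharing this characteristic chained secondary-vertex structure.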

    Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods

    Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly-intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalised posterior distribution over the model parameters, which is rarely available in closed form. For doubly-intractable models, estimating the evidence adds another layer of difficulty. Consequently, selecting the model that best describes an observed network among a collection of exponential random graph models is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to unreasonable inference. This paper specifies a method to adjust pseudolikelihoods in order to obtain a reasonable, yet tractable, approximation to the likelihood. This allows the implementation of widely used computational methods for evidence estimation and makes Bayesian model selection among exponential random graph models practical for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost.
    Comment: Supplementary material attached. To view attachments, please download and extract the gzipped source file listed under "Other formats".
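
    As a rough illustration of the quantity being adjusted (this sketch is not the authors' code; the choice of edge and triangle statistics is arbitrary and made only for illustration), the log-pseudolikelihood of an exponential random graph model treats each dyad as an independent logistic observation whose covariates are the change statistics obtained by toggling that dyad:

        import numpy as np

        def change_stats(y, i, j):
            """Change in (edge count, triangle count) when the dyad (i, j)
            is switched on; y is a symmetric 0/1 adjacency matrix with a
            zero diagonal."""
            common = np.sum(y[i] * y[j])      # common neighbours of i and j
            return np.array([1.0, common])    # edges: +1, triangles: +common

        def log_pseudolikelihood(theta, y):
            """Sum over dyads of Bernoulli-logit log-probabilities with the
            change statistics as covariates (the raw, unadjusted version)."""
            n = y.shape[0]
            lpl = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    eta = theta @ change_stats(y, i, j)   # linear predictor
                    lpl += y[i, j] * eta - np.log1p(np.exp(eta))
            return lpl

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            upper = np.triu(rng.integers(0, 2, size=(8, 8)), k=1)
            y = upper + upper.T               # random symmetric adjacency
            print(log_pseudolikelihood(np.array([-1.0, 0.5]), y))

    Maximising this function gives the maximum pseudolikelihood estimate; the adjustments studied in the paper transform this quantity so that it approximates the intractable likelihood well enough to be used inside standard evidence-estimation machinery.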

    Computationally efficient inference for latent position network models

    Latent position models are widely used for the analysis of networks in a variety of research fields. These models possess a number of desirable theoretical properties and are particularly easy to interpret. However, statistical methodologies to fit them generally incur a computational cost that grows with the square of the number of nodes in the graph, which makes the analysis of large social networks impractical. In this paper, we propose a new method with linear computational complexity, which can be used to fit latent position models on networks of several tens of thousands of nodes. Our approach relies on an approximation of the likelihood function, where the amount of noise introduced by the approximation can be arbitrarily reduced at the expense of computational efficiency. We establish several theoretical results that show how the likelihood error propagates to the invariant distribution of the Markov chain Monte Carlo sampler. In particular, we demonstrate that one can achieve a substantial reduction in computing time and still obtain a good estimate of the latent structure. Finally, we apply our method to simulated networks and to a large coauthorship network, highlighting the usefulness of our approach.
    Comment: 39 pages, 10 figures, 1 table
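
    For orientation (a minimal sketch under the usual distance-model assumptions, not the method proposed in the paper), the exact log-likelihood of a distance-based latent position model with log-odds alpha - ||z_i - z_j|| touches every pair of nodes, which is the quadratic cost the proposed approximation is designed to avoid:

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def lpm_log_likelihood(y, z, alpha):
            """Exact log-likelihood of a distance-based latent position model.

            y     : (n, n) symmetric 0/1 adjacency matrix, zero diagonal
            z     : (n, d) latent positions
            alpha : scalar intercept
            Edges are independent given z, with log-odds alpha - ||z_i - z_j||.
            Forming the full pairwise distance matrix is what makes this O(n^2).
            """
            eta = alpha - squareform(pdist(z))        # (n, n) log-odds matrix
            iu = np.triu_indices_from(y, k=1)         # each unordered pair once
            e = eta[iu]
            return float(np.sum(y[iu] * e - np.log1p(np.exp(e))))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            n, d = 200, 2
            z = rng.normal(size=(n, d))
            p = 1.0 / (1.0 + np.exp(squareform(pdist(z)) - 1.0))  # edge probs
            upper = np.triu((rng.random((n, n)) < p).astype(int), k=1)
            y = upper + upper.T
            print(lpm_log_likelihood(y, z, alpha=1.0))

    The approximation proposed in the paper replaces this exact evaluation with one of linear cost and controllable noise; the sketch only shows the quantity being approximated.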