
    Direct photons from relativistic heavy ion collisions at CERN SPS and at RHIC

    Assuming a QGP initial state, we have analysed the direct photon data obtained by the WA98 collaboration in 158 A GeV Pb+Pb collisions at the CERN SPS. It was shown that, for small thermalisation times, the two-loop rate contributes substantially to high-$p_T$ photons. We argue that for extremely short thermalisation time scales, the higher-loop contributions should not be neglected. For thermalisation times of 0.4 fm or greater, when the higher-loop contributions are not substantial, the initial temperature of the QGP is not large and the system does not produce enough hard-$p_T$ photons to fit the WA98 data. For initial times in the range 0.4-1.0 fm, the WA98 data could be fitted only if the fluid has an initial radial velocity in the range 0.3-0.5c. The model was applied to predict the photon spectrum at RHIC energy. Comment: 5 pages, 5 figures
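
    As a rough illustration of why the thermalisation time matters, the sketch below evolves an ideal boost-invariant (Bjorken) plasma in which $T_0^3\tau_0$ is fixed by the final-state entropy, so a shorter $\tau_0$ forces a hotter early stage; the normalisation and times are placeholders, not the values fitted to the WA98 data.

        # Illustrative sketch (not the authors' code): ideal Bjorken expansion.
        # At fixed final-state entropy T0**3 * tau0 ~ const, so a smaller
        # thermalisation time tau0 gives a hotter, more photon-bright early stage,
        # while the late-time temperature is the same. Units are arbitrary.

        def T0_from_entropy(tau0, s_const=2.0):
            """Initial temperature for a given thermalisation time, assuming
            T0**3 * tau0 = s_const (arbitrary normalisation)."""
            return (s_const / tau0) ** (1.0 / 3.0)

        def T_of_tau(tau, tau0, T0):
            """Ideal Bjorken cooling: T(tau) = T0 * (tau0 / tau)**(1/3)."""
            return T0 * (tau0 / tau) ** (1.0 / 3.0)

        for tau0 in (0.2, 0.4, 1.0):   # fm/c, spanning the range discussed above
            T0 = T0_from_entropy(tau0)
            print(f"tau0 = {tau0:.1f} fm/c  ->  T0 = {T0:.2f}, "
                  f"T(2 fm/c) = {T_of_tau(2.0, tau0, T0):.2f}")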

    Learning Arbitrary Statistical Mixtures of Discrete Distributions

    We study the problem of learning, from unlabeled samples, very general statistical mixture models on large finite sets. Specifically, the model to be learned, $\vartheta$, is a probability distribution over probability distributions $p$, where each such $p$ is a probability distribution over $[n] = \{1,2,\dots,n\}$. When we sample from $\vartheta$, we do not observe $p$ directly, but only indirectly and in a very noisy fashion, by sampling from $[n]$ repeatedly and independently, $K$ times, from the distribution $p$. The problem is to infer $\vartheta$ to high accuracy in transportation (earthmover) distance. We give the first efficient algorithms for learning this mixture model without making any restricting assumptions on the structure of the distribution $\vartheta$. We bound the quality of the solution as a function of the sample size $K$ and the number of samples used. Our model and results have applications to a variety of unsupervised learning scenarios, including learning topic models and collaborative filtering. Comment: 23 pages. Preliminary version in the Proceedings of the 47th ACM Symposium on Theory of Computing (STOC'15)
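
    The observation model can be made concrete with a short sketch; the mixing measure, weights, and $K$ below are hypothetical, and this is the sampling process only, not the paper's learning algorithm.

        # Minimal sketch of the observation model: draw a hidden distribution p
        # from the mixing measure theta, then observe only K i.i.d. symbols from p.
        # The components and weights below are made up for illustration.
        import numpy as np

        rng = np.random.default_rng(0)

        n = 5
        components = np.array([[0.40, 0.30, 0.10, 0.10, 0.10],
                               [0.05, 0.05, 0.10, 0.30, 0.50]])
        weights = np.array([0.6, 0.4])

        def sample_document(K):
            """Draw p ~ theta, then K i.i.d. symbols from p; only the symbols are observed."""
            p = components[rng.choice(len(weights), p=weights)]
            return rng.choice(n, size=K, p=p)

        print(sample_document(K=10))   # e.g. 10 symbols from {0, ..., 4}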

    Transverse energy distributions and $J/\psi$ production in Pb+Pb collisions

    We have analysed the latest NA50 data on transverse energy distributions and $J/\psi$ suppression in Pb+Pb collisions. The transverse energy distribution was analysed in the geometric model of AA collisions. In the geometric model, fluctuations in the number of NN collisions at fixed impact parameter are taken into account. The analysis suggests that in Pb+Pb collisions individual NN collisions produce less transverse energy than in other AA collisions; the nucleons are more transparent in Pb+Pb collisions. The transverse energy dependence of the $J/\psi$ suppression was obtained following the model of Blaizot et al., where charmonium suppression is assumed to be 100% effective above a threshold density. With fluctuations in the number of NN collisions taken into account, a good fit to the data is obtained with a single parameter, the threshold density. Comment: Revised version with better $E_T$ fit. 4 pages, 2 figures
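
    A toy version of the threshold ansatz can be written in a few lines; the transverse density profile and threshold value below are placeholders rather than the geometric-model quantities fitted in the paper.

        # Toy sketch of Blaizot-type threshold suppression: a J/psi survives only
        # where the local density stays below the threshold n_c (100% suppression
        # above it). The Gaussian density profile is a placeholder.
        import numpy as np

        def survival(local_density, n_c):
            """1 below the threshold density, 0 (fully suppressed) above it."""
            return np.where(local_density > n_c, 0.0, 1.0)

        s = np.linspace(0.0, 8.0, 9)                    # transverse position (fm)
        density = 3.5 * np.exp(-s**2 / (2 * 3.0**2))    # arbitrary units
        print(survival(density, n_c=2.0))               # 0 in the hot core, 1 outside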

    Orientifolds and twisted boundary conditions

    It is argued that the T-dual of a crosscap is a combination of an O+ and an O- orientifold plane. Various theories with crosscaps and D-branes are interpreted as gauge theories on tori obeying twisted boundary conditions. Their duals live on orientifolds where the various orientifold planes are of different types. We derive how to read off the holonomies from the positions of D-branes in the orientifold background. As an application we reconstruct some results from a paper by Borel, Friedman and Morgan for gauge theories with classical groups, compactified on a 2- or 3-torus with twisted boundary conditions. Comment: 23 pages, LaTeX, 2 eps figures; minor corrections, references added

    Time-Explicit Simulation of Wave Interaction in Optical Waveguide Crossings at Large Angles

    The time-explicit finite-difference time-domain (FDTD) method is used to simulate wave interaction in optical waveguide crossings at large angles. The wave propagation at the intersecting structure is simulated by time stepping the discretized form of Maxwell's time-dependent curl equations. The power distribution characteristics of the intersections are obtained by extracting the guided-mode amplitudes from the simulated total-field data. A physical picture of power flow in the intersection is also obtained from the total-field solution; this provides insight into the switching behavior and the origin of the radiation.
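
    The update structure of such an explicit FDTD scheme can be illustrated with a minimal one-dimensional leapfrog loop; the real simulations are of 2D/3D waveguide crossings, and the grid size, Courant number, and source below are arbitrary.

        # Minimal 1D FDTD (Yee leapfrog) sketch of explicit time stepping of
        # Maxwell's curl equations, in normalized units with simple boundaries.
        import numpy as np

        nx, nt = 400, 800
        ez = np.zeros(nx)          # electric field
        hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
        courant = 0.5              # c*dt/dx <= 1 for stability

        for t in range(nt):
            hy += courant * (ez[1:] - ez[:-1])        # update H from curl E
            ez[1:-1] += courant * (hy[1:] - hy[:-1])  # update E from curl H
            ez[20] += np.exp(-((t - 60) / 20.0) ** 2) # soft Gaussian source

        print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")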

    Centrality dependence of elliptic flow and QGP viscosity

    In the Israel-Stewart theory of second-order hydrodynamics, we have analysed the recent PHENIX data on charged-particle elliptic flow in Au+Au collisions. The PHENIX data demand a more viscous fluid in peripheral collisions than in central collisions. Over a broad range of collision centrality (0-10% to 50-60%), the viscosity to entropy ratio ($\eta/s$) varies between 0 and 0.17. Comment: Final version to be published in J. Phys. G. 8 pages, 6 figures, and 3 tables

    Differentially Private Model Selection with Penalized and Constrained Likelihood

    In statistical disclosure control, the goal of data analysis is twofold: the released information must provide accurate and useful statistics about the underlying population of interest, while minimizing the potential for an individual record to be identified. In recent years, the notion of differential privacy has received much attention in theoretical computer science, machine learning, and statistics. It provides a rigorous and strong notion of protection for individuals' sensitive information. A fundamental question is how to incorporate differential privacy into traditional statistical inference procedures. In this paper we study model selection in multivariate linear regression under the constraint of differential privacy. We show that model selection procedures based on penalized least squares or likelihood can be made differentially private by a combination of regularization and randomization, and we propose two algorithms to do so. We show that our private procedures are consistent under essentially the same conditions as the corresponding non-private procedures. We also find that, under differential privacy, the procedure becomes more sensitive to the tuning parameters. We illustrate and evaluate our method using simulation studies and two real data examples.
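
    As a generic illustration of combining a penalized score with randomization (not the specific algorithms proposed in the paper), the sketch below selects a model with the exponential mechanism, which is $\epsilon$-differentially private when the score has bounded sensitivity; the scores, $\epsilon$, and sensitivity are hypothetical.

        # Generic private model selection via the exponential mechanism:
        # sample index i with probability proportional to
        # exp(epsilon * score_i / (2 * sensitivity)).
        import numpy as np

        rng = np.random.default_rng(1)

        def private_select(scores, epsilon, sensitivity):
            scores = np.asarray(scores, dtype=float)
            logits = epsilon * scores / (2.0 * sensitivity)
            logits -= logits.max()                       # numerical stability
            probs = np.exp(logits) / np.exp(logits).sum()
            return rng.choice(len(scores), p=probs)

        # Hypothetical penalized-likelihood scores for four candidate submodels
        scores = [-120.3, -118.9, -119.4, -125.0]
        print(private_select(scores, epsilon=1.0, sensitivity=1.0))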

    Enhancement of gluonic dissociation of $J/\psi$ in viscous QGP

    We have investigated the effect of viscosity on the gluonic dissociation of $J/\psi$ in an equilibrating plasma. Suppression of $J/\psi$ due to gluonic dissociation depends on the temperature and also on the chemical equilibration rate. In an equilibrating plasma, viscosity affects the temperature evolution and the chemical equilibration rate, causing both of them to evolve more slowly than their ideal counterparts. For Au+Au collisions at RHIC and LHC energies, gluonic dissociation of $J/\psi$ increases for a viscous plasma. Low-$P_T$ $J/\psi$'s are found to be more suppressed due to viscosity than high-$P_T$ ones, and the effect is larger at LHC energy than at RHIC energy. Comment: 3 pages, 1 figure
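
    The slower cooling of a viscous plasma can be illustrated with a first-order (Navier-Stokes) correction to Bjorken expansion; the sketch below uses dimensionless units and an idealized equation of state, not the equilibrating-plasma model of the paper.

        # Hedged sketch: Bjorken cooling with a first-order shear-viscous term,
        #   d(eps)/d(tau) = -(eps + p)/tau + (4/3) * (eta/s) * s / tau**2,
        # with p = eps/3, eps = a*T**4, s = (eps + p)/T. Dimensionless units.

        def evolve_T(eta_over_s, T0=1.0, tau0=0.5, tau_max=5.0, dtau=1e-3, a=1.0):
            tau, eps = tau0, a * T0**4
            while tau < tau_max:
                p = eps / 3.0
                T = (eps / a) ** 0.25
                s = (eps + p) / T
                eps += (-(eps + p) / tau + (4.0 / 3.0) * eta_over_s * s / tau**2) * dtau
                tau += dtau
            return (eps / a) ** 0.25

        print("ideal   T(tau_max):", round(evolve_T(0.0), 3))
        print("viscous T(tau_max):", round(evolve_T(0.2), 3))   # cools more slowly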

    $J/\psi$ suppression in Pb+Pb collisions and $p_T$ broadening

    We have analysed the NA50 data on the centrality dependence of the $p_T$ broadening of $J/\psi$'s in Pb+Pb collisions at the CERN SPS. The data were analysed in a QCD-based model, where $J/\psi$'s are suppressed in the 'nuclear' medium. Without any free parameter, the model could explain the NA50 $p_T$ broadening data. The data were also analysed in a QGP-based threshold model, where $J/\psi$ suppression is 100% above a critical density. The QGP-based model could not explain the NA50 $p_T$ broadening data. We have also predicted the centrality dependence of $J/\psi$ suppression and $p_T$ broadening at RHIC energy. Both models, the QGP-based threshold model and the QCD-based nuclear absorption model, predict $p_T$ broadening very close to each other. Comment: The paper was completely revised and the conclusion changed. 5 pages, 4 figures
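
    The nuclear $p_T$ broadening picture can be caricatured as a transverse random walk of the initial gluons, so that $\langle p_T^2\rangle$ grows roughly linearly with the path length in nuclear matter; the coefficients in the sketch below are placeholders, not the values used in either model.

        # Schematic p_T broadening: <p_T^2>(L) = <p_T^2>_pp + a_gN * L, with L the
        # path length in nuclear matter. Parameter values are illustrative only.

        def mean_pt2(L_fm, pt2_pp=1.2, a_gN=0.08):
            """<p_T^2> in GeV^2 after a path length L (fm); placeholder parameters."""
            return pt2_pp + a_gN * L_fm

        for L in (0.0, 2.0, 5.0, 10.0):   # from p+p-like to central Pb+Pb path lengths
            print(f"L = {L:4.1f} fm  ->  <p_T^2> = {mean_pt2(L):.2f} GeV^2")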

    An EPLS model for a variable production rate with stock-price sensitive demand and deterioration

    It is observed that large piles of consumer goods displayed in supermarkets lead consumers to buy more, which generates more profit for sellers. But too large an on-hand display of stock leaves a negative impression on the buyer, and the amount of shelf or display space is limited. For these reasons, we impose restrictions on the amount of on-hand displayed stock and on the initial and ending on-hand stock levels. We introduce an economic production lot-size model in which the production rate depends on the stock level and the selling price per unit. A constant-fraction deterioration rate is considered in this model. To illustrate the results of the model, four numerical examples are presented. A sensitivity analysis with respect to changes in the parameter values is also given.
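
    A minimal sketch of the stock dynamics in such a model is given below; the functional forms of the production and demand rates and all parameter values are assumptions for illustration, not the formulation used in the paper.

        # Illustrative stock dynamics during the production run:
        #   dI/dt = P(I, s) - D(I, s) - theta * I,
        # with demand increasing in displayed stock I and decreasing in price s,
        # production decreasing in on-hand stock, and theta the constant
        # deterioration fraction. All forms and numbers are assumed.

        def demand(I, s, a=50.0, b=2.0, c=0.5):
            return a - b * s + c * I

        def production(I, s, p0=80.0, alpha=0.3):
            return p0 - alpha * I

        def simulate(I0=0.0, s=10.0, theta=0.05, dt=0.01, t_end=5.0):
            I, t = I0, 0.0
            while t < t_end:
                I += (production(I, s) - demand(I, s) - theta * I) * dt
                t += dt
            return I

        print(f"on-hand stock at the end of the run: {simulate():.1f} units")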