Direct photons from relativistic heavy ion collisions at CERN SPS and at RHIC
Assuming QGP as the initial state, we have analyzed the direct photon data obtained by the WA98 collaboration in 158 A GeV Pb+Pb collisions at CERN SPS. It was shown that for small thermalisation times, the two-loop rate contributes substantially to high-p_T photons. We argue that for an extremely short thermalisation time scale, the higher-loop contribution should not be neglected. For thermalisation times of 0.4 fm or greater, when the higher-loop contribution is not substantial, the initial temperature of the QGP is not large and the system does not produce enough hard photons to fit the WA98 experiment. For initial times in the range of 0.4-1.0 fm, the WA98 data could be fitted only if the fluid has an initial radial velocity in the range of 0.3-0.5c. The model was applied to predict the photon spectrum at RHIC energy.
Comment: 5 pages, 5 figures
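The link invoked above between a short thermalisation time and a hot initial plasma follows from entropy conservation in ideal Bjorken longitudinal expansion, T^3 * tau = const. A minimal sketch, assuming that scaling and purely illustrative units:

```python
# Sketch of ideal Bjorken cooling, T(tau) = T0 * (tau0 / tau)**(1/3).
# Assumption: fixing T0**3 * tau0 (entropy conservation) links a longer
# thermalisation time tau0 to a lower initial temperature T0, as in the
# abstract's argument. All units here are illustrative.

def bjorken_temperature(tau, tau0, T0):
    """Temperature at proper time tau for ideal longitudinal Bjorken flow."""
    return T0 * (tau0 / tau) ** (1.0 / 3.0)

def initial_temperature(tau0, entropy_const=1.0):
    """T0 from the constraint T0**3 * tau0 = entropy_const (illustrative units)."""
    return (entropy_const / tau0) ** (1.0 / 3.0)

# Shorter thermalisation time -> hotter initial plasma, hence harder photons.
assert initial_temperature(0.2) > initial_temperature(1.0)
```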
Learning Arbitrary Statistical Mixtures of Discrete Distributions
We study the problem of learning from unlabeled samples very general statistical mixture models on large finite sets. Specifically, the model to be learned, theta, is a probability distribution over probability distributions p, where each such p is a probability distribution over [n] = {1, ..., n}. When we sample from theta, we do not observe p directly, but only indirectly and in very noisy fashion, by sampling from [n] repeatedly, independently K times from the distribution p. The problem is to infer theta to high accuracy in transportation (earthmover) distance. We give the first efficient algorithms for learning this mixture model without making any restricting assumptions on the structure of the distribution theta. We bound the quality of the solution as a function of the size K of the samples and the number of samples used. Our model and results have applications to a variety of unsupervised learning scenarios, including learning topic models and collaborative filtering.
Comment: 23 pages. Preliminary version in the Proceedings of the 47th ACM Symposium on the Theory of Computing (STOC'15).
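The generative process described above can be made concrete in a few lines. The names theta and K follow the abstract's setup; the alphabet size, mixture weights, and constituent distributions below are illustrative assumptions:

```python
# Sketch of the sampling model: draw a constituent distribution p from the
# mixture theta, then observe only K iid symbols drawn from p. The learner
# never sees p itself, only these noisy K-sample "documents".
import random

def sample_document(theta, K, rng=random):
    """theta: list of (p, weight) pairs, each p a distribution over {0..n-1}.
    Returns K iid symbols from one randomly chosen constituent p."""
    constituents, weights = zip(*theta)
    p = rng.choices(constituents, weights=weights, k=1)[0]
    return rng.choices(range(len(p)), weights=p, k=K)

# Two-constituent mixture on a 3-element alphabet (illustrative numbers).
theta = [([0.8, 0.1, 0.1], 0.5), ([0.1, 0.1, 0.8], 0.5)]
doc = sample_document(theta, K=10)
assert len(doc) == 10 and all(0 <= s <= 2 for s in doc)
```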
Transverse energy distributions and J/psi production in Pb+Pb collisions
We have analyzed the latest NA50 data on transverse energy distributions and J/psi suppression in Pb+Pb collisions. The transverse energy distribution was analysed in the geometric model of AA collisions, in which fluctuations in the number of NN collisions at fixed impact parameter are taken into account. The analysis suggests that in Pb+Pb collisions, individual NN collisions produce less E_T than in other AA collisions; the nucleons are more transparent in Pb+Pb collisions. The transverse energy dependence of the J/psi suppression was obtained following the model of Blaizot et al., where charmonium suppression is assumed to be 100% effective above a threshold density. With fluctuations in the number of NN collisions taken into account, a good fit to the data is obtained with a single parameter, the threshold density.
Comment: Revised version with better E_T fit. 4 pages, 2 figures
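The threshold mechanism of the Blaizot et al. model can be sketched as follows; the Gaussian density profile and all parameter values are illustrative assumptions, not the paper's parametrisation:

```python
# Sketch of a threshold suppression model: charmonium produced at a point
# survives only if the local density stays below a critical value n_c.
# Assumed: a 2-D Gaussian density profile and illustrative parameters.
import math

def survival_fraction(n_peak, n_c, sigma=1.0, grid=200, extent=4.0):
    """Fraction of production surviving the threshold, for a density
    n(r) = n_peak * exp(-r^2 / (2 sigma^2)), production weighted by density."""
    produced = survived = 0.0
    for i in range(grid):
        r = extent * (i + 0.5) / grid
        n = n_peak * math.exp(-r * r / (2 * sigma * sigma))
        w = n * r                 # production ~ density, area element ~ r dr
        produced += w
        if n < n_c:
            survived += w
    return survived / produced

# Denser (more central) collisions suppress a larger fraction.
assert survival_fraction(n_peak=0.5, n_c=1.0) == 1.0   # never above threshold
assert survival_fraction(n_peak=2.0, n_c=1.0) < 1.0
```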
Orientifolds and twisted boundary conditions
It is argued that the T-dual of a crosscap is a combination of an O+ and an O- orientifold plane. Various theories with crosscaps and D-branes are interpreted as gauge theories on tori obeying twisted boundary conditions. Their duals live on orientifolds where the various orientifold planes are of different types. We derive how to read off the holonomies from the positions of D-branes in the orientifold background. As an application we reconstruct some results from a paper by Borel, Friedman and Morgan for gauge theories with classical groups, compactified on a 2- or 3-torus with twisted boundary conditions.
Comment: 23 pages, LaTeX, 2 eps figures; minor corrections, references added
Time-Explicit Simulation of Wave Interaction in Optical Waveguide Crossings at Large Angles
The time-explicit finite-difference time-domain method is used to simulate wave interaction in optical waveguide crossings at large angles. The wave propagation at the intersecting structure is simulated by time stepping the discretized form of Maxwell's time-dependent curl equations. The power distribution characteristics of the intersections are obtained by extracting the guided-mode amplitudes from these simulated total field data. A physical picture of power flow in the intersection is also obtained from the total field solution; this provides insights into the switching behavior and the origin of the radiation.
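The update scheme described above, time stepping the discretized curl equations, can be sketched in one dimension. This is a generic Yee-scheme illustration with assumed normalised units (c = 1, Courant number 0.5) and an illustrative grid and source, not the paper's 2-D implementation:

```python
# Sketch of a 1-D Yee FDTD update for Maxwell's curl equations: E and H live
# on staggered grids and are advanced in alternating half steps.

def fdtd_1d(n_cells=200, n_steps=300, courant=0.5):
    Ez = [0.0] * n_cells          # electric field at integer grid points
    Hy = [0.0] * n_cells          # magnetic field at half-integer points
    for t in range(n_steps):
        # Update H from the spatial difference (curl) of E.
        for i in range(n_cells - 1):
            Hy[i] += courant * (Ez[i + 1] - Ez[i])
        # Update E from the spatial difference (curl) of H.
        for i in range(1, n_cells):
            Ez[i] += courant * (Hy[i] - Hy[i - 1])
        # Soft Gaussian pulse source injected at the grid centre.
        Ez[n_cells // 2] += 2.718281828 ** (-((t - 30) / 10.0) ** 2)
    return Ez

fields = fdtd_1d()
assert any(abs(e) > 1e-6 for e in fields)   # the pulse has propagated into the grid
```

The endpoints are never updated, which acts as a perfectly conducting boundary; a 2-D crossing simulation replaces the 1-D differences with the full transverse curl stencil.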
Centrality dependence of elliptic flow and QGP viscosity
In the Israel-Stewart theory of second-order hydrodynamics, we have analysed the recent PHENIX data on charged-particle elliptic flow in Au+Au collisions. The PHENIX data demand a more viscous fluid in peripheral collisions than in central collisions. Over a broad range of collision centrality (0-10% to 50-60%), the viscosity to entropy ratio (eta/s) varies between 0 and 0.17.
Comment: Final version to be published in J. Phys. G. 8 pages, 6 figures and 3 tables
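The observable behind such fits, the elliptic flow coefficient v2, is the second Fourier coefficient of the azimuthal particle distribution, v2 = <cos 2(phi - Psi_RP)>. A minimal extraction from a toy event sample, with illustrative parameters and dN/dphi ~ 1 + 2 v2 cos(2 phi) as the assumed distribution:

```python
# Sketch: generate azimuthal angles from dN/dphi ~ 1 + 2 v2 cos(2 phi) by
# rejection sampling (reaction plane fixed at zero), then recover v2 as the
# sample mean of cos(2 phi). Event count and seed are illustrative.
import math, random

def estimate_v2(v2_true, n=200_000, seed=1):
    rng = random.Random(seed)
    total = count = 0.0
    while count < n:
        phi = rng.uniform(-math.pi, math.pi)
        # Accept-reject against the target density (max at phi = 0).
        if rng.uniform(0, 1 + 2 * v2_true) < 1 + 2 * v2_true * math.cos(2 * phi):
            total += math.cos(2 * phi)
            count += 1
    return total / count

v2_hat = estimate_v2(0.08)
assert abs(v2_hat - 0.08) < 0.02
```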
Differentially Private Model Selection with Penalized and Constrained Likelihood
In statistical disclosure control, the goal of data analysis is twofold: The
released information must provide accurate and useful statistics about the
underlying population of interest, while minimizing the potential for an
individual record to be identified. In recent years, the notion of differential
privacy has received much attention in theoretical computer science, machine
learning, and statistics. It provides a rigorous and strong notion of
protection for individuals' sensitive information. A fundamental question is
how to incorporate differential privacy into traditional statistical inference
procedures. In this paper we study model selection in multivariate linear
regression under the constraint of differential privacy. We show that model
selection procedures based on penalized least squares or likelihood can be made
differentially private by a combination of regularization and randomization,
and propose two algorithms to do so. We show that our private procedures are
consistent under essentially the same conditions as the corresponding
non-private procedures. We also find that under differential privacy, the
procedure becomes more sensitive to the tuning parameters. We illustrate and
evaluate our method using simulation studies and two real data examples.
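One standard way to randomize a model-selection score, shown here purely as an illustration of the regularization-plus-randomization idea and not as the paper's specific algorithm, is the exponential mechanism: sample a candidate model with probability proportional to exp(eps * score / (2 * sensitivity)). The candidate models, scores, and privacy parameters below are illustrative assumptions:

```python
# Sketch of the exponential mechanism for private model selection: higher-
# scoring models are exponentially more likely, and eps controls the
# privacy/utility trade-off. Assumes each score changes by at most
# `sensitivity` when one record changes.
import math, random

def exponential_mechanism(scores, eps, sensitivity, rng=random):
    """Sample a key from `scores` (higher is better) under eps-DP."""
    keys = list(scores)
    m = max(scores[k] for k in keys)        # stabilise the exponentials
    weights = [math.exp(eps * (scores[k] - m) / (2 * sensitivity)) for k in keys]
    return rng.choices(keys, weights=weights, k=1)[0]

# Penalized-likelihood scores for three candidate variable subsets (illustrative).
scores = {"x1": -10.2, "x1+x2": -7.9, "x1+x2+x3": -8.4}
choice = exponential_mechanism(scores, eps=2.0, sensitivity=1.0, rng=random.Random(0))
assert choice in scores
```

The abstract's observation about sensitivity to tuning parameters shows up here too: both eps and the penalty built into the scores shift the sampling weights.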
Enhancement of gluonic dissociation of J/psi in viscous QGP
We have investigated the effect of viscosity on the gluonic dissociation of J/psi in an equilibrating plasma. Suppression of J/psi due to gluonic dissociation depends on the temperature and also on the chemical equilibration rate. In an equilibrating plasma, viscosity affects the temperature evolution and also the chemical equilibration rate, requiring both of them to evolve slowly compared to their ideal counterparts. For Au+Au collisions at RHIC and LHC energies, gluonic dissociation of J/psi increases for a viscous plasma. Low-p_T J/psi's are found to be more suppressed due to viscosity than the high-p_T ones. Also, the effect is larger at LHC energy than at RHIC energy.
Comment: 3 pages, 1 figure
J/psi suppression in Pb+Pb collisions and p_T broadening
We have analysed the NA50 data on the centrality dependence of p_T broadening of J/psi's in Pb+Pb collisions at the CERN-SPS. The data were analysed in a QCD-based model, where J/psi's are suppressed in the 'nuclear' medium. Without any free parameter, the model could explain the NA50 p_T broadening data. The data were also analysed in a QGP-based threshold model, where J/psi suppression is 100% above a critical density. The QGP-based model could not explain the NA50 p_T broadening data. We have also predicted the centrality dependence of J/psi suppression and p_T broadening at RHIC energy. Both models, the QGP-based threshold model and the QCD-based nuclear absorption model, predict p_T broadening very close to each other.
Comment: The paper was completely revised. The conclusion is also changed. 5 pages, 4 figures
An EPLS model for a variable production rate with stock-price sensitive demand and deterioration
It is observed that large piles of consumer goods displayed in supermarkets lead consumers to buy more, which generates more profit for sellers. But a very large on-hand display of stock leaves a negative impression on the buyer, and the amount of shelf or display space is limited. For this reason, we impose a restriction on the on-hand displayed stock and also on the initial and ending on-hand stock levels. We introduce an economic production lot size model in which the production rate depends on the stock and the selling price per unit. A constant-fraction deterioration rate is considered in this model. To illustrate the results of the model, four numerical examples are presented. A sensitivity analysis with respect to changes in the parameter values is also given.
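The inventory dynamics behind such an EPLS model can be illustrated with a small simulation of dI/dt = P - D - theta * I, where the production rate P falls with stock and demand D is stock- and price-sensitive. The functional forms, the display cap, and every parameter value below are illustrative assumptions, not the paper's formulation:

```python
# Sketch: Euler integration of the inventory level I(t) under a
# stock-dependent production rate, stock- and price-sensitive demand,
# and constant-fraction deterioration theta.

def simulate_inventory(T=10.0, dt=0.001, theta=0.05, price=5.0,
                       I_max=100.0, I0=0.0):
    """Final inventory level; production stops at the display cap I_max."""
    I = I0
    t = 0.0
    while t < T:
        demand = 20.0 - 2.0 * price + 0.1 * I              # stock- and price-sensitive
        production = 30.0 - 0.1 * I if I < I_max else 0.0  # rate falls with stock
        I += dt * (production - demand - theta * I)
        t += dt
    return I

I_final = simulate_inventory()
assert 0.0 < I_final < 100.0   # stock stays within the display restriction
```

With these assumed forms the net rate is 20 - 0.25 I, so the stock relaxes toward a steady level of 80 units, below the display cap.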