A Linear Iterative Unfolding Method
A frequently faced task in experimental physics is to measure the probability
distribution of some quantity. Often this quantity to be measured is smeared by
a non-ideal detector response or by some physical process. The procedure of
removing this smearing effect from the measured distribution is called
unfolding; it is a delicate problem in signal processing due to the
well-known ill-posed nature of this task. Various methods have been invented
which, given some assumptions on the initial probability distribution, try to
regularize the unfolding problem. Most of these methods necessarily introduce
bias into the estimate of the initial probability distribution. We propose a
linear iterative method which has the advantage that no assumptions on the
initial probability distribution are needed; the only regularization
parameter is the stopping order of the iteration, which can be used to choose
the best compromise between the introduced bias and the propagated statistical
and systematic errors. The method is consistent: "binwise" convergence to the
initial probability distribution is proven in the absence of measurement errors
under a quite general condition on the response function. This condition holds
for practical applications such as convolutions, calorimeter response
functions, momentum reconstruction response functions based on tracking in
magnetic field, etc. In the presence of measurement errors, explicit formulae
for the propagation of the three important error terms are provided: bias
error, statistical error, and systematic error. A trade-off between these three
error terms can be used to define an optimal iteration stopping criterion, and
the errors can be estimated there. We provide a numerical C library
implementing the method, which also incorporates automatic statistical error
propagation.
Comment: Proceedings of the ACAT-2011 conference (Uxbridge, United Kingdom), 9 pages, 5 figures; changes of the corrigendum included
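The iteration described above can be sketched in a few lines (an illustrative Neumann-series type linear iteration under the stated convergence condition, not the authors' C library; the function name and array conventions are assumptions):

```python
import numpy as np

def iterative_unfold(y, R, n_iter):
    """Linear iterative unfolding sketch.

    y      : measured spectrum (1-D array)
    R      : response matrix, so that measured = R @ true
    n_iter : stopping order of the iteration, the only
             regularization parameter, as in the abstract
    """
    u = y.copy()
    for _ in range(n_iter):
        u = u + (y - R @ u)   # u_{k+1} = u_k + (y - R u_k)
    return u                  # approaches R^{-1} y when ||I - R|| < 1
```

Stopping the iteration early keeps the propagated statistical and systematic errors small at the price of bias; the optimal stopping order trades the three error terms off against each other.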
10 Years of Object-Oriented Analysis on H1
Over a decade ago, the H1 Collaboration decided to embrace the
object-oriented paradigm and completely redesign its data analysis model and
data storage format. The event data model, based on the ROOT framework,
consists of three layers - tracks and calorimeter clusters, identified
particles and finally event summary data - with a singleton class providing
unified access. This original solution was then augmented with a fourth layer
containing user-defined objects.
This contribution will summarise the history of the solutions used, from
modifications to the original design, to the evolution of the high-level
end-user analysis object framework which is used by H1 today. Several important
issues are addressed - the portability of expert knowledge to increase the
efficiency of data analysis, the flexibility of the framework to incorporate
new analyses, the performance and ease of use, and lessons learned for future
projects.
Comment: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research
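The layered design with unified singleton access can be caricatured in a few lines (a Python toy of the idea only; the real H1 framework is C++ on ROOT, and every name below is hypothetical):

```python
class EventStore:
    """Singleton giving unified access to the layered event data model.

    A toy of the design described in the abstract; H1's actual classes
    live in a C++ framework built on ROOT."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            inst = super().__new__(cls)
            # the three original layers plus the later fourth layer
            inst.tracks_and_clusters = []  # layer 1: tracks, calo clusters
            inst.particles = []            # layer 2: identified particles
            inst.event_summary = {}        # layer 3: event summary data
            inst.user_objects = {}         # layer 4: user-defined objects
            cls._instance = inst
        return cls._instance
```

Every analysis module constructing `EventStore()` receives the same instance, which is what makes the access point "unified".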
Validation of Kalman Filter alignment algorithm with cosmic-ray data using a CMS silicon strip tracker endcap
A Kalman Filter alignment algorithm has been applied to cosmic-ray data. We
discuss the alignment algorithm and an experiment-independent implementation
including outlier rejection and treatment of weakly determined parameters.
Using this implementation, the algorithm has been applied to data recorded with
one CMS silicon tracker endcap. Results are compared to both photogrammetry
measurements and data obtained from a dedicated hardware alignment system, and
good agreement is observed.
Comment: 11 pages, 8 figures. CMS NOTE-2010/00
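The sequential update at the heart of such an algorithm can be sketched as follows (an illustrative sketch of the Kalman-filter update with a simple chi-square cut standing in for the outlier rejection mentioned above, not the CMS implementation):

```python
import numpy as np

def kalman_align(a, C, measurements, chi2_cut=9.0):
    """Sequential Kalman-filter update of alignment parameters.

    a, C         : current parameter estimate and its covariance
    measurements : iterable of (H, m, V) with residual model r = m - H a
    Tracks whose normalized residual exceeds chi2_cut are skipped,
    a stand-in for the note's outlier rejection.
    """
    for H, m, V in measurements:
        r = m - H @ a                        # track residual
        S = H @ C @ H.T + V                  # innovation covariance
        chi2 = float(r @ np.linalg.solve(S, r))
        if chi2 > chi2_cut:                  # reject outlier tracks
            continue
        K = C @ H.T @ np.linalg.inv(S)       # Kalman gain
        a = a + K @ r
        C = (np.eye(len(a)) - K @ H) @ C
    return a, C
```

Processing tracks one at a time is what makes the method attractive for alignment: the full parameter covariance is updated without ever building the global normal-equation matrix.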
Signals of Disoriented Chiral Condensate
If a disoriented chiral condensate is created over an extended space-time
region following a rapid cooling in hadronic or nuclear collisions, the
misalignment of the condensate with the electroweak symmetry breaking can
generate observable effects in the processes which involve both strong and
electromagnetic interactions. We point out the relevance of the dilepton decay
of light vector mesons as a signal for formation of the disoriented condensate.
We predict that the decay of \rho^0 to dileptons will be suppressed and/or the
\rho resonance peak broadened, while the decay of \omega to dileptons will not
be affected by the condensate.
Comment: 13 pages in LaTeX, UCB-PTH-94/05, LBL-3533
J/Psi Suppression in Heavy Ion Collisions at the CERN SPS
We reexamine the production of J/Psi and other charmonium states for a
variety of target-projectile choices at the SPS. For this study we use a newly
constructed cascade code LUCIFER II, which yields acceptable descriptions of
both hard and soft processes, specifically Drell-Yan and hidden charm
production, and soft energy loss and meson production, at the SPS. Glauber
calculations of other authors are redone, and compared directly to the cascade
results. The modeling of the charmonium states differs from that of earlier
workers in its unified treatment of the hidden charm meson spectrum, which is
introduced from the outset as a set of coupled states. The result is a
description of the NA38 and NA50 data in terms of a conventional hadronic
picture. The apparently anomalous suppression found in the most massive Pb+Pb
system arises from three sources: destruction in the initial nucleon-nucleon
cascade, use of coupled channels to exploit the larger breakup in the less
bound Chi and Psi' states, and comover interaction in the final low energy
phase.
Comment: 36 pages (15 figures)
New Acquisition: Georges de La Tour, "Saint Thomas"
We present a tomographic technique making use of a gigaelectronvolt electron beam for the determination of the material budget distribution of centimeter-sized objects by means of simulations and measurements. In both cases, the trajectory of electrons traversing a sample under test is reconstructed using a pixel beam-telescope. The width of the deflection angle distribution of electrons undergoing multiple Coulomb scattering at the sample is estimated. Basing the sinogram on position-resolved estimators enables the reconstruction of the original sample using an inverse Radon transform. We exemplify the feasibility of this tomographic technique via simulations of two structured cubes, made of aluminium and lead, and via an in-beam measured coaxial adapter. The simulations yield images with FWHM edge resolutions of (177 ± 13) μm and a contrast-to-noise ratio of 5.6 ± 0.2 (7.8 ± 0.3) for aluminium (lead) compared to air. The tomographic reconstruction of a coaxial adapter serves as experimental evidence of the technique and yields a contrast-to-noise ratio of 15.3 ± 1.0 and a FWHM edge resolution of (117 ± 4) μm.
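The first step of the pipeline, turning position-resolved deflection angles into one sinogram row, can be sketched like this (an illustrative sketch: the RMS is used as width estimator here, which is not necessarily the estimator of the paper, and the function name and binning are assumptions):

```python
import numpy as np

def scattering_sinogram(hits, n_pos_bins, pos_range):
    """One sinogram row at a fixed projection angle.

    hits : array of shape (N, 2) holding the transverse position and
           measured deflection angle of each electron.
    The width of the deflection-angle distribution in each position
    bin becomes one sinogram entry.
    """
    pos, theta = hits[:, 0], hits[:, 1]
    edges = np.linspace(pos_range[0], pos_range[1], n_pos_bins + 1)
    row = np.zeros(n_pos_bins)
    for i in range(n_pos_bins):
        in_bin = (pos >= edges[i]) & (pos < edges[i + 1])
        if in_bin.sum() > 1:
            row[i] = theta[in_bin].std()  # width of angle distribution
    return row
```

Stacking such rows over the projection angles yields the sinogram, which is then inverted with an inverse Radon transform (e.g. filtered back-projection) to recover the material budget distribution.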
Charged-Particle Multiplicity in Proton-Proton Collisions
This article summarizes and critically reviews measurements of
charged-particle multiplicity distributions and pseudorapidity densities in
p+p(pbar) collisions between sqrt(s) = 23.6 GeV and sqrt(s) = 1.8 TeV. Related
theoretical concepts are briefly introduced. Moments of multiplicity
distributions are presented as a function of sqrt(s). Feynman scaling, KNO
scaling, as well as the description of multiplicity distributions with a single
negative binomial distribution and with combinations of two or more negative
binomial distributions are discussed. Moreover, similarities between the energy
dependence of charged-particle multiplicities in p+p(pbar) and e+e- collisions
are studied. Finally, various predictions for pseudorapidity densities, average
multiplicities in full phase space, and multiplicity distributions of charged
particles in p+p(pbar) collisions at the LHC energies of sqrt(s) = 7 TeV, 10
TeV, and 14 TeV are summarized and compared.
Comment: Invited review for Journal of Physics G -- version 2: version after referee's comments
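The negative binomial description mentioned above is compact enough to write down directly (a sketch of the standard NBD parametrization and a weighted two-component combination; function names are assumptions):

```python
from math import exp, lgamma, log

def nbd(n, nbar, k):
    """Negative binomial multiplicity distribution P(n) with mean nbar
    and shape parameter k (k -> infinity recovers a Poissonian)."""
    p = nbar / (nbar + k)
    log_prob = (lgamma(n + k) - lgamma(k) - lgamma(n + 1)
                + n * log(p) + k * log(1.0 - p))
    return exp(log_prob)

def two_nbd(n, w, nbar1, k1, nbar2, k2):
    """Weighted sum of two NBDs, the kind of combination used when a
    single NBD no longer describes the measured distributions."""
    return w * nbd(n, nbar1, k1) + (1.0 - w) * nbd(n, nbar2, k2)
```

The log-gamma form avoids overflow at the large multiplicities reached at collider energies; moments such as C_q = <n^q>/<n>^q follow by direct summation over n.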
Leading particle effect, inelasticity and the connection between average multiplicities in e+e- and pp processes
The Regge-Mueller formalism is used to describe the inclusive spectrum of the
proton in pp collisions. From such a description the energy dependences of
both the average inelasticity and the leading proton multiplicity are
calculated. These quantities are then used to establish the connection between
the average charged particle multiplicities measured in e+e- and pp processes.
The description obtained for the leading proton cross section implies that
Feynman scaling is strongly violated only at the extreme values of x_F, that
is at the central region (x_F ~ 0) and at the diffraction region (x_F ~ 1),
while it is approximately observed in the intermediate region of the spectrum.
Comment: 20 pages, 10 figures, to be published in Physical Review
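The step from the leading proton spectrum to the average inelasticity can be illustrated numerically (a sketch assuming the usual definition of inelasticity as the energy fraction not carried off by the leading proton; any tabulated dN/dx_F replaces the paper's Regge-Mueller spectrum here):

```python
def average_inelasticity(x, dn_dx):
    """<K> = 1 - <x_F> from a tabulated leading-proton spectrum dN/dx_F,
    using trapezoidal integration over the Feynman-x grid."""
    trapz = lambda y: sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
                          for i in range(len(x) - 1))
    norm = trapz(dn_dx)
    mean_x = trapz([xi * yi for xi, yi in zip(x, dn_dx)]) / norm
    return 1.0 - mean_x
```

A spectrum peaked toward x_F ~ 1 (a hard leading proton) gives a small inelasticity, i.e. little energy is left for particle production, which is how the leading particle effect enters the multiplicity connection.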
\rho^0, K^{*0} and f_0 Production in Au-Au and pp Collisions at sqrt(s_NN) = 200 GeV
Preliminary results on \rho^0, K^{*0} and f_0 production using the mixed-event
technique are presented. The measurements are performed at mid-rapidity by the
STAR detector in sqrt(s_NN) = 200 GeV Au-Au and pp interactions at RHIC.
The results are compared to different measurements at various energies.
Comment: 4 pages, 6 figures. Talk presented at Quark Matter 2002, Nantes, France, July 18-24, 2002. To appear in the proceedings (Nucl. Phys. A)
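The mixed-event technique used above can be sketched in a few lines (an illustrative sketch: `pair_mass` stands in for the invariant-mass computation of two decay candidates, and all names are assumptions):

```python
from itertools import combinations

def same_event_pairs(events, pair_mass):
    """Invariant-mass entries from candidate pairs within each event:
    signal plus combinatorial background."""
    out = []
    for ev in events:
        out += [pair_mass(a, b) for a, b in combinations(ev, 2)]
    return out

def mixed_event_pairs(events, pair_mass):
    """Combinatorial background only: candidates are paired across
    *different* events, so no physical correlation can contribute."""
    out = []
    for ev1, ev2 in combinations(events, 2):
        out += [pair_mass(a, b) for a in ev1 for b in ev2]
    return out
```

After normalizing the mixed-event distribution to the same-event one away from the resonance region, subtracting it leaves the resonance signals.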
Antiproton Production in 11.5 A GeV/c Au+Pb Nucleus-Nucleus Collisions
We present the first results from the E864 collaboration on the production of
antiprotons in 10% central 11.5 A GeV/c Au+Pb nucleus collisions at the
Brookhaven AGS. We report invariant multiplicities for antiproton production in
the kinematic region 1.4<y<2.2 and 50<p_T<300 MeV/c, and compare our data with
a first collision scaling model and previously published results from the E878
collaboration. The differences between the E864 and E878 antiproton
measurements and the implications for antihyperon production are discussed.
Comment: 4 pages, 4 figures; accepted for publication in Physical Review Letters
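The invariant multiplicities reported in such a (y, p_T) window follow from raw counts by a standard normalization (a sketch of the textbook formula 1/(2 pi p_T) d^2N/(dp_T dy); the numerical bin values in the test are illustrative, not E864's):

```python
from math import pi

def invariant_multiplicity(counts, pt, dpt, dy, n_events, efficiency=1.0):
    """Invariant multiplicity 1/(2 pi pT) d^2N/(dpT dy) for one
    (y, pT) bin, from raw counts corrected for efficiency and
    normalized per event and per bin width."""
    d2n_dpt_dy = counts / (efficiency * n_events * dpt * dy)
    return d2n_dpt_dy / (2.0 * pi * pt)
```

The 1/(2 pi p_T) factor makes the quantity Lorentz-invariant, which is what allows direct comparison with scaling models and with other experiments' spectra.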
