336 research outputs found
Angiographic Evidence of Coronary Embolism and Resolution
This report provides angiographic documentation of an embolus to the left coronary artery, followed by a second angiographic study which recorded disappearance of the embolus. The relationship between coronary embolism and ventricular dysfunction is discussed.
Diffraction dissociation in proton-proton collisions at √s = 0.9 TeV, 2.76 TeV and 7 TeV with ALICE at the LHC
The relative rates of single- and double- diffractive processes were measured
with the ALICE detector by studying properties of gaps in the pseudorapidity
distribution of particles produced in proton-proton collisions at √s =
0.9 TeV, 2.76 TeV and 7 TeV. ALICE triggering efficiencies are determined for
various classes of events, using a detector simulation validated with data on
inclusive particle production. Cross-sections are determined using van der Meer
scans to measure beam properties and obtain a measurement of the luminosity.
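For orientation, the pseudorapidity used to define these gaps is η = −ln tan(θ/2), and a "gap" is simply a large empty interval in the η values of an event's particles. The sketch below is illustrative only (the function names and example values are ours, not ALICE's):

```python
import math

def pseudorapidity(theta: float) -> float:
    """Pseudorapidity eta = -ln tan(theta/2) for polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def largest_gap(etas: list[float]) -> float:
    """Width of the largest empty interval between sorted pseudorapidities."""
    if len(etas) < 2:
        return 0.0
    s = sorted(etas)
    return max(b - a for a, b in zip(s, s[1:]))

# A particle emitted at 90 degrees to the beam sits at eta ~ 0.
print(pseudorapidity(math.pi / 2))            # ~ 0
# Toy event: particles clustered at negative eta, one forward particle.
print(largest_gap([-2.1, -1.8, -1.5, 3.0]))   # 4.5 (between -1.5 and 3.0)
```

Diffractive events are enriched in large values of this gap, which is what makes the gap distribution a useful classifier.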
Unitarity Corrections to the Proton Structure Functions through the Dipole Picture
We study the dipole picture for the description of the deep inelastic
scattering, focusing on the structure functions which are driven directly by
the gluon distribution. One performs estimates using the effective dipole cross
section given by the Glauber-Mueller approach in QCD, which encodes the
corrections due to the unitarity effects associated with the saturation
phenomenon. We also address issues about frame invariance of the calculations
when analysing the observables.
Comment: 16 pages, 8 figures. Version to be published in Phys. Rev.
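For orientation, the Glauber-Mueller eikonalized dipole cross section referred to above can be written schematically as follows (standard notation, not taken verbatim from the paper):

```latex
% Eikonalized (Glauber-Mueller) dipole cross section: multiple-scattering
% corrections unitarize the dipole-target interaction.
\sigma_{\mathrm{dip}}(x, r^2) \;=\; 2 \int d^2 b \,
  \left[ 1 - e^{-\frac{1}{2}\,\Omega(x, r^2; b)} \right],
\qquad
\Omega \;\propto\; \alpha_s \, r^2 \, x G(x, 1/r^2)\, S(b).
```

For small Ω the bracket reduces to single-gluon exchange (color transparency, σ ∝ r²); for large Ω it saturates at 1, the black-disc limit, which is how the unitarity corrections tame the growth of the gluon distribution.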
The survival probability of large rapidity gaps in a three channel model
The values and energy dependence for the survival probability of large rapidity gaps (LRG) are calculated in a three channel model. This
model includes single and double diffractive production, as well as elastic
rescattering. It is shown that the survival probability decreases with increasing
energy, in line with recent results for LRG dijet production at the Tevatron.
This is in spite of the weak dependence on energy of the ratio σ_el/σ_tot.
Comment: 26 pages in LaTeX, 11 figures in EPS
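In the standard one-channel eikonal picture, which the three-channel model generalizes, the survival probability is defined as (illustrative notation, not the paper's):

```latex
% Survival probability of a large rapidity gap: the hard amplitude A_H(b)
% weighted by the probability e^{-Omega(s,b)} of no additional soft interaction.
\langle |S|^2 \rangle \;=\;
\frac{\int d^2 b \, \left| A_H(s,b) \right|^2 \, e^{-\Omega(s,b)}}
     {\int d^2 b \, \left| A_H(s,b) \right|^2 } .
```

Since the soft opacity Ω(s,b) grows with energy, ⟨|S|²⟩ falls; the three-channel model refines this picture by including diffractive intermediate states in the rescattering.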
Dijet Rapidity Gaps in Photoproduction from Perturbative QCD
By defining dijet rapidity gap events according to interjet energy flow, we
treat the photoproduction cross section of two high transverse momentum jets
with a large intermediate rapidity region as a factorizable quantity in
perturbative QCD. We show that logarithms of soft gluon energy in the interjet
region can be resummed to all orders in perturbation theory. The resummed cross
section depends on the eigenvalues of a set of soft anomalous dimension
matrices, specific to each underlying partonic process, and on the
decomposition of the scattering according to the possible patterns of hard
color flow. We present a detailed discussion of both. Finally, we evaluate
numerically the gap cross section and gap fraction and compare the results with
ZEUS data. In the limit of low gap energy, good agreement with experiment is
obtained.
Comment: 37 pages, LaTeX, 17 figures
Verifying linearizability on TSO architectures
Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such modification.
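The TSO behaviour at issue is that each core's stores sit in a FIFO write buffer and become visible to other cores only later, while the core's own subsequent loads proceed. The classic store-buffering litmus test then admits an outcome forbidden under sequential consistency. The following is a minimal deterministic model of that effect (our own sketch, not the paper's formalism; all names are illustrative):

```python
class Core:
    """A core with a FIFO store buffer, modelling the TSO memory model."""
    def __init__(self, memory):
        self.memory = memory
        self.buffer = []                    # pending (address, value) stores

    def store(self, addr, value):
        self.buffer.append((addr, value))   # buffered, not yet globally visible

    def load(self, addr):
        # TSO: a load first snoops this core's own buffer (newest entry first),
        # then falls back to shared memory.
        for a, v in reversed(self.buffer):
            if a == addr:
                return v
        return self.memory[addr]

    def flush(self):
        """Drain the store buffer to shared memory, in FIFO order."""
        for a, v in self.buffer:
            self.memory[a] = v
        self.buffer.clear()

# Store-buffering litmus test:
#   core0: x = 1; r0 = y        core1: y = 1; r1 = x
memory = {"x": 0, "y": 0}
c0, c1 = Core(memory), Core(memory)
c0.store("x", 1); c1.store("y", 1)       # both stores still buffered
r0, r1 = c0.load("y"), c1.load("x")      # both loads read stale shared memory
c0.flush(); c1.flush()
print(r0, r1)   # 0 0 -- forbidden under sequential consistency, allowed on TSO
```

It is precisely because such reorderings can occur between an operation's apparent effect and its global visibility that defining (and verifying) linearizability on TSO requires care.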
BxDF material acquisition, representation, and rendering for VR and design
Photorealistic and physically-based rendering of real-world environments with high fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how the light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics since its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first study demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse engineered cloth models in immersive VR experiences. The second study shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.
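The simplest member of the BSDF family discussed above is the ideal diffuse (Lambertian) BRDF, which is constant in the incoming and outgoing directions. A minimal sketch, purely for orientation (function names and values are ours, not from the course):

```python
import math

def lambertian_brdf(albedo: float) -> float:
    """Ideal diffuse BRDF: constant albedo/pi, independent of directions.
    The 1/pi normalizes so that total reflected energy equals the albedo."""
    return albedo / math.pi

def reflected_radiance(albedo: float, incident_radiance: float,
                       cos_theta_i: float) -> float:
    """Reflection of a single light: L_o = f_r * L_i * cos(theta_i)."""
    return lambertian_brdf(albedo) * incident_radiance * cos_theta_i

# A bright diffuse surface (albedo 0.8) lit head-on by a unit-radiance source:
print(reflected_radiance(0.8, 1.0, 1.0))   # 0.8 / pi, roughly 0.2546
```

Measured, anisotropic, and fabric materials replace this constant with direction- and position-dependent data, which is what makes their acquisition and compact representation the hard problem the course addresses.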
Admit your weakness: Verifying correctness on TSO architectures
The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-15317-9_22
Linearizability has become the standard correctness criterion for fine-grained non-atomic concurrent algorithms; however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence we prove a weaker criterion, quiescent consistency, instead. Like linearizability, quiescent consistency is compositional, making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizability, which makes it necessary to modify the algorithm's high-level requirements. Our approach is the first, to our knowledge, for proving correctness without the need for such a modification.
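For readers unfamiliar with the seqlock named above, its standard protocol pairs an even/odd sequence counter with optimistic, retrying readers. The sketch below shows the protocol shape only (a sequential illustration in our own notation, not the paper's model, and without the memory barriers a real implementation needs):

```python
class Seqlock:
    """Writer increments the sequence before and after updating the data;
    readers retry until they observe the same even sequence on both sides
    of their read, guaranteeing an uninterrupted snapshot."""
    def __init__(self):
        self.sequence = 0      # odd while a write is in progress
        self.data = (0, 0)

    def write(self, value_pair):
        self.sequence += 1     # odd: write in progress
        self.data = value_pair
        self.sequence += 1     # even: write complete

    def read(self):
        while True:
            s1 = self.sequence
            snapshot = self.data
            s2 = self.sequence
            if s1 == s2 and s1 % 2 == 0:   # no writer interfered
                return snapshot

lock = Seqlock()
lock.write((1, 2))
print(lock.read())    # (1, 2)
```

Because readers never block writers and may retry, the seqlock's observable behaviour under weak memory is subtle, which is what motivates proving quiescent consistency rather than full linearizability.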
Energy dependence of gap survival probability and antishadowing
We discuss the energy dependence of the gap survival probability which follows
from the rational form of amplitude unitarization. In contrast to the eikonal
form of unitarization, which leads to a decreasing energy dependence of the gap
survival probability, we predict a non-monotonous form for this dependence.
Comment: 9 pages, 3 figures, revised and extended version
Drell-Yan diffraction: breakdown of QCD factorisation
We consider the diffractive Drell-Yan process in proton-(anti)proton
collisions at high energies in the color dipole approach. The calculations are
performed at forward rapidities of the leptonic pair. Eikonalization of the
universal "bare" dipole-target elastic amplitude in the saturation regime
accounts for the principal part of the gap survival probability. We
present predictions for the total and differential cross sections of the single
diffractive lepton pair production at RHIC and LHC energies. We analyze
implications of the QCD factorisation breakdown in the diffractive Drell-Yan
process, which is caused by a specific interplay of the soft and hard
interactions and results in rather unusual properties of the corresponding
observables.
Comment: 19 pages, 7 figures