58 research outputs found
Neutrino oscillation studies with IceCube-DeepCore
IceCube, a gigaton-scale neutrino detector located at the South Pole, was primarily designed to search for astrophysical neutrinos with energies of PeV and higher. This goal has been achieved with the detection of the highest-energy neutrinos to date. At the other end of the energy spectrum, the DeepCore extension lowers the energy threshold of the detector to approximately 10 GeV and opens the door to oscillation studies using atmospheric neutrinos. An analysis of the disappearance of these neutrinos has been completed, with results complementary to those of dedicated oscillation experiments. Following a review of the detector principle and performance, the analysis method and results are detailed. Finally, the future prospects of IceCube-DeepCore and the next generation of neutrino experiments at the South Pole (IceCube-Gen2, specifically the PINGU sub-detector) are briefly discussed.
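The muon-neutrino disappearance described above is well approximated by the standard two-flavour vacuum-oscillation formula. A minimal sketch follows; the mixing parameter values are illustrative textbook numbers, not IceCube's fitted results:

```python
import math

def survival_probability(l_km: float, e_gev: float,
                         sin2_2theta23: float = 0.99,
                         dm2_ev2: float = 2.5e-3) -> float:
    """Two-flavour nu_mu survival probability P(nu_mu -> nu_mu).

    Standard vacuum formula P = 1 - sin^2(2*theta23) * sin^2(1.27 * dm2 * L / E),
    with L in km, E in GeV, and dm2 in eV^2 (default values are illustrative).
    """
    phase = 1.27 * dm2_ev2 * l_km / e_gev
    return 1.0 - sin2_2theta23 * math.sin(phase) ** 2

# Vertically up-going atmospheric neutrinos cross the Earth (~12700 km);
# near E ~ 25 GeV the oscillation phase is close to maximal, giving
# strong disappearance in the DeepCore energy range.
p = survival_probability(l_km=12700.0, e_gev=25.0)
```

At much higher energies the phase is small and the survival probability returns to ~1, which is why the disappearance signature is confined to the low-energy (DeepCore) part of the spectrum.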
A muon-track reconstruction exploiting stochastic losses for large-scale Cherenkov detectors
IceCube is a cubic-kilometer Cherenkov telescope operating at the South Pole. The main goal of IceCube is the detection of astrophysical neutrinos and the identification of their sources. High-energy muon neutrinos are observed via the secondary muons produced in charged-current interactions with nuclei in the ice. Currently, the best-performing muon-track directional reconstruction is based on a maximum-likelihood method using the arrival-time distribution of Cherenkov photons registered by the experiment's photomultipliers. A known systematic shortcoming of the prevailing method is the assumption of a continuous energy loss along the muon track. However, at energies above 1 TeV the light yield from muons is dominated by stochastic showers. This paper discusses a generalized ansatz in which the expected arrival-time distribution is parametrized by a stochastic muon energy-loss pattern. This more realistic parametrization of the loss profile improves the muon angular resolution by up to 20% for through-going tracks, and by up to a factor of 2 for starting tracks, over existing algorithms. Additionally, the procedure to estimate the directional reconstruction uncertainty has been made more robust against numerical errors.
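A toy illustration of why the loss pattern matters in such a likelihood: the sketch below is a hypothetical Poisson hit-count model (not IceCube's reconstruction), with made-up sensor geometry, light yields, and attenuation length. It shows that hits generated by a single stochastic shower are fit better by a stochastic-loss hypothesis than by the same energy smeared continuously along the track:

```python
import math

def expected_photons(sensor_pos, losses, abs_len=100.0):
    """Expected photoelectrons at a sensor from (position_m, energy) losses,
    with 1/d geometric falloff and exponential attenuation (toy model)."""
    mu = 0.0
    for x, e_dep in losses:
        d = abs(sensor_pos - x) + 1.0  # +1 m regularizes the pole at d = 0
        mu += e_dep * math.exp(-d / abs_len) / d
    return mu

def neg_log_likelihood(observed, sensors, losses):
    """Poisson -log L (up to a constant) of hit counts given a loss pattern."""
    nll = 0.0
    for n_obs, pos in zip(observed, sensors):
        mu = max(expected_photons(pos, losses), 1e-9)
        nll += mu - n_obs * math.log(mu)
    return nll

sensors = [0.0, 50.0, 100.0, 150.0]
stochastic = [(100.0, 1.0)]                # one dominant shower at 100 m
continuous = [(x, 0.25) for x in sensors]  # same energy smeared along track

# Generate hits from the stochastic truth, then compare hypotheses:
observed = [round(expected_photons(p, stochastic)) for p in sensors]
better = (neg_log_likelihood(observed, sensors, stochastic)
          < neg_log_likelihood(observed, sensors, continuous))
```

The continuous hypothesis predicts light at every sensor and so pays a likelihood penalty for the sensors that saw nothing, which is the qualitative effect the generalized ansatz exploits.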
Study of the lineshape of the chi(c1) (3872) state
A study of the lineshape of the chi(c1)(3872) state is made using a data sample corresponding to an integrated luminosity of 3 fb(-1) collected in pp collisions at center-of-mass energies of 7 and 8 TeV with the LHCb detector. Candidate chi(c1)(3872) and psi(2S) mesons from b-hadron decays are selected in the J/psi pi(+)pi(-) decay mode. Describing the lineshape with a Breit-Wigner function, the mass splitting between the chi(c1)(3872) and psi(2S) states, Delta m, and the width of the chi(c1)(3872) state, Gamma(BW), are determined to be Delta m = 185.598 +/- 0.067 +/- 0.068 MeV and Gamma(BW) = 1.39 +/- 0.24 +/- 0.10 MeV, where the first uncertainty is statistical and the second systematic. Using a Flatté-inspired model, the mode and full width at half maximum (FWHM) of the lineshape are determined to be mode = 3871.69 (+0.00/-0.04) (+0.05/-0.13) MeV and FWHM = 0.22 (+0.07/-0.06) (+0.11/-0.13) MeV. An investigation of the analytic structure of the Flatté amplitude reveals a pole structure compatible with a quasibound D0 D̄*0 state, although a quasivirtual state is still allowed at the level of 2 standard deviations.
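The Breit-Wigner description used for the first set of results can be sketched numerically; the defining property is that the full width at half maximum of the intensity equals the width parameter Gamma. The pole mass below is illustrative and the width is the value quoted in the abstract:

```python
def bw_intensity(m, m0, gamma):
    """Nonrelativistic Breit-Wigner intensity, normalized to 1 at the peak:
    |A(m)|^2 = (gamma/2)^2 / ((m - m0)^2 + (gamma/2)^2)."""
    return (gamma / 2.0) ** 2 / ((m - m0) ** 2 + (gamma / 2.0) ** 2)

m0, gamma = 3871.70, 1.39  # MeV; mass illustrative, width from the abstract
peak = bw_intensity(m0, m0, gamma)               # 1.0 at the pole mass
half = bw_intensity(m0 + gamma / 2.0, m0, gamma) # 0.5 -> FWHM = gamma
```

For the Flatté-inspired model this simple identity breaks down — the nearby D0 D̄*0 threshold distorts the shape — which is why the abstract quotes mode and FWHM separately rather than a single Breit-Wigner width.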
Measurement of the CKM angle gamma in B± → DK± and B± → Dπ± decays with D → K0S h+h−
A measurement of CP-violating observables is performed using the decays B± → DK± and B± → Dπ±, where the D meson is reconstructed in one of the self-conjugate three-body final states K0S π+π− and K0S K+K− (commonly denoted K0S h+h−). The decays are analysed in bins of the D-decay phase space, leading to a measurement that is independent of the modelling of the D-decay amplitude. The observables are interpreted in terms of the CKM angle gamma. Using a data sample corresponding to an integrated luminosity of 9 fb−1 collected in proton-proton collisions at centre-of-mass energies of 7, 8, and 13 TeV with the LHCb experiment, gamma is measured to be (68.7 +5.2/−5.1)°. The hadronic parameters r_B(DK±), delta_B(DK±), r_B(Dπ±), and delta_B(Dπ±), which are the ratios and strong-phase differences of the suppressed and favoured B decays, are also reported.
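The interpretation step relates the observables measured in the phase-space bins to the CKM angle gamma through the standard relations x± = r_B cos(delta_B ± gamma) and y± = r_B sin(delta_B ± gamma). A minimal sketch with illustrative (not measured) parameter values:

```python
import math

def cp_observables(gamma_deg, r_b, delta_b_deg):
    """Map (gamma, r_B, delta_B) to the CP observables
    x± = r_B cos(delta_B ± gamma), y± = r_B sin(delta_B ± gamma)."""
    g, d = math.radians(gamma_deg), math.radians(delta_b_deg)
    x_p, y_p = r_b * math.cos(d + g), r_b * math.sin(d + g)
    x_m, y_m = r_b * math.cos(d - g), r_b * math.sin(d - g)
    return x_p, y_p, x_m, y_m

# Illustrative inputs only, not the measured values:
x_p, y_p, x_m, y_m = cp_observables(gamma_deg=70.0, r_b=0.10, delta_b_deg=130.0)
```

By construction each pair satisfies x² + y² = r_B², so the four observables over-constrain the three physics parameters; this redundancy is part of what makes the binned, model-independent extraction of gamma robust.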
Study of the doubly charmed tetraquark T+cc
Quantum chromodynamics, the theory of the strong force, describes interactions of coloured quarks and gluons and the formation of hadronic matter. Conventional hadronic matter consists of baryons and mesons, made of three quarks and quark-antiquark pairs, respectively. Particles with an alternative quark content are known as exotic states. Here a study is reported of an exotic narrow state in the D0D0π+ mass spectrum, just below the D*+D0 mass threshold, produced in proton-proton collisions collected with the LHCb detector at the Large Hadron Collider. The state is consistent with the ground isoscalar T+cc tetraquark with a quark content of ccūd̄ and spin-parity quantum numbers JP = 1+. Study of the DD mass spectra disfavours interpretation of the resonance as the isovector state. The decay structure via intermediate off-shell D*+ mesons is consistent with the observed D0π+ mass distribution. To analyse the mass of the resonance and its coupling to the D*D system, a dedicated model is developed under the assumption of an isoscalar axial-vector T+cc state decaying to the D*D channel. Using this model, resonance parameters including the pole position, scattering length, effective range and compositeness are determined to reveal important information about the nature of the T+cc state. In addition, an unexpected dependence of the production rate on track multiplicity is observed.
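The effective-range language used in the pole analysis can be sketched with the standard near-threshold amplitude f(k) = 1/(−1/a + (r/2)k² − ik): a bound-state pole sits at k = iκ, where κ solves (r/2)κ² − κ + 1/a = 0, and the binding energy follows from κ and the reduced mass. The scattering length and effective range below are toy values, not the measured T+cc parameters:

```python
import math

HBARC = 197.327  # MeV * fm

def binding_momentum(a_fm, r_fm):
    """Binding momentum kappa (fm^-1) of the near-threshold amplitude
    f(k) = 1/(-1/a + (r/2) k^2 - i k): the pole at k = i*kappa solves
    (r/2) kappa^2 - kappa + 1/a = 0 (shallow root taken)."""
    disc = 1.0 - 2.0 * r_fm / a_fm
    if disc < 0:
        raise ValueError("no real bound-state pole for these parameters")
    return (1.0 - math.sqrt(disc)) / r_fm

# Toy scattering parameters (NOT the measured T+cc values):
a, r = 6.0, 1.0                                   # fm
mu = 1864.84 * 2010.26 / (1864.84 + 2010.26)      # D0 / D*+ reduced mass, MeV
kappa = binding_momentum(a, r)                    # fm^-1
e_bind = (HBARC * kappa) ** 2 / (2.0 * mu)        # binding energy, MeV
```

Even with toy inputs of a few fm, the resulting binding energy comes out at the sub-MeV scale, which is why a pole just below the D*+D0 threshold is naturally described in this framework.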
Severe Acute Mountain Sickness and Suspected High Altitude Cerebral Edema Related to Nitroglycerin Use
Lineamenti per una storia della critica della falsificazione epigrafica [Outlines of a history of the criticism of epigraphic forgery]
This article offers the first comprehensive investigation of the history of scholarship related to epigraphic forgeries. Fake inscriptions were already produced in Antiquity and throughout the Middle Ages, but their number began to rise dramatically from the Renaissance onwards. By the mid-1500s, scholars became attentive to the risks of using fake sources for antiquarian purposes, while in the 17th and 18th centuries they started isolating forged or suspect texts within specific sections of their new epigraphic corpora. Tentative sets of criteria for identifying non-genuine inscriptions were first formulated by Scipione Maffei around 1720, but an actual epistemology for epigraphic criticism was only developed by Theodor Mommsen and his collaborators in the mid-1800s. Since then, most corpora and critical editions have, often implicitly, followed their scientific principles. Current scholars should be well aware of these principles, because they can present both considerable rewards and serious shortcomings.
Tin organometallic compounds: classification and analysis of crystallographic and structural data. Part II: dimeric derivatives