14 research outputs found

    Neutrinos below 100 TeV from the southern sky employing refined veto techniques to IceCube data

    Many Galactic sources of gamma rays, such as supernova remnants, are expected to produce neutrinos with a typical energy cutoff well below 100 TeV. For the IceCube Neutrino Observatory located at the South Pole, the southern sky, containing the inner part of the Galactic plane and the Galactic Center, is a particularly challenging region at these energies because of the large background of atmospheric muons. In this paper, we present recent advancements in data selection strategies for track-like muon neutrino events with energies below 100 TeV from the southern sky. The strategies utilize the outer detector regions as a veto and features of the signal pattern to reduce the background of atmospheric muons to a level which, for the first time, allows IceCube to search for point-like sources of neutrinos in the southern sky at energies between 100 GeV and several TeV in the muon neutrino charged-current channel. No significant clustering of neutrinos above the background expectation was observed in four years of data recorded with the completed IceCube detector. Upper limits on the neutrino flux for a number of spectral hypotheses are reported for a list of astrophysical objects in the southern hemisphere.
    Comment: 19 pages, 17 figures, 2 tables
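    The veto idea can be illustrated with a minimal sketch. All geometry, function names, and thresholds below are hypothetical and invented for illustration, not the selection actually used in the paper: atmospheric muons enter from outside, so their earliest light appears in the outer detector shell, whereas a neutrino interacting inside produces its first hits in the inner fiducial volume.

```python
import numpy as np

DETECTOR_RADIUS = 500.0   # m, hypothetical detector size
VETO_SHELL = 100.0        # m, hypothetical outer shell used as veto

def starts_in_fiducial(hit_xyz, hit_t, n_early=3):
    """True if the earliest n_early hits all lie inside the fiducial volume."""
    order = np.argsort(hit_t)
    early = hit_xyz[order[:n_early]]
    r = np.linalg.norm(early, axis=1)
    return bool(np.all(r < DETECTOR_RADIUS - VETO_SHELL))

# Toy events: a "starting" event with early hits near the center, and a
# through-going muon whose first hits are in the veto shell.
starting = np.array([[0, 0, 0], [50, 0, 0], [100, 0, 0], [450, 0, 0]], float)
through  = np.array([[480, 0, 0], [300, 0, 0], [100, 0, 0], [0, 0, 0]], float)
times = np.array([0.0, 1.0, 2.0, 3.0])

print(starts_in_fiducial(starting, times))  # True  -> kept as signal-like
print(starts_in_fiducial(through, times))   # False -> vetoed
```

    The real selection is far more involved (it also exploits the signal pattern along the track), but the geometric veto above captures the basic asymmetry between entering and starting events.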

    A muon-track reconstruction exploiting stochastic losses for large-scale Cherenkov detectors

    IceCube is a cubic-kilometer Cherenkov telescope operating at the South Pole. The main goal of IceCube is the detection of astrophysical neutrinos and the identification of their sources. High-energy muon neutrinos are observed via the secondary muons produced in charged-current interactions with nuclei in the ice. Currently, the best-performing muon track directional reconstruction is based on a maximum likelihood method using the arrival time distribution of Cherenkov photons registered by the experiment's photomultipliers. A known systematic shortcoming of the prevailing method is the assumption of a continuous energy loss along the muon track. However, at energies >1 TeV the light yield from muons is dominated by stochastic showers. This paper discusses a generalized ansatz in which the expected arrival time distribution is parametrized by a stochastic muon energy loss pattern. This more realistic parametrization of the loss profile leads to an improvement of the muon angular resolution of up to 20% for through-going tracks and up to a factor of 2 for starting tracks over existing algorithms. Additionally, the procedure to estimate the directional reconstruction uncertainty has been improved to be more robust against numerical errors.
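    The core point, that a stochastic loss pattern fits the observed light better than a continuous one, can be shown with a toy Poisson likelihood. Everything here (the response matrix, segment count, loss values) is invented for illustration and is not the paper's reconstruction:

```python
import numpy as np

n_seg, n_sensor = 5, 10
# Hypothetical light-yield response: each sensor mainly sees one track
# segment, with a small cross-talk term to all other segments.
A = np.full((n_sensor, n_seg), 0.1)
for i in range(n_sensor):
    A[i, i % n_seg] = 1.0

true_losses = np.array([1.0, 1.0, 8.0, 1.0, 1.0])  # one large stochastic shower
counts = np.round(A @ true_losses)                  # idealized observed photon counts

def nll(losses):
    """Poisson negative log-likelihood (up to a constant) of a loss pattern."""
    lam = A @ losses
    return float(np.sum(lam - counts * np.log(lam)))

# Continuous-loss hypothesis: same total energy, spread uniformly.
uniform = np.full(n_seg, true_losses.sum() / n_seg)
print(nll(true_losses) < nll(uniform))  # True: the stochastic profile fits better
```

    In the actual reconstruction the likelihood is over photon arrival times rather than bare counts, but the mechanism is the same: letting the fit place energy losses where the light actually appeared sharpens the directional likelihood.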

    LeptonInjector and LeptonWeighter: A neutrino event generator and weighter for neutrino observatories

    We present a high-energy neutrino event generator, called LeptonInjector, alongside an event weighter, called LeptonWeighter. Both are designed for large-volume Cherenkov neutrino telescopes such as IceCube. The neutrino event generator allows for quick and flexible simulation of neutrino events within and around the detector volume, and implements the leading Standard Model neutrino interaction processes relevant for neutrino observatories: neutrino-nucleon deep-inelastic scattering and neutrino-electron annihilation. In this paper, we discuss the event generation algorithm, the weighting algorithm, and the main functions of the publicly available code, with examples.
    Comment: 28 pages, 10 figures, 3 tables
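    The generate-then-reweight idea can be sketched in a few lines. This is a generic importance-sampling illustration, not the actual LeptonInjector/LeptonWeighter API: events are drawn from a convenient generation spectrum, and each event is later reweighted to a target flux by the ratio of target to generation densities.

```python
import numpy as np

rng = np.random.default_rng(42)
E_MIN, E_MAX = 1e2, 1e8   # GeV, hypothetical generation range
N_GEN = 100_000

# Generate energies from an E^-1 spectrum (uniform in log E).
energies = np.exp(rng.uniform(np.log(E_MIN), np.log(E_MAX), N_GEN))

def gen_pdf(e):
    """Generation density of the E^-1 spectrum on [E_MIN, E_MAX]."""
    return 1.0 / (e * np.log(E_MAX / E_MIN))

def target_flux(e, norm=1.0, gamma=2.0):
    """Hypothetical astrophysical power-law flux ~ E^-gamma."""
    return norm * e ** (-gamma)

# Per-event weight: target flux over generation density (a real analysis
# would also fold in livetime, effective area, interaction probability...).
weights = target_flux(energies) / (gen_pdf(energies) * N_GEN)

# The weighted sum estimates the integral of the target flux over energy.
estimate = weights.sum()
exact = 1.0 / E_MIN - 1.0 / E_MAX  # analytic integral of E^-2
print(abs(estimate / exact - 1) < 0.05)
```

    The benefit of separating generation from weighting is that one simulated dataset can serve many physics hypotheses: changing the target flux only changes the weights, not the events.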

    Unifying supervised learning and VAEs -- automating statistical inference in high-energy physics

    A KL-divergence objective on the joint distribution of data and labels unifies supervised learning, variational autoencoders (VAEs) and semi-supervised learning under one umbrella of variational inference. This viewpoint has several advantages. For VAEs, it clarifies the interpretation of the encoder and decoder parts. For supervised learning, it reiterates that the training procedure approximates the true posterior over labels and can always be viewed as approximate likelihood-free inference. This is typically not discussed, even though the derivation is well known in the literature. In the context of semi-supervised learning, it motivates an extended supervised scheme which makes it possible to calculate a goodness-of-fit p-value using posterior predictive simulations. Flow-based networks with a standard normal base distribution are crucial. We discuss how they allow coverage to be rigorously defined for arbitrary joint posteriors on $\mathbb{R}^n \times \mathcal{S}^m$, which encompasses posteriors over directions. Finally, systematic uncertainties are naturally included in the variational viewpoint. With the three ingredients of (1) systematics, (2) coverage and (3) goodness-of-fit, flow-based neural networks have the potential to replace a large part of the statistical toolbox of the contemporary high-energy physicist.
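    The unification can be made concrete with the chain rule for the KL divergence. The decomposition below is a sketch consistent with the abstract (the notation is mine, not necessarily the paper's): minimizing the KL divergence of the joint splits into a marginal term, addressed by the generative/VAE part, and a conditional term, which is the usual supervised objective.

```latex
\mathrm{KL}\big(p(x,y)\,\|\,q_\theta(x,y)\big)
  = \underbrace{\mathrm{KL}\big(p(x)\,\|\,q_\theta(x)\big)}_{\text{generative / VAE term}}
  + \underbrace{\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(p(y\,|\,x)\,\|\,q_\theta(y\,|\,x)\big)\right]}_{\text{supervised term}}
```

    Dropping the first term recovers plain supervised training; dropping the second recovers an unsupervised generative objective; keeping both yields the semi-supervised scheme.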

    Twenty years of P-splines

    P-splines first appeared in the limelight twenty years ago. Since then they have become popular in applications and in theoretical work. The combination of a rich B-spline basis and a simple difference penalty lends itself well to a variety of generalizations, because it is based on regression. In effect, P-splines allow the building of a “backbone” for the “mixing and matching” of a variety of additive smooth structure components, while inviting all sorts of extensions: varying-coefficient effects, signal (functional) regressors, two-dimensional surfaces, non-normal responses, quantile (expectile) modelling, among others. Strong connections with mixed models and Bayesian analysis have been established. We give an overview of many of the central developments during the first two decades of P-splines.
    Peer reviewed
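    The core recipe, a rich B-spline basis combined with a simple difference penalty on the coefficients, can be sketched in a few lines of NumPy. The knot layout, basis size, and smoothing parameter below are illustrative choices, not prescriptions from the review:

```python
import numpy as np

def bspline_basis(x, n_bases=12, degree=3, xmin=0.0, xmax=1.0):
    """Evaluate an equally spaced B-spline basis via the Cox-de Boor recursion."""
    dx = (xmax - xmin) / (n_bases - degree)
    knots = xmin + (np.arange(n_bases + degree + 1) - degree) * dx
    # Degree-0 basis: indicator functions of the knot intervals.
    B = ((x[:, None] >= knots[None, :-1]) & (x[:, None] < knots[None, 1:])).astype(float)
    for d in range(1, degree + 1):
        left = (x[:, None] - knots[None, :-(d + 1)]) / (knots[d:-1] - knots[:-(d + 1)])
        right = (knots[None, d + 1:] - x[:, None]) / (knots[d + 1:] - knots[1:-d])
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

# Penalized least squares: minimize ||y - B beta||^2 + lam * ||D2 beta||^2,
# where D2 is the second-order difference matrix acting on the coefficients.
rng = np.random.default_rng(0)
x = np.linspace(0.05, 0.95, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

B = bspline_basis(x)
D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 1.0  # smoothing parameter, normally chosen by AIC or cross-validation
beta = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
fitted = B @ beta
print(B.shape)  # (200, 12): one column per B-spline basis function
```

    The appeal noted in the abstract is visible here: because the fit is ordinary (penalized) regression, swapping in a different response distribution, a tensor-product basis for surfaces, or extra additive terms only changes the design matrix, not the machinery.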

    Determining the Neutrino Mass Hierarchy with the Precision IceCube Next Generation Upgrade (PINGU)

    This thesis describes the development of a fast effective simulation for the planned PINGU experiment at the geographic South Pole, which will make a precision measurement of the atmospheric neutrino flux at low GeV energies. In this flux, the effects of neutrino oscillations in the matter potential of the Earth are visible, and PINGU will observe them with unprecedented precision. Using this simulation, PINGU’s expected precision in determining the relevant neutrino oscillation parameters and the neutrino mass hierarchy is calculated, incorporating a variety of parameters covering systematic uncertainties in the experimental outcome. The analysis is done in the framework of the Fisher matrix technique, whose application to a particle physics experiment is novel. It allows for a fast and stable evaluation of the multi-dimensional parameter space and an easy combination of different experiments.
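    The Fisher matrix machinery can be sketched generically for a binned Poisson measurement. The toy oscillation-like model, binning, and fiducial parameters below are invented for illustration and are unrelated to PINGU's actual simulation; the formula is the standard one, F_ab = sum_i (dmu_i/dtheta_a)(dmu_i/dtheta_b) / mu_i, with expected 1-sigma uncertainties given by sqrt(diag(F^-1)).

```python
import numpy as np

def expected_counts(theta, x):
    """Toy model: a normalization times an oscillation-like survival factor."""
    norm, freq = theta
    return norm * (1.0 - 0.5 * np.sin(freq * x) ** 2) * 100.0

def fisher_matrix(theta, x, eps=1e-6):
    """Fisher matrix for independent Poisson bins, via central differences."""
    mu = expected_counts(theta, x)
    grads = []
    for a in range(len(theta)):
        step = np.zeros(len(theta))
        step[a] = eps
        grads.append((expected_counts(theta + step, x)
                      - expected_counts(theta - step, x)) / (2 * eps))
    g = np.array(grads)                       # shape: (n_params, n_bins)
    return g @ np.diag(1.0 / mu) @ g.T

x = np.linspace(0.1, 3.0, 30)     # e.g. L/E bins, arbitrary units
theta0 = np.array([1.0, 1.5])     # fiducial (normalization, frequency)
F = fisher_matrix(theta0, x)
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))
print(sigmas.shape)  # (2,)
```

    The two advantages named in the text follow directly: the matrix is obtained from derivatives at a single fiducial point (fast and numerically stable, no scan of the parameter space), and Fisher matrices of independent experiments simply add, so combining experiments amounts to summing their matrices before inversion.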