    From PeV to TeV: Astrophysical Neutrinos with Contained Vertices in 10 years of IceCube Data

    The IceCube Neutrino Observatory is a cubic-kilometer Cherenkov detector at the South Pole, designed to study neutrinos of astrophysical origin. We present an analysis of the Medium Energy Starting Events (MESE) sample, a veto-based event selection that selects neutrinos and efficiently rejects the background of cosmic-ray-induced muons. This is an extension of the High Energy Starting Event (HESE) analysis, which established the existence of high-energy neutrinos of astrophysical origin. The HESE sample is consistent with a single power-law spectrum with best-fit index 2.87^{+0.20}_{-0.19}, which is softer than complementary IceCube measurements of the astrophysical neutrino spectrum. While HESE is sensitive to neutrinos above 60 TeV, MESE improves the sensitivity to lower energies, down to 1 TeV. In this analysis we use an improved understanding of atmospheric backgrounds in the astrophysical neutrino sample via more accurate modeling of the detector self-veto. A previous measurement with a 2-year MESE dataset had indicated a possible 30 TeV excess; with 10 years of data, we have a larger sample size with which to investigate it. We will use this event selection to measure the cosmic neutrino energy spectrum over a wide energy range. The flavor ratio of astrophysical neutrinos will also be discussed.
    Comment: Presented at the 38th International Cosmic Ray Conference (ICRC2023). See arXiv:2307.13047 for all IceCube contributions.
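
    As an illustration of the single power-law hypothesis fitted in these analyses, the sketch below evaluates a conventional IceCube-style per-flavor parameterization, Phi(E) = Phi_0 * (E / 100 TeV)^(-gamma). The index gamma = 2.87 is the HESE best fit quoted above; the normalization value and the helper names are placeholder assumptions for illustration only.

    ```python
    import numpy as np

    def astro_flux(E_GeV, norm=1.0e-18, gamma=2.87, E0_GeV=1.0e5):
        """Single power-law astrophysical neutrino flux (illustrative).

        Phi(E) = norm * (E / E0)**(-gamma), with E0 = 100 TeV and norm in
        GeV^-1 cm^-2 s^-1 sr^-1. The normalization is a placeholder;
        gamma = 2.87 is the HESE best-fit index quoted in the abstract.
        """
        return norm * (E_GeV / E0_GeV) ** (-gamma)

    # Evaluate across the 1 TeV - 10 PeV range that MESE extends into.
    for E in np.logspace(3, 7, 5):  # GeV
        print(f"E = {E:12.0f} GeV  ->  Phi = {astro_flux(E):.3e} "
              "GeV^-1 cm^-2 s^-1 sr^-1")
    ```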

    NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking

    The field of neuromorphic computing holds great promise for advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. The goal of NeuroBench is to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.

    NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

    Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and a systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we present initial performance baselines across various model architectures on the algorithm track and outline the system track benchmark tasks and guidelines. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
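
    As a hypothetical illustration of the kind of hardware-independent measurement the algorithm track standardizes (this is not the actual NeuroBench API), the sketch below computes two metrics commonly discussed for neuromorphic workloads, activation sparsity and effective synaptic operations, for a toy spiking layer. All function names, shapes, and values are assumptions made for the example.

    ```python
    import numpy as np

    def algorithm_track_metrics(spikes, weights):
        """Toy hardware-independent metrics for one spiking layer.

        spikes:  (timesteps, n_in) binary spike raster.
        weights: (n_in, n_out) synaptic weight matrix.
        Only spiking inputs crossing nonzero weights count as work,
        mimicking the event-driven accounting a neuromorphic
        benchmark would standardize.
        """
        sparsity = 1.0 - spikes.mean()
        fanout = (weights != 0).sum(axis=1)     # nonzero synapses per input
        syn_ops = int((spikes @ fanout).sum())  # spike-triggered operations
        return {"activation_sparsity": sparsity,
                "effective_syn_ops": syn_ops}

    rng = np.random.default_rng(0)
    spikes = (rng.random((100, 64)) < 0.05).astype(float)  # 5% firing rate
    weights = rng.normal(size=(64, 10)) * (rng.random((64, 10)) < 0.5)
    print(algorithm_track_metrics(spikes, weights))
    ```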

    Studies of a muon-based mass sensitive parameter for the IceTop surface array


    Measuring the Neutrino Cross Section Using 8 years of Upgoing Muon Neutrinos Observed with IceCube

    The IceCube Neutrino Observatory detects neutrinos at energies orders of magnitude higher than those available to current accelerators. Above 40 TeV, neutrinos traveling through the Earth are absorbed as they undergo charged-current interactions with nuclei, creating a deficit of Earth-crossing neutrinos detected at IceCube. A previously published result, based on 1 year of IceCube data, showed the cross section to be consistent with Standard Model predictions. We present a new analysis that uses 8 years of IceCube data to fit the ν_μ absorption in the Earth, with statistics an order of magnitude better than previous analyses and with an improved treatment of systematic uncertainties. It will measure the cross section in three energy bins spanning the range from 1 TeV to 100 PeV. We will present Monte Carlo studies that demonstrate its sensitivity.
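
    A minimal sketch of the absorption effect this analysis exploits: the survival probability of a neutrino crossing a column depth X of matter is P = exp(-sigma(E) * N_A * X), so a fitted deficit of upgoing events constrains sigma(E). The power-law cross section below is a rough placeholder assumption for illustration, not the Standard Model prediction used in the real fit.

    ```python
    import numpy as np

    N_A = 6.022e23   # nucleons per gram (Avogadro's number)
    X_CORE = 1.1e10  # g/cm^2, approx. column depth through Earth's center

    def sigma_cc(E_TeV, sigma0=1.0e-35, index=0.5):
        """Toy charged-current cross section in cm^2.

        Rough power law (sigma0 at 1 TeV, rising as E**index); placeholder
        values, not the detailed prediction used in the analysis.
        """
        return sigma0 * E_TeV ** index

    def survival_probability(E_TeV, column_depth=X_CORE):
        """P(survive) = exp(-sigma(E) * N_A * X) for a given column depth."""
        return np.exp(-sigma_cc(E_TeV) * N_A * column_depth)

    # The deficit grows with energy, which is what lets the fit extract
    # sigma(E) from the zenith and energy distributions of events.
    for E in [1, 10, 40, 100, 1000]:  # TeV
        print(f"E = {E:5d} TeV  ->  core-crossing survival = "
              f"{survival_probability(E):.3f}")
    ```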

    Observation of Cosmic Ray Anisotropy with Nine Years of IceCube Data


    The Acoustic Module for the IceCube Upgrade


    A Combined Fit of the Diffuse Neutrino Spectrum using IceCube Muon Tracks and Cascades


    Non-standard neutrino interactions in IceCube

    Non-standard neutrino interactions (NSI) may arise in various types of new physics. Their existence would change the potential that atmospheric neutrinos encounter when traversing Earth matter and hence alter their oscillation behavior. This imprint on coherent neutrino forward scattering can be probed using high-statistics neutrino experiments such as IceCube and its low-energy extension, DeepCore. Both provide extensive data samples that include all neutrino flavors, with oscillation baselines between tens of kilometers and the diameter of the Earth. DeepCore event energies range from a few GeV up to the order of 100 GeV, which marks the lower threshold of the higher-energy IceCube atmospheric samples, extending up to 10 TeV. In the DeepCore data, the large sample size and energy range allow us to consider not only flavor-violating and flavor-nonuniversal NSI in the μ−τ sector, but also those involving electron flavor. The effective parameterization used in our analyses is independent of the underlying model and of the new-physics mass scale. In this way, competitive limits on several NSI parameters have been set in the past. The 8 years of data now available result in significantly improved sensitivities. This improvement stems not only from the increase in statistics but also from substantial improvements in the treatment of systematic uncertainties, background rejection, and event reconstruction.
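
    To make the matter-potential modification concrete: in the effective parameterization, NSI add terms ε_αβ to the standard charged-current potential, so the flavor-basis Hamiltonian becomes H = H_vac + V_CC (diag(1,0,0) + ε). The sketch below is a minimal constant-density example with a single real ε_μτ entry; the mixing angles and mass splittings are standard values, while the mean density and the chosen ε are illustrative inputs (the real analysis uses a full Earth density profile).

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Standard oscillation inputs (PDG-like values), CP phase set to zero.
    theta23, theta13, theta12 = 0.85, 0.15, 0.59  # radians
    dm21, dm31 = 7.4e-5, 2.5e-3                   # eV^2
    KM_PER_INV_EV = 1.0 / 5.0677e9                # hbar*c: 1 eV^-1 in km

    def pmns():
        """3-flavor PMNS mixing matrix."""
        s12, c12 = np.sin(theta12), np.cos(theta12)
        s13, c13 = np.sin(theta13), np.cos(theta13)
        s23, c23 = np.sin(theta23), np.cos(theta23)
        U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
        U13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
        U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
        return U23 @ U13 @ U12

    def p_mumu(E_GeV, L_km, rho=5.5, eps_mutau=0.0):
        """P(nu_mu -> nu_mu) in constant-density matter with one NSI term.

        Flavor-basis Hamiltonian in eV: H = H_vac + V_CC*(diag(1,0,0)+eps);
        rho is a crude mean Earth density in g/cm^3, Y_e = 0.5 assumed.
        """
        U = pmns()
        E_eV = E_GeV * 1e9
        h_vac = U @ np.diag([0.0, dm21, dm31]) @ U.conj().T / (2 * E_eV)
        v_cc = 7.63e-14 * 0.5 * rho  # standard MSW potential in eV
        eps = np.zeros((3, 3))
        eps[1, 2] = eps[2, 1] = eps_mutau
        H = h_vac + v_cc * (np.diag([1.0, 0.0, 0.0]) + eps)
        amp = expm(-1j * H * L_km / KM_PER_INV_EV)
        return abs(amp[1, 1]) ** 2  # flavor order (e, mu, tau)

    L = 12742.0  # km: Earth-diameter (core-crossing) baseline
    for eps in [0.0, 0.05]:
        print(f"eps_mutau = {eps}: P(numu->numu, 25 GeV) = "
              f"{p_mumu(25.0, L, eps_mutau=eps):.3f}")
    ```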