Determinants of outcome in operatively and non-operatively treated Weber-B ankle fractures
Introduction: Treatment of ankle fractures is often based on fracture type and the surgeon's individual judgment. Literature concerning treatment options and outcomes is dated and frequently contradictory. The aim of this study was to determine the clinical and functional outcome after AO-Weber B-type ankle fractures in operatively and conservatively treated patients and to determine which factors influenced outcome. Patients and methods: A retrospective cohort study in patients with an AO-Weber B-type ankle fracture. Patient, fracture, and treatment characteristics were recorded. Clinical and functional outcome was measured using the Olerud-Molander Ankle Score (OMAS), the American Orthopaedic Foot and Ankle Society ankle-hindfoot score (AOFAS), and a Visual Analog Score (VAS) for overall satisfaction (range 0-10). Results: Eighty-two patients were treated conservatively and 103 underwent operative treatment. The majority of patients were female. Most conservatively treated fractures were AO-Weber B1.1 type fractures. Fractures with fibular displacement (mainly AO type B1.2 and Lauge-Hansen type SER-4) were predominantly treated operatively. The outcome scores in the non-operative group were OMAS 93, AOFAS 98, and VAS 8. Outcome in this group was independently and negatively affected by age, affected side, BMI, fibular displacement, and duration of plaster immobilization. In the surgically treated group, the OMAS, AOFAS, and VAS scores were 90, 97, and 8, respectively, with outcome negatively influenced by duration of plaster immobilization. Conclusion: Treatment selection based on stability and the surgeon's judgment led to overall good clinical outcomes in both treatment groups. Reducing the cast immobilization period may further improve outcomes.
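The independent determinants quoted above are the kind of result typically obtained from a multivariable regression of the outcome score on patient and treatment variables. A minimal sketch with synthetic data and hypothetical column names (not the study's dataset):

```python
# Minimal sketch of a multivariable regression for outcome determinants,
# using synthetic data and invented column names (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 82  # size of the conservatively treated group quoted in the abstract
df = pd.DataFrame({
    "age": rng.uniform(18, 80, n),
    "bmi": rng.uniform(18, 40, n),
    "fibular_displacement_mm": rng.uniform(0, 5, n),
    "cast_weeks": rng.integers(4, 9, n),
})
# Synthetic OMAS outcome: worse with age, BMI, displacement, immobilization.
df["omas"] = (100 - 0.2 * df.age - 0.5 * df.bmi
              - 2.0 * df.fibular_displacement_mm
              - 1.5 * df.cast_weeks + rng.normal(0, 5, n))

X = sm.add_constant(df[["age", "bmi", "fibular_displacement_mm", "cast_weeks"]])
model = sm.OLS(df["omas"], X).fit()
print(model.summary())  # coefficients indicate independent effects on OMAS
```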
Combining Maximum-Likelihood with Deep Learning for Event Reconstruction in IceCube
The field of deep learning has become increasingly important for particle physics experiments, yielding a multitude of advances, predominantly in event classification and reconstruction tasks. Many of these applications have been adopted from other domains. However, data in the field of physics are unique in the context of machine learning, insofar as their generation process and the laws and symmetries they obey are usually well understood. Most commonly used deep learning architectures fail to exploit this available information. In contrast, more traditional likelihood-based methods are capable of exploiting domain knowledge, but they are often limited by computational complexity.
In this contribution, a hybrid approach is presented that utilizes generative neural networks to approximate the likelihood, which may then be used in a traditional maximum-likelihood setting. Domain knowledge, such as invariances and detector characteristics, can easily be incorporated in this approach. The hybrid approach is illustrated by the example of event reconstruction in IceCube.
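As a rough illustration of the idea (not the IceCube implementation), the sketch below replaces the generative network with a toy charge-expectation function and feeds its prediction into a per-sensor Poisson likelihood that is maximized over the event hypothesis:

```python
# Sketch of the hybrid idea: a generative model predicts per-sensor expected
# charges for an event hypothesis, and standard maximum-likelihood machinery
# fits the hypothesis parameters. The "network" and geometry here are toys.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

sensor_pos = np.random.default_rng(1).uniform(-500, 500, size=(50, 3))  # toy geometry

def expected_charge(params, pos=sensor_pos):
    """Stand-in for the generative neural network: expected charge per sensor
    given hypothesis parameters (x, y, z, log-energy)."""
    x, y, z, log_e = params
    r = np.linalg.norm(pos - np.array([x, y, z]), axis=1)
    return 10**log_e * np.exp(-r / 100.0) / (r + 10.0)

def neg_log_likelihood(params, observed):
    mu = expected_charge(params)
    return -poisson.logpmf(observed, mu).sum()

true = np.array([50.0, -20.0, 100.0, 3.0])
observed = np.random.default_rng(2).poisson(expected_charge(true))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0, 2.5],
               args=(observed,), method="Nelder-Mead")
print(fit.x)  # reconstructed vertex position and log-energy
```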
New Flux Limits in the Low Relativistic Regime for Magnetic Monopoles at IceCube
Magnetic monopoles are hypothetical particles that carry magnetic charge. Depending on their velocity, different light production mechanisms exist to facilitate detection. In this work, a previously unused light production mechanism, luminescence of ice, is introduced. This light production mechanism is nearly independent of the velocity of the incident magnetic monopole and becomes the only viable light production mechanism in the low relativistic regime (0.1c-0.55c). An analysis searching for magnetic monopoles in the low relativistic regime in seven years of IceCube data is presented. While no magnetic monopole detection can be claimed, a new flux limit in the low relativistic regime is presented, improving on the previous best flux limit by two orders of magnitude.
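For context, a flux upper limit from a non-detection is obtained by dividing the upper limit on the number of signal events by the exposure (effective area, solid angle, and livetime). The numbers below are placeholders, not the values of this analysis:

```python
# Rough sketch of how a flux upper limit follows from a non-detection.
import numpy as np

n_90 = 2.44                      # Feldman-Cousins 90% CL upper limit for 0 observed, 0 background
a_eff = 1e10                     # effective area in cm^2 (placeholder)
omega = 4 * np.pi                # solid angle in sr (isotropic flux assumption)
livetime = 7 * 365.25 * 86400.0  # seven years in seconds

flux_limit = n_90 / (a_eff * omega * livetime)
print(f"90% CL flux limit: {flux_limit:.2e} cm^-2 s^-1 sr^-1")
```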
A Search for Neutrinos from Decaying Dark Matter in Galaxy Clusters and Galaxies with IceCube
The observed dark matter abundance in the Universe can be explained with non-thermal, heavy dark matter models. In order for dark matter to still be present today, its lifetime has to far exceed the age of the Universe. In these scenarios, dark matter decay can produce highly energetic neutrinos, along with other Standard Model particles. The IceCube Neutrino Observatory, located at the geographic South Pole, is to date the world's largest neutrino telescope. In 2013, the IceCube collaboration reported the first observation of high-energy astrophysical neutrinos. Since then, IceCube has collected a large amount of astrophysical neutrino data with energies up to tens of PeV, allowing us to probe heavy dark matter models with neutrinos. We search the IceCube data for neutrinos from decaying dark matter in galaxy clusters and galaxies. The targeted dark matter masses range from 10 TeV to 10 PeV. In this contribution, we present the method and sensitivities of the analysis.
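The signal expectation in such a search scales with the line-of-sight integral of the dark matter density toward the target (the D-factor) and inversely with the dark matter mass and lifetime. A hedged sketch with an invented decay spectrum and placeholder D-factor, mass, and lifetime values:

```python
# Sketch of the expected neutrino flux from decaying dark matter in a target,
# dPhi/dE = D / (4*pi * m_DM * tau) * dN/dE, with a toy decay spectrum.
import numpy as np

def flux_per_energy(e_nu, m_dm, tau, d_factor, dnde):
    """Differential neutrino flux [GeV^-1 cm^-2 s^-1 sr^-1] from DM decay."""
    return d_factor / (4 * np.pi * m_dm * tau) * dnde(e_nu, m_dm)

def toy_spectrum(e_nu, m_dm):
    """Toy dN/dE: flat up to m_DM/2 (the real spectrum depends on the decay channel)."""
    return np.where(e_nu <= m_dm / 2, 2.0 / m_dm, 0.0)

e = np.logspace(3, 7, 5)                      # neutrino energies in GeV
phi = flux_per_energy(e, m_dm=1e6, tau=1e28,  # 1 PeV mass, 1e28 s lifetime (placeholders)
                      d_factor=1e22,          # D-factor in GeV cm^-2 sr^-1 (placeholder)
                      dnde=toy_spectrum)
print(phi)
```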
A Combined Fit of the Diffuse Neutrino Spectrum using IceCube Muon Tracks and Cascades
The IceCube Neutrino Observatory first observed a diffuse flux of high-energy astrophysical neutrinos in 2013. Since then, this observation has been confirmed in multiple detection channels, such as high-energy starting events, cascades, and through-going muon tracks. Combining these event selections into a high-statistics global fit of 10 years of IceCube's neutrino data could strongly improve the understanding of the diffuse astrophysical neutrino flux, challenging or confirming the simple unbroken power-law flux model as well as the astrophysical neutrino flux composition. One key component of such a combined analysis is the consistent modelling of systematic uncertainties across different event selections. This can be achieved using the novel SnowStorm Monte Carlo method, which allows constraints to be placed on multiple systematic parameters from a single simulation set. We will report on the status of a new combined analysis of through-going muon tracks and cascades. It is based on a consistent all-flavor neutrino signal and background simulation using, for the first time, the SnowStorm method to analyze IceCube's high-energy neutrino data. Estimated sensitivities for the energy spectrum of the diffuse astrophysical neutrino flux will be shown.
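The structure of such a combined fit can be illustrated with a toy binned forward-folding likelihood in which tracks and cascades share the flux parameters and a common, Gaussian-constrained systematic parameter; all templates, acceptances, and constraints below are invented for illustration:

```python
# Toy combined binned fit: tracks and cascades share the astrophysical flux
# parameters (normalization, spectral index) and one systematic parameter.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, norm as gauss

rng = np.random.default_rng(3)
e_bins = np.logspace(4, 7, 11)                 # 10 energy bins, 10 TeV - 10 PeV
e_center = np.sqrt(e_bins[:-1] * e_bins[1:])

def expectation(phi0, gamma, eps, acceptance):
    """Toy per-bin expectation: a power law folded with a sample acceptance,
    scaled by a common systematic parameter eps (e.g. detector efficiency)."""
    return (1.0 + eps) * phi0 * acceptance * (e_center / 1e5) ** (-gamma)

acc_tracks, acc_cascades = 200.0, 80.0         # invented per-sample acceptances
obs_tracks = rng.poisson(expectation(1.0, 2.5, 0.0, acc_tracks))
obs_cascades = rng.poisson(expectation(1.0, 2.5, 0.0, acc_cascades))

def neg_log_likelihood(p):
    phi0, gamma, eps = p
    if phi0 <= 0 or eps <= -1:                 # keep expectations positive
        return np.inf
    nll = -poisson.logpmf(obs_tracks, expectation(phi0, gamma, eps, acc_tracks)).sum()
    nll -= poisson.logpmf(obs_cascades, expectation(phi0, gamma, eps, acc_cascades)).sum()
    nll -= gauss.logpdf(eps, 0.0, 0.1)         # Gaussian constraint on the systematic
    return nll

fit = minimize(neg_log_likelihood, x0=[1.2, 2.3, 0.0], method="Nelder-Mead")
print(fit.x)   # best-fit normalization, spectral index, and systematic shift
```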
Density of GeV Muons Measured with IceTop
We present a measurement of the density of GeV muons in near-vertical air showers using three years of data recorded by the IceTop array at the South Pole. We derive the muon densities as functions of energy at reference distances of 600 m and 800 m for primary energies between 2.5 PeV and 40 PeV and between 9 PeV and 120 PeV, respectively, at an atmospheric depth of about 690 g/cm². The measurements are consistent with the predicted muon densities obtained from Sibyll 2.1 assuming any physically reasonable cosmic ray flux model. However, comparison to the post-LHC models QGSJet-II.04 and EPOS-LHC shows that these models yield a higher muon density than Sibyll 2.1 and are in tension with the experimental data for air shower energies between 2.5 PeV and 120 PeV.
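The reported densities are values of a fitted muon lateral distribution evaluated at the reference distances. A toy sketch of that step, using a simple power-law lateral distribution rather than IceTop's actual parametrization, with invented data points:

```python
# Fit a toy muon lateral distribution and read off the density at 600 m,
# the quantity compared to hadronic-model predictions.
import numpy as np
from scipy.optimize import curve_fit

def muon_ldf(r, rho_600, beta):
    """Toy power-law lateral distribution, normalized at 600 m."""
    return rho_600 * (r / 600.0) ** (-beta)

rng = np.random.default_rng(4)
r = np.array([300.0, 450.0, 600.0, 750.0, 900.0])          # core distances in m
true_density = muon_ldf(r, 0.25, 2.0)
measured = rng.normal(true_density, 0.05 * true_density)   # toy measurements

(rho_600, beta), _ = curve_fit(muon_ldf, r, measured, p0=[0.2, 2.0])
print(rho_600, beta)   # density at the 600 m reference distance and LDF slope
```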
Design, performance, and analysis of a measurement of optical properties of Antarctic ice below 400 nm
The IceCube Neutrino Observatory, located at the geographic South Pole, is the world's largest neutrino telescope, instrumenting 1 km³ of Antarctic ice with 5160 photosensors to detect Cherenkov light. For the IceCube Upgrade, to be deployed during the 2022-23 polar field season, and for the enlarged detector IceCube-Gen2, several new optical sensor designs are under development. One of these optical sensors, the Wavelength-shifting Optical Module (WOM), uses wavelength-shifting and light-guiding techniques to measure Cherenkov photons in the UV range from 250 nm to 380 nm. In order to understand the potential gains from this new technology, a measurement of the scattering and absorption lengths of UV light was performed in the SPICEcore borehole at the South Pole during the winter seasons of 2018/2019 and 2019/2020. For this purpose, a calibration device with a UV light source and a detector using the wavelength-shifting technology was developed. We present the design of the developed calibration device, its performance during the measurement campaigns, and the comparison of data to a Monte Carlo simulation.
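The basic principle of such a measurement is to record the light intensity at several emitter-receiver distances and fit an attenuation model. A toy sketch with an illustrative 1/d falloff and invented numbers, not the actual SPICEcore analysis:

```python
# Extract an attenuation length from intensity recorded at several distances.
import numpy as np
from scipy.optimize import curve_fit

def intensity(d, i0, att_len):
    """Toy model: geometric 1/d falloff times exponential attenuation."""
    return i0 / d * np.exp(-d / att_len)

rng = np.random.default_rng(5)
d = np.array([20.0, 40.0, 60.0, 80.0, 100.0])              # distances in m
data = intensity(d, i0=1e6, att_len=25.0) * rng.normal(1.0, 0.03, d.size)

(i0_fit, att_fit), _ = curve_fit(intensity, d, data, p0=[1e6, 20.0])
print(f"fitted attenuation length: {att_fit:.1f} m")
```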
Testing Hadronic Interaction Models with Cosmic Ray Measurements at the IceCube Neutrino Observatory
The IceCube Neutrino Observatory provides the opportunity to perform unique measurements of cosmic-ray air showers with its combination of a surface array and a deep detector. Electromagnetic particles and low-energy muons (∼GeV) are detected by IceTop, while a bundle of high-energy muons (≳400 GeV) can be measured in coincidence in IceCube. Predictions of air-shower observables based on simulations show a strong dependence on the choice of the high-energy hadronic interaction model. By reconstructing different composition-dependent observables, one can perform strong tests of hadronic interaction models, as these measurements should be consistent with one another. In this work, we present an analysis of air-shower data between 2.5 and 80 PeV, comparing the composition interpretation of measurements of the surface muon density, the slope of the IceTop lateral distribution function, and the energy loss of the muon bundle, using the models Sibyll 2.1, QGSJet-II.04, and EPOS-LHC. We observe inconsistencies in all models under consideration, suggesting that they do not give an adequate description of the experimental data. The results furthermore imply a significant uncertainty in the determination of the cosmic-ray mass composition through indirect measurements.
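A hedged sketch of the consistency test: each observable is mapped to a mean logarithmic mass by interpolating between a model's proton and iron predictions, and a large spread between the resulting values signals an internal inconsistency of that model. All numbers below are invented:

```python
# Convert several composition-sensitive observables to <lnA> under one
# hadronic model and compare them; invented observables and predictions.
import numpy as np

def mean_lnA(observable, proton_pred, iron_pred):
    """Linear interpolation of <lnA> between proton (lnA=0) and iron (lnA=ln 56)."""
    frac = (observable - proton_pred) / (iron_pred - proton_pred)
    return frac * np.log(56.0)

lnA_muons = mean_lnA(0.45, proton_pred=0.30, iron_pred=0.70)   # surface muon density proxy
lnA_slope = mean_lnA(2.1, proton_pred=2.4, iron_pred=1.8)      # IceTop LDF slope proxy
lnA_eloss = mean_lnA(0.9, proton_pred=0.6, iron_pred=1.4)      # muon bundle energy loss proxy

print(lnA_muons, lnA_slope, lnA_eloss)   # a large spread indicates inconsistency
```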
Performance of the D-Egg Optical Sensor for the IceCube Upgrade
New optical sensors called "D-Eggs" have been developed for cost-effective instrumentation of the IceCube Upgrade. With two 8-inch high-QE photomultipliers, they offer an increased effective photocathode area while retaining as much of the successful IceCube Digital Optical Module (DOM) design as possible. Mass production of D-Eggs started in 2020. By the end of 2021, 310 D-Eggs will have been produced, 288 of which will be deployed in the IceCube Upgrade. The D-Egg readout system uses advanced technologies in electronics and computing power. Each of the two PMT signals is digitized using an ultra-low-power 14-bit ADC with a sampling frequency of 250 MSPS, enabling seamless and lossless event recording from single-photon signals to signals exceeding 200 photoelectrons within 10 ns, as well as flexible event triggering. In this paper, we report the single-photon detection performance as well as the multiple-photon recording capability of D-Eggs from the mass production line, which have been evaluated with the built-in DAQ system.
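A back-of-the-envelope check of the quoted digitizer figures; the photoelectron-to-ADC-code conversion is an assumed placeholder, not the D-Egg calibration:

```python
# Relate the quoted sampling rate and ADC depth to the 10 ns / 200 PE figures.
sampling_rate = 250e6            # samples per second
adc_bits = 14
samples_per_10ns = 10e-9 * sampling_rate
adc_codes = 2 ** adc_bits

print(f"{samples_per_10ns:.1f} samples per 10 ns window")   # 2.5
print(f"{adc_codes} ADC codes -> roughly {adc_codes / 80:.0f} PE "
      f"if one PE spans ~80 codes (assumed value)")          # ~200 PE
```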
Search for dark matter from the center of the Earth with 8 years of IceCube data
The nature of Dark Matter (DM) remains one of the most important unresolved questions of fundamental physics. Many models, including Weakly Interacting Massive Particles (WIMPs), assume DM to be a particle and predict a weak coupling to Standard Model matter. If DM particles scatter off nuclei in the vicinity of a massive object such as a star or a planet, they may lose kinetic energy and become gravitationally trapped in the center of such objects, including the Earth. As DM accumulates in the center of the Earth, self-annihilation of WIMPs into Standard Model particles can result in an excess of neutrinos detectable at the IceCube Neutrino Observatory, situated at the geographic South Pole. A search for excess neutrinos from these annihilations has been performed using 8 years of IceCube data, and the results have been interpreted in the context of a number of WIMP annihilation channels and masses ranging from 10 GeV to 10 TeV. We present the latest results from this analysis and compare them with previous analyses by IceCube and other experiments, showing competitive results that are world-leading in some parts of the parameter space.
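The expected signal is governed by the balance between capture and annihilation, dN/dt = C_cap - C_ann N^2, whose solution sets the annihilation rate that sources the neutrino flux. A sketch with placeholder rates, not values from this analysis:

```python
# Capture-annihilation balance for DM trapped in the Earth; placeholder rates.
import numpy as np

c_cap = 1e14          # capture rate [1/s] (placeholder)
c_ann = 1e-50         # annihilation coefficient [1/s] (placeholder)
t = 4.5e9 * 3.15e7    # age of the Earth in seconds

tau_eq = 1.0 / np.sqrt(c_cap * c_ann)                  # equilibration timescale
n_dm = np.sqrt(c_cap / c_ann) * np.tanh(t / tau_eq)    # captured DM population N(t)
gamma_ann = 0.5 * c_ann * n_dm**2                      # annihilation rate -> neutrino signal

print(f"t / tau_eq = {t / tau_eq:.2f}, annihilation rate = {gamma_ann:.2e} /s")
```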