Determination of the absolute energy scale of extensive air showers via radio emission: systematic uncertainty of underlying first-principle calculations
Recently, the energy determination of extensive air showers using radio
emission has been shown to be both precise and accurate. In particular, radio
detection offers the opportunity for an independent measurement of the absolute
energy scale of cosmic rays, since the radiation energy (the energy radiated in
the form of radio signals) can be predicted using first-principle calculations
involving no free parameters, and the measurement of radio waves is not subject
to any significant absorption or scattering in the atmosphere. To quantify the
uncertainty associated with such an approach, we collate the various
contributions to the uncertainty, and we verify the consistency of
radiation-energy calculations from microscopic simulation codes by comparing
Monte Carlo simulations made with the two codes CoREAS and ZHAireS. We compare
a large set of simulations with different primary energies and shower
directions and observe differences in the radiation energy prediction for the
30–80 MHz band of 5.2 %. This corresponds to an uncertainty of 2.6 % for the
determination of the absolute cosmic-ray energy scale. Our result has general
validity and can be built upon directly by experimental efforts for the
calibration of the cosmic-ray energy scale on the basis of radio emission
measurements. Comment: 22 pages, 3 figures, accepted for publication in Astroparticle Physics
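A note on the factor of two connecting the quoted numbers in this and the following abstract: because the radio emission is coherent, the radiation energy scales quadratically with the cosmic-ray energy, so a relative uncertainty on the radiation energy translates into half that relative uncertainty on the energy scale. A minimal sketch, assuming only this quadratic scaling:

```latex
% Quadratic scaling of radiation energy with cosmic-ray energy
E_\mathrm{rad} \propto E_\mathrm{CR}^{2}
\;\Longrightarrow\;
\frac{\Delta E_\mathrm{CR}}{E_\mathrm{CR}}
  = \frac{1}{2}\,\frac{\Delta E_\mathrm{rad}}{E_\mathrm{rad}},
\qquad
\frac{1}{2}\times 5.2\,\% = 2.6\,\%.
```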
Modelling uncertainty of the radiation energy emitted by extensive air showers
Recently, the energy determination of extensive air showers using radio
emission has been shown to be both precise and accurate. In particular, radio
detection offers the opportunity for an independent measurement of the absolute
energy of cosmic rays, since the radiation energy (the energy radiated in the
form of radio signals) can be predicted using first-principle calculations
involving no free parameters, and the measurement of radio waves is not subject
to any significant absorption or scattering in the atmosphere. Here, we verify
the implementation of radiation-energy calculations from microscopic simulation
codes by comparing Monte Carlo simulations made with the two codes CoREAS and
ZHAireS. To isolate potential differences in the radio-emission calculation
from differences in the air-shower simulation, the simulations are performed
with equivalent settings, especially the same model for the hadronic
interactions and the description of the atmosphere. Comparing a large set of
simulations with different primary energies and shower directions we observe
differences amounting to a total of only 3.3 %. This corresponds to an
uncertainty of only 1.6 % in the determination of the absolute energy scale and
thus opens up the possibility of using the radiation energy as an accurate calibration method for cosmic-ray experiments. Comment: 8 pages, 2 figures, ICRC2017 contribution
Uncertainty Quantification in Biophotonic Imaging using Invertible Neural Networks
Owing to high stakes in the field of healthcare, medical machine learning (ML) applications have to adhere to strict safety standards. In particular, their performance needs to be robust toward volatile clinical inputs. The aim of the work presented in this thesis was to develop a framework for uncertainty handling in medical ML applications as a way to increase their robustness and trustworthiness. In particular, it addresses three root causes for lack of robustness that can be deemed central to the successful clinical translation of ML methods:
First, many tasks in medical imaging can be phrased in the language of inverse problems. Most common ML methods aimed at solving such inverse problems implicitly assume that they are well-posed, especially that the problem has a unique solution. However, the solution might be ambiguous. In this thesis, we introduce a data-driven method for analyzing the well-posedness of inverse problems. In addition, we propose a framework to validate the suggested method in a problem-aware manner.
Second, simulation is an important tool for the development of medical ML systems due to small in vivo data sets and/or a lack of annotated references (e.g. spatially resolved blood oxygenation (sO2)). However, simulation introduces a new uncertainty into the ML pipeline, as ML performance guarantees generally rely on the testing data being sufficiently similar to the training data. This thesis addresses this uncertainty by quantifying the domain gap between training and testing data via an out-of-distribution (OoD) detection approach.
Third, we introduce a new paradigm for medical ML based on personalized models. In a data-scarce regime with high inter-patient variability, classical ML models cannot be assumed to generalize well to new patients. To overcome this problem, we propose to train ML models on a per-patient basis. This approach circumvents the inter-patient variability, but it requires training without a supervision signal. We address this issue via OoD detection, where the current status quo is encoded as in-distribution (ID) using a personalized ML model. Changes to the status quo are then detected as OoD.
While these three facets might seem distinct, the suggested framework provides a unified view of them. The enabling technology is the so-called invertible neural network (INN), which can be used as a flexible and expressive (conditional) density estimator. In this way, INNs can encode solutions to inverse problems as probability distributions as well as tackle OoD detection tasks via density-based scores, like the widely applicable information criterion (WAIC).
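The WAIC-based OoD scoring can be made concrete with a small sketch. As a simplification of the thesis's INN-ensemble setup (not its actual implementation), assume we already have per-input log-likelihoods from an ensemble of density estimators; the score rewards high average likelihood and penalizes disagreement between ensemble members, so low scores flag OoD inputs. The function name and toy numbers are illustrative:

```python
import numpy as np

def waic_score(log_likelihoods: np.ndarray) -> np.ndarray:
    """WAIC-style OoD score per input.

    log_likelihoods: array of shape (n_models, n_inputs) holding
    log p_theta(x) for each ensemble member theta and each input x.
    Score = mean_theta[log p] - var_theta[log p]; lower => more OoD.
    """
    mean_ll = log_likelihoods.mean(axis=0)
    var_ll = log_likelihoods.var(axis=0)
    return mean_ll - var_ll

# Toy usage: an in-distribution input gets high, consistent
# likelihoods; an OoD input gets lower, less consistent ones.
ll = np.array([
    [-1.0, -9.0],   # model 1: log-likelihood of [x_id, x_ood]
    [-1.1, -5.0],   # model 2
    [-0.9, -12.0],  # model 3
])
scores = waic_score(ll)
# scores[0] (ID) is higher than scores[1] (OoD)
```

Thresholding such a score then yields a binary ID/OoD decision, with the threshold chosen on held-out in-distribution data.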
The present work validates our framework on the example of biophotonic imaging. Biophotonic imaging promises the estimation of tissue parameters such as sO2 in a non-invasive way by evaluating the “fingerprint” of the tissue in the light spectrum. We apply our framework to analyze the well-posedness of the tissue parameter estimation problem at varying spectral and spatial resolutions. We find that with sufficient spectral and/or spatial context, the sO2 estimation problem is well-posed. Furthermore, we examine the realism of simulated biophotonic data using the proposed OoD approach to gauge the generalization
capabilities of our ML models to in vivo data. Our analysis shows a considerable remaining domain gap between the in silico and in vivo spectra. Lastly, we validate the personalized ML approach on the example of non-invasive ischemia monitoring in minimally invasive kidney surgery, for which we developed the first-in-human laparoscopic multispectral imaging system. In our study, we find a strong OoD signal between perfused and ischemic kidney spectra. Furthermore, the proposed approach is video-rate capable.
In conclusion, we successfully developed a framework for uncertainty handling in medical ML and validated it on a diverse set of medical ML tasks, highlighting the flexibility and potential impact of our approach. The framework opens the door to robust solutions for applications such as (recording) device design, quality control for simulation pipelines, and personalized video-rate tissue parameter monitoring. In this way, this thesis facilitates the development of the next generation of trustworthy ML systems in medicine.
Introducing Risk Shadowing For Decisive and Comfortable Behavior Planning
We consider the problem of group interactions in urban driving.
State-of-the-art behavior planners for self-driving cars mostly consider each
single agent-to-agent interaction separately in a cost function in order to
find an optimal behavior for the ego agent, such as not colliding with any of
the other agents. In this paper, we develop risk shadowing, a situation
understanding method that allows us to go beyond single interactions by
analyzing group interactions between three agents. Concretely, the presented method can identify a first other agent that does not need to be considered in the ego agent's behavior planner because a second other agent obstructs its way, so that it cannot reach the ego agent. In
experiments, we show that using risk shadowing as an upstream filter module for
a behavior planner makes it possible to plan more decisive and comfortable driving strategies than the state of the art, given that safety is ensured in these cases.
The usability of the approach is demonstrated for different intersection
scenarios and longitudinal driving. Comment: Accepted at IEEE ITSC 202
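The core idea of dropping a shadowed agent can be illustrated with a toy one-dimensional sketch. This is not the paper's formulation; all quantities, the headway model, and the function name are illustrative assumptions. A first agent following a slower second agent in the same lane cannot pass it, so its earliest possible arrival at the ego's conflict point may be delayed beyond the time the ego needs to clear that point:

```python
def is_shadowed(gap_a: float, v_a: float,
                gap_b: float, v_b: float,
                headway: float, ego_clear_time: float) -> bool:
    """Toy 1D risk-shadowing check (illustrative, not the paper's method).

    Agent A follows agent B in the same lane toward the ego's conflict
    point. gap_*: distance to the conflict point [m]; v_*: speed [m/s];
    headway: minimum time gap A must keep behind B [s].
    A is 'shadowed' (can be filtered out) if its earliest arrival,
    limited by B, is later than the ego's clearance time.
    """
    t_a_free = gap_a / v_a          # A's unobstructed arrival time
    t_b = gap_b / v_b               # B's arrival time
    # A cannot pass B, so its earliest arrival is bounded by B + headway
    t_a_earliest = max(t_a_free, t_b + headway)
    return t_a_earliest > ego_clear_time

# A fast agent 40 m away would arrive in 2 s on its own, but a slow
# agent ahead delays it to 11.5 s, so it is shadowed for an ego that
# clears the conflict point within 5 s.
shadowed = is_shadowed(gap_a=40.0, v_a=20.0,
                       gap_b=30.0, v_b=3.0,
                       headway=1.5, ego_clear_time=5.0)
```

In an upstream filter module, agents flagged this way would simply be excluded from the cost function of the downstream behavior planner.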
Continuous Risk Measures for Driving Support
In this paper, we compare three different model-based risk measures by
evaluating their strengths and weaknesses qualitatively and testing them
quantitatively on a set of real longitudinal and intersection scenarios. We
start with the traditional heuristic Time-To-Collision (TTC), which we extend
towards 2D operation and non-crash cases to retrieve the
Time-To-Closest-Encounter (TTCE). The second risk measure models position
uncertainty with a Gaussian distribution and uses spatial occupancy
probabilities for collision risks. We then derive a novel risk measure based on
the statistics of sparse critical events and so-called survival conditions. The
resulting survival analysis is shown to achieve earlier crash detection and fewer false-positive detections in near-crash and non-crash cases, supported
by its solid theoretical grounding. It can be seen as a generalization of TTCE
and the Gaussian method, which is suitable for the validation of ADAS and AD
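The TTC and its 2D extension to the closest encounter can be sketched compactly. This is a sketch of the standard definitions as described above, not the paper's exact implementation:

```python
import math

def ttc_1d(gap: float, closing_speed: float) -> float:
    """Classic Time-To-Collision for car following: distance gap [m]
    divided by closing speed [m/s]; infinite if not closing."""
    return gap / closing_speed if closing_speed > 0 else math.inf

def ttce_2d(px: float, py: float, vx: float, vy: float):
    """Time-To-Closest-Encounter in 2D.

    (px, py): other agent's position relative to the ego [m];
    (vx, vy): relative velocity [m/s].
    Returns (t_star, dce): the time of closest encounter (clamped to
    the future) and the distance at that moment. A collision appears
    as dce ~ 0, so the measure also covers non-crash cases.
    """
    v2 = vx * vx + vy * vy
    t_star = 0.0 if v2 == 0 else max(0.0, -(px * vx + py * vy) / v2)
    dce = math.hypot(px + t_star * vx, py + t_star * vy)
    return t_star, dce

# Head-on along x: 50 m apart, closing at 10 m/s.
t, d = ttce_2d(50.0, 0.0, -10.0, 0.0)
# t == 5.0 s, d == 0.0 m (an actual collision course)
```

With a lateral offset the relative trajectory passes the ego instead of hitting it, and `dce` reports the miss distance, which is what makes the TTCE usable in non-crash cases where the plain TTC is undefined.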