
    Modern machine learning in the presence of systematic uncertainties for robust and optimized multivariate data analysis in high-energy particle physics

    In high-energy particle physics, machine learning has proven to be an indispensable technique for pushing data analysis to its limits. Already widely accepted and successfully applied in event reconstruction at the LHC experiments, machine learning is today increasingly also part of the final steps of an analysis, for example to construct observables for the statistical inference of the physical parameters of interest. This thesis presents such a machine learning based analysis measuring the production of Standard Model Higgs bosons decaying to two tau leptons at the CMS experiment and discusses the possibilities and challenges of machine learning at this stage of an analysis. To allow for a precise and reliable physics measurement, the application of the chosen machine learning model has to be well under control. Therefore, novel techniques are introduced to identify and control the dependence of the neural network function on features in the multidimensional input space. Furthermore, possible improvements of machine learning based analysis strategies are studied, and a novel solution is presented that maximizes the expected sensitivity of the measurement to the physics of interest by incorporating information about known uncertainties into the optimization of the machine learning model, yielding an optimal statistical inference in the presence of systematic uncertainties.

    Optimal statistical inference in the presence of systematic uncertainties using neural network optimization based on binned Poisson likelihoods with nuisance parameters

    Data analysis in science, e.g., in high-energy particle physics, is often subject to an intractable likelihood if the observables and observations span a high-dimensional input space. Typically the problem is solved by reducing the dimensionality using feature engineering and histograms, whereby the latter technique allows the likelihood to be built using Poisson statistics. However, in the presence of systematic uncertainties represented by nuisance parameters in the likelihood, the optimal dimensionality reduction with a minimal loss of information about the parameters of interest is not known. This work presents a novel strategy to construct the dimensionality reduction with neural networks for feature engineering and a differentiable formulation of histograms, so that the full workflow can be optimized with the result of the statistical inference, e.g., the variance of a parameter of interest, as the objective. We discuss how this approach results in an estimate of the parameters of interest that is close to optimal, and the applicability of the technique is demonstrated with a simple example based on pseudo-experiments and a more complex example from high-energy particle physics.
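
    A minimal sketch of this idea, assuming PyTorch, a toy two-component dataset, a single signal-strength parameter of interest mu, and one normalisation nuisance parameter theta; the soft histogram, the 10% uncertainty, and the network architecture are illustrative choices, not the setup of the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "signal" and "background" events in a two-dimensional input space
sig = torch.randn(5000, 2) + 1.0
bkg = torch.randn(5000, 2) - 1.0

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
edges = torch.linspace(0.0, 1.0, 9)                  # 8 bins in the network output

def soft_hist(x, bandwidth=0.05):
    # Differentiable histogram: sigmoid-smoothed bin membership instead of hard counts
    left = torch.sigmoid((x[:, None] - edges[:-1][None, :]) / bandwidth)
    right = torch.sigmoid((x[:, None] - edges[1:][None, :]) / bandwidth)
    return (left - right).sum(dim=0) + 1e-3           # small offset avoids empty bins

def poi_variance():
    h_sig = soft_hist(net(sig).squeeze(1))
    h_bkg = soft_hist(net(bkg).squeeze(1))

    def nll(params):
        mu, theta = params
        expected = mu * h_sig + (1.0 + 0.1 * theta) * h_bkg   # theta shifts the background norm by 10% per sigma
        observed = h_sig.detach() + h_bkg.detach()            # Asimov data at mu = 1, theta = 0
        # Binned Poisson negative log-likelihood plus a Gaussian constraint on theta
        return (expected - observed * torch.log(expected)).sum() + 0.5 * theta ** 2

    params = torch.tensor([1.0, 0.0])
    hess = torch.autograd.functional.hessian(nll, params, create_graph=True)
    return torch.inverse(hess)[0, 0]                  # variance of mu from the inverse Hessian

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = poi_variance()                             # train the network to minimise var(mu) directly
    loss.backward()
    opt.step()
```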

    Deep Learning based assessment of groundwater level development in Germany until 2100

    Clear signs of climate stress on groundwater resources have been observed in recent years, even in generally water-rich regions such as Germany. Severe droughts, resulting in decreased groundwater recharge, led to declining groundwater levels in many regions, and even local drinking water shortages have occurred in past summers. We investigate how climate change will directly influence the groundwater resources in Germany until the year 2100. For this purpose, we use a machine learning groundwater level forecasting framework based on convolutional neural networks, which has already proven its suitability for modelling groundwater levels. We predict groundwater levels at more than 120 wells distributed over the entire area of Germany that showed strong reactions to meteorological signals in the past. The inputs are derived from the RCP8.5 scenario of six climate models, pre-selected and pre-processed by the German Meteorological Service, thus representing large parts of the range of the expected change over the next 80 years. Our models are based on precipitation and temperature, are carefully evaluated on historical data, and only wells whose models reach high forecasting skill scores are included in our study. We only consider natural climate change effects based on meteorological changes, while highly uncertain human factors, such as increased groundwater abstraction or irrigation effects, remain unconsidered due to a lack of reliable input data. We show significantly (p<0.05) declining groundwater levels for a large majority of the considered wells; at the same time, interestingly, we observe the opposite behaviour for a small portion of the considered locations. Further, we show a mostly strongly increasing variability and thus an increasing number of extreme groundwater events. The spatial patterns of all observed changes reveal stronger decreases in groundwater levels especially in the northern and eastern parts of Germany, emphasizing the already existing decreasing trends in these regions.
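
    A minimal sketch of such a CNN-based groundwater level model, assuming PyTorch and a 52-week input window of weekly precipitation and temperature; the architecture, window length, and filter count are illustrative and not the configuration used in the study:

```python
import torch
import torch.nn as nn

class GWLevelCNN(nn.Module):
    """1D CNN mapping a window of weekly precipitation and temperature to the next groundwater level."""
    def __init__(self, n_features=2, n_filters=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, n_filters, kernel_size=3)
        self.head = nn.Sequential(nn.AdaptiveMaxPool1d(1), nn.Flatten(),
                                  nn.Linear(n_filters, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                  # x: (batch, n_features, window)
        return self.head(torch.relu(self.conv(x)))

model = GWLevelCNN()
x = torch.randn(8, 2, 52)                  # 8 samples, 52-week windows of precipitation and temperature
print(model(x).shape)                      # torch.Size([8, 1]) -> one predicted groundwater level each
```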

    Groundwater Level Forecasting with Artificial Neural Networks: A Comparison of LSTM, CNN and NARX

    It is now well established to use shallow artificial neural networks (ANNs) to obtain accurate and reliable groundwater level forecasts, which are an important tool for sustainable groundwater management. However, we observe an increasing shift from conventional shallow ANNs to state-of-the-art deep learning (DL) techniques, while a direct comparison of the performance is often lacking. Although they have already clearly proven their suitability, shallow recurrent networks in particular frequently seem to be excluded from study designs amid the euphoria about new DL techniques and their successes in various disciplines. Therefore, we aim to provide an overview of the predictive ability for groundwater levels of shallow conventional recurrent ANNs, namely nonlinear autoregressive networks with exogenous inputs (NARX), and of popular state-of-the-art DL techniques such as long short-term memory (LSTM) and convolutional neural networks (CNN). We compare the performance on both sequence-to-value (seq2val) and sequence-to-sequence (seq2seq) forecasting over a 4-year period, while using only a few widely available and easy-to-measure meteorological input parameters, which makes our approach widely applicable. We observe that for seq2val forecasts NARX models on average perform best; however, CNNs are much faster and only slightly less accurate. For seq2seq forecasts, NARX mostly outperforms both DL models and almost reaches the speed of CNNs. However, NARX is the least robust against initialization effects, which nevertheless can be handled easily using ensemble forecasting. We show that shallow neural networks, such as NARX, should not be neglected in comparison to DL techniques; however, LSTMs and CNNs might perform substantially better with a larger dataset, where DL can really demonstrate its strengths, although such datasets are rarely available in the groundwater domain.
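
    A minimal sketch of the seq2val versus seq2seq distinction for an LSTM forecaster, assuming PyTorch; the 52-week window, two meteorological inputs, and 12-step output horizon are illustrative choices only:

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """LSTM groundwater level forecaster with a seq2val (horizon=1) or seq2seq (horizon>1) head."""
    def __init__(self, n_features=2, hidden=32, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                    # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # predict from the last hidden state

x = torch.randn(16, 52, 2)                   # 52-week windows of precipitation and temperature
seq2val = LSTMForecaster(horizon=1)(x)       # (16, 1): the next groundwater level
seq2seq = LSTMForecaster(horizon=12)(x)      # (16, 12): the next 12 weekly levels in one shot
```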

    Feature-based Groundwater Hydrograph Clustering Using Unsupervised Self-Organizing Map-Ensembles

    Hydrograph clustering helps to identify dynamic patterns within aquifer systems, an important foundation for characterizing groundwater systems and their influences, which is necessary to manage groundwater resources effectively. We develop an unsupervised modeling approach to characterize and cluster hydrographs on a regional scale according to their dynamics. We apply feature-based clustering to improve the exploitation of heterogeneous datasets, explore the usefulness of existing features, and propose new features specifically useful for describing groundwater hydrographs. The clustering itself is based on a powerful combination of Self-Organizing Maps with a modified DS2L algorithm, which automatically derives the number of clusters but also allows the level of detail of the clustering to be influenced. We further develop a framework that combines these methods with ensemble modeling, internal cluster validation indices, resampling and consensus voting to obtain a robust clustering result and remove arbitrariness from the feature selection process. Further, we propose a measure to sort hydrographs within clusters, useful for both interpretability and visualization. We test the framework with weekly data from the Upper Rhine Graben System, using more than 1800 hydrographs from a period of 30 years (1986-2016). The results show that our approach is capable of adaptively identifying homogeneous groups of hydrograph dynamics. The resulting clusters show both spatially known and unknown patterns, some of which correspond clearly to external controlling factors, such as intensive groundwater management in the northern part of the test area. The framework is easily transferable to other regions and, by adapting the describing features, also to other time-series clustering applications.
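
    A minimal sketch of the feature-based SOM step, assuming the third-party minisom package and a handful of illustrative hydrograph features; the modified DS2L algorithm, the ensembling, and the consensus voting described above are omitted here:

```python
import numpy as np
from scipy import stats
from minisom import MiniSom                      # pip install minisom

def hydrograph_features(h):
    """A few simple descriptive features of one weekly hydrograph (here: 30 full years)."""
    trend = stats.linregress(np.arange(h.size), h).slope
    annual = h.reshape(-1, 52).mean(axis=0)      # mean annual cycle
    return [h.mean(), h.std(), stats.skew(h), trend, annual.max() - annual.min()]

rng = np.random.default_rng(1)
hydrographs = rng.normal(size=(200, 30 * 52)).cumsum(axis=1)   # 200 synthetic 30-year series
X = np.array([hydrograph_features(h) for h in hydrographs])
X = (X - X.mean(axis=0)) / X.std(axis=0)                        # standardise the feature space

som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 5000)
units = [som.winner(x) for x in X]               # best-matching unit per hydrograph; cluster labels
                                                 # would then be derived from the trained map
```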

    Deep learning shows declining groundwater levels in Germany until 2100 due to climate change

    In this study we investigate how climate change will directly influence the groundwater resources in Germany during the 21st century. We apply a machine learning groundwater level prediction approach based on convolutional neural networks to 118 sites well distributed over Germany to assess the groundwater level development under different RCP scenarios (2.6, 4.5, 8.5). We consider only direct meteorological inputs, while highly uncertain anthropogenic factors such as groundwater extractions are excluded. While fewer and less pronounced significant trends are found under RCP2.6 and RCP4.5, we detect significantly declining groundwater level trends for most of the sites under RCP8.5, revealing a spatial pattern of stronger decreases, especially in the northern and eastern parts of Germany, emphasizing already existing decreasing trends in these regions. We can further show an increased variability and longer periods of low groundwater levels during the annual cycle towards the end of the century.
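
    A minimal sketch of the kind of trend significance test implied by the p<0.05 criterion, assuming SciPy and a simple linear trend on hypothetical annual means; the actual test used in the study may differ:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical annual-mean groundwater levels at one well, simulated for 2014-2100
years = np.arange(2014, 2101)
levels = 110.0 - 0.02 * (years - 2014) + np.random.default_rng(0).normal(0.0, 0.15, years.size)

res = linregress(years, levels)                  # slope, p-value, etc. of the linear trend
print(f"slope = {res.slope:.4f} m/yr, p = {res.pvalue:.3g}, "
      f"significant decline: {res.pvalue < 0.05 and res.slope < 0}")
```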

    Groundwater level forecasting with artificial neural networks: A comparison of long short-term memory (LSTM), convolutional neural networks (CNNs), and non-linear autoregressive networks with exogenous input (NARX)

    It is now well established to use shallow artificial neural networks (ANNs) to obtain accurate and reliable groundwater level forecasts, which are an important tool for sustainable groundwater management. However, we observe an increasing shift from conventional shallow ANNs to state-of-the-art deep-learning (DL) techniques, while a direct comparison of the performance is often lacking. Although they have already clearly proven their suitability, shallow recurrent networks frequently seem to be excluded from study designs due to the euphoria about new DL techniques and their successes in various disciplines. Therefore, we aim to provide an overview of the predictive ability for groundwater levels of shallow conventional recurrent ANNs, namely non-linear autoregressive networks with exogenous input (NARX), and of popular state-of-the-art DL techniques such as long short-term memory (LSTM) and convolutional neural networks (CNNs). We compare the performance on both sequence-to-value (seq2val) and sequence-to-sequence (seq2seq) forecasting over a 4-year period, while using only a few widely available and easy-to-measure meteorological input parameters, which makes our approach widely applicable. Further, we also investigate the data dependency of the different ANN architectures in terms of time series length. For seq2val forecasts, NARX models on average perform best; however, CNNs are much faster and only slightly less accurate. For seq2seq forecasts, NARX mostly outperforms both DL models and almost reaches the speed of CNNs. However, NARX is the least robust against initialization effects, which nevertheless can be handled easily using ensemble forecasting. We show that shallow neural networks, such as NARX, should not be neglected in comparison to DL techniques, especially when only small amounts of training data are available, where they can clearly outperform LSTMs and CNNs; however, LSTMs and CNNs might perform substantially better with a larger dataset, where DL can really demonstrate its strengths, although such datasets are rarely available in the groundwater domain.
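
    A minimal sketch of the ensemble idea used to damp initialization effects, assuming PyTorch; each member would be trained independently (training omitted here), and the architecture and member count are illustrative:

```python
import torch
import torch.nn as nn

def make_member(seed, window=52, n_features=2, hidden=32):
    """One ensemble member: identical architecture, different random initialization."""
    torch.manual_seed(seed)
    return nn.Sequential(nn.Flatten(), nn.Linear(window * n_features, hidden),
                         nn.ReLU(), nn.Linear(hidden, 1))

ensemble = [make_member(seed) for seed in range(10)]      # in practice: train each member separately
x = torch.randn(4, 52, 2)                                 # 4 forecast windows of precipitation and temperature
preds = torch.stack([m(x) for m in ensemble])             # (members, batch, 1)
mean, spread = preds.mean(dim=0), preds.std(dim=0)        # ensemble mean smooths out initialization effects
```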

    Reducing the dependence of the neural network function to systematic uncertainties in the input space

    Applications of neural networks to data analyses in the natural sciences are complicated by the fact that many inputs are subject to systematic uncertainties. Several methods have been proposed to control the dependence of the neural network function on variations of the input space within these systematic uncertainties. In this work, we propose a new approach that trains the neural network by introducing penalties on the variation of the neural network output directly in the loss function. This is achieved at the cost of only a small number of additional hyperparameters. The method can also be pursued by treating all systematic variations in the form of statistical weights. The proposed approach is demonstrated with a simple example based on pseudo-experiments and with a more complex example from high-energy particle physics.
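
    A minimal sketch of such a penalty term, assuming PyTorch, a toy binary classification task, and a ±1σ shift of one input feature; the penalty weight lambda is the kind of additional hyperparameter mentioned above, and the exact form of the penalty in the paper may differ:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
bce = nn.BCELoss()
lam = 1.0                                        # weight of the variation penalty (additional hyperparameter)

x = torch.randn(256, 2)                          # nominal inputs
y = (x.sum(dim=1) > 0).float().unsqueeze(1)      # toy labels
shift = torch.tensor([0.2, 0.0])                 # +/- 1 sigma systematic variation of the first input

f_nom, f_up, f_down = net(x), net(x + shift), net(x - shift)
penalty = ((f_up - f_nom) ** 2 + (f_down - f_nom) ** 2).mean()
loss = bce(f_nom, y) + lam * penalty             # classification loss plus penalty on output variation
loss.backward()
```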

    Identifying the relevant dependencies of the neural network response on characteristics of the input space

    The relation between the input and output spaces of neural networks (NNs) is investigated to identify those characteristics of the input space that have a large influence on the output for a given task. For this purpose, the NN function is decomposed via a Taylor expansion in each element of the input space. The Taylor coefficients contain information about the sensitivity of the NN response to the inputs. A metric is introduced that allows for the identification of the characteristics that predominantly determine the performance of the NN in solving a given task. Finally, the capability of this metric to analyze the performance of the NN is evaluated on a task common to data analyses in high-energy particle physics experiments.
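
    A minimal sketch of the first-order version of such a metric, assuming PyTorch: the mean absolute first-order Taylor coefficient of the network output with respect to each input, averaged over a toy sample (higher-order coefficients are omitted here):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1), nn.Sigmoid())
x = torch.randn(1000, 3, requires_grad=True)     # toy sample with three input characteristics

grads, = torch.autograd.grad(net(x).sum(), x)    # first-order Taylor coefficients df/dx_i per event
sensitivity = grads.abs().mean(dim=0)            # <|t_i|>: mean absolute coefficient per input
print(sensitivity)                               # ranks the inputs by their influence on the NN response
```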