
    Neural Network Emulation of the Integral Equation Model with Multiple Scattering

    The Integral Equation Model with multiple scattering (IEMM) is a well-established method that provides a theoretical framework for the scattering of electromagnetic waves from rough surfaces. A critical drawback is the long computational time required to run such a complex model. To deal with this problem, a neural network technique is proposed in this work. In particular, we have adopted neural networks to reproduce the backscattering coefficients predicted by IEMM at L- and C-band, with reference to currently operational satellite radar sensors, i.e., the SAR aboard ERS-2 and ASAR aboard ENVISAT (C-band), and PALSAR aboard ALOS (L-band). The neural network-based model has been designed for radar observations of both flat and tilted surfaces, in order to make it applicable to hilly terrain as well. The proposed approach has been assessed by comparing neural network-derived backscattering coefficients with IEMM-derived ones, using databases different from those employed to train the networks. The outcomes indicate that a neural network approach can efficiently and reliably approximate an electromagnetic model of surface scattering.
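
    As a rough illustration of this kind of emulation, the sketch below fits a small feed-forward network to a table of backscattering coefficients generated offline. The function iemm_backscatter, the chosen input features (incidence angle, rms height, correlation length, soil moisture) and the network size are placeholders standing in for real IEMM runs, not the authors' actual setup.

    # A minimal emulation sketch (hypothetical): fit a small neural network to a
    # table of backscattering coefficients computed offline. `iemm_backscatter` is
    # only a placeholder for real IEMM runs; features and ranges are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    def iemm_backscatter(theta_deg, rms_height_cm, corr_length_cm, soil_moisture):
        """Placeholder for an expensive IEMM evaluation (returns sigma0 in dB)."""
        theta = np.radians(theta_deg)
        return (-10.0 - 25.0 * np.log10(np.cos(theta))
                + 8.0 * np.tanh(rms_height_cm / corr_length_cm)
                + 12.0 * soil_moisture)

    # Build a training table by sampling the model inputs, as one would with IEMM.
    n = 5000
    X = np.column_stack([
        rng.uniform(20, 50, n),      # incidence angle [deg]
        rng.uniform(0.2, 3.0, n),    # surface rms height [cm]
        rng.uniform(2.0, 30.0, n),   # correlation length [cm]
        rng.uniform(0.05, 0.45, n),  # volumetric soil moisture
    ])
    y = iemm_backscatter(*X.T)

    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
    net.fit(scaler.transform(X), y)

    # Once trained, the cheap network stands in for the slow model at prediction time.
    X_new = np.array([[35.0, 1.0, 10.0, 0.25]])
    print("emulated sigma0 [dB]:", net.predict(scaler.transform(X_new))[0])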

    Machine learning high multiplicity matrix elements for electron-positron and hadron-hadron colliders

    The LHC is a large-scale particle collider experiment collecting vast quantities of experimental data to study the fundamental particles, and forces, of nature. Theoretical predictions made with the Standard Model (SM) can be compared with observables measured at experiments. These predictions rely on Monte Carlo event generators to simulate events, which demands the evaluation of a matrix element; for high multiplicity processes this can take up a significant portion of the time spent simulating an event. In this thesis, we explore the use of machine learning to accelerate the evaluation of matrix elements by introducing a factorisation-aware neural network model. Matrix elements are plagued by singular structures in regions of phase-space where particles become soft or collinear; however, the behaviour of the matrix element in these limits is well understood. By exploiting the factorisation property of matrix elements in these limits, the model can learn how best to represent the matrix element as a linear combination of singular functions. We examine the application of the model to e−e+ annihilation matrix elements at tree level and one-loop level, as well as to leading-order pp collisions, where the acceleration of event generation is critical for current experiments.
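
    A minimal sketch of the factorisation-aware idea is given below: a network predicts one coefficient per singular function, and the emulated matrix element is the sum of those coefficients times the singular functions. The singular basis used here (simple inverse two-particle invariants 1/s_ij), the architecture and all sizes are illustrative assumptions, not the dipole functions or model of the thesis.

    # Hypothetical sketch of a factorisation-aware emulator: a network outputs one
    # coefficient per singular function f_ij = 1/s_ij, and the emulated matrix
    # element is sum_ij C_ij(p) / s_ij. Basis, architecture and sizes are assumptions.
    import itertools
    import torch
    import torch.nn as nn

    class FactorisationAwareME(nn.Module):
        def __init__(self, n_particles, hidden=64):
            super().__init__()
            self.pairs = list(itertools.combinations(range(n_particles), 2))
            self.net = nn.Sequential(
                nn.Linear(4 * n_particles, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, len(self.pairs)),
            )

        def forward(self, momenta):              # momenta: (batch, n_particles, 4)
            # Two-particle invariants s_ij = (p_i + p_j)^2 with (+,-,-,-) metric.
            metric = torch.tensor([1.0, -1.0, -1.0, -1.0])
            s = torch.stack(
                [((momenta[:, i] + momenta[:, j]) ** 2 * metric).sum(dim=-1)
                 for i, j in self.pairs], dim=-1)            # (batch, n_pairs)
            coeffs = self.net(momenta.flatten(start_dim=1))  # C_ij(p)
            return (coeffs / s).sum(dim=-1)                  # sum_ij C_ij / s_ij

    model = FactorisationAwareME(n_particles=5)
    toy_momenta = torch.randn(8, 5, 4)           # random tensors, not physical phase space
    print(model(toy_momenta).shape)              # torch.Size([8])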

    Analytic and numerical analysis of the cosmic 21cm signal

    Cosmology in the 21st century has matured into a precision science. Measurements of the cosmic microwave background, galaxy surveys, weak lensing studies and supernovae surveys all but confirm that we live in a geometrically flat Universe dominated by a dark energy component where most of the matter is dark. Yet challenges to this model remain, as do periods in its evolution that are as yet unobserved. The next decade will see the construction of a new generation of telescopes poised to answer some of these remaining questions and peer into unseen depths. Because of the technological advances of the previous decades and the scale of the new generation of telescopes, for the first time, cosmology will be constrained through the observation of the cosmic 21cm signal emitted by hydrogen atoms across the Universe. As the most ubiquitous element, present throughout the different evolutionary stages of the Universe, neutral hydrogen holds great potential to answer many of the remaining challenges facing cosmology today. In the context of 21cm radiation, we identify two approaches that will increase the information gain from future observations: one numerical and one analytic. The numerical challenges of future analyses are a consequence of the data rates of next-generation telescopes, and we address this here by introducing machine learning techniques as a possible solution. Artificial neural networks have gained much attention in both the scientific and commercial world, and we apply one such network here as a way to emulate the numerical simulations necessary for parameter inference from future data. Further, we identify the potential of the bispectrum, the Fourier transform of the three-point statistic, as a cosmological probe in the context of low-redshift 21cm intensity mapping experiments. This higher-order statistical analysis can constrain cosmological parameters beyond the capabilities of CMB observations and power spectrum analyses of the 21cm signal. Lastly, we focus on a fully 3D expansion of the 21cm power spectrum in the natural spherical basis for large-angle observations, drawing on the success of the technique in weak lensing studies.
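
    The emulation-for-inference idea can be sketched as follows: train a small network on a modest number of simulator runs mapping parameters to a 21cm power spectrum, then use the cheap emulator inside a likelihood evaluation. The toy simulate_ps function and the two parameters below are placeholders for the actual simulations and astrophysical or cosmological parameters.

    # Toy emulation-for-inference sketch (hypothetical): `simulate_ps` stands in for
    # an expensive 21cm simulation; a small network learns the mapping from two
    # illustrative parameters to the power spectrum and then drives a cheap
    # Gaussian likelihood scan over a parameter grid.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    k = np.logspace(-1, 0, 20)                       # k bins, illustrative

    def simulate_ps(amp, slope):
        """Placeholder for a full simulation returning a 21cm power spectrum."""
        return amp * (k / 0.3) ** slope

    # 1) Train the emulator on a modest number of "simulations".
    theta_train = np.column_stack([rng.uniform(0.5, 2.0, 300),     # amplitude
                                   rng.uniform(-2.0, -0.5, 300)])  # slope
    ps_train = np.array([simulate_ps(a, s) for a, s in theta_train])
    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                            random_state=0).fit(theta_train, ps_train)

    # 2) Use the cheap emulator, not the simulator, inside the likelihood.
    data = simulate_ps(1.2, -1.3) * (1 + 0.02 * rng.standard_normal(k.size))
    sigma = 0.02 * data
    amps, slopes = np.meshgrid(np.linspace(0.5, 2.0, 60), np.linspace(-2.0, -0.5, 60))
    grid = np.column_stack([amps.ravel(), slopes.ravel()])
    chi2 = (((emulator.predict(grid) - data) / sigma) ** 2).sum(axis=1)
    print("best-fit (amplitude, slope):", grid[np.argmin(chi2)])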

    Efficient Super-Resolution of Near-Surface Climate Modeling Using the Fourier Neural Operator

    Downscaling methods are critical in efficiently generating high-resolution atmospheric data. However, state-of-the-art statistical or dynamical downscaling techniques either suffer from the high computational cost of running a physical model or require high-resolution data to develop a downscaling tool. Here, we demonstrate a recently proposed zero-shot super-resolution method, the Fourier neural operator (FNO), to efficiently perform downscaling without the need for high-resolution data. Because the FNO learns dynamics in Fourier space, it is a resolution-invariant emulator: it can be trained at a coarse resolution and produce emulations at any higher resolution. We applied the FNO to downscale a 4-km resolution Weather Research and Forecasting (WRF) Model simulation of near-surface heat-related variables over the Great Lakes region. The FNO is driven by the atmospheric forcings and topographic features used in the WRF model at the same resolution. We incorporated a physics-constrained loss in the FNO by using the Clausius–Clapeyron relation to better constrain the relations among the emulated states. Trained on merely 600 WRF snapshots at 4-km resolution, the FNO shows performance comparable to that of a widely used convolutional network, U-Net, achieving an averaged modified Kling–Gupta Efficiency of 0.88 and 0.94 on the test data set for temperature and pressure, respectively. We then employed the FNO to produce 1-km emulations that reproduce fine climate features. Further, by taking the WRF simulation as ground truth, we show consistent performance at the two resolutions, suggesting the reliability of the FNO in producing high-resolution dynamics. Our study demonstrates the potential of using the FNO for zero-shot super-resolution to generate first-order estimates in atmospheric modeling.
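
    The resolution invariance that enables zero-shot super-resolution comes from the FNO's spectral layers: the learned weights act only on a fixed set of low-frequency Fourier modes, so the same weights can be applied to fields sampled on grids of different resolution. The minimal numpy sketch below demonstrates this with a single spectral layer and random (untrained) weights; it is not the trained FNO of the study.

    # Minimal numpy sketch of why an FNO-style spectral layer is resolution-invariant:
    # the (here random, untrained) weights act only on a fixed number of low-frequency
    # Fourier modes, so the same layer can be evaluated on a coarse or a fine grid.
    import numpy as np

    rng = np.random.default_rng(0)
    MODES = 8                                    # retained Fourier modes per axis
    W = rng.standard_normal((MODES, MODES)) + 1j * rng.standard_normal((MODES, MODES))

    def spectral_layer(field):
        """Apply the same spectral weights to a 2D field of any resolution."""
        fhat = np.fft.rfft2(field)
        out = np.zeros_like(fhat)
        out[:MODES, :MODES] = W * fhat[:MODES, :MODES]   # mix low-frequency modes only
        return np.fft.irfft2(out, s=field.shape)

    def sample_field(n):
        """The same smooth, band-limited field sampled on an n x n grid."""
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        xx, yy = np.meshgrid(x, x)
        return np.sin(xx) * np.cos(2.0 * yy) + 0.5 * np.sin(3.0 * xx + yy)

    coarse = spectral_layer(sample_field(64))    # "training" resolution
    fine = spectral_layer(sample_field(256))     # zero-shot evaluation at 4x resolution

    # Subsampled fine output matches the coarse output to machine precision.
    print(np.abs(coarse - fine[::4, ::4]).max())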

    Using Machine Learning for Model Physics: an Overview

    In this overview, a generic mathematical object (a mapping) is introduced, and its relation to model physics parameterization is explained. Machine learning (ML) tools that can be used to emulate and/or approximate mappings are introduced. Applications of ML to emulate existing parameterizations, to develop new parameterizations, to enforce physical constraints, and to control the accuracy of the developed applications are described. Some ML approaches that allow developers to go beyond the standard parameterization paradigm are discussed. Comment: 50 pages, 3 figures, 1 table.
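
    Two of the central notions above, emulating a parameterization viewed as a mapping and steering the emulator towards physical constraints, can be sketched as follows. The "reference parameterization", the toy conservation law sum(y) = 0, and all sizes are illustrative assumptions, not any particular scheme from the overview.

    # Hypothetical sketch: (1) emulate a parameterization, viewed as a mapping from a
    # model-state column x to tendencies y, with a small network; (2) add a penalty
    # so the emulator respects a physical constraint (here a toy conservation law,
    # sum(y) = 0). Everything below is synthetic.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    W_ref = 0.3 * torch.randn(10, 10)

    def reference_parameterization(x):
        """Stand-in for an existing physics scheme; its outputs sum to zero."""
        y = torch.tanh(x @ W_ref)
        return y - y.mean(dim=1, keepdim=True)

    emulator = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)

    x_train = torch.randn(4096, 10)
    y_train = reference_parameterization(x_train)

    for step in range(2000):
        pred = emulator(x_train)
        mse = ((pred - y_train) ** 2).mean()
        conservation = (pred.sum(dim=1) ** 2).mean()   # soft physical constraint
        loss = mse + 10.0 * conservation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print("MSE:", mse.item(), " constraint violation:", conservation.item())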

    Machine Learning in Nuclear Physics

    Advances in machine learning methods provide tools that have broad applicability in scientific research. These techniques are being applied across the diversity of nuclear physics research topics, leading to advances that will facilitate scientific discoveries and societal applications. This Review gives a snapshot of nuclear physics research which has been transformed by machine learning techniques. Comment: Comments are welcome.

    Simulating 3D Radiation Transport, a modern approach to discretisation and an exploration of probabilistic methods

    Light, or electromagnetic radiation in general, is a profound and invaluable resource to investigate our physical world. For centuries it was the only source of information about the Universe beyond our planet, and it is still the main one. With high-resolution spectroscopic imaging, we can identify numerous atoms and molecules, and can trace their physical and chemical environments in unprecedented detail. Furthermore, radiation plays an essential role in several physical and chemical processes, ranging from radiative pressure, heating, and cooling, to chemical photo-ionisation and photo-dissociation reactions. As a result, almost all astrophysical simulations require a radiative transfer model. Unfortunately, accurate radiative transfer is very computationally expensive. Therefore, in this thesis, we aim to improve the performance of radiative transfer solvers, with a particular emphasis on line radiative transfer. First, we review the classical work on accelerated lambda iterations and acceleration of convergence, and we propose a simple but effective improvement to the ubiquitously used Ng-acceleration scheme. Next, we present the radiative transfer library Magritte: a formal solver with a ray-tracer that can handle structured and unstructured meshes as well as smoothed-particle data. To mitigate the computational cost, it is optimised to efficiently utilise multi-node and multi-core parallelism as well as GPU offloading. Furthermore, we demonstrate a heuristic algorithm that can reduce typical input models for radiative transfer by an order of magnitude, without significant loss of accuracy. This strongly suggests the existence of more efficient representations for radiative transfer models. To investigate this, we present a probabilistic numerical method for radiative transfer that naturally allows for uncertainty quantification, providing us with a mathematical framework to study the trade-off between computational speed and accuracy. Although we cannot yet construct optimal representations for radiative transfer problems, we point out several ways in which this method can lead to more rigorous optimisation.
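
    For reference, the sketch below implements the classical Ng acceleration that the thesis builds on: every few iteration steps, the last four iterates are combined into an extrapolated estimate by solving a small least-squares problem. It follows the standard textbook formulation applied to a toy linear fixed-point problem, and does not include the improvement proposed in the thesis.

    # Plain-numpy sketch of classical Ng acceleration for an iteration x <- F(x):
    # every few steps the last four iterates are combined into an extrapolated
    # estimate via a small least-squares problem (standard textbook form; this is
    # not the improved scheme proposed in the thesis).
    import numpy as np

    def ng_accelerate(x1, x2, x3, x4):
        """Combine the four most recent iterates (x1 newest, x4 oldest)."""
        q1 = x1 - 2.0 * x2 + x3
        q2 = x1 - x2 - x3 + x4
        q3 = x1 - x2
        A = np.array([[q1 @ q1, q1 @ q2],
                      [q2 @ q1, q2 @ q2]])
        rhs = np.array([q1 @ q3, q2 @ q3])
        a, b = np.linalg.lstsq(A, rhs, rcond=None)[0]
        return (1.0 - a - b) * x1 + a * x2 + b * x3

    # Toy linear fixed-point problem standing in for a slowly converging lambda iteration.
    M = np.diag(np.linspace(0.5, 0.95, 50))
    c = 0.05 * np.ones(50)
    F = lambda x: M @ x + c

    x = np.zeros(50)
    history = []
    for _ in range(60):
        x = F(x)
        history.append(x)
        if len(history) == 4:                # extrapolate once four iterates exist
            x = ng_accelerate(history[3], history[2], history[1], history[0])
            history = []                     # restart the cycle after each jump
    print("max residual |F(x) - x|:", np.max(np.abs(F(x) - x)))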

    Predicting atmospheric optical properties for radiative transfer computations using neural networks

    The radiative transfer equations are well known, but radiation parametrizations in atmospheric models are computationally expensive. A promising tool for accelerating parametrizations is the use of machine learning techniques. In this study, we develop a machine learning-based parametrization for the gaseous optical properties by training neural networks to emulate a modern radiation parametrization (RRTMGP). To minimize computational costs, we reduce the range of atmospheric conditions for which the neural networks are applicable and use machine-specific optimised BLAS functions to accelerate matrix computations. To generate training data, we use a set of randomly perturbed atmospheric profiles and calculate optical properties using RRTMGP. Predicted optical properties are highly accurate and the resulting radiative fluxes have average errors within 0.5 W m⁻² compared to RRTMGP. Our neural network-based gas optics parametrization is up to 4 times faster than RRTMGP, depending on the size of the neural networks. We further test the trade-off between speed and accuracy by training neural networks for the narrow range of atmospheric conditions of a single large-eddy simulation, so that smaller and therefore faster networks can achieve the desired accuracy. We conclude that our machine learning-based parametrization can speed up radiative transfer computations whilst retaining high accuracy. Comment: 13 pages, 5 figures, submitted to Philosophical Transactions.
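
    The speed of such an emulator largely comes down to a few dense matrix products that can be dispatched to an optimised BLAS. The sketch below writes the forward pass of a small gas-optics-style network explicitly as matrix multiplications over a batch of atmospheric layers; the inputs, layer sizes, activation and random weights are illustrative placeholders, not the trained networks of the paper.

    # Illustrative sketch (not the paper's trained networks): the forward pass of a
    # small gas-optics emulator over many atmospheric layers is just a few dense
    # matrix products, which numpy dispatches to an optimised BLAS. Inputs, sizes,
    # activation and the random weights below are placeholder assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    n_layers = 100_000                    # atmospheric grid cells in one batch
    n_in, n_hidden, n_gpt = 4, 64, 224    # inputs; hidden width; spectral g-points

    W1, b1 = rng.standard_normal((n_in, n_hidden)), rng.standard_normal(n_hidden)
    W2, b2 = rng.standard_normal((n_hidden, n_hidden)), rng.standard_normal(n_hidden)
    W3, b3 = rng.standard_normal((n_hidden, n_gpt)), rng.standard_normal(n_gpt)

    def predict_optical_depth(x):
        """Three GEMMs with softsign activations; one row per atmospheric layer."""
        h = x @ W1 + b1
        h = h / (1.0 + np.abs(h))
        h = h @ W2 + b2
        h = h / (1.0 + np.abs(h))
        return np.exp(h @ W3 + b3)        # exponential keeps optical depths positive

    # Columns: log-pressure, temperature [K], and two scaled gas concentrations.
    x = np.column_stack([rng.uniform(4.0, 11.5, n_layers),
                         rng.uniform(180.0, 310.0, n_layers),
                         rng.uniform(0.0, 1.0, n_layers),
                         rng.uniform(0.0, 1.0, n_layers)])
    print(predict_optical_depth(x).shape)  # (n_layers, n_gpt)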