
    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data.
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
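The image velocimetry techniques described above rest on locating the inter-frame displacement of surface features within an interrogation window. A minimal sketch of that core cross-correlation step, using NumPy on synthetic frames rather than real UAV or satellite imagery (the function name `piv_displacement` is illustrative, not from the thesis), might look like:

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Estimate the pixel displacement between two interrogation
    windows by locating the peak of their circular cross-correlation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # FFT-based cross-correlation: peak location gives the shift of b
    # relative to a (adequate for shifts small relative to the window).
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so the displacement is signed.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shift)  # (dy, dx) in pixels

# Synthetic check: circularly shift a random texture by (3, 5) pixels.
rng = np.random.default_rng(0)
window = rng.random((64, 64))
shifted = np.roll(window, (3, 5), axis=(0, 1))
print(piv_displacement(window, shifted))  # → [3 5]
```

In a full LSPIV pipeline this step would be applied per interrogation window across orthorectified frames, with pixel displacements converted to surface velocities via the frame interval and ground sampling distance.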

    Multiscale Modeling of Curing and Crack Propagation in Fiber-Reinforced Thermosets

    Owing to their lightweight potential at relatively low cost, glass-fibre-reinforced polymers are becoming increasingly important in industrial applications. They combine the high strength of glass fibres with the durability of, for example, thermoset resins. During the processing of fibre-reinforced thermosets, a chemical reaction of the resin takes place. This chemical reaction is accompanied by chemical shrinkage. In combination with thermal expansion, the material can be damaged during the manufacturing process itself. Even if the composite does not fail completely, microcracking can occur. Such damage can impair the load-bearing capacity of the component and thus its service life. Fibre-reinforced thermosets contain structures on different length scales that influence the behaviour of the overall component and must therefore be taken into account for an accurate prediction of crack formation. Understanding the mechanisms of crack formation across these length scales is therefore of great interest. Based on molecular dynamics simulations, a resin system together with a fibre surface and a sizing is considered at the nanoscale, and a systematic procedure for building a cured system is presented. A two-stage reaction, a polyurethane reaction and a radical polymerization, is modelled following an established approach. Using the fully cured system, evaluations of averaged quantities and along the normal direction of the fibre surface are carried out, allowing a spatial analysis of the fibre-sizing-resin interface. On the micro length scale, the individual fibres are spatially resolved. Using continuum mechanics and the phase-field method, failure during the curing process is investigated at this length scale.
In materials science, the phase-field method is widely used to model crack propagation. It is able to describe complex fracture behaviour and shows good agreement with analytical solutions. Nevertheless, most models are restricted to homogeneous systems, and only a few approaches for heterogeneous systems exist. Existing models are discussed and a new model for heterogeneous systems is derived, based on an established phase-field approach for crack propagation. The new model, with multiple crack order parameters, is able to predict quantitative crack growth where the established models fail to reproduce the analytical solution. Furthermore, an improved homogenization scheme based on the mechanical jump conditions is applied to the novel model, improving crack prediction even for differing stiffnesses and crack resistances of the materials considered. In addition, a generator for curved fibre structures is introduced to create the digital microstructures used in curing simulations at the microscale. Subsequently, the distribution of mechanical and thermal quantities is compared for different levels of abstraction of the real microstructure and for different fibre volume fractions. Finally, the new crack propagation model is combined with the curing model, enabling the prediction of microcrack formation during the curing process of glass-fibre-reinforced UPPH resin.
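Phase-field fracture models of the kind referenced above are typically built on a regularized crack energy. A standard Ambrosio-Tortorelli-type functional, given here as a generic illustration and not necessarily the exact formulation derived in this work, reads

```latex
E(\boldsymbol{u}, \varphi) =
  \int_{\Omega} (1-\varphi)^2 \, \psi\!\big(\boldsymbol{\varepsilon}(\boldsymbol{u})\big)\,\mathrm{d}V
  + G_c \int_{\Omega} \left( \frac{\varphi^2}{2\ell} + \frac{\ell}{2}\,
    |\nabla \varphi|^2 \right) \mathrm{d}V,
```

where $\varphi \in [0,1]$ is the crack order parameter, $\psi$ the elastic energy density of the strain $\boldsymbol{\varepsilon}(\boldsymbol{u})$, $G_c$ the critical energy release rate, and $\ell$ the regularization length; cracks evolve by minimizing $E$, with the degradation factor $(1-\varphi)^2$ relaxing stresses in broken regions. Heterogeneous extensions such as the one developed in this thesis assign material-dependent $G_c$ and stiffnesses, and may use several crack order parameters.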

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Marchenko-Lippmann-Schwinger inversion

    Seismic wave reflections recorded at the Earth’s surface provide a rich source of information about the structure of the subsurface. These reflections occur due to changes in the material properties of the Earth; in the acoustic approximation, these are the density of the Earth and the velocity of seismic waves travelling through it. Therefore, there is a physical relationship between the material properties of the Earth and the reflected seismic waves that we observe at the surface. This relationship is non-linear, due to the highly scattering nature of the Earth, and to our inability to accurately reproduce these scattered waves with the low resolution velocity models that are usually available to us. Typically, we linearize the scattering problem by assuming that the waves are singly-scattered, requiring multiple reflections to be removed from recorded data at great effort and with varying degrees of success. This assumption is called the Born approximation. The equation that describes the relationship between the Earth’s properties and the fully-scattering reflection data is called the Lippmann-Schwinger equation, and this equation becomes linear if the full scattering wavefield inside the Earth is known. The development of Marchenko methods has made it possible to estimate such wavefields using only the surface reflection data and an estimate of the direct wave from the surface to each point in the Earth. Substituting the results from a Marchenko method into the Lippmann-Schwinger equation results in a linear equation that includes all orders of scattering. The aim of this thesis is to determine whether higher orders of scattering improve the linear inverse problem from data to velocities, by comparing linearized inversion under the Born approximation to the inversion of the linear Lippmann-Schwinger equation. This thesis begins by deriving the linear Lippmann-Schwinger and Born inverse problems, and reviewing the theoretical basis for Marchenko methods.
By deriving the derivative of the full scattering Green’s function with respect to the model parameters of the Earth, the gradient direction is defined for a new type of least-squares full waveform inversion, called Marchenko-Lippmann-Schwinger full waveform inversion, that uses all orders of scattering. By recreating the analytical 1D Born inversion of a boxcar perturbation by Beydoun and Tarantola (1988), it is shown that high frequency-sampling density is required to correctly estimate the amplitude of the velocity perturbation. More importantly, even when the scattered wavefield is defined to be singly-scattering and the velocity model perturbation can be found without matrix inversion, Born inversion cannot reproduce the true velocity structure exactly. When the results of analytical inversion are compared to inversions where the inverse matrices have been explicitly calculated, the analytical inversion is found to be superior. All three matrix inversion methods are found to be extremely ill-posed. With regularisation, it is possible to accurately determine the edges of the perturbation, but not the amplitude. Moving from a boxcar perturbation with a homogeneous starting velocity to a many-layered 1D model and a smooth representation of this model as the starting point, it is found that the inversion solution is highly dependent on the starting model. By optimising an iterative inversion in both the model and data domains, it is found that optimising the velocity model misfit does not guarantee improvement in the resulting data misfit, and vice versa. Comparing unregularised inversion to inversions with Tikhonov damping or smoothing applied to the kernel matrix, it is found that strong Tikhonov damping results in the most accurate velocity models.
From the consistent under-performance of Lippmann-Schwinger inversion when using Marchenko-derived Green’s functions compared to inversions carried out with true Green’s functions, it is concluded that the fallibility of Marchenko methods results in inferior inversion results. Born and Lippmann-Schwinger inversion are tested on a 2D syncline model. Due to computational limitations, using all sources and receivers in the inversion required limiting the number of frequencies to 5. Without regularisation, the model update is uninterpretable due to the presence of strong oscillations across the model. With strong Tikhonov damping, the model updates obtained are poorly scaled, have low resolution, and low amplitude oscillatory noise remains. By replacing the inversion of all sources simultaneously with single source inversions, it is possible to reinstate all frequencies within our limited computational resources. These single source model updates can be stacked similarly to migration images to improve the overall model update. As predicted by the 1D analytical inversion, restoring the full frequency bandwidth eliminates the oscillatory noise from the inverse solution. With or without regularisation, Born and Lippmann-Schwinger inversion results are found to be nearly identical. When Marchenko-derived Green’s functions are introduced, the inversion results are worse than either the Born inversion or the Lippmann-Schwinger inversion without Marchenko methods. On this basis, it is concluded that the inclusion of higher order scattering does not improve the outcome of solving the linear inverse scattering problem using currently available methods. Nevertheless, some recent developments in the methods used to solve the Marchenko equation hold promise for improving solutions in the future.
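The role Tikhonov damping plays in the inversions above can be illustrated with a small damped least-squares example. The following sketch is a generic toy problem in NumPy, not the thesis's actual kernel matrices: it solves a linear inverse problem of the form d = Gm whose unregularised normal equations are nearly singular.

```python
import numpy as np

def tikhonov_inverse(G, d, alpha):
    """Damped least squares: argmin_m ||G m - d||^2 + alpha^2 ||m||^2,
    solved via the regularised normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)

# Toy ill-posed problem: two nearly identical kernel columns make the
# unregularised normal equations almost singular.
rng = np.random.default_rng(1)
G = rng.standard_normal((50, 30))
G[:, 1] = G[:, 0] + 1e-6 * rng.standard_normal(50)
m_true = np.ones(30)
d = G @ m_true + 1e-3 * rng.standard_normal(50)

m_damped = tikhonov_inverse(G, d, alpha=0.5)
# Damping trades a small bias for stability: the estimate stays close to
# the true model instead of blowing up along the near-null direction.
print(np.abs(m_damped - m_true).max())
```

As in the thesis's experiments, the damping constant controls a trade-off: large alpha suppresses oscillatory noise in poorly constrained directions of model space at the cost of biasing (shrinking) the recovered amplitudes.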

    Learning and Control of Dynamical Systems

    Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insights into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise. In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems.
We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
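As a minimal illustration of inferring unknown dynamics from finite data (a generic ordinary-least-squares identification of a fully observable linear system, not the FALCON method or any algorithm from this thesis), one can estimate the state-transition matrices from a single noisy trajectory:

```python
import numpy as np

def estimate_dynamics(X, U):
    """Ordinary least-squares estimate of (A, B) for the linear system
    x_{t+1} = A x_t + B u_t + w_t, from states X and inputs U."""
    Z = np.hstack([X[:-1], U])   # regressors [x_t, u_t]
    Y = X[1:]                    # targets x_{t+1}
    Theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # Theta = [A^T; B^T]
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T

# Simulate a stable 2-state system driven by random exploratory inputs.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
T, x = 2000, np.zeros(2)
X, U = [x], []
for _ in range(T):
    u = rng.standard_normal(1)
    x = A @ x + B @ u + 0.01 * rng.standard_normal(2)
    X.append(x)
    U.append(u)
X, U = np.array(X), np.array(U)

A_hat, B_hat = estimate_dynamics(X, U)
print(np.linalg.norm(A_hat - A))  # error shrinks roughly as O(1/sqrt(T))
```

Finite-sample analyses of the kind studied in this thesis quantify precisely how such estimation errors, and the control performance of policies built on the estimates, scale with the trajectory length, noise level, and excitation of the inputs.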

    Quantum Optimal Transport: Quantum Couplings and Many-Body Problems

    This text is a set of lecture notes for a 4.5-hour course given at the Erdős Center (Rényi Institute, Budapest) during the Summer School "Optimal Transport on Quantum Structures" (September 19th-23rd, 2023). Lecture I introduces the quantum analogue of the Wasserstein distance of exponent 2 defined in [F. Golse, C. Mouhot, T. Paul: Comm. Math. Phys. 343 (2016), 165-205] and in [F. Golse, T. Paul: Arch. Ration. Mech. Anal. 223 (2017), 57-94]. Lecture II discusses various applications of this quantum analogue of the Wasserstein distance of exponent 2, while Lecture III discusses several of its most important properties, such as the triangle inequality and the Kantorovich duality in the quantum setting, together with some of their implications. Comment: 81 pages, 7 figures

    Effective theories of phase transitions

    In this thesis we study systems undergoing a superfluid phase transition at finite temperature and chemical potential. We construct an effective description valid at late times and long wavelengths, using both the holographic duality and the Schwinger-Keldysh formalism for non-equilibrium field theories. In particular, in chapter 2 we employ analytic techniques to find the leading dissipative corrections to the energy-momentum tensor and the electric current of a holographic superfluid, away from criticality. Our method is based on the symplectic current of Crnkovic and Witten [1] and extends previous results [2, 3]. We assume a general black hole background in the bulk, with finite charge density and scalar fields turned on. We express one-point functions of the boundary field theory solely in terms of thermodynamic quantities and data related to the black hole horizon in the bulk spacetime. Matching our results with the expected constitutive relations of superfluid hydrodynamics, we obtain analytic expressions for the five transport coefficients characterising superfluids with small superfluid velocities. In chapter 3 we examine the hydrodynamics of holographic superfluids arbitrarily close to the critical point. The main difference in this case is that, close to the critical point, the amplitude of the order parameter is an additional hydrodynamic degree of freedom and we have to include it in our effective theory. For simplicity, we choose to work in the probe limit. Utilising the symplectic current once again, we find the equations that govern the critical dynamics of the order parameter and the charge density and show that our holographic results are in complete agreement with Model F of Hohenberg and Halperin [4]. Through this process, we find analytic expressions for all the parameters of Model F, including the dissipative kinetic coefficient, in terms of thermodynamics and horizon data.
In addition, we perform various numerical checks of our analytic results. Finally, in chapter 4 we consider critical superfluid dynamics within the Schwinger-Keldysh formalism. As in chapter 3, we focus on the complex order parameter and the conserved current of the spontaneously broken global symmetry, ignoring temperature and normal fluid velocity fluctuations. We construct an effective action up to second order in the a-fields and compare the resulting stochastic system with Model F and with our holographic results of chapter 3. A crucial role in this construction is played by a time-independent gauge symmetry, called the “chemical shift symmetry”. We also integrate out the amplitude mode and obtain the conventional equations of superfluid hydrodynamics, valid for energies well below the gap of the amplitude mode.

    Asymptotics of stochastic learning in structured networks
