
    Facing the storm: Assessing global storm tide hazards in a changing climate

    Coastal flooding is one of the most frequent natural hazards around the globe and can have devastating societal impacts. It is caused by extreme storm tides, which are composed of storm surges and tides on top of mean sea levels. Due to socio-economic developments in the world’s coastal zones, the impacts of coastal floods have increased in recent decades. In addition, projected changes in the frequency and intensity of storms, as well as sea level rise due to climate change, are expected to increase the coastal flood hazard. These trends show that it is crucial to further improve coastal flood hazard assessments to support coastal flood management. The influence of tropical cyclones (TCs) on storm tide level return periods (RPs) remains poorly understood: available meteorological data do not adequately capture the structure of TCs, and the records are too short to compute RPs accurately, because TCs are low-probability events. Existing large-scale coastal flood hazard assessments assume an infinite flood duration and do not capture the physical hydrodynamic processes that drive coastal flooding. Furthermore, future changes in the frequency and intensity of TCs and extratropical cyclones (ETCs) are often neglected in coastal flood hazard assessments. The goal of this thesis is therefore to improve global storm tide modelling through better representation of TC-related extremes and to enable dynamic flood mapping in both current and future climates. The research in this thesis contributes to ongoing efforts in the coastal risk community to better understand coastal flood hazards and risks on a global scale. The COAST-RP dataset can help identify hotspot regions most prone to coastal flooding; such information can then be used to determine where more detailed local-scale coastal flood hazard assessments are most needed. Combining data from COAST-RP with the HGRAPHER method allows us to move away from planar towards more advanced dynamic inundation methods, improving the accuracy of coastal flood hazard maps. Lastly, the developed TC intensity Δ method, which is applicable to different kinds of future-climate TC datasets, opens the door to studying the future intensity of TCs and the corresponding storm surges by placing them in a future climate.
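    A hedged sketch may help fix ideas here: estimating return levels from annual maxima is the standard extreme value workflow that RP calculations build on. The data, distribution choice, and parameters below are invented for illustration and are not the COAST-RP methodology.

```python
# Minimal sketch: return levels from annual maxima via a GEV fit.
# Synthetic data; illustrative only, not the COAST-RP method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 40-year record of annual maximum storm tide levels (metres).
annual_maxima = 2.0 + rng.gumbel(loc=0.5, scale=0.3, size=40)

# Fit a generalized extreme value (GEV) distribution to the annual maxima.
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
for T in (10, 100, 1000):
    level = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:5d}-year return level: {level:.2f} m")
```

    The short-record problem the abstract describes shows up directly in such a fit: with only a few decades of observations, the 1000-year quantile is a long extrapolation, which is why synthetic TC datasets are needed.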

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Learning and Control of Dynamical Systems

    Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insights into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise. In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems. We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
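    The identification step the abstract builds on, inferring unknown dynamics from trajectory data, can be illustrated with an ordinary least-squares sketch for a linear system; this is a generic textbook example with invented parameters, not the thesis's FALCON method.

```python
# Minimal sketch: learn x_{t+1} = A x_t + B u_t + w_t from one trajectory
# by least squares. Generic illustration, not the thesis's algorithms.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 3, 2, 500
A_true = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B_true = rng.standard_normal((n, m))

# Roll the system out under random exciting inputs.
X = np.zeros((T + 1, n))
U = rng.standard_normal((T, m))
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.standard_normal(n)

# Stack regressors z_t = [x_t, u_t] and solve min_theta ||X_next - Z theta||.
Z = np.hstack([X[:-1], U])                      # shape (T, n + m)
theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = theta[:n].T, theta[n:].T

print("||A_hat - A|| =", np.linalg.norm(A_hat - A_true))
print("||B_hat - B|| =", np.linalg.norm(B_hat - B_true))
```

    Finite-sample theory of the kind the thesis develops quantifies how these estimation errors shrink with the trajectory length T in the presence of the process noise w_t.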

    Dynamical analysis of mushroom bifurcations: deterministic and stochastic approaches

    Bachelor's final project in Mathematics (Treballs Finals de Grau de Matemàtiques), Facultat de Matemàtiques, Universitat de Barcelona, 2023. Advisors: Àlex Haro and Josep Sardanyés i Cayuela.
    Bifurcation theory has found contemporary applications in synthetic biology, particularly in the field of biosensors [43]. The aim of this thesis is to expand upon the framework presented in the referenced paper, which introduces a model depicting the behavior of mushroom bifurcations. The mushroom bifurcation diagram exhibits four saddle-node bifurcations and involves bistability. Our goal is to develop a comprehensive mathematical formalism that can effectively describe this behavior, both deterministically and stochastically. By doing so, we seek to uncover additional properties of the transients exhibited by these biosensors, specifically focusing on optimizing their timer effect, memory properties, and signaling capabilities. We introduce stochastic dynamics by considering intrinsic noise in the molecular processes, allowing us to investigate the slowing-down effects in the vicinity of the saddle-node and transcritical bifurcations. To conduct this study, we use three fundamental mathematical tools, which can be regarded as the backbone of our analysis: the Morse Lemma, the Weierstrass Preparation Theorem and, most notably, the Implicit Function Theorem. Through this rigorous analysis, we aim to enhance our understanding of the underlying dynamics of these biosensors and facilitate their further improvement and utilization in various applications.
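    The slowing-down effect near a saddle-node mentioned above can be seen already in the one-dimensional normal form; a minimal deterministic sketch, with made-up parameters, follows.

```python
# Minimal sketch: passage time through the "ghost" of a saddle-node.
# Normal form dx/dt = mu + x^2; for small mu > 0 the bottleneck near
# x = 0 dominates and the passage time grows like pi / sqrt(mu).
import numpy as np

def passage_time(mu, x0=-10.0, x1=10.0, dt=1e-3):
    """Explicit Euler integration of dx/dt = mu + x^2 from x0 to x1."""
    x, t = x0, 0.0
    while x < x1:
        x += dt * (mu + x * x)
        t += dt
    return t

for mu in (0.1, 0.01, 0.001):
    print(f"mu={mu:g}: T={passage_time(mu):7.1f},  "
          f"pi/sqrt(mu)={np.pi / np.sqrt(mu):7.1f}")
```

    The printed times approach pi/sqrt(mu) as mu shrinks, which is the scaling law behind the timer effect the thesis aims to optimize.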

    On the interaction of stochastic forcing and regime dynamics

    Stochastic forcing can sometimes stabilise atmospheric regime dynamics, increasing their persistence. This counter-intuitive effect has been observed in geophysical models of varying complexity, and here we investigate the mechanisms underlying stochastic regime dynamics in a conceptual model. We use a six-mode truncation of a barotropic β-plane model, featuring transitions between analogues of zonal and blocked flow conditions, and identify mechanisms similar to those seen previously in work on low-dimensional random maps. Namely, we show that a geometric mechanism, here relating to monotonic instability growth, allows for asymmetric action of symmetric perturbations on regime lifetime, and that random scattering can “trap” the flow in more stable regions of phase space. We comment on the implications for understanding more complex atmospheric systems.
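    A double-well SDE is a common minimal stand-in for such bistable regime dynamics; the sketch below (not the paper's six-mode model) shows how the noise amplitude sets the mean regime lifetime.

```python
# Minimal sketch: residence times in the bistable SDE
# dx = (x - x^3) dt + sigma dW, integrated with Euler-Maruyama.
# Illustrative stand-in only, not the six-mode barotropic model.
import numpy as np

rng = np.random.default_rng(1)

def mean_residence_time(sigma, n_switch=50, dt=1e-2):
    """Average time spent in a well between transitions."""
    x, t, t_enter, regime, times = 1.0, 0.0, 0.0, 1.0, []
    while len(times) < n_switch:
        x += dt * (x - x**3) + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        # Count a transition only once x is well inside the other well.
        if np.sign(x) != regime and abs(x) > 0.5:
            times.append(t - t_enter)
            t_enter, regime = t, np.sign(x)
    return np.mean(times)

for sigma in (0.5, 0.6, 0.7):
    print(f"sigma={sigma}: mean residence time ~ {mean_residence_time(sigma):.1f}")
```

    In this symmetric toy model weaker noise always lengthens residence times; the paper's point is that in less symmetric regime geometries, symmetric perturbations can act asymmetrically on regime lifetimes.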

    Asymptotics of stochastic learning in structured networks


    Learning and interpreting the galaxy-halo connection in cosmic simulations

    In modern galactic astronomy, cosmological simulations and observational galaxy surveys work hand in hand, offering valuable insights into the historical evolution of galaxies on both cosmological scales and an individual basis. As dark matter halos constitute a significant portion of the mass in galaxies, clusters, and cosmic structures, they profoundly impact the properties of galaxies. This relationship is known as the galaxy-halo connection. Galaxies are complex objects that require computationally intensive modelling. Accurately and consistently modelling galaxy-halo coevolution across all scales thus presents a challenge, and compromises are usually made between simulation size and resolution. However, it is possible to conduct pure dark matter simulations on larger scales, requiring a fraction of the computational power of complete simulations. As observational surveys expand in size and detail, simulations of this magnitude become crucial in supporting their findings, surpassing the limitations of galaxy simulations. In this thesis, I present a machine learning model which encodes the galaxy-halo connection within a cosmohydrodynamical simulation. This model predicts the star formation and metallicity of galaxies over time from properties of their halos and cosmic environment. These predictions are used to emulate observational data using spectral synthesis models, and the model is subsequently applied to a large dark matter simulation. Through these predictions, the model replicates the correlations responsible for galaxy evolution, as well as observable quantities reflecting the galaxy-halo connection, with similar results in dark matter simulations. The model computes accurate galaxy-halo statistics and reveals important physical relationships; specifically, variables associated with halo accretion influence a galaxy's mass and star formation, while environmental variables are linked to its metallicity. While the predictions from dark matter simulations are reasonably accurate, they are affected by the absence of baryonic processes, the resolution of the simulation, and the calculation of halo properties.
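    The kind of mapping described, from halo and environment properties to galaxy properties, can be illustrated with a generic regression sketch on synthetic data; the features, target, and model below are placeholders, not the thesis's actual pipeline.

```python
# Minimal sketch: encode a toy galaxy-halo connection with a regressor.
# Synthetic features and target; not the thesis's model or simulations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
# Illustrative halo features: log halo mass, accretion proxy, local density.
log_mhalo = rng.uniform(10, 14, n)
accretion = rng.normal(0, 1, n)
density = rng.normal(0, 1, n)
X = np.column_stack([log_mhalo, accretion, density])

# Toy "star formation" target: mass-dominated, with an accretion term + scatter.
y = 0.8 * log_mhalo + 0.3 * accretion + 0.1 * rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out halos:", round(model.score(X_te, y_te), 3))
print("feature importances:", model.feature_importances_.round(3))
```

    Feature importances of this kind are one simple way to surface relationships such as the accretion-star formation link the abstract reports.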

    An extended generalized Markov model for the spread risk and its calibration by using filtering techniques in Solvency II framework

    The Solvency II regulatory regime requires insurance and reinsurance companies to calculate a capital requirement, the Solvency Capital Requirement (SCR), based on a market-consistent evaluation of the Basic Own Funds probability distribution forecast over a one-year time horizon. This work proposes an extended generalized Markov model for rating-based pricing of risky securities, for spread risk assessment and management within the Solvency II framework under an internal model or partial internal model. The model builds on Jarrow, Lando and Turnbull (1997), Lando (1998) and Gambaro et al. (2018), and describes credit rating transitions and the default process using an extension of a time-homogeneous Markov chain together with two subordinator processes. This approach allows credit spreads for different rating classes to be modelled simultaneously and lets credit spreads fluctuate randomly even when the rating does not change. The estimation methodologies used are consistent with the scope of the work and of the proposed model, i.e., pricing of defaultable bonds and calculation of the SCR for the spread risk sub-module, and with the market-consistency principle required by Solvency II. For this purpose, time-series estimation techniques known as filtering techniques are used, which allow the model parameters to be estimated jointly under both the real-world probability measure (necessary for risk assessment) and the risk-neutral probability measure (necessary for pricing). Specifically, an appropriate set of time series of credit spread term structures, differentiated by economic sector and rating class, is used. The proposed model, in its final version, returns excellent results in terms of goodness of fit to historical data, and the projected data are consistent with historical data and the Solvency II framework. The filtering techniques, in the different configurations used in this work (particle filtering with Gauss-Legendre quadrature, particle filtering with the Sequential Importance Resampling algorithm, and the Kalman filter), proved to be an effective and flexible tool for estimating the proposed models, able to handle the high computational complexity of the problem addressed.
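    The rating-migration building block of such models can be sketched with a plain time-homogeneous Markov chain; the one-year transition matrix below is invented for illustration and omits the subordinator extension the paper proposes.

```python
# Minimal sketch: simulate rating migrations over classes {AAA, AA, A, BBB, D}
# with a time-homogeneous Markov chain. The matrix P is made up, not estimated.
import numpy as np

rng = np.random.default_rng(3)
ratings = ["AAA", "AA", "A", "BBB", "D"]
P = np.array([
    [0.900, 0.080, 0.015, 0.004, 0.001],
    [0.020, 0.900, 0.060, 0.015, 0.005],
    [0.005, 0.040, 0.900, 0.045, 0.010],
    [0.001, 0.005, 0.050, 0.900, 0.044],
    [0.000, 0.000, 0.000, 0.000, 1.000],   # default (D) is absorbing
])

def simulate_path(start=0, years=10):
    """One rating trajectory sampled year by year from P."""
    state, path = start, [ratings[start]]
    for _ in range(years):
        state = rng.choice(len(ratings), p=P[state])
        path.append(ratings[state])
    return path

print(simulate_path())
# Ten-year cumulative default probability from each initial rating:
P10 = np.linalg.matrix_power(P, 10)
for r, p in zip(ratings, P10[:, -1]):
    print(f"{r:>3}: {p:.3%}")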

    Collective variables between large-scale states in turbulent convection

    The dynamics of a confined turbulent convection flow are dominated by multiple long-lived macroscopic circulation states, which are visited successively by the system in a Markov-type hopping process. In the present work, we analyze the short transition paths between these subsequent macroscopic system states by a data-driven learning algorithm that extracts the low-dimensional transition manifold and the related new coordinates, which we term collective variables, in the state space of the complex turbulent flow. We thereby transfer and extend concepts for conformation transitions in stochastic microscopic systems, such as the dynamics of macromolecules, to a deterministic macroscopic flow. Our analysis is based on long-term direct numerical simulation trajectories of turbulent convection in a closed cubic cell at a Prandtl number Pr = 0.7 and Rayleigh numbers Ra = 10^6 and 10^7, for a time lag of 10^5 convective free-fall time units. The simulations resolve vortices and plumes of all physically relevant scales, resulting in a state space spanned by more than 3.5 million degrees of freedom. The transition dynamics between the large-scale circulation states can be captured by the transition manifold analysis with only two collective variables, which implies a reduction of the data dimension by a factor of more than a million. Our method demonstrates that cessations and subsequent reversals of the large-scale flow are unlikely in the present setup and thus paves the way to the development of efficient reduced-order models of the macroscopic complex nonlinear dynamical system. Comment: 24 pages, 12 figures, 1 table.
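    The dimensionality reduction at the heart of this analysis, compressing millions of degrees of freedom into a couple of collective variables, can be illustrated with a generic diffusion-map sketch on synthetic data; this is a stand-in, not the paper's transition manifold algorithm.

```python
# Minimal sketch: extract low-dimensional coordinates from high-dimensional
# snapshots with diffusion maps. Synthetic data (a noisy circle embedded in
# 50 dimensions); a generic stand-in for transition manifold analysis.
import numpy as np

rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 400)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
embed = rng.standard_normal((2, 50))            # random high-dim embedding
X = circle @ embed + 0.05 * rng.standard_normal((400, 50))

# Gaussian kernel and row-normalized Markov matrix on the snapshots.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)
M = K / K.sum(axis=1, keepdims=True)

# Leading nontrivial eigenvectors serve as the collective variables.
vals, vecs = np.linalg.eig(M)
order = np.argsort(-vals.real)
cv = vecs[:, order[1:3]].real                    # skip the constant mode
print("top eigenvalues:", vals.real[order[:4]].round(3))
print("collective-variable coordinates shape:", cv.shape)
```

    Here two coordinates suffice to parametrize the hidden circle, a toy analogue of capturing the large-scale circulation states of the flow with only two collective variables.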