    B-Spline based uncertainty quantification for stochastic analysis

    The consideration of uncertainties has become inevitable in state-of-the-art science and technology, and research in the field of uncertainty quantification has gained much importance over the last decades. The main focus of scientists is the identification of uncertain sources, the determination and ranking of uncertainties, and the investigation of their influence on system responses. Polynomial chaos expansion, among others, is suitable for this purpose and has asserted itself as a versatile and powerful tool in various applications. In recent years, its combination with dimension reduction methods has been pursued intensively to support the processing of high-dimensional input variables. The underlying difficulty is known as the curse of dimensionality, and overcoming it would be considered a milestone in uncertainty quantification. This is the starting point of the present thesis, which investigates spline spaces, a natural extension of polynomial spaces, in the field of uncertainty quantification. The newly developed method, 'spline chaos', employs the more complex but thereby more flexible structure of splines to tackle harder real-world applications where polynomial chaos fails. Ordinarily, the bases of polynomial chaos expansions are orthogonal polynomials, which are replaced by B-spline basis functions in this work. Convergence of the new method is proved and illustrated by numerical examples, which are extended to an accuracy analysis with multi-dimensional input. Moreover, by solving several stochastic differential equations, it is shown that the spline chaos is a generalization of multi-element Legendre chaos and superior to it. Finally, the spline chaos is applied to the solution of partial differential equations, resulting in a stochastic Galerkin isogeometric analysis that contributes to the efficient uncertainty quantification of elliptic partial differential equations. A general framework, in combination with an a priori error estimate for the expected solution, is provided.
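    Since the method replaces orthogonal polynomials by B-spline basis functions, the following minimal sketch (not code from the thesis; the clamped knot vector and degree are illustrative choices) evaluates a B-spline basis via the Cox-de Boor recursion and checks the partition-of-unity property that makes such bases attractive for a chaos expansion.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Evaluate the i-th B-spline basis function of degree p at x
    via the Cox-de Boor recursion (half-open support intervals)."""
    if p == 0:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (x - knots[i]) / denom * bspline_basis(i, p - 1, knots, x)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - x) / denom * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Clamped knot vector on [0, 1], degree 2 -> five basis functions
p = 2
knots = np.array([0.0, 0.0, 0.0, 0.33, 0.66, 1.0, 1.0, 1.0])
x = np.linspace(0.0, 0.999, 200)  # stay below 1 (half-open intervals)
B = np.array([bspline_basis(i, p, knots, x) for i in range(len(knots) - p - 1)])

# B-spline bases form a partition of unity on the parameter domain
print(np.allclose(B.sum(axis=0), 1.0))
```

    The same recursion extends to tensor-product bases for multi-dimensional input, which is the setting of the accuracy analysis described above.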

    Probabilistic micromechanical spatial variability quantification in laminated composites

    SN and SS are grateful for the support provided through the Lloyd’s Register Foundation Centre. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research. Peer reviewed. Postprint.

    Exact Open Quantum System Dynamics – Investigating Environmentally Induced Entanglement

    When calculating the dynamics of a quantum system, including the effect of its environment is highly relevant, since virtually any real quantum system is exposed to environmental influences. It has turned out that the widely used perturbative approaches to treat such so-called open quantum systems have severe limitations. Furthermore, since current experiments have implemented strong system-environment interactions, the non-perturbative regime is far from academic. Therefore, determining the exact dynamics of an open quantum system is of fundamental relevance. The hierarchy of pure states (HOPS) formalism provides such an exact approach. Its novel and detailed derivation, as well as several numerical aspects, constitute the main methodical part of this work. Motivated by fundamental issues, but also by the practical relevance for real-world devices exploiting quantum effects, the entanglement dynamics of two qubits in contact with a common environment is investigated extensively. The HOPS formalism is based on the exact stochastic description of open quantum system dynamics in terms of the non-Markovian quantum state diffusion (NMQSD) theory. The distinguishing and numerically beneficial features of the HOPS approach are its stochastic nature, the implicit treatment of the environmental dynamics and, related to this, the enhanced statistical convergence (importance sampling), as well as the fact that only pure states have to be propagated. In order to claim that the HOPS approach is exact, we develop schemes to ensure that the numerical errors can be made arbitrarily small. This includes the sampling of Gaussian stochastic processes, the multi-exponential representation of the bath correlation function, and the truncation of the hierarchy. Moreover, we incorporate thermal effects on the reduced dynamics by a stochastic Hermitian contribution to the system Hamiltonian.
In particular, for strong system-environment couplings this is very beneficial for the HOPS. To confirm the accuracy assertion, we utilize the seemingly simple, however non-trivial, spin-boson model to show agreement between the HOPS and other methods. The comparison shows the HOPS method’s versatile applicability over a broad range of model parameters, including weak and strong coupling to the environment, as well as zero and high temperatures. With the gained knowledge that the HOPS method is versatile and accurately applicable, we investigate the specific case of two qubits while focusing on their entanglement dynamics. It is well known that entanglement, the relevant property when exploiting quantum effects in fields like quantum computation, communication and metrology, is fragile when exposed to environmental noise. On the other hand, a common environment can also mediate an effective interaction between the two parties, featuring entanglement generation. In this work we elucidate the interplay between these competing effects, focusing on several different aspects. For the perturbative (weak-coupling) regime we highlight the difficulties inherent to the frequently used rotating wave approximation (RWA), an approximation often applied to ensure positivity of the reduced state for all times. We show that these difficulties are best overcome by simply omitting the RWA. The seemingly unphysical dynamics can still be used to approximate the exact entanglement dynamics very well. Furthermore, the influence of the renormalizing counter term is investigated. It is expected that under certain conditions (adiabatic regime) the generation of entanglement is suppressed by the presence of the counter term. It is shown, however, that for a deep sub-Ohmic environment this expectation fails. Leaving the weak-coupling regime, we show that the generation of entanglement due to the influence of the common environment is a general property of the open two-spin system.
Even for non-zero temperatures it is demonstrated that entanglement can still be generated and may last for arbitrarily long times. Finally, we determine the maximum of the steady-state entanglement as a function of the coupling strength and show how the known delocalization-to-localization phase transition is reflected in the long-time entanglement dynamics. All these results require an exact treatment of the open quantum system dynamics and thus contribute to the fundamental understanding of the entanglement dynamics of open quantum systems.
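    One numerical building block named above, the sampling of Gaussian stochastic processes with a prescribed bath correlation function, can be sketched with a spectral construction (an illustrative scheme, not the thesis's implementation; the Ohmic-like spectral density J is a hypothetical choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def J(w):
    """Hypothetical Ohmic-like spectral density with exponential cutoff."""
    return w * np.exp(-w)

# Midpoint frequency grid for the spectral discretization
dw = 0.1
w = np.arange(dw / 2, 15.0, dw)
t = np.linspace(0.0, 5.0, 50)

# Discretized bath correlation function alpha(t) = (1/pi) * int J(w) e^{-i w t} dw
alpha = (J(w)[None, :] * np.exp(-1j * np.outer(t, w))).sum(axis=1) * dw / np.pi

# Complex Gaussian processes z(t) = sum_k sqrt(J(w_k) dw / pi) xi_k e^{-i w_k t},
# where the xi_k are independent standard complex Gaussians
n_samples = 10000
xi = (rng.standard_normal((n_samples, w.size))
      + 1j * rng.standard_normal((n_samples, w.size))) / np.sqrt(2)
amp = np.sqrt(J(w) * dw / np.pi)
z = (xi * amp) @ np.exp(-1j * np.outer(w, t))

# By construction <z(t) z*(0)> reproduces alpha(t) up to Monte Carlo error
emp = (z * z[:, :1].conj()).mean(axis=0)
print(np.max(np.abs(emp - alpha)))  # small (statistical error only)
```

    The multi-exponential representation of alpha and the hierarchy truncation mentioned in the abstract are further refinements on top of such noise sampling.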

    Low rank surrogates for polymorphic fields with application to fuzzy-stochastic partial differential equations

    We consider a general form of fuzzy-stochastic PDEs depending on the interaction of probabilistic and non-probabilistic ("possibilistic") influences. Such a combined modelling of aleatoric and epistemic uncertainties can, for instance, be applied beneficially in an engineering context for real-world applications, where probabilistic modelling and expert knowledge have to be accounted for. We examine existence and well-definedness of polymorphic PDEs in appropriate function spaces. The fuzzy-stochastic dependence is described in a high-dimensional parameter space, thus easily leading to an exponential complexity in practical computations. To alleviate this severe obstacle in practice, a compressed low-rank approximation of the problem formulation and the solution is derived. This is based on the Hierarchical Tucker format, which is constructed from solution samples by a non-intrusive tensor reconstruction algorithm. The performance of the proposed model order reduction approach is demonstrated with two examples. One of these is the ubiquitous groundwater flow model with Karhunen-Loève coefficient field, which is generalized by a fuzzy correlation length.
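    The low-rank idea can be illustrated in a few lines (using a plain Tucker/HOSVD truncation rather than the Hierarchical Tucker format of the paper, and a synthetic parametric tensor in place of a PDE solution):

```python
import numpy as np

# Synthetic "solution" tensor u[x, p1, p2]: smooth parameter dependence
# yields rapidly decaying singular values, so low ranks suffice.
x = np.linspace(0.0, 1.0, 40)
p1 = np.linspace(0.5, 1.5, 30)
p2 = np.linspace(0.1, 0.9, 30)
U = np.sin(np.pi * x)[:, None, None] * np.exp(-np.outer(p1, p2))[None, :, :]

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Tucker approximation via truncated higher-order SVD."""
    factors = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        Q, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(Q[:, :r])
    core = T
    for mode, F in enumerate(factors):
        core = mode_product(core, F.T, mode)
    return core, factors

core, factors = hosvd(U, (2, 3, 3))
Ur = core
for mode, F in enumerate(factors):
    Ur = mode_product(Ur, F, mode)

# 40*30*30 = 36000 entries compressed to 2*3*3 + 40*2 + 30*3 + 30*3 = 278 numbers
rel_err = np.linalg.norm(Ur - U) / np.linalg.norm(U)
print(rel_err)  # tiny for this smooth tensor
```

    Hierarchical formats apply the same truncation idea recursively along a dimension tree, which is what keeps the storage from growing exponentially with the number of parameters.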

    Quadrature methods for elliptic PDEs with random diffusion

    In this thesis, we consider elliptic boundary value problems with random diffusion coefficients. Such equations arise in many engineering applications, for example, in the modelling of subsurface flows in porous media, such as rocks. To describe the subsurface flow, it is convenient to use Darcy's law. The key ingredient in this approach is the hydraulic conductivity. In most cases, this hydraulic conductivity is approximated from a discrete number of measurements and, hence, it is common to endow it with uncertainty, i.e. to model it as a random field. This random field is usually characterized by its mean field and its covariance function. Naturally, this randomness propagates through the model, so that the solution is a random field as well. The present thesis is concerned with the effective computation of statistical quantities of this random solution, like the expectation, the variance, and higher-order moments. In order to compute these quantities, a suitable representation of the random field which describes the hydraulic conductivity needs to be computed from the mean field and the covariance function. This is realized by the Karhunen-Loève expansion, which separates the spatial variable and the stochastic variable. In general, the number of random variables and spatial functions used in this expansion is infinite and needs to be truncated appropriately. The number of random variables which are required depends on the smoothness of the covariance function and grows with the desired accuracy. Since the solution also depends on these random variables, each moment of the solution appears as a high-dimensional Bochner integral over the image space of the collection of random variables. This integral has to be approximated by quadrature methods, where each function evaluation corresponds to a PDE solve.
In this thesis, the Monte Carlo, quasi-Monte Carlo, Gaussian tensor product, and Gaussian sparse grid quadratures are analyzed to deal with this high-dimensional integration problem. In the first part, the necessary regularity requirements of the integrand and its powers are provided in order to guarantee convergence of the different methods. It turns out that all powers of the solution depend, like the solution itself, anisotropically on the different random variables, which means in this case that there is a decaying dependence on the different random variables. This dependence can be used to overcome, at least up to a certain extent, the curse of dimensionality of the quadrature problem. This is reflected in the proofs of the convergence rates of the different quadrature methods, which can be found in the second part of this thesis. The last part is concerned with multilevel quadrature approaches that keep the computational cost low. As mentioned earlier, we need to solve a partial differential equation for each quadrature point. The common approach is to apply a finite element approximation scheme on a refinement level which corresponds to the desired accuracy. Hence, the total computational cost is given by the number of quadrature points times the cost of computing one finite element solution on a relatively high refinement level. The multilevel idea is to use a telescoping-sum decomposition of the quantity of interest with respect to different spatial refinement levels and to use quadrature methods with different accuracies for each summand. Roughly speaking, the multilevel approach spends a lot of quadrature points on a low spatial refinement level and only a few on the higher refinement levels. This reduces the computational complexity but requires further regularity of the integrand, which is proven for the problems considered in this thesis.
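    The telescoping-sum idea can be demonstrated on a toy problem, where a trapezoidal rule of increasing resolution stands in for the finite element solver (all choices here are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def trap(vals, x):
    """Composite trapezoidal rule along the last axis."""
    return np.sum((vals[..., 1:] + vals[..., :-1]) / 2 * np.diff(x), axis=-1)

def q_level(y, level):
    """Level-l 'solver': trapezoidal approximation of q(y) = int_0^1 cos(y*x) dx
    on 2**level + 1 points (a stand-in for a finite element solve)."""
    x = np.linspace(0.0, 1.0, 2 ** level + 1)
    return trap(np.cos(np.outer(y, x)), x)

# Multilevel Monte Carlo uses the telescoping sum
#   E[q_L] = E[q_0] + sum_{l=1}^{L} E[q_l - q_{l-1}],
# with many cheap samples on coarse levels and few on fine ones.
L = 6
N = [4000 // 2 ** l + 10 for l in range(L + 1)]  # decreasing sample counts
est = 0.0
for l in range(L + 1):
    y = rng.uniform(0.0, 1.0, N[l])
    correction = q_level(y, l) - (q_level(y, l - 1) if l > 0 else 0.0)
    est += correction.mean()

# Reference: E_y[q(y)] = int_0^1 sin(y)/y dy, computed to high accuracy
yy = np.linspace(1e-9, 1.0, 200001)
ref = trap(np.sin(yy) / yy, yy)
print(abs(est - ref))  # small: Monte Carlo error plus fine-level bias
```

    The corrections q_l - q_{l-1} shrink with the level, which is exactly the extra regularity the multilevel analysis has to establish for the PDE setting.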

    Surrogate Groundwater Models

    The spatially and temporally variable parameters and inputs of complex groundwater models typically result in long runtimes, which hinder comprehensive analyses. These analyses typically involve calibration, sensitivity analysis, and uncertainty propagation. Surrogate modelling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. A faster model enables more model runs, critical for understanding models through methods such as sensitivity and uncertainty analysis. Three broad categories of surrogate models are data-driven, projection-based, and hierarchical. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto basis vectors. In hierarchical or multi-fidelity methods, the surrogate is created by simplifying the representation of the physical system, for example by ignoring certain processes or reducing the numerical resolution. A surrogate method can only be of practical value if it significantly reduces model runtimes, robustly emulates the output, and can be implemented simply. A gamut of surrogate techniques has been applied to groundwater and similar simulators based on partial differential equations, but the practicability of all approaches is not clear. Among the promising approaches are Polynomial Chaos Expansions (PCE), Multi-fidelity Stochastic Collocation (MFSC) and modern Deep Learning (DL) Neural Networks (NN). These are investigated in this thesis. They represent the three categories above as projection-based (depending on implementation), multi-fidelity, and data-driven methods, respectively. However, all three methods are black-box in that they do not require re-implementation of the complex model, making them relevant to practitioners.
PCEs are an efficient and statistically rigorous approach with a number of well-developed methods for their calibration. In the framework we present, they are suited to accelerating sensitivity analysis and uncertainty propagation of models with a moderate number of parameters. MFSC overcomes many shortcomings of other surrogate methods by employing a lower-resolution model as the surrogate. The approach is shown to faithfully emulate spatially and temporally distributed parameters, and allows simple parallelization. While traditional NN are not the most promising surrogate technique, there is potential in the DL software frameworks associated with the recent boom in their popularity. This promise extends not just to efficient uncertainty analysis and data assimilation for groundwater modelling, but to numerical modelling in general. Emulation using PCE, MFSC, or DL as demonstrated in this thesis will add value to practical groundwater modelling by not only reducing model runtimes but also deepening understanding of the underlying model. The PCE approach iteratively selects the underlying model samples, training a surrogate with less than 1% error in under 200 model runs. The MFSC method achieves similar accuracy with fewer than 30 full model runs. The DL approaches are less efficient, requiring 500 model runs. However, they emulate the full spatially distributed output of the underlying model and can be applied in situations with hundreds of uncertain parameters. Further contributions of this work include two improvements to the MFSC algorithm, reducing surrogate error by two orders of magnitude. We identify a gap between existing research in applied DL, theory-rich applied mathematics, and the increasing quantity of spatially distributed data. We create a new surrogate form which combines PCE theory with a DL implementation, and develop another which captures physical aquifer properties during the training of a state-of-the-art DL architecture.
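    A generic sketch of the PCE surrogate idea (least-squares regression on a total-degree Legendre basis for a hypothetical smooth two-parameter model; the thesis's iterative sample-selection scheme is not reproduced here):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(0)

def model(theta):
    """Hypothetical 'groundwater model' output with two uniform parameters."""
    return np.exp(0.3 * theta[:, 0]) * np.sin(1.5 * theta[:, 1] + 1.0)

def pce_design(theta, degree):
    """Design matrix of a total-degree Legendre chaos (uniform inputs on [-1, 1])."""
    cols = []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            cols.append(Legendre.basis(i)(theta[:, 0]) * Legendre.basis(j)(theta[:, 1]))
    return np.column_stack(cols)

# Fit the expansion coefficients by least squares on random training samples
degree = 6
train = rng.uniform(-1, 1, (200, 2))
coef, *_ = np.linalg.lstsq(pce_design(train, degree), model(train), rcond=None)

# Validate on held-out samples
test_pts = rng.uniform(-1, 1, (1000, 2))
pred = pce_design(test_pts, degree) @ coef
rel_err = np.linalg.norm(pred - model(test_pts)) / np.linalg.norm(model(test_pts))
print(rel_err)  # well below 1% for this smooth toy model
```

    Once fitted, the surrogate is evaluated by a single matrix-vector product, which is what makes the many model runs needed for sensitivity and uncertainty analysis affordable.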

    Decision-making with Gaussian processes: sampling strategies and Monte Carlo methods

    We study Gaussian processes and their application to decision-making in the real world. We begin by reviewing the foundations of Bayesian decision theory and show how these ideas give rise to methods such as Bayesian optimization. We investigate practical techniques for carrying out these strategies, with an emphasis on estimating and maximizing acquisition functions. Finally, we introduce pathwise approaches to conditioning Gaussian processes and demonstrate key benefits of representing random variables in this manner. Open Access.