8 research outputs found


    Hybrid PDE solver for data-driven problems and modern branching

    The numerical solution of large-scale PDEs, such as those occurring in data-driven applications, unavoidably requires powerful parallel computers and tailored parallel algorithms to make the best possible use of them. In fact, considerations about the parallelization and scalability of realistic problems are often critical enough to warrant acknowledgement in the modelling phase. The purpose of this paper is to spread awareness of the Probabilistic Domain Decomposition (PDD) method, a fresh approach to the parallelization of PDEs with excellent scalability properties. The idea exploits the stochastic representation of the PDE and its approximation via Monte Carlo in combination with deterministic high-performance PDE solvers. We describe the ingredients of PDD and its applicability in the scope of data science. In particular, we highlight recent advances in stochastic representations for nonlinear PDEs using branching diffusions, which have significantly broadened the scope of PDD. We envision this work as a dictionary giving large-scale PDE practitioners references on the very latest algorithms and techniques of a non-standard, yet highly parallelizable, methodology at the interface of deterministic and probabilistic numerical methods. We close this work with an invitation to the fully nonlinear case and open research questions.

    Comment: 23 pages, 7 figures; Final SMUR version; To appear in the European Journal of Applied Mathematics (EJAM)
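The stochastic representation that PDD builds on can be illustrated with a minimal sketch (not the paper's code, and using a toy one-dimensional problem): by Feynman-Kac, the heat equation u_t = (1/2) u_xx with initial data f satisfies u(t, x) = E[f(x + W_t)], so pointwise values of the solution can be estimated by independent Monte Carlo simulations. This embarrassingly parallel pointwise step is what makes the method scale; the function name `heat_mc` is ours, not the paper's.

```python
import numpy as np

def heat_mc(f, t, x, n_samples=100_000, rng=None):
    """Monte Carlo estimate of u(t, x) = E[f(x + W_t)] for the heat
    equation u_t = (1/2) u_xx with initial condition f."""
    rng = np.random.default_rng(rng)
    w = rng.normal(0.0, np.sqrt(t), size=n_samples)  # W_t ~ N(0, t)
    return f(x + w).mean()

# Example: f(x) = x^2 has the exact solution u(t, x) = x^2 + t.
est = heat_mc(lambda y: y**2, t=0.5, x=1.0, n_samples=200_000, rng=0)
```

In the full PDD method such Monte Carlo estimates are computed only at subdomain interfaces; the subdomain interiors are then solved independently (hence in parallel) by deterministic solvers.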

    Mean exit times and the multilevel Monte Carlo method

    Numerical methods for stochastic differential equations are relatively inefficient when used to approximate mean exit times. In particular, although the basic Euler–Maruyama method has weak order equal to one for approximating the expected value of the solution, the order reduces to one half when it is used in a straightforward manner to approximate the mean value of a (stopped) exit time. Consequently, the widely used standard approach of combining an Euler–Maruyama discretization with a Monte Carlo simulation leads to a computationally expensive procedure. In this work, we show that the multilevel approach developed by Giles [Oper. Res., 56 (2008), pp. 607–617] can be adapted to the mean exit time context. In order to justify the algorithm, we analyze the strong error of the discretization method in terms of its ability to approximate the exit time. We then show that the resulting multilevel algorithm improves the expected computational complexity by an order of magnitude, in terms of the required accuracy. Numerical results are provided to illustrate the analysis.
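A hedged sketch of the idea (not the paper's algorithm, and for the simplest possible problem): estimate the mean exit time of dX = dW from (-1, 1) started at 0, whose exact value is 1. Each multilevel correction couples an Euler-Maruyama path of step h with one of step h/2 through shared Brownian increments, so the corrections have small variance and the telescoping sum reproduces the finest-level estimator cheaply. All function names here are ours.

```python
import numpy as np

def coupled_exit_times(h, x0=0.0, rng=None):
    """Exit times (tau_coarse, tau_fine) of coupled Euler-Maruyama paths
    of dX = dW from (-1, 1), with steps h and h/2, driven by the same
    Brownian increments."""
    rng = np.random.default_rng(rng)
    xc, xf = x0, x0
    tc = tf = None
    t = 0.0
    while tc is None or tf is None:
        dw1 = rng.normal(0.0, np.sqrt(h / 2))
        dw2 = rng.normal(0.0, np.sqrt(h / 2))
        if tf is None:                  # fine path: two half-steps
            xf += dw1
            if abs(xf) >= 1.0:
                tf = t + h / 2
        if tf is None:
            xf += dw2
            if abs(xf) >= 1.0:
                tf = t + h
        if tc is None:                  # coarse path: one full step
            xc += dw1 + dw2
            if abs(xc) >= 1.0:
                tc = t + h
        t += h
    return tc, tf

def mlmc_mean_exit_time(h0=0.1, levels=3, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    # Level 0: plain estimator at the coarsest step h0.
    est = np.mean([coupled_exit_times(h0, rng=rng)[0] for _ in range(n)])
    # Levels l >= 1: telescoping corrections E[tau_{h/2} - tau_h].
    h = h0
    for _ in range(levels):
        pairs = [coupled_exit_times(h, rng=rng) for _ in range(n)]
        est += np.mean([tf - tc for tc, tf in pairs])
        h /= 2
    return est

tau_hat = mlmc_mean_exit_time()
```

Note the estimate still carries the discretization bias of the finest level (discrete monitoring systematically overestimates exit times), which is precisely the weak-order issue the paper analyzes; a practical implementation would also choose the number of samples per level adaptively, as in Giles's original method.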

    Multilevel Monte Carlo Approximation of Distribution Functions and Densities


    Statistical and numerical methods for diffusion processes with multiple scales

    In this thesis we address the problem of data-driven coarse-graining, i.e. the process of inferring simplified models, which describe the evolution of the essential characteristics of a complex system, from available data (e.g. experimental observation or simulation data). Specifically, we consider the case where the coarse-grained model can be formulated as a stochastic differential equation. The main part of this work is concerned with data-driven coarse-graining when the underlying complex system is characterised by processes occurring across two widely separated time scales. It is known that in this setting commonly used statistical techniques fail to obtain reasonable estimators for parameters in the coarse-grained model, due to the multiscale structure of the data. To enable reliable data-driven coarse-graining techniques for diffusion processes with multiple time scales, we develop a novel estimation procedure which decisively relies on combining techniques from mathematical statistics and numerical analysis. We demonstrate, both rigorously and by means of extensive simulations, that this methodology yields accurate approximations of coarse-grained SDE models. In the final part of this work, we then discuss a systematic framework to analyse and predict complex systems using observations. Specifically, we use data-driven techniques to identify simple, yet adequate, coarse-grained models, which in turn allow us to study statistical properties that cannot be investigated directly from the time series. The value of this generic framework is exemplified through two seemingly unrelated data sets of real world phenomena.
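For orientation, here is the kind of "commonly used statistical technique" the thesis starts from (a hedged sketch, not the thesis's estimator): the standard quasi-likelihood drift estimate for an Ornstein-Uhlenbeck process dX = -theta X dt + sigma dW from discrete observations. On genuinely multiscale data this naive estimator is biased, which is exactly the failure mode the thesis addresses; below we only verify it on clean single-scale data. Function names are ours.

```python
import numpy as np

def simulate_ou(theta, sigma, h, n, x0=0.0, seed=0):
    """Sample an OU path at spacing h using the exact transition density."""
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * h)
    s = sigma * np.sqrt((1 - a**2) / (2 * theta))  # exact one-step stddev
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = a * x[i] + s * rng.normal()
    return x

def drift_estimate(x, h):
    """Least-squares estimate of theta from increments:
    theta_hat = -sum(X_i dX_i) / (h sum(X_i^2))."""
    dx = np.diff(x)
    return -np.sum(x[:-1] * dx) / (h * np.sum(x[:-1] ** 2))

x = simulate_ou(theta=1.0, sigma=1.0, h=0.01, n=50_000)
theta_hat = drift_estimate(x, h=0.01)
```

When the observations instead come from a fast/slow system, this estimator converges to the wrong (un-homogenized) drift unless the data are suitably subsampled or corrected, which motivates the combined statistical/numerical procedure developed in the thesis.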

    On the Techniques for Efficient Sampling, Uncertainty Quantification and Robust Control of Stochastic Multiscale Systems

    In order to better understand and leverage natural phenomena to design materials and devices (e.g. biomedical coatings, catalytic reactors, thin conductive films for microprocessors, etc.), stochastic multiscale models have been developed that explicitly model the interactions and feedbacks between the electronic, atomistic/molecular, mesoscopic and macroscopic scales. These models attempt to use the accurate results from the fine scales to inform industrially relevant domain sizes and thereby improve product quality through optimal control actions during industrial manufacturing. However, the presence of stochastic calculations increases the computational cost of such modeling approaches and makes their direct application in uncertainty quantification, optimization and online control challenging. Uncertainty cannot be ignored in simulations; otherwise there will be model-plant mismatch and a loss in performance. The added computational intensity necessitates the development of more efficient computational methods that can leverage the accurate predictions of stochastic multiscale models in the industrial setting where accuracy, efficiency and speed are of utmost importance. A lot of research has been done in the area of stochastic multiscale models over the past few decades, but some gaps in knowledge remain. For instance, the performance of traditional uncertainty quantification techniques such as power series expansions (PSE) and polynomial chaos expansions (PCE) has not been compared in the context of stochastic multiscale systems. Furthermore, a novel sampling technique called Multilevel Monte Carlo (MLMC) sampling emerged from the field of computational finance with the aim of preserving accuracy of estimation of model observables while decreasing the required computational cost. However, its applications in the field of chemical engineering, and in particular for stochastic multiscale systems, remain limited.
Also, advances in computing power have increased the usefulness of machine learning methods such as Artificial Neural Networks (ANNs). Because of their flexibility, accuracy and computational efficiency, ANNs are experiencing a resurgence of research interest, but their application to stochastic multiscale chemical engineering systems remains limited at the moment. This thesis aims to fill the identified gaps in knowledge. The results of the conducted research indicate that PCE can be more computationally efficient and accurate than PSE for stochastic multiscale systems, but it may be vulnerable to the effects of stochastic noise. MLMC sampling provides an attractive advantage over the heuristic methods for uncertainty propagation in stochastic multiscale systems because it allows one to estimate the level of noise in the observables. However, the stochastic noise imposes a limit on the maximum achievable MLMC accuracy, which was not observed for the continuous systems that were originally used in MLMC development. ANNs appear to be a very promising method for online model predictive control of stochastic multiscale systems because of their computational efficiency, accuracy and robustness to large disturbances not seen in the training data.
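To make the PCE comparison above concrete, here is a minimal, hedged sketch of a polynomial chaos expansion for a scalar output f(xi) with xi ~ N(0, 1) (a stand-in for an expensive multiscale model output): expand f in probabilists' Hermite polynomials via Gauss-Hermite quadrature and read the mean and variance off the coefficients. This is the generic textbook construction, not the thesis's implementation.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

def pce_coeffs(f, order):
    """Coefficients c_k of f(xi) = sum_k c_k He_k(xi), xi ~ N(0, 1),
    computed by Gauss-Hermite (He) quadrature."""
    x, w = hermegauss(order + 1)          # nodes/weights for weight e^{-x^2/2}
    w = w / np.sqrt(2 * np.pi)            # normalize to the N(0,1) density
    coeffs = []
    for k in range(order + 1):
        he_k = hermeval(x, [0] * k + [1])            # He_k at the nodes
        ck = np.sum(w * f(x) * he_k) / factorial(k)  # <f, He_k> / ||He_k||^2
        coeffs.append(ck)
    return coeffs

f = lambda xi: xi**2                      # toy "model": mean 1, variance 2
c = pce_coeffs(f, order=4)
mean = c[0]                               # E[f] = c_0
var = sum(c[k] ** 2 * factorial(k) for k in range(1, 5))  # Var[f]
```

The noise-sensitivity finding above corresponds to the quadrature in `pce_coeffs` being applied to noisy evaluations of f: stochastic simulation noise contaminates every coefficient, which is one reason the thesis pairs such expansions with variance-aware sampling like MLMC.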