Publications Server of the Weierstrass Institute for Applied Analysis and Stochastics
Hellinger--Kantorovich gradient flows: Global exponential decay of entropy functionals
We investigate a family of gradient flows of positive and probability measures, focusing on the Hellinger--Kantorovich (HK) geometry, which unifies the transport mechanism of Otto--Wasserstein and the birth-death mechanism of Hellinger (or Fisher--Rao). A central contribution is a complete characterization of the global exponential decay behavior of entropy functionals under Otto--Wasserstein and Hellinger-type gradient flows. In particular, for the more challenging analysis of HK gradient flows on positive measures---where the typical log-Sobolev arguments fail---we develop a specialized shape-mass decomposition that enables new analytical results. Our approach also leverages Polyak--Łojasiewicz-type functional inequalities and a careful extension of classical dissipation estimates. These findings provide a unified and complete theoretical framework for gradient flows and underpin applications in computational algorithms for statistical inference, optimization, and machine learning.
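The decay mechanism behind results of this kind can be sketched generically (this is the standard dissipation argument, not the paper's specific HK analysis; the functional $\mathcal{F}$, the metric slope $|\partial\mathcal{F}|$, and the constant $\lambda$ are placeholder symbols): if the flow dissipates the functional at the rate of its squared metric slope and a Polyak--Łojasiewicz-type inequality holds, Grönwall's lemma yields global exponential decay,

```latex
\frac{d}{dt}\,\mathcal{F}(\rho_t) \;=\; -\,|\partial\mathcal{F}|^2(\rho_t)
\;\le\; -\,2\lambda\,\bigl(\mathcal{F}(\rho_t)-\inf\mathcal{F}\bigr)
\quad\Longrightarrow\quad
\mathcal{F}(\rho_t)-\inf\mathcal{F} \;\le\; e^{-2\lambda t}\,\bigl(\mathcal{F}(\rho_0)-\inf\mathcal{F}\bigr).
```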
First- and second-order optimality conditions in the sparse optimal control of Cahn--Hilliard systems
This paper deals with the sparse distributed control of viscous and nonviscous Cahn--Hilliard systems. We report on results concerning first-order necessary and second-order sufficient optimality conditions that have recently been established by the authors. The analysis covers both cases in which the nonlinear double-well potential governing the evolution is of regular or logarithmic type. A major difficulty originates from the sparsity-enhancing term in the cost functional, which is typically nondifferentiable.
Out-of-core Constrained Delaunay Tetrahedralizations for Large Scenes
Tetrahedralization algorithms are used in many applications, such as ray tracing and finite element methods. For most of these applications, constrained tetrahedralization algorithms are chosen because they preserve the input triangles. The constrained tetrahedralization algorithms developed so far may suffer from a lack of memory. We propose an out-of-core, near-Delaunay constrained tetrahedralization algorithm that uses the divide-and-conquer paradigm to decrease memory usage. If the expected memory usage is below a user-defined memory limit, we tetrahedralize using TetGen. Otherwise, we subdivide the set of input points into two halves and recursively apply the same idea to each half. Compared with TetGen, our algorithm tetrahedralizes the point clouds using less memory but takes more time and generates tetrahedralizations that do not satisfy the Delaunay criterion at the boundaries of the merged regions. We quantify the error using the aspect-ratio metric. The difference between the tetrahedralizations that our approach produces and the Delaunay tetrahedralization is small, and the results are acceptable for most applications.
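The divide-and-conquer recursion described above can be sketched as follows (a structural sketch only; `tetrahedralize` stands in for a TetGen call, and `estimate_memory` and `merge` are hypothetical callbacks for the memory estimator and the boundary-merging step):

```python
def out_of_core_tetrahedralize(points, mem_limit,
                               tetrahedralize, estimate_memory, merge):
    """Recurse until the estimated memory usage fits the user-defined limit."""
    if estimate_memory(points) <= mem_limit:
        return tetrahedralize(points)      # in-core case, e.g. a TetGen call
    # split the point set into two halves along one axis
    # (sorting tuples lexicographically splits along the first coordinate)
    points = sorted(points)
    mid = len(points) // 2
    left = out_of_core_tetrahedralize(points[:mid], mem_limit,
                                      tetrahedralize, estimate_memory, merge)
    right = out_of_core_tetrahedralize(points[mid:], mem_limit,
                                       tetrahedralize, estimate_memory, merge)
    # merged boundary tetrahedra may violate the Delaunay criterion
    return merge(left, right)
```

With stub callbacks this shows the control flow: each leaf of the recursion handles a chunk small enough to fit the memory budget, and only the merge step can introduce non-Delaunay boundary tetrahedra.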
Robustness in Stochastic Filtering and Maximum Likelihood Estimation for SDEs
We consider complex stochastic systems in continuous time and space where the objects of interest are modelled via stochastic differential equations, in general high-dimensional and with nonlinear coefficients. The extraction of quantifiable information from such systems has a long history and many aspects. We shall focus here on the perhaps most classical problems in this context: the filtering problem for nonlinear diffusions and the problem of parameter estimation, also for nonlinear and multidimensional diffusions. More specifically, we return to the question of robustness, first raised in the filtering community in the mid-1970s: will it be true that the conditional expectation of some observable of the signal process, given an observation (sample) path, depends continuously on the latter? Sadly, the answer here is no, as simple counterexamples show. Clearly, this is an unhappy state of affairs for users who effectively face an ill-posed situation: close observations may lead to vastly different predictions. A similar question can be asked in the context of (maximum likelihood) parameter estimation for diffusions. Some (apparently novel) counterexamples show that, here again, the answer is no. Our contribution (Crisan et al., Ann Appl Probab 23(5):2139–2160, 2013; Diehl et al., A Levy-area between Brownian motion and rough paths with applications to robust non-linear filtering and RPDEs, 2013, arXiv:1301.3799; Diehl et al., Pathwise stability of likelihood estimators for diffusions via rough paths, 2013, arXiv:1311.1061) changed the answer to yes; in other words, well-posedness is restored, provided one is willing or able to regard observations as rough paths in the sense of T. Lyons.
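For orientation, the filtering problem discussed above can be stated in its standard textbook form (the coefficients $b$, $\sigma$, $h$ are generic, not taken from the works cited): given a signal $X$ and an observation $Y$ with

```latex
dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t, \qquad
dY_t = h(X_t)\,dt + dW_t,
\qquad
\pi_t(f) \;=\; \mathbb{E}\bigl[f(X_t)\,\big|\,Y_s,\ 0\le s\le t\bigr],
```

the robustness question is whether the map from the observation path $(Y_s)_{s\le t}$ to the conditional expectation $\pi_t(f)$ is continuous.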
Ratio limits and simulation algorithms for the Palm version of stationary iterated tessellations
Distributional properties and a simulation algorithm for the Palm version of stationary iterated tessellations are considered. In particular, we study the limit behavior of functionals related to Cox--Voronoi cells (such as typical shortest path lengths) if either the intensity γ₀ of the initial tessellation or the intensity γ₁ of the component tessellation converges to 0. We develop an explicit description of the Palm version of Poisson--Delaunay tessellations (PDT) which provides a new direct simulation algorithm for the typical Cox--Voronoi cell based on PDT. It allows us to simulate the Palm version of stationary iterated tessellations where either the initial or the component tessellation is a PDT, and it can furthermore be used to show numerically that the qualitative and quantitative behavior of certain functionals related to Cox--Voronoi cells strongly depends on the type of the underlying iterated tessellation.
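The tessellations above are driven by Poisson-type point processes. As a minimal building block (this is the textbook simulation of a homogeneous Poisson point process on a window, not the paper's Palm-version algorithm), one draws a Poisson-distributed count and places that many i.i.d. uniform points:

```python
import random

def poisson_point_process(gamma, width=1.0, height=1.0, rng=random):
    """Simulate a homogeneous Poisson point process of intensity gamma
    on a [0, width] x [0, height] window."""
    mean = gamma * width * height
    # draw N ~ Poisson(mean) by counting unit-rate exponential arrivals before `mean`
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    # conditional on N = n, the points are i.i.d. uniform on the window
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]
```

Iterated tessellations are then built by running such a process (or a derived tessellation) once for the initial frame and again, independently, inside each cell.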
The latent variable proximal point algorithm for variational problems with inequality constraints
The latent variable proximal point (LVPP) algorithm is a framework for solving infinite-dimensional variational problems with pointwise inequality constraints. The algorithm is a saddle point reformulation of the Bregman proximal point algorithm. At the continuous level, the two formulations are equivalent, but the saddle point formulation is more amenable to discretization because it introduces a structure-preserving transformation between a latent function space and the feasible set. Working in this latent space is much more convenient for enforcing inequality constraints than working in the feasible set, as discretizations can employ general linear combinations of suitable basis functions, and nonlinear solvers can involve general additive updates. LVPP yields numerical methods with observed mesh-independence for obstacle problems, contact, fracture, plasticity, and others besides; in many cases, for the first time. The framework also extends to more complex constraints, providing means to enforce convexity in the Monge--Ampère equation and handling quasi-variational inequalities, where the underlying constraint depends implicitly on the unknown solution. In this paper, we describe the LVPP algorithm in a general form and apply it to twelve problems from across mathematics.
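In generic form (the symbols here are placeholders, not the paper's notation), the Bregman proximal point iteration underlying LVPP reads

```latex
u^{k+1} \;=\; \operatorname*{arg\,min}_{u \in K}\; J(u) \;+\; \frac{1}{\alpha_k}\, D_\Phi\bigl(u, u^k\bigr),
\qquad
D_\Phi(u,v) \;=\; \Phi(u)-\Phi(v)-\bigl\langle \Phi'(v),\, u - v\bigr\rangle,
```

where $K$ is the feasible set, $\alpha_k > 0$ a step size, and $\Phi$ a convex function adapted to the constraint; the saddle point reformulation trades the constrained minimization over $K$ for an unconstrained problem in the latent variable.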
Physics-guided sequence modeling for fast simulation and design exploration of 2D memristive devices
Modeling hysteretic switching dynamics in memristive devices is computationally demanding due to coupled ionic and electronic transport processes. This challenge is particularly relevant for emerging two-dimensional (2D) devices, which feature high-dimensional design spaces that remain largely unexplored. We introduce a physics-guided modeling framework that integrates high-fidelity finite-volume (FV) charge transport simulations with a long short-term memory (LSTM) artificial neural network (ANN) to predict dynamic current-voltage behavior. Trained on physically grounded simulation data, the ANN surrogate achieves a speedup of more than four orders of magnitude compared to the FV model, while maintaining direct access to physically meaningful input parameters and high accuracy, with typical normalized errors <1%. This enables iterative tasks that were previously computationally prohibitive, including inverse modeling from experimental data, design space exploration via metric mapping and sensitivity analysis, as well as constrained multi-objective design optimization. Importantly, the framework preserves physical interpretability via access to detailed spatial dynamics, including carrier densities, vacancy distributions, and electrostatic potentials, through a direct link to the underlying FV model. Our approach establishes a scalable framework for efficient exploration, interpretation, and model-driven design of emerging 2D memristive and neuromorphic devices.
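As an illustration of the surrogate idea (a generic sequence-model sketch, not the trained network from the paper; all weights here are placeholders), an LSTM maps a voltage waveform to a current trace, with the cell state carrying the history dependence that produces hysteresis:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the four gates' parameters."""
    z = W @ x + U @ h + b                      # shape (4*H,)
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))        # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))      # output gate
    g = np.tanh(z[3*H:])                       # candidate cell state
    c = f * c + i * g                          # cell state stores history
    h = o * np.tanh(c)
    return h, c

def surrogate_current(voltage_seq, W, U, b, w_out):
    """Map a voltage waveform to a predicted current trace, step by step."""
    H = b.size // 4
    h, c = np.zeros(H), np.zeros(H)
    currents = []
    for v in voltage_seq:
        h, c = lstm_step(np.array([v]), h, c, W, U, b)
        currents.append(float(w_out @ h))      # linear readout of hidden state
    return currents
```

Because the prediction at each time step depends on the accumulated state `c`, the same voltage value can map to different currents on the up- and down-sweep of a cycle, which is the qualitative signature of a hysteresis loop.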
Mathematical Modeling of Blood Flow in the Cardiovascular System
This chapter gives a short overview of the mathematical modeling of blood flow at different resolutions, from the large vessel scale (three-dimensional, one-dimensional, and zero-dimensional modeling) to microcirculation and tissue perfusion. The chapter focuses first on the formulation of the mathematical modeling, discussing the underlying physical laws, the need for suitable boundary conditions, and the link to clinical data. Recent applications related to medical imaging are then discussed, in order to highlight the potential of computer simulation and of the interplay between modeling, imaging, and experiments to improve clinical diagnosis and treatment. The chapter ends by presenting some current challenges and perspectives.
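As a concrete instance of the zero-dimensional (lumped-parameter) models mentioned above, the classical two-element Windkessel relates inflow Q, pressure P, peripheral resistance R, and arterial compliance C via C dP/dt = Q(t) - P/R; a minimal forward-Euler sketch (parameter values are illustrative, not clinical):

```python
def windkessel(q, dt, R, C, p0=0.0):
    """Two-element Windkessel model: C dP/dt = Q(t) - P/R.
    q  : inflow samples, one per time step
    dt : time step; R, C: resistance and compliance; p0: initial pressure."""
    p, pressures = p0, []
    for qi in q:
        p += dt * (qi - p / R) / C   # forward-Euler update of the pressure
        pressures.append(p)
    return pressures
```

With zero inflow the pressure decays exponentially with time constant RC, and with constant inflow Q it relaxes to the steady state P = R*Q, which is the basic behavior such models contribute as outflow boundary conditions for 3D and 1D vessel models.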
Discrete Transparent Boundary Conditions for Multi-Band Effective Mass Approximations
This chapter is concerned with the derivation and numerical testing of discrete transparent boundary conditions (DTBCs) for stationary multi-band effective mass approximations (MEMAs). We analyze the continuous problem and introduce transparent boundary conditions (TBCs). The discretization of the differential equations is done with the help of finite difference schemes. A fully discrete approach is used to develop DTBCs that are completely reflection-free. The analytical and discrete dispersion relations are analyzed in depth, and the limitations of the numerical computations are shown. We extend the results of earlier works on DTBCs for the scalar Schrödinger equation by considering alternative finite difference schemes. The introduced schemes and their corresponding DTBCs are tested numerically on an example with a single-barrier potential. The d-band k⋅p-model is introduced as the most general MEMA. We derive DTBCs for the d-band k⋅p-model and test our results on a quantum well nanostructure.