218 research outputs found
Computation of the inverse Laplace Transform based on a Collocation method which uses only real values
We develop a numerical algorithm for inverting a Laplace transform (LT), based on Laguerre polynomial series expansion of the
inverse function under the assumption that the LT is known on the real axis only. The method belongs to the class of Collocation
methods (C-methods), and is applicable when the LT function is regular at infinity. Difficulties associated with these problems are due
to their intrinsic ill-posedness. The main contribution of this paper is to provide computable estimates of truncation, discretization,
conditioning and roundoff errors introduced by the numerical computations. Moreover, we introduce the pseudoaccuracy, which the numerical algorithm uses to provide uniformly scaled accuracy of the computed approximation for any x with respect to e^x. These estimates are then employed to dynamically truncate the series expansion; in other words, the number of terms of the series acts as the regularization parameter that provides the trade-off between errors. To validate the reliability and usability of the algorithm, experiments were carried out on several test functions.
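As a rough illustration of the real-axis collocation idea, the Python sketch below expands the inverse function in the basis phi_k(t) = exp(-t/2) L_k(t), whose Laplace transform is (s - 1/2)^k / (s + 1/2)^(k+1), and fits the coefficients by least squares from samples of the LT at real nodes. The node placement and the fixed truncation level N are illustrative assumptions; the paper's algorithm chooses the truncation dynamically from its error estimates.

import numpy as np
from scipy.special import eval_laguerre

def invert_laplace_real(F, N=16, s_nodes=None):
    # Collocation nodes on the real axis (illustrative choice).
    if s_nodes is None:
        s_nodes = 0.75 + 0.25 * np.arange(2 * N)
    k = np.arange(N)
    # Laplace transform of phi_k(t) = exp(-t/2) * L_k(t), evaluated
    # at the real nodes, gives the collocation matrix.
    A = (s_nodes[:, None] - 0.5) ** k / (s_nodes[:, None] + 0.5) ** (k + 1)
    c, *_ = np.linalg.lstsq(A, F(s_nodes), rcond=None)
    def f(t):
        t = np.asarray(t, dtype=float)
        basis = np.stack([eval_laguerre(int(j), t) for j in k], axis=-1)
        return np.exp(-t / 2.0) * (basis @ c)
    return f

# Example: F(s) = 1/(s + 1), whose inverse transform is exp(-t).
f = invert_laplace_real(lambda s: 1.0 / (s + 1.0))
print(f(np.array([0.0, 1.0, 2.0])))  # approx [1.000, 0.368, 0.135]

For F(s) = 1/(s+1) the recovered values closely match exp(-t), since the Laguerre coefficients of exp(-t) in this basis decay geometrically.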
A scalable space-time domain decomposition approach for solving large-scale nonlinear regularized inverse ill-posed problems in 4D variational data assimilation
We develop innovative algorithms for solving the strong-constraint
formulation of four-dimensional variational data assimilation in large-scale
applications. We present a space-time decomposition approach that employs
domain decomposition along both the spatial and temporal directions in the
overlapping case and involves partitioning of both the solution and the
operators. Starting from the global functional defined on the entire domain, we derive regularized local functionals on the set of subdomains, providing order reduction of both the predictive and the data assimilation models. We analyze the convergence of the algorithm and its performance in terms of
reduction of time complexity and algorithmic scalability. The numerical
experiments are carried out on the shallow water equation on the sphere
according to the setup available at the Ocean Synthesis/Reanalysis Directory
provided by Hamburg University.
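For orientation, the strong-constraint 4D-Var functional from which the decomposition starts can be written in the standard form (the notation below is the conventional one and is assumed here, not quoted from the paper):

J(x_0) = \frac{1}{2} (x_0 - x_b)^T B^{-1} (x_0 - x_b) + \frac{1}{2} \sum_{k=0}^{K} \big( H_k M_{0,k}(x_0) - y_k \big)^T R_k^{-1} \big( H_k M_{0,k}(x_0) - y_k \big)

where x_b is the background state, B and R_k the background and observation error covariance matrices, M_{0,k} the model propagator from time t_0 to t_k, H_k the observation operator and y_k the observations at t_k. The space-time decomposition restricts this functional to overlapping subdomains in space and time windows and augments each local functional with an overlap term enforcing agreement of adjacent local solutions, which is the regularization referred to above.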
Space-Time Decomposition of Kalman Filter
We present an innovative interpretation of Kalman Filter (KF, for short)
combining the ideas of Schwarz Domain Decomposition (DD) and Parallel in Time
(PinT) approaches; we call it DD-KF. In contrast to standard DD approaches, which are already incorporated in KF and other state estimation models and implement a straightforward data parallelism inside the loop over time, DD-KF partitions the whole model ab initio, including the filter equations and the dynamic model, along both the space and time directions. As a consequence, we obtain local KFs that reproduce the original filter at smaller dimensions on local domains, and the subproblems can be solved in parallel. In order to enforce
the matching of local solutions on overlapping regions, and thus to recover the same global solution as KF, the local KFs are slightly modified by adding a correction term that keeps track of the contributions of adjacent subdomains to the overlapping regions. This correction term balances localization errors along the overlapping regions, acting as a regularization constraint on the local solutions. Furthermore, such localization excludes remote observations from each analyzed location, improving the conditioning of the error covariance matrices. As the dynamic model we consider the Shallow Water equations, which can be regarded as a consistent testbed for a proof of concept of the reliability of DD-KF in monitoring and forecasting of weather systems and ocean currents.
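A minimal Python sketch of the localization side of this idea follows: each local filter performs the standard KF analysis using only the observations inside its own subdomain, and the local analyses are blended on the overlap by simple averaging. The averaging merely stands in for the paper's correction term, which exchanges contributions between adjacent subdomains in a principled way; all sizes, the overlap, and the blending are illustrative assumptions.

import numpy as np

def kf_analysis(xf, Pf, H, R, y):
    # Standard Kalman filter analysis (update) step.
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)
    xa = xf + K @ (y - H @ xf)
    Pa = (np.eye(len(xf)) - K @ H) @ Pf
    return xa, Pa

def dd_kf_analysis(xf, Pf, obs_idx, obs_val, R_diag, subdomains):
    # Each local filter assimilates only the observations falling
    # inside its own subdomain (localization); local analyses are
    # then averaged on the overlap (a crude stand-in for DD-KF's
    # correction term).
    xa = np.zeros_like(xf)
    weight = np.zeros_like(xf)
    for idx in subdomains:
        loc_obs = [j for j, p in enumerate(obs_idx) if p in idx]
        H = np.zeros((len(loc_obs), len(idx)))
        for r, j in enumerate(loc_obs):
            H[r, idx.index(obs_idx[j])] = 1.0
        R = np.diag([R_diag[j] for j in loc_obs])
        y = np.array([obs_val[j] for j in loc_obs])
        x_loc, _ = kf_analysis(xf[idx], Pf[np.ix_(idx, idx)], H, R, y)
        xa[idx] += x_loc
        weight[idx] += 1.0
    return xa / weight

# Toy example: 6 state variables, two overlapping subdomains.
xf = np.zeros(6); Pf = np.eye(6)
sub = [[0, 1, 2, 3], [2, 3, 4, 5]]
print(dd_kf_analysis(xf, Pf, obs_idx=[1, 4], obs_val=[1.0, -1.0],
                     R_diag=[0.1, 0.1], subdomains=sub))

In DD-KF proper, the local filters are also partitioned along the time direction and the correction term guarantees exact agreement with the global KF, which the simple averaging above does not.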
Parallel framework for dynamic domain decomposition of data assimilation problems: a case study on Kalman Filter algorithm
We focus on Partial Differential Equation (PDE)-based Data Assimilation (DA) problems solved by means of variational approaches and the Kalman filter algorithm. Recently, we presented a Domain Decomposition framework (we call it DD-DA, for short) performing a decomposition of the whole physical domain along the space and time directions, joining the ideas of Schwarz's methods and parallel-in-time approaches. For effective parallelization of DD-DA algorithms, the computational load assigned to subdomains must be equally distributed. Usually, the computational cost is proportional to the amount of data entities assigned to partitions. Good-quality partitioning also requires the volume of communication during the calculation to be kept to a minimum. In order to deal with DD-DA problems where the observations are nonuniformly distributed and generally sparse, in the present work we employ a parallel load-balancing algorithm based on adaptive and dynamic definition of the DD boundaries, aimed at balancing the workload according to data location. We call it DyDD. As the numerical model underlying DA problems arising from the so-called discretize-then-optimize approach is the constrained least squares model (CLS), we use CLS as the reference state estimation problem and validate DyDD in different scenarios.
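The following one-dimensional Python sketch conveys the flavor of such data-driven balancing: internal subdomain boundaries are placed at quantiles of the observation positions, so that each subdomain receives roughly the same number of observations. This is only an illustration under simplifying assumptions (1D domain, static snapshot); DyDD itself redefines the boundaries adaptively and dynamically during the computation.

import numpy as np

def balanced_boundaries(obs_pos, n_sub, domain=(0.0, 1.0)):
    # Place the internal DD boundaries at quantiles of the observation
    # positions, so each subdomain gets ~len(obs_pos)/n_sub observations.
    qs = np.quantile(obs_pos, np.linspace(0.0, 1.0, n_sub + 1))
    qs[0], qs[-1] = domain  # pin the outer boundaries to the domain ends
    return qs

# Clustered observations: a uniform split would overload one subdomain.
rng = np.random.default_rng(0)
obs = np.concatenate([rng.uniform(0.0, 0.2, 80), rng.uniform(0.2, 1.0, 20)])
print(balanced_boundaries(obs, n_sub=4))
# The internal boundaries crowd into [0, 0.2], where most observations lie.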
A scalable approach for Variational Data Assimilation
Data assimilation (DA) is a methodology for combining mathematical models
simulating complex systems (the background knowledge) and measurements (the reality or
observational data) in order to improve the estimate of the system state (the forecast). DA is an inverse, ill-posed problem that usually has to handle a huge amount of data, which makes it large and computationally expensive. Here we focus on scalable methods that make DA applications feasible for huge numbers of background data and observations. We present a scalable, highly parallel algorithm for solving variational DA. We provide a mathematical formalization of this approach and study the performance of the resulting algorithm.
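For concreteness, the serial building block of variational DA is the minimization of the usual cost functional; the Python sketch below sets up and solves a toy instance. The paper's scalable, parallel formulation is not reproduced here, and all names and sizes are illustrative.

import numpy as np
from scipy.optimize import minimize

def make_var_cost(xb, B_inv, H, R_inv, y):
    # J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (Hx-y)^T R^-1 (Hx-y)
    def J(x):
        db, do = x - xb, H @ x - y
        return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do
    def grad(x):
        return B_inv @ (x - xb) + H.T @ (R_inv @ (H @ x - y))
    return J, grad

# Toy problem: 4 state variables, 2 of them observed.
xb = np.zeros(4)
B_inv = np.eye(4)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
R_inv = 10.0 * np.eye(2)
y = np.array([1.0, -1.0])
J, grad = make_var_cost(xb, B_inv, H, R_inv, y)
res = minimize(J, xb, jac=grad, method="L-BFGS-B")
print(res.x)  # analysis drawn toward the observations at components 0 and 2

A scalable version distributes the state and the sum over observations across subdomains, in the spirit of the DD-DA framework described above.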
Driving NEMO Towards Exascale: Introduction of a New Software Layer in the NEMO Stack Software
This paper addresses scientific challenges related to high-level implementation strategies that lead NEMO to make effective use of the opportunities offered by exascale systems. We consider two software modules as proofs of concept: the Sea Surface Height equation solver and the Variational Data Assimilation system, which are components of the NEMO ocean model (OPA). Advantages arising from the introduction of consolidated scientific libraries in NEMO are highlighted: such advantages concern both the improvement of software quality (in terms of software quality parameters such as robustness, portability, resilience, etc.) and the reduction of software development time.
- …