18 research outputs found

    Approximate iterations for structured matrices

    Important matrix-valued functions f(A) include, e.g., the inverse A^{-1}, the square root A^{1/2} and the sign function. Their evaluation for large matrices arising from PDEs is not an easy task and requires techniques that exploit appropriate structure of the matrices A and f(A) (often f(A) possesses this structure only approximately). However, intermediate matrices arising during the evaluation may lose the structure of the initial matrix, which would make the computations inefficient or even infeasible. The main result of this paper is that an iterative fixed-point-like process for the evaluation of f(A) can be transformed, under certain general assumptions, into another process which preserves the convergence rate and benefits from the underlying structure. It is shown how this result applies to matrices in a tensor format with bounded tensor rank and to the structure of the hierarchical matrix technique. We demonstrate our results by verifying all requirements in the case of the iterative computation of A^{-1} and A^{1/2}.
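
    As a concrete illustration of such a fixed-point process, the sketch below runs the Newton-Schulz iteration X_{k+1} = X_k(2I - A X_k) for A^{-1}, with a placeholder truncation operator standing in for the structure-preserving projection (e.g. recompression to bounded tensor rank or to the hierarchical-matrix format). This is a hedged toy example in NumPy, not the paper's implementation.

```python
# Minimal sketch: Newton-Schulz fixed-point iteration for A^{-1} with a
# (here trivial) truncation step applied after every iteration.
import numpy as np

def truncate(X):
    # Placeholder for a structure-preserving projection, e.g. recompression
    # to bounded tensor rank or to the hierarchical-matrix format.
    return X

def approx_inverse(A, tol=1e-10, max_iter=100):
    n = A.shape[0]
    # Scaled initial guess X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        X = truncate(X @ (2 * I - A @ X))      # fixed-point step + truncation
        if np.linalg.norm(I - A @ X, np.inf) < tol:
            break
    return X

A = np.diag(np.arange(1.0, 6.0)) + 0.01 * np.random.rand(5, 5)
X = approx_inverse(A)
print(np.allclose(X, np.linalg.inv(A), atol=1e-6))
```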

    Low-rank Linear Fluid-structure Interaction Discretizations

    Fluid-structure interaction models involve parameters that describe the solid and the fluid behavior. In simulations, there is often a need to vary these parameters to examine the behavior of a fluid-structure interaction model for different solids and different fluids. For instance, a shipping company wants to know how the material a ship's hull is made of interacts with fluids at different Reynolds and Strouhal numbers before the building process takes place. Also, the behavior of such models for solids with different properties is considered before the prototype phase. A parameter-dependent linear fluid-structure interaction discretization provides approximations for a bundle of different parameters in one step. Such a discretization with respect to different material parameters leads to a large block-diagonal system matrix that is equivalent to a matrix equation as discussed in [Kressner, Tobler 2011]. The unknown is then a matrix which can be approximated using a low-rank approach that represents the iterate by a tensor. This paper discusses a low-rank GMRES variant and a truncated variant of the Chebyshev iteration. Bounds for the error resulting from the truncation operations are derived. Numerical experiments show that such truncated methods applied to parameter-dependent discretizations provide approximations with relative residual norms smaller than 10^{-8} within a twentieth of the time used by individual standard approaches. Comment: 30 pages, 7 figures
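
    The sketch below illustrates the general idea behind such truncated iterations for matrix equations: a plain Richardson iteration for a Lyapunov-type equation A X + X A^T = C whose iterate is re-truncated to low rank by an SVD after every step. The equation, step size and rank are assumptions chosen for this toy example; the paper itself uses a low-rank GMRES variant and a truncated Chebyshev iteration.

```python
# Minimal sketch of a truncated iteration: each iterate of a Richardson method
# for A X + X A^T = C is compressed back to a fixed low rank via the SVD.
import numpy as np

def svd_truncate(X, rank):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

def truncated_richardson(A, C, rank, omega, iters=300):
    X = np.zeros_like(C)
    for _ in range(iters):
        R = C - A @ X - X @ A.T                 # residual of the matrix equation
        X = svd_truncate(X + omega * R, rank)   # Richardson step + rank truncation
    return X

n = 50
A = np.diag(np.linspace(1.0, 2.0, n))           # well-conditioned SPD operator
c = np.random.rand(n, 1)
C = c @ c.T                                     # rank-1 right-hand side
X = truncated_richardson(A, C, rank=5, omega=0.4)
print(np.linalg.norm(C - A @ X - X @ A.T) / np.linalg.norm(C))
```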

    Parameter Identification in a Probabilistic Setting

    Parameter identification problems are formulated in a probabilistic language, where the randomness reflects the uncertainty about the knowledge of the true values. This setting makes it conceptually easy to incorporate new information, e.g. through a measurement, by connecting it to Bayes's theorem. The unknown quantity is modelled as a (possibly high-dimensional) random variable. Such a description has two constituents, the measurable function and the measure. One group of methods is identified as updating the measure, the other group changes the measurable function. We connect both groups with the relatively recent methods of functional approximation of stochastic problems, and, especially in combination with the second group of methods, introduce a new procedure which does not need any sampling and hence works completely deterministically. It also seems to be the fastest and most reliable when compared with other methods. We show by example that it also works for highly nonlinear, non-smooth problems with non-Gaussian measures. Comment: 29 pages, 16 figures
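
    A minimal sketch of a sampling-free, Kalman-type linear update acting directly on the coefficients of a functional (spectral) representation, in the spirit of the deterministic procedure mentioned above. The scalar setting, the linear observation model and all numbers are assumptions made for illustration, not the paper's algorithm.

```python
# Minimal sketch: deterministic, Kalman-type update applied to the coefficients
# of the unknown in an orthonormal functional basis (mean first, then fluctuations).
import numpy as np

q_coeff = np.array([1.0, 0.5, 0.1])   # prior coefficients: mean, then fluctuation terms
H = 2.0                               # linear(ized) observation operator, y = H*q + noise
sigma_eps = 0.1                       # measurement noise standard deviation
z = 2.3                               # observed value

y_coeff = H * q_coeff                 # coefficients of the predicted measurement
C_qy = q_coeff[1:] @ y_coeff[1:]      # covariance of q and y (orthonormal basis)
C_yy = y_coeff[1:] @ y_coeff[1:] + sigma_eps**2
K = C_qy / C_yy                       # Kalman-type gain

# Deterministic update: shift the mean, correct the fluctuation coefficients, and
# append one extra term carrying the influence of the measurement noise.
post_mean = q_coeff[0] + K * (z - y_coeff[0])
post_fluct = np.concatenate([q_coeff[1:] - K * y_coeff[1:], [K * sigma_eps]])
print("posterior mean:", post_mean, "posterior std:", np.linalg.norm(post_fluct))
```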

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.
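
    As a toy illustration of the kind of low-rank tensor approximation the survey covers, the sketch below computes a truncated higher-order SVD (Tucker) approximation of a function-related tensor T[i,j,k] = 1/(x_i + y_j + z_k); the function, grid and ranks are assumptions chosen for the example.

```python
# Minimal sketch: truncated HOSVD (Tucker) approximation of a function-related tensor.
import numpy as np

n, r = 40, 5
x = np.linspace(1.0, 2.0, n)
T = 1.0 / (x[:, None, None] + x[None, :, None] + x[None, None, :])

# Leading left singular vectors of each mode unfolding.
factors = []
for mode in range(3):
    unfold = np.moveaxis(T, mode, 0).reshape(n, -1)
    U, _, _ = np.linalg.svd(unfold, full_matrices=False)
    factors.append(U[:, :r])

# Core tensor and reconstruction: T ~ core x_1 U1 x_2 U2 x_3 U3.
core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
T_approx = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
print("relative error:", np.linalg.norm(T - T_approx) / np.linalg.norm(T))
```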

    H-matrix based second moment analysis for rough random fields and finite element discretizations

    We consider the efficient solution of strongly elliptic partial differential equations with random load based on the finite element method. The solution's two-point correlation can efficiently be approximated by means of an H-matrix, in particular if the correlation length is rather short or the correlation kernel is non-smooth. Since the inverses of the finite element matrices which correspond to the differential operator under consideration can likewise be approximated efficiently in the H-matrix format, we can solve the corresponding H-matrix equation in essentially linear time by using H-matrix arithmetic. Numerical experiments for three-dimensional finite element discretizations with several correlation lengths and different degrees of smoothness are provided. They validate the presented method and demonstrate that the computation times do not increase for non-smooth data or data with short correlation lengths.
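
    The sketch below illustrates the data-sparsity that the H-matrix approach exploits: for two well-separated point clusters, the block of a two-point correlation matrix C[i,j] = exp(-||x_i - y_j|| / ell) coupling them has rapidly decaying singular values and is therefore compressible to low rank. The kernel, point sets and correlation length are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch: the coupling block between two well-separated clusters of an
# exponential correlation kernel has small numerical rank (H-matrix admissibility).
import numpy as np

rng = np.random.default_rng(0)
n, ell = 200, 0.5
X = rng.random((n, 2))                        # first cluster in [0,1]^2
Y = rng.random((n, 2)) + np.array([2.0, 0])   # second cluster, shifted away

dist = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
block = np.exp(-dist / ell)                   # exponential correlation kernel
s = np.linalg.svd(block, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(f"numerical rank of the admissible block: {rank} of {n}")
```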