32 research outputs found

    Digital Filtering Algorithms for Decorrelation within Large Least Squares Problems

    Get PDF
    The GOCE (Gravity Field and steady-state Ocean Circulation Explorer) mission is dedicated to the determination of the Earth's gravity field. During the mission period of at least one year, the GOCE satellite will collect approximately 100 million highly correlated observations. The gravity field will be described in terms of approximately 70,000 spherical harmonic coefficients. This leads to a least squares adjustment in which the design matrix occupies 51 terabytes, while the covariance matrix of the observations requires 72,760 terabytes of memory. The very large design matrix is typically computed in parallel using supercomputers like the JUMP (Juelich Multi Processor) supercomputer in Jülich, Germany. However, such a brute-force approach does not work for the covariance matrix. Here, certain features of the observations must be exploited, e.g. that the observations can be interpreted as a stationary time series. This allows for a very sparse representation of the covariance matrix by digital filters. This thesis is concerned with the use of digital filters for decorrelation within large least squares problems. First, it is analyzed which conditions the observations must meet so that digital filters can be used to represent their covariance matrix. After that, different filter implementations are introduced and compared with each other, especially with respect to the computation time of filtering. This is of special concern, as for many applications the very large design matrix has to be filtered at least once. One special problem arising from the use of digital filters is the so-called warm-up effect. For the first time, methods are developed in this thesis for determining the length of this effect and for avoiding it. Next, a new algorithm is developed to deal with the problem of short data gaps within the observation time series.
    Finally, it is investigated which filter methods are best suited to the GOCE application scenario, and several numerical simulations are performed.
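    The decorrelation idea sketched above can be illustrated with a minimal, hypothetical example (not the thesis' actual GOCE filters): if the observation noise follows a stationary AR(1) model, its dense covariance matrix is represented implicitly by a two-tap digital filter, and applying that filter whitens the series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stationary AR(1)-correlated observation noise:
# e[k] = rho * e[k-1] + w[k], with white noise w.
rho, n = 0.9, 100_000
w = rng.standard_normal(n)
e = np.empty(n)
e[0] = w[0] / np.sqrt(1 - rho**2)      # start in the stationary distribution
for k in range(1, n):
    e[k] = rho * e[k - 1] + w[k]

# The n x n covariance matrix of e is dense (entries decay like rho^|i-j|),
# yet its inverse Cholesky factor acts (apart from the first row) like the
# two-tap FIR filter [1, -rho]: filtering e with it decorrelates the series.
e_white = e.copy()
e_white[1:] -= rho * e[:-1]

def lag1_corr(x):
    """Sample correlation between consecutive elements."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_corr(e))        # close to rho = 0.9
print(lag1_corr(e_white))  # close to 0: the filtered series is decorrelated
```

    Filtering both the observations and the design matrix with the same short filter is what turns the 72,760-terabyte covariance matrix into a handful of filter coefficients.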

    Preconditioned fast solvers for large linear systems with specific sparse and/or Toeplitz-like structures and applications

    Get PDF
    In this thesis, the design of the preconditioners we propose starts from applications instead of treating the problem in a completely general way. The reason is that not all types of linear systems can be addressed with the same tools. In this sense, the techniques for designing efficient iterative solvers depend mostly on properties inherited from the continuous problem that originated the discretized sequence of matrices. Classical examples are locality and isotropy in the PDE context, whose discrete counterparts are sparsity and constancy along the diagonals, respectively. Therefore, it is often important to take the properties of the originating continuous model into account for obtaining better performance and for providing an accurate convergence analysis. We consider linear systems that arise in the solution of both linear and nonlinear partial differential equations of both integer and fractional type. For the latter case, an introduction to both the theory and the numerical treatment is given. All the algorithms and strategies presented in this thesis are developed with their parallel implementation in mind. In particular, we consider the processor-co-processor framework, in which the main part of the computation is performed on a Graphics Processing Unit (GPU) accelerator. In Part I we introduce our proposal for sparse approximate inverse preconditioners for the solution of both time-dependent Partial Differential Equations (PDEs), in Chapter 3, and Fractional Differential Equations (FDEs) containing both classical and fractional terms, in Chapter 5. More precisely, we propose a new technique for updating preconditioners when dealing with sequences of linear systems for PDEs and FDEs, which can also be used to compute matrix functions of large matrices via quadrature formulas (Chapter 4) and for the optimal control of FDEs (Chapter 6). Finally, in Part II, we consider structured preconditioners for quasi-Toeplitz systems.
    The focus is on the numerical treatment of discretized convection-diffusion equations in Chapter 7 and on the solution of FDEs with linear multistep formulas in boundary value form in Chapter 8.
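    The structured-preconditioning theme can be sketched with a toy example (the matrix and its decay rate are invented for the demo, and the thesis' quasi-Toeplitz setting is richer): a Strang-type circulant preconditioner, applied through FFTs inside the Conjugate Gradient method, for a symmetric positive definite Toeplitz system.

```python
import numpy as np

# Symmetric positive definite Toeplitz matrix T (invented decay for the demo).
n = 256
k = np.arange(n)
c = np.where(k == 0, 1.4, 1.0 / (1.0 + k) ** 2)   # first column of T
T = c[np.abs(k[:, None] - k[None, :])]

# Strang circulant preconditioner: keep the central diagonals of T and wrap
# them around; systems with the circulant C are solved in O(n log n) by FFT.
s = c.copy()
s[n // 2 + 1:] = c[1:n // 2][::-1]
lam = np.fft.fft(s).real                # eigenvalues of C (real by symmetry)

def apply_Cinv(r):
    return np.fft.ifft(np.fft.fft(r) / lam).real

def pcg(A, b, precond, tol=1e-10, maxit=500):
    """Preconditioned Conjugate Gradient; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

b = np.ones(n)
x_pc, it_pc = pcg(T, b, apply_Cinv)           # circulant-preconditioned
x_id, it_id = pcg(T, b, lambda r: r)          # unpreconditioned
print(it_pc, it_id)   # the preconditioned solve converges in fewer iterations
```

    The preconditioner clusters the spectrum near 1, which is what drives the iteration-count reduction; the same principle underlies the more elaborate structured preconditioners of Part II.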

    Accelerated iterative solvers for the solution of electromagnetic scattering and wave propagation problems

    Get PDF
    The aim of this work is to contribute to the development of accelerated iterative methods for the solution of electromagnetic scattering and wave propagation problems. In spite of recent advances in computer science, there are great demands for efficient and accurate techniques for the analysis of electromagnetic problems. This is due to the increasing electrical size of electromagnetic problems and the large amount of design and analytical work dependent on simulation tools. This dissertation concentrates on the use of iterative techniques, expedited by appropriate acceleration methods, to accurately solve electromagnetic problems. There are four main contributions in this dissertation. The first two focus on the development of stationary iterative methods, while the other two focus on the use of Krylov iterative methods. The contributions are summarised as follows:
    • The modified multilevel fast multipole method is proposed to accelerate the performance of stationary iterative solvers. The proposed method is combined with the buffered block forward backward method and the overlapping domain decomposition method for the solution of perfectly conducting three-dimensional scattering problems. The proposed method is more efficient than the standard multilevel fast multipole method when applied to stationary iterative solvers.
    • The modified improvement step is proposed to improve the convergence rate of stationary iterative solvers. The proposed method is applied to the solution of random rough surface scattering problems. Simulation results suggest that the proposed algorithm requires significantly fewer iterations to achieve a desired accuracy than the conventional improvement step.
    • A comparison between the volume integral equation and the surface integral equation is presented for the solution of two-dimensional indoor wave propagation problems. The linear systems resulting from the discretisation of the integral equations are solved using Krylov iterative solvers. Both approaches are expedited by appropriate acceleration techniques: the fast Fourier transform for the volumetric approach and the fast far-field approximation for the surface approach. The volumetric approach demonstrates a better convergence rate than the surface approach.
    • A novel algorithm is proposed to compute wideband results of three-dimensional forward scattering problems. The proposed algorithm is a combination of Krylov iterative solvers, the fast Fourier transform and the asymptotic waveform evaluation technique. The proposed method computes wideband results more efficiently than the conventional method, which computes the results separately at individual frequency points.
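    The FFT acceleration mentioned for the volumetric approach rests on a general fact: a translation-invariant kernel discretizes to a (block) Toeplitz matrix, whose matrix-vector product inside a Krylov iteration costs O(n log n) after circulant embedding instead of O(n^2). A one-dimensional sketch with an invented kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512

# A translation-invariant interaction kernel g(|i - j|) (invented decay)
# discretizes to the symmetric Toeplitz matrix A[i, j] = g(|i - j|).
g = 1.0 / (1.0 + np.arange(n)) ** 1.5
idx = np.arange(n)
A = g[np.abs(idx[:, None] - idx[None, :])]

def toeplitz_matvec(g, x):
    """O(n log n) product A @ x: embed A in a 2n x 2n circulant and use FFTs."""
    n = len(x)
    col = np.concatenate([g, [0.0], g[1:][::-1]])    # circulant first column
    return np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n)).real[:n]

x = rng.standard_normal(n)
print(np.allclose(A @ x, toeplitz_matvec(g, x)))     # True
```

    In the electromagnetic setting the same embedding is applied per dimension, which is what makes FFT-accelerated volume integral equation solvers feasible for large problems.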

    Electromagnetic simulations in frequency and time domain using adaptive integral method

    Get PDF
    Doctor of Philosophy (Ph.D.)

    Author index for volumes 101–200

    Get PDF

    Preconditioned and Randomized Methods for Efficient Bayesian Inversion of Large Data Sets and their Application to Flow and Transport in Porous Media

    Get PDF
    The efficient and reliable estimation of model parameters is important for the simulation and optimization of physical processes. Most models contain variables that have to be adjusted, e.g. in the form of material properties, and the uncertainty of state estimates and predictions is directly linked to the uncertainty of these parameters. Therefore, efficient methods for parameter estimation and uncertainty quantification are required. If the physical system is spatially highly heterogeneous, then the number of model parameters can be very large. At the same time, imaging techniques and time series can provide a large number of measurements for model calibration. Many of the available methods become inefficient or outright infeasible if both the number of model parameters and the number of state observations are large. This thesis is concerned with the development of methods that remain efficient when a large number of measurements is used to estimate an even larger number of model parameters. The main result is a special preconditioned Conjugate Gradient method that can achieve both quasilinear complexity in the number of parameters and pseudo-constant complexity in the number of measurements. The thesis also provides randomized methods that allow linearized uncertainty quantification for large systems, taking redundancy in the measurements into account if applicable.
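    The randomized theme can be illustrated by the randomized range finder of Halko, Martinsson and Tropp, which exploits measurement redundancy by compressing a numerically low-rank linearized operator; the operator below is synthetic and only stands in for a real sensitivity matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linearized forward operator J (measurements x parameters) with a
# rapidly decaying spectrum: most of its 500 measurements are redundant.
m, p, r = 500, 2000, 10
U = np.linalg.qr(rng.standard_normal((m, r)))[0]
V = np.linalg.qr(rng.standard_normal((p, r)))[0]
J = (U * 10.0 ** -np.arange(r)) @ V.T                 # exact rank 10

# Randomized range finder: probe the range of J with a thin Gaussian test
# matrix, orthonormalize, and compress. Cost O(m p k) with k << min(m, p),
# versus a full SVD at O(m p min(m, p)).
k = 15                                                # rank + oversampling
Q = np.linalg.qr(J @ rng.standard_normal((p, k)))[0]  # basis for range(J)
Uk, s, Vt = np.linalg.svd(Q.T @ J, full_matrices=False)
J_approx = (Q @ Uk) @ (s[:, None] * Vt)

rel_err = np.linalg.norm(J - J_approx) / np.linalg.norm(J)
print(rel_err < 1e-8)                                 # True: sketch is exact here
```

    In linearized Bayesian inversion, such a low-rank factorization of the (noise-whitened) sensitivity matrix is the ingredient that makes posterior covariance estimates affordable for large parameter counts.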

    Compressed Optical Imaging

    Get PDF
    We address the resolution of inverse problems where visual data must be recovered from incomplete information optically acquired in the spatial domain. The optical acquisition models that are involved share a common mathematical structure consisting of a linear operator followed by optional pointwise nonlinearities. The linear operator generally includes lowpass filtering effects and, in some cases, downsampling. Both tend to make the problems ill-posed. Our general resolution strategy is to rely on variational principles, which allows for a tight control on the objective or perceptual quality of the reconstructed data. The three related problems that we investigate and propose to solve are:
    1. The reconstruction of images from sparse samples. Following a non-ideal acquisition framework, the measurements take the form of spatial-domain samples whose locations are specified a priori. The reconstruction algorithm that we propose is linked to PDE flows with tensor-valued diffusivities. We demonstrate through several experiments that our approach preserves finer visual features than standard interpolation techniques do, especially at very low sampling rates.
    2. The reconstruction of images from binary measurements. The acquisition model that we consider relies on optical principles and fits in a compressed-sensing framework. We develop a reconstruction algorithm that allows us to recover grayscale images from the available binary data. It substantially improves upon the state of the art in terms of quality and computational performance. Our overall approach is physically relevant; moreover, it can handle large amounts of data efficiently.
    3. The reconstruction of phase and amplitude profiles from single digital holographic acquisitions. Unlike conventional approaches that are based on demodulation, our iterative reconstruction method is able to accurately recover the original object from a single downsampled intensity hologram, as shown in simulated and real measurement settings. It also consistently outperforms the state of the art in terms of signal-to-noise ratio and with respect to the size of the field of view.
    The common goal of the proposed reconstruction methods is to yield an accurate estimate of the original data from all available measurements. In accordance with the forward model, they are typically capable of handling samples that are sparse in the spatial domain and/or distorted due to pointwise nonlinear effects, as demonstrated in our experiments.
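    A variational reconstruction of the kind described above can be sketched in one dimension (a toy forward model with binomial blur and 4x downsampling, plus a Tikhonov smoothness penalty; the actual acquisition models and regularizers in the thesis differ):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: circular binomial blur H followed by 4x downsampling S.
n, d = 256, 4
x_true = np.sin(4 * np.pi * np.arange(n) / n)          # smooth periodic signal
h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0          # lowpass blur kernel

def blur(x):
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)).real

A = np.array([blur(e) for e in np.eye(n)]).T[::d]       # A = S @ H, (n/d) x n
y = A @ x_true + 0.001 * rng.standard_normal(n // d)    # noisy measurements

# Variational estimate: minimize ||A x - y||^2 + lam * ||D x||^2 with D the
# (circular) finite-difference operator, via the normal equations.
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
lam = 0.05
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

naive = np.repeat(y, d)                                 # zero-order upsampling
err_var = np.linalg.norm(x_hat - x_true)
err_naive = np.linalg.norm(naive - x_true)
print(err_var < err_naive)   # the regularized estimate beats naive upsampling
```

    The penalty term encodes the prior that the underlying data is smooth; in the imaging problems above, richer (e.g. tensor-valued or sparsity-promoting) regularizers play the same role against lowpass filtering and downsampling.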

    Seventh Copper Mountain Conference on Multigrid Methods

    Get PDF
    The Seventh Copper Mountain Conference on Multigrid Methods was held on April 2-7, 1995 at Copper Mountain, Colorado. This book collects many of the papers presented at the conference and thus represents the conference proceedings. NASA Langley graciously provided printing of this document so that all of the papers could be presented in a single forum. Each paper was reviewed by a member of the conference organizing committee under the coordination of the editors. The vibrancy and diversity in this field are amply expressed in these important papers, and the collection clearly shows the continuing rapid growth of the use of multigrid acceleration techniques.

    Custom optimization algorithms for efficient hardware implementation

    No full text
    The focus is on real-time optimal decision making with applications in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than currently available for extending their application to highly dynamical systems and to setups with resource-constrained embedded computing platforms. A range of techniques are proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on top of parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for the KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance of our implementations. To take advantage of the trend towards parallel computing architectures and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation is devised for reducing the computational effort of solving certain problems, independent of the computing platform used. In order to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating-point, tailored linear algebra algorithms are developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees for reliable operation.
    We also provide a finite-precision error analysis for fixed-point implementations of first-order methods, which can be used to minimize the use of resources while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
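    The fixed-point trade-off can be illustrated with a minimal sketch (plain Python simulating quantization, not the thesis' VHDL flow): a first-order projected-gradient solver for a box-constrained QP, run once in floating point and once with its iterates rounded to a fixed-point format.

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Round to the nearest value representable with frac_bits fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def projected_gradient(Q, c, lo, hi, steps, frac_bits=None):
    """Minimize 0.5 x'Qx + c'x subject to lo <= x <= hi."""
    L = np.linalg.eigvalsh(Q).max()              # gradient Lipschitz constant
    x = np.zeros(len(c))
    for _ in range(steps):
        x = np.clip(x - (Q @ x + c) / L, lo, hi) # gradient step + projection
        if frac_bits is not None:
            x = to_fixed(x, frac_bits)           # quantize as hardware would
    return x

rng = np.random.default_rng(4)
M = rng.standard_normal((8, 8))
Q = M @ M.T + 8.0 * np.eye(8)                    # well-conditioned SPD Hessian
c = rng.standard_normal(8)

x_float = projected_gradient(Q, c, -1.0, 1.0, steps=200)
x_fix16 = projected_gradient(Q, c, -1.0, 1.0, steps=200, frac_bits=16)

# With 16 fractional bits the quantized iterates stay within roughly 1e-4 of
# the floating-point solution for this well-conditioned problem.
print(np.max(np.abs(x_fix16 - x_float)))
```

    A finite-precision error analysis of the kind described above bounds this deviation a priori as a function of the word length and the conditioning, so the smallest adequate format can be chosen before synthesizing hardware.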