
    Fast Algorithms for the computation of Fourier Extensions of arbitrary length

    Fourier series of smooth, non-periodic functions on $[-1,1]$ are known to exhibit the Gibbs phenomenon and overall slow convergence. One way of overcoming these problems is to use a Fourier series on a larger domain, say $[-T,T]$ with $T>1$, a technique called Fourier extension or Fourier continuation. When constructed as the discrete least squares minimizer in equidistant points, the Fourier extension has been shown to converge geometrically in the truncation parameter $N$. A fast $\mathcal{O}(N \log^2 N)$ algorithm has been described to compute Fourier extensions for the case $T=2$, compared to $\mathcal{O}(N^3)$ for solving the dense discrete least squares problem. We present two $\mathcal{O}(N \log^2 N)$ algorithms for the computation of these approximations for general $T$, made possible by exploiting the connection between Fourier extensions and prolate spheroidal wave theory. The first algorithm is based on the explicit computation of so-called periodic discrete prolate spheroidal sequences, while the second algorithm is purely algebraic and relies on the theory only implicitly.
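
    To make the construction concrete, the following is a minimal sketch of the dense discrete least squares Fourier extension that such fast algorithms are designed to replace; the function names, oversampling factor, and test function are illustrative assumptions, not the authors' implementation.

```python
# Dense least-squares Fourier extension: the O(N^3) baseline that the
# paper's O(N log^2 N) algorithms accelerate. Oversampling factor and
# names are illustrative assumptions.
import numpy as np

def fourier_extension(f, T=2.0, N=40, oversample=2):
    """Fit sum_{|k|<=N} c_k exp(i*pi*k*x/T) to f on [-1,1] by least squares."""
    M = oversample * (2 * N + 1)                   # equidistant sample count
    x = np.linspace(-1.0, 1.0, M)                  # collocation points on [-1,1]
    k = np.arange(-N, N + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / T)    # M x (2N+1) design matrix
    c, *_ = np.linalg.lstsq(A, f(x) + 0j, rcond=None)  # dense solve, O(N^3)
    return lambda t: np.exp(1j * np.pi * np.outer(np.atleast_1d(t), k) / T) @ c

# A smooth but non-periodic function on [-1,1]: no Gibbs phenomenon here
g = fourier_extension(lambda x: np.exp(x) * np.sin(3 * x), T=2.0, N=40)
xs = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(g(xs).real - np.exp(xs) * np.sin(3 * xs)))
print(f"max abs error: {err:.2e}")                 # decays geometrically in N
```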

    Acoustic source localization: exploring theory and practice

    Over the past few decades, noise pollution has become an important issue in modern society, which has led to an increased effort in industry to reduce noise. Acoustic source localization methods determine the location and strength of the vibrations that cause sound, based on measurements of the sound field. This thesis describes a theoretical study of many facets of the acoustic source localization problem, as well as the development, implementation, and validation of new source localization methods. The main objective is to increase the range of applications of inverse acoustics and to develop accurate and computationally efficient methods for each of these applications. Four applications are considered. Firstly, the inverse acoustic problem is considered where the source and the measurement points are located on two parallel planes. A new fast method to solve this problem is developed and compared to the existing method, planar nearfield acoustic holography (PNAH), from a theoretical point of view as well as by means of simulations and experiments. Both methods are fast, but the new method yields more robust and accurate results. Secondly, measurements in inverse acoustics are usually point-by-point or full-array measurements. A straightforward and cost-effective alternative to these approaches is a sensor or array that moves through the sound field during the measurement to gather sound field information. The same numerical techniques make it possible to apply inverse acoustics to the case where the source moves and the sensors are fixed in space. It is shown that inverse methods such as the inverse boundary element method (IBEM) can be applied to this problem. To arrive at an accurate representation of the sound field, an optimized signal processing method is applied, and it is shown experimentally that this method leads to accurate results. Thirdly, a theoretical framework is established for the inverse acoustic problem where the sound field and the source are represented by a cross-spectral matrix. This problem is important in inverse acoustics because it occurs in the inverse calculation of sound intensity. The existing methods for this problem are analyzed from a theoretical point of view using this framework, and a new method is derived from it. A simulation study indicates that the new method improves the results by 30% in some cases, with similar results otherwise. Finally, the localization of point sources in the acoustic near field is considered. MUltiple SIgnal Classification (MUSIC) is newly applied to the boundary element method (BEM) for this purpose. It is shown that this approach makes it possible to localize point sources accurately even if the noise level is extremely high or the number of sensors is low.
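
    For the final application, a generic MUSIC sketch may help illustrate the subspace idea; the thesis couples MUSIC with BEM transfer vectors, whereas this illustration substitutes a free-field monopole Green's function, so the array geometry, frequency, and all names are assumptions.

```python
# Near-field MUSIC with a free-field monopole model standing in for the
# BEM transfer vectors used in the thesis; geometry and names are
# illustrative assumptions.
import numpy as np

c0 = 343.0                                       # speed of sound [m/s]
k = 2 * np.pi * 1000.0 / c0                      # wavenumber at 1 kHz
mics = np.c_[np.linspace(-0.2, 0.2, 8), np.zeros(8), np.zeros(8)]  # line array

def steer(r):
    """Normalized monopole transfer vector from point r to the array."""
    d = np.linalg.norm(mics - np.asarray(r), axis=1)
    a = np.exp(-1j * k * d) / (4 * np.pi * d)
    return a / np.linalg.norm(a)

# Synthetic cross-spectral matrix for one near-field point source
src = [0.05, 0.0, 0.1]
a = steer(src)
R = np.outer(a, a.conj()) + 1e-6 * np.eye(len(mics))

w, V = np.linalg.eigh(R)                         # eigenvalues ascending
En = V[:, :-1]                                   # noise subspace (one source)

# MUSIC pseudospectrum over a scan line in the source plane
xs = np.linspace(-0.2, 0.2, 81)
P = [1.0 / np.linalg.norm(En.conj().T @ steer([x, 0.0, 0.1])) ** 2 for x in xs]
print("estimated source x:", xs[int(np.argmax(P))])   # peaks near x = 0.05
```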

    First order algorithms in variational image processing

    Variational methods in imaging have developed into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account their specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splitting and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications.
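
    As an illustration of the splitting idea, here is a minimal forward-backward (proximal gradient) sketch for the model above with $\mathcal{D}(Ku) = \frac{1}{2}\|Ku-f\|^2$ and $\mathcal{R}(u) = \|u\|_1$; the operator, data, and parameter choices are illustrative assumptions, not taken from the paper's computational studies.

```python
# Forward-backward splitting (ISTA) for 0.5*||K u - f||^2 + alpha*||u||_1;
# operator, data, and parameters are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1: the backward (implicit) step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(K, f, alpha, n_iter=500):
    """Minimize 0.5*||K u - f||^2 + alpha*||u||_1 by proximal gradient."""
    tau = 1.0 / np.linalg.norm(K, 2) ** 2          # step <= 1/L, L = ||K||_2^2
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)                   # forward (explicit) step on D
        u = soft_threshold(u - tau * grad, tau * alpha)  # backward step on R
    return u

rng = np.random.default_rng(1)
K = rng.standard_normal((60, 100))                 # underdetermined operator
u_true = np.zeros(100)
u_true[[5, 40, 77]] = [2.0, -1.5, 1.0]             # sparse ground truth
f = K @ u_true + 0.01 * rng.standard_normal(60)
u = forward_backward(K, f, alpha=0.1)
print("recovered support:", np.flatnonzero(np.abs(u) > 0.5))  # ideally [5 40 77]
```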

    An Inverse POD-RBF Network Approach to Parameter Estimation in Mechanics

    An inverse approach is formulated using proper orthogonal decomposition (POD) integrated with a trained radial basis function (RBF) network to estimate various physical parameters of a specimen with little prior knowledge of the system. To generate the truncated POD-RBF network utilized in the inverse problem, a series of direct solutions based on FEM, BEM, or exact analytical solutions is used to generate a data set of temperatures or deformations within the system or body, each produced for a unique set of physical parameters. The data set is then transformed via POD to generate an orthonormal basis, and the desired material characteristics are determined with the Levenberg-Marquardt (LM) algorithm, which minimizes a least squares objective functional. While the POD-RBF inverse approach outlined in this paper focuses primarily on applications to conduction heat transfer, elasticity, and fracture mechanics, the technique is designed to be directly applicable to other realistic conditions and relevant industrial problems.
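
    A hedged sketch of the workflow just described follows, with a toy exponential model standing in for the FEM/BEM direct solver; the kernel width, mode count, and all names are illustrative assumptions rather than the paper's implementation.

```python
# POD-RBF inverse sketch: snapshot SVD for the basis, an RBF map from
# parameters to POD coefficients, and Levenberg-Marquardt inversion.
# The toy model stands in for the FEM/BEM solver; kernel width, mode
# count, and names are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x_grid = np.linspace(0.0, 1.0, 50)

def forward(p):
    """Stand-in direct solver (FEM/BEM in the paper): parameters -> field."""
    return p[0] * np.exp(-p[1] * x_grid)

params = rng.uniform([0.5, 0.5], [2.0, 3.0], size=(40, 2))   # training set
U_snap = np.stack([forward(p) for p in params], axis=1)      # 50 x 40 snapshots

# POD: truncated left singular vectors give an orthonormal basis
Phi, s, _ = np.linalg.svd(U_snap, full_matrices=False)
Phi = Phi[:, :5]                                   # keep 5 modes
A = Phi.T @ U_snap                                 # POD coefficients per snapshot

def rbf_matrix(P, C, width=1.0):
    """Gaussian RBF matrix between parameter sets P and centers C."""
    d2 = ((P[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width**2)

W = np.linalg.solve(rbf_matrix(params, params) + 1e-8 * np.eye(40), A.T)

def surrogate(p):
    """Trained POD-RBF network: parameters -> predicted field."""
    return Phi @ (rbf_matrix(np.atleast_2d(p), params) @ W).ravel()

# Inverse problem: LM fit of the parameters to "measured" data
u_meas = forward([1.3, 2.1]) + 0.001 * rng.standard_normal(50)
fit = least_squares(lambda p: surrogate(p) - u_meas, x0=[1.0, 1.0], method="lm")
print("estimated parameters:", fit.x)              # should be close to [1.3, 2.1]
```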

    Learning Theory and Approximation

    Learning theory studies data structures from samples and aims at understanding the unknown functional relations behind them. This leads to interesting theoretical problems which can often be attacked with methods from approximation theory. This workshop, the second of this type at the MFO, concentrated on the following recent topics: learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis and algorithmic aspects, including kernel-based methods for regression and classification; and applications of multiscale aspects and refinement algorithms to learning.
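
    As a small concrete instance of the kernel-based regression methods listed above, here is a minimal kernel ridge regression sketch; the kernel, bandwidth, and data are illustrative assumptions.

```python
# Kernel ridge regression: fit f(x) = sum_i a_i k(x, x_i) by solving
# (K + lam*I) a = y. Kernel, bandwidth, and data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, 30)                     # sample inputs
y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(30)   # noisy targets

def gauss_kernel(a, b, sigma=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma**2))

lam = 1e-2                                         # regularization parameter
coef = np.linalg.solve(gauss_kernel(X, X) + lam * np.eye(30), y)

x_test = np.linspace(-1.0, 1.0, 5)
print(gauss_kernel(x_test, X) @ coef)              # approximates sin(pi*x)
```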

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity or low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problems.
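
    To make the named priors concrete, here is a minimal sketch of their proximal maps, the backward step in the forward-backward scheme the review analyzes; the helper names and test matrix are illustrative assumptions.

```python
# Proximal maps of three low-complexity regularizers from the review:
# sparsity, group sparsity, and low rank. Each is the backward step of
# the forward-backward splitting scheme; the test matrix is illustrative.
import numpy as np

def prox_l1(v, t):
    """Sparsity: entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_group_l1(v, t, groups):
    """Group sparsity: block soft-thresholding of each index group."""
    out = v.copy()
    for g in groups:
        n = np.linalg.norm(v[g])
        out[g] = (1.0 - t / n) * v[g] if n > t else 0.0
    return out

def prox_nuclear(M, t):
    """Low rank: soft-threshold the singular values of a matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(prox_l1(s, t)) @ Vt

rng = np.random.default_rng(4)
M = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, -1.0]) + 0.1 * rng.standard_normal((3, 3))
print(np.round(np.linalg.svd(prox_nuclear(M, 0.5), compute_uv=False), 3))
# small singular values are thresholded away, leaving a near-rank-1 matrix
```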