
    Sigma-Delta Quantization: Number Theoretic Aspects of Refining Quantization Error

    The linear reconstruction phase of analog-to-digital (A/D) conversion in signal processing is analyzed for the quantization of finite frame expansions of R^d. The specific setting is K-level first-order Sigma-Delta quantization with step size delta. Basic analysis shows that the Euclidean 2-norm of the quantization error for inputs in R^d decays like O(1/N) as the frame size N approaches infinity, while the L-infinity norm of the quantization error for bandlimited input functions decays like O(T) as the sampling ratio T approaches zero. Numerical simulations have suggested, however, that for bandlimited inputs the mean square norm of the quantization error decays like O(T^(3/2)) as T approaches zero. Since the frame size N can be taken to correspond to the reciprocal of the sampling ratio T, this suggests that the analogous rate, namely O(1/N^(3/2)), should also hold in the setting of finite frame expansions of R^d. A number-theoretic technique involving uniform distribution of sequences of real numbers and approximation of exponential sums is introduced to derive a quantization error estimate better than O(1/N) as N tends to infinity. This estimate is signal dependent.
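    The recursion behind the scheme is short enough to state in code. The following is a minimal sketch of K-level first-order Sigma-Delta quantization applied to the frame coefficients of a point in R^2 against the unit-norm tight frame of N equally spaced unit vectors; the midrise quantizer convention, the test vector, and the values of N, K, and delta are illustrative assumptions rather than details taken from the paper. Printing the reconstruction error for increasing N exhibits the O(1/N)-type decay discussed above.

    import numpy as np

    def sigma_delta_quantize(coeffs, K, delta):
        """First-order Sigma-Delta quantization of a coefficient sequence.

        Runs the recursion q_n = Q(u_{n-1} + x_n), u_n = u_{n-1} + x_n - q_n,
        where Q is a K-level uniform (midrise) quantizer with step size delta.
        """
        levels = delta * (np.arange(K) - (K - 1) / 2.0)   # quantizer alphabet
        q = np.empty_like(coeffs)
        u = 0.0                                           # internal state u_n
        for n, x in enumerate(coeffs):
            v = u + x
            q[n] = levels[np.argmin(np.abs(levels - v))]  # nearest quantizer level
            u = v - q[n]                                  # state update
        return q

    def frame_2d(N):
        """Unit-norm tight frame for R^2: N equally spaced vectors on the circle."""
        angles = 2 * np.pi * np.arange(N) / N
        return np.column_stack([np.cos(angles), np.sin(angles)])

    x = np.array([0.31, -0.47])                 # illustrative input in R^2
    for N in (64, 256, 1024):
        E = frame_2d(N)                         # rows are the frame vectors e_n
        q = sigma_delta_quantize(E @ x, K=16, delta=0.25)
        x_hat = (2.0 / N) * E.T @ q             # linear reconstruction (tight frame)
        print(N, np.linalg.norm(x - x_hat))     # error shrinks roughly like 1/N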

    A robust error estimator and a residual-free error indicator for reduced basis methods

    The Reduced Basis Method (RBM) is a rigorous model reduction approach for solving parametrized partial differential equations. It identifies a low-dimensional subspace for approximating the parametric solution manifold that is embedded in a high-dimensional space, and a reduced order model is subsequently constructed in this subspace. RBM relies on residual-based error indicators or a posteriori error bounds to guide construction of the reduced solution subspace, to serve as a stopping criterion, and to certify the resulting surrogate solutions. Unfortunately, it is well known that the standard algorithm for residual norm computation suffers from premature stagnation at the level of the square root of machine precision. In this paper, we develop two alternatives to the standard offline phase of reduced basis algorithms. First, we design a robust strategy for computing residual error indicators that allows RBM algorithms to enrich the solution subspace with accuracy beyond root machine precision. Second, we propose a new error indicator based on the Lebesgue function in interpolation theory. This error indicator does not require computation of residual norms; it only requires the ability to compute the RBM solution. The residual-free indicator is rigorous in that it bounds the error committed by the RBM approximation, but only up to an uncomputable multiplicative constant. Because of this, it is effective for choosing snapshots during the offline RBM phase, but cannot currently be used to certify the error of the approximation. However, it circumvents the need for a posteriori analysis of numerical methods, and can therefore be effective on problems where such a rigorous estimate is hard to derive.
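    To illustrate the stagnation issue mentioned above, the following is a minimal sketch (not the robust strategy or the Lebesgue-function indicator proposed in the paper) comparing the standard offline/online expansion of the squared residual norm, ||b - A(mu) V c||^2 = b^T b - 2 c^T V^T A^T b + c^T V^T A^T A V c, against a direct evaluation on a small synthetic parametrized linear system. The system, the parameter samples, and the basis size are illustrative assumptions; the expanded form levels off near the square root of machine precision while the direct evaluation does not.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic parametrized system A(mu) x = b with affine parameter dependence
    A0 = np.diag(np.linspace(1.0, 2.0, n))
    A1 = rng.standard_normal((n, n)) / n
    b = rng.standard_normal(n)
    A = lambda mu: A0 + mu * A1

    # Snapshot basis from a few training parameters (orthonormalized)
    snapshots = np.column_stack([np.linalg.solve(A(mu), b)
                                 for mu in np.linspace(0.0, 1.0, 20)])
    V, _ = np.linalg.qr(snapshots)

    mu = 0.4321                                   # test parameter
    AV = A(mu) @ V
    c = np.linalg.solve(V.T @ AV, V.T @ b)        # reduced (Galerkin) solution

    # Standard expanded form, assembled term by term as in the offline/online split;
    # cancellation can make it inaccurate or even negative once the residual is small.
    expanded = b @ b - 2 * c @ (AV.T @ b) + c @ (AV.T @ AV) @ c
    # Direct evaluation of the same residual norm
    direct = np.linalg.norm(b - AV @ c) ** 2

    print("expanded form:", np.sqrt(abs(expanded)))   # levels off near sqrt(eps)*||b||
    print("direct form  :", np.sqrt(direct))          # keeps decreasing well below that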

    Probabilistic error estimation for non-intrusive reduced models learned from data of systems governed by linear parabolic partial differential equations

    This work derives a residual-based a posteriori error estimator for reduced models learned with non-intrusive model reduction from data of high-dimensional systems governed by linear parabolic partial differential equations with control inputs. It is shown that the quantities needed by the error estimator can either be obtained exactly, as the solutions of least-squares problems posed non-intrusively on data such as initial conditions, control inputs, and high-dimensional solution trajectories, or be bounded in a probabilistic sense. The computational procedure follows an offline/online decomposition. In the offline (training) phase, the high-dimensional system is judiciously solved in a black-box fashion to generate data and to set up the error estimator. In the online phase, the estimator is used to bound the error of the reduced-model predictions for new initial conditions and new control inputs without recourse to the high-dimensional system. Numerical results demonstrate the workflow of the proposed approach from data to reduced models to certified predictions.
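    As a rough illustration of the offline/online workflow described above (the data-driven learning step only, not the paper's probabilistic error estimator), the following sketch learns a reduced model of a discretized 1D heat equation with a control input purely from trajectory data, in an operator-inference style, and then predicts the response to a new input. The discretization, basis size, and inputs are illustrative assumptions.

    import numpy as np

    n, r, nt, dt = 200, 10, 500, 1e-3
    h = 1.0 / (n + 1)

    # Black-box high-dimensional system: 1D heat equation with a boundary control,
    # discretized in space and advanced with implicit Euler,
    #   x_{k+1} = A_d x_k + B_d u_k.
    # Below, A_d and B_d are only ever *evaluated* to generate data.
    L = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2
    A_d = np.linalg.inv(np.eye(n) - dt * L)
    B_d = dt * A_d[:, 0]                     # control enters at the left boundary

    def simulate(u, x0):
        X = [x0]
        for uk in u:
            X.append(A_d @ X[-1] + B_d * uk)
        return np.column_stack(X)

    # Offline (training) phase: generate data, build a POD basis, and fit reduced
    # operators by least squares -- non-intrusively, from the data alone.
    rng = np.random.default_rng(1)
    u_train = rng.standard_normal(nt)
    X = simulate(u_train, np.zeros(n))
    V = np.linalg.svd(X, full_matrices=False)[0][:, :r]     # POD basis
    Xr = V.T @ X                                            # reduced trajectory
    D = np.vstack([Xr[:, :-1], u_train[None, :]])           # regression data
    O = Xr[:, 1:] @ np.linalg.pinv(D)                       # least-squares fit [Ar | Br]
    Ar, Br = O[:, :r], O[:, r]

    # Online phase: predict for a new control input without the full model.
    u_test = np.sin(np.linspace(0.0, 8.0 * np.pi, nt))
    xr, Xr_pred = np.zeros(r), []
    for uk in u_test:
        Xr_pred.append(xr)
        xr = Ar @ xr + Br * uk
    Xr_pred.append(xr)
    X_pred = V @ np.column_stack(Xr_pred)
    print("max prediction error:", np.abs(simulate(u_test, np.zeros(n)) - X_pred).max())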