397 research outputs found

    Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing

    The random demodulator is a recent compressive sensing architecture providing efficient sub-Nyquist sampling of sparse band-limited signals. The compressive sensing paradigm requires an accurate model of the analog front-end to enable correct signal reconstruction in the digital domain. In practice, hardware devices such as filters deviate from their desired design behavior due to component variations. Existing reconstruction algorithms are sensitive to such deviations, which fall into the more general category of measurement matrix perturbations. This paper proposes a model-based technique that aims to calibrate filter model mismatches to facilitate improved signal reconstruction quality. The mismatch is considered to be an additive error in the discretized impulse response. We identify the error by sampling a known calibrating signal, enabling least-squares estimation of the impulse response error. The error estimate and the known system model are used to calibrate the measurement matrix. Numerical analysis demonstrates the effectiveness of the calibration method even for highly deviating low-pass filter responses. The performance of the proposed method is also compared with a state-of-the-art method based on discrete Fourier transform trigonometric interpolation. Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing.
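
    To make the identification step concrete, here is a minimal sketch of the least-squares estimation of the additive impulse-response error and the subsequent calibration of the measurement matrix. It assumes the sub-Nyquist samples depend linearly on the discretized impulse response, so that a matrix built from the known calibrating signal maps the impulse response to the measured samples; A_cal, h_nominal and build_phi are illustrative placeholders, not quantities defined in the paper.

    ```python
    import numpy as np

    def estimate_impulse_response_error(A_cal, y_cal, h_nominal):
        """Least-squares estimate of the additive impulse-response error.

        A_cal     : (M, L) matrix mapping a discretized impulse response to the
                    sub-Nyquist samples produced by the known calibrating signal.
        y_cal     : (M,) samples actually measured for the calibrating signal.
        h_nominal : (L,) designed (nominal) discretized impulse response.
        """
        residual = y_cal - A_cal @ h_nominal              # mismatch at the output
        e_hat, *_ = np.linalg.lstsq(A_cal, residual, rcond=None)
        return e_hat                                      # additive error estimate

    def calibrate_measurement_matrix(build_phi, h_nominal, e_hat):
        """Rebuild the CS measurement matrix from the corrected impulse response."""
        return build_phi(h_nominal + e_hat)
    ```

    Signal reconstruction then proceeds with the calibrated matrix in place of the nominal one.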

    Automated Dynamic Error Analysis Methods for Optimization of Computer Arithmetic Systems

    Computer arithmetic is one of the more important topics within computer science and engineering. The earliest computer systems were designed to perform arithmetic operations, and most if not all digital systems are required to perform some sort of arithmetic as part of their normal operation. This reliance on arithmetic means that the accurate representation of real numbers within digital systems is vital, and an understanding of how these systems are implemented, and of their possible drawbacks, is essential in order to design and implement modern high-performance systems. At present the most widely implemented system for computer arithmetic is the IEEE 754 floating-point standard; while it is deemed to be the best available implementation, it has several features that can result in serious errors of computation if not handled correctly. Lack of understanding of these errors and their effects has led to real-world disasters on several occasions. Systems for the detection of these errors are therefore highly important, and fast, efficient, and easy-to-use implementations of such detection systems are a high priority. Detection of floating-point rounding errors normally requires run-time analysis in order to be effective. Several systems have been proposed for the analysis of floating-point arithmetic, including interval arithmetic, affine arithmetic, and Monte Carlo arithmetic. While these systems have been well studied using theoretical and software-based approaches, implementations that can be applied to real-world situations have been limited by issues with implementation, performance, and scalability. The majority of implementations have been software based and have not taken advantage of the performance gains associated with hardware-accelerated computer arithmetic systems. This is especially problematic given that systems requiring high accuracy will often also require high performance. The aim of this thesis and the associated research is to increase understanding of error and error analysis methods through the development of easy-to-use and easy-to-understand implementations of these techniques.
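
    As a concrete illustration of the run-time analysis techniques listed above, the following is a minimal Monte Carlo Arithmetic-style sketch: it perturbs only the inputs at roughly machine precision and inspects the spread of the results, whereas full implementations (including the hardware-accelerated ones this thesis targets) perturb every elementary operation. The function name and the example computation are illustrative.

    ```python
    import numpy as np

    def mca_samples(f, x, n_trials=100, rel_noise=2.0**-53, seed=0):
        """Evaluate f repeatedly on randomly perturbed inputs and collect the results."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        results = []
        for _ in range(n_trials):
            perturbed = x * (1.0 + rel_noise * rng.standard_normal(x.shape))
            results.append(f(perturbed))
        return np.array(results)

    # Example: a cancellation-prone sum. The spread of the samples hints at how
    # many significant digits survive the rounding errors.
    samples = mca_samples(lambda v: (v[0] + v[1]) + v[2], [1e16, 1.0, -1e16])
    print(samples.mean(), samples.std())
    ```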

    An Algebraic Framework for the Real-Time Solution of Inverse Problems on Embedded Systems

    This article presents a new approach to the real-time solution of inverse problems on embedded systems. The class of problems addressed corresponds to ordinary differential equations (ODEs) with generalized linear constraints, whereby the data from an array of sensors forms the forcing function. The solution of the equation is formulated as a least squares (LS) problem with linear constraints. The LS approach makes the method suitable for the explicit solution of inverse problems where the forcing function is perturbed by noise. The algebraic computation is partitioned into an initial preparatory step, which precomputes the matrices required for the run-time computation, and a cyclic run-time computation, which is repeated with each acquisition of sensor data. The cyclic computation consists of a single matrix-vector multiplication; in this manner the computational complexity is known a priori, fulfilling the definition of a real-time computation. Numerical testing of the new method is presented on perturbed as well as unperturbed problems; the results are compared with known analytic solutions and with solutions acquired from state-of-the-art implicit solvers. The solution is implemented with model-based design and uses only fundamental linear algebra; consequently, this approach supports automatic code generation for deployment on embedded systems. The targeting concept was tested via software- and processor-in-the-loop verification on two systems with different processor architectures. Finally, the method was tested on a laboratory prototype with real measurement data for the monitoring of flexible structures. The problem solved is the real-time overconstrained reconstruction of a curve from measured gradients. Such systems are commonly encountered in the monitoring of structures and/or ground subsidence. Comment: 24 pages, journal article.
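
    The preparatory/cyclic split described above can be sketched as follows, assuming an equality-constrained least-squares formulation min ||A x - b|| subject to C x = d, in which A, C and d are fixed and only the sensor vector b changes from cycle to cycle; the dense KKT solve and the variable names are illustrative simplifications rather than the paper's implementation.

    ```python
    import numpy as np

    def prepare(A, C, d):
        """Off-line step: precompute the linear solution operator x = P @ b + q."""
        n = A.shape[1]
        p = C.shape[0]
        # KKT system of the equality-constrained least-squares problem.
        K = np.block([[A.T @ A, C.T],
                      [C, np.zeros((p, p))]])
        K_inv = np.linalg.inv(K)
        P = K_inv[:n, :n] @ A.T        # maps fresh sensor data b to the solution x
        q = K_inv[:n, n:] @ d          # constant offset contributed by the constraints
        return P, q

    def run_time_step(P, q, b):
        """Cyclic step: one matrix-vector product per acquisition (cost known a priori)."""
        return P @ b + q
    ```

    Each acquisition therefore costs exactly one matrix-vector product plus a constant offset, which is what makes the run-time behaviour predictable.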

    An Algorithm for Gluinos on the Lattice

    L\"uscher's local bosonic algorithm for Monte Carlo simulations of quantum field theories with fermions is applied to the simulation of a possibly supersymmetric Yang-Mills theory with a Majorana fermion in the adjoint representation. Combined with a correction step in a two-step polynomial approximation scheme, the obtained algorithm seems to be promising and could be competitive with more conventional algorithms based on discretized classical (``molecular dynamics'') equations of motion. The application of the considered polynomial approximation scheme to optimized hopping parameter expansions is also discussed.Comment: latex2e, 23 pages, 4 figures with epsfig. Section 5 is rewritten, more data are added and the discussion is extende

    The Cauchy-Lagrangian method for numerical analysis of Euler flow

    A novel semi-Lagrangian method is introduced to solve numerically the Euler equation for ideal incompressible flow in arbitrary space dimension. It exploits the time-analyticity of fluid particle trajectories and requires, in principle, only limited spatial smoothness of the initial data. Efficient generation of high-order time-Taylor coefficients is made possible by a recurrence relation that follows from the Cauchy invariants formulation of the Euler equation (Zheligovsky & Frisch, J. Fluid Mech. 2014, 749, 404-430). Truncated time-Taylor series of very high order allow the use of time steps vastly exceeding the Courant-Friedrichs-Lewy limit, without compromising the accuracy of the solution. Tests performed on the two-dimensional Euler equation indicate that the Cauchy-Lagrangian method is more - and occasionally much more - efficient and less prone to instability than Eulerian Runge-Kutta methods, and less prone to rapid growth of rounding errors than the high-order Eulerian time-Taylor algorithm. We also develop tools of analysis adapted to the Cauchy-Lagrangian method, such as the monitoring of the radius of convergence of the time-Taylor series. Certain other fluid equations can be handled similarly. Comment: 30 pages, 13 figures, 45 references. Minor revision. In press in Journal of Scientific Computing.
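
    The two ingredients highlighted above, summation of a high-order truncated time-Taylor series and monitoring of its radius of convergence, can be sketched as follows; the Horner-style evaluation and the Cauchy-Hadamard-style radius estimate are illustrative, and the coefficients are assumed to be supplied by the recurrence relation described in the paper.

    ```python
    import numpy as np

    def taylor_step(coeffs, dt):
        """Evaluate the truncated time-Taylor series sum_s a_s * dt**s (Horner form)."""
        result = np.zeros_like(coeffs[0])
        for a_s in reversed(coeffs):
            result = result * dt + a_s
        return result

    def convergence_radius(coeffs):
        """Rough radius-of-convergence estimate from the decay of |a_s|**(1/s)."""
        norms = np.array([np.max(np.abs(a)) for a in coeffs[1:]])
        s = np.arange(1, len(coeffs))
        mask = norms > 0
        return float(np.min(norms[mask] ** (-1.0 / s[mask])))
    ```

    A time step chosen safely inside the estimated radius keeps the truncation error small even when it greatly exceeds the CFL limit of an explicit Eulerian scheme.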

    The evolution of a magnetic field subject to Taylor's constraint using a projection operator

    In the rapidly rotating, low-viscosity limit of the magnetohydrodynamic equations, as relevant to the conditions in planetary cores, any generated magnetic field likely evolves while simultaneously satisfying a particular continuous family of invariants, termed Taylor's constraint. It is known that, analytically, any magnetic field will evolve subject to these constraints through the action of a time-dependent coaxially cylindrical geostrophic flow. However, severe numerical problems limit the accuracy of this procedure, leading to rapid violation of the constraints. By judicious choice of a certain truncated Galerkin representation of the magnetic field, Taylor's constraint reduces to a finite set of conditions of size O(N), significantly less than the O(N³) degrees of freedom, where N denotes the spectral truncation in both solid angle and radius. Each constraint is homogeneous and quadratic in the magnetic field and, taken together, the constraints define the finite-dimensional Taylor manifold whose tangent plane can be evaluated. The key result of this paper is a description of a stable numerical method in which the evolution of a magnetic field in a spherical geometry is constrained to the manifold by projecting its rate of change onto the local tangent hyperplane. The tangent plane is evaluated by contracting the vector of spectral coefficients with the Taylor tensor, a large but very sparse 3-D array that we define. We demonstrate by example the numerical difficulties in finding the geostrophic flow numerically and how the projection method can correct for inaccuracies. Further, we show that, in a simplified system using projection, the normalized measure of Taylorization, t, may be maintained smaller than O(10⁻¹⁰) (where t = 0 is an exact Taylor state) over 1/10 of a dipole decay time, eight orders of magnitude smaller than analogous measures applied to recent low-Ekman-number geodynamo models.
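
    A minimal sketch of the projection step is shown below. It assumes the constraints are written as homogeneous quadratic forms c_i(b) = b.T @ Q[i] @ b = 0 in the vector b of spectral coefficients, with a small dense array Q standing in for the large sparse Taylor tensor; the function and variable names are illustrative.

    ```python
    import numpy as np

    def project_onto_tangent_plane(Q, b, dbdt):
        """Project the rate of change dbdt onto the tangent plane of the manifold
        defined by the quadratic constraints c_i(b) = b @ Q[i] @ b = 0."""
        # Jacobian of the constraints: row i is grad c_i(b) = (Q[i] + Q[i].T) @ b.
        J = np.einsum('ijk,k->ij', Q + np.transpose(Q, (0, 2, 1)), b)
        # Remove the component of dbdt lying in the row space of J, i.e. project
        # orthogonally onto the null space of J (the local tangent hyperplane).
        correction = np.linalg.pinv(J) @ (J @ dbdt)
        return dbdt - correction
    ```

    Advancing the spectral coefficients with the projected rate of change is what keeps the Taylorization measure close to an exact Taylor state during the integration.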