706 research outputs found

    Real-time flutter analysis

    Get PDF
    The algorithmic issues essential to a real-time flutter monitoring system are addressed: guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling of multiple modes, the effect of overparameterization, and prediction of estimate accuracy, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulations, and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.

    A Recursive Algorithm for Computing Cramer-Rao-Type Bounds on Estimator Covariance

    Full text link
    We give a recursive algorithm to calculate submatrices of the Cramer-Rao (CR) matrix bound on the covariance of any unbiased estimator of a vector parameter θ. Our algorithm computes a sequence of lower bounds that converges monotonically to the CR bound with exponential speed of convergence. The recursive algorithm uses an invertible “splitting matrix” to successively approximate the inverse Fisher information matrix. We present a statistical approach to selecting the splitting matrix based on a “complete-data-incomplete-data” formulation similar to that of the well-known EM parameter estimation algorithm. As a concrete illustration we consider image reconstruction from projections for emission computed tomography.
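
    The recursion itself is not spelled out in the abstract; the sketch below (Python/NumPy, with hypothetical names F_X, F_Y, E) is a minimal illustration of a splitting-matrix iteration of this kind, assuming the easily inverted complete-data Fisher information F_X is used as the splitting matrix.

        import numpy as np

        def cr_bound_columns(F_X, F_Y, E, n_iter=50):
            """Approximate the CR-bound columns F_Y^{-1} E by the splitting
            recursion B <- (I - F_X^{-1} F_Y) B + F_X^{-1} E, which converges
            when F_X dominates F_Y (complete data at least as informative)."""
            FX_inv = np.linalg.inv(F_X)             # cheap if F_X is diagonal/sparse
            M = np.eye(F_Y.shape[0]) - FX_inv @ F_Y
            B = FX_inv @ E                          # first (loosest) lower bound
            for _ in range(n_iter):
                B = M @ B + FX_inv @ E              # each sweep tightens the bound
            return B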

    A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems

    Get PDF
    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
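
    MNRES itself is not reproduced in the abstract; as a rough, hedged illustration of extracting slope information from a local surface approximation (rather than from finite differences or analytic sensitivity equations), the following Python sketch fits an affine surface to recently evaluated (parameter, output) pairs and reads off the slopes as approximate sensitivities. The function name and interface are hypothetical.

        import numpy as np

        def estimated_sensitivities(thetas, ys):
            """Approximate d(output)/d(theta) from a local affine surface fit
            y ~ c + s . theta over recently visited points in parameter space.
            thetas: (m, p) array of parameter vectors; ys: (m,) model outputs."""
            A = np.column_stack([np.ones(len(thetas)), thetas])  # [1, theta] design
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
            return coef[1:]                                      # slopes ~ sensitivities

    In a modified Newton-Raphson step, slopes of this kind would stand in for analytically derived sensitivities when forming the likelihood gradient and information matrix.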

    Recursive Algorithms for Computing the Cramer-Rao Bound

    Full text link
    Computation of the Cramer-Rao bound (CRB) on estimator variance requires the inverse or the pseudo-inverse Fisher information matrix (FIM). Direct matrix inversion can be computationally intractable when the number of unknown parameters is large. In this correspondence, we compare several iterative methods for approximating the CRB using matrix splitting and preconditioned conjugate gradient algorithms. For a large class of inverse problems, we show that nonmonotone Gauss-Seidel and preconditioned conjugate gradient algorithms require significantly fewer flops for convergence than monotone “bound preserving” algorithms.
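
    As a concrete, hedged illustration of the preconditioned-conjugate-gradient route mentioned above, the sketch below approximates a single column of the CR bound by solving F x = e_i iteratively instead of inverting the Fisher information matrix; the function name and the diagonal (Jacobi) preconditioner are illustrative choices, not the paper's specific algorithms.

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        def crb_column(F, i, diag_precond=None, maxiter=200):
            """Approximate column i of the CR bound F^{-1} by (preconditioned)
            conjugate gradients on F x = e_i; F is the (symmetric positive
            definite) Fisher information as a dense/sparse matrix or operator."""
            n = F.shape[0]
            e_i = np.zeros(n)
            e_i[i] = 1.0
            M = None
            if diag_precond is not None:
                # e.g. Jacobi preconditioning with diag_precond = 1/diag(F)
                M = LinearOperator((n, n), matvec=lambda v: diag_precond * v)
            x, info = cg(F, e_i, M=M, maxiter=maxiter)  # info == 0 on convergence
            return x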

    A Recursive Restricted Total Least-squares Algorithm

    Get PDF
    We show that the generalized total least squares (GTLS) problem with a singular noise covariance matrix is equivalent to the restricted total least squares (RTLS) problem and propose a recursive method for its numerical solution. The method is based on the generalized inverse iteration. The estimation error covariance matrix and the estimated augmented correction are also characterized and computed recursively. The algorithm is computationally cheap and suitable for online implementation. Simulation results in least squares (LS), data least squares (DLS), total least squares (TLS), and RTLS noise scenarios show fast convergence of the parameter estimates to their optimal values obtained by corresponding batch algorithms.
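
    The restricted and generalized variants are beyond a short sketch, but the core device, inverse iteration toward the singular vector associated with the smallest singular value, can be illustrated for the plain total least squares case (Python/NumPy, hypothetical names; assumes the last entry of the converged vector is nonzero).

        import numpy as np

        def tls_inverse_iteration(A, b, n_iter=30):
            """Plain (unrestricted) TLS: find the right singular vector of
            M = [A b] for the smallest singular value by inverse iteration on
            M^T M, then read off the TLS parameter estimate."""
            M = np.column_stack([A, b])
            G = M.T @ M                              # assumed nonsingular here
            v = np.random.default_rng(0).standard_normal(G.shape[0])
            v /= np.linalg.norm(v)
            for _ in range(n_iter):
                v = np.linalg.solve(G, v)            # one inverse-iteration step
                v /= np.linalg.norm(v)
            return -v[:-1] / v[-1]                   # TLS estimate x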

    Bayesian Cramer-Rao Bound for Mobile Terminal Tracking in Mixed LOS/NLOS Environments

    Full text link

    Recursive estimation of prior probabilities using the mixture approach

    Get PDF
    The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that, under very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, the recursive algorithm can achieve the same performance as the maximum likelihood one. For M > 2, slightly inferior performance is the price of having a recursive algorithm; however, the loss is computable and tolerable.
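
    One common recursion of this general type (a stochastic-approximation update of the mixture weights, shown here only as a hedged illustration rather than the paper's exact algorithm) replaces each weight by a running average of posterior component probabilities:

        import numpy as np

        def update_priors(q, n, x, densities):
            """One recursive update of the mixture prior probabilities from a
            new observation x. densities: list of known component pdfs f_k;
            q: current estimate; n: number of observations seen so far."""
            f = np.array([f_k(x) for f_k in densities])
            w = q * f / np.dot(q, f)        # posterior component probabilities
            n += 1
            q = q + (w - q) / n             # stochastic-approximation step
            return q, n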

    Approximate Gaussian conjugacy: parametric recursive filtering under nonlinearity, multimodality, uncertainty, and constraint, and beyond

    Get PDF
    Since the landmark work of R. E. Kalman in the 1960s, considerable effort has been devoted to time series state space models for a large variety of dynamic estimation problems. In particular, parametric filters that seek analytical estimates based on a closed-form Markov–Bayes recursion, e.g., recursion from a Gaussian or Gaussian mixture (GM) prior to a Gaussian/GM posterior (termed ‘Gaussian conjugacy’ in this paper), form the backbone of general time series filter design. Owing to challenges arising from nonlinearity, multimodality (including target maneuvers), intractable uncertainties (such as unknown inputs and/or non-Gaussian noises), and constraints (including circular quantities), new theories, algorithms, and technologies have been developed continuously to maintain such conjugacy, or to approximate it as closely as possible. These have contributed in large part to the development of time series parametric filters over the last six decades. In this paper, we review the state of the art in distinctive categories and highlight some insights that may otherwise be easily overlooked. In particular, specific attention is paid to nonlinear systems with informative observations, multimodal systems (including Gaussian mixture posteriors and maneuvers), and intractable unknown inputs and constraints, to fill some gaps in existing reviews and surveys. In addition, we provide some new thoughts on alternatives to the first-order Markov transition model and on filter evaluation with regard to computational complexity.
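
    As a minimal illustration of the Gaussian conjugacy discussed above, the sketch below gives one linear-Gaussian Kalman recursion in Python/NumPy (the matrices F, H, Q, R and the variable names are generic placeholders): a Gaussian prior N(m, P) is mapped to a Gaussian posterior in closed form.

        import numpy as np

        def kalman_step(m, P, y, F, Q, H, R):
            """One Markov-Bayes recursion under linear-Gaussian assumptions:
            Gaussian prior N(m, P) -> Gaussian posterior N(m_post, P_post)."""
            # predict (Chapman-Kolmogorov step under a first-order Markov model)
            m_pred = F @ m
            P_pred = F @ P @ F.T + Q
            # update (Bayes rule with a linear-Gaussian measurement y = H x + noise)
            S = H @ P_pred @ H.T + R                    # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
            m_post = m_pred + K @ (y - H @ m_pred)
            P_post = P_pred - K @ S @ K.T
            return m_post, P_post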

    Correspondence: A Recursive Algorithm for Computing Cramer-Rao-Type Bounds on Estimator Covariance

    Get PDF
    I. INTRODUCTION. The Cramer-Rao (CR) bound on estimator covariance is an important tool for predicting fundamental limits on best achievable parameter estimation performance. For a vector parameter θ ∈ Θ ⊂ R^n, an observation Y, and a probability density function f(Y; θ), the CR bound is given by the inverse of the Fisher information matrix F_Y. In this correspondence we give an iterative algorithm for computing columns of the CR bound which requires only O(pn^2) floating point operations per iteration. This algorithm falls into the class of "splitting matrix iterations" [2] with the imposition of an additional requirement: the splitting matrix must be chosen to ensure that a valid lower bound results at each iteration of the algorithm. While a purely algebraic approach to specifying a suitable splitting matrix can also be adopted, here we exploit specific properties of Fisher information matrices arising from the statistical model. Specifically, we formulate the parameter estimation problem in a complete-data-incomplete-data setting and apply a version of the "data processing theorem" [3] for Fisher information matrices. This setting is similar to that which underlies the classical formulation of the maximum likelihood expectation maximization (ML-EM) parameter estimation algorithm. The ML-EM algorithm generates a sequence of estimates θ̂^(k) for θ which successively increases the likelihood function and converges to the maximum likelihood estimator. In a similar manner, our algorithm generates a sequence of tighter and tighter lower bounds on estimator covariance which converges to the actual CR matrix bound. The algorithms given here converge monotonically with exponential rate, where the asymptotic speed of convergence increases as the spectral radius ρ(I − F_X^{-1} F_Y) decreases. Here I is the n × n identity matrix, and F_X and F_Y are the complete- and incomplete-data Fisher information matrices, respectively. Thus when the complete data is only moderately more informative than the incomplete data, F_Y is close to F_X, so that ρ(I − F_X^{-1} F_Y) is close to 0 and the algorithm converges very quickly.
    To implement the algorithm, one must 1) precompute the first p columns of F_X^{-1}, and 2) provide a subroutine that can multiply F_X^{-1} F_Y or F_X^{-1} E_θ[∇²Q(θ̄; θ)] by a column vector (see (18)). By appropriately choosing the complete-data space, this precomputation can be quite simple, e.g., X can frequently be chosen to make F_X sparse or even diagonal. If the complete-data space is chosen intelligently, only a few iterations may be required to produce a bound which closely approximates the CR bound. In this case the proposed algorithm gives an order of magnitude computational savings as compared to conventional exact methods of computing the CR bound. This allows one to examine small submatrices of the CR bound for estimation problems that would have been intractable by exact methods due to the large dimension of F_Y.
    The paper concludes with an implementation of the recursive algorithm for bounding the minimum achievable error of reconstruction for a small region of interest (ROI) in an image reconstruction problem arising in emission computed tomography. By using the complete data specified for the standard EM algorithm for PET reconstruction [4], [5], F_X is shown to be diagonal and the implementation of the recursive CR bound algorithm is very simple. As in the ML-EM PET reconstruction algorithm, the rate of convergence of the iterative CR bound algorithm depends on the image intensity and the tomographic system response matrix. Furthermore, due to the sparseness of the tomographic system response matrix, the computation of each column of the CR bound matrix recursion requires only…
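
    To make the cost argument concrete, the hedged sketch below specializes a splitting-type CR-bound recursion to the situation described here: a diagonal complete-data Fisher information (entries d_complete) and an incomplete-data Fisher information of the generic weighted form F_Y = A^T diag(w) A with a sparse system matrix A. The specific form of F_Y and the weights w are illustrative assumptions, not the paper's exact expressions; with them, each iteration is just a sparse matrix-vector product and a diagonal scaling.

        import numpy as np
        import scipy.sparse as sp

        def crb_column_sparse(A, w, d_complete, i, n_iter=20):
            """Column i of an approximate CR bound via the recursion
            b <- b - D^{-1}(F_Y b) + D^{-1} e_i, with D = diag(d_complete)
            and F_Y = A^T diag(w) A applied only through sparse products."""
            n = A.shape[1]
            e_i = np.zeros(n)
            e_i[i] = 1.0
            d_inv = 1.0 / d_complete
            b = d_inv * e_i                   # loosest bound: D^{-1} e_i
            W = sp.diags(w)
            for _ in range(n_iter):
                Fy_b = A.T @ (W @ (A @ b))    # F_Y b without forming F_Y
                b = b - d_inv * Fy_b + d_inv * e_i
            return b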