Array algorithms for H^2 and H^∞ estimation
Currently, the preferred method for implementing H^2 estimation algorithms is the so-called array form, which includes two main families: square-root array algorithms, which are typically more stable than conventional ones, and fast array algorithms, which, when the system is time-invariant, typically offer an order-of-magnitude reduction in computational effort. Using our recent observation that H^∞ filtering coincides with Kalman filtering in Krein space, in this chapter we develop array algorithms for H^∞ filtering. These can be regarded as natural generalizations of their H^2 counterparts, and involve propagating the indefinite square roots of the quantities of interest. Both the H^∞ square-root and fast array algorithms have the interesting feature that one does not need to explicitly check the positivity conditions required for the existence of H^∞ filters. These conditions are built into the algorithms themselves, so that an H^∞ estimator of the desired level exists if, and only if, the algorithms can be executed. However, since the H^∞ array algorithms predominantly use J-unitary transformations, rather than the unitary transformations of the H^2 case, further investigation is needed to determine their numerical behavior.
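To make the H^2 baseline concrete, here is a minimal sketch of one combined time/measurement update of the square-root array Kalman filter: the pre-array of square-root factors is triangularized by a unitary transformation (here, QR on the transpose), and the gain, innovation covariance factor, and next covariance factor are read off the post-array. The matrix names and the simple QR-based triangularization are illustrative, not the chapter's exact formulation; the H^∞ variants discussed above would instead use J-unitary transformations.

```python
import numpy as np

def sqrt_kalman_step(F, H, Q, R, P_sqrt):
    """One combined update of the H^2 square-root array algorithm.

    P_sqrt is any square root of the predicted state covariance
    (P = P_sqrt @ P_sqrt.T).  Returns (K, Re_sqrt, P_next_sqrt), where
    Re = Re_sqrt @ Re_sqrt.T is the innovation covariance and
    P_next = P_next_sqrt @ P_next_sqrt.T is the next predicted covariance.
    """
    n, m = F.shape[0], H.shape[0]
    Rs = np.linalg.cholesky(R)      # R^{1/2}
    Qs = np.linalg.cholesky(Q)      # Q^{1/2}
    # Pre-array of square-root factors.
    pre = np.block([
        [Rs,               H @ P_sqrt, np.zeros((m, n))],
        [np.zeros((n, m)), F @ P_sqrt, Qs],
    ])
    # Triangularize with a unitary transformation: pre @ Theta is lower
    # triangular; obtained here via QR of the transpose.
    _, Rt = np.linalg.qr(pre.T)
    post = Rt.T                      # (m+n) x (m+n), lower triangular
    Re_sqrt = post[:m, :m]
    Kbar = post[m:, :m]              # Kbar = F P H^T Re^{-T/2}
    P_next_sqrt = post[m:, m:]
    K = Kbar @ np.linalg.inv(Re_sqrt)   # Kalman gain K_p = F P H^T Re^{-1}
    return K, Re_sqrt, P_next_sqrt
```

Because only square-root factors are propagated, the implied covariances stay symmetric and positive semidefinite by construction, which is the source of the improved numerical stability.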
Fast error analysis of continuous GPS observations
It has been generally accepted that the noise in continuous GPS observations can be well described by a power-law plus white noise model. Using maximum likelihood estimation (MLE), the numerical values of the noise model can be estimated. Current methods require calculating the data covariance matrix and inverting it, which is a significant computational burden. Analysing 10 years of daily GPS solutions of a single station can take around 2 h on a regular computer such as a PC with an AMD Athlon™ 64 X2 dual-core processor. When one analyses large networks with hundreds of stations, or hourly instead of daily solutions, the long computation times become a problem. In case the signal only contains power-law noise, the MLE computations can be simplified to a process that scales much more favourably with the number of observations N. For the general case of power-law plus white noise, we present a modification of the MLE equations that allows us to reduce the number of computations within the algorithm from a cubic to a quadratic function of the number of observations when there are no data gaps. For time-series of three and eight years, this means in practice a reduction factor of around 35 and 84 in computation time, without loss of accuracy. In addition, this modification removes the implicit assumption that there is no noise before the first observation. Finally, we present an analytical expression for the uncertainty of the estimated trend if the data only contains power-law noise.
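To illustrate the bottleneck being addressed, here is a minimal sketch of the brute-force likelihood evaluation that the modified equations avoid: the full covariance matrix for power-law plus white noise is built and Cholesky-factorized at O(N^3) cost per evaluation. The power-law covariance uses the standard fractional-integration recursion for a spectral index kappa; the function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def powerlaw_covariance(N, kappa):
    """Unit-variance power-law noise covariance C = U @ U.T, where U is the
    lower-triangular Toeplitz matrix of fractional-integration coefficients:
    h_0 = 1,  h_i = h_{i-1} * (i - 1 + d) / i,  with d = -kappa / 2.
    kappa = 0 gives white noise (C = I); kappa = -1 gives flicker noise."""
    d = -kappa / 2.0
    h = np.empty(N)
    h[0] = 1.0
    for i in range(1, N):
        h[i] = h[i - 1] * (i - 1 + d) / i
    U = np.zeros((N, N))
    for i in range(N):
        U[i:, i] = h[: N - i]
    return U @ U.T

def log_likelihood(r, sigma_pl, sigma_w, kappa):
    """Gaussian log-likelihood of residuals r under power-law plus white
    noise.  Building and factorizing C costs O(N^3) per evaluation, which
    is why a naive MLE search over the noise parameters is so slow."""
    N = len(r)
    C = sigma_pl**2 * powerlaw_covariance(N, kappa) + sigma_w**2 * np.eye(N)
    L = np.linalg.cholesky(C)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    z = np.linalg.solve(L, r)        # whitened residuals, so z @ z = r C^{-1} r
    return -0.5 * (N * np.log(2.0 * np.pi) + logdet + z @ z)
```

An MLE driver would maximize this function over (sigma_pl, sigma_w, kappa); since each evaluation repeats the O(N^3) factorization, reducing the per-evaluation cost to quadratic in N directly yields the reported speedups.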
A global algorithm to estimate the expectations of the components of an observed univariate mixture
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions: Survey and Analysis
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV, and ULLIV). In addition, we show how the subspace-based algorithms can be analyzed and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
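The rank-reduction paradigm can be sketched in a few lines: embed a signal frame in a Hankel matrix, project onto the dominant singular subspace (the "signal subspace"), and map back to a time series by averaging anti-diagonals. This is a generic SVD-truncation sketch in Python rather than the survey's Matlab code, and the frame length m and rank k are illustrative choices.

```python
import numpy as np

def subspace_denoise(x, m=20, k=8):
    """Rank-reduction denoising of one signal frame.

    Embeds x (length N) in an (N-m+1) x m Hankel matrix, keeps the k
    dominant singular components, and re-averages the anti-diagonals
    to return a time series of the same length.
    """
    N = len(x)
    n = N - m + 1
    H = np.lib.stride_tricks.sliding_window_view(x, m)  # Hankel: H[i] = x[i:i+m]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]                    # best rank-k approximation
    # Hk is no longer Hankel; average along anti-diagonals to recover a signal.
    y = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(n):
        y[i : i + m] += Hk[i]
        cnt[i : i + m] += 1.0
    return y / cnt
```

A noiseless sinusoid yields a rank-2 Hankel matrix, so k = 2 recovers it exactly; with additive white noise, the discarded singular components carry mostly noise, which is the basic mechanism all the surveyed decompositions exploit, with the triangular variants trading some subspace accuracy for cheaper updates.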
