
    On Sparse Associative Networks:

    This report is a complement to the working document [4], where a sparse associative network is described. This report shows that the net learning rule in [4] can be viewed as the solution to a weighted least squares problem. This means that we can apply the theoretical framework of least squares problems and compare the net rule with some other iterative algorithms that solve the same problem. The learning rule is compared with the gradient search algorithm and the RPROP algorithm in a simple synthetic experiment. The gradient rule has the slowest convergence, while the associative and RPROP rules converge similarly. The associative learning rule does, however, have a smaller initial error than the RPROP rule.
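    A minimal sketch of the kind of comparison described above: solving a weighted least squares criterion directly and by plain gradient descent on the same objective. The data, the weight matrix D, the step size, and the iteration count are illustrative assumptions, not the report's actual experiment or network rule.

        # Weighted least squares J(w) = (Xw - t)^T D (Xw - t): direct solution vs. gradient descent.
        # All data and parameters below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 5))                 # input patterns (one per row)
        w_true = rng.standard_normal(5)
        t = X @ w_true + 0.1 * rng.standard_normal(100)   # noisy targets
        d = rng.uniform(0.5, 2.0, size=100)               # per-sample weights (diagonal of D)

        # Direct solution of the normal equations: (X^T D X) w = X^T D t
        XtD = X.T * d
        w_wls = np.linalg.solve(XtD @ X, XtD @ t)

        # Plain gradient descent on the same criterion, for comparison
        w = np.zeros(5)
        step = 1e-3
        for _ in range(5000):
            grad = 2 * XtD @ (X @ w - t)                  # gradient of the weighted criterion
            w -= step * grad

        print("direct WLS      :", w_wls)
        print("gradient descent:", w)                     # same minimizer, reached more slowly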

    A Connection Between Half-Quadratic Criteria and EM Algorithms

    Iteratively reweighted least squares (IRLS) and residual steepest descent (RSD) algorithms of robust statistics arise as special cases of half-quadratic schemes [1]. Here, we adopt a statistical framework and show that both algorithms are instances of the EM algorithm. The augmented dataset respectively involves a scale and a location mixture of Gaussians. The sufficient conditions for the construction cover a broad class of already known robust statistics. Index Terms: EM algorithm, half-quadratic criteria, iteratively reweighted least squares (IRLS), residual steepest descent (RSD), scale mixtures.
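    As a concrete reference point for the scheme discussed above, here is a minimal iteratively reweighted least squares (IRLS) sketch for robust linear regression with a Huber-type weight function; the weight function, threshold, data, and iteration count are illustrative assumptions rather than the paper's construction.

        # IRLS for robust regression with Huber-type weights (illustrative assumptions throughout).
        import numpy as np

        def irls_huber(X, y, delta=1.0, n_iter=50):
            w = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary least squares start
            for _ in range(n_iter):
                r = y - X @ w                              # current residuals
                # Huber weights: 1 for small residuals, delta/|r| for large ones
                a = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
                Xa = X * a[:, None]                        # rows scaled by their weights
                w = np.linalg.solve(X.T @ Xa, Xa.T @ y)    # weighted least squares step
            return w

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(50), rng.standard_normal(50)])
        y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(50)
        y[:5] += 10.0                                      # a few gross outliers
        print(irls_huber(X, y))                            # close to [1, 2] despite the outliers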

    Componentwise Condition Numbers for Generalized Matrix Inversion and Linear Least Squares (Journal of Chinese Universities, Series A, Aug. 2005)

    We present componentwise condition numbers for the problems of Moore-Penrose generalized matrix inversion and linear least squares. The condition numbers for these condition numbers are also given. Key words: condition numbers, componentwise analysis, generalized matrix inverses, linear least squares. AMS (2000) subject classifications: 15A12, 65F20, 65F35.
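    A small numerical illustration related to the abstract above: the usual normwise condition number kappa(A) = ||A||_2 ||A^+||_2 of the Moore-Penrose pseudoinverse, plus a crude finite-perturbation probe of how sensitively the least squares solution reacts to componentwise relative changes in A. The perturbation model is an illustrative assumption, not the paper's componentwise condition number formulas.

        # Normwise condition number of A^+ and a crude componentwise sensitivity probe
        # for the least squares solution x = A^+ b (illustrative assumptions throughout).
        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((20, 4))
        b = rng.standard_normal(20)

        A_pinv = np.linalg.pinv(A)
        kappa = np.linalg.norm(A, 2) * np.linalg.norm(A_pinv, 2)
        print("normwise kappa(A):", kappa)

        # Perturb each entry of A by a small relative amount and record the largest
        # relative change in the least squares solution, scaled back by eps.
        eps = 1e-7
        x = A_pinv @ b
        worst = 0.0
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                A_p = A.copy()
                A_p[i, j] *= (1 + eps)
                x_p = np.linalg.pinv(A_p) @ b
                worst = max(worst, np.max(np.abs(x_p - x) / np.maximum(np.abs(x), 1e-15)) / eps)
        print("largest observed componentwise sensitivity:", worst)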

    H∞ Bounds for Least-Squares Estimators

    In this paper we obtain upper and lower bounds for the H∞ norm of the Kalman filter and the RLS algorithm, with respect to prediction and filtered errors. These bounds can be used to study the robustness properties of such estimators. One main conclusion is that, unlike H∞-optimal estimators, which do not allow for any amplification of the disturbances, the least-squares estimators do allow for such amplification. This fact can be especially pronounced in the prediction error case, whereas in the filtered error case the energy amplification is at most four. Moreover, it is shown that the H∞ norm for RLS is data-dependent, whereas for LMS and normalized LMS the H∞ norm is simply unity. 1 Introduction. Since its inception in the early 1960s, the Kalman filter (and the closely related recursive least-squares (RLS) algorithm) has played a central role in estimation theory and adaptive filtering. Recently, on the other hand, there has been growing interest in (so-called) H∞ estimation…
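    In the spirit of the comparison above, here is a small simulation sketch that runs RLS and normalized LMS on the same disturbed linear regression and reports the empirical energy gain from disturbances to prediction errors. The signal model, forgetting factor, step size, and initialization are illustrative assumptions, not the paper's analysis or its H∞ bounds.

        # RLS vs. normalized LMS: empirical ratio of prediction-error energy to disturbance energy.
        # Signal model and parameters are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 2000, 4
        H = rng.standard_normal((n, d))                    # regressor vectors, one per row
        w_true = rng.standard_normal(d)
        v = 0.1 * rng.standard_normal(n)                   # additive disturbances
        y = H @ w_true + v

        def run_rls(H, y, lam=1.0, delta=1e2):
            w = np.zeros(H.shape[1]); P = delta * np.eye(H.shape[1]); e = np.empty(len(y))
            for i, (h, yi) in enumerate(zip(H, y)):
                e[i] = yi - h @ w                          # a priori (prediction) error
                k = P @ h / (lam + h @ P @ h)              # RLS gain
                w = w + k * e[i]
                P = (P - np.outer(k, h @ P)) / lam
            return e

        def run_nlms(H, y, mu=1.0):
            w = np.zeros(H.shape[1]); e = np.empty(len(y))
            for i, (h, yi) in enumerate(zip(H, y)):
                e[i] = yi - h @ w
                w = w + mu * e[i] * h / (1e-12 + h @ h)    # normalized LMS update
            return e

        for name, e in [("RLS", run_rls(H, y)), ("NLMS", run_nlms(H, y))]:
            print(name, "empirical energy gain:", np.sum(e**2) / np.sum(v**2))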