50 research outputs found

    The relaxation method for learning in artificial neural networks

    A new mathematical approach is presented for deriving learning algorithms for a variety of neural network models, including the Hopfield model, Bidirectional Associative Memory, Dynamic Heteroassociative Neural Memory, and Radial Basis Function Networks. The approach is based on the relaxation method for solving systems of linear inequalities. The newly developed learning algorithms are fast, and they guarantee convergence to a solution in a finite number of steps. They are highly insensitive to the choice of parameters and the initial set of weights, and they scale well on random binary patterns. Rigorous mathematical foundations for the new algorithms and simulation studies are included.
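    For orientation, the sketch below shows the classical relaxation method (Agmon–Motzkin–Schoenberg) for a system of linear inequalities Ax ≥ b, with a maximal-distance control rule. It illustrates the underlying tool the abstract names, not the paper's specific learning algorithms; the function name and parameters are illustrative.

```python
import numpy as np

def relaxation_solve(A, b, lam=1.0, max_iter=10000, tol=1e-10):
    """Classical relaxation method for finding x with A @ x >= b:
    repeatedly take a relaxed projection (relaxation parameter
    0 < lam < 2) onto the half-space currently most violated.
    Terminates finitely when the system is strictly feasible."""
    x = np.zeros(A.shape[1])
    row_norms = np.linalg.norm(A, axis=1)
    for _ in range(max_iter):
        # signed distance to each half-space (positive = violated)
        dist = (b - A @ x) / row_norms
        i = int(np.argmax(dist))
        if dist[i] <= tol:
            return x                   # all inequalities satisfied
        # relaxed projection onto {y : A[i] @ y >= b[i]}
        x = x + lam * dist[i] / row_norms[i] * A[i]
    raise RuntimeError("no feasible point found within max_iter steps")
```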

    Some preconditioners for systems of linear inequalities

    We show that a combination of two simple preprocessing steps generally improves the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison of three different but related notions of conditioning for linear inequalities.
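    The abstract does not spell out the two preprocessing steps, so the sketch below only illustrates the general idea with two common normalizations for a homogeneous system Ax > 0; neither is claimed to be the paper's procedure.

```python
import numpy as np

def normalize_rows(A):
    """Scale each row of A to unit Euclidean norm.  For the homogeneous
    system A @ x > 0 this leaves the solution set unchanged, while
    condition measures that depend on row scaling can improve."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A / norms

def equilibrate_columns(A):
    """Hypothetical second step: rescale columns to unit norm.  A
    positive column scaling x -> D @ x maps solutions of (A @ D) y > 0
    back to solutions of A @ x > 0 via x = D @ y."""
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    return A / norms, 1.0 / norms.ravel()   # scaled matrix, diag of D
```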

    On the von Neumann and Frank-Wolfe Algorithms with Away Steps

    The von Neumann algorithm is a simple coordinate-descent algorithm to determine whether the origin belongs to a polytope generated by a finite set of points. When the origin is in the interior of the polytope, the algorithm generates a sequence of points in the polytope that converges linearly to zero. The algorithm's rate of convergence depends on the radius of the largest ball around the origin contained in the polytope. We show that under the weaker condition that the origin is in the polytope, possibly on its boundary, a variant of the von Neumann algorithm that includes away steps generates a sequence of points in the polytope that converges linearly to zero. The new algorithm's rate of convergence depends on a certain geometric parameter of the polytope that extends the above radius but is always positive. Our linear convergence result and geometric insights also extend to a variant of the Frank-Wolfe algorithm with away steps for minimizing a strongly convex function over a polytope.
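    A compact way to see the mechanics is Frank-Wolfe with away steps specialized to f(x) = ||x||²/2 over conv(a₁, …, aₙ), where the regular step is exactly the von Neumann update. The sketch below follows this standard scheme under that assumption; it is an illustration, not the authors' implementation.

```python
import numpy as np

def von_neumann_away(A, max_iter=10000, tol=1e-12):
    """Minimize ||x||^2 over conv(columns of A) by Frank-Wolfe with
    away steps; a small final ||x|| certifies that the origin is in
    (or near) the polytope.  Tracks convex-combination weights lam."""
    n = A.shape[1]
    lam = np.zeros(n)
    lam[0] = 1.0                       # start at the first generator
    x = A[:, 0].copy()
    for _ in range(max_iter):
        g = x                          # gradient of ||x||^2 / 2
        scores = A.T @ g
        s = int(np.argmin(scores))     # toward (von Neumann) vertex
        support = np.flatnonzero(lam > 0)
        v = support[int(np.argmax(scores[support]))]   # away vertex
        gap_fw = g @ x - scores[s]
        gap_away = scores[v] - g @ x
        if max(gap_fw, gap_away) <= tol:
            break                      # (near-)optimal iterate
        if gap_fw >= gap_away:         # regular step toward a_s
            d, gamma_max = A[:, s] - x, 1.0
        else:                          # away step off vertex a_v
            d, gamma_max = x - A[:, v], lam[v] / (1.0 - lam[v])
        # exact line search for the quadratic objective, clipped
        gamma = min(max(-(g @ d) / (d @ d), 0.0), gamma_max)
        if gap_fw >= gap_away:
            lam *= (1.0 - gamma); lam[s] += gamma
        else:
            lam *= (1.0 + gamma); lam[v] -= gamma
        x = A @ lam
    return x, lam
```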

    Successive projection under a quasi-cyclic order

    Cover title. Includes bibliographical references (leaves 8-10). Research supported by the U.S. Army Research Office under grant DAAL03-86-K-0171. By Paul Tseng.
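    The record above carries only catalog metadata. As background, the sketch below shows successive orthogonal projection onto half-spaces under a caller-supplied visiting order; a quasi-cyclic order is one in which every index keeps recurring within a bounded window, with plain cyclic repetition as the simplest case. The setup is illustrative, not Tseng's exact formulation.

```python
import numpy as np

def successive_projection(A, b, order, x0, sweeps=100):
    """Successive projection onto the half-spaces {x : A[i] @ x <= b[i]},
    visiting them in the index sequence `order` (e.g. a repeated cyclic
    order list(range(len(b)))).  Each violated constraint is handled by
    an orthogonal projection onto its boundary hyperplane."""
    x = x0.astype(float).copy()
    norms_sq = np.einsum('ij,ij->i', A, A)
    for _ in range(sweeps):
        for i in order:
            violation = A[i] @ x - b[i]
            if violation > 0:          # outside this half-space
                x -= violation / norms_sq[i] * A[i]
    return x
```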

    Nonnegative Moore–Penrose inverses of Gram operators

    This paper is concerned with necessary and sufficient conditions for the nonnegativity of Moore–Penrose inverses of Gram operators between real Hilbert spaces. These conditions include statements on the acuteness (or obtuseness) of certain closed convex cones. The main result generalizes a well-known result for inverses in the finite-dimensional case over the nonnegative orthant to Moore–Penrose inverses in (possibly) infinite-dimensional Hilbert spaces over any general closed convex cone.
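    The classical special case being generalized can be checked numerically: the sketch below, with hypothetical helper names, forms a Gram matrix and tests whether its Moore–Penrose inverse is entrywise nonnegative, i.e. nonnegativity with respect to the nonnegative orthant. The paper's results replace the orthant by a general closed convex cone in Hilbert space; this only illustrates the finite-dimensional case.

```python
import numpy as np

def pinv_is_nonnegative(B, tol=1e-12):
    """Form the Gram matrix G = B.T @ B and test whether its
    Moore-Penrose inverse is entrywise nonnegative (the classical
    finite-dimensional, nonnegative-orthant setting)."""
    G = B.T @ B
    G_pinv = np.linalg.pinv(G)
    return bool(np.all(G_pinv >= -tol)), G_pinv
```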