Convergence theorems for a class of learning algorithms with VLRPs

We first consider the convergence of simple competitive learning with vanishing learning rate parameters (VLRPs). Examples show that even in this setting the learning fails to converge in general. This leads us to the following problem: find a family of VLRPs such that an algorithm using them reaches the global minima with probability one. Here we present an approach different from stochastic approximation theory and determine a new family of VLRPs such that the corresponding learning algorithm escapes the metastable states with probability one. In the literature it is generally believed that a reasonable family of VLRPs is of the order of $1/t^{\alpha}$ for $1/2 < \alpha \le 1$, where $t$ is the time. However, we find that a family of VLRPs that drives the algorithm to the global minima should lie between $1/\log t$ and $1/\sqrt{\log t}$.
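
For illustration only, the following is a minimal sketch of winner-take-all competitive learning with a slowly vanishing learning rate of the kind the abstract identifies, $\eta_t = c/\log(t+2)$. The function name, the constant c, the offset inside the logarithm, and the synthetic usage data are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

def competitive_learning(data, k, t_max=10_000, c=1.0, seed=0):
    """Simple (winner-take-all) competitive learning with a slowly
    vanishing learning rate eta_t = c / log(t + 2), i.e. a schedule
    in the 1/log t regime described in the abstract."""
    rng = np.random.default_rng(seed)
    # Initialize k prototype vectors from randomly chosen data points.
    centers = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    for t in range(t_max):
        x = data[rng.integers(len(data))]                    # sample one input
        winner = np.argmin(np.linalg.norm(centers - x, axis=1))
        eta = c / np.log(t + 2)                              # VLRP ~ 1/log t
        centers[winner] += eta * (x - centers[winner])       # move winner toward x
    return centers

# Example usage on synthetic clustered data (illustrative only).
rng = np.random.default_rng(1)
data = np.concatenate(
    [rng.normal(loc=m, scale=0.1, size=(200, 2)) for m in (-1.0, 0.0, 1.0)]
)
centers = competitive_learning(data, k=3)
```

A schedule of order $1/t^{\alpha}$ with $1/2 < \alpha \le 1$ can be tried by swapping the `eta` line; by the abstract's result, such a schedule decays too quickly for the iterates to escape metastable states with probability one, whereas the logarithmic schedule above stays in the identified range.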