    Conjugate and natural gradient rules for BYY harmony learning on Gaussian mixture with automated model selection

    Under the Bayesian Ying–Yang (BYY) harmony learning theory, a harmony function has been developed on the bi-directional architecture of the BYY system for Gaussian mixtures, with the important feature that its maximization via a general gradient rule performs model selection automatically during parameter learning on a set of samples drawn from a Gaussian mixture. This paper further proposes conjugate and natural gradient rules to implement the maximization of the harmony function, i.e. BYY harmony learning, on Gaussian mixtures more efficiently. Simulation experiments demonstrate that these two new gradient rules not only work well but also converge more quickly than the general gradient rule.

    Keywords: Bayesian Ying–Yang learning; Gaussian mixture; automated model selection; conjugate gradient; natural gradient
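
    A minimal sketch of the idea described in the abstract, not the paper's exact algorithm: gradient ascent on a harmony-style objective for a one-dimensional Gaussian mixture, where components whose mixing weights shrink toward zero are effectively discarded (automatic model selection). The specific harmony form H = (1/N) Σ_t Σ_j p(j|x_t) log[α_j N(x_t; μ_j, σ_j²)], the plain-gradient update (rather than the conjugate or natural gradient rules proposed in the paper), and all function names below are assumptions for illustration.

```python
import jax
import jax.numpy as jnp

def log_components(x, mu, log_sigma):
    # log N(x | mu_j, sigma_j^2) for every sample/component pair, shape (N, k)
    sigma = jnp.exp(log_sigma)
    return (-0.5 * jnp.log(2 * jnp.pi) - log_sigma
            - 0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2)

def harmony(params, x):
    # Assumed harmony-style objective: posterior-weighted log joint, averaged over samples
    mu, log_sigma, logits = params
    log_alpha = jax.nn.log_softmax(logits)           # mixing weights via softmax
    log_joint = log_alpha[None, :] + log_components(x, mu, log_sigma)
    post = jax.nn.softmax(log_joint, axis=1)         # posterior p(j | x_t)
    return jnp.mean(jnp.sum(post * log_joint, axis=1))

@jax.jit
def step(params, x, lr=0.05):
    # Plain (general) gradient ascent step; the paper's contribution replaces this
    # with conjugate and natural gradient updates for faster convergence.
    grads = jax.grad(harmony)(params, x)
    return tuple(p + lr * g for p, g in zip(params, grads))

# Synthetic data from a 2-component mixture, fitted with k = 5 candidate components
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
x = jnp.concatenate([jax.random.normal(key1, (200,)) - 3.0,
                     jax.random.normal(key2, (200,)) + 3.0])
k = 5
params = (jnp.linspace(-6.0, 6.0, k), jnp.zeros(k), jnp.zeros(k))
for _ in range(2000):
    params = step(params, x)
print("mixing weights:", jax.nn.softmax(params[2]))  # redundant components shrink toward 0
```

    In this sketch, model selection shows up as the mixing weights of the three superfluous components collapsing toward zero during learning, so only the two genuine components survive; the conjugate and natural gradient rules of the paper aim to reach that state in fewer iterations than the plain gradient update used here.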