7 research outputs found

    Learning capability of the rescaled pure greedy algorithm with non-iid sampling

    Get PDF
    We consider the rescaled pure greedy learning algorithm (RPGLA) with dependent samples drawn according to a non-identical sequence of probability distributions. The generalization performance is analyzed by applying the independent-blocks technique and accounting for the drift error. We derive a satisfactory learning rate for the algorithm under the assumption that the process satisfies a stationary β-mixing condition, and find that the optimal rate O(n^{-1}) can be obtained for i.i.d. processes.
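
    The RPGLA referenced above builds on the rescaled pure greedy algorithm (RPGA), in which each pure greedy step is followed by a global rescaling of the current approximant. Below is a minimal Python sketch of that idealized Hilbert-space iteration over a finite dictionary of unit-norm vectors; the function name and toy setup are illustrative assumptions, and it does not reproduce the paper's empirical-risk version with non-i.i.d. samples.

    import numpy as np

    def rescaled_pure_greedy(target, dictionary, n_iter=50):
        """Rescaled pure greedy algorithm (RPGA) sketch in a finite-dimensional
        Hilbert space: pick the dictionary element most correlated with the
        residual, take a pure greedy step, then rescale the whole approximant
        to best fit the target."""
        f = np.zeros_like(target)            # current approximant f_k
        for _ in range(n_iter):
            r = target - f                   # residual r_k = target - f_k
            inner = dictionary @ r           # inner products <r_k, g> for all atoms
            j = np.argmax(np.abs(inner))     # pure greedy selection
            f_hat = f + inner[j] * dictionary[j]   # pure greedy step (unit-norm atoms)
            denom = np.dot(f_hat, f_hat)
            if denom == 0:
                break
            # rescaling step: s_k = <target, f_hat> / ||f_hat||^2
            f = (np.dot(target, f_hat) / denom) * f_hat
        return f

    # Toy usage: approximate a sparse combination of random unit-norm atoms.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((200, 30))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    target = D[:5].sum(axis=0)
    approx = rescaled_pure_greedy(target, D, n_iter=20)
    print(np.linalg.norm(target - approx))   # residual norm decreases with more iterations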

    Learning Capability of Relaxed Greedy Algorithms

    No full text
    In the practice of machine learning, one often encounters problems in which noisy data are abundant while the learning targets are imprecise and elusive. To address these challenges, most traditional learning algorithms employ hypothesis spaces of large capacity, which inevitably leads to a high computational burden and considerable sluggishness. Utilizing greedy algorithms in this kind of learning environment has greatly improved machine performance. The best existing learning rate of various greedy algorithms is proved to achieve the order of (m/log m)^{-1/2}, where m is the sample size. In this paper, we provide a relaxed greedy algorithm and study its learning capability. We prove that the learning rate of the new relaxed greedy algorithm is faster than the order m^{-1/2}. Unlike many other greedy algorithms, which are often indecisive in issuing a stopping order to the iteration process, our algorithm has a clearly established stopping criterion.
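
    For context, the following Python sketch shows one classical form of a relaxed greedy iteration for empirical least squares over a finite dictionary: each step blends the previous fit with a single newly selected dictionary element using the relaxation weight 2/(k+1). This textbook-style form is only an assumption about what "relaxed greedy" denotes here; the paper's specific algorithm and its stopping criterion are not reproduced.

    import numpy as np

    def relaxed_greedy_fit(Phi, y, n_iter=50):
        """Classical relaxed greedy iteration for empirical least squares.
        Phi: (n_samples, n_dict) matrix of dictionary functions evaluated at the data.
        y:   (n_samples,) vector of noisy observations.
        Step k updates the fit as f_k = (1 - a_k) f_{k-1} + a_k * b * phi_j,
        with a_k = 2/(k+1) and (j, b) chosen to minimize the empirical squared error."""
        n, K = Phi.shape
        f = np.zeros(n)                       # current fitted values at the sample points
        for k in range(1, n_iter + 1):
            a = 2.0 / (k + 1)                 # relaxation weight
            base = (1.0 - a) * f
            r = y - base                      # residual to be explained by a * b * phi_j
            best_err, best_j, best_b = np.inf, None, 0.0
            for j in range(K):
                col = Phi[:, j]
                denom = a * np.dot(col, col)
                if denom == 0:
                    continue
                b = np.dot(col, r) / denom    # optimal coefficient for this element
                err = np.sum((r - a * b * col) ** 2)
                if err < best_err:
                    best_err, best_j, best_b = err, j, b
            if best_j is None:
                break
            f = base + a * best_b * Phi[:, best_j]
        return f

    The relaxation weight shrinks the influence of earlier steps, which is what allows the iteration to trade off approximation error against the estimation error coming from the noisy sample.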

    IEEE Transactions on Neural Networks and Learning Systems, Vol. 24, No. 10, October 2013

    No full text
    1. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.
    2. Lattice computing extension of the FAM neural classifier for human facial expression recognition.
    3. Rapid feedforward computation by temporal encoding and learning with spiking neurons.
    4. Mean vector component analysis for visualization and clustering of nonnegative data.
    5. RBF-based technique for statistical demodulation of pathological tremor.
    6. Automated induction of heterogeneous proximity measures for supervised spectral embedding.
    7. Coordination of multiagents interacting under independent position and velocity topologies.
    8. Learning capability of relaxed greedy algorithms.
    9. Minimax sparse logistic regression for very high-dimensional feature selection.
    10. Ensemble learning in fixed expansion layer networks for mitigating catastrophic forgetting.
    11. SVR learning-based spatiotemporal fuzzy logic controller for nonlinear spatially distributed dynamic systems.
    12. Single image super-resolution with multiscale similarity learning.
    13. A robust elicitation algorithm for discovering DNA motifs using fuzzy self-organizing maps.
    14. EEG-based learning system for online motion sickness level estimation in a dynamic vehicle environment.
    15. New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying ... etc.