
    Optimal learning rates for least squares regularized regression with unbounded sampling

    A standard assumption in the theoretical study of learning algorithms for regression is uniform boundedness of the output sample values, which excludes the common case of Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces, without assuming uniform boundedness of the sampling. By imposing some incremental conditions on moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
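    As a worked illustration of the least squares regularization scheme the abstract refers to, below is a minimal kernel ridge regression sketch with a Gaussian kernel and synthetic outputs carrying unbounded Gaussian noise; the kernel choice, hyperparameters, and data are illustrative assumptions, not taken from the paper.

    import numpy as np

    def gaussian_kernel(X, Z, sigma=0.5):
        # Pairwise Gaussian (RBF) kernel: K[i, j] = exp(-||x_i - z_j||^2 / (2 * sigma^2))
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def fit_regularized_ls(X, y, lam=0.01, sigma=0.5):
        # Least squares regularization in the RKHS of the kernel:
        #   f = argmin_f (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
        # By the representer theorem, the coefficients are alpha = (K + lam * m * I)^{-1} y.
        m = X.shape[0]
        K = gaussian_kernel(X, X, sigma)
        return np.linalg.solve(K + lam * m * np.eye(m), y)

    def predict(X_train, alpha, X_new, sigma=0.5):
        return gaussian_kernel(X_new, X_train, sigma) @ alpha

    # Synthetic data whose outputs carry unbounded Gaussian noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, 1))
    y = np.sin(3.0 * X[:, 0]) + rng.normal(scale=0.3, size=200)
    alpha = fit_regularized_ls(X, y)
    y_hat = predict(X, alpha, X[:5])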

    Utilizing Priming to Identify Optimal Class Ordering to Alleviate Catastrophic Forgetting

    In order for artificial neural networks to begin accurately mimicking biological ones, they must be able to adapt to new exigencies without forgetting what they have learned from previous training. Lifelong learning approaches to artificial neural networks strive towards this goal, yet have not progressed far enough to be realistically deployed for natural language processing tasks. Catastrophic forgetting remains the proverbial roadblock keeping researchers from an adequate lifelong learning model. While efforts are being made to quell catastrophic forgetting, there is little research into the importance of class ordering when training on new classes for incremental learning. This is surprising, since the order in which humans learn "classes" is heavily monitored and incredibly important. While heuristics for developing an ideal class order have been researched, this paper examines class ordering as it relates to priming as a scheme for incremental class learning. By examining the connections between various methods of priming found in humans and how those are mimicked yet remain unexplained in lifelong machine learning, this paper provides a better understanding of the similarities between biological and synthetic systems while improving current practices to combat catastrophic forgetting. Through merging psychological priming practices with class ordering, this paper identifies a generalizable method for class ordering in NLP incremental learning tasks that consistently outperforms random class ordering.
    Comment: Accepted to IEEE International Conference on Semantic Computing (ICSC) 202
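    The abstract does not detail how the priming-based ordering is constructed, but the kind of evaluation it alludes to (training on classes one at a time in a chosen order and comparing against a random ordering) can be sketched roughly as follows; the classifier, the chosen_order variable, and the data splits are hypothetical placeholders, not the paper's method.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def accuracy_per_step(X, y, class_order, X_test, y_test):
        # Naive incremental class learning: train on one class at a time, in the
        # given order, with no rehearsal, and record test accuracy over all classes
        # after each step so different orderings can be compared for forgetting.
        clf = SGDClassifier(random_state=0)
        all_classes = np.unique(y)
        history = []
        for step, c in enumerate(class_order):
            Xc, yc = X[y == c], y[y == c]
            clf.partial_fit(Xc, yc, classes=all_classes if step == 0 else None)
            history.append(clf.score(X_test, y_test))
        return history

    # Hypothetical comparison of a chosen ordering against a random baseline:
    # curve_ordered = accuracy_per_step(X, y, chosen_order, X_test, y_test)
    # curve_random = accuracy_per_step(X, y, np.random.permutation(np.unique(y)), X_test, y_test)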

    Combining offline and online machine learning to estimate state of health of lithium-ion batteries

    This article reports a new state of health (SOH) estimation method for lithium-ion batteries using machine learning. Practical problems with cell inconsistency and online implementability are addressed by a proposed individualized estimation scheme that blends a model migration method with ensemble learning. A set of candidate models, based on slope-bias correction (SBC) and radial basis function neural networks (RBFNNs), is first trained offline, using a single-point feature on the incremental capacity curve as the model input. For online operation, the prediction errors due to cell inconsistency in the target new cell are then mitigated by a proposed modified random forest regression (mRFR) for high adaptability. The results show that, compared to prevailing methods, the proposed SBC-RBFNN-mRFR scheme achieves considerably higher SOH estimation accuracy while requiring only a small amount of early data and online measurements for practical operation.
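    A rough sketch of the offline portion of such a pipeline (an RBF network fitted on a single scalar incremental-capacity feature, then migrated to a new cell by a slope-bias correction fitted on a few early measurements) is given below; the center placement, widths, and ridge term are illustrative assumptions, and the online mRFR adaptation step is not shown.

    import numpy as np

    def rbf_design(x, centers, width):
        # RBF features for a scalar input (e.g., one point on the incremental
        # capacity curve): phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2)).
        x = np.atleast_1d(x)
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

    def fit_rbfnn(x, y, n_centers=10, width=0.1, ridge=1e-6):
        # Offline step: fit the output weights of an RBF network by
        # ridge-regularized least squares on data from well-characterized cells.
        centers = np.linspace(x.min(), x.max(), n_centers)
        Phi = rbf_design(x, centers, width)
        w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
        return lambda xq: rbf_design(xq, centers, width) @ w

    def slope_bias_correction(base_model, x_early, y_early):
        # Model migration: adapt the offline model to a new cell using only a few
        # early measurements, by fitting SOH_new ~ k * base_model(x) + b.
        p = base_model(x_early)
        A = np.column_stack([p, np.ones_like(p)])
        k, b = np.linalg.lstsq(A, y_early, rcond=None)[0]
        return lambda xq: k * base_model(xq) + b

    # Hypothetical usage with early data (x_early, soh_early) from a new cell:
    # migrated = slope_bias_correction(fit_rbfnn(x_offline, soh_offline), x_early, soh_early)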