    Convergence of Online Mirror Descent

    In this paper we consider online mirror descent (OMD) algorithms, a class of scalable online learning algorithms exploiting data geometric structures through mirror maps. Necessary and sufficient conditions are presented in terms of the step size sequence $\{\eta_t\}_t$ for the convergence of an OMD algorithm with respect to the expected Bregman distance induced by the mirror map. The condition is $\lim_{t\to\infty}\eta_t=0$, $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of positive variances, and it reduces to $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of zero variances, for which linear convergence may be achieved by taking a constant step size sequence. A sufficient condition for almost sure convergence is also given. We establish tight error bounds under mild conditions on the mirror map, the loss function, and the regularizer. Our results are achieved by a novel analysis of the one-step progress of the OMD algorithm using smoothness and strong convexity of the mirror map and the loss function.
    Comment: Published in Applied and Computational Harmonic Analysis, 202
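
    To make the analyzed update concrete, here is a minimal sketch of OMD with the negative-entropy mirror map on the probability simplex (the exponentiated-gradient special case), using a step size schedule $\eta_t = 1/\sqrt{t}$ that satisfies the convergence condition above. The mirror map choice and the toy losses are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def omd_neg_entropy(grad_fn, d, T=1000):
    """Online mirror descent with the negative-entropy mirror map
    (exponentiated gradient) on the probability simplex.

    Step sizes eta_t = 1/sqrt(t) satisfy eta_t -> 0 and sum eta_t = inf,
    the convergence condition for the positive-variance case.
    """
    x = np.full(d, 1.0 / d)          # start at the uniform distribution
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)       # step size schedule
        g = grad_fn(x, t)            # (sub)gradient of the loss at time t
        # Dual step: the gradient of negative entropy is log x, so
        # stepping in the dual and mapping back reduces to a
        # multiplicative update followed by normalization.
        x = x * np.exp(-eta * g)
        x /= x.sum()
    return x

# Toy example: stochastic linear losses <c + noise, x> (illustrative only).
rng = np.random.default_rng(0)
c = np.array([0.9, 0.5, 0.1])
x_hat = omd_neg_entropy(lambda x, t: c + 0.1 * rng.normal(size=3), d=3)
print(x_hat)  # mass concentrates on the smallest-cost coordinate
```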

    Fuel Cell Fundamentals


    Modification Method of Tooth Profile of Locomotive Traction Gear Based on Moment Arm Variation

    Locomotive traction gear is the key component for power transmission and speed control in a locomotive transmission system, and it plays an important role in determining locomotive running speed and load-carrying torque. Since there is currently no universal rule for the profile modification of locomotive gears, this paper considers tooth profile modification through the combination of an increased contact ratio and the variation of the moment arm of action. Based on the principle of modification and the load direction after modification, the change rule of the moment arm of action is determined, along with the interval range of the tooth profile modification. Taking a certain locomotive traction gear as an example, the results obtained by the proposed method, which combines moment arm of action variation with an increased contact ratio, are compared against those of the traditional empirical formula through finite element simulation, verifying the superiority of the proposed modification theory. This has important theoretical significance for the profile modification of locomotive traction gears
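
    Since the method builds on raising the transverse contact ratio of the gear pair, the sketch below computes that ratio for a standard external spur gear pair from tooth numbers, module, and pressure angle, using the textbook line-of-action formula. The function and the example tooth counts are hypothetical illustrations, not data from the paper.

```python
import math

def transverse_contact_ratio(z1, z2, m, alpha_deg=20.0, addendum_coeff=1.0):
    """Transverse contact ratio of a standard external spur gear pair.

    z1, z2         : tooth numbers of pinion and gear
    m              : module (mm)
    alpha_deg      : pressure angle (degrees)
    addendum_coeff : addendum coefficient (1.0 for standard gears)
    """
    alpha = math.radians(alpha_deg)
    # Base radii and addendum (tip) radii of both gears.
    rb1, rb2 = 0.5 * m * z1 * math.cos(alpha), 0.5 * m * z2 * math.cos(alpha)
    ra1, ra2 = 0.5 * m * z1 + addendum_coeff * m, 0.5 * m * z2 + addendum_coeff * m
    a = 0.5 * m * (z1 + z2)             # centre distance (no profile shift)
    pb = math.pi * m * math.cos(alpha)  # base pitch
    # Length of the line of action divided by the base pitch.
    return (math.sqrt(ra1**2 - rb1**2)
            + math.sqrt(ra2**2 - rb2**2)
            - a * math.sin(alpha)) / pb

# Example: a 23/94-tooth pair (hypothetical numbers for illustration).
print(f"contact ratio = {transverse_contact_ratio(23, 94, m=8):.3f}")
```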

    Electromagnetic buffering considering PM eddy current loss under intensive impact load

    An intensive impact load generates a large acceleration in the primary part of an electromagnetic buffer (EMB), producing an instantaneous increase in the eddy current loss of the permanent magnet (PM). In this paper, the PM eddy current loss is taken into account in electromagnetic buffering under intensive impact load, and the reason why the eddy current damping force differs between the two buffer stages is analyzed. The experimental results show that the model considering the PM eddy current loss is more accurate
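
    As a loose illustration of the coupling between motion and loss, here is a toy lumped-parameter sketch in which the eddy-current damping force is velocity-proportional and the eddy-current loss is the corresponding dissipated power; all coefficients are hypothetical placeholders, not the paper's model or data.

```python
# Toy lumped-parameter model (all values hypothetical): the buffer's
# primary part decelerates under a velocity-proportional eddy-current
# damping force F = -c * v, and the instantaneous dissipated power
# (the eddy-current "loss") is P = c * v**2.
c = 450.0      # damping coefficient, N*s/m (assumed)
mass = 25.0    # moving mass, kg (assumed)
v0 = 12.0      # initial velocity from the impact, m/s (assumed)

dt, t_end = 1e-4, 0.5
v, dissipated = v0, 0.0
for _ in range(int(t_end / dt)):
    force = -c * v                 # eddy-current damping force
    dissipated += c * v**2 * dt    # accumulate dissipated energy
    v += force / mass * dt         # explicit Euler integration

print(f"residual velocity: {v:.4f} m/s")
print(f"energy dissipated: {dissipated:.1f} J "
      f"of {0.5 * mass * v0**2:.1f} J initial")
```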

    Differentially Private Stochastic Gradient Descent with Low-Noise

    Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection. It is therefore of both practical and theoretical importance to develop privacy-preserving machine learning algorithms that achieve good performance while preserving privacy. In this paper, we focus on the privacy and the utility (measured by excess risk bounds) of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization. Specifically, we examine the pointwise learning problem in the low-noise setting, for which we derive sharper excess risk bounds for the differentially private SGD algorithm. In the pairwise learning setting, we propose a simple differentially private SGD algorithm based on gradient perturbation. Furthermore, we develop novel utility bounds for the proposed algorithm, proving that it achieves optimal excess risk rates even for non-smooth losses. Notably, we establish fast learning rates for privacy-preserving pairwise learning under the low-noise condition, which is the first result of its kind
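
    Gradient perturbation, as referenced in the abstract, is conventionally implemented by clipping each per-sample gradient and adding calibrated Gaussian noise before the update. The sketch below shows that standard pattern on a least-squares toy problem; the loss, clipping norm, and noise multiplier are assumptions for illustration, not the paper's specific algorithm or constants, and no privacy accounting is performed.

```python
import numpy as np

def dp_sgd(X, y, epochs=50, lr=0.1, clip=1.0, noise_multiplier=1.0, seed=0):
    """Gradient-perturbation DP-SGD for least-squares regression.

    Each per-sample gradient is clipped to L2 norm `clip`, then Gaussian
    noise with std `noise_multiplier * clip` is added to the summed
    gradient (the standard Gaussian-mechanism recipe; the privacy budget
    this yields depends on the accountant and is not computed here).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        grads = (X @ w - y)[:, None] * X   # per-sample gradients, shape (n, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # clip
        noise = rng.normal(0.0, noise_multiplier * clip, size=d)
        w -= lr * (grads.sum(axis=0) + noise) / n   # noisy averaged step
    return w

# Synthetic example (hypothetical data).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
print(dp_sgd(X, y))  # approximately recovers w_true, up to clipping bias
```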