
    Geometric Correction in Diffusive Limit of Neutron Transport Equation in 2D Convex Domains

    Consider the steady neutron transport equation with diffusive boundary condition. In [Wu and Guo (2015), Comm. Math. Phys.] and [Wu, Yang and Guo (2016), Preprint], it was discovered that a geometric correction is necessary for the Milne problem of Knudsen-layer construction in a disk or annulus. In this paper, we establish the diffusive limit for a 2D convex domain. Our contribution relies on novel $W^{1,\infty}$ estimates for the Milne problem with geometric correction in the presence of a convex domain, as well as an $L^{2m}$-$L^{\infty}$ framework which yields stronger remainder estimates. (Comment: 60 pages)
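    A minimal sketch of the setting (the scaling and the diffuse-reflection normalization below are standard-formulation assumptions, not taken verbatim from this abstract): with Knudsen number $\epsilon$, one-speed velocity $w \in S^1$, and outward normal $n$, the scaled steady problem reads

        \epsilon\, w \cdot \nabla_x u^{\epsilon} + u^{\epsilon} - \bar{u}^{\epsilon} = 0 \quad \text{in } \Omega \times S^1,
        \qquad \bar{u}^{\epsilon}(x) = \frac{1}{2\pi} \int_{S^1} u^{\epsilon}(x, w)\, \mathrm{d}w,
        u^{\epsilon}(x_0, w) = c \int_{w' \cdot n(x_0) > 0} u^{\epsilon}(x_0, w')\, \big(w' \cdot n(x_0)\big)\, \mathrm{d}w' \quad \text{for } x_0 \in \partial\Omega,\ w \cdot n(x_0) < 0,

    with $c$ a normalizing constant. In this setting the diffusive limit identifies the interior behaviour of $u^{\epsilon}$ as $\epsilon \to 0$ with a diffusion (Laplace) equation, up to Knudsen boundary layers near $\partial\Omega$.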

    Geometric Correction for Diffusive Expansion of Steady Neutron Transport Equation

    We revisit the diffusive limit of a steady neutron transport equation in a 2-D unit disk with one-speed velocity. We show that the classical result in [4], based on the Milne expansion, is incorrect in $L^{\infty}$, and we give the correct answer by studying the $\epsilon$-Milne expansion with geometric correction. (Comment: 62 pages)
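    A sketch of the $\epsilon$-Milne problem with geometric correction (the boundary-layer variables and the exact correction factor follow the formulation recalled from [Wu and Guo (2015)]; normalizations may differ): writing $\eta$ for the rescaled distance to the boundary of the disk and $\phi$ for the velocity angle, the boundary-layer equation takes the form

        \sin\phi\, \frac{\partial f}{\partial \eta} + F(\epsilon; \eta)\, \cos\phi\, \frac{\partial f}{\partial \phi} + f - \bar{f} = 0,
        \qquad F(\epsilon; \eta) = -\frac{\epsilon}{1 - \epsilon\eta}, \qquad \bar{f}(\eta) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(\eta, \phi)\, \mathrm{d}\phi,

    where the $\cos\phi\, \partial_\phi$ term is the geometric correction absent from the classical flat Milne problem; neglecting it is what causes the expansion of [4] to fail in $L^{\infty}$.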

    Convergence of Unregularized Online Learning Algorithms

    In this paper we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization. We establish a sufficient condition and a necessary condition for the convergence of excess generalization errors in expectation. A sufficient condition for the almost sure convergence is also given. With high probability, we provide explicit convergence rates of the excess generalization errors for both averaged iterates and the last iterate, which in turn also imply convergence rates with probability one. To our best knowledge, this is the first high-probability convergence rate for the last iterate of online gradient descent algorithms without strong convexity. Without any boundedness assumptions on iterates, our results are derived by a novel use of two measures of the algorithm's one-step progress, respectively by generalization errors and by distances in RKHSs, where the variances of the involved martingales are cancelled out by the descent property of the algorithm
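    A minimal sketch of the algorithm being analyzed, assuming a Gaussian kernel, a least-squares loss, and a $1/\sqrt{t}$ step size (all three are illustrative assumptions not fixed by the abstract): unregularized online gradient descent keeps the iterate as a kernel expansion over the examples seen so far and returns both the last and the averaged iterate.

    import numpy as np


    def gaussian_kernel(x, z, sigma=1.0):
        """Gaussian (RBF) kernel; the kernel choice is an illustrative assumption."""
        x, z = np.atleast_1d(x), np.atleast_1d(z)
        return float(np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2)))


    def online_gradient_descent(stream, step=lambda t: 1.0 / np.sqrt(t)):
        """Unregularized online gradient descent for least squares in an RKHS.

        The iterate f_t is stored as a kernel expansion f_t = sum_i a_i K(x_i, .),
        updated by the functional gradient (f_t(x_t) - y_t) K(x_t, .) with no
        regularization term.  Returns (centers, last_coeffs, avg_coeffs), where
        avg_coeffs are the coefficients of the averaged iterate (1/T) sum_t f_t.
        """
        centers, coeffs = [], []
        for t, (x_t, y_t) in enumerate(stream, start=1):
            # Evaluate the current iterate f_t at the new point x_t.
            f_xt = sum(a * gaussian_kernel(c, x_t) for c, a in zip(centers, coeffs))
            # The gradient step adds one new centre with a scalar coefficient.
            centers.append(x_t)
            coeffs.append(-step(t) * (f_xt - y_t))
        T = len(coeffs)
        # Coefficient a_t first appears in f_{t+1}, so it occurs in (T - t) of the
        # first T iterates; averaging f_1, ..., f_T just rescales each coefficient.
        avg_coeffs = [a * (T - t) / T for t, a in enumerate(coeffs, start=1)]
        return centers, coeffs, avg_coeffs


    # Toy usage: learn sin(x) from a noisy stream of 200 examples.
    rng = np.random.default_rng(0)
    xs = rng.uniform(-3, 3, size=200)
    stream = [(x, np.sin(x) + 0.1 * rng.standard_normal()) for x in xs]
    centers, last_c, avg_c = online_gradient_descent(stream)
    predict = lambda x, c: sum(a * gaussian_kernel(xc, x) for xc, a in zip(centers, c))
    print(predict(1.0, avg_c), np.sin(1.0))

    The expansion grows by one kernel centre per example, which matches the O(t) per-step cost typical of unregularized kernel methods; the averaged iterate needs no extra storage because it shares the same centres as the last iterate.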