754 research outputs found

    A unified variance-reduced accelerated gradient method for convex optimization

    We propose a novel randomized incremental gradient algorithm, the VAriance-Reduced Accelerated Gradient (Varag) method, for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear convergence. It also attains an optimal linear rate of convergence for a wide class of problems that satisfy only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems. Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
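    As a rough illustration of the ingredients named above (a variance-reduced gradient estimator inside an accelerated update), here is a minimal Python sketch on a hypothetical least-squares finite sum. The problem data, the names grad_i, full_grad, and vr_accel_sketch, and the fixed eta and alpha values are illustrative assumptions; the paper's unified, condition-number-adaptive step-size policy is not reproduced here.

        import numpy as np

        # Hypothetical finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (a_i @ x - b_i)**2
        rng = np.random.default_rng(0)
        n, d = 200, 10
        A = rng.standard_normal((n, d))
        b = rng.standard_normal(n)
        L = float(np.max(np.sum(A * A, axis=1)))  # smoothness bound for each component f_i

        def grad_i(x, i):
            # Gradient of the i-th component f_i(x) = 0.5 * (a_i @ x - b_i)**2
            return (A[i] @ x - b[i]) * A[i]

        def full_grad(x):
            # Gradient of the full average f(x)
            return A.T @ (A @ x - b) / n

        def vr_accel_sketch(x0, epochs=30, eta=None, alpha=0.3):
            # SVRG-style variance-reduced estimator inside a fixed-momentum update.
            # The fixed eta and alpha are illustrative assumptions, not Varag's
            # unified step-size policy.
            eta = eta if eta is not None else 1.0 / (5.0 * L)
            x_prev = x0.copy()
            y = x0.copy()
            x_tilde = x0.copy()                          # snapshot point
            for _ in range(epochs):
                mu = full_grad(x_tilde)                  # full gradient at the snapshot
                for _ in range(n):
                    i = rng.integers(n)
                    # Unbiased, variance-reduced gradient estimator at y
                    g = grad_i(y, i) - grad_i(x_tilde, i) + mu
                    x_new = y - eta * g                  # gradient step
                    y = x_new + alpha * (x_new - x_prev) # momentum extrapolation
                    x_prev = x_new
                x_tilde = x_prev                         # refresh the snapshot each epoch
            return x_prev

        x_hat = vr_accel_sketch(np.zeros(d))
        print("||grad f(x_hat)|| =", np.linalg.norm(full_grad(x_hat)))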

    Light sterile neutrinos and lepton-number-violating kaon decays in effective field theory

    We investigate lepton-number-violating decays $K^\mp \rightarrow \pi^\pm l^\mp l^\mp$ in the presence of sterile neutrinos. We consider minimal interactions with Standard-Model fields through Yukawa couplings as well as higher-dimensional operators in the framework of the neutrino-extended Standard Model Effective Field Theory. We use $SU(3)$ chiral perturbation theory to match to mesonic interactions and compute the lepton-number-violating decay rate in terms of the neutrino masses and the Wilson coefficients of the higher-dimensional operators. For neutrinos that can be produced on-shell, the decay rates are highly enhanced, and the higher-dimensional interactions can be probed up to very high scales, around $\mathcal{O}(30)$ TeV. Comment: 31 pages, 6 figures
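    As a generic reminder of why on-shell production enhances the rate, the resonant contribution of a sterile neutrino $N$ can be written schematically with the standard narrow-width factorization; this is an illustrative textbook relation, not a result quoted from the paper, which computes the full rates in terms of neutrino masses and Wilson coefficients.

        % Narrow-width factorization for an on-shell sterile neutrino N
        % (generic illustration; the detailed rates are computed in the paper):
        \[
          \Gamma\bigl(K^- \to \pi^+ l^- l^-\bigr)
            \;\simeq\; \Gamma\bigl(K^- \to l^- N\bigr)\,
                       \mathrm{Br}\bigl(N \to \pi^+ l^-\bigr),
          \qquad m_l + m_\pi < m_N < m_K - m_l,
          \quad \Gamma_N \ll m_N .
        \]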
    • …