24 research outputs found

    One-loop expressions for $h \rightarrow l\bar{l}\gamma$ in Higgs extensions of the Standard Model

    A systematic study of one-loop contributions to the decay channels $h \rightarrow l\bar{l}\gamma$ with $l = \nu_{e,\mu,\tau}, e, \mu$, performed in Higgs-extended versions of the Standard Model, is presented in the 't Hooft-Veltman gauge. Analytic formulas for the one-loop form factors are expressed in terms of logarithm and dilogarithm functions. As a result, these form factors can be reduced to those of the loop-induced decay processes $h \rightarrow \gamma\gamma, Z\gamma$, confirming not only previous results obtained with different approaches but also the close relations among the three kinds of loop-induced Higgs decays. For the phenomenological study, we focus on two observables: the enhancement factors, defined as the ratios of the decay rates in the Higgs-extended versions to those in the Standard Model, and the forward-backward asymmetries of the fermions, both of which can be used to search for Higgs extensions of the SM. We show that direct effects of mixing between neutral Higgs bosons and indirect contributions of charged Higgs boson exchanges can be probed at future colliders.
    Comment: 39 pages, 9 figures, 11 tables of data
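
    For reference, the two observables have the standard schematic form below; this is a minimal sketch using conventional definitions, where $\theta$ is the angle between the final-state lepton and a reference axis, and the labels NP (the Higgs-extended model) and SM are our notation, not taken from the paper:

    $$\mu_{l\bar{l}\gamma} = \frac{\Gamma_{\mathrm{NP}}(h \rightarrow l\bar{l}\gamma)}{\Gamma_{\mathrm{SM}}(h \rightarrow l\bar{l}\gamma)}, \qquad A_{\mathrm{FB}} = \frac{1}{\Gamma}\left( \int_{0}^{1} - \int_{-1}^{0} \right) \frac{d\Gamma}{d\cos\theta}\, d\cos\theta .$$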

    Finite-Sum Smooth Optimization with SARAH

    The total complexity (measured as the total number of gradient computations) of a stochastic first-order optimization algorithm that finds a first-order stationary point of a finite-sum smooth nonconvex objective function $F(w)=\frac{1}{n}\sum_{i=1}^{n} f_i(w)$ has been proven to be at least $\Omega(\sqrt{n}/\epsilon)$ for $n \leq \mathcal{O}(\epsilon^{-2})$, where $\epsilon$ denotes the attained accuracy $\mathbb{E}[\|\nabla F(\tilde{w})\|^2] \leq \epsilon$ for the output approximation $\tilde{w}$ (Fang et al., 2018). In this paper, we provide a convergence analysis for a slightly modified version of the SARAH algorithm (Nguyen et al., 2017a;b) and achieve a total complexity that matches the worst-case lower bound of (Fang et al., 2018) up to a constant factor when $n \leq \mathcal{O}(\epsilon^{-2})$ for nonconvex problems. For convex optimization, we propose SARAH++, with sublinear convergence for general convex problems and linear convergence for strongly convex problems; we also provide a practical version for which numerical experiments on various datasets show improved performance.
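
    For orientation, the recursion at the heart of SARAH is short enough to state in full. The sketch below implements the original estimator of Nguyen et al. (2017), not the modified variant analyzed in the paper; the function name, step size, loop lengths, and least-squares example are illustrative assumptions.

        import numpy as np

        def sarah(grad_i, n, w0, eta=0.05, m=50, outer=20, seed=0):
            """Minimal SARAH sketch: grad_i(i, w) returns the gradient of the
            i-th component f_i at w, and n is the number of components."""
            rng = np.random.default_rng(seed)
            w = w0.copy()
            for _ in range(outer):
                # v_0: full gradient at the start of each inner loop
                v = np.mean([grad_i(i, w) for i in range(n)], axis=0)
                w_prev, w = w, w - eta * v
                for _ in range(m):
                    i = rng.integers(n)
                    # SARAH recursion: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}
                    v = grad_i(i, w) - grad_i(i, w_prev) + v
                    w_prev, w = w, w - eta * v
            return w

        # Example: least squares with f_i(w) = 0.5 * (A[i] @ w - b[i])**2
        A = np.random.default_rng(1).normal(size=(100, 5))
        b = A @ np.ones(5)
        w_hat = sarah(lambda i, w: A[i] * (A[i] @ w - b[i]), n=100, w0=np.zeros(5))

    Unlike plain SGD, each inner update reuses the stochastic gradient at the previous iterate, so the estimator's variance shrinks as the iterates stabilize; this recursive reuse is what drives the $\mathcal{O}(\sqrt{n}/\epsilon)$ total complexity discussed above.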
