One-loop expressions for in Higgs extensions of the Standard Model
A systematic study of one-loop contributions to the decay channels
with , performed in
Higgs-extended versions of the Standard Model, is presented in the 't
Hooft–Veltman gauge. Analytic formulas for the one-loop form factors are
expressed in terms of logarithm and dilogarithm functions. As a result, these
form factors can be reduced to those relating to the loop-induced decay processes
, confirming not only previous results
obtained using different approaches but also the close relations between the
three kinds of loop-induced Higgs decay rates. For the phenomenological study,
we focus on two observables, namely the enhancement factors, defined as ratios
of the decay rates calculated in the Higgs-extended versions and in the
Standard Model, and the forward-backward asymmetries of fermions, which can be
used to search for Higgs extensions of the SM. We show that direct effects of
mixing between neutral Higgs bosons and indirect contributions of charged Higgs
boson exchanges can be probed at future colliders.
Comment: 39 pages, 9 Figures, 11 Tables of data
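The two observables named above can be written schematically as follows; the symbols ($XY$ for the elided final state, $f$ for the fermion) are illustrative placeholders, not notation taken from the paper:

```latex
% Enhancement factor: ratio of partial decay widths in the
% Higgs-extended model versus the Standard Model (schematic)
\mu_{XY} = \frac{\Gamma\left(h \to XY\right)_{\text{ext}}}
                {\Gamma\left(h \to XY\right)_{\text{SM}}},
\qquad
% Forward-backward asymmetry of the final-state fermions (schematic)
A_{FB}^{f} = \frac{\sigma_{F} - \sigma_{B}}{\sigma_{F} + \sigma_{B}},
```

where $\sigma_{F}$ ($\sigma_{B}$) denotes the cross section with the fermion emitted into the forward (backward) hemisphere.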
Finite-Sum Smooth Optimization with SARAH
The total complexity (measured as the total number of gradient computations)
of a stochastic first-order optimization algorithm that finds a first-order
stationary point of a finite-sum smooth nonconvex objective function
$F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$ has been proven to be at least
$\Omega(\sqrt{n}/\epsilon)$ for $n \leq \mathcal{O}(\epsilon^{-2})$, where
$\epsilon$ denotes the attained accuracy
$\mathbb{E}[\|\nabla F(\tilde{w})\|^2] \leq \epsilon$ of the outputted
approximation $\tilde{w}$ (Fang et al., 2018). In this paper, we provide a
convergence analysis for a slightly modified version of the SARAH algorithm
(Nguyen et al., 2017a;b) and achieve a total complexity that matches the
lower-bound worst-case complexity of (Fang et al., 2018) up to a constant
factor when $n \leq \mathcal{O}(\epsilon^{-2})$ for nonconvex problems. For
convex optimization, we propose SARAH++, with sublinear convergence for general
convex problems and linear convergence for strongly convex problems; we also
provide a practical version for which numerical experiments on various datasets
show improved performance.
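The core of SARAH (Nguyen et al., 2017) is a recursive stochastic gradient estimator, $v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}$, reset with a full gradient at the start of each outer loop. Below is a minimal NumPy sketch on a toy least-squares finite sum; the step size, loop lengths, and problem data are illustrative choices, not parameters from the paper:

```python
import numpy as np

def sarah(grad_full, grad_i, w0, n, eta=0.05, inner_steps=50, outer_loops=10):
    """Minimal SARAH sketch with the recursive gradient estimator."""
    rng = np.random.default_rng(0)
    w = w0.copy()
    for _ in range(outer_loops):
        w_prev = w.copy()
        v = grad_full(w)          # reset: full gradient at the snapshot
        w = w - eta * v
        for _ in range(inner_steps):
            i = rng.integers(n)
            # recursive estimator: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - eta * v
    return w

# Toy finite sum: F(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2
rng = np.random.default_rng(1)
n, d = 100, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)
grad_full = lambda w: A.T @ (A @ w - b) / n
grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])

w_hat = sarah(grad_full, grad_i, np.zeros(d), n)
print(np.linalg.norm(grad_full(w_hat)))  # gradient norm shrinks toward zero
```

The inner loop never recomputes the full gradient, which is what yields the favorable total gradient-computation count the abstract refers to.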