Strange Quark Mass from the Invariant Mass Distribution of Cabibbo-Suppressed Tau Decays
Quark mass corrections to the tau hadronic width play a significant role only
for the strange quark, hence providing a method for determining its mass. The
experimental input is the vector plus axial-vector strange spectral function
derived from a complete study of tau decays into strange hadronic final states
performed by ALEPH. New results on strange decay modes from other experiments
are also incorporated. The present analysis determines the strange quark mass
at the Mtau mass scale using moments of the spectral function. Justified
theoretical constraints are applied to the nonperturbative components and
careful attention is paid to the treatment of the perturbative expansions of
the moments which exhibit convergence problems. The result obtained,
m_s(Mtau^2) = (120 +- 11_exp +- 8_Vus +- 19_th) MeV = (120^+21_-26) MeV, is
stable over the scale from Mtau down to about 1.4 GeV. Evolving this result to
customary scales yields m_s(1 GeV^2) = (160^+28_-35) MeV and m_s(4 GeV^2) =
(116^+20_-25) MeV.
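The moment analysis mentioned above can be illustrated with a small numerical sketch. This is not the ALEPH analysis: the spectral-function samples, function names, grid, and toy shape below are made up for illustration; only the (k, l) kinematic weighting (1 - s/M_tau^2)^k (s/M_tau^2)^l is taken from the standard definition of tau spectral moments.

```python
# Hypothetical sketch: spectral moments of the kind used in the m_s analysis.
# The (k, l) moment weights the spectral function v(s) by the kinematic factor
# (1 - s/M_tau^2)^k * (s/M_tau^2)^l and integrates up to s = M_tau^2.
# All numerical inputs here are fabricated for illustration.

M_TAU2 = 1.777 ** 2  # tau mass squared in GeV^2

def moment(s_vals, v_vals, k, l, m2=M_TAU2):
    """Trapezoidal estimate of the (k, l) spectral moment, normalized by m2."""
    w = [(1 - s / m2) ** k * (s / m2) ** l * v for s, v in zip(s_vals, v_vals)]
    total = 0.0
    for i in range(len(s_vals) - 1):
        total += 0.5 * (w[i] + w[i + 1]) * (s_vals[i + 1] - s_vals[i])
    return total / m2

# Toy spectral function on a uniform grid in s (GeV^2).
n = 200
s_grid = [i * M_TAU2 / (n - 1) for i in range(n)]
v_grid = [s * (1 - s / M_TAU2) for s in s_grid]  # fabricated shape

print(moment(s_grid, v_grid, 2, 0))
```

For this toy v(s) the (2, 0) moment has the closed form M_tau^2 / 20, which makes the quadrature easy to sanity-check.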
Multiscale Decompositions and Optimization
In this paper, the following type of Tikhonov regularization problem will be
systematically studied: [(u_t,v_t):=\argmin_{u+v=f} \{\|v\|_X+t\|u\|_Y\},] where
Y is a smooth space such as a \BV space or a Sobolev space and X is the space
in which we measure distortion. Examples of the above problem occur in
denoising in image processing, in numerically treating inverse problems, and in
the sparse recovery problem of compressed sensing. It is also at the heart of
interpolation of linear operators by the real method of interpolation. We shall
characterize the minimizing pair (u_t,v_t) for
(X,Y)=(L_2(\Omega),\BV(\Omega)) as a primary example and generalize Yves
Meyer's result in [11] and Antonin Chambolle's result in [6]. After that, the
following multiscale decomposition scheme will be studied:
[u_{k+1}:=\argmin_{u\in \BV(\Omega)\cap L_2(\Omega)}
\{\tfrac{1}{2}\|f-u\|^2_{L_2}+t_{k}\|u-u_k\|_{\BV}\},] where (t_k) is a decreasing
sequence of scale parameters and \Omega is a bounded
Lipschitz domain in \mathbb{R}^d. This method was introduced by Eitan Tadmor et al.
and we will improve the convergence result in \cite{Tadmor}. Other pairs
of spaces (X,Y) will also be
mentioned. In the end, the numerical implementation for
(X,Y)=(L_2(\Omega),\BV(\Omega)) and the corresponding convergence results
will be given.
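A minimal sketch of the hierarchical scheme above, under simplifying assumptions that are not the paper's: the BV seminorm is replaced by a coordinate-wise l1 penalty, so each minimization step has an exact closed-form solution (soft thresholding), and the dyadic schedule t_k = t0 * 2^{-k} is an illustrative choice of scale parameters.

```python
# Multiscale-decomposition sketch. With the l1 stand-in for BV, each step
#   u_{k+1} = argmin_u 1/2 * ||f - u||_2^2 + t_k * ||u - u_k||_1
# has the closed-form solution u_{k+1} = u_k + shrink(f - u_k, t_k).

def shrink(x, t):
    """Soft-thresholding: the proximal map of t * ||.||_1, applied pointwise."""
    return [max(abs(xi) - t, 0.0) * (1 if xi >= 0 else -1) for xi in x]

def multiscale_decompose(f, t0=1.0, steps=8):
    """Return the successive approximants u_0, u_1, ..., u_steps."""
    u = [0.0] * len(f)
    approximants = [u]
    t = t0
    for _ in range(steps):
        residual = [fi - ui for fi, ui in zip(f, u)]
        u = [ui + di for ui, di in zip(u, shrink(residual, t))]
        approximants.append(u)
        t *= 0.5  # halve the scale parameter: each pass captures finer detail
    return approximants

f = [2.0, -0.5, 0.1, 1.3]
us = multiscale_decompose(f)
print(us[-1])  # close to f: the residual shrinks as the t_k decrease
```

The successive corrections u_{k+1} - u_k play the role of "scales": large features are captured at large t_k and fine detail at small t_k, mirroring the convergence statement sum_k (u_{k+1} - u_k) -> f studied in the paper.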
Limited-Memory Greedy Quasi-Newton Method with Non-asymptotic Superlinear Convergence Rate
Non-asymptotic convergence analysis of quasi-Newton methods has gained
attention with a landmark result establishing an explicit superlinear
convergence rate. The methods that obtain this rate, however, exhibit a
well-known drawback: they require the storage of the previous Hessian
approximation matrix or instead storing all past curvature information to form
the current Hessian inverse approximation. Limited-memory variants of
quasi-Newton methods such as the celebrated L-BFGS alleviate this issue by
leveraging a limited window of past curvature information to construct the
Hessian inverse approximation. As a result, their per-iteration complexity and
storage requirement is O(τd), where τ is the size of the window
and d is the problem dimension, reducing the O(d^2) computational cost and
memory requirement of standard quasi-Newton methods. However, to the best of
our knowledge, there is no result showing a non-asymptotic superlinear
convergence rate for any limited-memory quasi-Newton method. In this work, we
close this gap by presenting a limited-memory greedy BFGS (LG-BFGS) method that
achieves an explicit non-asymptotic superlinear rate. We incorporate
displacement aggregation, i.e., decorrelating projection, in post-processing
gradient variations, together with a basis vector selection scheme on variable
variations, which greedily maximizes a progress measure of the Hessian estimate
to the true Hessian. Their combination allows past curvature information to
remain in a sparse subspace while yielding a valid representation of the full
history. Interestingly, our established non-asymptotic superlinear convergence
rate demonstrates a trade-off between the convergence speed and memory
requirement, which to our knowledge, is the first of its kind. Numerical
results corroborate our theoretical findings and demonstrate the effectiveness
of our method.
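As context for the limited-memory mechanism the abstract builds on, here is a minimal sketch of the classical L-BFGS two-loop recursion, not the paper's LG-BFGS with displacement aggregation; the quadratic objective, memory size, and exact line search are illustrative choices.

```python
# Classical L-BFGS two-loop recursion: only the last m curvature pairs
# (s_i, y_i) are stored, so applying the inverse-Hessian estimate costs
# O(m d) instead of the O(d^2) of full BFGS.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def two_loop(grad, pairs):
    """Apply the implicit inverse-Hessian approximation to grad.

    pairs holds (s, y) with s = x_{k+1} - x_k, y = g_{k+1} - g_k,
    ordered oldest first.
    """
    q = list(grad)
    alphas = []
    for s, y in reversed(pairs):          # newest to oldest
        rho = 1.0 / dot(y, s)
        alpha = rho * dot(s, q)
        alphas.append((alpha, rho))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    if pairs:                             # initial scaling H0 = gamma * I
        s, y = pairs[-1]
        gamma = dot(s, y) / dot(y, y)
        q = [gamma * qi for qi in q]
    for (s, y), (alpha, rho) in zip(pairs, reversed(alphas)):  # oldest to newest
        beta = rho * dot(y, q)
        q = [qi + (alpha - beta) * si for qi, si in zip(q, s)]
    return q

# Illustrative test problem: minimize f(x) = 1/2 * sum a_i x_i^2.
a = [1.0, 3.0, 10.0]
grad_f = lambda x: [ai * xi for ai, xi in zip(a, x)]

x, m, pairs = [1.0, 1.0, 1.0], 5, []
for _ in range(50):
    g = grad_f(x)
    if dot(g, g) < 1e-20:
        break
    d = two_loop(g, pairs)
    step = dot(g, d) / dot(d, grad_f(d))  # exact line search (f is quadratic)
    x_new = [xi - step * di for xi, di in zip(x, d)]
    s = [xn - xi for xn, xi in zip(x_new, x)]
    y = [gn - gi for gn, gi in zip(grad_f(x_new), g)]
    if dot(s, y) > 1e-12:     # keep the pair only if curvature is positive
        pairs.append((s, y))
        pairs = pairs[-m:]    # limited memory: drop the oldest pair
    x = x_new

print(x)  # near the minimizer [0, 0, 0]
```

The `pairs = pairs[-m:]` line is exactly the window the abstract refers to: the trade-off it establishes is between how large this window is and how fast the superlinear rate kicks in.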