RenewNAT: Renewing Potential Translation for Non-Autoregressive Transformer
Non-autoregressive neural machine translation (NAT) models are proposed to
accelerate the inference process while maintaining relatively high performance.
However, existing NAT models struggle to achieve the desired efficiency-quality
trade-off: fully NAT models offer efficient inference but perform worse than
their autoregressive counterparts, while iterative NAT models reach comparable
quality at the cost of much of the speed advantage. In this paper, we propose
RenewNAT, a flexible framework with high efficiency and effectiveness that
combines the merits of fully and iterative NAT models. RenewNAT first generates
a potential translation and then renews it in a single pass. It achieves
significant performance improvements at the same expense as traditional NAT
models, without introducing additional model parameters or decoding latency.
Experimental results on various translation benchmarks (e.g., 4 WMT tasks) show
that our framework consistently improves the performance of strong fully NAT
methods (e.g., GLAT and DSLP) without additional speed
overhead.

Comment: Accepted by AAAI 2023
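The abstract describes a two-stage flow, generate a potential translation and then renew it, executed in a single decoding pass and within the original parameter budget. The sketch below illustrates one plausible reading in PyTorch: an ordinary NAT decoder stack split into a "potential" half and a "renewing" half. The class name, the layer split, and the feed-back of intermediate tokens through the embedding are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class RenewNATDecoderSketch(nn.Module):
    """Schematic two-stage NAT decoder in a single forward pass.

    One reading of the abstract: the decoder layers are split so the lower
    part produces a potential translation and the upper part renews it,
    reusing the existing parameter budget (no extra layers, one pass).
    All names and the split point are illustrative assumptions."""

    def __init__(self, vocab_size, d_model=512, n_layers=6, split=3):
        super().__init__()
        layer = lambda: nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.lower = nn.ModuleList(layer() for _ in range(split))             # stage 1
        self.upper = nn.ModuleList(layer() for _ in range(n_layers - split))  # stage 2
        self.embed = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, dec_inp, enc_out):
        h = dec_inp
        for layer in self.lower:
            h = layer(h, enc_out)
        # Stage 1: predict the potential translation mid-stack
        # (argmax shown for clarity; training would need a differentiable
        # or glancing-style variant).
        potential = self.proj(h).argmax(-1)
        # Feed the potential tokens back in as states for the renewing stage.
        h = self.embed(potential)
        for layer in self.upper:
            h = layer(h, enc_out)
        # Stage 2: the renewed (final) translation, still one decoding pass.
        return self.proj(h)

# Shape check with dummy tensors:
dec = RenewNATDecoderSketch(vocab_size=1000)
enc_out = torch.randn(2, 7, 512)   # dummy encoder states
dec_inp = torch.randn(2, 9, 512)   # e.g., copied/uniformly-mapped encoder states
logits = dec(dec_inp, enc_out)     # -> (2, 9, 1000)
```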
Halpern iteration of Cesàro means for asymptotically nonexpansive mappings
ABSTRACT: Using a new proof technique that is independent of both the fixed-point approximation property of $T$ ($\lim_{n\to\infty}\|x_n - Tx_n\| = 0$) and the convergence of the Browder-type iteration path ($z_t = tu + (1-t)Tz_t$), the strong convergence of the Halpern iteration $\{x_n\}$ of Cesàro means for asymptotically nonexpansive self-mappings $T$, defined by $x_{n+1} = \alpha_n u + (1-\alpha_n)(n+1)^{-1}\sum_{j=0}^{n} T^j x_n$ for $n \ge 0$, is proved in a uniformly convex Banach space $E$ with a uniformly Gâteaux differentiable norm, whenever $\{\alpha_n\}$ is a real sequence in $(0,1)$ satisfying $\lim_{n\to\infty} b_n/\alpha_n = 0$, $\lim_{n\to\infty} \alpha_n = 0$, and $\sum_{n=0}^{\infty} \alpha_n = \infty$.
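The recursion in the abstract is concrete enough to run numerically. Below is a minimal Python sketch, assuming a concrete nonexpansive map on $\mathbb{R}^2$ (a rotation about the origin) in place of a general Banach-space mapping; the function name `halpern_cesaro` and the choice $\alpha_n = 1/(n+2)$ are illustrative assumptions, not from the paper.

```python
import numpy as np

def halpern_cesaro(T, x0, u, alphas, n_iters):
    """Halpern iteration of Cesaro means:
        x_{n+1} = a_n * u + (1 - a_n) * (n+1)^{-1} * sum_{j=0}^{n} T^j x_n
    T      : the self-mapping (here a function on R^d)
    x0, u  : starting point and anchor point
    alphas : sequence a_n in (0, 1) with a_n -> 0 and sum a_n = infinity"""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        # Cesaro mean of x_n, T x_n, ..., T^n x_n  (n + 1 terms)
        y, mean = x.copy(), np.zeros_like(x)
        for _ in range(n + 1):
            mean += y
            y = T(y)
        mean /= n + 1
        x = alphas[n] * u + (1 - alphas[n]) * mean
    return x

# Example: a rotation of R^2 is nonexpansive with unique fixed point 0,
# so the iterates should approach 0 regardless of the anchor u
# (slowly, as is typical for Halpern-type schemes).
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda v: R @ v
alphas = [1.0 / (n + 2) for n in range(200)]   # in (0, 1), sums to infinity
print(halpern_cesaro(T, [1.0, 0.0], np.array([1.0, 1.0]), alphas, 200))
```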