
    On error-spectrum shaping in state-space digital filters

    A new scheme for shaping the error spectrum in state-space digital filter structures is proposed. The scheme is based on diagonal second-order error feedback and can be applied to any state-space structure of arbitrary order. A method for obtaining noise-optimal state-space structures under fixed error-feedback coefficients, starting from the noise-optimal structures in the absence of error feedback (the Mullis and Roberts structures), is also outlined. The optimization is based on the theory of continuous equivalence for state-space structures.
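    To make the mechanism concrete, here is a minimal NumPy sketch of diagonal second-order error feedback wrapped around a state-space update, assuming a uniform rounding quantizer. The function and parameter names (`filter_with_error_feedback`, `qstep`, the diagonal coefficient vectors `d1`, `d2`) are illustrative, not taken from the paper.

```python
import numpy as np

def filter_with_error_feedback(A, b, c, d, u, d1, d2, qstep=2**-8):
    """State-space filter with diagonal second-order error feedback.

    v[n]   = A x[n] + b u[n] - D1 e[n-1] - D2 e[n-2]
    x[n+1] = Q(v[n]),   e[n] = Q(v[n]) - v[n]

    Q rounds to the nearest multiple of `qstep`; D1 = diag(d1) and
    D2 = diag(d2) are the fixed error-feedback coefficients.
    """
    Q = lambda v: qstep * np.round(v / qstep)    # uniform quantizer
    x = np.zeros(A.shape[0])
    e1 = np.zeros_like(x)                        # e[n-1]
    e2 = np.zeros_like(x)                        # e[n-2]
    y = np.empty(len(u))
    for n, un in enumerate(u):
        v = A @ x + b * un - d1 * e1 - d2 * e2   # pre-quantization state
        xq = Q(v)                                # quantized next state
        e2, e1 = e1, xq - v                      # shift the error history
        x = xq
        y[n] = c @ x + d * un                    # filter output
    return y
```

    Setting `d1 = d2 = 0` recovers the plain quantized filter, which makes it easy to compare output-noise spectra with and without the feedback.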

    Knowledge-based intelligent error feedback in a Spanish ICALL system

    This paper describes the Spanish ICALL system ESPADA, which helps language learners improve their syntactic knowledge. Its two most important components for the learner are a Demonstration Module and an Analysis Module. The Demonstration Module provides animated presentations of selected grammatical information. The Analysis Module parses ill-formed sentences and gives adequate feedback on 28 different error types across different levels of language use (syntax, semantics, agreement). It contains a robust chart-based island parser that combines mal-rules and constraint relaxation to ensure that learner input can be analysed and appropriate error feedback generated.
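    The mal-rule idea can be illustrated with a toy sketch: rules that deliberately match known ill-formed patterns and carry the feedback a tutor would show. The rules below are hypothetical regex stand-ins, far simpler than ESPADA's chart-based grammar.

```python
import re

# Hypothetical mini mal-rule table (not ESPADA's grammar): each rule
# matches a known ill-formed pattern and carries a feedback message.
MAL_RULES = [
    (re.compile(r"\blos \w+as\b"),
     "Agreement error: masculine article 'los' with a feminine plural noun."),
    (re.compile(r"\blas \w+os\b"),
     "Agreement error: feminine article 'las' with a masculine plural noun."),
]

def diagnose(sentence):
    """Return feedback for every mal-rule the learner's sentence triggers."""
    return [msg for pattern, msg in MAL_RULES if pattern.search(sentence.lower())]

print(diagnose("Me gustan los casas blancas"))
# -> ["Agreement error: masculine article 'los' with a feminine plural noun."]
```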

    Visuomotor Learning Enhanced by Augmenting Instantaneous Trajectory Error Feedback during Reaching

    We studied reach adaptation to a 30° visuomotor rotation to determine whether augmented error feedback can promote faster and more complete motor learning. Four groups of healthy adults reached with their unseen arm to visual targets surrounding a central starting point. A manipulandum tracked hand motion and projected a cursor onto a display immediately above the horizontal plane of movement. For one group, deviations from the ideal movement were amplified with a gain of 2, whereas another group experienced a gain of 3.1. A third group experienced an offset equal to the average error seen during the initial perturbations, while a fourth group served as controls. Learning in the gain-2 and offset groups was nearly twice as fast as in controls. Moreover, the offset group achieved a greater reduction in error. Such error-augmentation techniques may be useful for training novel visuomotor transformations, as required of robotic teleoperators, or in movement rehabilitation of the neurologically impaired.
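    The three feedback manipulations reduce to simple transformations of the displayed cursor. A minimal sketch, assuming a 2-D planar setup; the function name and argument conventions are illustrative, not the authors' code.

```python
import numpy as np

def augmented_cursor(hand_xy, ideal_xy, mode="gain", gain=2.0, offset_xy=None):
    """Map the tracked hand position to the displayed cursor position.

    "gain":    amplify deviations from the ideal trajectory,
               cursor = ideal + gain * (hand - ideal)
    "offset":  shift the feedback by a fixed error vector (e.g. the mean
               error from the initial perturbation trials),
               cursor = hand + offset
    "control": veridical feedback, cursor = hand
    """
    hand = np.asarray(hand_xy, dtype=float)
    ideal = np.asarray(ideal_xy, dtype=float)
    if mode == "gain":
        return ideal + gain * (hand - ideal)
    if mode == "offset":
        return hand + np.asarray(offset_xy, dtype=float)
    return hand  # control group: unaltered feedback
```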

    Momentum Provably Improves Error Feedback!

    Due to the high communication overhead of training machine learning models in a distributed environment, modern algorithms invariably rely on lossy communication compression. However, when left untreated, the errors caused by compression propagate and can lead to severely unstable behavior, including exponential divergence. Almost a decade ago, Seide et al. [2014] proposed an error feedback (EF) mechanism, which we refer to as EF14, as an immensely effective heuristic for mitigating this issue. However, despite steady algorithmic and theoretical advances in the EF field over the last decade, our understanding is far from complete. In this work we address one of the most pressing open issues: in the canonical nonconvex setting, all known variants of EF rely on very large batch sizes to converge, which can be prohibitive in practice. We propose a surprisingly simple fix which removes this issue both theoretically and in practice: applying Polyak's momentum to the latest incarnation of EF due to Richtárik et al. [2021], known as EF21. Our algorithm, which we name EF21-SGDM, improves the communication and sample complexities of previous error feedback algorithms under standard smoothness and bounded-variance assumptions, and does not require any further strong assumptions such as bounded gradient dissimilarity. Moreover, we propose a double-momentum version of our method that improves the complexities even further. Our proof appears to be novel even when compression is removed from the method, and as such, our proof technique is of independent interest in the study of nonconvex stochastic optimization enriched with Polyak's momentum.
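    A minimal NumPy sketch of the EF21-with-momentum idea, using Top-$k$ as the contractive compressor: each worker smooths its stochastic gradient with a Polyak momentum buffer, then sends only a compressed correction to its running estimate. All names and hyperparameters (`ef21_sgdm_step`, `eta`, `k`) are illustrative, not the authors' reference implementation.

```python
import numpy as np

def top_k(v, k):
    """Top-k contractive compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef21_sgdm_step(x, g_list, v_list, stoch_grads, lr=0.01, eta=0.1, k=10):
    """One round of an EF21-style update with Polyak momentum (a sketch).

    x           : current model parameters (held by the server)
    g_list[i]   : worker i's compressed gradient estimate (EF21 state)
    v_list[i]   : worker i's momentum buffer
    stoch_grads : fresh stochastic gradients, one per worker
    """
    for i, grad in enumerate(stoch_grads):
        v_list[i] = (1 - eta) * v_list[i] + eta * grad  # momentum estimator
        c = top_k(v_list[i] - g_list[i], k)             # compress the shift only
        g_list[i] = g_list[i] + c                       # update local EF21 state
    g = np.mean(g_list, axis=0)                         # server-side aggregate
    return x - lr * g                                   # gradient step
```

    Compressing the shift `v_i - g_i` rather than the gradient itself is what keeps the compression error from accumulating; the momentum buffer is what lets the scheme tolerate small batches.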

    Clip21: Error Feedback for Gradient Clipping

    Motivated by the increasing popularity and importance of large-scale training under differential privacy (DP) constraints, we study distributed gradient methods with gradient clipping, i.e., clipping applied to the gradients computed from local information at the nodes. While gradient clipping is an essential tool for injecting formal DP guarantees into gradient-based methods [1], it also induces bias, which causes serious convergence issues specific to the distributed setting. Inspired by recent progress in the error-feedback literature, which focuses on taming the bias/error introduced by communication compression operators such as Top-$k$ [2], and by the mathematical similarities between the clipping operator and contractive compression operators, we design Clip21 -- the first provably effective and practically useful error feedback mechanism for distributed methods with gradient clipping. We prove that our method converges at the same $\mathcal{O}(1/K)$ rate as distributed gradient descent in the smooth nonconvex regime, which improves on the previous best $\mathcal{O}(1/\sqrt{K})$ rate, obtained under significantly stronger assumptions. In practice, our method converges significantly faster than competing methods.
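    The analogy to compression-style error feedback can be sketched directly: treat clipping like a contractive compressor and clip only the difference between each node's fresh gradient and its running estimate. A minimal NumPy sketch under the mechanics described in the abstract; the names (`clip21_step`, `tau`) are illustrative, not the authors' reference code.

```python
import numpy as np

def clip(v, tau):
    """Clipping operator: scale v down so that its norm is at most tau."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else (tau / norm) * v

def clip21_step(x, g_list, grads, lr=0.01, tau=1.0):
    """One round of a Clip21-style update (a sketch).

    Each node clips only the *difference* between its fresh local gradient
    and its running estimate g_i, so the bias of naive per-node clipping
    does not accumulate across rounds.
    """
    for i, grad in enumerate(grads):
        g_list[i] = g_list[i] + clip(grad - g_list[i], tau)  # error feedback
    g = np.mean(g_list, axis=0)                              # server aggregate
    return x - lr * g                                        # gradient step
```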