    Pragmatics and the invisible language of relationships: the case of Korean and KFL learners

    This thesis sits at the intersection of two fast-moving global trends in linguistics research: first, the swiftly growing interest in pragmatics as a field in need of further development, definition, and delineation across global languages with vastly differing sociocultural patterns of behavior, including politeness (Kiaer, 2020b; Taguchi, 2019); and second, unprecedented international attention on the Korean peninsula, which has driven a wave of Korean language study by learners of Korean as a foreign language (KFL learners), with enrollment growth steadily outpacing that of other languages (Pickles, 2018). Together, these two phenomena have made the study of Korean pragmatics and Korean pragmatics education especially relevant. This thesis addresses that need by using a variety of ethnographic methodologies to understand how native Korean speakers perceive their own pragmatic experiences. It then compares Korean speakers’ pragmatic experiences with those of Chinese and Japanese speakers in order to identify the pragmatic features unique to Korean. Finally, it explores the KFL classroom and examines how KFL learners are currently exposed to Korean pragmatics in their studies. Through these methodologies, the thesis finds that Korean speakers are uniquely required by their linguistic circumstances to engage constantly with pragmatic skills, including speech styles and address terms, in order to take part in culturally required relational dynamics, and that current KFL curricula underprepare learners to engage in those relational dynamics using those skills.

    The asymptotic equivalence of fixed heat flux and fixed temperature thermal boundary conditions for rapidly rotating convection

    The influence of fixed temperature and fixed heat flux thermal boundary conditions on rapidly rotating convection in the plane layer geometry is investigated for the case of stress-free mechanical boundary conditions. It is shown that whereas the leading-order system satisfies fixed temperature boundary conditions implicitly, a double boundary layer structure is necessary to satisfy the fixed heat flux thermal boundary conditions. The boundary layers consist of a classical Ekman layer adjacent to the solid boundaries, which adjusts the viscous stresses to zero, and a layer in thermal wind balance just outside the Ekman layer, which adjusts the temperature so that the fixed heat flux thermal boundary conditions are satisfied. The influence of these boundary layers on the interior geostrophically balanced convection is shown to be asymptotically weak, however. Upon defining a simple rescaling of the thermal variables, the leading-order reduced system of governing equations is therefore equivalent for both boundary conditions. These results imply that any horizontal thermal variation along the boundaries that varies on the scale of the convection has no leading-order influence on the interior convection.
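    For orientation, here is a minimal sketch of the two thermal boundary conditions in a standard nondimensional form; the symbols θ for the temperature perturbation and z for the vertical coordinate are notational assumptions, not taken from the paper.

        % Fixed temperature: the boundary values of the temperature
        % perturbation are prescribed.
        \theta = 0 \quad \text{at} \quad z = 0, 1
        % Fixed heat flux: the vertical gradient of the perturbation is
        % prescribed instead, leaving the boundary temperature free to vary.
        \frac{\partial \theta}{\partial z} = 0 \quad \text{at} \quad z = 0, 1

    The double boundary layer described in the abstract reconciles an interior solution that naturally satisfies the first condition with the second.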

    Accelerating Variance-Reduced Stochastic Gradient Methods

    Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have been shown to benefit directly from Nesterov’s acceleration techniques and match the convergence rates of accelerated gradient methods. Such approaches rely on “negative momentum”, a technique for further variance reduction that is generally specific to the SVRG gradient estimator. In this work, we show for the first time that negative momentum is unnecessary for acceleration, and we develop a universal acceleration framework that allows all popular variance-reduced methods to achieve accelerated convergence rates. The constants appearing in these rates, including their dependence on the number of functions n, scale with the mean-squared error and bias of the gradient estimator. In a series of numerical experiments, we demonstrate that versions of SAGA, SVRG, SARAH, and SARGE using our framework significantly outperform their non-accelerated versions and compare favourably with algorithms using negative momentum.
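    As a concrete illustration of the ingredients discussed above, the following sketch combines an SVRG-type variance-reduced gradient estimate with a generic Nesterov-style extrapolation step. The function names, the fixed step size, and the constant momentum parameter beta are assumptions made for illustration; this is not the paper’s actual framework or parameter schedule.

        import numpy as np

        def accelerated_svrg(grad_i, n, x0, step, beta, epochs, inner_iters, seed=0):
            # grad_i(j, x): gradient of the j-th component function at x.
            rng = np.random.default_rng(seed)
            x, x_prev = x0.copy(), x0.copy()
            for _ in range(epochs):
                snapshot = x.copy()
                # Full gradient at the snapshot, recomputed once per epoch.
                mu = np.mean([grad_i(j, snapshot) for j in range(n)], axis=0)
                for _ in range(inner_iters):
                    y = x + beta * (x - x_prev)  # Nesterov-style extrapolation
                    j = rng.integers(n)
                    # SVRG estimate: unbiased, with variance shrinking as the
                    # iterates approach the snapshot.
                    g = grad_i(j, y) - grad_i(j, snapshot) + mu
                    x_prev, x = x, y - step * g
            return x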

    Practical Acceleration of the Condat–Vũ Algorithm

    The Condat–Vũ algorithm is a widely used primal-dual method for optimizing composite objectives of three functions. Several algorithms for optimizing composite objectives of two functions are special cases of Condat–Vũ, including proximal gradient descent (PGD). It is well known that PGD exhibits suboptimal performance: a simple adjustment to PGD accelerates its convergence rate from O(1/T) to O(1/T²) on convex objectives, and this accelerated rate is optimal. In this work, we show that a simple adjustment to the Condat–Vũ algorithm allows it to recover accelerated PGD (APGD) as a special case instead of PGD. We prove that this accelerated Condat–Vũ algorithm achieves optimal convergence rates and significantly outperforms the traditional Condat–Vũ algorithm in regimes where Condat–Vũ approximates the dynamics of PGD. We demonstrate the effectiveness of our approach in various applications in machine learning and computational imaging.
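    For context, here is a minimal sketch of the classical (non-accelerated) Condat–Vũ iteration for minimizing f(x) + g(x) + h(Ax), with f smooth and g, h proximable. The function names and step-size conventions are illustrative assumptions, and convergence requires a standard step-size condition (roughly 1/τ - σ‖A‖² ≥ L/2, where L is the Lipschitz constant of ∇f).

        import numpy as np

        def condat_vu(grad_f, prox_g, prox_h, A, x0, y0, tau, sigma, iters):
            # prox_g(v, t) computes argmin_u g(u) + ||u - v||^2 / (2 t);
            # prox_h follows the same convention.
            x, y = x0.copy(), y0.copy()
            for _ in range(iters):
                # Primal step: forward step on f, backward (prox) step on g.
                x_new = prox_g(x - tau * (grad_f(x) + A.T @ y), tau)
                # Dual step on the conjugate of h, computed from prox_h via
                # the Moreau identity:
                # prox_{sigma h*}(u) = u - sigma * prox_{h/sigma}(u / sigma).
                u = y + sigma * (A @ (2 * x_new - x))
                y = u - sigma * prox_h(u / sigma, 1.0 / sigma)
                x = x_new
            return x, y

    Setting A = 0 and h = 0 collapses the iteration to PGD, which is the special case the paper’s adjustment upgrades to APGD.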

    A stochastic proximal alternating method for non-smooth non-convex optimization

    We introduce SPRING, a novel stochastic proximal alternating linearized minimization algorithm for solving a class of non-smooth, non-convex optimization problems. Large-scale imaging problems are becoming increasingly prevalent due to advances in data acquisition and computational capabilities, and, motivated by the success of stochastic optimization methods, SPRING is a stochastic variant of the proximal alternating linearized minimization (PALM) algorithm (Bolte et al., 2014). We provide global convergence guarantees, demonstrating that the proposed method with variance-reduced stochastic gradient estimators, such as SAGA (Defazio et al., 2014) and SARAH (Nguyen et al., 2017), achieves state-of-the-art oracle complexities. We also demonstrate the efficacy of our algorithm via several numerical examples, including sparse non-negative matrix factorization, sparse principal component analysis, and blind image deconvolution.
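    To make the structure concrete, here is a schematic sketch of a stochastic proximal alternating scheme of the kind the abstract describes: alternating proximal-linearized updates on two blocks, with stochastic estimates of the partial gradients of the smooth coupling term H(x, y). All names, signatures, and the fixed step sizes are illustrative assumptions rather than the paper’s algorithm or notation.

        import numpy as np

        def stochastic_palm_sketch(prox_f, prox_g, grad_x_est, grad_y_est,
                                   x0, y0, step_x, step_y, iters):
            # prox_f(v, t) is the proximal operator of f with step t; prox_g
            # follows the same convention. grad_x_est and grad_y_est return
            # stochastic estimates (e.g. SAGA- or SARAH-type) of the partial
            # gradients of the smooth coupling term H.
            x, y = x0.copy(), y0.copy()
            for _ in range(iters):
                # Block 1: linearize H in x around (x, y), then prox of f.
                x = prox_f(x - step_x * grad_x_est(x, y), step_x)
                # Block 2: linearize H in y around the updated x, then prox of g.
                y = prox_g(y - step_y * grad_y_est(x, y), step_y)
            return x, y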