
    Why Chinese Neo-Confucian Women Made a Fetish of Small Feet

    This paper explores the source of the traditional practice of Chinese footbinding, which first gained popularity at the end of the Tang dynasty and continued to flourish until the last half of the twentieth century.[1] Derived initially from court concubines whose feet were formed to represent an attractive “deer lady” from an Indian tale, footbinding became a widespread symbol among the Chinese of obedience, pecuniary reputability, and Confucianism, among other things.[2],[3] Drawing on the analyses of such scholars as Beverly Jackson, Valerie Steele, and John S. Major, as well as historical personal accounts, the article concludes that the underlying goal of the footbinding practice was to raise a girl’s chances of being married into a family of the highest possible social class.
    [1] Valerie Steele and John S. Major, China Chic: East Meets West (Singapore: Yale University Press, 1999), 37.
    [2] Wang Ping, Aching for Beauty: Footbinding in China (University of Minnesota Press, 2000), 12.
    [3] Harold Koda, Extreme Beauty: The Body Transformed (New York: The Metropolitan Museum of Art; New Haven and London: Yale University Press, 2001), 152.

    Listening to the World Improves Speech Command Recognition

    We study transfer learning in convolutional network architectures applied to the task of recognizing audio, such as environmental sound events and speech commands. Our key finding is that not only is it possible to transfer representations from an unrelated task like environmental sound classification to a voice-focused task like speech command recognition, but that doing so improves accuracy significantly. We also investigate the effect of increased model capacity for transfer learning on audio, first validating the known result from computer vision that deeper networks achieve better accuracy, on two audio datasets: UrbanSound8K and the newly released Google Speech Commands dataset. We then propose a simple multiscale input representation using dilated convolutions and show that it is able to aggregate larger contexts and increase classification performance. Further, models trained using a combination of transfer learning and multiscale input representations need only 40% of the training data to achieve accuracies similar to a freshly trained model with 100% of the training data. Finally, we demonstrate a positive interaction effect between the multiscale input and transfer learning, making a case for the joint application of the two techniques. Comment: 8 pages.
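    The abstract does not include the architecture itself; as a rough illustration of why dilation lets the same small kernel aggregate a larger context, here is a minimal pure-Python sketch of a 1-D dilated convolution. The function name, kernel, and toy signal are invented for this sketch and are not from the paper.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution whose kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field
    without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field in samples
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

x = [float(i) for i in range(16)]
kernel = [1.0, 1.0, 1.0]

y1 = dilated_conv1d(x, kernel, dilation=1)  # 3 taps cover 3 samples
y4 = dilated_conv1d(x, kernel, dilation=4)  # same 3 taps cover 9 samples
```

    A multiscale input in this spirit would run several such branches with different dilation rates over the same signal and concatenate their outputs, so each layer sees both fine and coarse temporal structure.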

    Four particle exchange in solid He-3

    Calculations which demonstrate a physically important four-atom exchange process in body-centered cubic He crystal, and thus an important four-spin term in the exchange Hamiltonian, are discussed. A simple mean-field analysis of this Hamiltonian appears to account for a number of the perplexing properties of body-centered cubic He crystal. It is suggested that an understanding of other properties may require treatment of the exact four-spin term.

    Adaptive Bound Optimization for Online Convex Optimization

    We introduce a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far. This is in contrast to previous algorithms that use a fixed regularization function such as L2-squared, and modify it only via a single time-dependent parameter. Our algorithm's regret bounds are worst-case optimal, and for certain realistic classes of loss functions they are much better than existing bounds. These bounds are problem-dependent, which means they can exploit the structure of the actual problem instance. Critically, however, our algorithm does not need to know this structure in advance. Rather, we prove competitive guarantees that show the algorithm provides a bound within a constant factor of the best possible bound (of a certain functional form) in hindsight. Comment: Updated to match the final COLT version.
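    The abstract does not spell out the update rule. As a hedged illustration of the core idea of building the regularizer from observed gradients, here is a minimal diagonal-adaptive (AdaGrad-style) update in plain Python; the per-coordinate accumulated squared gradients play the role of an adaptively growing regularization. The function name and the toy quadratic losses are invented for this sketch.

```python
import math

def adaptive_step(w, g, s, lr=1.0, eps=1e-8):
    """One diagonal-adaptive update: each coordinate's effective
    regularization grows with its accumulated squared gradients,
    so frequently large coordinates get smaller steps."""
    for i in range(len(w)):
        s[i] += g[i] * g[i]
        w[i] -= lr * g[i] / (math.sqrt(s[i]) + eps)
    return w, s

# Toy online run: quadratic losses (w[i] - target[i])^2 each round.
w, s = [0.0, 0.0], [0.0, 0.0]
target = [1.0, -2.0]
for _ in range(200):
    g = [2.0 * (w[i] - target[i]) for i in range(2)]
    w, s = adaptive_step(w, g, s)
```

    The algorithm in the paper is more general (it chooses among a family of regularization functions with problem-dependent guarantees); this fixed diagonal rule is only the best-known special case of the idea.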

    Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations

    We study algorithms for online linear optimization in Hilbert spaces, focusing on the case where the player is unconstrained. We develop a novel characterization of a large class of minimax algorithms, recovering, and even improving, several previous results as immediate corollaries. Moreover, using our tools, we develop an algorithm that provides a regret bound of $\mathcal{O}\big(U \sqrt{T \log(U \sqrt{T} \log^2 T + 1)}\big)$, where $U$ is the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is known, we derive an algorithm with an optimal regret bound (up to constant factors). For both the known and unknown $T$ cases, a Normal approximation to the conditional value of the game proves to be the key analysis tool. Comment: Proceedings of the 27th Annual Conference on Learning Theory (COLT 2014).