    Autonomy and Its Role in English Language Learning: Practice and Research

    This chapter picks up the discussion, begun in the previous edition of this handbook, of how the concept of autonomy has influenced language education and applied linguistics in recent years. It begins by discussing the philosophical and practical origins of learner autonomy in language education, particularly in English language teaching, and how these have developed over the last 10 years. Key practical initiatives and research findings are reviewed to illuminate how autonomy has been interpreted in relation to learners, teachers, and the learning situation; how it has been linked to or contrasted with other constructs; and how fostering autonomy has been seen as a part of pedagogy. Developments since the earlier edition are discussed regarding metacognition and, in particular, various contextual dimensions of learner autonomy. Other emerging topics are also reviewed, including learner autonomy in the world of digital/social media, learner autonomy in curriculum design and published materials, and the relation of learner autonomy to plurilingual perspectives. The chapter discusses issues in each of these areas, potential strategies for developing autonomy and effective learning, and possible future directions for research and practice.

    Scalable uncertainty for computer vision with functional variational inference

    As Deep Learning continues to yield successful applications in Computer Vision, the ability to quantify all forms of uncertainty is a paramount requirement for its safe and reliable deployment in the real world. In this work, we leverage the formulation of variational inference in function space, where we associate Gaussian Processes (GPs) with both the Bayesian CNN prior and the variational family. Since GPs are fully determined by their mean and covariance functions, we are able to obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture and for any supervised learning task. By leveraging the structure of the induced covariance matrices, we propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks such as depth estimation and semantic segmentation. Additionally, we provide sufficient conditions for constructing regression loss functions whose probabilistic counterparts are compatible with aleatoric uncertainty quantification.
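    The abstract's last point concerns regression losses with probabilistic counterparts suited to aleatoric uncertainty. One standard instance of such a loss (not necessarily the authors' exact construction) is the heteroscedastic Gaussian negative log-likelihood, where the network predicts a per-sample mean and log-variance; the sketch below, with hypothetical toy values, illustrates the idea:

    ```python
    import numpy as np

    def gaussian_nll(y, mu, log_var):
        # Heteroscedastic Gaussian negative log-likelihood (up to a constant):
        # each prediction carries its own variance, so squared errors are
        # down-weighted where the predicted variance is high, while the
        # log-variance term penalises claiming uncertainty everywhere.
        return 0.5 * np.mean(np.exp(-log_var) * (y - mu) ** 2 + log_var)

    # Toy example: two predictions with different predicted confidence.
    y = np.array([1.0, 2.0])
    mu = np.array([1.1, 1.5])
    log_var = np.array([-2.0, 1.0])  # low vs. high predicted variance
    loss = gaussian_nll(y, mu, log_var)
    ```

    Minimising this loss jointly fits the mean and a data-dependent (aleatoric) variance, which is what makes the probabilistic counterpart of the squared-error loss compatible with aleatoric uncertainty quantification.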

    Review of Literature
