
    Students’ Evolving Meaning About Tangent Line with the Mediation of a Dynamic Geometry Environment and an Instructional Example Space

    In this paper I report a lengthy episode from a teaching experiment in which fifteen Year 12 Greek students negotiated their definitions of the tangent line to a function graph. The experiment was designed to introduce students to the notion of derivative and to the general case of the tangent to a function graph. Its design was based on previous research on students’ perspectives on tangency, especially in their transition from Geometry to Analysis. In the experiment an instructional example space of functions was used in an electronic environment combining Dynamic Geometry software with Function Grapher tools. Following the Vygotskian approach, according to which students’ knowledge develops in specific social and cultural contexts, students’ construction of the meaning of tangent line was observed in the classroom throughout the experiment. The analysis of the classroom data collected during the experiment focused on the evolution of students’ personal meanings about the tangent line to a function graph in relation to: the electronic environment; the pre-prepared as well as spontaneous examples; students’ engagement in classroom discussion; and the role of the researcher as teacher. The analysis indicated that the evolution of students’ meanings towards a more sophisticated understanding of tangency was not linear. It was also interrelated with the evolution of the meanings they held about the inscriptions in the electronic environment, the instructional example space, the classroom discussion, and the role of the teacher.
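
    The mathematical object at the centre of the episode, the tangent line as the limiting position of secant lines, can be illustrated numerically. A minimal sketch (the function f and the point of tangency are illustrative choices, not taken from the study):

```python
def f(x):
    return x ** 2  # illustrative function, not from the study

a = 1.0  # point of tangency
for h in [1.0, 0.1, 0.01, 0.001]:
    secant_slope = (f(a + h) - f(a)) / h
    print(f"h={h:g}: secant slope = {secant_slope:.4f}")

# As h shrinks, the secant slopes approach f'(1) = 2,
# the slope of the tangent line y = 2x - 1.
```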

    Using theoretical-computational conflicts to enrich the concept image of derivative

    Recent literature has pointed out pedagogical obstacles associated with the use of computational environments in the learning of mathematics. In this paper, we focus on the pedagogical role of the computer's inherent limitations in the development of learners' concept images of derivative. In particular, we discuss how the approach to this concept can be designed to turn those limitations into a positive resource for enriching concept images. We present results of a case study with six undergraduate students in Brazil, dealing with situations of theoretical-computational conflict.
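
    A classic example of such a theoretical-computational conflict is finite-difference approximation of the derivative: theoretically, a smaller step h always gives a better approximation, but in floating-point arithmetic the error falls and then rises again as rounding error dominates. A minimal sketch (not taken from the paper's case study):

```python
import math

def forward_difference(f, x, h):
    """Finite-difference approximation of f'(x); exact only in the limit h -> 0."""
    return (f(x + h) - f(x)) / h

# Approximate d/dx sin(x) at x = 1, whose true value is cos(1).
true_value = math.cos(1.0)
for exp in (2, 5, 8, 11, 14, 16):
    h = 10.0 ** -exp
    err = abs(forward_difference(math.sin, 1.0, h) - true_value)
    print(f"h = 1e-{exp:02d}: error = {err:.2e}")

# The error decreases down to roughly h = 1e-8, then grows again:
# for very small h, rounding error in f(x + h) - f(x) dominates,
# and at h = 1e-16 the computed difference collapses to zero.
```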

    Effective Approaches to Attention-based Neural Machine Translation

    An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words, and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures establishes a new state-of-the-art result on the WMT'15 English-to-German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker. (11 pages, 7 figures, EMNLP 2015 camera-ready version, with more training details.)
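
    The distinction between the two classes can be sketched in a few lines: global attention computes a softmax over scores against all source states, while local attention restricts the softmax to a window around an aligned position. The dot-product score, toy vectors, and window parameters below are illustrative, not the paper's trained models:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def global_attention(query, source_states, score):
    """Attend over ALL source states (the global approach)."""
    return softmax([score(query, s) for s in source_states])

def local_attention(query, source_states, score, p, D):
    """Attend only over a window [p - D, p + D] around position p (the local approach)."""
    lo, hi = max(0, p - D), min(len(source_states), p + D + 1)
    window = softmax([score(query, s) for s in source_states[lo:hi]])
    weights = [0.0] * len(source_states)
    for i, w in zip(range(lo, hi), window):
        weights[i] = w
    return weights

# Toy example with a dot-product score; dimensions and values are illustrative.
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
src = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
q = [1.0, 0.5]
print(global_attention(q, src, dot))      # nonzero weight on every source position
print(local_attention(q, src, dot, p=2, D=1))  # zero weight outside the window
```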

    Recurrent Models of Visual Attention

    Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that extracts information from an image or video by adaptively selecting a sequence of regions or locations and processing only the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of built-in translation invariance, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.
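
    The core operation, processing only a selected region rather than the whole image, amounts to extracting a small "glimpse" patch around a chosen location. A minimal sketch with a toy image; in the paper the location would be chosen by a policy trained with reinforcement learning, whereas here it is fixed for illustration:

```python
def glimpse(image, center, size):
    """Extract a square patch of side `size` around `center`, clipped to image bounds.
    This mimics processing only a selected region at high resolution."""
    r, c = center
    half = size // 2
    rows = range(max(0, r - half), min(len(image), r + half + 1))
    cols = range(max(0, c - half), min(len(image[0]), c + half + 1))
    return [[image[i][j] for j in cols] for i in rows]

# Toy 5x5 "image" whose pixel values encode their position.
image = [[5 * i + j for j in range(5)] for i in range(5)]
patch = glimpse(image, center=(2, 2), size=3)
print(patch)  # 3x3 region around the centre; cost is independent of image size
```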