
    Toward a Semiotic Framework for Using Technology in Mathematics Education: The Case of Learning 3D Geometry

    This paper proposes and examines a semiotic framework to inform the use of technology in mathematics education. Semiotics asserts that all cognition is irreducibly triadic, of the nature of a sign, fallible, and thoroughly immersed in a continuing process of interpretation (Halton, 1992). Mathematical meaning-making, or meaningful knowledge construction, is a continuing process of interpretation drawing on multiple semiotic resources: typological, topological, and social-actional. Based on this semiotic framework, an application named VRMath has been developed to facilitate the learning of 3D geometry. VRMath utilises virtual reality (VR) technology and integrates many semiotic resources to form a virtual reality learning environment (VRLE) as well as a mathematical microworld (Edwards, 1995) for learning 3D geometry. The semiotic framework and VRMath are both currently being evaluated and will continue to be re-examined.
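    The abstract does not describe VRMath's command language, but a microworld for 3D geometry in the Logo tradition typically exposes turtle-style commands. The sketch below is a minimal, hypothetical illustration of such a 3D turtle; all names and commands are assumptions for illustration, not VRMath's actual API. The turtle keeps a position plus an orientation frame, and navigates with forward, yaw, pitch, and roll.

    ```python
    import numpy as np

    class Turtle3D:
        """Minimal 3D turtle: a position plus an orthonormal orientation frame."""

        def __init__(self):
            self.pos = np.zeros(3)
            self.heading = np.array([0.0, 0.0, 1.0])  # direction of travel
            self.up = np.array([0.0, 1.0, 0.0])       # which way is "up"
            self.path = [self.pos.copy()]             # vertices visited so far

        def forward(self, distance):
            """Move along the current heading, recording the new vertex."""
            self.pos = self.pos + distance * self.heading
            self.path.append(self.pos.copy())

        def _rotate(self, axis, degrees):
            """Rotate the frame about a unit axis (Rodrigues' formula)."""
            theta = np.radians(degrees)
            k = axis / np.linalg.norm(axis)
            rot = lambda v: (v * np.cos(theta) + np.cross(k, v) * np.sin(theta)
                             + k * np.dot(k, v) * (1.0 - np.cos(theta)))
            self.heading, self.up = rot(self.heading), rot(self.up)

        def yaw(self, degrees):    # turn left/right about the up vector
            self._rotate(self.up, degrees)

        def pitch(self, degrees):  # tilt about the left vector
            self._rotate(np.cross(self.up, self.heading), degrees)

        def roll(self, degrees):   # spin about the heading vector
            self._rotate(self.heading, degrees)

    # Trace one square face of a cube in 3D space.
    t = Turtle3D()
    for _ in range(4):
        t.forward(1.0)
        t.yaw(90)
    ```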

    Looking away from faces: influence of high-level visual processes on saccade programming

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.
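    In this paradigm, an anti-saccade error is a first saccade toward the stimulus when the cue required looking away from it. The sketch below scores per-category anti-saccade error rates from hypothetical trial records; the schema, field names, and data are illustrative, not the study's actual format.

    ```python
    from collections import defaultdict

    # Hypothetical trial records: (task, category, stimulus_side, first_saccade_side).
    trials = [
        ("anti", "face",  "left",  "left"),   # error: looked at the stimulus
        ("anti", "face",  "right", "left"),   # correct: looked away
        ("anti", "car",   "left",  "right"),  # correct: looked away
        ("pro",  "noise", "right", "right"),  # correct pro-saccade
    ]

    errors, counts = defaultdict(int), defaultdict(int)
    for task, category, stim_side, sacc_side in trials:
        if task != "anti":
            continue
        counts[category] += 1
        if sacc_side == stim_side:  # reflexive capture by the stimulus
            errors[category] += 1

    for category in counts:
        rate = errors[category] / counts[category]
        print(f"{category}: anti-saccade error rate = {rate:.0%}")
    ```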

    Four approaches to teaching programming

    Based on a survey of the literature, four different approaches to teaching introductory programming are identified and described. Examples of the practice of each approach are identified, representing procedural, visual, and object-oriented programming language paradigms. Each approach is then further analysed, identifying advantages and disadvantages for the student and the teacher. The first approach, code analysis, is analogous to reading before writing, that is, recognising the parts and what they mean. It requires learners to analyse and understand existing code prior to producing their own. An alternative is the building blocks approach, analogous to learning vocabulary, nouns and verbs, before constructing sentences. A third approach is identified as simple units, in which learners master solutions to small problems before applying the learned logic to more complex problems. The final approach, full systems, is analogous to learning a foreign language by immersion: learners design a solution to a non-trivial problem, and the programming concepts and language constructs are introduced only when the solution to the problem requires their application. The conclusion asserts that competency in programming cannot be achieved without mastering each of the approaches, at least to some extent. Used in combination, the approaches could provide novice programmers with opportunities to acquire a full range of knowledge, understanding, and skills. Several orders for presenting the approaches in the classroom are proposed and analysed, reflecting the needs of the learners and teachers. Further research is needed to better understand these and other approaches to teaching programming, not in terms of learner outcomes, but in terms of teachers' actions and techniques employed to facilitate the construction of new knowledge by the learners. Effective classroom teaching practices could be informed by further investigation into the effect on progression of different toolset choices and combinations of teaching approaches.

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
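    The PPS graph described above has a direct graph-algorithmic reading: nodes are arm configurations, edges are movements observed to be safe, and a trajectory is a path. Below is a minimal sketch, assuming configurations are tuples of joint angles and using breadth-first search for a shortest safe path; the concrete configurations are made up for illustration.

    ```python
    from collections import deque

    class PPSGraph:
        """Peripersonal-space graph: arm configurations linked by safe moves."""

        def __init__(self):
            self.edges = {}  # configuration -> set of safely reachable neighbours

        def add_safe_move(self, a, b):
            self.edges.setdefault(a, set()).add(b)
            self.edges.setdefault(b, set()).add(a)  # safe moves are reversible

        def safe_trajectory(self, start, goal):
            """Breadth-first search for a shortest path of safe movements."""
            frontier, parent = deque([start]), {start: None}
            while frontier:
                node = frontier.popleft()
                if node == goal:
                    path = []
                    while node is not None:      # walk parents back to start
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                for nxt in self.edges.get(node, ()):
                    if nxt not in parent:
                        parent[nxt] = node
                        frontier.append(nxt)
            return None  # goal not reachable through known-safe moves

    # Illustrative joint-angle configurations (degrees) and safe moves.
    g = PPSGraph()
    g.add_safe_move((0, 0, 0), (10, 0, 0))
    g.add_safe_move((10, 0, 0), (10, 20, 0))
    print(g.safe_trajectory((0, 0, 0), (10, 20, 0)))
    ```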

    Differential recruitment of brain networks following route and cartographic map learning of spatial environments.

    An extensive neuroimaging literature has helped characterize the brain regions involved in navigating a spatial environment. Far less is known, however, about the brain networks involved when learning a spatial layout from a cartographic map. To compare the two means of acquiring a spatial representation, participants learned spatial environments either by directly navigating them or by studying an aerial-view map. While undergoing functional magnetic resonance imaging (fMRI), participants then performed two different tasks to assess knowledge of the spatial environment: a scene- and orientation-dependent perceptual (SOP) pointing task and a judgment of relative direction (JRD) landmark pointing task. We found three brain regions showing significant effects of route vs. map learning during the two tasks. Parahippocampal and retrosplenial cortex showed greater activation following route than map learning during the JRD but not the SOP task, while inferior frontal gyrus showed greater activation following map than route learning during the SOP but not the JRD task. We interpret these results to suggest that parahippocampal and retrosplenial cortex were involved in translating the scene- and orientation-dependent coordinate information acquired during route learning into a landmark-referenced representation, while inferior frontal gyrus played a role in converting the primarily landmark-referenced coordinates acquired during map learning into a scene- and orientation-dependent coordinate system. Together, our results provide novel insight into the different brain networks underlying spatial representations formed during navigation vs. cartographic map learning, and provide additional constraints on theoretical models of the neural basis of human spatial representation.
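    The JRD task has a simple geometric ground truth: imagine standing at one landmark facing a second, and point to a third; the correct response is the signed angle between the facing direction and the direction to the target. A minimal sketch, assuming 2D map coordinates for the landmarks (the coordinates below are illustrative):

    ```python
    import math

    def jrd_angle(standing_at, facing, target):
        """Ground-truth pointing angle for a JRD trial, in degrees.

        Positive means the target lies to the left of the facing direction,
        negative to the right.
        """
        ax, ay = standing_at
        fx, fy = facing[0] - ax, facing[1] - ay   # facing direction
        tx, ty = target[0] - ax, target[1] - ay   # direction to target
        angle = math.atan2(fx * ty - fy * tx,     # cross product -> sign
                           fx * tx + fy * ty)     # dot product -> magnitude
        return math.degrees(angle)

    # Standing at the fountain facing the library, point to the cafe.
    print(jrd_angle(standing_at=(0, 0), facing=(0, 10), target=(10, 0)))  # -90.0
    ```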

    Eye movement patterns during the recognition of three-dimensional objects: Preferential fixation of concave surface curvature minima

    This study used eye movement patterns to examine how high-level shape information is used during 3D object recognition. Eye movements were recorded while observers either actively memorized or passively viewed sets of novel objects, and then during a subsequent recognition memory task. Fixation data were contrasted against different algorithmically generated models of shape analysis based on (1) regions of internal concave surface curvature discontinuity, (2) regions of internal convex surface curvature discontinuity, or (3) the external bounding contour. The results showed a preference for fixation at regions of internal local features during both active memorization and passive viewing, but also for regions of concave surface curvature during the recognition task. These findings provide new evidence supporting the special functional status of local concave discontinuities in recognition and show how studies of eye movement patterns can elucidate shape information processing in human vision.
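    On a polygonal approximation of a contour, the concave/convex distinction that drives these models reduces to the sign of the cross product of successive edge vectors at each vertex. The sketch below is a rough stand-in for that curvature-discontinuity analysis; the contour and vertex ordering are illustrative, not the study's stimuli.

    ```python
    def classify_vertices(contour):
        """Label each vertex of a closed 2D contour (counter-clockwise order)
        as 'convex' or 'concave' from the cross product of the incoming and
        outgoing edge vectors."""
        labels = []
        n = len(contour)
        for i in range(n):
            px, py = contour[i - 1]          # previous vertex
            cx, cy = contour[i]              # current vertex
            nx, ny = contour[(i + 1) % n]    # next vertex
            cross = (cx - px) * (ny - cy) - (cy - py) * (nx - cx)
            labels.append("convex" if cross > 0 else "concave")
        return labels

    # An L-shaped contour: five convex corners and one concave notch at (1, 1).
    L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
    print(list(zip(L_shape, classify_vertices(L_shape))))
    ```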

    Pointing as an Instrumental Gesture: Gaze Representation Through Indication

    The research of the first author was supported by a Fulbright Visiting Scholar Fellowship and was developed in 2012 during a research visit at the University of Memphis.