
    Effects of Embodied Learning and Digital Platform on the Retention of Physics Content: Centripetal Force

    Embodiment theory proposes that knowledge is grounded in sensorimotor systems, and that learning can be facilitated to the extent that lessons can be mapped to these systems. This study with 109 college-age participants addresses two overarching questions: (a) how are immediate and delayed learning gains affected by the degree to which a lesson is embodied, and (b) how do the affordances of three different educational platforms affect immediate and delayed learning? Six 50-min-long lessons on centripetal force were created. The first factor was the degree of embodiment with two levels: (1) low and (2) high. The second factor was platform with three levels: (1) a large-scale “mixed reality” immersive environment containing both digital and hands-on components called SMALLab, (2) an interactive whiteboard system, and (3) a mouse-driven desktop computer. Pre-tests, post-tests, and 1-week follow-up (retention or delayed learning gains) tests were administered, resulting in a 2 × 3 × 3 design. Two knowledge subtests were analyzed, one that relied on more declarative knowledge and one that relied on more generative knowledge, e.g., hand-drawing vectors. Regardless of condition, participants made significant immediate learning gains from pre-test to post-test. There were no significant main effects or interactions due to platform or embodiment on immediate learning. However, from post-test to follow-up the level of embodiment interacted significantly with time, such that participants in the high embodiment conditions performed better on the subtest devoted to generative knowledge questions. We posit that better retention of certain types of knowledge can be seen over time when more embodiment is present during the encoding phase. This sort of retention may not appear on more traditional factual/declarative tests.
Educational technology designers should consider using more sensorimotor feedback and gestural congruency when designing lessons, and opportunities for instructor professional development need to be provided as well. View the article as published at http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01819/ful
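The 2 × 3 × 3 structure described above (embodiment × platform × test time) can be sketched as a factorial grid. The factor labels below are taken from the abstract; the code is only an illustration of the design's cell structure, not the study's analysis:

```python
from itertools import product

embodiment = ["low", "high"]                     # factor 1: degree of embodiment
platform = ["SMALLab", "whiteboard", "desktop"]  # factor 2: educational platform
test_time = ["pre", "post", "follow-up"]         # repeated measure: test administration

# Every cell of the 2 x 3 x 3 design
cells = list(product(embodiment, platform, test_time))
print(len(cells))  # 18 cells
```

Crossing the between-subjects factors with the three test administrations yields 18 cells, which is why the reported interaction of embodiment with time (post-test to follow-up) can be tested separately from the immediate pre/post gains.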

    Interfaces and interfacings: posthuman ecologies, bodies and identities

    This dissertation posits a posthuman theory for a technologically driven ubiquitous computing (ubicomp) world, specifically theorizing cognition, intentionality, and interface. The larger aim of this project is to open up discussions about human and technological relations and how these relations shape our understanding of what it means to be human. Situating my argument within posthuman and rhetorical theories, I discuss the metaphorical cyborg as a site of resistance, the everyday cyborg and its relations to technology through technogenesis and technology-extension theories, and lastly the posthuman cyborg resulting from advances in biotechnology. I argue that this posthuman cyborg is an enmeshed network of biological and informatic code, with neither having primacy. Building upon Anthony Miccoli, I see the interface (the space in between) as a functional myth, as humans are mutually constituted by the material, biological, technological, and social substrates of a networked ecology. I then reconfigure Kenneth Burke’s identification theory for the technological age and argue that the posthuman subject consubstantiates with the substrates (or substances) to continuously invent a fluid intersubjectivity in a networked ecology. This project, then, explores both metaphorical and technological interfaces to better understand each. I argue that interfacing is a more thorough term for understanding how humans, technologies, objects, spaces, language, and code interact and thus constitute what we conceptualize as “human” and “reality.” This framework dismantles the interface as a space in between in favor of a networked ecology of dynamic relations. I then examine technological interfaces and their development as they have moved from the desktop to touchscreens to spaces wherein the body becomes a literal interface and site of interaction. These developments require rhetoric and composition scholars to interrogate not only the discourse of technologies but the interfaces themselves if we are to fully understand how human users come to identify with technologies that shape not only our communication but also our sense of subjectivity, autonomy, agency, and intentionality. To make my claims clearer, I analyze science fiction representations of interfaces to chart more accessible means through which to understand the larger philosophical arcs in posthuman theory, intentionality, and artificial intelligence. Using these films, this work seeks to elucidate the complexities of relations in the networked ecologies that define how we understand ourselves and the world in which we live.

    Multimodal agents for cooperative interaction

    2020 Fall. Includes bibliographical references.
    Embodied virtual agents offer the potential to interact with a computer in a more natural manner, similar to how we interact with other people. To reach this potential requires multimodal interaction, including both speech and gesture. This project builds on earlier work at Colorado State University and Brandeis University on just such a multimodal system, referred to as Diana. I designed and developed a new software architecture to directly address some of the difficulties of the earlier system, particularly with regard to asynchronous communication, e.g., interrupting the agent after it has begun to act. Various other enhancements were made to the agent systems, including the model itself, as well as speech recognition, speech synthesis, motor control, and gaze control. Further refactoring and new code were developed to achieve software engineering goals that are not outwardly visible, but no less important: decoupling, testability, improved networking, and independence from a particular agent model. This work, combined with the effort of others in the lab, has produced a "version 2" Diana system that is well positioned to serve the lab's research needs in the future. In addition, in order to pursue new research opportunities related to developmental and intervention science, a "Faelyn Fox" agent was developed. This is a different model, with a simplified cognitive architecture, and a system for defining an experimental protocol (for example, a toy-sorting task) based on Unity's visual state machine editor. This version too lays a solid foundation for future research.
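The experimental-protocol idea can be pictured as a plain transition table. The states, events, and `step` helper below are hypothetical stand-ins for what the abstract describes being built with Unity's visual state machine editor; none of these names come from the lab's actual code:

```python
# Hypothetical protocol state machine for a toy-sorting task.
# Each (state, event) pair maps to the next state.
TRANSITIONS = {
    ("greet", "done_greeting"): "present_toy",
    ("present_toy", "toy_shown"): "await_sort",
    ("await_sort", "sorted_correctly"): "feedback",
    ("await_sort", "sorted_incorrectly"): "feedback",
    ("feedback", "more_toys"): "present_toy",
    ("feedback", "no_more_toys"): "done",
}

def step(state, event):
    """Advance the protocol; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Drive one pass through the protocol
state = "greet"
for event in ["done_greeting", "toy_shown", "sorted_correctly", "no_more_toys"]:
    state = step(state, event)
print(state)  # done
```

Keeping the protocol as data rather than code is what a visual state machine editor buys you: an experimenter can rearrange states and transitions without touching the agent's cognitive architecture.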

    RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching

    We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools have enabled users to draw and embed sketches in the real world. However, with current tools, sketched content is inherently static, floating in mid-air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them with physical objects in real-time, improvisational ways, so that the sketched elements move dynamically with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces. Comment: UIST 202
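One way to picture "parameterizing real-world motion" is extracting a scalar from a tracked point, e.g. the angle of a pendulum bob about a sketched pivot, which could then drive a responsive graph. The `angle_parameter` function and its screen coordinates below are illustrative assumptions, not RealitySketch's actual API (which runs on a mobile AR device with its own tracking):

```python
import math

def angle_parameter(pivot, tracked):
    """Angle (degrees) of a tracked point about a sketched pivot,
    measured from the downward vertical -- e.g. a pendulum bob."""
    dx = tracked[0] - pivot[0]
    dy = tracked[1] - pivot[1]  # screen y grows downward
    return math.degrees(math.atan2(dx, dy))

# One tracked frame: bob displaced to the right of and below the pivot
print(round(angle_parameter((0.5, 0.2), (0.7, 0.6)), 1))
```

Evaluating this per camera frame turns raw tracked positions into a time series, which is exactly the kind of derived parameter a sketched graph plot could visualize in situ.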

    The eyes have it
