
    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
    Funding: National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499).
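    The core computation the abstract describes maps a spatial direction vector into a motor direction vector (joint rotations) appropriate to the current arm configuration. The sketch below illustrates that direction-to-rotation idea for a redundant planar arm; it uses a Jacobian pseudoinverse as a stand-in for the mapping that DIRECT learns through motor babbling, so the arm, link lengths, and solver here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the DIRECT model itself): a redundant 3-joint
# planar arm reaches a 2-D target by mapping a spatial direction vector
# into joint rotations. DIRECT learns this spatial-to-motor mapping via
# motor babbling; the Jacobian pseudoinverse stands in for it here.
import numpy as np

LINKS = np.array([1.0, 0.8, 0.6])  # assumed link lengths (arbitrary units)

def end_effector(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def jacobian(q, eps=1e-6):
    """Numerical Jacobian of end-effector position w.r.t. joint angles."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (end_effector(q + dq) - end_effector(q - dq)) / (2 * eps)
    return J

def reach(q, target, step=0.1, tol=1e-3, max_iters=500):
    """Move toward a spatial target in one continuous movement."""
    for _ in range(max_iters):
        direction = target - end_effector(q)  # spatial direction vector
        if np.linalg.norm(direction) < tol:
            break
        # Map spatial direction -> joint rotations (motor direction vector).
        q = q + step * np.linalg.pinv(jacobian(q)) @ direction
    return q

q_final = reach(np.array([0.3, 0.3, 0.3]), target=np.array([1.2, 1.0]))
print("final end-effector position:", end_effector(q_final))
```

    Because the arm has three joints moving in a two-dimensional space, many joint combinations reach the same target; that redundancy is the motor equivalence property the model exploits.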

    Literacy for digital futures: Mind, body, text

    The unprecedented rate of global, technological, and societal change calls for a radical, new understanding of literacy. This book offers a nuanced framework for making sense of literacy by addressing knowledge as contextualised, embodied, multimodal, and digitally mediated. In today’s world of technological breakthroughs, social shifts, and rapid changes to the educational landscape, literacy can no longer be understood through established curriculum and static text structures. To prepare teachers, scholars, and researchers for the digital future, the book is organised around three themes – Mind and Materiality; Body and Senses; and Texts and Digital Semiotics – to shape readers’ understanding of literacy. Opening up new interdisciplinary themes, Mills, Unsworth, and Scholes confront emerging issues for next-generation digital literacy practices. The volume helps new and established researchers rethink dynamic changes in the materiality of texts and their implications for the mind and body, and features recommendations for educational and professional practice.

    Multimodal interaction with mobile devices: fusing a broad spectrum of modality combinations

    This dissertation presents a multimodal architecture for use in mobile scenarios such as shopping and navigation. It also analyses a wide range of feasible modality input combinations for these contexts. For this purpose, two interlinked demonstrators were designed for stand-alone use on mobile devices. Of particular importance was the design and implementation of a modality fusion module capable of combining input from a range of communication modes like speech, handwriting, and gesture. The implementation is able to account for confidence value biases arising within and between modalities and also provides a method for resolving semantically overlapped input. Tangible interaction with real-world objects and symmetric multimodality are two further themes addressed in this work. The work concludes with the results from two usability field studies that provide insight into user preference and modality intuition for different modality combinations, as well as user acceptance for anthropomorphized objects.
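    The fusion module described above combines hypotheses from speech, handwriting, and gesture while correcting for confidence biases within and between modalities. The dissertation's actual architecture is not reproduced here; the snippet below is a minimal sketch of confidence-weighted late fusion in which the modality names, bias weights, and `fuse` helper are all illustrative assumptions.

```python
# Hypothetical sketch of confidence-weighted modality fusion (not the
# dissertation's implementation). Each modality proposes interpretations
# with raw confidences; per-modality bias weights rescale them before
# scores for the same interpretation are combined.
from collections import defaultdict

# Assumed bias weights compensating for systematic over/under-confidence.
MODALITY_BIAS = {"speech": 0.9, "handwriting": 1.0, "gesture": 0.7}

def fuse(hypotheses):
    """hypotheses: list of (modality, interpretation, confidence) tuples.
    Returns interpretations ranked by bias-corrected combined score."""
    scores = defaultdict(float)
    for modality, interpretation, confidence in hypotheses:
        scores[interpretation] += MODALITY_BIAS[modality] * confidence
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Semantically overlapping input: speech and gesture both select the
# same item, so their bias-corrected scores accumulate.
ranked = fuse([
    ("speech", "select:item_42", 0.8),
    ("gesture", "select:item_42", 0.6),
    ("handwriting", "select:item_17", 0.7),
])
print(ranked[0])  # -> ('select:item_42', 1.14)
```

    Semantically overlapped input is resolved naturally in this scheme because scores for identical interpretations from different modalities accumulate rather than compete.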

    Reimagining the Chalk Talk: Animated Handwriting as a Social Cue to Improve Motivation in Multimedia Video Lessons

    Animated handwriting in multimedia video lessons, such as those popularized by the Khan Academy, has reimagined the classic teaching technique of writing on a chalkboard while lecturing for online delivery. This digital chalk talk effect mimics classroom lectures where words are written letter by letter on a chalkboard as they are spoken. Low-cost applications, tablets, and document cameras allow instructors at all levels to easily create their own animated handwritten videos. As adoption increases, it is important to understand the effects of this strategy. This study employed a true experimental, between-subjects, posttest design that compared multimedia lessons with different text display formats on outcomes of motivation, mental effort, and learning. Undergraduate student volunteers (n = 234) from a large regional four-year public university on the U.S. West Coast were randomly assigned to one of three treatments: animated handwritten, animated typewritten, or static typewritten text. Each group watched a different version of a five-segment, twelve-minute multimedia lesson about cryptography. Lessons differed only in the visual text display format and contained identical narration and content. Results indicated that multimedia with animated handwritten text produced strong social cues motivating learners. Participants who viewed the animated handwriting reported significantly greater social agency attitudes toward the learning experience than with static typewritten text. They perceived the narrator’s voice as more dynamic with animated handwriting than with static text, even though the voice was identical. They also reported more attention to the lesson and materials with animated handwriting than with either animated typewritten or static typewritten text. These motivational gains were achieved without introducing extraneous cognitive load or negatively impacting learning outcomes. Significant findings from this research demonstrated that animated handwritten text is more than just a signaling strategy. The combination of text being hand-drawn and appearing as if a real person is writing it in real time adds a powerful social cue. Results of this study demonstrate that using animated handwriting in multimedia video lessons is an effective way to increase motivation through social cues, and that it can be accomplished without extensive technical knowledge, expensive equipment, or significant time investments.

    The Intersection of Young Children's Play Activities and Multimodal Practices for Social Purposes

    This qualitative study examined how children’s play activities and multimodal practices intersected for social purposes. Mediated Discourse Analysis (Scollon, 2001) informed the theory and methodology. Four questions guided the study: 1) In what types of play do preschoolers engage? 2) What strategies do preschoolers use to navigate the social boundaries of playframes? 3) What are the various modes and resources preschoolers use to engage in playframes, and how do they use them? 4) What social positionings do preschoolers take on and resist in playframes? Participants included two co-teachers and twelve of the children in their pre-kindergarten classroom. Data were collected over a five-month period using participant observations and field notes. Analysis focused on multimodal discourse that took place during free play time. Four findings emerged from the study: 1) Six types of play emerged across playframes; 2) Children used entry, invitation, sustainment, and protection strategies in their playframes; 3) Children used various modes and resources, including their bodies, props, and alphabetic print, to enact character roles and social roles; and 4) Children moved fluidly within and across insider and outsider social positionings in playframes. This study extends research focused on the social dynamics of young children’s classroom play experiences and argues for an extended conceptualization of multimodal literacy as analyzed in young children’s play.
