
    Informing the design of a multisensory learning environment for elementary mathematics learning

    It is well known that primary school children may face difficulties in acquiring mathematical competence, possibly because teaching is generally based on formal lessons with little opportunity for multisensory activities in the classroom. To overcome such difficulties, we report the exemplary design of a novel multisensory learning environment for teaching mathematical concepts, grounded in input from elementary school teachers. First, we developed and administered a questionnaire to 101 teachers, asking them to rate, based on their experience, how difficult specific arithmetical and geometrical concepts are for elementary school children. The questionnaire also investigated the feasibility of using multisensory information to teach mathematical concepts. Results show that the most challenging concepts differ with the children's school level, providing guidance for improving teaching strategies and for designing new and emerging learning technologies accordingly. Second, we obtained specific, practical design input from workshops involving elementary school teachers and children. Together, these findings inform the design of emerging multimodal technological applications that take advantage not only of vision but also of other sensory modalities. We describe in detail one exemplary multisensory environment design based on the questionnaire results and the workshop ideas: the Space Shapes game, which exploits visual and haptic/proprioceptive sensory information to support mental rotation, 2D–3D transformation, and percentages. Corroborating research evidence from neuroscience and pedagogy, our work presents a functional approach to developing novel multimodal user interfaces that improve education in the classroom.
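
    As an illustration of the kind of analysis the questionnaire enables, the sketch below aggregates per-concept difficulty ratings by school level. It is a hypothetical Python example: the 1-5 rating scale, the concept names, and the data are all invented, and the paper publishes no code.

```python
# Hypothetical sketch: aggregating teacher difficulty ratings per concept
# and school level. The 1-5 scale and the sample data are assumptions.
from collections import defaultdict

def mean_difficulty(responses):
    """responses: iterable of (grade, concept, rating) tuples."""
    sums = defaultdict(lambda: [0.0, 0])  # (grade, concept) -> [total, count]
    for grade, concept, rating in responses:
        entry = sums[(grade, concept)]
        entry[0] += rating
        entry[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

responses = [
    (1, "mental rotation", 4), (1, "percentages", 5),
    (5, "mental rotation", 2), (5, "percentages", 3),
]
for (grade, concept), avg in sorted(mean_difficulty(responses).items()):
    print(f"grade {grade}: {concept} -> {avg:.1f}")
```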

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    ShapeBots: Shape-changing Swarm Robots

    We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions. The modular design of each actuator enables various shapes and geometries of self-transformation. We illustrate potential application scenarios and discuss how this type of interface opens up possibilities for the future of ubiquitous and distributed shape-changing interfaces.
    Comment: UIST 2019
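
    To make the actuation model concrete, here is a minimal Python sketch of one ShapeBot-style unit with two orthogonal linear actuators clamped to the 2.5-20 cm range quoted above. The class, the method names, the reading of 2.5 cm as the retracted length, and the swarm example are all invented for illustration; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): one swarm unit with two
# orthogonal linear actuators. Treating 2.5 cm as the retracted length
# is an assumption based on the abstract's figures.
from dataclasses import dataclass

MIN_LEN_CM = 2.5   # assumed retracted actuator length
MAX_LEN_CM = 20.0  # fully extended length

@dataclass
class SwarmUnit:
    x: float                      # planar position of the robot
    y: float
    horiz_cm: float = MIN_LEN_CM  # horizontal actuator extension
    vert_cm: float = MIN_LEN_CM   # vertical actuator extension

    def set_shape(self, horiz_cm: float, vert_cm: float) -> None:
        """Command a new extension, respecting the mechanical limits."""
        self.horiz_cm = max(MIN_LEN_CM, min(MAX_LEN_CM, horiz_cm))
        self.vert_cm = max(MIN_LEN_CM, min(MAX_LEN_CM, vert_cm))

# A swarm collectively displaying a bar-chart-like shape:
swarm = [SwarmUnit(x=i * 5.0, y=0.0) for i in range(4)]
for unit, height in zip(swarm, [4.0, 8.0, 16.0, 25.0]):
    unit.set_shape(horiz_cm=MIN_LEN_CM, vert_cm=height)  # 25.0 is clamped to 20.0
print([u.vert_cm for u in swarm])
```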

    Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces

    Graph exploration and editing are still mostly considered independently, and existing systems are not designed for today's interactive surfaces such as smartphones, tablets, or tabletops. When developing a system for these modern devices that supports both graph exploration and graph editing, it is necessary to identify 1) what basic tasks need to be supported, 2) what interactions can be used, and 3) how to map these tasks to interactions. This technical report provides a list of basic interaction tasks for graph exploration and editing, derived from an extensive review of existing systems. Moreover, different interaction modalities of interactive surfaces are reviewed according to their interaction vocabulary, and further degrees of freedom that can be used to make interactions distinguishable are discussed. Beyond the scope of graph exploration and editing, we provide a generally applicable approach for finding and evaluating a mapping from tasks to interactions. Thus, this work acts as a guideline for developing a system for graph exploration and editing that is specifically designed for interactive surfaces.
    Comment: 21 pages, minor corrections (typos etc.)
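
    The mapping idea can be illustrated with a toy example. The Python sketch below pairs a few basic graph tasks with touch gestures and checks that no gesture is assigned twice, so every task stays distinguishable. The concrete tasks, gestures, and conflict rule are assumptions, not the report's actual mapping.

```python
# Hypothetical task-to-interaction mapping for a touch-based graph editor.
# Tasks and gestures are invented for illustration.
mapping = {
    "select node": "single tap",
    "add node":    "double tap on canvas",
    "add edge":    "drag from node to node",
    "pan view":    "one-finger drag on canvas",
    "zoom view":   "two-finger pinch",
    "delete node": "long press, then confirm",
}

def find_conflicts(mapping):
    """Report gestures assigned to more than one task (not distinguishable)."""
    seen = {}
    conflicts = []
    for task, gesture in mapping.items():
        if gesture in seen:
            conflicts.append((gesture, seen[gesture], task))
        seen[gesture] = task
    return conflicts

assert find_conflicts(mapping) == []  # every task is distinguishable
```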

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become the de facto standard of input for mobile devices, as they make optimal use of the limited input and output space imposed by the devices' form factor. In recent years, people who are blind and visually impaired have been increasingly using smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be dealt with in order to bring full inclusion to this population. One important challenge lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes map directions accessible using multiple vibration motors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make applications such as physics simulations, astronomy tools, and video games accessible.
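
    As a rough illustration of the second system's idea, the Python sketch below maps the bearing to the next waypoint onto one of four edge-mounted vibration motors. The motor layout and the 90-degree sectors are assumptions; the thesis' actual design may differ.

```python
# Minimal sketch of vibration-based guidance: pick which of four assumed
# edge-mounted motors to pulse based on the direction to the next waypoint.
import math

MOTORS = ["top", "right", "bottom", "left"]  # assumed placement on the device

def motor_for_bearing(dx: float, dy: float) -> str:
    """Map a direction vector (screen coordinates, y down) to a motor."""
    angle = math.degrees(math.atan2(dx, -dy)) % 360  # 0 deg = straight ahead
    index = int(((angle + 45) % 360) // 90)          # 90-degree sectors
    return MOTORS[index]

print(motor_for_bearing(0, -10))  # top: destination straight ahead
print(motor_for_bearing(10, 0))   # right: turn right
```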

    Vision-based interactive toys environment


    Tangible Paper Interfaces: Interpreting Pupils' Manipulations

    Paper interfaces merge the advantages of the digital and physical worlds. They can be created from ordinary paper augmented by a camera+projector system. They are particularly promising for applications in education, because paper is already fully integrated into the classroom and computers can augment it with a dynamic display. However, people mostly use paper as a document and rarely exploit its characteristics as a physical body. In this article, we show how the tangible nature of paper can be used to extract information about the learning activity. We present an augmented reality activity in which primary school pupils explore the classification of quadrilaterals using sheets, cards, and cardboard shapes. We report a preliminary study and an in-situ controlled study based on this activity. From the detected positions of the various interface elements, we show how to extract indicators of problem solving, hesitation, exercise difficulty, and the division of labor among groups of pupils. Finally, we discuss how such indicators can be used and how other interfaces can be designed to extract different indicators.
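
    One way such an indicator could be computed is sketched below in Python: a toy "hesitation" score for a tracked cardboard shape, measured as how much longer its travelled path is than its straight-line displacement. The formula is an assumption for illustration, not the paper's published measure.

```python
# Toy hesitation indicator from tracked positions of one interface element.
# The path/displacement ratio is an invented measure, not the paper's.
import math

def hesitation(positions):
    """positions: list of (x, y) samples for one tracked element."""
    if len(positions) < 2:
        return 0.0
    path = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    direct = math.dist(positions[0], positions[-1])
    if direct == 0:
        return float("inf") if path > 0 else 0.0
    return path / direct - 1.0  # 0 = moved straight, larger = more wandering

decisive = [(0, 0), (5, 0), (10, 0)]
wavering = [(0, 0), (5, 3), (2, 6), (8, 2), (10, 0)]
print(hesitation(decisive), hesitation(wavering))
```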

    Maintaining Structured Experiences for Robots via Human Demonstrations: An Architecture To Convey Long-Term Robot's Beliefs

    This PhD thesis presents an architecture for structuring experiences, learned through demonstrations, in a robot's memory. To test our architecture, we consider a specific application in which a robot learns how objects are spatially arranged in a tabletop scenario. We use this application as a means to present software development guidelines for building architectures for similar scenarios, where a robot interacts with a user through qualitative shared knowledge stored in its memory. In particular, the thesis proposes a novel technique for deploying ontologies in a robotic architecture based on semantic interfaces. To better support those interfaces, it also presents general-purpose tools designed for an iterative development process, which suits Human-Robot Interaction scenarios. We consider ourselves at the beginning of the first iteration of the design process, and our objective was to build a flexible architecture with which to evaluate different heuristics during further development iterations. Our architecture is based on a novel algorithm performing one-shot structured learning with a logic formalism. We use a fuzzy ontology to deal with uncertain environments, and we integrate the algorithm into the architecture through a specific semantic interface. The algorithm builds experience graphs, encoded in the robot's memory, that can be used to recognise and associate situations after a knowledge-bootstrapping phase. During this phase, a user teaches and supervises the robot's beliefs through multimodal, non-physical interactions. We used the algorithm to implement a cognitive-like memory involving encoding, storing, retrieving, consolidating, and forgetting behaviours, and we showed that our flexible design pattern can be used to build architectures in which contextualised memories are managed for different purposes, i.e. they contain representations of the same experience encoded with different semantics. The proposed architecture's main purpose is to generate and maintain knowledge in memory, but it can be directly interfaced with perceiving and acting components if they provide, or require, symbolic knowledge. To show the type of data considered as inputs and outputs in our tests, this thesis also presents components that evaluate point clouds, engage in dialogues, perform late data fusion, and simulate the search for a target position. Nevertheless, our design pattern is not meant to be coupled only with these components, which indeed have large room for improvement.
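
    The memory behaviours listed above can be caricatured in a few lines of Python. The sketch below is a toy short-term/consolidated store that consolidates experiences retrieved often and forgets those never retrieved; the real architecture uses fuzzy ontologies and experience graphs, which this deliberately does not reproduce.

```python
# Conceptual sketch only: a toy memory echoing the encode/store/retrieve/
# consolidate/forget behaviours described above. All names are invented.
class ExperienceMemory:
    def __init__(self, consolidate_after=3):
        self._items = {}         # short-term: name -> {"data": ..., "hits": int}
        self._consolidated = {}  # long-term store
        self.consolidate_after = consolidate_after

    def encode(self, name, data):
        self._items[name] = {"data": data, "hits": 0}

    def retrieve(self, name):
        item = self._items.get(name) or self._consolidated.get(name)
        if item is None:
            return None
        item["hits"] += 1
        if name in self._items and item["hits"] >= self.consolidate_after:
            self._consolidated[name] = self._items.pop(name)  # consolidate
        return item["data"]

    def forget_unused(self):
        """Drop short-term experiences that were never retrieved."""
        self._items = {k: v for k, v in self._items.items() if v["hits"] > 0}

memory = ExperienceMemory()
memory.encode("cup left of plate", {"relation": "leftOf"})
for _ in range(3):
    memory.retrieve("cup left of plate")  # third retrieval consolidates it
memory.forget_unused()
```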