
    Feel and Touch: A Haptic Mobile Game to Assess Tactile Processing

    Haptic interfaces have great potential for assessing the tactile processing of children with Autism Spectrum Disorder (ASD), an area that has been under-explored due to the lack of tools to assess it. Until now, haptic interfaces for children have mostly been used as teaching or therapeutic tools, so open questions remain about how they could be used to assess the tactile processing of children with ASD. This article presents the design process that led to the development of Feel and Touch, a mobile game augmented with vibrotactile stimuli to assess tactile processing. Our feasibility evaluation, with 5 children from 3 to 6 years old, shows that children accept vibrations and are able to use the proposed vibrotactile patterns. However, further work is needed on the instructions, to make the game dynamics clearer, and on the rewards, to keep children's attention. We close this article by discussing future work and conclusions.
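The abstract does not specify how its vibrotactile patterns are encoded. As an illustration only, mobile vibration APIs conventionally describe a pattern as alternating off/on durations in milliseconds; a minimal sketch under that assumption (the pattern names and values here are hypothetical, not taken from Feel and Touch):

```python
# Hypothetical sketch: vibrotactile patterns encoded as alternating
# off/on durations in milliseconds (the convention used by common
# mobile vibration APIs). Names and timings are illustrative.

PATTERNS = {
    "short_pulse": [0, 100],          # one 100 ms burst, starting immediately
    "double_tap":  [0, 80, 120, 80],  # two 80 ms bursts, 120 ms apart
    "long_buzz":   [0, 400],          # one sustained 400 ms burst
}

def pattern_duration_ms(pattern):
    """Total wall-clock time a pattern occupies (off + on segments)."""
    return sum(pattern)

def render(pattern, vibrate):
    """Drive a motor callback vibrate(on, duration_ms).

    Segments alternate off/on, starting with an off segment, so a
    leading 0 means the first burst starts immediately.
    """
    on = False
    for duration in pattern:
        if duration > 0:
            vibrate(on, duration)
        on = not on
```

A game event would then call, say, `render(PATTERNS["double_tap"], motor)` with a platform-specific `motor` callback.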

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among them medicine and education. Nevertheless, the actual process of integrating the existing VR/AR media and subsequently setting it to purpose remains a highly scattered and esoteric undertaking. Moreover, seldom do the architectures that derive from such ventures comprise haptic feedback in their implementation, which in turn deprives users of one of the paramount aspects of human interaction, their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head Mounted Display (HMD) from Oculus Rift©, a hand tracking controller from Leap Motion©, a custom-made VR mount that allows for the assemblage of the two preceding peripherals, and a wearable device of our own design. The latter is a glove that integrates two core modules: one that conveys haptic feedback to its wearer, and another that deals with the non-intrusive acquisition, processing and registering of his/her Electrocardiogram (ECG), Electromyogram (EMG) and Electrodermal Activity (EDA). The software elements of the aforementioned features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, we set out to substantiate our initial claim with thoroughly developed experiences that would attest to its worth.
    With this premise in mind, we devised a comprehensive repository of interfaces, among which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL) and a Surgical Simulator.

    Design of Cognitive Interfaces for Personal Informatics Feedback


    Beyond the icon: Core cognition and the bounds of perception

    This paper refines a controversial proposal: that core systems belong to a perceptual kind, marked out by the format of its representational outputs. Following Susan Carey, this proposal has been understood in terms of core representations having an iconic format, like certain paradigmatically perceptual outputs. I argue that they don't, but suggest that the proposal may be better formulated in terms of a broader analogue format type. Formulated in this way, the proposal accommodates the existence of genuine icons in perception, and avoids otherwise troubling objections.

    Leveling the Playing Field: Supporting Neurodiversity via Virtual Realities

    Neurodiversity is a term that encapsulates the diverse expression of human neurology. By thinking in broad terms about neurological development, we can become focused on delivering a diverse set of design features to meet the needs of the human condition. In this work, we move toward developing virtual environments that support variations in sensory processing. If we understand that people have differences in sensory perception that result in their own unique sensory traits, many of which are clustered by diagnostic labels such as Autism Spectrum Disorder (ASD), Sensory Processing Disorder, Attention-Deficit/Hyperactivity Disorder, Rett syndrome, dyslexia, and so on, then we can leverage that knowledge to create new input modalities for accessible and assistive technologies. In an effort to translate differences in sensory perception into new variations of input modalities, we focus this work on ASD. ASD has been characterized by a complex sensory signature that can impact social, cognitive, and communication skills. By providing assistance for these diverse sensory perceptual abilities, we create an opportunity to improve the interactions people have with technology and the world. In this paper, we describe, through a variety of examples, the ways to address sensory differences to support neurologically diverse individuals by leveraging advances in virtual reality.

    Visio-Haptic Deformable Model for a Haptic-Dominant Palpation Simulator

    Vision and haptics are the two most important modalities in a medical simulation. While visual cues let one see one's actions when performing a medical procedure, haptic cues enable feeling the object being manipulated during the interaction. Despite their importance in a computer simulation, the combination of both modalities has not been adequately assessed, especially in a haptic-dominant environment. This results in poor resource-allocation management in terms of the effort spent rendering the two modalities for simulators with realistic real-time interactions. Addressing this problem requires an investigation of whether a single modality (haptic) or a combination of visual and haptic is better for learning skills in a haptic-dominant environment such as a palpation simulator. Before such an investigation can take place, however, one main technical implementation issue in visio-haptic rendering needs to be addressed.

    Making Graphical Information Accessible Without Vision Using Touch-based Devices

    Accessing graphical material such as graphs, figures, maps, and images is a major challenge for blind and visually impaired people. The traditional approaches that have addressed this issue have been plagued with various shortcomings (such as unintuitive sensory translation rules, prohibitive costs and limited portability), all hindering progress in reaching blind and visually impaired users. This thesis addresses aspects of these shortcomings by designing and experimentally evaluating an intuitive approach, called a vibro-audio interface, for non-visual access to graphical material. The approach is based on commercially available touch-based devices (such as smartphones and tablets), where hand and finger movements over the display provide position and orientation cues by synchronously triggering vibration patterns, speech output and auditory cues whenever an on-screen visual element is touched. Three human behavioral studies (Exp 1, 2, and 3) assessed usability of the vibro-audio interface by investigating whether its use leads to the development of an accurate spatial representation of the graphical information being conveyed. Results demonstrated the efficacy of the interface and, importantly, showed that performance was functionally equivalent to that found using traditional hardcopy tactile graphics, the gold standard of non-visual graphical learning. One limitation of this approach is the limited screen real estate of commercial touch-screen devices, which means that large and deep-format graphics (e.g., maps) will not fit within the screen. Panning and zooming are the traditional techniques for dealing with this challenge, but performing these operations without vision (i.e., using touch) raises several challenges relating both to the cognitive constraints of the user and to the technological constraints of the interface.
    To address these issues, two human behavioral experiments were conducted that assessed the influence of panning (Exp 4) and zooming (Exp 5) operations on the non-visual learning of graphical material and the related human factors. Results from Experiments 4 and 5 indicated that incorporating panning and zooming operations enhances the non-visual learning process and leads to the development of more accurate spatial representations. Together, this thesis demonstrates that the proposed approach, using a vibro-audio interface, is a viable multimodal solution for presenting dynamic graphical information to blind and visually impaired persons and for supporting the development of accurate spatial representations of otherwise inaccessible graphical materials.
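The core interaction described above, touching an on-screen element synchronously triggers vibration and speech, can be sketched as a hit-testing loop. This is an illustrative reconstruction of the principle, not the thesis's implementation; the `Element` fields and the `vibrate`/`speak` callbacks are hypothetical stand-ins for platform APIs:

```python
# Illustrative sketch of the vibro-audio principle: when the touch
# position falls inside an on-screen element's bounds, fire that
# element's vibrotactile and speech feedback. All names are
# hypothetical stand-ins for platform vibration/TTS APIs.

from dataclasses import dataclass

@dataclass
class Element:
    label: str     # text spoken when the element is touched
    x: float       # top-left corner, screen coordinates
    y: float
    width: float
    height: float

    def contains(self, tx, ty):
        """Axis-aligned bounding-box hit test for a touch point."""
        return (self.x <= tx < self.x + self.width and
                self.y <= ty < self.y + self.height)

def on_touch(elements, tx, ty, vibrate, speak):
    """Return the touched element (if any) after firing its feedback."""
    for element in elements:
        if element.contains(tx, ty):
            vibrate()             # synchronous vibrotactile cue
            speak(element.label)  # synchronous speech output
            return element
    return None                   # touch landed on empty background
```

In practice this would run on every touch-move event, so the user sweeping a finger across the display feels each bar or map region as it passes under the fingertip.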

    FlipMe: Exploring Rich Peer-to-Peer Communication in On-Line-Learning

    Department of Creative Design Engineering. FlipMe is an IoT (Internet of Things) companion augmenting peer-to-peer interaction for active online learning. Online learning continues to see rapid growth, with millions of students now engaging in online and remote courses as convenient alternatives to conventional classroom-based teaching. Despite these advantages, online learning can suffer from limited opportunities for peer-to-peer engagement between students, resulting in high drop-out rates. To address this challenge, we developed FlipMe. The FlipMe design provides three tangible interfaces. First, the flipping-top interface physicalizes data on peer learning activities in real time. Second, a physical nudge function, through the interactive handle, supports peer-to-peer interaction. Finally, group study activity is expressed through playful 'rolling-ball' feedback. The results of an in-lab user study showed that FlipMe encourages users to study by demonstrating learning activities between peers. In addition, we discovered that users engage and play with FlipMe as a playful companion that stimulates more active learning during extended study sessions. Lastly, we found that users formed an attachment to FlipMe through its aesthetic and its analog book-flipping motion and sound. Through this distinctive approach, FlipMe can play a motivational role for students to study actively with their peers. To sum up, this thesis contributes to the improvement of peer learning activities in online learning by 1) designing and implementing a tabletop IoT companion that provides information on peers' learning activities and an opportunity to communicate in an online learning context, 2) evaluating how a tangible interface can promote peer learning activities and revealing the value and potential of FlipMe, and 3) suggesting a mobile application screen designed to connect the learning experience across both online and offline learning environments.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
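The spoke-shift manipulation has a simple geometric form: each rectangle's centre is displaced radially along the line joining it to fixation. A minimal sketch (coordinate units are arbitrary here and stand in for the ±1 degree of visual angle used in the experiment):

```python
# Illustrative sketch of the spoke-shift manipulation: move a point
# along the imaginary spoke joining it to the central fixation point.
# A positive shift moves it outward, a negative shift inward.

import math

def shift_along_spoke(x, y, fixation, shift):
    """Displace (x, y) radially by `shift` units relative to fixation."""
    fx, fy = fixation
    dx, dy = x - fx, y - fy
    r = math.hypot(dx, dy)          # distance from fixation
    if r == 0:
        return (x, y)               # a point at fixation has no spoke
    scale = (r + shift) / r         # new radius as a fraction of the old
    return (fx + dx * scale, fy + dy * scale)
```

For example, a rectangle centred 5 units from fixation at (3, 4) shifted outward by 5 units lands at (6, 8): same direction from fixation, doubled eccentricity.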