
    An Open Platform for Full Body Interactive Sonification Exergames

    This paper addresses the use of a remote interactive platform to support home-based rehabilitation for children with motor and cognitive impairment. The interaction between user and platform is achieved through customizable full-body interactive serious games (exergames). These exergames perform real-time analysis of multimodal signals to quantify movement qualities and postural attitudes. Interactive sonification of movement is then applied to provide real-time feedback based on "aesthetic resonance" and the engagement of the children. The games also produce log file recordings that therapists can use to assess the children's performance and the effectiveness of the games. The platform allows the games to be customized to the children's needs. It is based on the EyesWeb XMI software, and the games are designed for home usage, using Kinect for Xbox One and simple sensors such as the 3-axis accelerometers available in low-cost Android smartphones.
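    The paper does not reproduce its mapping code, but the core loop it describes, deriving a movement-quality feature from a smartphone's 3-axis accelerometer and mapping it to a sound parameter in real time, can be sketched as below. All names and parameter ranges here are illustrative assumptions, not the platform's actual API.

```python
import math

def magnitude(ax, ay, az):
    """Euclidean norm of a 3-axis accelerometer sample (in g)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

class EnergyToPitchMapper:
    """Toy movement-energy -> pitch mapping (illustrative, not EyesWeb's)."""

    def __init__(self, alpha=0.1, f_min=220.0, f_max=880.0):
        self.alpha = alpha                   # smoothing factor for the energy estimate
        self.energy = 0.0                    # exponentially smoothed movement energy
        self.f_min, self.f_max = f_min, f_max

    def process(self, ax, ay, az):
        # Deviation from 1 g approximates the dynamic (movement) acceleration.
        instant = abs(magnitude(ax, ay, az) - 1.0)
        self.energy = (1 - self.alpha) * self.energy + self.alpha * instant
        # Clamp to [0, 1] and map linearly onto a pitch range.
        e = max(0.0, min(1.0, self.energy))
        return self.f_min + e * (self.f_max - self.f_min)

mapper = EnergyToPitchMapper()
print(mapper.process(0.1, 0.9, 0.3))  # frequency (Hz) to drive a synthesizer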

    Interactive sonification to assist children with autism during motor therapeutic interventions

    Interactive sonification is an effective tool for guiding individuals as they practice movements. Little research has examined the use of interactive sonification to support motor therapeutic interventions for children with autism who exhibit motor impairments. The goal of this research is to study whether children with autism understand the use of interactive sonification during motor therapeutic interventions, the potential impact of interactive sonification on the development of motor skills in children with autism, and the feasibility of using it in specialized schools for children with autism. We conducted two deployment studies in Mexico using Go-with-the-Flow, a framework for sonifying movements previously developed for chronic pain rehabilitation. In the first study, six children with autism were asked to perform forward-reach and lateral upper-limb exercises while listening to three different sound structures (i.e., one discrete and two continuous sounds). Results showed that children with autism exhibit awareness of the sonification of their movements and engage with the sonification. Based on the results of the first study, we then adapted the sonifications for motor therapy of children with autism. In the second study, nine children with autism were asked to perform upper-limb lateral, cross-lateral, and push movements while listening to five different sound structures (i.e., three discrete and two continuous) designed to sonify the movements. Results showed that discrete sound structures engage the children in the performance of upper-limb movements and increase their ability to perform the movements correctly. We finally propose design considerations that could guide the design of projects related to interactive sonification.
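    The Go-with-the-Flow sonifications themselves are not specified here; the sketch below merely illustrates the distinction the studies draw between continuous and discrete sound structures: a pitch that glides smoothly with movement progress versus tones triggered at quantized steps. Step counts and frequency ranges are invented for illustration.

```python
def continuous_sonification(progress, f_min=200.0, f_max=600.0):
    """Map normalized movement progress (0..1) to a smoothly gliding pitch."""
    return f_min + progress * (f_max - f_min)

def discrete_sonification(progress, steps=5, f_min=200.0, f_max=600.0):
    """Quantize progress into a fixed number of steps, so each crossed
    threshold triggers a distinct tone (the 'discrete' structure)."""
    step = min(int(progress * steps), steps - 1)
    return f_min + (step / (steps - 1)) * (f_max - f_min)

for p in (0.0, 0.3, 0.6, 1.0):
    print(p, continuous_sonification(p), discrete_sonification(p))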

    Designing multimodal interactive systems using EyesWeb XMI

    This paper introduces the EyesWeb XMI platform (for eXtended Multimodal Interaction) as a tool for fast prototyping of multimodal systems, including the interconnection of multiple smart devices, e.g., smartphones. EyesWeb is endowed with a visual programming language that enables users to compose modules into applications. Modules are collected in several libraries and include support for many input devices (e.g., video, audio, motion capture, accelerometers, and physiological sensors), output devices (e.g., video, audio, 2D and 3D graphics), and synchronized multimodal data processing. Specific libraries are devoted to real-time analysis of nonverbal expressive motor and social behavior. The EyesWeb platform encompasses further tools, such as EyesWeb Mobile, which supports the development of customized Graphical User Interfaces for specific classes of users. The paper reviews the EyesWeb platform and its components, starting from its historical origins, with a particular focus on the Human-Computer Interaction aspects.
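    EyesWeb patches are built graphically, so there is no textual program to quote; the Python sketch below only mimics the dataflow idiom the platform is built on, modules chained so each block's output feeds the next. The module classes and the `>>` connection operator are hypothetical stand-ins for patch cables, not EyesWeb's API.

```python
class Module:
    """Minimal dataflow block: subclasses override process()."""
    def __init__(self):
        self.downstream = None

    def __rshift__(self, other):
        """Connect self -> other, mimicking a patch cable; returns other."""
        self.downstream = other
        return other

    def push(self, data):
        out = self.process(data)
        if self.downstream is not None:
            self.downstream.push(out)

    def process(self, data):
        return data

class Smooth(Module):
    """Exponential smoothing of a scalar stream."""
    def __init__(self, alpha=0.2):
        super().__init__()
        self.alpha, self.state = alpha, 0.0

    def process(self, x):
        self.state = (1 - self.alpha) * self.state + self.alpha * x
        return self.state

class Printer(Module):
    def process(self, x):
        print(f"smoothed: {x:.3f}")
        return x

src = Module()                               # e.g., a sensor input block
src >> Smooth(alpha=0.3) >> Printer()        # wire the patch
for sample in (0.0, 1.0, 1.0, 0.2):
    src.push(sample)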

    A Person-Centric Design Framework for At-Home Motor Learning in Serious Games

    In motor learning, real-time multimodal feedback is a critical element in guided training. Serious games have been introduced as a platform for at-home motor training due to their highly interactive and multimodal nature. This dissertation explores the design of a multimodal environment for at-home training in which an autonomous system observes and guides the user in the place of a live trainer, providing real-time assessment, feedback, and difficulty adaptation as the subject masters a motor skill. After an in-depth review of the latest solutions in this field, this dissertation proposes a person-centric approach to the design of this environment, in contrast to the standard techniques implemented in related work, to address many of their limitations. The unique advantages and restrictions of this approach are presented in the form of a case study in which a system entitled the "Autonomous Training Assistant", consisting of both hardware and software for guided at-home motor learning, is designed and adapted for a specific individual and trainer. In this work, the design of an autonomous motor learning environment is approached from three areas: motor assessment, multimodal feedback, and serious game design. For motor assessment, a three-dimensional assessment framework is proposed comprising two spatial domains (posture, progression) and one temporal domain (pacing) of real-time motor assessment. For multimodal feedback, a rod-shaped device called the "Intelligent Stick" is combined with an audio-visual interface to provide feedback to the subject in three modalities (audio, visual, haptic). Assessment domains are mapped to feedback modalities, and feedback is provided whenever the user's performance deviates from the ideal performance level by more than an adaptive threshold. Approaches for multimodal integration and feedback fading are discussed. Finally, a novel approach to stealth adaptation in serious game design is presented. This approach allows serious games to incorporate motor tasks in a more natural way, facilitating self-assessment by the subject. Three different stealth adaptation approaches are evaluated using the flow-state ratio metric. The dissertation concludes with directions for future work on the integration of stealth adaptation techniques across the field of exergames.
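    The dissertation's feedback-triggering logic is not quoted here; the following sketch illustrates one plausible reading of "feedback whenever performance deviates from the ideal by an adaptive threshold": cue feedback on large deviations and tighten the tolerance while the user stays within it. The class name, shrink factor, and floor are assumptions.

```python
class AdaptiveFeedback:
    """Sketch of threshold-based feedback triggering (assumed logic,
    not the dissertation's actual implementation)."""

    def __init__(self, threshold=0.5, shrink=0.95, floor=0.1):
        self.threshold = threshold  # allowed deviation from ideal performance
        self.shrink = shrink        # tighten the tolerance as skill improves
        self.floor = floor          # never demand literal perfection

    def update(self, performance, ideal):
        deviation = abs(performance - ideal)
        if deviation > self.threshold:
            return True  # cue feedback in the mapped modality
        # Within tolerance: adapt by tightening the threshold slightly.
        self.threshold = max(self.floor, self.threshold * self.shrink)
        return False

fb = AdaptiveFeedback()
for p in (0.9, 0.95, 0.4, 0.98):
    print(fb.update(p, ideal=1.0), round(fb.threshold, 3))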

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, starting from recent findings in neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories and research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on the two EU ICT-H2020 projects "weDRAW" and "TELMI", on which I worked during my PhD.

    Enhancing the use of Haptic Devices in Education and Entertainment

    This research was part of the two-year Horizon 2020 European project "weDRAW". The premise of the project was that "specific sensory systems have specific roles to learn specific concepts". This work explores the use of the haptic modality, stimulated by means of force-feedback devices, to convey abstract concepts inside virtual reality. After a review of the current use of haptic devices in education and of the available haptic software and game engines, we focus on the implementation of a haptic plugin for game engines (HPGE, based on the state-of-the-art rendering library CHAI3D) and its evaluation in experiments on human perception and multisensory integration.
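    HPGE and CHAI3D are C++ codebases whose sources are not reproduced here; the sketch below only illustrates the penalty-based (spring) force rendering principle that force-feedback libraries of this kind commonly implement, with arbitrary stiffness and geometry.

```python
def contact_force(tool_pos, plane_height=0.0, stiffness=800.0):
    """Penalty-based haptic rendering of a horizontal plane (1D, in N).

    When the haptic tool penetrates the plane, render a spring force
    proportional to penetration depth (Hooke's law), as force-feedback
    rendering loops typically do at ~1 kHz.
    """
    penetration = plane_height - tool_pos
    if penetration <= 0.0:
        return 0.0                   # no contact, no force
    return stiffness * penetration   # push the tool back out of the surface

for z in (0.01, 0.0, -0.002, -0.005):
    print(f"z={z:+.3f} m -> force {contact_force(z):.1f} N")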

    Grand Challenges in SportsHCI

    The field of Sports Human-Computer Interaction (SportsHCI) investigates interaction design to support a physically active human being. Despite growing interest and dissemination of SportsHCI literature over the past years, many publications still focus on solving specific problems in a given sport. We believe in the benefit of generating fundamental knowledge for SportsHCI more broadly to advance the field as a whole. To achieve this, we aim to identify the grand challenges in SportsHCI, which can help researchers and practitioners in developing a future research agenda. Hence, this paper presents a set of grand challenges identified in a five-day workshop with 22 experts who have previously researched, designed, and deployed SportsHCI systems. Addressing these challenges will drive transformative advancements in SportsHCI, fostering better athlete performance, athlete-coach relationships, and spectator engagement, as well as immersive experiences for recreational sports and exercise motivation, and will ultimately improve human well-being.

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older people due to low vision, cognitive impairments, or literacy issues. Because of trade-offs between aesthetic appeal and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but it is first necessary to understand the limitations of existing data representations. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.
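    As a concrete (hypothetical) instance of the complement-or-reinforce idea, the sketch below maps a heart-rate reading onto a redundant audio pitch and a vibration count; the zone boundaries and the encoding are illustrative assumptions, not findings of the paper.

```python
def heart_rate_cue(bpm, low=60, high=100):
    """Map a heart-rate reading to redundant non-visual cues.

    Returns (audio_pitch_hz, vibration_pulses): an assumed encoding in
    which pitch rises with heart rate and out-of-range readings add
    extra haptic pulses to draw attention without requiring the screen.
    """
    pitch = 220.0 + (bpm - low) * 4.0   # higher rate -> higher pitch
    if bpm < low or bpm > high:
        pulses = 3                       # strong alert outside the zone
    else:
        pulses = 1                       # gentle confirmation in the zone
    return pitch, pulses

for bpm in (55, 72, 110):
    print(bpm, heart_rate_cue(bpm))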