4 research outputs found

    An Open Platform for Full Body Interactive Sonification Exergames

    This paper addresses the use of a remote interactive platform to support home-based rehabilitation for children with motor and cognitive impairments. The interaction between user and platform is achieved through customizable full-body interactive serious games (exergames). These exergames perform real-time analysis of multimodal signals to quantify movement qualities and postural attitudes. Interactive sonification of movement is then applied to provide real-time feedback based on the "aesthetic resonance" and engagement of the children. The games also provide log-file recordings that therapists can use to assess the children's performance and the effectiveness of the games. The platform allows the games to be customized to each child's needs. It is based on the EyesWeb XMI software, and the games are designed for home use, built on Kinect for Xbox One and simple sensors such as the 3-axis accelerometers available in low-cost Android smartphones.

    Interactive sonification to assist children with autism during motor therapeutic interventions

    Interactive sonification is an effective tool for guiding individuals as they practice movements. Little research, however, has examined the use of interactive sonification to support motor therapeutic interventions for children with autism who exhibit motor impairments. The goal of this research is to study whether children with autism understand interactive sonification during motor therapeutic interventions, the potential impact of interactive sonification on the development of their motor skills, and the feasibility of using it in specialized schools for children with autism. We conducted two deployment studies in Mexico using Go-with-the-Flow, a framework for sonifying movements previously developed for chronic pain rehabilitation. In the first study, six children with autism were asked to perform forward-reach and lateral upper-limb exercises while listening to three different sound structures (i.e., one discrete and two continuous sounds). Results showed that children with autism exhibit awareness of the sonification of their movements and engage with it. Based on the results of the first study, we then adapted the sonifications for the motor therapy of children with autism. In the second study, nine children with autism were asked to perform upper-limb lateral, cross-lateral, and push movements while listening to five different sound structures (i.e., three discrete and two continuous) designed to sonify the movements. Results showed that discrete sound structures engage the children in the performance of upper-limb movements and increase their ability to perform the movements correctly. We finally propose design considerations that could guide the design of future interactive sonification projects.
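The distinction this abstract draws between discrete and continuous sound structures can be illustrated with a minimal sketch. The threshold, frequency range, and function names below are illustrative assumptions, not part of the Go-with-the-Flow framework:

```python
def discrete_feedback(positions, threshold):
    """Sketch of a 'discrete' sound structure: emit one trigger
    event each time the movement crosses a target amplitude."""
    events = []
    for i in range(1, len(positions)):
        # Rising crossing of the threshold between sample i-1 and i.
        if positions[i - 1] < threshold <= positions[i]:
            events.append(i)
    return events

def continuous_feedback(position, lo_hz=220.0, hi_hz=880.0, max_pos=1.0):
    """Sketch of a 'continuous' sound structure: map instantaneous
    position linearly onto a pitch in Hz."""
    x = max(0.0, min(position / max_pos, 1.0))  # clamp to [0, 1]
    return lo_hz + x * (hi_hz - lo_hz)
```

A discrete mapping rewards completing a movement (one sound per repetition), whereas the continuous mapping makes the ongoing trajectory itself audible.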

    Automated Analysis of Synchronization in Human Full-body Expressive Movement

    The research presented in this thesis focuses on the creation of computational models for the study of human full-body movement, in order to investigate human behavior and non-verbal communication. In particular, the research concerns the analysis of synchronization of expressive movements and gestures. Synchronization can be computed both on a single user (intra-personal), e.g., to measure the degree of coordination between a dancer's joint velocities, and on multiple users (inter-personal), e.g., to detect the level of coordination between members of a group. Through a set of experiments and results, the thesis contributes to the investigation of both intra-personal and inter-personal synchronization in support of the study of movement expressivity, and improves on the state of the art by presenting a new algorithm for the analysis of synchronization.
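The thesis's own algorithm is not given here, but a common baseline for the kind of synchronization it describes is lagged cross-correlation between two velocity time series. The sketch below is an illustrative assumption, not the method from the thesis:

```python
import numpy as np

def sync_index(vel_a, vel_b, max_lag=10):
    """Illustrative synchronization index: the peak normalized
    cross-correlation between two equal-length velocity series,
    searched over lags in [-max_lag, max_lag]. Returns a value
    near 1.0 for strongly coordinated signals."""
    a = (vel_a - vel_a.mean()) / (vel_a.std() + 1e-12)
    b = (vel_b - vel_b.mean()) / (vel_b.std() + 1e-12)
    n = len(a)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:n - lag]
        else:
            x, y = a[:n + lag], b[-lag:]
        r = float(np.dot(x, y) / len(x))  # correlation at this lag
        best = max(best, abs(r))
    return best
```

The same measure applies intra-personally (two joints of one mover) or inter-personally (the same joint across two movers).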

    Creating a real-time movement sonification system for hemiparetic upper limb rehabilitation for survivors of stroke

    Upper limb paresis is a common problem for survivors of stroke, impeding their ability to live independently, and rehabilitation interventions that reduce impairment are highly sought after. Audio-based interventions, such as movement sonification, may improve rehabilitation outcomes in this application; however, they remain relatively unexplored considering the potential of audio feedback to enhance motor skill learning. Movement sonification is the process of converting movement-associated data to the auditory domain, and it is touted as a feasible and effective way for stroke survivors to obtain real-time audio feedback on their movements. To generate real-time audio feedback through movement sonification, a system is required to capture movements, process the data, extract the physical domain of interest, convert it to the auditory domain, and emit the generated audio. No commercial system currently performs this process for gross upper limb movements; therefore, a new system had to be created. To begin this process, a mapping review of movement sonification systems in the literature was completed. System components reported in the literature were identified, keyword-coded, and grouped to provide an overview of the components used within these systems. From these results, components for new movement sonification systems were chosen based on popularity and applicability, producing two systems: one termed ‘Soniccup’, which uses an Inertial Measurement Unit, and the other termed ‘KinectSon’, which uses an Azure Kinect camera. Both systems were set up to translate position estimates into audio pitch as the output of the sonification process. Both systems were subsequently compared against a Vicon Nexus system to establish similarity of positional shape, and therefore similarity of audio output.
The results indicate that the Soniccup produced positional shapes representative of the movements performed for movements under one second in duration, but performance degraded as movement duration increased. In addition, the Soniccup produced these results with a system latency of approximately 230 ms, which is beyond the limit of real-time perception. The KinectSon system was found to produce positional shapes similar to the Vicon Nexus system for all movements, with a system latency of approximately 67 ms, which is within the limit of real-time perception. As such, the KinectSon system was judged a good candidate for generating real-time audio feedback, although further testing was required to establish the suitability of the generated audio feedback. To evaluate the feedback as part of usability testing, the KinectSon system was used in an agency study. Volunteers with and without upper-limb impairment performed reaching movements whilst using the KinectSon system and reported the perceived association between the sound generated and the movements performed. For three of the four sonification conditions, a triangular-wave pitch-modulation component was added to distort the sound. Participants associated their movements with the unmodulated sonification condition more strongly than with the modulated conditions, indicating that stroke survivors are able to use the KinectSon system and obtain a sense of agency whilst using it.
The thesis concludes with a discussion of the findings of the contributing chapters, along with the implications, limitations, and identified future work, within the context of creating a suitable real-time movement sonification system for a large-scale study involving an upper limb rehabilitation intervention.
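The triangular-wave pitch modulation used to distort three of the four agency-study conditions can be sketched as below. The modulation depth and rate are assumed values for illustration; the thesis does not specify them here:

```python
def triangle(t, freq_hz):
    """Triangular wave in [-1, 1] at the given frequency (Hz)."""
    phase = (t * freq_hz) % 1.0
    return 4.0 * abs(phase - 0.5) - 1.0

def modulated_pitch(base_hz, t, depth_hz=50.0, mod_freq_hz=2.0):
    """Distort a sonified pitch by adding a triangular-wave offset,
    in the spirit of the modulated agency-study conditions
    (depth and rate are assumptions, not values from the thesis)."""
    return base_hz + depth_hz * triangle(t, mod_freq_hz)
```

In the unmodulated condition the pitch tracks position alone; adding the triangle offset decouples the sound from the movement, which is what weakened participants' sense of agency.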