
    Music Maker – A Camera-based Music Making Tool for Physical Rehabilitation

    The therapeutic effects of playing music are being recognized increasingly in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example, a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker performed reliably and showed its promise as a therapeutic device. National Science Foundation (IIS-0308213, IIS-039009, IIS-0093367, P200A01031, EIA-0202067 to M.B.); National Institutes of Health (DC-03663 to E.S.); Boston University (Dudley Allen Sargent Research Fund, to A.L.).
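
    As a rough illustration of the idea in this abstract (the actual system is built as an EyesWeb patch, which this sketch does not reproduce), the Python/OpenCV snippet below tracks a brightly colored marker attached to a body part and maps its vertical position to a MIDI-style note number. The HSV color range and the note range are assumptions for the example.

```python
import cv2
import numpy as np

# Assumed HSV range for a green marker; tune for the actual marker color.
LOW = np.array([40, 80, 80])
HIGH = np.array([80, 255, 255])

def frame_to_note(frame, low_note=36, high_note=84):
    """Map the marker's vertical centroid to a MIDI-style note number."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOW, HIGH)
    m = cv2.moments(mask)
    if m["m00"] == 0:                 # marker not visible in this frame
        return None
    cy = m["m01"] / m["m00"]          # vertical centroid (pixels from top)
    rel = 1.0 - cy / frame.shape[0]   # 0.0 at the bottom, 1.0 at the top
    return int(low_note + rel * (high_note - low_note))

cap = cv2.VideoCapture(0)             # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    note = frame_to_note(frame)
    if note is not None:
        print("note:", note)          # a real system would drive a synthesizer
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```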

    Representation of Samba dance gestures, using a multi-modal analysis approach

    In this paper we propose an approach for the representation of dance gestures in Samba dance. This representation is based on a video analysis of body movements, carried out from the viewpoint of the musical meter. Our method provides the movement periods, a measure of energy, and a visual representation of periodic movement in dance. The method is applied to a limited set of Samba dances and music, which is used to illustrate the usefulness of the approach.
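
    The abstract does not detail the extraction pipeline; as a hedged sketch of the core idea, the snippet below estimates the period of a repetitive movement from a per-frame motion-energy signal (e.g., summed frame differences) via autocorrelation. The signal here is synthetic so the example is self-contained.

```python
import numpy as np

def movement_period(energy, fps):
    """Estimate the dominant movement period (seconds) by autocorrelation."""
    x = energy - energy.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    start = np.argmax(ac < 0)          # skip the trivial peak at lag 0
    if start == 0:                     # no sign change: no clear periodicity
        return None
    lag = start + np.argmax(ac[start:])
    return lag / fps

# Synthetic example: a 2 Hz repetitive movement sampled at 30 fps
fps = 30
t = np.arange(0, 10, 1 / fps)
energy = 1 + np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)
print(f"estimated period: {movement_period(energy, fps):.2f} s")  # ~0.50
```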

    Camera-based software in rehabilitation/therapy intervention

    Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent on specific hardware. Adaptable means that human tracking, and interaction with created artefacts in the camera's field of view, can be changed relatively easily via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. The conclusion is that the mature, robust, and accessible software EyeCon is a potent and significant user-friendly tool in the field of rehabilitation/therapy and warrants wider exploration.
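
    EyeCon itself is closed, GUI-driven software; as an illustrative analogue only, the sketch below implements the kind of interaction it enables: a rectangular "hotspot" in the camera image that fires when enough motion occurs inside it. The region coordinates and threshold are arbitrary assumptions.

```python
import cv2

REGION = (100, 100, 200, 200)    # x, y, width, height of the hotspot (assumed)
THRESHOLD = 5.0                  # mean absolute frame difference counted as motion

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = REGION
    diff = cv2.absdiff(frame[y:y+h, x:x+w], prev[y:y+h, x:x+w])
    if diff.mean() > THRESHOLD:
        print("hotspot triggered")   # a real setup would play a sound here
    prev = frame
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```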

    The Intersection of Art and Technology

    As art influences science and technology, science and technology can in turn inspire art. Recognizing this mutually beneficial relationship, researchers at the Casa Paganini-InfoMus Research Centre work to combine scientific research in information and communications technology (ICT) with artistic and humanistic research. Here, the authors discuss some of their work, showing how their collaboration with artists informed work on analyzing nonverbal expressive and social behavior and contributed to tools, such as the EyesWeb XMI hardware and software platform, that support both artistic and scientific developments. They also sketch out how art-informed multimedia and multimodal technologies find application beyond the arts, in areas including education, cultural heritage, social inclusion, therapy, rehabilitation, and wellness.

    Straddling the intersection

    Music technology straddles the intersection between art and science and presents those who choose to work within its sphere with many practical challenges as well as creative possibilities. The paper focuses on four main areas: secondary education, higher education, practice and research, and collaboration. It emphasises the importance of collaboration in tackling the challenges of interdisciplinarity and in influencing future technological developments.

    Beyond Movement an Animal, Beyond an Animal the Sound

    (Abstract to follow.)

    Effects of Computerized Emotional Training on Children with High Functioning Autism

    An evaluation study of a serious game and a system for automatic emotion recognition, designed to help autistic children learn to recognize and express emotions by means of their full-body movement, is presented. Three-dimensional motion data of full-body movements are obtained from RGB-D sensors and used to recognize emotions by means of linear SVMs. Ten children diagnosed with High Functioning Autism or Asperger Syndrome were involved in the evaluation phase, consisting of repeated sessions of a specifically designed serious game. Results from the evaluation study show an increase in task accuracy from the beginning to the end of the training sessions in the trained group. In particular, while the increase in recognition accuracy was concentrated in the first sessions of the game, the increase in expression accuracy was more gradual throughout all sessions. Moreover, the training seems to produce a transfer effect on facial expression recognition.

    Adaptive Body Gesture Representation for Automatic Emotion Recognition

    We present a computational model and a system for the automated recognition of emotions starting from full-body movement. Three-dimensional motion data of full-body movements are obtained either from professional optical motion-capture systems (Qualisys) or from low-cost RGB-D sensors (Kinect and Kinect2). A number of features are then automatically extracted at different levels, from the kinematics of a single joint to more global expressive features inspired by psychology and humanistic theories (e.g., contraction index, fluidity, and impulsiveness). An abstraction layer based on dictionary learning further processes these movement features to increase the model's generality and to deal with the intraclass variability, noise, and incomplete information characterizing emotion expression in human movement. The resulting feature vector is the input to a classifier performing real-time automatic emotion recognition based on linear support vector machines. The recognition performance of the proposed model is presented and discussed, including the tradeoff between the precision of the tracking measures (we compare the Kinect RGB-D sensor and the Qualisys motion-capture system) and the size of the training dataset. The resulting model and system have been successfully applied in the development of serious games for helping autistic children learn to recognize and express emotions by means of their full-body movement.
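
    A minimal sketch of the final classification stage described above, assuming synthetic stand-in data: expressive movement features are fed to a linear SVM, and a simplified contraction index is shown as one example feature (the authors' exact feature definitions and the dictionary-learning layer are not reproduced here).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def contraction_index(joints):
    """Simplified contraction index: bounding-box volume of the 3-D joint
    positions (the authors' exact definition may differ)."""
    span = joints.max(axis=0) - joints.min(axis=0)
    return float(np.prod(span))

rng = np.random.default_rng(0)
joints = rng.normal(size=(20, 3))     # 20 joints in 3-D space, one frame
print("contraction index:", contraction_index(joints))

# Synthetic stand-in data: 400 movement clips x 16 expressive features,
# four emotion classes. Real features would come from motion capture.
n_clips, n_features = 400, 16
X = rng.normal(size=(n_clips, n_features))
y = rng.integers(0, 4, size=n_clips)
X[np.arange(n_clips), y] += 2.0       # inject class-dependent structure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)     # one-vs-rest linear SVM
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```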

    Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios

    Our project goals were the development of attention-based analysis of human expressive behavior and the implementation of real-time algorithms in EyesWeb XMI, in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled by means of entropy. Museum scenarios were selected as an ecological test-bed for three experiments that focus on visitor profiling and visitor flow regulation.
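
    As a hedged illustration of the entropy modeling mentioned above (the abstract does not specify the exact formulation), the snippet below computes the Shannon entropy of a per-frame movement feature's empirical distribution; the bin count and the example signals are assumptions.

```python
import numpy as np

def feature_entropy(values, bins=16):
    """Shannon entropy (bits) of a feature's empirical distribution."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
steady = 0.5 + 0.01 * rng.normal(size=300)   # regular, predictable movement
erratic = rng.random(300)                    # irregular movement
print(f"steady:  {feature_entropy(steady):.2f} bits")
print(f"erratic: {feature_entropy(erratic):.2f} bits")
```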