
    Designing multimodal interactive systems using EyesWeb XMI

    This paper introduces the EyesWeb XMI platform (for eXtended Multimodal Interaction) as a tool for fast prototyping of multimodal systems, including the interconnection of multiple smart devices, e.g., smartphones. EyesWeb is endowed with a visual programming language enabling users to compose modules into applications. Modules are collected in several libraries and include support for many input devices (e.g., video, audio, motion capture, accelerometers, and physiological sensors), output devices (e.g., video, audio, 2D and 3D graphics), and synchronized multimodal data processing. Specific libraries are devoted to real-time analysis of nonverbal expressive motor and social behavior. The EyesWeb platform encompasses further tools such as EyesWeb Mobile, which supports the development of customized Graphical User Interfaces for specific classes of users. The paper reviews the EyesWeb platform and its components, starting from its historical origins, with a particular focus on Human-Computer Interaction aspects.
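    EyesWeb applications are composed visually rather than in code, but a rough sketch can convey the dataflow model the abstract describes: modules with named outputs connected into a synchronized processing chain. The Python below is a hypothetical illustration only; none of the class or function names belong to the real EyesWeb API.

        # Hypothetical sketch of a dataflow "patch" in the spirit of EyesWeb XMI;
        # these names are invented and do not belong to the real EyesWeb API.
        import random

        class Module:
            """A processing block that gathers frames from upstream blocks."""
            def __init__(self, name, process):
                self.name, self.process, self.inputs = name, process, []

            def connect(self, upstream):
                self.inputs.append(upstream)
                return self

            def pull(self):
                frame = {}
                for up in self.inputs:  # gather synchronized upstream frames
                    frame.update(up.pull())
                return self.process(frame)

        # Example patch: simulated accelerometer -> movement energy -> sonification gain.
        accel = Module("accel", lambda f: {"acc": [random.gauss(0, 1) for _ in range(3)]})
        energy = Module("energy", lambda f: {"e": sum(a * a for a in f["acc"])}).connect(accel)
        sonify = Module("sonify", lambda f: {"gain": min(1.0, f["e"] / 10.0)}).connect(energy)

        for _ in range(5):  # one pull per multimodal frame
            print(sonify.pull())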

    The Intersection of Art and Technology

    As art influences science and technology, science and technology can in turn inspire art. Recognizing this mutually beneficial relationship, researchers at the Casa Paganini-InfoMus Research Centre work to combine scientific research in information and communications technology (ICT) with artistic and humanistic research. Here, the authors discuss some of their work, showing how their collaboration with artists informed work on analyzing nonverbal expressive and social behavior and contributed to tools, such as the EyesWeb XMI hardware and software platform, that support both artistic and scientific developments. They also sketch out how art-informed multimedia and multimodal technologies find application beyond the arts, in areas including education, cultural heritage, social inclusion, therapy, rehabilitation, and wellness.

    Informing bowing and violin learning using movement analysis and machine learning

    Violin performance is characterized by an intimate connection between the player and her instrument, which allows her continuous control of sound through a sophisticated bowing technique. Great importance in violin pedagogy is therefore given to right-hand technique, which is responsible for most of the sound produced. This study analyses bow trajectories in three different classical violin exercises, recorded with audio and motion capture, and applies machine learning techniques to classify the different kinds of bowing techniques used. Our results show that a clustering algorithm is able to appropriately group together the different shapes produced by the bow trajectories.
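    The abstract does not name the clustering algorithm or the features; as a hedged sketch of the kind of grouping described, the following applies k-means to simple velocity statistics of synthetic bow strokes. The stroke types, features, and parameters here are assumptions, not the study's.

        # Illustrative only: k-means grouping of synthetic bow-stroke trajectories.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        def synthetic_stroke(kind, n=100):
            t = np.linspace(0, 1, n)
            if kind == "detache":  # long, even stroke
                y = t
            else:                  # "martele"-like: sharp attack, then hold
                y = np.minimum(5 * t, 1.0)
            return y + 0.02 * rng.standard_normal(n)

        strokes = [synthetic_stroke(k) for k in ["detache"] * 20 + ["martele"] * 20]

        def features(y):  # per-stroke summary: mean, variance, peak of bow velocity
            v = np.diff(y)
            return [v.mean(), v.var(), np.abs(v).max()]

        X = np.array([features(s) for s in strokes])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)  # strokes of the same bowing technique should share a cluster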

    An Open Platform for Full Body Interactive Sonification Exergames

    This paper addresses the use of a remote interactive platform to support home-based rehabilitation for children with motor and cognitive impairment. The interaction between user and platform is achieved through customizable full-body interactive serious games (exergames). These exergames perform real-time analysis of multimodal signals to quantify movement qualities and postural attitudes. Interactive sonification of movement is then applied to provide real-time feedback based on "aesthetic resonance" and the engagement of the children. The games also provide log-file recordings that therapists can use to assess the performance of the children and the effectiveness of the games. The platform allows customization of the games to address each child's needs. The platform is based on the EyesWeb XMI software, and the games are designed for home usage, based on Kinect for Xbox One and simple sensors such as the 3-axis accelerometers available in low-cost Android smartphones.
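    As a hedged illustration of "interactive sonification of movement" from a 3-axis accelerometer, the sketch below maps smoothed movement energy to pitch and loudness. The smoothing factor and the mapping are invented for the example and are not taken from the platform.

        # Hypothetical movement-to-sound mapping for an exergame frame loop.
        import math
        import random

        def movement_energy(acc):  # acc: (x, y, z) acceleration sample
            return math.sqrt(sum(a * a for a in acc))

        ALPHA = 0.2    # exponential smoothing factor (assumed)
        smoothed = 0.0
        for _ in range(10):  # one iteration per sensor frame
            acc = tuple(random.gauss(0, 0.5) for _ in range(3))  # simulated sensor
            smoothed = ALPHA * movement_energy(acc) + (1 - ALPHA) * smoothed
            pitch_hz = 220 + 440 * min(smoothed, 1.0)  # more movement -> higher pitch
            gain = min(smoothed, 1.0)                  # and louder feedback
            print(f"pitch={pitch_hz:6.1f} Hz  gain={gain:4.2f}")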

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, starting from recent findings in neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories and research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on two EU ICT-H2020 projects, "weDRAW" and "TELMI", on which I worked during my PhD.

    Effects of Computerized Emotional Training on Children with High Functioning Autism

    An evaluation study is presented of a serious game and a system for automatic emotion recognition, designed to help autistic children learn to recognize and express emotions by means of their full-body movement. Three-dimensional motion data of full-body movements are obtained from RGB-D sensors and used to recognize emotions by means of linear SVMs. Ten children diagnosed with High Functioning Autism or Asperger Syndrome were involved in the evaluation phase, consisting of repeated sessions of a specifically designed serious game. Results from the evaluation study show an increase in task accuracy from the beginning to the end of the training sessions in the trained group. In particular, while the increase in recognition accuracy was concentrated in the first sessions of the game, the increase in expression accuracy was more gradual throughout all sessions. Moreover, the training seems to produce a transfer effect on facial expression recognition.
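    The recognition pipeline (skeleton-derived features classified with linear SVMs) can be sketched on synthetic data as below; the feature set, label set, and data are placeholders, not the study's.

        # Illustrative linear-SVM emotion classifier on synthetic skeleton features.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        EMOTIONS = ["happiness", "sadness", "anger", "fear"]  # assumed label set

        # Pretend each row is a feature vector extracted from RGB-D joint data
        # (e.g., joint speeds, accelerations, posture descriptors).
        X = rng.standard_normal((200, 16))
        y = rng.integers(0, len(EMOTIONS), 200)
        X += y[:, None] * 0.8  # make the synthetic classes separable

        clf = LinearSVC(C=1.0)
        print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy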

    Adaptive Body Gesture Representation for Automatic Emotion Recognition

    We present a computational model and a system for the automated recognition of emotions starting from full-body movement. Three-dimensional motion data of full-body movements are obtained either from professional optical motion-capture systems (Qualisys) or from low-cost RGB-D sensors (Kinect and Kinect2). A number of features are then automatically extracted at different levels, from the kinematics of a single joint to more global expressive features inspired by psychology and humanistic theories (e.g., contraction index, fluidity, and impulsiveness). An abstraction layer based on dictionary learning further processes these movement features to increase the model's generality and to deal with the intraclass variability, noise, and incomplete information characterizing emotion expression in human movement. The resulting feature vector is the input to a classifier performing real-time automatic emotion recognition based on linear support vector machines. The recognition performance of the proposed model is presented and discussed, including the trade-off between the precision of the tracking measures (we compare the Kinect RGB-D sensor and the Qualisys motion-capture system) and the size of the training dataset. The resulting model and system have been successfully applied in the development of serious games for helping autistic children learn to recognize and express emotions by means of their full-body movement.
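    The abstraction layer can be sketched as sparse coding over a learned dictionary, with the sparse codes fed to the linear SVM; the dictionary size, sparsity, and data below are assumptions, not the paper's values.

        # Sketch: dictionary-learning abstraction layer before SVM classification.
        import numpy as np
        from sklearn.decomposition import DictionaryLearning
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)
        X = rng.standard_normal((150, 20))  # synthetic movement feature vectors
        y = rng.integers(0, 4, 150)         # four emotion classes (assumed)
        X += y[:, None] * 0.7

        # Re-express features as sparse codes over a learned dictionary, which is
        # intended to absorb noise and intraclass variability.
        dl = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                                transform_alpha=0.1, max_iter=20, random_state=0)
        codes = dl.fit_transform(X)

        clf = LinearSVC(C=1.0).fit(codes, y)
        print(clf.score(codes, y))  # training accuracy of the sketch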

    MIROR - Musical Interaction Relying On Reflexion. Project Final Report

    The project was evaluated by the European Commission with the maximum score (15/15) and the following summary judgement: "Excellent. The proposal successfully addresses all relevant aspects of the criterion in question. Any shortcomings are minor." It passed the three annual reviews with a very positive rating (good) and an excellent final review. The project was co-funded by the European Community, 7th Framework Programme, ICT-Challenge 4.2, Technology-enhanced learning, Cooperation Programme, no. 258338. The "COOPERATION" Programme is identified by the Ministerial Decree of 1/7/2011 (identification of the highly qualified research programmes funded by the European Union or by the Ministry of Education, Universities and Research, referred to in Article 29, paragraph 7, of Law no. 240/2010) as one of the two highly qualified research programmes funded by the EU. In detail, the Coordinator held the following roles:
    • Preparation of the proposal and scientific coordination of the project
    • Responsible for contacts with the European Commission
    • Supervision and monitoring of the work carried out by the Consortium, through workshops, technical and scientific reports, and coordination of the deliverables
    • Leader of the following Work Packages: WP1 (Project Management), WP5 (Psychological Experiments), WP8 (Dissemination and Exploitation)
    • Coordinator of the ALB
    • Scientific and organizational coordination of the University of Bologna research group, composed of: 2 post-doctoral research fellowships, 1 research fellowship, 2 research assistance contracts, 1 contract for the website, 15 student collaborators, and 3 teacher collaborators
    • Responsible for the collaborations and agreements with the Istituto Comprensivo di Casalecchio di Reno, the Nuova Scuola di Musica Baroncini di Imola, and the Centro Danza Musikè-Bologna
    • Supervision of the Project Management (European Research Department of the University of Bologna - ARIC)
    The MIROR (Musical Interaction Relying On Reflexion) project is co-funded by the European Commission under the 7th Framework Programme, Theme ICT-2009.4.2, Technology-enhanced learning. MIROR is a three-year project and started on September 1st, 2010. All information regarding MIROR is available through the MIROR Portal at http://www.mirorproject.eu. The MIROR Project Final Report describes the development of an adaptive system for music learning and teaching based on the "reflexive interaction" paradigm. The system is developed in the context of early childhood music education. It acts as an advanced cognitive tutor, designed to promote specific cognitive abilities in the field of music improvisation, both in formal learning contexts (kindergartens, primary schools, music schools) and informal ones (at home, children's centres, etc.). The reflexive interaction paradigm is based on the idea of letting users manipulate virtual copies of themselves, through specifically designed machine-learning software referred to as "Interactive Reflexive Musical Systems" (IRMS). By definition, IRMS are able to learn and configure themselves according to their understanding of the learner's behaviour. In MIROR the IRMS paradigm is extended with the analysis and synthesis of multisensory expressive gesture to increase its impact on the musical pedagogy of young children, by developing new multimodal interfaces. The project is based on a spiral design approach involving coupled interactions between technical and psycho-pedagogical partners. MIROR integrates both psychological case-study experiments, aiming to investigate cognitive hypotheses concerning the mirroring behaviour and the learning efficacy of the platform, and validation studies aiming at developing the software in concrete educational settings. The project contributes to promoting the reflexive interaction paradigm not only in the field of music learning, but more generally as a new paradigm for establishing a synergy between learning and cognition in the context of child/machine interaction.
    A. R. Addessi; C. Anagnostopoulou; S. Newman; B. Olsson; F. Pachet; G. Volpe; S. Young
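    The report does not detail how an IRMS generates its "mirror" responses; purely as an illustration of the reflexive idea (not the MIROR implementation), the sketch below learns first-order note transitions from a child's phrase and answers with a variation in the same style. The note encoding and all names are invented.

        # Minimal "reflexive" responder: learn note-to-note transitions from the
        # child's phrase, then answer with a mirrored variation in the same style.
        import random
        from collections import defaultdict

        def learn_transitions(phrase):
            table = defaultdict(list)
            for a, b in zip(phrase, phrase[1:]):
                table[a].append(b)
            return table

        def reply(phrase, length=8, seed=0):
            random.seed(seed)
            table = learn_transitions(phrase)
            note = phrase[-1]  # start where the child stopped
            out = []
            for _ in range(length):
                note = random.choice(table.get(note, phrase))
                out.append(note)
            return out

        child_phrase = ["C4", "E4", "G4", "E4", "C4", "D4", "E4"]
        print(reply(child_phrase))  # the machine's "mirror" answer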