35 research outputs found

    Responding to human full-body gestures embedded in motion data streams.

    This research created a neural-network-enabled, artificially intelligent performing agent that learned to dance and to recognise movement through a rehearsal and performance process with a human dancer. The agent exhibited emergent dance behaviour and successfully engaged in a live, semi-improvised dance performance with the human dancer.
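
    The abstract gives no implementation details, so the following is only a minimal sketch of the general idea of recognising gestures in a motion stream: sliding windows over skeleton frames fed to a small neural classifier. The skeleton size, window length, and model are assumptions, not the paper's agent.

```python
# Hypothetical sketch: recognising full-body gestures in a motion stream.
# Assumes each frame is a flat vector of 3D joint positions (e.g. 25 joints
# from a mocap skeleton); none of this reflects the paper's actual model.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_JOINTS = 25            # assumed skeleton size
WINDOW = 30              # frames per gesture window (~1 s at 30 fps)

def windows(frames: np.ndarray, step: int = 10) -> np.ndarray:
    """Slice a (T, N_JOINTS * 3) stream into overlapping flattened windows."""
    return np.stack([frames[i:i + WINDOW].ravel()
                     for i in range(0, len(frames) - WINDOW, step)])

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)

def train(rehearsal_frames: np.ndarray, window_labels: np.ndarray) -> None:
    """Fit on labelled rehearsal data; one gesture label per window."""
    clf.fit(windows(rehearsal_frames), window_labels)

def respond(live_frames: np.ndarray) -> np.ndarray:
    """Classify gestures in a live stream so an agent could respond to them."""
    return clf.predict(windows(live_frames))
```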

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. Different types of robots are being developed and their roles within society are gradually expanding. Robots endowed with social skills are intended for a range of applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented here aims to add a new layer to those previous developments: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), exhibiting distinctive personality and character, and learning social competencies.

    In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act towards us? Along those lines, the work was developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots in a natural manner; in the second, we investigated how social robots should act towards the user.

    Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to equip our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding.

    In the second phase, we focused on identifying and developing the basic behavioural modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions.

    The different development stages of our social robots were validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. Just as robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed in order to improve their sociability.
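
    As a rough illustration of the kind of behaviour layer the thesis describes (emotion and body language selected from task and environmental conditions), here is a toy sketch. All names and rules are illustrative assumptions, not the RSAIT framework itself.

```python
# Hypothetical behaviour-selection layer: pick an emotional expression and
# body-language cues from task and environment state. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Context:
    task: str           # e.g. "guide", "entertain" (assumed task names)
    noise_level: float  # ambient noise, normalised to 0..1
    user_engaged: bool

def select_expression(ctx: Context) -> dict:
    """Map interaction context to an emotion label plus body-language cues."""
    if not ctx.user_engaged:
        return {"emotion": "curious", "gaze": "scan_room", "gesture": "beckon"}
    if ctx.task == "guide":
        # In noisy environments, fall back on larger, clearer gestures.
        gesture = "point_ahead" if ctx.noise_level < 0.5 else "exaggerated_point"
        return {"emotion": "friendly", "gaze": "user", "gesture": gesture}
    return {"emotion": "joyful", "gaze": "user", "gesture": "wave"}
```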

    A framework for correcting human motion alignment for traditional dance training using augmented reality

    This paper presents a framework for motion-capture analysis in dance-learning technology using Microsoft Kinect V2. The proposed technology applies motion detection, emotion analysis, coordination analysis, and interactive feedback techniques to a dance style selected by the trainee. The motion-capture system addresses the heterogeneity of existing dance-learning systems and thereby provides robustness. The proposed work is evaluated using query techniques and heuristic evaluation, and the Microsoft Kinect V2 combined with Augmented Reality (AR) technology is explored to demonstrate the recognition accuracy of the proposed framework.
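
    The paper's correction method is not detailed in the abstract; a minimal sketch of one plausible building block is shown below: comparing a trainee's joint angles against a reference pose and flagging deviations for AR feedback. Kinect V2 provides 25 joints; the angle triples and tolerance are assumptions.

```python
# Illustrative sketch (not the paper's method): score how well a trainee's
# pose aligns with a reference pose using per-joint angular error.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b formed by segments b->a and b->c, in degrees."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def alignment_feedback(trainee, reference, triples, tol_deg=15.0):
    """Return joints whose angle deviates more than tol_deg from the reference.

    trainee/reference: dicts of joint name -> 3D position (from Kinect).
    triples: list of (parent, joint, child) names defining each angle.
    """
    errors = {}
    for pa, jo, ch in triples:
        t = joint_angle(trainee[pa], trainee[jo], trainee[ch])
        r = joint_angle(reference[pa], reference[jo], reference[ch])
        if abs(t - r) > tol_deg:
            errors[jo] = t - r   # signed error, e.g. for an AR overlay
    return errors
```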

    Semi-automation of gesture annotation by machine learning and human collaboration

    Gesture and multimodal communication researchers typically annotate video data manually, even though this can be a very time-consuming task. In the present work, a method to detect gestures is proposed as a fundamental step towards a semi-automatic gesture annotation tool. The proposed method can be applied to RGB videos and requires annotations of part of a video as input. The technique deploys a pose estimation method and active learning. In the experiment, it is shown that if about 27% of the video is annotated, the remaining parts of the video can be annotated automatically with an F-score of at least 0.85. Users can run this tool with a small number of annotations first; if the predicted annotations for the remainder of the video are not satisfactory, users can add further annotations and run the tool again. The code has been released so that other researchers and practitioners can use the results of this research. This tool has been confirmed to work in conjunction with ELAN.
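
    A conceptual sketch of the annotate-train-predict loop the abstract describes (pose features plus active learning) follows. The classifier, uncertainty-sampling strategy, and the ask_user_to_label call are assumptions for illustration, not the released tool's implementation.

```python
# Sketch of semi-automatic annotation via active learning: train on the
# labelled frames, then repeatedly query the user about the frames the
# model is least sure of, until an annotation budget is spent.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def semi_automatic_annotation(features, labels, annotated_idx, budget=100):
    """features: (n_frames, d) pose features; labels: partial gesture labels
    (only positions in annotated_idx are trusted)."""
    clf = RandomForestClassifier(n_estimators=200)
    while budget > 0:
        clf.fit(features[annotated_idx], labels[annotated_idx])
        proba = clf.predict_proba(features)
        uncertainty = 1.0 - proba.max(axis=1)
        uncertainty[annotated_idx] = -1.0          # skip what we already have
        query = int(np.argmax(uncertainty))        # most uncertain frame
        labels[query] = ask_user_to_label(query)   # hypothetical UI callback
        annotated_idx = np.append(annotated_idx, query)
        budget -= 1
    return clf.predict(features)                   # automatic annotations
```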

    A digital game in immersive environments to support stroke victims

    Modern society is witnessing an increase in average life expectancy and population ageing, thanks to better health services and medication. However, ageing creates quality-of-life problems, such as disabilities, diseases, and mental illnesses with high incidence rates. Stroke mainly affects the older population, and the rehabilitation process is painful, showing very small improvements over time unless treatment starts quickly and effectively. The consumption of videogames by the senior population is increasing, making it ever more feasible to introduce new digital artefacts into the process of recovering brain and motor function after a stroke. Typical rehabilitation programmes for stroke patients consist of long and monotonous physiotherapy treatment, possibly involving domestic tasks, which can increase the risk of treatment withdrawal through low motivation. However, there are technological solutions that can help supervise those repetitive movement tasks. A monitoring device connected to a digital game can stimulate the patient's cognitive and motor improvement as an alternative to traditional physiotherapy treatment. The solutions developed so far are scarce, so there is considerable room to change this reality.

    The main objective of this research is to explore characteristics related to display, gesture interface device, narrative, genre, game art style, difficulty, and language that a digital game can have to complement physiotherapy sessions for stroke rehabilitation by the senior population, through the creation of an experimental prototype. This empirical research is exploratory in character and is based on the Development Research methodology (Van den Akker, Branch, Gustafson, Nieveen, & Plomp, 1999). The results indicate that the motion controller (Leap Motion) is a device that can be adapted to stroke-oriented physiotherapy through specific movements contextualised in the game environment. Additionally, a high rejection of Head Mounted Displays was observed, due to eye strain and loss of orientation.
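
    To make the controller-to-game mapping concrete, here is a toy sketch of turning a prescribed reach movement, tracked by a motion controller such as the Leap Motion, into a game input. The device API is deliberately abstracted away; that positions arrive as 3D points is an assumption.

```python
# Hypothetical sketch: progress of a reach exercise as a 0..1 game input.
import numpy as np

def reach_progress(hand_pos: np.ndarray, start: np.ndarray,
                   target: np.ndarray) -> float:
    """Fraction of a start->target reach completed, clamped to [0, 1].

    Projects the hand's displacement onto the prescribed movement direction,
    so off-axis motion does not count towards the repetition.
    """
    total = np.linalg.norm(target - start)
    done = np.dot(hand_pos - start, target - start) / total
    return float(np.clip(done / total, 0.0, 1.0))

# A game loop could map reach_progress(...) to, e.g., how far a character
# moves, rewarding each completed repetition of the physiotherapy movement.
```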

    Interactive Tango Milonga: An Interactive Dance System for Argentine Tango Social Dance

    When dancers are granted agency over music, as in interactive dance systems, the actors are most often concerned with the problem of creating a staged performance for an audience. The practice of Argentine tango social dance, by contrast, is most concerned with participants' internal experience and their relationship to the broader tango community. In this dissertation I explore creative approaches to enrich the sense of connection, that is, the experience of oneness with a partner and complete immersion in music and dance, for Argentine tango dancers by providing agency over musical activities through the use of interactive technology. Specifically, I create an interactive dance system that allows tango dancers to affect and create music via their movements in the context of social dance. The motivations for this work are threefold: 1) to intensify the embodied experience of the interplay between dance and music, individual and partner, couple and community; 2) to create a shared experience of the conventions of tango dance; and 3) to innovate Argentine tango social dance practice for the purposes of education and increasing musicality in dancers.
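
    The dissertation's mapping from movement to music is not specified in the abstract; the sketch below shows one generic possibility: deriving a scalar "movement energy" from a dancer's accelerometer stream and scaling it into a MIDI-style control range. The feature and the value range are assumptions.

```python
# Illustrative movement-to-music mapping, not the dissertation's system.
import numpy as np

def movement_energy(accel: np.ndarray) -> float:
    """RMS of acceleration magnitudes over a short window; accel is (n, 3)."""
    mags = np.linalg.norm(accel, axis=1)
    return float(np.sqrt(np.mean(mags ** 2)))

def to_control_value(energy: float, lo: float = 0.2, hi: float = 3.0) -> int:
    """Scale energy into a 0..127 controller range (MIDI-style), so a
    synthesiser or sequencer could consume it as, e.g., filter cutoff."""
    norm = (np.clip(energy, lo, hi) - lo) / (hi - lo)
    return int(round(norm * 127))
```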

    Automated Analysis of Synchronization in Human Full-body Expressive Movement

    The research presented in this thesis is focused on the creation of computational models for the study of human full-body movement, in order to investigate human behavior and non-verbal communication. In particular, the research concerns the analysis of synchronization of expressive movements and gestures. Synchronization can be computed both on a single user (intra-personal), e.g., to measure the degree of coordination between the joints' velocities of a dancer, and on multiple users (inter-personal), e.g., to detect the level of coordination between multiple users in a group. Through a set of experiments and results, the thesis contributes to the investigation of both intra-personal and inter-personal synchronization applied to support the study of movement expressivity, and improves on the state of the art by presenting a new algorithm for the analysis of synchronization.
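
    As a minimal sketch of inter-personal synchronization analysis, assuming two time-aligned 1-D velocity series (e.g. one joint per dancer), one can compute windowed correlation; the thesis's own algorithm is more elaborate, and the window sizes here are assumptions.

```python
# Windowed Pearson correlation as a simple synchronisation measure.
import numpy as np

def windowed_sync(v1: np.ndarray, v2: np.ndarray,
                  win: int = 90, step: int = 30) -> np.ndarray:
    """Correlation of two velocity series over sliding windows
    (win=90 frames is ~3 s at 30 fps)."""
    scores = []
    for i in range(0, len(v1) - win, step):
        a, b = v1[i:i + win], v2[i:i + win]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)   # values near 1 indicate tight coordination
```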

    Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.

    Emotion is a central part of human interaction, one that has a huge influence on its overall tone and outcome. Today's human-centered interactive technology can greatly benefit from automatic emotion recognition, as the extracted affective information can be used to measure, transmit, and respond to user needs. However, developing such systems is challenging due to the complexity of emotional expressions and their dynamics, in terms of the inherent multimodality between audio and visual expressions as well as the mixed factors of modulation that arise when a person speaks. To overcome these challenges, this thesis presents data-driven approaches that can quantify the underlying dynamics in audio-visual affective behavior. The first set of studies lays the foundation and central motivation of this thesis. We discover that it is crucial to model complex non-linear interactions between audio and visual emotion expressions, and that dynamic emotion patterns can be used in emotion recognition. Next, the understanding of the complex characteristics of emotion from the first set of studies leads us to examine multiple sources of modulation in audio-visual affective behavior. Specifically, we focus on how speech modulates facial displays of emotion. We develop a framework that uses speech signals, which alter the temporal dynamics of individual facial regions, to temporally segment and classify facial displays of emotion. Finally, we present methods to discover regions of emotionally salient events in given audio-visual data. We demonstrate that different modalities, such as the upper face, lower face, and speech, express emotion with different timings and time scales, varying for each emotion type. We further extend this idea into another aspect of human behavior: human action events in videos. We show how transition patterns between events can be used for automatically segmenting and classifying action events. Our experimental results on audio-visual datasets show that the proposed systems not only improve performance, but also provide descriptions of how affective behaviors change over time. We conclude this dissertation with future directions for three main research topics: machine adaptation for personalized technology, human-human interaction assistant systems, and human-centered multimedia content analysis.

    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133459/1/yelinkim_1.pd
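
    One idea from the abstract, using transition patterns between events to segment and classify, can be illustrated with a hedged sketch: estimate a label-transition matrix from training sequences and use it to smooth noisy frame-level predictions. The greedy decoding below is an assumption, not the dissertation's method.

```python
# Transition-pattern smoothing of frame-level event predictions.
import numpy as np

def transition_matrix(label_seqs, n_states: int) -> np.ndarray:
    """Estimate P(next | current) from training label sequences,
    with add-one smoothing so unseen transitions keep nonzero mass."""
    counts = np.ones((n_states, n_states))
    for seq in label_seqs:
        for cur, nxt in zip(seq[:-1], seq[1:]):
            counts[cur, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def smooth(frame_probs: np.ndarray, trans: np.ndarray) -> np.ndarray:
    """Greedy decode: weigh each frame's class scores by how plausible the
    transition from the previous decision is. frame_probs is (T, n_states)."""
    out = [int(np.argmax(frame_probs[0]))]
    for probs in frame_probs[1:]:
        out.append(int(np.argmax(probs * trans[out[-1]])))
    return np.array(out)
```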

    Tailoring Interaction. Sensing Social Signals with Textiles.

    Nonverbal behaviour is an important part of conversation and can reveal much about the nature of an interaction. It includes phenomena ranging from large-scale posture shifts to small-scale nods. Capturing these often spontaneous phenomena requires unobtrusive sensing techniques that do not interfere with the interaction. We propose an underexploited sensing modality for sensing nonverbal behaviours: textiles. As a material in close contact with the body, they provide ubiquitous, large surfaces that make them a suitable soft interface. Although the literature on nonverbal communication focuses on upper-body movements such as gestures, observations of multi-party, seated conversations suggest that sitting postures and leg and foot movements are also systematically related to patterns of social interaction. This thesis addresses the following questions: Can the textiles surrounding us measure social engagement? Can they tell who is speaking, and who, if anyone, is listening? Furthermore, how should wearable textile sensing systems be designed, and what behavioural signals could textiles reveal? To address these questions, we have designed and manufactured bespoke chairs and trousers with integrated textile pressure sensors, which are introduced here. The designs are evaluated in three user studies that produce multi-modal datasets for the exploration of fine-grained interactional signals. Two approaches to using these bespoke textile sensors are explored. First, hand-crafted sensor patches in chair covers serve to distinguish speakers and listeners. Second, a pressure-sensitive matrix in custom-made smart trousers is developed to detect static sitting postures, dynamic bodily movement, as well as basic conversational states. Statistical analyses, machine learning approaches, and ethnographic methods show that by monitoring patterns of pressure change alone it is possible not only to classify postures with high accuracy, but also to identify a wide range of behaviours reliably in individuals and groups. These findings establish textiles as a novel, wearable sensing system for applications in the social sciences, and contribute towards a better understanding of nonverbal communication, especially the significance of posture shifts when seated. If chairs know who is speaking, if our trousers can capture our social engagement, what role can smart textiles have in the future of human interaction? How can we build new ways to map social ecologies and tailor interactions?
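
    A hedged sketch of the posture-classification step follows: pooled features from a textile pressure matrix fed to an off-the-shelf classifier. The matrix layout and feature choices are assumptions, not the thesis's pipeline.

```python
# Illustrative posture classification from a (rows, cols) pressure frame.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(pressure: np.ndarray) -> np.ndarray:
    """Summarise one pressure frame: total load, centre of pressure,
    and left/right plus front/back load balance."""
    total = pressure.sum() + 1e-9
    ys, xs = np.indices(pressure.shape)
    cop_y = (ys * pressure).sum() / total
    cop_x = (xs * pressure).sum() / total
    left = pressure[:, : pressure.shape[1] // 2].sum() / total
    front = pressure[: pressure.shape[0] // 2].sum() / total
    return np.array([total, cop_y, cop_x, left, front])

clf = RandomForestClassifier(n_estimators=100)
# Usage sketch, given labelled frames:
#   clf.fit(np.stack([features(f) for f in train_frames]), train_postures)
#   posture = clf.predict([features(live_frame)])
```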