756 research outputs found

    A Database of Full Body Virtual Interactions Annotated with Expressivity Scores

    Recent technologies enable the exploitation of full-body expressions in applications such as interactive arts, but they remain limited in terms of subtle dyadic interaction patterns. Our project aims at enabling full-body expressive interactions between a user and an autonomous virtual agent. Currently available databases do not capture full-body expressivity or interaction patterns mediated by avatars. In this paper, we describe a protocol defined to collect a database for studying expressive full-body dyadic interactions. We detail the coding scheme for manually annotating the collected videos, and provide reliability measures for global annotations of expressivity and interaction.

    Grappling with movement models: performing arts and slippery contexts

    The ways we leave, recognise, and interpret marks of human movement are deeply entwined with layerings of collective memory. Although we retroactively order chronological sediments to map shareable stories, our remediations often emerge unpredictably from a multidimensional mnemonic fabric: contemporary ideas can resonate with ancient aspirations and initiatives, and foreign fields of investigation can inform ostensibly unrelated endeavours. Such links reinforce the debunking of grand narratives, and resonate with quests for the new kinds of thinking needed to address the mix of living, technological, and semiotic systems that makes up our wider ecology. As a rapidly evolving field, movement-and-computing is exceptionally open to, and in need of, this diversity. This paper argues for awareness of the analytical apparatus we sometimes too unwittingly bring to bear on our research objects, and for the value of transdisciplinary and tangential thinking to diversify our research questions. With a view to seeking ways to articulate new, shareable questions rather than propose answers, it looks at wider questions of problem-framing. It emphasises the importance of - quite literally - grounding movement, of recognising its environmental implications and qualities. Informed by work on expressive gesture and creative use of instruments in domains including puppetry and music, this paper also insists on the complexity and heterogeneity of the research strands that are indissociably bound up in our corporeal-technological movement practices.

    Personalized face and gesture analysis using hierarchical neural networks

    The video-based computational analyses of human face and gesture signals encompass a myriad of challenging research problems involving computer vision, machine learning, and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream, b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations, c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals, and d) the personalization of machine learning models, which can adapt to subject- and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to the problem of segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from their facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group- or subject-specific variations in face and gesture signals. We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition, and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
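    The personalization idea this abstract describes, a shared population model combined with subject-specific deviations that are regularized toward it, can be sketched in a few lines. The code below is an illustrative toy (all names and the logistic-regression setting are assumptions, not the thesis's actual model): each subject's weights are the shared weights plus a per-subject offset, and a Gaussian prior on the offset becomes an L2 penalty at MAP training time, i.e. partial pooling across subjects.

    ```python
    import numpy as np

    # Hypothetical sketch of hierarchical personalization (illustrative only;
    # this is not the thesis's actual code). Subject s uses weights
    # w_s = w_shared + delta_s, with a Gaussian prior pulling delta_s to zero.

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_hierarchical(X_by_subj, y_by_subj, lam=1.0, lr=0.1, epochs=300):
        """MAP-style training of shared weights plus per-subject offsets."""
        d = X_by_subj[0].shape[1]
        w = np.zeros(d)                              # shared population weights
        deltas = [np.zeros(d) for _ in X_by_subj]    # per-subject offsets
        for _ in range(epochs):
            grad_w = np.zeros(d)
            for s, (X, y) in enumerate(zip(X_by_subj, y_by_subj)):
                p = sigmoid(X @ (w + deltas[s]))
                g = X.T @ (p - y) / len(y)           # logistic-loss gradient
                grad_w += g
                # prior N(0, 1/lam) on delta_s appears as the lam * delta term
                deltas[s] -= lr * (g + lam * deltas[s])
            w -= lr * grad_w / len(X_by_subj)
        return w, deltas

    def predict(w, delta, X):
        """Personalized prediction: shared weights plus one subject's offset."""
        return (sigmoid(X @ (w + delta)) > 0.5).astype(int)
    ```

    The penalty on the offsets is what makes this "hierarchical" rather than training one independent model per subject: subjects with little data fall back toward the shared model, while data-rich subjects can deviate from it.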

    Laughter and smiling facial expression modelling for the generation of virtual affective behavior

    Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for the generation of facial expressions associated with laughter and smiling in order to facilitate the synthesis of such facial expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is also presented. This database lists the different types of classified and generated laughs presented in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the type of virtual character's appearance. © 2021 Mascaró et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

    The eyes have it


    Large-Scale Pattern-Based Information Extraction from the World Wide Web

    Extracting information from text is the task of obtaining structured, machine-processable facts from information that is mentioned in an unstructured manner. It thus allows systems to automatically aggregate information for further analysis, efficient retrieval, automatic validation, or appropriate visualization. This work explores the potential of using textual patterns for Information Extraction from the World Wide Web.
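    A common instance of the textual-pattern approach this abstract describes is Hearst-style lexical patterns, where a surface phrase such as "X such as Y1, Y2 and Y3" is matched against raw text to yield structured is-a facts. The sketch below is illustrative only (the pattern set and function names are assumptions, not the thesis's actual system):

    ```python
    import re

    # Minimal sketch of pattern-based information extraction (illustrative;
    # not the thesis's actual patterns). Each entry pairs a compiled lexical
    # pattern with the relation it expresses.
    PATTERNS = [
        # naive: the hypernym is taken to be the single word before "such as"
        (re.compile(r"(\w+) such as ((?:\w+(?:, )?)+(?: and \w+)?)"), "is-a"),
    ]

    def extract(text):
        """Return (hyponym, relation, hypernym) triples found in the text."""
        facts = []
        for pattern, relation in PATTERNS:
            for m in pattern.finditer(text):
                hypernym = m.group(1)
                # the matched list "Y1, Y2 and Y3" is split into individual items
                for hyponym in re.split(r", | and ", m.group(2)):
                    facts.append((hyponym, relation, hypernym))
        return facts
    ```

    For example, `extract("We studied languages such as Python, Java and Haskell.")` yields three `("…", "is-a", "languages")` triples. At web scale, the interesting problems are exactly the ones such a toy glosses over: learning patterns automatically, handling multi-word terms, and validating noisy extractions.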