326 research outputs found

    Switching Partners: Dancing with the Ontological Engineers

    Ontologies are today being applied in almost every field to support the alignment and retrieval of data of distributed provenance. Here we focus on new ontological work on dance and on related cultural phenomena belonging to what UNESCO calls the “intangible heritage.” Currently, data and information about dance, including video data, are stored in an uncontrolled variety of ad hoc ways. This not only prevents retrieval, comparison, and analysis of the data, but may also impinge on our ability to preserve the data that already exist. Here we explore recent technological developments that are designed to counteract such problems by allowing information to be retrieved across disciplinary, cultural, linguistic and technological boundaries. Software applications such as the ones envisaged here will enable speedier recovery of data and facilitate its analysis in ways that will assist both the archiving of and research on dance.

    A study of human performance in recognizing expressive hand movements


    How Shall I Count the Ways? A Method for Quantifying the Qualitative Aspects of Unscripted Movement With Laban Movement Analysis

    There is significant clinical evidence showing that the creative and expressive movement processes involved in dance/movement therapy (DMT) enhance psycho-social well-being. Yet movement is a complex phenomenon, so statistically validating which aspects of movement change during interventions, or lead to significant positive therapeutic outcomes, is challenging: movement has multiple, overlapping variables that appear in unique patterns in different individuals and situations. One factor contributing to the therapeutic effects of DMT is movement’s effect on clients’ emotional states. Our previous study identified sets of movement variables which, when executed, enhanced specific emotions. In this paper, we describe how we selected movement variables for statistical analysis in that study, using a multi-stage methodology to identify, reduce, code, and quantify the multitude of variables present in unscripted movement. We suggest a set of procedures for using movement variables described with Laban Movement Analysis (LMA) as research data. Our study used LMA, an internationally accepted, comprehensive system for movement analysis and a primary clinical assessment tool in DMT, to describe movement. We began with Davis’s (1970) three-step protocol for analyzing movement patterns and identifying the most important variables: (1) We repeatedly observed video samples of validated (Atkinson et al., 2004) emotional expressions to identify prevalent movement variables, eliminating variables that appeared minimally or not at all. (2) We used the criteria of repetition, frequency, duration, and emphasis to eliminate additional variables. (3) For each emotion, we analyzed variations in motor expression to discover how variables cluster: first, by observing ten movement samples of each emotion to identify variables common to all samples; second, by qualitative analysis of the two best-recognized samples to determine whether phrasing, duration, or the relationship among variables was significant. We added three new steps to this protocol: (4) we created Motifs (LMA symbols) combining the movement variables extracted in steps 1–3; (5) we asked participants in a pilot study to move these combinations and to quantify their emotional experience, and based on the pilot results we eliminated further variables; (6) we quantified the remaining variables’ prevalence in each Motif for a statistical analysis that examined which variables enhanced each emotion. We posit that our method successfully quantified unscripted movement data for statistical analysis.
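    The final quantification step (6) lends itself to a tabular encoding. Below is a minimal, hypothetical sketch of how the prevalence of LMA-described variables in each Motif might be laid out for statistical screening; all variable names, Motif names, and threshold values are invented for illustration and are not taken from the study.

```python
# Hypothetical encoding of step 6: prevalence of LMA variables per Motif.
# Variable and Motif names are illustrative only, not taken from the study.
import pandas as pd

# Rows: Motifs (one per target emotion); columns: LMA-described variables.
# Values: prevalence of each variable in the Motif, scaled 0 (absent) to 1 (dominant).
prevalence = pd.DataFrame(
    {
        "advancing":     [0.8, 0.0, 0.2, 0.6],
        "retreating":    [0.0, 0.9, 0.1, 0.0],
        "light_weight":  [0.7, 0.1, 0.9, 0.3],
        "strong_weight": [0.1, 0.8, 0.0, 0.7],
    },
    index=["motif_happiness", "motif_fear", "motif_sadness", "motif_anger"],
)

# A simple screening step: keep variables that are clearly present (> 0.5)
# in at least one Motif, mirroring the idea of eliminating weak variables.
retained = prevalence.columns[(prevalence > 0.5).any()]
print(prevalence[retained])
```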

    Discovery and recognition of motion primitives in human activities

    We present a novel framework for the automatic discovery and recognition of motion primitives in videos of human activities. Given the 3D pose of a human in a video, human motion primitives are discovered by optimizing the 'motion flux', a quantity which captures the motion variation of a group of skeletal joints. A normalization of the primitives is proposed in order to make them invariant with respect to a subject's anatomical variations and the data sampling rate. The discovered primitives are unknown and unlabeled, and are collected into classes without supervision via a hierarchical non-parametric Bayes mixture model. Once classes are determined and labeled, they are further analyzed to establish models for recognizing the discovered primitives. Each primitive model is defined by a set of learned parameters. Given new video data and the estimated pose of the subject appearing in the video, the motion is segmented into primitives, which are recognized with a probability given by the parameters of the learned models. Using our framework we build a publicly available dataset of human motion primitives, using sequences taken from well-known motion capture datasets. We expect that our framework, by providing an objective way of discovering and categorizing human motion, will be a useful tool in numerous research fields including video analysis, human-inspired motion generation, learning by demonstration, intuitive human-robot interaction, and human behavior analysis.
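    As a rough illustration only: the abstract does not define the 'motion flux' precisely, so the sketch below assumes it behaves like an aggregate measure of joint motion over time and uses a simple velocity-magnitude proxy. The function names, the proxy, and the thresholding are assumptions for demonstration, not the authors' formulation.

```python
# Illustrative sketch only: a velocity-magnitude proxy for a "motion flux"-like
# quantity over a group of skeletal joints, followed by a naive segmentation
# into candidate primitives. Names and thresholds are assumptions.
import numpy as np

def motion_flux_proxy(poses: np.ndarray, fps: float) -> np.ndarray:
    """poses: (T, J, 3) array of 3D joint positions; returns a per-frame score."""
    velocities = np.diff(poses, axis=0) * fps       # (T-1, J, 3) joint velocities
    speed = np.linalg.norm(velocities, axis=-1)     # (T-1, J) per-joint speeds
    return speed.sum(axis=-1)                       # aggregate over the joint group

def segment_primitives(flux: np.ndarray, threshold: float) -> list[tuple[int, int]]:
    """Cut the sequence into candidate primitives where the flux proxy stays above threshold."""
    active = flux > threshold
    segments, start = [], None
    for t, on in enumerate(active):
        if on and start is None:
            start = t
        elif not on and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Example on synthetic data: 120 frames, 15 joints, sampled at 30 fps.
rng = np.random.default_rng(0)
poses = np.cumsum(rng.normal(scale=0.01, size=(120, 15, 3)), axis=0)
flux = motion_flux_proxy(poses, fps=30.0)
print(segment_primitives(flux, threshold=np.median(flux)))
```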

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements, it is possible to recognize gestures, which people often use to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, meanwhile, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
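    To make the idea behind the second module concrete, here is a minimal, hypothetical PyTorch sketch of a two-branch stacked LSTM classifier over 2D skeleton sequences. The choice of branch inputs (raw joint coordinates versus frame-to-frame displacements), the layer sizes, and the fusion strategy are assumptions for illustration, not the thesis architecture.

```python
# Hypothetical sketch of a two-branch stacked LSTM over 2D skeleton sequences.
# Branch inputs, layer sizes, and fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchSkeletonLSTM(nn.Module):
    def __init__(self, num_joints: int = 18, hidden: int = 128, num_classes: int = 10):
        super().__init__()
        feat = num_joints * 2                      # (x, y) per joint, flattened per frame
        # Branch 1: raw joint coordinates; Branch 2: frame-to-frame displacements.
        self.pose_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
        self.motion_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, skeletons: torch.Tensor) -> torch.Tensor:
        # skeletons: (batch, time, num_joints * 2)
        motion = skeletons[:, 1:] - skeletons[:, :-1]   # simple temporal differences
        _, (h_pose, _) = self.pose_branch(skeletons)
        _, (h_motion, _) = self.motion_branch(motion)
        fused = torch.cat([h_pose[-1], h_motion[-1]], dim=-1)
        return self.classifier(fused)

# Example: a batch of 4 clips, 32 frames each, 18 joints with (x, y) coordinates.
model = TwoBranchSkeletonLSTM()
logits = model(torch.randn(4, 32, 18 * 2))
print(logits.shape)  # torch.Size([4, 10])
```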

    Inferring Causal Factors of Core Affect Dynamics on Social Participation through the Lens of the Observer

    A core endeavour in current affective computing and social signal processing research is the construction of datasets embedding suitable ground truths to foster machine learning methods. This practice brings up hitherto overlooked intricacies. In this paper, we consider causal factors potentially arising when human raters evaluate the affect fluctuations of subjects involved in dyadic interactions and subsequently categorise them in terms of social participation traits. To gauge such factors, we propose an emulator as a statistical approximation of the human rater, and we first discuss the motivations and the rationale behind the approach. The emulator is laid down in the next section as a phenomenological model in which the core affect stochastic dynamics, as perceived by the rater, are captured through an Ornstein-Uhlenbeck process; its parameters are then exploited to infer potential causal effects in the attribution of social traits. Following that, by resorting to a publicly available dataset, the adequacy of the model is evaluated in terms of both the emulation of human raters and machine learning predictive capabilities. We then present the results, followed by a general discussion of the findings and their implications, together with the advantages and potential applications of the approach.
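    Since the emulator models perceived core affect as an Ornstein-Uhlenbeck process, a minimal simulation sketch may help fix ideas. The Euler-Maruyama discretisation below and the parameter values are generic illustrations, not those estimated in the paper.

```python
# Minimal sketch: simulating a 1-D Ornstein-Uhlenbeck process with Euler-Maruyama.
#   dX_t = theta * (mu - X_t) dt + sigma dW_t
# Parameter values are generic illustrations, not the ones estimated in the paper.
import numpy as np

def simulate_ou(theta: float, mu: float, sigma: float,
                x0: float, dt: float, n_steps: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        drift = theta * (mu - x[t]) * dt                      # mean-reverting pull toward mu
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
        x[t + 1] = x[t] + drift + diffusion
    return x

# One simulated valence-like trajectory: reverts toward mu with noise-driven fluctuations.
trajectory = simulate_ou(theta=1.5, mu=0.0, sigma=0.4, x0=1.0, dt=0.01, n_steps=1000)
print(trajectory[:5])
```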

    The Machine as Art/ The Machine as Artist

    The articles collected in this volume from the two companion Arts Special Issues, “The Machine as Art (in the 20th Century)” and “The Machine as Artist (in the 21st Century)”, represent a unique scholarly resource: analyses by artists, scientists, and engineers, as well as art historians, covering not only the current (and astounding) rapprochement between art and technology but also the vital post-World War II period that led up to it. The collection is also distinguished by the fact that several of the contributors are prominent figures within their own fields, or artists who have actually participated in the still-unfolding events with which it is concerned.
