19 research outputs found

    Segmenting Motion Capture Data Using a Qualitative Analysis

    Many interactive 3D games utilize motion capture for both character animation and user input. These applications require short, meaningful sequences of data. Manually producing these segments of motion capture data is a laborious, time-consuming process that is impractical for real-time applications. We present a method to automatically produce semantic segmentations of general motion capture data by examining the qualitative properties that are intrinsic to all motions, using Laban Movement Analysis (LMA). LMA provides a good compromise between high-level semantic features, which are difficult to extract for general motions, and low-level kinematic features, which often yield unsophisticated segmentations. Our method finds motion sequences which exhibit high output similarity from a collection of neural networks trained with temporal variance. We show that segmentations produced using LMA features are more similar to manual segmentations, both at the frame and the segment level, than those of several other automatic segmentation methods.
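    The paper's segmentation is driven by the output similarity of an ensemble of neural networks over LMA features; the fragment below is only a minimal sketch of the underlying boundary-detection idea, assuming per-frame qualitative feature vectors have already been computed (the array names and the cosine-similarity threshold are illustrative, not the authors' implementation).

```python
import numpy as np

def segment_boundaries(features, threshold=0.9):
    """Place a segment boundary wherever consecutive per-frame feature
    vectors (stand-ins for LMA-style qualitative descriptors) diverge,
    i.e. their cosine similarity drops below `threshold`."""
    a, b = features[:-1], features[1:]
    sim = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    return np.where(sim < threshold)[0] + 1  # index that starts a new segment

# Toy data: two "motions" with different qualitative profiles.
rng = np.random.default_rng(1)
motion = np.vstack([rng.normal([1, 0, 0], 0.05, (40, 3)),
                    rng.normal([0, 1, 0], 0.05, (40, 3))])
print(segment_boundaries(motion))  # expected: a boundary near frame 40
```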

    Prosody-Based Adaptive Metaphoric Head and Arm Gestures Synthesis in Human Robot Interaction

    In human-human interaction, the process of communication can be established through three modalities: verbal, non-verbal (i.e., gestures), and/or para-verbal (i.e., prosody). The linguistic literature shows that para-verbal and non-verbal cues are naturally aligned and synchronized; however, the mechanism of this synchronization is still largely unexplored. The difficulty in coordinating prosody with metaphoric head-arm gestures concerns the conveyed meaning, the way gestures are performed with respect to prosodic characteristics, their relative temporal arrangement, and their coordinated organization in the phrasal structure of the utterance. In this research, we focus on the mechanism of mapping between head-arm gestures and speech prosodic characteristics in order to generate robot behavior that adapts to the interacting human's emotional state. Prosody patterns and the motion curves of head-arm gestures are aligned separately with parallel Hidden Markov Models (HMM). The mapping between speech and head-arm gestures is based on Coupled Hidden Markov Models (CHMM), which can be seen as a multi-stream collection of HMMs characterizing the segmented prosody and head-arm gesture data. An emotional-state-based audio-video database has been created for the validation of this study. The obtained results show the effectiveness of the proposed methodology.
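    The coupled model itself is beyond a few lines, but the first step described above, aligning each stream with its own HMM, can be sketched with the hmmlearn library. The feature arrays below (prosody_feats, gesture_feats) are hypothetical stand-ins for the segmented prosody and head-arm motion-curve data; the CHMM coupling of the two hidden chains is deliberately not shown.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Hypothetical per-frame features: e.g. pitch/energy for prosody and
# joint-angle curves for head-arm gestures (same frame rate assumed).
prosody_feats = rng.normal(size=(500, 2))
gesture_feats = rng.normal(size=(500, 6))

# One HMM per stream, as in the parallel-alignment step.
prosody_hmm = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
gesture_hmm = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
prosody_hmm.fit(prosody_feats)
gesture_hmm.fit(gesture_feats)

# Per-frame hidden-state sequences; a coupled HMM would additionally model
# the joint transitions between these two chains.
prosody_states = prosody_hmm.predict(prosody_feats)
gesture_states = gesture_hmm.predict(gesture_feats)
print(prosody_states[:10], gesture_states[:10])
```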

    Warped K-Means: An algorithm to cluster sequentially-distributed data

    Many devices generate large amounts of data that follow some sort of sequentiality, e.g., motion sensors, e-pens, eye trackers, etc., and often these data need to be compressed for classification, storage, and/or retrieval tasks. Traditional clustering algorithms can be used for this purpose, but unfortunately they do not cope with the sequential information implicitly embedded in such data. Thus, we revisit the well-known K-means algorithm and provide a general method to properly cluster sequentially-distributed data. We present Warped K-Means (WKM), a multi-purpose partitional clustering procedure that minimizes the sum of squared error criterion while imposing a hard sequentiality constraint in the classification step. We illustrate the properties of WKM in three applications, one being the segmentation and classification of human activity. WKM outperformed five state-of-the-art clustering techniques for simplifying data trajectories, achieving a recognition accuracy of nearly 97%, an improvement of around 66% over its peers. Moreover, this improvement came with a reduction in computational cost of more than one order of magnitude. This work has been partially supported by the Casmacat (FP7-ICT-2011-7, Project 287576), tranScriptorium (FP7-ICT-2011-9, Project 600707), STraDA (MINECO, TIN2012-37475-0O2-01), and ALMPR (GVA, Prometeo/20091014) projects. Leiva Torres, LA.; Vidal, E. (2013). Warped K-Means: An algorithm to cluster sequentially-distributed data. Information Sciences, 237:196-210. https://doi.org/10.1016/j.ins.2013.02.042
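    The exact WKM update rules are given in the paper; the sketch below only illustrates the core idea of a hard sequentiality constraint: every cluster must be a contiguous run of samples, so the assignment step can only move frames across segment boundaries. All names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def sse(X):
    """Sum of squared distances of the rows of X to their mean."""
    if len(X) == 0:
        return 0.0
    return float(((X - X.mean(axis=0)) ** 2).sum())

def sequential_kmeans(X, k, n_iter=50):
    """Cluster the rows of X into k contiguous segments.

    Returns k+1 boundary indices; segment j spans X[b[j]:b[j+1]].
    """
    n = len(X)
    bounds = [round(i * n / k) for i in range(k + 1)]  # equal-length start
    for _ in range(n_iter):
        moved = False
        for j in range(1, k):  # interior boundaries only
            best = bounds[j]
            best_cost = (sse(X[bounds[j - 1]:bounds[j]]) +
                         sse(X[bounds[j]:bounds[j + 1]]))
            # Try shifting the boundary one frame left or right,
            # keeping each segment non-empty (the sequentiality constraint).
            for cand in (bounds[j] - 1, bounds[j] + 1):
                if cand <= bounds[j - 1] or cand >= bounds[j + 1]:
                    continue
                cost = sse(X[bounds[j - 1]:cand]) + sse(X[cand:bounds[j + 1]])
                if cost < best_cost:
                    best, best_cost = cand, cost
            if best != bounds[j]:
                bounds[j] = best
                moved = True
        if not moved:
            break
    return bounds

# Toy sequential data: three noisy plateaus of 50 samples each.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(m, 0.1, size=(50, 2)) for m in (0.0, 1.0, 2.0)])
print(sequential_kmeans(X, k=3))  # boundaries near [0, 50, 100, 150]
```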

    Analyzing and Learning Movement Through Human-Computer Co-Creative Improvisation and Data Visualization

    Recent years have seen an incredible rise in the availability of household motion and video capture technologies, ranging from the humble webcam to the relatively sophisticated Kinect sensor. Naturally, this has precipitated a rise in both the quantity and quality of motion capture data available on the internet. This wealth of data has spurred new interest in the field of motion data classification, the specific task of having a model classify and sort different clips of human motion. However, there is comparatively little work in the field of motion data clustering, an unsupervised approach that may be more useful in the future because it allows agents to recognize “categories” of motions without the need for user input or labeled data. Systems that cluster motion data focus on “what type of motion is this, and what is it similar to?” rather than “which motion is this?” The LuminAI project, as described in this paper, is an example of a practical use for motion data clustering: it allows the system to respond to a user's dance moves with a similar but different gesture. To analyze the efficacy and properties of this motion data clustering pipeline, we also propose a novel data visualization tool and discuss the design considerations involved in its development.
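    As a rough illustration of "similar but different" retrieval via clustering (not the LuminAI pipeline itself), one can cluster fixed-length gesture descriptors and answer a query with another member of the query's cluster. The descriptor library below is a hypothetical stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical fixed-length descriptors for a library of recorded gestures.
gesture_library = rng.normal(size=(200, 32))

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(gesture_library)

def respond(query_descriptor):
    """Return the index of a library gesture from the same cluster as the
    query, skipping the nearest match so the response is similar but not
    identical."""
    label = km.predict(query_descriptor[None, :])[0]
    members = np.where(km.labels_ == label)[0]
    dists = np.linalg.norm(gesture_library[members] - query_descriptor, axis=1)
    order = members[np.argsort(dists)]
    return order[1] if len(order) > 1 else order[0]

print(respond(gesture_library[0]))
```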

    A Language for Human Action

    Human-centered computing (HCC) is centered on humans and what they do, i.e., human actions. Thus, developing an infrastructure for HCC requires understanding human action at some level of detail. We need to be able to talk about actions, synthesize actions, recognize actions, manipulate actions, imitate actions, and imagine and predict actions. How could we achieve this in a principled fashion? This paper proposes that the space of human actions has a linguistic structure. This is a sensory-motor space consisting of the evolution of the joint angles of the human body in movement. The space of human activity has its own phonemes, morphemes, and sentences. We present a Human Activity Language (HAL) for symbolic, non-arbitrary representation of visual and motor information. In phonology, we define atomic segments (kinetemes) that are used to compose human activity. In morphology, we propose parallel learning to incorporate associative learning into a language inference approach. Parallel learning solves the problem of overgeneralization and is effective in identifying the active joints and motion patterns in a particular action. In syntax, we point out some of the basic constraints for sentence formation. Finally, we demonstrate this linguistic framework on a praxicon of 200 human actions (motion capture data obtained with a suit) and we discuss the implications of HAL for HCC.
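    One plausible reading of the atomic segments (kinetemes) is a decomposition of each joint-angle trajectory at frames where the qualitative state of the movement changes, e.g. where the angular velocity changes sign. The sketch below applies that idea to a synthetic trajectory; it illustrates the segmentation idea only, not the paper's full kinetological analysis, and the function and parameter names are hypothetical.

```python
import numpy as np

def kineteme_boundaries(joint_angle, min_len=3):
    """Split a 1-D joint-angle trajectory at sign changes of its velocity,
    merging segments shorter than `min_len` frames."""
    velocity = np.diff(joint_angle)
    sign_change = np.where(np.diff(np.sign(velocity)) != 0)[0] + 1
    boundaries = [0] + [int(b) for b in sign_change] + [len(joint_angle)]
    merged = [boundaries[0]]
    for b in boundaries[1:]:
        if b - merged[-1] >= min_len:
            merged.append(b)
    if merged[-1] != len(joint_angle):
        merged[-1] = len(joint_angle)  # always end at the last frame
    return merged

# Synthetic elbow-angle trajectory: two flexion-extension oscillations.
t = np.linspace(0, 2 * np.pi, 200)
print(kineteme_boundaries(np.sin(2 * t)))
```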

    Learning Parallel Grammar Systems for a Human Activity Language

    We have empirically discovered that the space of human actions has a linguistic structure. This is a sensory-motor space consisting of the evolution of the joint angles of the human body in movement. The space of human activity has its own phonemes, morphemes, and sentences. In kinetology, the phonology of human movement, we define atomic segments (kinetemes) that are used to compose human activity. In this paper, we present a morphological representation that explicitly contains the subset of actuators responsible for the activity, the synchronization rules modeling coordination among these actuators, and the motion pattern performed by each participating actuator. We model a human action with a novel formal grammar system, named Parallel Synchronous Grammar System (PSGS), adapted from Parallel Communicating Grammar Systems (PCGS). We propose a heuristic PArallel Learning (PAL) algorithm for the automatic inference of a PSGS. Our algorithm is used in the learning of human activity. Instead of a sequence of sentences, the input is a single string for each actuator in the body. The algorithm infers the components of the grammar system as a subset of actuators, a context-free grammar for the language of each component, and the synchronization rules. Our framework is evaluated with synthetic data and with real motion data from a large-scale motion capture database containing around 200 different actions corresponding to verbs associated with voluntary observable movement. On synthetic data, our algorithm achieves a 100% success rate with noise levels up to 7%.
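    The PAL algorithm infers a full grammar system with synchronization rules across actuators; as a much smaller illustration of inferring a context-free grammar from a single symbol string (one string per actuator), the sketch below performs greedy digram substitution in the style of Re-Pair/Sequitur. It is not the authors' algorithm and handles a single actuator string only.

```python
from collections import Counter

def infer_rules(s):
    """Repeatedly replace the most frequent adjacent symbol pair with a
    fresh non-terminal, yielding a compressed string plus CFG-style rules."""
    symbols = list(s)
    rules = {}
    next_nt = 0
    while True:
        pairs = Counter(zip(symbols, symbols[1:]))
        (pair, count) = pairs.most_common(1)[0] if pairs else ((None, None), 0)
        if count < 2:
            break
        nt = f"R{next_nt}"
        next_nt += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols, rules

# One quantized-kineteme string for a single actuator (toy example).
print(infer_rules("abcabcabd"))
```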

    A template based approach for human action recognition

    Visual analysis of human movements concerns the understanding of human activities from image sequences. The goal of action/gesture recognition is to recognize the label that corresponds to an action or gesture performed by a human in a sequence of images. To solve this problem, researchers have proposed solutions ranging from object recognition techniques to techniques derived from speech recognition, face recognition, and models of brain function. The techniques presented in this thesis belong to a family of methods that condense a video sequence into a template that retains the information needed to classify actions/gestures with standard object recognition techniques. In the first stage of this thesis, we have proposed a view-based temporal template approach for action/gesture representation from tensors. The templates are computed from three different projections, considering a video sequence as a third-order tensor. We compute each projection from the fibers of the tensor using a combination of simple functions. We have studied which function and which feature extractor/descriptor are the most suitable for projecting the template from the tensor. We have tested five simple functions used to project the fibers, namely supremum, mean, standard deviation, skewness, and kurtosis, using public datasets. We have also studied the performance obtained with four feature extractors/descriptors: PHOW, LIOP, HOG, and SMFs. Using more complex datasets, we have assessed the most suitable feature representation for our templates (Bag of Words or Fisher Vectors) and the complementarity among the features computed from each simple function (max, mean, standard deviation, kurtosis, and skewness). Finally, we have studied the complementarity with a successful technique, Improved Dense Trajectories. The experiments have shown that the standard deviation function and the PHOW extractor/descriptor are the most suitable for our templates. The results have also shown that our three-projection templates outperform most state-of-the-art techniques on more complex datasets when the templates are combined with the Fisher Vector representation. The features extracted by each simple function are complementary to one another and, when added to HOG, HOF, and MBH, improve the performance of IDTs. Derived from this thesis, we have also presented another view-based temporal template approach for action recognition, obtained from a Radon transform projection, which allows the temporal segmentation of human actions in real time. First, we propose a generalization of the R transform that makes it possible to adapt the transform to the problem to be solved. We have studied its performance with three functions, namely max, mean, and standard deviation, for pre-segmented human action recognition using a public dataset, and we have compared the results against the traditional R transform. The results have shown that the max function obtains the best performance when applied to the Radon transform and that our technique outperforms many state-of-the-art techniques in action recognition. In a second stage, we have modified the classifier to adapt it to the temporal segmentation of human actions. To assess its performance, we have merged the Weizmann and Hollywood action datasets and measured the ability of the method to identify the individual actions. The experiments have shown that our technique outperforms the state-of-the-art techniques on the Weizmann dataset for non-pre-segmented human actions.
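    As an illustration of the first stage (a sketch, not the thesis implementation), a grayscale video can be treated as a third-order tensor of shape (frames, height, width) and its fibers projected with a simple function such as the standard deviation, yielding one 2-D template per mode. The synthetic tensor below is a placeholder for a real video.

```python
import numpy as np

rng = np.random.default_rng(3)
video = rng.random((60, 120, 160))  # hypothetical (frames, height, width) tensor

# One template per mode: apply the projection function along each axis.
# The thesis compares supremum, mean, standard deviation, skewness and
# kurtosis; standard deviation is reported as the best-performing choice.
template_t = video.std(axis=0)   # (height, width): temporal variation per pixel
template_h = video.std(axis=1)   # (frames, width)
template_w = video.std(axis=2)   # (frames, height)

print(template_t.shape, template_h.shape, template_w.shape)
# Each template would then be described with PHOW/HOG-style features
# and fed to a standard classifier.
```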