5 research outputs found

    Tracking body and hands for gesture recognition: NATOPS aircraft handling signals database

    We present a unified framework for body and hand tracking, the output of which can be used to understand simultaneously performed body-and-hand gestures. The framework uses a stereo camera to collect 3D images and tracks body and hand together, combining various existing techniques to make the tracking tasks efficient. In addition, we introduce a multi-signal gesture database: the NATOPS aircraft handling signals. Unlike previous gesture databases, this data requires knowledge of both body and hand in order to distinguish gestures. It is also focused on a clearly defined gesture vocabulary from a real-world scenario that has been refined over many years. The database includes 24 body-and-hand gestures, and provides both gesture video clips and the body and hand features we extracted.
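    For illustration only, the sketch below shows one way such per-clip body and hand feature tracks could be fed to an off-the-shelf classifier over the 24 gesture classes. The CSV layout, file names, temporal pooling, and choice of a random forest are assumptions and are not part of the published database or framework.

```python
# Hypothetical sketch: classifying NATOPS-style gestures from per-frame
# body-and-hand feature vectors. File layout, column order, and the choice
# of classifier are assumptions, not the authors' published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def load_gesture_clips(feature_files, labels):
    """Collapse each clip's per-frame features into one fixed-length vector
    (mean and standard deviation over time) so a standard classifier applies."""
    X, y = [], []
    for path, label in zip(feature_files, labels):
        frames = np.loadtxt(path, delimiter=",")  # rows: frames, cols: body + hand features
        X.append(np.concatenate([frames.mean(axis=0), frames.std(axis=0)]))
        y.append(label)
    return np.array(X), np.array(y)

# Usage (file names and labels are placeholders):
# X, y = load_gesture_clips(["clip_001.csv", "clip_002.csv"], [0, 1])
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
# print("held-out accuracy:", clf.score(X_te, y_te))
```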

    Interacciones basadas en gestos: revisión crítica

    This paper presents a critical review of human-computer interaction (HCI) based on gestures. Gestures, as a form of non-verbal communication, have been of interest in HCI because they make interaction with the machine possible through the body, as an agent that perceives and acts in the world. The review was carried out in the most important databases in HCI, as well as in some Latin-American academic sources, and includes an analysis of the evolution of gesture-based interactions, current work, and future perspectives. The analysis is carried out holistically, considering both technical and human issues: psychological, social, and cultural, as well as their relationships. We present this analytical process as a scientometric description of the search results, a description of the gesture as a means of interaction, the techniques used at the different steps of the gesture recognition process, and the applications and challenges of gesture-based interactions. The paper concludes with a series of questions that invite the reader to consider potential research directions in gesture-based interactions.

    Combination of Accumulated Motion and Color Segmentation for Human Activity Analysis

    The automated analysis of activity in digital multimedia, and especially video, is gaining more and more importance due to the evolution of higher-level video processing systems and the development of relevant applications such as surveillance and sports. This paper presents a novel algorithm for the recognition and classification of human activities, which employs motion and color characteristics in a complementary manner, so as to extract the most information from both sources, and overcome their individual limitations. The proposed method accumulates the flow estimates in a video, and extracts “regions of activity” by processing their higher-order statistics. The shape of these activity areas can be used for the classification of the human activities and events taking place in a video and the subsequent extraction of higher-level semantics. Color segmentation of the active and static areas of each video frame is performed to complement this information. The color layers in the activity and background areas are compared using the earth mover's distance, in order to achieve accurate object segmentation. Thus, unlike much existing work on human activity analysis, the proposed approach is based on general color and motion processing methods, and not on specific models of the human body and its kinematics. The combined use of color and motion information increases the method robustness to illumination variations and measurement noise. Consequently, the proposed approach can lead to higher-level information about human activities, but its applicability is not limited to specific human actions. We present experiments with various real video sequences, from sports and surveillance domains, to demonstrate the effectiveness of our approach.
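    As a rough illustration of the two ingredients described above, the following sketch accumulates dense optical-flow magnitudes over a clip to obtain an activity mask and compares hue histograms of the active and static regions with the earth mover's distance. The mean-plus-two-standard-deviations threshold stands in for the paper's higher-order-statistics step, and all parameter values and the hue-only color representation are placeholders, not the authors' settings.

```python
# Minimal sketch, assuming grayscale frames for flow and one BGR frame for color.
import cv2
import numpy as np

def activity_mask(frames):
    """Accumulate Farneback flow magnitude over consecutive grayscale frames,
    then threshold the accumulated map to mark "regions of activity"."""
    acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)
    return acc > acc.mean() + 2 * acc.std()  # boolean activity map

def hue_emd(image_bgr, mask, bins=32):
    """Earth mover's distance between hue histograms of active and static pixels."""
    hue = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    signatures = []
    for region in (hue[mask], hue[~mask]):
        hist, edges = np.histogram(region, bins=bins, range=(0, 180), density=True)
        centres = (edges[:-1] + edges[1:]) / 2
        signatures.append(np.column_stack([hist, centres]).astype(np.float32))
    emd, _, _ = cv2.EMD(signatures[0], signatures[1], cv2.DIST_L2)
    return emd
```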

    Prosody and Kinesics Based Co-analysis Towards Continuous Gesture Recognition

    The aim of this study is to develop a multimodal co-analysis framework for continuous gesture recognition by exploiting the prosodic and kinesic manifestations of natural communication. Using this framework, a co-analysis pattern between correlated components is obtained. The co-analysis pattern is clustered using K-means clustering to determine how well the pattern distinguishes the gestures. Features that differentiate the proposed approach from other models are its lower susceptibility to idiosyncrasies, its scalability, and its simplicity. The experiment was performed on the Multimodal Annotated Gesture Corpus (MAGEC), which we created for the research community studying non-verbal communication, particularly gestures.
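    The sketch below illustrates the co-analysis-then-clustering idea under stated assumptions: frame-level prosodic and kinesic features are taken as already extracted and time-aligned, they are fused by simple concatenation, and the result is clustered with K-means. The feature dimensions and the number of clusters are placeholders, not values from the paper or from MAGEC.

```python
# Hedged sketch of early-fused prosodic/kinesic features clustered with K-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_coanalysis(prosody, kinesics, n_clusters=8, random_state=0):
    """prosody: (T, P) array, kinesics: (T, K) array, aligned over T frames.
    Returns one cluster label per frame."""
    fused = np.hstack([prosody, kinesics])         # simple early fusion
    fused = StandardScaler().fit_transform(fused)  # put modalities on one scale
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=random_state).fit_predict(fused)

# Usage with synthetic shapes only (no claim about MAGEC's real features):
# labels = cluster_coanalysis(np.random.rand(500, 2), np.random.rand(500, 6))
```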

    Human Motion Analysis Using Very Few Inertial Measurement Units

    Realistic character animation and human motion analysis have become major topics of research. In this doctoral research work, three different aspects of human motion analysis and synthesis have been explored. Firstly, to better manage the tens of gigabytes of publicly available human motion capture data sets, a relational database approach has been proposed. We show that organizing motion capture data in a relational database provides several benefits, such as centralized access to the major freely available mocap data sets, fast search and retrieval of data, annotation-based retrieval of contents, and incorporation of data from non-mocap sensor modalities. Moreover, the same idea is also proposed for managing quadruped motion capture data. Secondly, a new method of full-body human motion reconstruction using a very sparse configuration of sensors is proposed. In this setup, two sensors are attached to the upper extremities and one sensor is attached to the lower trunk. The lower trunk sensor is used to estimate ground contacts, which are later used in the reconstruction process along with the low-dimensional inputs from the sensors attached to the upper extremities. The reconstruction results of the proposed method have been compared with those of existing approaches, and the proposed method produces lower average reconstruction errors. Thirdly, in the field of human motion analysis, a novel method for estimating human soft biometrics such as gender, height, and age from the inertial data of a simple human walk is proposed. The proposed method extracts several features from the time and frequency domains for each individual step. A random forest classifier is fed with the extracted features in order to estimate the soft biometrics of a human. The classification results show that the gender, height, and age of a human can be estimated with high accuracy from the inertial data of a single step of his/her walk.
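    As a hedged sketch of the third contribution, the snippet below extracts a handful of time- and frequency-domain features from one step of accelerometer data and feeds them to a random forest. The exact feature set, step segmentation, and sensor channels used in the thesis are not reproduced here; these are illustrative stand-ins.

```python
# Illustrative per-step features for soft-biometric classification (assumed setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def step_features(accel_step):
    """accel_step: (n_samples, 3) accelerometer samples covering one step."""
    mag = np.linalg.norm(accel_step, axis=1)            # acceleration magnitude
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))    # zero-mean spectrum
    return np.array([
        mag.mean(), mag.std(), mag.max() - mag.min(),      # time domain
        spectrum.argmax(), spectrum.max(), spectrum.sum(), # frequency domain
    ])

# Usage (segmented steps and labels are placeholders for real walk data):
# X = np.array([step_features(s) for s in steps])
# clf = RandomForestClassifier(n_estimators=100).fit(X, gender_labels)
```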