178 research outputs found

    Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect

    Full text link
    The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generations of the Kinect system and compare the results with an optical motion capture system. We collected motion data for 12 exercises performed by 10 different subjects, recorded from three different viewpoints. We report on the accuracy of the joint localization and bone length estimation of Kinect skeletons in comparison to the motion capture data. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that, overall, Kinect 2 provides more robust and more accurate tracking of human pose than Kinect 1.

    Comment: 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015)
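    As a concrete illustration of the outlier analysis mentioned above, the following is a minimal sketch (not the authors' code) of fitting a Gaussian-plus-uniform mixture to one-dimensional joint-localization offsets with expectation-maximization; the synthetic offsets, the uniform support, and the 0.5 responsibility threshold are assumptions made for demonstration.

```python
# Sketch: Gaussian + uniform mixture fitted by EM; low Gaussian responsibility
# marks an offset as an outlier. Data and thresholds are illustrative only.
import numpy as np
from scipy.stats import norm

def fit_gauss_uniform(offsets, n_iter=100):
    lo, hi = offsets.min(), offsets.max()
    u_density = 1.0 / (hi - lo)            # density of the uniform "outlier" component
    mu, sigma, pi = offsets.mean(), offsets.std(), 0.9
    for _ in range(n_iter):
        # E-step: responsibility of the Gaussian component for each offset
        g = pi * norm.pdf(offsets, mu, sigma)
        r = g / (g + (1.0 - pi) * u_density)
        # M-step: re-estimate Gaussian parameters and mixing weight
        mu = np.sum(r * offsets) / np.sum(r)
        sigma = np.sqrt(np.sum(r * (offsets - mu) ** 2) / np.sum(r))
        pi = r.mean()
    return mu, sigma, pi, r

offsets = np.random.normal(0.02, 0.01, 500)                             # synthetic offsets (m)
offsets = np.concatenate([offsets, np.random.uniform(-0.3, 0.3, 25)])   # synthetic outliers
mu, sigma, pi, resp = fit_gauss_uniform(offsets)
outliers = resp < 0.5                                                   # low responsibility -> outlier
print(f"mu={mu:.3f} m, sigma={sigma:.3f} m, inlier weight={pi:.2f}, outliers={outliers.sum()}")
```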

    Current state of the art and enduring issues in the collection of anthropometric data

    Get PDF
    The study of human body size and shape has been a topic of research for a very long time. In the past, anthropometry used traditional measuring techniques to record the dimensions of the human body and reported variance in body dimensions as a function of mean and standard deviation. Nowadays, the study of human body dimensions can be carried out more efficiently using three-dimensional body scanners, which can provide large amounts of anthropometric data more quickly than traditional techniques can. This paper presents a description of the broad range of issues related to the collection of anthropometric data using three-dimensional body scanners, including the different types of technologies available and their implications, the standard scanning process needed for effective data collection, and the possible sources of measurement error that might affect the reliability and validity of the data collected.

    This work is financed by FEDER funds through the Competitive Factors Operational Program (COMPETE) POCI-01-0145-FEDER-007043 and POCI-01-0145-FEDER-007136 and by national funds through FCT, the Portuguese Foundation for Science and Technology, under the projects UID/CEC/00319/2013 and UID/CTM/00264, respectively.

    Natural User Interfaces for Virtual Character Full Body and Facial Animation in Immersive Virtual Worlds

    Get PDF
    In recent years, networked virtual environments have steadily grown to become a frontier in social computing. Such virtual cyberspaces are usually accessed by multiple users through their 3D avatars. Recent scientific activity has resulted in the release of both hardware and software components that enable users at home to interact with their virtual persona through natural body and facial activity performance. Based on 3D computer graphics methods and vision-based motion tracking algorithms, these techniques aspire to reinforce the sense of autonomy and telepresence within the virtual world. In this paper we present two distinct frameworks for avatar animation driven by the user's natural motion input. We specifically target full-body avatar control using a Kinect sensor via a simple, networked skeletal joint retargeting pipeline, as well as an intuitive 3D reconstruction pipeline for user facial animation that renders highly realistic facial puppets. Furthermore, we present a common networked architecture that enables multiple remote clients to capture and render any number of 3D animated characters within a shared virtual environment.
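    In outline, a networked skeletal joint retargeting step like the one described above could look as follows. This is a hedged sketch, not the paper's implementation: the joint and bone names, the quaternion format, and the length-prefixed JSON framing are all illustrative assumptions.

```python
# Sketch: remap Kinect joint orientations to avatar bone names and serialize
# one frame for a remote rendering client. Names and wire format are assumed.
import json

# Assumed mapping from Kinect joint names to the avatar rig's bone names.
KINECT_TO_AVATAR = {
    "SpineBase": "hips",        "SpineShoulder": "chest",   "Head": "head",
    "ShoulderLeft": "upper_arm_L", "ElbowLeft": "forearm_L",
    "ShoulderRight": "upper_arm_R", "ElbowRight": "forearm_R",
    "HipLeft": "thigh_L",       "KneeLeft": "shin_L",
    "HipRight": "thigh_R",      "KneeRight": "shin_R",
}

def retarget_frame(kinect_orientations):
    """Map per-joint orientation quaternions (x, y, z, w) onto avatar bones."""
    return {
        KINECT_TO_AVATAR[joint]: quat
        for joint, quat in kinect_orientations.items()
        if joint in KINECT_TO_AVATAR
    }

def send_frame(sock, avatar_id, frame):
    """Send one retargeted frame over an already-connected socket,
    as a 4-byte length prefix followed by a JSON payload."""
    payload = json.dumps({"avatar": avatar_id, "bones": frame}).encode()
    sock.sendall(len(payload).to_bytes(4, "big") + payload)
```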

    Validity and repeatability of a depth camera based surface imaging system for thigh volume measurement

    Get PDF
    Complex anthropometric measures, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometric measures of lengths, breadths, skinfolds and girths. However, taking these more complex measures with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision, commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement of < 1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which, once corrected, can yield a usable thigh volume measurement.

    Keywords: Kinanthropometry, Anthropometry, Depth Camera, 3D Body Scanning, Surface Imaging
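    For readers unfamiliar with the repeatability statistic quoted above, the sketch below shows one common way to compute a percentage technical error of measurement (TEM) from two repeated trials and to remove an assumed proportional bias against a reference system; the formula, the 6% correction factor, and the example volumes are assumptions, not the study's analysis code.

```python
# Sketch: %TEM for two repeated trials and a proportional bias correction.
import numpy as np

def tem_percent(trial1, trial2):
    """%TEM = 100 * sqrt(sum(d^2) / 2n) / grand mean, d = per-subject difference."""
    t1, t2 = np.asarray(trial1), np.asarray(trial2)
    d = t1 - t2
    tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    return 100.0 * tem / np.mean(np.concatenate([t1, t2]))

def correct_overestimate(depth_cam_volume, bias=0.06):
    """Remove an assumed 6% proportional overestimate relative to the reference."""
    return np.asarray(depth_cam_volume) / (1.0 + bias)

trial1 = [5.21, 6.04, 4.87]   # litres, depth-camera system, scan 1 (synthetic)
trial2 = [5.18, 6.09, 4.91]   # litres, depth-camera system, scan 2 (synthetic)
print(f"intra-calibration %TEM = {tem_percent(trial1, trial2):.2f}")
print(f"corrected volumes = {correct_overestimate(trial1)}")
```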

    Face morphology: Can it tell us something about body weight and fat?

    Get PDF
    This paper proposes a method for the automatic extraction of geometric features, related to weight parameters, from 3D facial data acquired with low-cost depth scanners. The novelty of the method relies both on the processing of the 3D facial data and on the definition of the geometric features, which are conceptually simple, robust against noise and pose estimation errors, computationally efficient, and invariant with respect to rotation, translation, and scale changes. Experimental results show that these measurements are highly correlated with weight, BMI, and neck circumference, and well correlated with waist and hip circumference, which are markers of central obesity. Therefore, the proposed method strongly supports the development of interactive, non-obtrusive systems able to provide support for the detection of weight-related problems.
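    To make the idea of a simple, scale-invariant geometric feature concrete, here is an illustrative sketch: one landmark-based ratio and a Pearson correlation of such a feature with BMI over a synthetic cohort. The landmark names, the particular ratio, and all data are assumptions and do not reproduce the paper's descriptors.

```python
# Sketch: a rotation-, translation- and scale-invariant facial ratio, plus a
# correlation with BMI on synthetic data. Everything here is illustrative.
import numpy as np
from scipy.stats import pearsonr

def cheek_to_face_ratio(landmarks):
    """Cheek width divided by nasion-to-chin length (hypothetical feature)."""
    width = np.linalg.norm(landmarks["cheek_left"] - landmarks["cheek_right"])
    length = np.linalg.norm(landmarks["nasion"] - landmarks["chin"])
    return width / length

# One synthetic face, landmark coordinates in metres.
face = {
    "cheek_left":  np.array([-0.07, 0.00, 0.02]),
    "cheek_right": np.array([ 0.07, 0.00, 0.02]),
    "nasion":      np.array([ 0.00, 0.05, 0.05]),
    "chin":        np.array([ 0.00, -0.07, 0.04]),
}
print(f"feature for one face: {cheek_to_face_ratio(face):.2f}")

# Synthetic cohort: correlate feature values with BMI (fake weak relation).
rng = np.random.default_rng(0)
bmi = rng.uniform(18, 35, 50)
feature = 1.10 + 0.01 * bmi + rng.normal(0, 0.02, 50)
r, p = pearsonr(feature, bmi)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```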

    Postural injury risk assessment for industrial processes using advanced sensory systems

    Full text link
    The major contributions of this research delivered both advancements and novel frameworks to enhance the current methods of postural assessment within industrial environments. This included the development of a load vs. repetition analysis, a novel BVH model, and a low-cost ergonomic scoring tool relying on pixel labelling.

    Facial analysis with depth maps and deep learning

    Get PDF
    Doctoral thesis in Web Science and Technology, in association with the Universidade de Trás-os-Montes e Alto Douro, presented to the Universidade Aberta.

    Collecting and analyzing multimodal sensor data of a human face in real time is an important problem in computer vision, with applications in medical analysis and monitoring, entertainment, and security. However, due to the exigent nature of the problem, there is a lack of affordable and easy-to-use systems with real-time annotation capability, 3D analysis, replay capability, and a frame rate capable of detecting facial patterns in working environments. In the context of an ongoing effort to develop tools to support the monitoring and evaluation of human affective state in working environments, this research will investigate the applicability of a facial analysis approach to map and evaluate human facial patterns. Our objective is to investigate a set of systems and techniques that make it possible to answer the question of how to use multimodal sensor data to obtain a classification system that identifies facial patterns. With that in mind, tools will be developed to implement a real-time system able to recognize facial patterns from 3D data. The challenge is to interpret this multimodal sensor data, classify it with deep learning algorithms, and fulfil the following requirements: annotation capability, 3D analysis, and replay capability. In addition, the system will be able to continuously enhance the output of the classification model through a training process, in order to improve and evaluate different patterns of the human face.
    FACE ANALYSYS is a tool developed in the context of this doctoral thesis to research the relations of various sensor data with human facial affective state. This work is useful for developing an appropriate visualization system that gives better insight into a large amount of behavioral data.
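    As a rough illustration of classifying depth-map facial data with deep learning, the sketch below defines a small convolutional network in PyTorch; the architecture, input resolution, and number of facial-pattern classes are assumptions and not the model developed in the thesis.

```python
# Sketch: a small CNN over single-channel face depth maps. Sizes are assumed.
import torch
import torch.nn as nn

class DepthFaceNet(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 96 -> 48
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, depth_map):              # depth_map: (N, 1, 96, 96)
        return self.classifier(self.features(depth_map))

model = DepthFaceNet()
logits = model(torch.randn(4, 1, 96, 96))      # batch of 4 synthetic depth maps
print(logits.shape)                            # torch.Size([4, 6])
```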