The Potential of Using Exergames for Correcting Posture Problems of Children
In Poland we observe a large scale of posture deformations and consequent health implications in
the population of school children. Therefore, many pedagogical and medical societies are looking for
ways and methods of preventing and reversing this negative trend. The purpose of this article is to
present the potential of practical use of new posture-correction interactive games among school
children. Some of the newest equipment and ways of detecting and registering natural human
movement are presented. Teletransmission of data in real time makes it possible for a physiotherapist
to supervise and modify the player's motor behavior. Creating visually attractive and
involving forms of physical exercise might help encourage and inspire children to reject a sedentary
lifestyle.
Studia Edukacyjne
Human Skeleton and Joints Extraction for Virtual Clothing Fitting
Human motion capture is a hot research topic in the field of computer vision, with broad application prospects in intelligent video surveillance, video analysis, animation, computer games, medical diagnostics, human-computer interaction, and so on. It includes two main components: human body calibration and tracking, and human action recognition and understanding. Among these, calibration and tracking form the foundation of motion recognition and understanding, and play a key role in human motion capture. This thesis therefore studies human-body skeletonization and the localization of human joint points in the context of a virtual clothing-fitting system, which has important theoretical value and practical significance. The skeleton is an effective means of describing the shape and topology of an object and is widely used to describe the human body. Distance-transform-based skeletonization algorithms produce good results for the human body, but cannot guarantee the connectivity of the skeleton. This thesis exploits the properties of the image gradient: by taking gradients of the distance-transform map, potential skeleton points are emphasized; using the distance values and ... Degree: Master of Engineering; School of Information Science and Technology, Department of Computer Science (Computer Application Technology).
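The distance-transform skeletonization mentioned in this abstract can be sketched in a few lines. The following is a minimal illustration, not the thesis's method: it keeps pixels that are local maxima (ridge points) of the Euclidean distance transform of a binary silhouette, and, as the abstract notes for this family of algorithms, such a ridge criterion does not guarantee a connected skeleton.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def distance_transform_skeleton(mask):
    """Approximate skeleton of a binary mask: keep foreground pixels that
    are local maxima (ridge points) of the Euclidean distance transform.
    This simple criterion does not guarantee a connected skeleton."""
    dist = distance_transform_edt(mask)
    local_max = maximum_filter(dist, size=3)  # 3x3 neighborhood maximum
    return (dist > 0) & (dist == local_max)

# Toy silhouette: a 7x15 filled rectangle standing in for a body mask.
mask = np.zeros((7, 15), dtype=bool)
mask[1:6, 1:14] = True
skel = distance_transform_skeleton(mask)
```

For this rectangle the ridge runs along the central row; a real pipeline would post-process the ridge points to restore connectivity, which is the gap the thesis addresses.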
Motion Capture of Hands in Action Using Discriminative Salient Points
Abstract. Capturing the motion of two hands interacting with an object is a very challenging task due to the large number of degrees of freedom, self-occlusions, and similarity between the fingers, even in the case of multiple cameras observing the scene. In this paper we propose to use discriminatively learned salient points on the fingers and to estimate the finger-salient point associations simultaneously with the estimation of the hand pose. We introduce a differentiable objective function that also takes edges, optical flow and collisions into account. Our qualitative and quantitative evaluations show that the proposed approach achieves very accurate results for several challenging sequences containing hands and objects in action.
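The finger-to-salient-point association step described above can be caricatured as an assignment problem. The sketch below is an illustration under assumed toy coordinates, not the paper's joint optimization: it matches hypothetical model finger positions to detected salient points by minimizing total squared distance with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 2D positions: model finger tips vs. detected salient points.
fingers = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
detections = np.array([[2.1, 0.1], [0.1, -0.1], [1.0, 0.2]])

# Cost matrix of pairwise squared distances; solving the assignment
# globally stands in for estimating finger/salient-point associations.
cost = ((fingers[:, None, :] - detections[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)  # cols[i] = detection for finger i
```

In the paper this association is estimated simultaneously with the hand pose inside a differentiable objective; the discrete assignment above only conveys the correspondence idea.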
PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time
Marker-less 3D human motion capture from a single colour camera has seen
significant progress. However, it is a very challenging and severely ill-posed
problem. In consequence, even the most accurate state-of-the-art approaches
have significant limitations. Purely kinematic formulations on the basis of
individual joints or skeletons, and the frequent frame-wise reconstruction in
state-of-the-art methods greatly limit 3D accuracy and temporal stability
compared to multi-view or marker-based motion capture. Further, captured 3D
poses are often physically incorrect and biomechanically implausible, or
exhibit implausible environment interactions (floor penetration, foot skating,
unnatural body leaning and strong shifting in depth), which is problematic for
any use case in computer graphics. We, therefore, present PhysCap, the first
algorithm for physically plausible, real-time and marker-less human 3D motion
capture with a single colour camera at 25 fps. Our algorithm first captures 3D
human poses purely kinematically. To this end, a CNN infers 2D and 3D joint
positions, and subsequently, an inverse kinematics step finds space-time
coherent joint angles and global 3D pose. Next, these kinematic reconstructions
are used as constraints in a real-time physics-based pose optimiser that
accounts for environment constraints (e.g., collision handling and floor
placement), gravity, and biophysical plausibility of human postures. Our
approach employs a combination of ground reaction force and residual force for
plausible root control, and uses a trained neural network to detect foot
contact events in images. Our method captures physically plausible and
temporally stable global 3D human motion, without physically implausible
postures, floor penetrations or foot skating, from video in real time and in
general scenes. The video is available at
http://gvv.mpi-inf.mpg.de/projects/PhysCap
Comment: 16 pages, 11 figures
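The two-stage design described in the abstract (kinematic capture first, physics-aware refinement second) can be caricatured in one dimension. The sketch below is a toy stand-in, not PhysCap itself: a noisy per-frame "kinematic" foot-height trajectory is clamped above the floor plane and temporally smoothed, mimicking how the second stage removes floor penetration and frame-to-frame jitter.

```python
import numpy as np

def physically_filter(kinematic_z, floor=0.0, alpha=0.5):
    """Toy stand-in for a physics-aware second stage: take per-frame
    kinematic foot heights (which may dip below the floor) and return a
    temporally smoothed trajectory that never penetrates the floor."""
    out = np.empty_like(kinematic_z)
    prev = max(kinematic_z[0], floor)
    for i, z in enumerate(kinematic_z):
        z = max(z, floor)                       # no floor penetration
        prev = alpha * prev + (1 - alpha) * z   # temporal smoothing
        out[i] = prev
    return out

raw = np.array([0.10, -0.05, 0.02, -0.01, 0.08])  # raw kinematic output
clean = physically_filter(raw)
```

PhysCap's actual second stage is a real-time physics-based pose optimizer with ground reaction and residual forces; this fragment only conveys the constrain-then-smooth structure.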
A survey on human performance capture and animation
With the rapid development of computing technology, three-dimensional (3D) human body
models and their dynamic motions are widely used in the digital entertainment industry. Human performance
mainly involves human body shapes and motions. Key research problems include how to capture
and analyze the static geometric appearance and dynamic movement of human bodies, and how to simulate
human body motions with physical effects. In this survey, organized according to the main research directions of human performance capture and animation, we summarize recent advances in key research topics, namely
human body surface reconstruction, motion capture and synthesis, and physics-based motion simulation,
and further discuss future research problems and directions. We hope this will help
readers gain a comprehensive understanding of human performance capture and animation.
Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation
Synthesizing realistic videos of humans using neural networks has been a
popular alternative to the conventional graphics-based rendering pipeline due
to its high efficiency. Existing works typically formulate this as an
image-to-image translation problem in 2D screen space, which leads to artifacts
such as over-smoothing, missing body parts, and temporal instability of
fine-scale detail, such as pose-dependent wrinkles in the clothing. In this
paper, we propose a novel human video synthesis method that approaches these
limiting factors by explicitly disentangling the learning of time-coherent
fine-scale details from the embedding of the human in 2D screen space. More
specifically, our method relies on the combination of two convolutional neural
networks (CNNs). Given the pose information, the first CNN predicts a dynamic
texture map that contains time-coherent high-frequency details, and the second
CNN conditions the generation of the final video on the temporally coherent
output of the first CNN. We demonstrate several applications of our approach,
such as human reenactment and novel view synthesis from monocular video, where
we show significant improvement over the state of the art both qualitatively
and quantitatively.
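The two-CNN decomposition described above can be shown structurally. The sketch below is a schematic with hypothetical dimensions and random linear "networks" standing in for the CNNs, purely to make the data flow concrete: stage one maps pose parameters to a dynamic texture map, and stage two renders the final frame conditioned on that texture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two CNNs: random linear maps plus tanh.
W_tex = rng.standard_normal((24, 16 * 16 * 3))           # pose -> texture
W_ren = rng.standard_normal((16 * 16 * 3, 32 * 32 * 3))  # texture -> frame

def texture_net(pose):
    """Stage 1: 24-dim pose -> 16x16 RGB dynamic texture map."""
    return np.tanh(pose @ W_tex).reshape(16, 16, 3)

def render_net(texture):
    """Stage 2: texture -> 32x32 RGB frame (conditioned generation)."""
    return np.tanh(texture.reshape(-1) @ W_ren).reshape(32, 32, 3)

pose = rng.standard_normal(24)
frame = render_net(texture_net(pose))
```

The point of the decomposition is that time-coherent high-frequency detail lives in texture space, decoupled from the 2D screen-space embedding handled by the second network.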
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, covering milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem-modeling methods are categorized in two ways in this survey: one categorization distinguishes top-down from bottom-up methods, and the other distinguishes generative from discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and presents frequently used error measurement methods.
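The survey's top-down versus bottom-up categorization can be made concrete with two function skeletons. Everything below is a hypothetical stand-in (dummy detectors, not real models); the sketch only shows where the two pipelines differ: detect people first and estimate one pose per crop, versus detect all keypoints first and group them into people.

```python
def top_down(image, person_detector, single_pose_net):
    """Top-down: detect each person first, then estimate one pose per crop."""
    return [single_pose_net(crop) for crop in person_detector(image)]

def bottom_up(image, keypoint_net, grouper):
    """Bottom-up: detect all keypoints at once, then group them into people."""
    return grouper(keypoint_net(image))

# Dummy stand-ins so the sketch runs end to end (illustration only).
def person_detector(image):
    return [image[:2], image[2:]]      # pretend: two person crops

def single_pose_net(crop):
    return {"keypoints": len(crop)}    # pretend: pose for one person

def keypoint_net(image):
    return list(range(len(image)))     # pretend: all keypoints in the image

def grouper(keypoints):
    return [keypoints[i:i + 2] for i in range(0, len(keypoints), 2)]

image = [0, 1, 2, 3]
poses_td = top_down(image, person_detector, single_pose_net)
poses_bu = bottom_up(image, keypoint_net, grouper)
```

Top-down cost scales with the number of detected people, while bottom-up runs the keypoint detector once and pushes the per-person work into the grouping step, which is the practical trade-off behind the survey's categorization.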