8 research outputs found
An Algorithmic-Software Method for Compensating DQ Skinning Defects
This thesis is dedicated to the development of a 3D model post-processing method that reduces the artifacts of dual quaternion skinning.
The proposed method significantly improves the visual quality of animation in the areas where the artifacts are most obvious, while skipping the problematic areas where the artifacts are less noticeable but would require more complex calculations to remove, and it provides smooth transitions between these zones.
A software implementation of the proposed method was developed in the form of a plugin for the Unity engine. The plugin performs its calculations in compute shaders for higher performance, supports blend shapes, allows setting the compensation coefficient both for the model as a whole and for individual vertices, achieves performance comparable to that of the built-in skinning (as long as the IL2CPP compiler is used), automatically detects and fixes common setup errors, and is compatible with the DirectX, OpenGL, Vulkan and Metal graphics APIs.
A benchmark of the developed implementation shows that the developed DQ skinning is only 20% slower than the built-in linear skinning system, and that the additional post-processing of the model slows skinning down by a further 8% in the worst case. To achieve this performance, however, the IL2CPP compiler must be used.
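The kind of compensation this abstract describes can be pictured as a per-vertex blend between the dual-quaternion and linear-blend skinning results, steered by a compensation coefficient. The function below is a minimal sketch of that idea; the names and the simple linear blend are assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np

def compensate_dqs(v_dqs, v_lbs, coeff):
    """Blend dual-quaternion-skinned positions toward the linear-blend
    result, per vertex.

    v_dqs, v_lbs : (n, 3) deformed positions from DQ and linear skinning
    coeff        : scalar or (n,) compensation coefficient in [0, 1];
                   0 keeps the DQ result, 1 falls back to linear skinning
    """
    c = np.atleast_1d(np.asarray(coeff, dtype=float)).reshape(-1, 1)
    return (1.0 - c) * np.asarray(v_dqs) + c * np.asarray(v_lbs)
```

A scalar `coeff` broadcasts across all vertices, corresponding to the per-model coefficient mentioned in the abstract; a per-vertex array corresponds to the per-vertex setting.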
A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint
3D shape editing is widely used in a range of applications such as movie
production, computer games and computer aided design. It is also a popular
research topic in computer graphics and computer vision. In past decades,
researchers have developed a series of editing methods to make the editing
process faster, more robust, and more reliable. Traditionally, the deformed
shape is determined by the optimal transformation and weights for an energy
term. With increasing availability of 3D shapes on the Internet, data-driven
methods were proposed to improve the editing results. More recently, as deep
neural networks became popular, many deep learning based editing methods have
been developed in this field, which are naturally data-driven. We mainly survey
recent research works from the geometric viewpoint to the emerging neural
deformation techniques and categorize them into organic shape editing methods
and man-made model editing methods. Both traditional methods and recent neural
network based methods are reviewed.
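The "optimal transformation and weights for an energy term" formulation can be illustrated with one of the simplest members of that family, soft-constrained Laplacian editing: preserve the mesh's differential coordinates in a least-squares sense while pulling handle vertices toward user-specified targets. This is a generic sketch of the energy-minimization viewpoint, not any specific surveyed method:

```python
import numpy as np

def laplacian_edit(verts, L, handles, targets, w=100.0):
    """Soft-constrained Laplacian mesh editing.

    verts   : (n, d) rest-pose vertex positions
    L       : (n, n) mesh Laplacian matrix
    handles : indices of constrained vertices
    targets : (len(handles), d) target positions for the handles
    w       : weight of the soft handle constraints
    """
    verts = np.asarray(verts, dtype=float)
    n = verts.shape[0]
    delta = L @ verts                      # rest differential coordinates
    C = np.zeros((len(handles), n))        # handle selection matrix
    C[np.arange(len(handles)), handles] = 1.0
    # minimize ||L x - delta||^2 + w^2 ||C x - targets||^2
    A = np.vstack([L, w * C])
    b = np.vstack([delta, w * np.asarray(targets, dtype=float)])
    out, *_ = np.linalg.lstsq(A, b, rcond=None)
    return out
```

On a three-vertex chain with the path-graph Laplacian, pinning one endpoint and lifting the other bends the middle vertex smoothly between them, which is exactly the behavior the energy trades off.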
Robust Hand Motion Capture and Physics-Based Control for Grasping in Real Time
Hand motion capture technologies are being explored due to high demand in fields such as video games, virtual reality, sign language recognition, human-computer interaction, and robotics. However, existing systems suffer from a few limitations: they are high-cost (expensive capture devices), intrusive (additional wear-on sensors or complex configurations), and restrictive (limited motion varieties and restricted capture space). This dissertation mainly focuses on exploring algorithms and applications for a hand motion capture system that is low-cost, non-intrusive, low-restriction, high-accuracy, and robust.
More specifically, we develop a real-time, fully automatic hand tracking system using a low-cost depth camera. We first introduce an efficient shape-indexed cascaded pose regressor that directly estimates 3D hand poses from depth images. A unique property of our hand pose regressor is that it utilizes a low-dimensional parametric hand geometric model to learn 3D shape-indexed features robust to variations in hand shapes, viewpoints and hand poses. We further introduce a hybrid tracking scheme that effectively complements our hand pose regressor with model-based hand tracking. In addition, we develop a rapid 3D hand shape modeling method that uses a small number of depth images to accurately construct a subject-specific skinned mesh model for hand tracking. This step not only automates the whole tracking system but also improves the robustness and accuracy of model-based tracking and hand pose regression.
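The cascaded regression scheme described above can be sketched generically: each stage samples features relative to the current pose estimate (which is what makes them "shape-indexed") and predicts an additive pose update. The toy feature function and regressors below are placeholders, not the dissertation's learned models:

```python
import numpy as np

def cascaded_pose_regression(sample_features, init_pose, stages):
    """Run a cascade of pose regressors.

    sample_features : callable mapping the current pose estimate to a
                      feature vector, re-evaluated at every stage
    init_pose       : (d,) initial pose estimate
    stages          : list of callables, each mapping features to a
                      pose update
    """
    pose = np.asarray(init_pose, dtype=float).copy()
    for regress in stages:
        # features are re-indexed by the current pose before each update
        pose = pose + regress(sample_features(pose))
    return pose
```

In a toy setting where the features are simply the residual to a known target pose and each stage regresses half of that residual, three stages shrink the initial error by a factor of 0.5 cubed, illustrating how the cascade refines the estimate stage by stage.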
Additionally, we propose a physically realistic human grasping synthesis method that is capable of grasping a wide variety of objects. Given an object to be grasped, our method computes the required controls (e.g. forces and torques) that advance the simulation to achieve realistic grasping. Our method combines the power of data-driven synthesis and physics-based grasping control. We first introduce a data-driven method to synthesize a realistic grasping motion from large sets of prerecorded grasping motion data. We then transform the synthesized kinematic motion into a physically realistic one by utilizing our online physics-based motion control method. In addition, we provide a performance interface which allows the user to act out before a depth camera to control a virtual object.
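Turning a synthesized kinematic motion into a physically simulated one is commonly done with joint-space PD tracking control: torques that pull the simulated pose toward the kinematic reference. Whether the dissertation uses exactly this controller is an assumption, but it illustrates the "required controls" the abstract mentions:

```python
import numpy as np

def pd_tracking_torques(q, q_dot, q_ref, kp=300.0, kd=None):
    """Joint torques driving simulated joint angles q toward a
    kinematic reference pose q_ref.

    kp : proportional (stiffness) gain
    kd : derivative (damping) gain; defaults to critical damping
         for unit joint inertia
    """
    if kd is None:
        kd = 2.0 * np.sqrt(kp)
    return kp * (np.asarray(q_ref) - np.asarray(q)) - kd * np.asarray(q_dot)
```

At each simulation step the controller is fed the current simulated state and the next pose of the synthesized grasping motion; the resulting torques advance the physics simulation, so contact forces emerge from the simulator rather than from the kinematic data.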
Accurate Human Motion Capture and Modeling using Low-cost Sensors
Motion capture technologies, especially those combined with multiple kinds of sensory technologies to capture both kinematic and dynamic information, are widely used in a variety of fields such as biomechanics, robotics, and health. However, many existing systems suffer from limitations of being intrusive, restrictive, and expensive.
This dissertation explores two aspects of motion capture systems that are low-cost, non-intrusive, high-accuracy, and easy to use for common users, including both full-body kinematics and dynamics capture, and user-specific hand modeling.
More specifically, we present a new method for full-body motion capture that uses input data captured by three depth cameras and a pair of pressure-sensing shoes. Our system is appealing because it is fully automatic and can accurately reconstruct both full-body kinematic and dynamic data. We introduce a highly accurate tracking process that automatically reconstructs 3D skeletal poses using depth data, foot pressure data, and detailed full-body geometry. We also develop an efficient physics-based motion reconstruction algorithm for solving internal joint torques and contact forces based on contact pressure information and 3D poses from the kinematic tracking process.
In addition, we present a novel low-dimensional parametric model for 3D hand modeling and synthesis. We construct a low-dimensional parametric model to compactly represent hand shape variations across individuals and enhance it by adding Linear Blend Skinning (LBS) for pose deformation. We also introduce an efficient iterative approach to learn the parametric model from a large unaligned scan database. Our model is compact, expressive, and produces a natural-looking LBS model for pose deformation, which allows for a variety of applications ranging from user-specific hand modeling to skinning weights transfer and model-based hand tracking.
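Linear Blend Skinning itself, which the abstract adds to the parametric hand model for pose deformation, is compact enough to state directly. A minimal vectorized sketch, assuming the skinning weights already sum to one per vertex:

```python
import numpy as np

def linear_blend_skinning(verts, weights, bones):
    """Deform rest-pose vertices with LBS.

    verts   : (n, 3) rest-pose positions
    weights : (n, b) per-vertex skinning weights, rows summing to 1
    bones   : (b, 4, 4) homogeneous bone transform matrices
    """
    vh = np.hstack([verts, np.ones((len(verts), 1))])   # (n, 4) homogeneous
    per_bone = np.einsum('bij,nj->bni', bones, vh)      # each bone applied to all verts
    blended = np.einsum('nb,bni->ni', weights, per_bone)  # weight-average per vertex
    return blended[:, :3]
```

With identity bone transforms the mesh is unchanged, and a vertex fully bound to a translated bone follows that bone exactly; the well-known LBS artifacts (volume loss at twists) arise because the blend averages transformed positions linearly.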