11 research outputs found
Method for Forming a Digital Shadow of the Human Movement Process Based on Combining Motion Capture Systems
The article addresses the problem of forming a digital shadow of the process of human movement. An analysis of the subject area showed the need to formalize the creation of digital shadows for simulating human movements in virtual space, for testing software and hardware systems that operate on the basis of human actions, and for various systems of musculoskeletal rehabilitation. The analysis also revealed that, among existing approaches to human motion capture, none is both universal and stable under varying environmental conditions. A method for forming a digital shadow was therefore developed based on combining and synchronizing data from three motion capture systems: virtual reality trackers, a motion capture suit, and cameras using computer vision technologies. Combining these systems yields a comprehensive estimate of a person's position and state regardless of environmental conditions such as electromagnetic interference or illumination. To implement the proposed method, the digital shadow of the human movement process was formalized, including a description of the mechanisms for collecting and processing data from the different motion capture systems, as well as the stages of combining, filtering, and synchronizing the data. The scientific novelty of the method lies in formalizing the collection of human movement data and the combination and synchronization of the motion capture hardware used to create digital shadows of the human movement process.
The obtained theoretical results will serve as the basis for a software abstraction of the digital shadow in information systems, used to test and simulate a person and to model their reaction to external stimuli by generalizing the collected arrays of movement data.
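The combining, filtering, and synchronizing stages described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names, the toy one-dimensional streams, and the confidence weights are all assumptions. The idea is to resample each source's timestamped stream onto a common clock, then take a confidence-weighted average, down-weighting whichever source its environment degrades.

```python
# Hypothetical sketch of fusing timestamped pose samples from three
# motion capture sources: VR trackers, an inertial mocap suit, and
# CV-based cameras. All names, values, and weights are illustrative.
from bisect import bisect_left

def resample(stream, t):
    """Linearly interpolate a sorted (timestamp, value) stream at time t."""
    times = [s[0] for s in stream]
    i = bisect_left(times, t)
    if i == 0:
        return stream[0][1]          # clamp before the first sample
    if i == len(stream):
        return stream[-1][1]         # clamp after the last sample
    (t0, v0), (t1, v1) = stream[i - 1], stream[i]
    a = (t - t0) / (t1 - t0)
    return v0 + a * (v1 - v0)

def fuse(streams, weights, t):
    """Confidence-weighted average of all sources at a common time t.
    A source degraded by its environment (magnetic interference for the
    suit, poor lighting for the cameras) would get a lower weight."""
    total = sum(weights)
    return sum(w * resample(s, t) for s, w in zip(streams, weights)) / total

# Toy 1-D "joint position" streams sampled at different rates.
vr     = [(0.0, 0.0), (0.1, 1.0), (0.2, 2.0)]
suit   = [(0.0, 0.1), (0.05, 0.6), (0.15, 1.6), (0.2, 2.1)]
camera = [(0.0, -0.1), (0.2, 1.9)]

fused = fuse([vr, suit, camera], weights=[1.0, 1.0, 0.5], t=0.1)
```

A real pipeline would fuse full skeletal poses (and handle clock offset estimation between devices), but the synchronize-then-weighted-combine structure is the same.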
DiffMimic: Efficient Motion Mimicking with Differentiable Physics
Motion mimicking is a foundational task in physics-based character animation.
However, most existing motion mimicking methods are built upon reinforcement
learning (RL) and suffer from heavy reward engineering, high variance, and slow
convergence with hard explorations. Specifically, they usually take tens of
hours or even days of training to mimic a simple motion sequence, resulting in
poor scalability. In this work, we leverage differentiable physics simulators
(DPS) and propose an efficient motion mimicking method dubbed DiffMimic. Our
key insight is that DPS casts a complex policy learning task to a much simpler
state matching problem. In particular, DPS learns a stable policy by analytical
gradients with ground-truth physical priors, hence leading to significantly
faster and more stable convergence than RL-based methods. Moreover, to escape from
local optima, we utilize a Demonstration Replay mechanism to enable stable
gradient backpropagation in a long horizon. Extensive experiments on standard
benchmarks show that DiffMimic has better sample efficiency and time
efficiency than existing methods (e.g., DeepMimic). Notably, DiffMimic allows a
physically simulated character to learn Backflip after 10 minutes of training
and be able to cycle it after 3 hours of training, while the existing approach
may require about a day of training to cycle Backflip. More importantly, we
hope DiffMimic can benefit more differentiable animation systems with
techniques like differentiable clothes simulation in future research.Comment: ICLR 2023 Code is at https://github.com/jiawei-ren/diffmimic Project
page is at https://diffmimic.github.io
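The core idea, that a differentiable simulator reduces policy learning to a state-matching problem solved with analytical gradients rather than RL, can be illustrated with a deliberately tiny toy. This is not the DiffMimic code: the one-dimensional dynamics, learning rate, and loss below are assumptions made purely for illustration.

```python
# Toy illustration of differentiable-physics state matching.
# The "simulator" is the 1-D kinematic rule x[k+1] = x[k] + a[k];
# the gradient of the matching loss w.r.t. the actions is computed
# analytically by backpropagating through these dynamics.

def rollout(actions, x0=0.0):
    xs = [x0]
    for a in actions:
        xs.append(xs[-1] + a)
    return xs

def loss_and_grad(actions, ref):
    """State-matching loss and its analytical gradient w.r.t. actions."""
    xs = rollout(actions)
    loss = sum((x - r) ** 2 for x, r in zip(xs, ref))
    # Action a[j] influences every later state, so its gradient
    # accumulates the error of all states after step j.
    grad = [sum(2 * (xs[k] - ref[k]) for k in range(j + 1, len(ref)))
            for j in range(len(actions))]
    return loss, grad

ref = [0.0, 0.5, 1.0, 1.5]          # reference motion to mimic
actions = [0.0, 0.0, 0.0]
for _ in range(200):                 # plain gradient descent, no RL
    _, grad = loss_and_grad(actions, ref)
    actions = [a - 0.1 * g for a, g in zip(actions, grad)]

final_loss, _ = loss_and_grad(actions, ref)
```

With ground-truth gradients the optimizer converges directly to the actions that reproduce the reference states, which is the sense in which DPS turns policy learning into simple state matching.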
Learning predict-and-simulate policies from unorganized human motion data
The goal of this research is to create physically simulated biped characters equipped with a rich repertoire of motor skills. The user can control the characters interactively by modulating their control objectives. The characters can interact physically with each other and with the environment. We present a novel network-based algorithm that learns control policies from unorganized, minimally-labeled human motion data. The network architecture for interactive character animation incorporates an RNN-based motion generator into a DRL-based controller for physics simulation and control. The motion generator guides forward dynamics simulation by feeding a sequence of future motion frames to track. The rich future prediction facilitates policy learning from large training data sets. We demonstrate the effectiveness of our approach with biped characters that learn a variety of dynamic motor skills from large, unorganized data and react to unexpected perturbation beyond the scope of the training data.
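The predict-and-simulate structure, a motion generator feeding future frames for a physics-based controller to track, can be caricatured as follows. This is a hypothetical toy, not the paper's RNN/DRL architecture: the constant-speed "motion", the PD gains, and all function names are assumptions.

```python
# Toy sketch of predict-and-simulate: at each step, a generator
# proposes a short window of future target frames, and a tracking
# controller drives forward dynamics simulation toward them.

def motion_generator(t, horizon=3, dt=0.1):
    """Stand-in for the RNN motion generator: predicts the next
    `horizon` target positions (here just a constant-speed line)."""
    return [0.8 * (t + (k + 1) * dt) for k in range(horizon)]

def tracking_controller(x, v, targets, kp=40.0, kd=8.0):
    """Track the first predicted frame with PD feedback; in the paper
    the whole predicted window informs a learned DRL policy instead."""
    return kp * (targets[0] - x) - kd * v

x, v, t, dt = 0.0, 0.0, 0.0, 0.1
for _ in range(100):                 # forward dynamics simulation
    targets = motion_generator(t)
    a = tracking_controller(x, v, targets)
    v += a * dt                      # semi-implicit Euler integration
    x += v * dt
    t += dt
```

After the transient dies out, the simulated state follows the generated motion with a small lag, which is the role the future-frame prediction plays in guiding the simulation.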