389 research outputs found

    Accurate Human Motion Capture and Modeling using Low-cost Sensors

    Motion capture technologies, especially those combined with multiple kinds of sensory technologies to capture both kinematic and dynamic information, are widely used in a variety of fields such as biomechanics, robotics, and health. However, many existing systems suffer from limitations of being intrusive, restrictive, and expensive. This dissertation explores two aspects of motion capture systems that are low-cost, non-intrusive, high-accuracy, and easy to use for common users, including both full-body kinematics and dynamics capture, and user-specific hand modeling. More specifically, we present a new method for full-body motion capture that uses input data captured by three depth cameras and a pair of pressure-sensing shoes. Our system is appealing because it is fully automatic and can accurately reconstruct both full-body kinematic and dynamic data. We introduce a highly accurate tracking process that automatically reconstructs 3D skeletal poses using depth data, foot pressure data, and detailed full-body geometry. We also develop an efficient physics-based motion reconstruction algorithm for solving internal joint torques and contact forces based on contact pressure information and 3D poses from the kinematic tracking process. In addition, we present a novel low-dimensional parametric model for 3D hand modeling and synthesis. We construct a low-dimensional parametric model to compactly represent hand shape variations across individuals and enhance it by adding Linear Blend Skinning (LBS) for pose deformation. We also introduce an efficient iterative approach to learn the parametric model from a large unaligned scan database. Our model is compact, expressive, and produces a natural-looking LBS model for pose deformation, which allows for a variety of applications ranging from user-specific hand modeling to skinning weights transfer and model-based hand tracking.
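The hand model's pose deformation relies on Linear Blend Skinning, where each vertex moves by a weighted blend of per-bone rigid transforms. A minimal sketch of that blending step (the array shapes and toy bones are illustrative, not the dissertation's actual model):

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Deform rest-pose vertices by blending per-bone rigid transforms.

    vertices:        (V, 3) rest-pose positions
    weights:         (V, B) skinning weights, rows sum to 1
    bone_transforms: (B, 4, 4) homogeneous bone transforms
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])                  # (V, 4)
    # Blend the transforms per vertex, then apply once per vertex.
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)   # (V, 4, 4)
    deformed = np.einsum('vij,vj->vi', blended, homo)              # (V, 4)
    return deformed[:, :3]

# Toy example: one vertex skinned equally to two bones.
rest = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
T0 = np.eye(4)                       # bone 0 stays put
T1 = np.eye(4); T1[0, 3] = 2.0       # bone 1 translates +2 in x
out = linear_blend_skinning(rest, w, np.stack([T0, T1]))
# vertex moves halfway between the bones: [1.0, 0.0, 0.0]
```

The blend-then-apply order is what produces LBS's well-known volume-loss artifacts at bent joints, which is one reason learned models refine the skinning weights.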

    A survey on human performance capture and animation

    With the rapid development of computing technology, three-dimensional (3D) human body models and their dynamic motions are widely used in the digital entertainment industry. Human performance mainly involves human body shapes and motions. Key research problems include how to capture and analyze static geometric appearance and dynamic movement of human bodies, and how to simulate human body motions with physical effects. In this survey, according to main research directions of human body performance capture and animation, we summarize recent advances in key research topics, namely human body surface reconstruction, motion capture and synthesis, as well as physics-based motion simulation, and further discuss future research problems and directions. We hope this will be helpful for readers to have a comprehensive understanding of human performance capture and animation.

    Data-Driven Approach to Simulating Realistic Human Joint Constraints

    Modeling realistic human joint limits is important for applications involving physical human-robot interaction. However, setting appropriate human joint limits is challenging because it is pose-dependent: the range of joint motion varies depending on the positions of other bones. This paper introduces a new technique to accurately simulate human joint limits in physics simulation. We propose to learn an implicit equation to represent the boundary of valid human joint configurations from real human data. The function in the implicit equation is represented by a fully connected neural network whose gradients can be efficiently computed via back-propagation. Using gradients, we can efficiently enforce realistic human joint limits through constraint forces in a physics engine or as constraints in an optimization problem. Comment: To appear at ICRA 2018; 6 pages, 9 figures; for associated video, see https://youtu.be/wzkoE7wCbu
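The core idea, an implicit boundary function whose gradient supplies constraint directions, can be sketched with a tiny network and hand-written back-propagation. The architecture and random weights below are hypothetical stand-ins for the learned model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-layer network f(q): joint configuration -> scalar.
# By convention here, f(q) < 0 would mean the pose is inside the valid
# range; only the gradient machinery is demonstrated, not a trained model.
W1, b1 = rng.standard_normal((16, 4)), np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)

def f(q):
    h = np.tanh(W1 @ q + b1)
    return (W2 @ h + b2)[0]

def grad_f(q):
    # Back-propagate by hand: d tanh(u)/du = 1 - tanh(u)^2
    u = W1 @ q + b1
    h = np.tanh(u)
    dh = W2.flatten() * (1.0 - h**2)   # (16,)
    return W1.T @ dh                   # (4,) = df/dq

q = np.array([0.1, -0.2, 0.3, 0.0])
# Finite-difference check of the hand-written gradient.
eps = 1e-6
g_num = np.array([(f(q + eps * e) - f(q - eps * e)) / (2 * eps)
                  for e in np.eye(4)])
ok = np.allclose(grad_f(q), g_num, atol=1e-5)  # analytic matches numeric
```

In a physics engine, a gradient like `grad_f(q)` gives the direction along which a constraint force would push a joint configuration back toward the valid region.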

    Comprehensive and accurate estimation of lower body movement using few wearable sensors

    Human pose estimation involves tracking the position and orientation (i.e., pose) of body segments, and estimating joint kinematics. It finds application in robotics, virtual reality, animation, and healthcare. Recent miniaturisation of inertial measurement units (IMUs) has paved the path towards inertial motion capture systems (MCS) suitable for use in unstructured environments. However, commercial inertial MCS attach one-sensor-per-body-segment (OSPS) which can be too cumbersome for daily use. A reduced-sensor-count configuration, where IMUs are placed on a subset of body segments, can improve user comfort while also reducing setup time and system cost. This work aims to develop pose estimation algorithms that track lower body motion using as few sensors as possible, towards developing a comfortable MCS for daily routine use. Such a tool can facilitate interactive rehabilitation, performance improvement, and the study of movement disorder progression to potentially enable predictive diagnostics. This thesis presents pose estimation algorithms that utilise biomechanical constraints, additional distance measurements, and balance assumptions to infer information lost from using fewer sensors. Specifically, it presents a novel use of Lie group representation of pose alongside a constrained extended Kalman filter for estimating pelvis, thigh, shank, and foot kinematics using only two or three IMUs. The algorithms iteratively use linear kinematic equations to predict the next state, leverage indirect observations and assumptions (e.g., pelvis height, zero-velocity update, flat-floor assumption, inter-IMU distance), and enforce biomechanical constraints (e.g., constant body segment length, hinged knee joints, range of motion). The algorithm was comprehensively evaluated on nine healthy subjects who performed free walking, jogging, and other random movements within a 4×4 m² room using benchmark optical and inertial (i.e., OSPS) MCS.
In contrast to existing benchmark datasets, both direct kinematics (e.g., Vicon plug-in gait commonly used in gait analysis) and inverse kinematics (used in robotics and musculoskeletal modelling) pose reconstruction, along with the corresponding measurements, are shared publicly. The mean position root-mean-square error relative to the mid-pelvis origin was 5.3±1.0 cm, while the sagittal knee and hip joint angle correlation coefficients were 0.85±0.05 and 0.89±0.05, indicating promising performance for joint kinematics in the sagittal plane.
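One of the biomechanical constraints named above, constant body segment length, is typically enforced by projecting the state estimate back onto the constraint surface after each filter update. A simplified stand-in for that projection step (the thesis's full Lie-group constrained-EKF formulation is more involved):

```python
import numpy as np

def project_segment_length(x, i, j, L):
    """Project two 3D joint positions in a flat state vector so that the
    segment between them has fixed length L. A minimal sketch of the
    constraint-enforcement step of a constrained Kalman filter."""
    p_i, p_j = x[i:i + 3], x[j:j + 3]
    d = p_j - p_i
    n = np.linalg.norm(d)
    # Split the length error evenly between both endpoints.
    correction = 0.5 * (n - L) * d / n
    x = x.copy()
    x[i:i + 3] = p_i + correction
    x[j:j + 3] = p_j - correction
    return x

# Hip at the origin, knee drifted out to distance 1.2; thigh length 1.0.
state = np.array([0.0, 0.0, 0.0, 1.2, 0.0, 0.0])
state = project_segment_length(state, 0, 3, 1.0)
# the hip-knee distance is now exactly the known segment length
```

Projecting after the update, rather than baking the constraint into the measurement model, keeps the filter equations linear while still guaranteeing anatomically consistent estimates.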

    The Supernumerary Robotic 3rd Thumb for Skilled Music Tasks

    Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept enhances human performance and assists users in daily tasks. An important research question is, however, whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span, thus functionally augmenting their skills and making it technically feasible to play with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real-time. A proof-of-concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11-finger play within a few hours.

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features allowing them to be used in everyday life and clinical applications, where gold-standard material is both too expensive and time-consuming to be used. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology from hardware design to software optimization.

    ๋™์˜์ƒ ์† ์‚ฌ๋žŒ ๋™์ž‘์˜ ๋ฌผ๋ฆฌ ๊ธฐ๋ฐ˜ ์žฌ๊ตฌ์„ฑ ๋ฐ ๋ถ„์„

    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2021. Advisor: ์ด์ œํฌ. In computer graphics, simulating and analyzing human movement have been interesting research topics since the 1960s. Still, simulating realistic human movements in a 3D virtual world is a challenging task in computer graphics. In general, motion capture techniques have been used. Although motion capture data guarantees realistic results and high quality, much equipment is required to capture motion, and the process is complicated. Recently, techniques for estimating 3D human pose from 2D video have developed remarkably. Researchers in computer graphics and computer vision have attempted to reconstruct various human motions from video data. However, existing methods cannot robustly estimate dynamic actions and do not work on videos filmed with a moving camera. In this thesis, we propose methods to reconstruct dynamic human motions from in-the-wild videos and to control the motions. First, we developed a framework to reconstruct motion from videos using prior physics knowledge. For dynamic motions such as a backspin, the poses estimated by a state-of-the-art method are incomplete and include unreliable root trajectories or lack intermediate poses. We designed a reward function in the deep reinforcement learning controller using poses and hints extracted from videos, and learned a policy to simultaneously reconstruct motion and control a virtual character. Second, we simulated figure skating movements in video. Skating sequences consist of fast and dynamic movements on ice, hindering the acquisition of motion data. Thus, we extracted 3D key poses from a video to then successfully replicate several figure skating movements using trajectory optimization and a deep reinforcement learning controller. Third, we devised an algorithm for gait analysis through video of patients with movement disorders.
After acquiring the patients' joint positions from 2D video processed by a deep learning network, the 3D absolute coordinates were estimated, and gait parameters such as gait velocity, cadence, and step length were calculated. Additionally, we analyzed the optimization criteria of human walking by using a 3D musculoskeletal humanoid model and physics-based simulation. For two criteria, namely, the minimization of muscle activation and joint torque, we compared simulation data with real human data for analysis. To demonstrate the effectiveness of the first two research topics, we verified the reconstruction of dynamic human motions from 2D videos using physics-based simulations. For the last two research topics, we evaluated our results with real human data.
Contents: 1 Introduction; 2 Background (2.1 Pose Estimation from 2D Video; 2.2 Motion Reconstruction from Monocular Video; 2.3 Physics-Based Character Simulation and Control; 2.4 Motion Reconstruction Leveraging Physics; 2.5 Human Motion Control, 2.5.1 Figure Skating Simulation; 2.6 Objective Gait Analysis; 2.7 Optimization for Human Movement Simulation, 2.7.1 Stability Criteria); 3 Human Dynamics from Monocular Video with Dynamic Camera Movements (3.1 Introduction; 3.2 Overview; 3.3 Pose and Contact Estimation; 3.4 Learning Human Dynamics, 3.4.1 Policy Learning, 3.4.2 Network Training, 3.4.3 Scene Estimator; 3.5 Results, 3.5.1 Video Clips, 3.5.2 Comparison of Contact Estimators, 3.5.3 Ablation Study, 3.5.4 Robustness; 3.6 Discussion); 4 Figure Skating Simulation from Video (4.1 Introduction; 4.2 System Overview; 4.3 Skating Simulation, 4.3.1 Non-holonomic Constraints, 4.3.2 Relaxation of Non-holonomic Constraints; 4.4 Data Acquisition; 4.5 Trajectory Optimization and Control, 4.5.1 Trajectory Optimization, 4.5.2 Control; 4.6 Experimental Results; 4.7 Discussion); 5 Gait Analysis Using Pose Estimation Algorithm with 2D-video of Patients (5.1 Introduction; 5.2 Method, 5.2.1 Patients and video recording, 5.2.2 Standard protocol approvals, registrations, and patient consents, 5.2.3 3D Pose estimation from 2D video, 5.2.4 Gait parameter estimation, 5.2.5 Statistical analysis; 5.3 Results, 5.3.1 Validation of video-based analysis of the gait, 5.3.2 Gait analysis; 5.4 Discussion, 5.4.1 Validation with the conventional sensor-based method, 5.4.2 Analysis of gait and turning in TUG, 5.4.3 Correlation with clinical parameters, 5.4.4 Limitations; 5.5 Supplementary Material); 6 Control Optimization of Human Walking (6.1 Overview; 6.2 Methods, 6.2.1 Musculoskeletal model, 6.2.2 Optimization, 6.2.3 Control co-activation level, 6.2.4 Push-recovery experiment; 6.3 Results; 6.4 Discussion); 7 Conclusion (7.1 Future work). Doctor of Philosophy
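The gait parameters named in the abstract (gait velocity, cadence, step length) follow directly from detected heel-strike times and positions. A minimal sketch under the assumption of clean, alternating heel strikes (the function and its inputs are illustrative, not the thesis's pipeline):

```python
import numpy as np

def gait_parameters(heel_strike_times, heel_strike_positions):
    """Basic spatiotemporal gait parameters from successive heel strikes.

    heel_strike_times:     (N,) seconds, alternating left/right strikes
    heel_strike_positions: (N, 2) ground-plane positions in metres
    """
    steps = len(heel_strike_times) - 1
    duration = heel_strike_times[-1] - heel_strike_times[0]
    step_lengths = np.linalg.norm(
        np.diff(heel_strike_positions, axis=0), axis=1)
    distance = step_lengths.sum()
    return {
        "cadence_steps_per_min": 60.0 * steps / duration,
        "gait_velocity_m_per_s": distance / duration,
        "mean_step_length_m": step_lengths.mean(),
    }

# Toy walk: 4 steps of 0.6 m, one every 0.5 s.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
p = np.array([[0.0, 0], [0.6, 0], [1.2, 0], [1.8, 0], [2.4, 0]])
params = gait_parameters(t, p)
# cadence 120 steps/min, velocity 1.2 m/s, mean step length 0.6 m
```

In practice the hard part is the input: heel-strike events must first be detected from the estimated 3D joint trajectories, which is where the pose-estimation accuracy matters most.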

    Non-contact measures to monitor hand movement of people with rheumatoid arthritis using a monocular RGB camera

    Hand movements play an essential role in a person's ability to interact with the environment. In hand biomechanics, the range of joint motion is a crucial metric to quantify changes due to degenerative pathologies, such as rheumatoid arthritis (RA). RA is a chronic condition where the immune system mistakenly attacks the joints, particularly those in the hands. Optoelectronic motion capture systems are gold-standard tools to quantify changes but are challenging to adopt outside laboratory settings. Deep learning executed on standard video data can capture RA participants in their natural environments, potentially supporting objectivity in remote consultation. The three main research aims in this thesis were 1) to assess the extent to which current deep learning architectures, which have been validated for quantifying motion of other body segments, can be applied to hand kinematics using monocular RGB cameras, 2) to localise where in videos the hand motions of interest are to be found, 3) to assess the validity of 1) and 2) to determine disease status in RA. First, hand kinematics for twelve healthy participants, captured with OpenPose were benchmarked against those captured using an optoelectronic system, showing acceptable instrument errors below 10°. Then, a gesture classifier was tested to segment video recordings of twenty-two healthy participants, achieving an accuracy of 93.5%. Finally, OpenPose and the classifier were applied to videos of RA participants performing hand exercises to determine disease status. The inferred disease activity exhibited agreement with the in-person ground truth in nine out of ten instances, outperforming virtual consultations, which agreed only six times out of ten. These results demonstrate that this approach is more effective than estimated disease activity performed by human experts during video consultations.
The end goal sets the foundation for a tool that RA participants can use to observe their disease activity from their home. Open Access
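Quantifying range of joint motion from OpenPose output largely reduces to measuring the angle at a joint keypoint between its two adjacent segments. A minimal sketch (the keypoint choice and coordinates are illustrative, not OpenPose's actual output):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b, in degrees, formed by points a-b-c,
    e.g. a finger joint from 2D hand keypoints."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against round-off pushing |cos| slightly above 1.
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy 2D keypoints: wrist, index knuckle, and first phalanx tip.
wrist   = np.array([0.0, 0.0])
knuckle = np.array([0.0, 1.0])
tip     = np.array([1.0, 1.0])
angle = joint_angle(wrist, knuckle, tip)  # a right angle in this toy pose
```

Tracking such angles across frames of an exercise video is what yields the per-joint range-of-motion curves that were benchmarked against the optoelectronic system.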

    Egocentric Reconstruction of Human Bodies for Real-time Mobile Telepresence

    A mobile 3D acquisition system has the potential to make telepresence significantly more convenient, available to users anywhere, anytime, without relying on any instrumented environments. Such a system can be implemented using egocentric reconstruction methods, which rely only on wearable sensors, such as head-worn cameras and body-worn inertial measurement units. Prior egocentric reconstruction methods suffer from incomplete body visibility as well as insufficient sensor data. This dissertation investigates an egocentric 3D capture system relying only on sensors embedded in commonly worn items such as eyeglasses, wristwatches, and shoes. It introduces three advances in egocentric reconstruction of human bodies. (1) A parametric-model-based reconstruction method that overcomes incomplete body surface visibility by estimating the user's body pose and facial expression, and using the results to re-target a high-fidelity pre-scanned model of the user. (2) A learning-based visual-inertial body motion reconstruction system that relies only on eyeglasses-mounted cameras and a few body-worn inertial sensors. This approach overcomes the challenges of self-occlusion and outside-of-camera motions, and allows for unobtrusive real-time 3D capture of the user. (3) A physically plausible reconstruction method based on rigid body dynamics, which reduces motion jitter and prevents interpenetrations between the reconstructed user's model and the objects in the environment such as the ground, walls, and furniture. This dissertation includes experimental results demonstrating the real-time, mobile reconstruction of human bodies in indoor and outdoor scenes, relying only on wearable sensors embedded in commonly-worn objects and overcoming the sparse observation challenges of egocentric reconstruction. The potential usefulness of this approach is demonstrated in a telepresence scenario featuring physical therapy training. Doctor of Philosophy