
    A Simple Algorithm for Assimilating Marker-Based Motion Capture Data During Periodic Human Movement Into Models of Multi-Rigid-Body Systems

    Human movement analysis is often performed with a multi-rigid-body model, whereby reflective-marker-based motion capture data are assimilated into the model to characterize the kinematics and kinetics of the movements quantitatively. The accuracy of such analysis is limited by motion of the markers on the skin relative to the underlying skeleton, referred to as the soft tissue artifact (STA). Here we propose a simple algorithm for assimilating motion capture data recorded during periodic human movements, such as bipedal walking, into multi-rigid-body models in such a way that the assimilated motions are not affected by the STA. The proposed algorithm assumes that STA time profiles during periodic movements are themselves periodic. We then express the unknown STA profiles as Fourier series and show that the Fourier coefficients can be determined optimally based solely on the periodicity assumption and on kinematic constraints requiring that any two adjacent rigid links be connected by a rotary joint, yielding an STA-free assimilated motion that is consistent with the multi-rigid-link model. To assess the efficacy of the algorithm, we performed a numerical experiment using a dynamic model of human gait composed of seven rigid links, on which we placed STA-affected markers, and showed that the algorithm can estimate the STA accurately and retrieve the non-STA-affected true motion of the model. We also confirmed that our STA-removal processing improves the accuracy of inverse dynamics analysis, suggesting the usability of the proposed algorithm for gait analysis.
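    As a concrete illustration of the periodicity idea above, the following Python sketch (synthetic data and all names are assumptions, not the paper's implementation) recovers a periodic STA signal by linear least squares on a truncated Fourier basis; the paper's actual algorithm additionally couples this fit with the rotary-joint kinematic constraints.

    # Minimal sketch: a periodic soft tissue artifact (STA) expressed as a
    # truncated Fourier series, with coefficients recovered by least squares.
    import numpy as np

    def fourier_basis(t, period, n_harmonics):
        """Design matrix [1, cos(k w t), sin(k w t)] for k = 1..n_harmonics."""
        w = 2.0 * np.pi / period
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            cols.append(np.cos(k * w * t))
            cols.append(np.sin(k * w * t))
        return np.column_stack(cols)

    # Synthetic example: two 1 s gait cycles, periodic STA on one marker axis.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 400)
    true_sta = 0.004 * np.cos(2*np.pi*t) + 0.002 * np.sin(4*np.pi*t)
    residual = true_sta + rng.normal(0.0, 2e-4, t.size)  # marker minus rigid model

    A = fourier_basis(t, period=1.0, n_harmonics=4)
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    sta_hat = A @ coeffs                    # estimated STA, subtracted from markers
    print("max estimation error [m]:", np.abs(sta_hat - true_sta).max())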

    PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time

    Marker-less 3D human motion capture from a single colour camera has seen significant progress. However, it is a very challenging and severely ill-posed problem, and in consequence even the most accurate state-of-the-art approaches have significant limitations. Purely kinematic formulations on the basis of individual joints or skeletons, and the frequent frame-wise reconstruction in state-of-the-art methods, greatly limit 3D accuracy and temporal stability compared to multi-view or marker-based motion capture. Further, captured 3D poses are often physically incorrect and biomechanically implausible, or exhibit implausible environment interactions (floor penetration, foot skating, unnatural body leaning and strong shifting in depth), which is problematic for any use case in computer graphics. We therefore present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps. Our algorithm first captures 3D human poses purely kinematically: a CNN infers 2D and 3D joint positions, and an inverse kinematics step then finds space-time coherent joint angles and the global 3D pose. Next, these kinematic reconstructions are used as constraints in a real-time physics-based pose optimiser that accounts for environment constraints (e.g., collision handling and floor placement), gravity, and the biophysical plausibility of human postures. Our approach employs a combination of ground reaction force and residual force for plausible root control, and uses a trained neural network to detect foot contact events in images. Our method captures physically plausible and temporally stable global 3D human motion, without physically implausible postures, floor penetrations or foot skating, from video in real time and in general scenes. The video is available at http://gvv.mpi-inf.mpg.de/projects/PhysCap (16 pages, 11 figures).
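    The following Python fragment is a deliberately loose sketch of the two-stage idea described above (the function name, the smoothing scheme and all numbers are illustrative assumptions, not the authors' optimiser): given per-frame foot heights from a kinematic stage and per-frame contact flags from a detector, it produces a floor-respecting, temporally smoothed trajectory.

    # Toy stand-in for the physics-based refinement stage: pin frames with
    # detected contact to the floor, clamp the rest above it, then smooth.
    import numpy as np

    def refine_foot_height(z_kin, contact, floor_z=0.0, smooth=0.8):
        """z_kin: per-frame foot height from the kinematic stage [m];
        contact: boolean per-frame foot-contact flags from a detector."""
        z = np.empty_like(z_kin)
        prev = z_kin[0]
        for i, (zk, c) in enumerate(zip(z_kin, contact)):
            target = floor_z if c else max(zk, floor_z)     # no floor penetration
            prev = smooth * prev + (1.0 - smooth) * target  # first-order smoothing
            z[i] = prev
        return z

    z_kin = np.array([0.02, -0.01, 0.00, 0.05, 0.12, 0.08])  # toy values [m]
    contact = np.array([True, True, True, False, False, False])
    print(refine_foot_height(z_kin, contact))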

    Technological advancements in the analysis of human motion and posture management through digital devices

    Technological development of motion and posture analysis is progressing rapidly, especially in rehabilitation settings and sports biomechanics. Consequently, clear discrimination among the different measurement systems is required to diversify their use as needed. This review aims to summarize the motion and posture analysis systems currently in use, and to clarify and suggest the approaches suitable for specific cases or contexts. The current gold-standard motion analysis systems, widely used in clinical settings, present several limitations related to marker placement or long procedure times. Fully automated, markerless systems are overcoming these drawbacks for conducting biomechanical studies, especially outside laboratories. Similarly, new posture analysis techniques are emerging, often driven by the need for fast, non-invasive methods that yield high-precision results. These new technologies have also proved effective for children and adolescents with non-specific back pain and postural insufficiencies. These methods are evolving towards standardized measurements and manageable tools for clinical practice, for the early diagnosis of musculoskeletal pathologies and for monitoring each patient's daily improvement. Herein, these devices and their uses are described, providing researchers, clinicians, orthopedists, physical therapists, and sports coaches an effective guide to using new technologies in their practice as instruments of diagnosis, therapy, and prevention.

    Learning discriminative features for human motion understanding

    Human motion understanding has attracted considerable interest in recent research for its applications to video surveillance, content-based search and healthcare. Depending on the capture method, human motion can be recorded in various forms (e.g. skeletal data, video, images). Compared to 2D video and images, skeletal data recorded by motion capture devices contain full 3D movement information. To begin with, we look into a gait analysis problem based on 3D skeletal data. We propose an automatic framework for identifying musculoskeletal and neurological disorders among older people based on 3D skeletal motion data. In this framework, a feature selection strategy and two new gait features are proposed to choose an optimal feature set from the input features so as to optimise classification accuracy. Owing to self-occlusion caused by a single shooting angle, 2D video and images cannot record full 3D geometric information, so viewpoint variation dramatically degrades performance in many 2D-based applications (e.g. arbitrary-view action recognition and image-based 3D human shape reconstruction). Leveraging view-invariance from 3D models is a popular way to improve performance on such 2D computer vision problems. Therefore, in the second contribution, we adopt 3D models built with computer graphics technology to assist in solving the problem of arbitrary-view action recognition. As a solution, a new transfer dictionary learning framework is proposed that utilises computer graphics technologies to synthesise realistic 2D and 3D training videos and can project a real-world 2D video into a view-invariant sparse representation. In the third contribution, 3D models are utilised to build an end-to-end 3D human shape reconstruction system, which can recover the 3D human shape from a single image without any prior parametric model. In contrast to most existing methods, which calculate 3D joint locations, the method proposed in this thesis produces a richer and more useful point-cloud-based representation. Synthesised high-quality 2D images and dense 3D point clouds are used to train a CNN-based encoder and a 3D regression module. In summary, the methods introduced in this thesis explore human motion understanding from 3D to 2D: we investigate how to compensate for the lack of full geometric information in 2D-based applications with view-invariance learnt from 3D models.
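    The abstract does not specify the thesis's feature selection strategy, so the sketch below shows one generic possibility that matches the stated objective (choosing a feature subset that optimises cross-validated classification accuracy): greedy forward selection with an off-the-shelf classifier. The classifier choice and all names are assumptions.

    # Generic greedy forward feature selection driven by cross-validated accuracy.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def forward_select(X, y, max_features=5):
        remaining = list(range(X.shape[1]))
        chosen, best_score = [], 0.0
        while remaining and len(chosen) < max_features:
            # Score each candidate feature added to the current subset.
            scored = [(cross_val_score(SVC(), X[:, chosen + [f]], y, cv=5).mean(), f)
                      for f in remaining]
            acc, f = max(scored)
            if acc <= best_score:       # stop when no feature improves accuracy
                break
            best_score = acc
            chosen.append(f)
            remaining.remove(f)
        return chosen, best_score

    rng = np.random.default_rng(1)      # synthetic demo data
    X = rng.normal(size=(60, 8))
    y = (X[:, 2] + X[:, 5] > 0).astype(int)
    print(forward_select(X, y))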

    Multi-contact Planning on Humans for Physical Assistance by Humanoid

    For robots to interact with humans in close proximity safely and efficiently, a specialized method to compute whole-body robot postures and plan contact locations is required. In our work, a humanoid robot is used as a caregiver performing a physical assistance task. We propose a method for formulating and initializing a non-linear posture generation optimization problem from an intuitive description of the assistance task and the result of processing a human point cloud. The proposed method makes it possible to plan whole-body postures and contact locations on a task-specific surface of a human body, under robot equilibrium, friction cone, torque/joint limit, collision avoidance, and task-inherent constraints. The framework uniformly handles any arbitrary surface generated from point clouds, autonomously planning contact locations and interaction forces on potentially moving, movable, and deformable surfaces, which occur in direct physical human-robot interaction. We conclude the paper with examples of posture generation for physical human-robot interaction scenarios.
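    As a much-reduced sketch of the kind of non-linear posture generation problem described above (the full formulation adds equilibrium, friction-cone, torque and collision-avoidance constraints), the Python fragment below places a planar two-link arm's end effector on a target contact point under joint limits; all dimensions and names are illustrative assumptions.

    # Posture generation as a constrained nonlinear program (toy 2-link case).
    import numpy as np
    from scipy.optimize import minimize

    L1, L2 = 0.5, 0.4                 # link lengths [m], illustrative
    target = np.array([0.6, 0.3])     # desired contact location [m]

    def fk(q):                        # forward kinematics of the end effector
        return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                         L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

    res = minimize(
        lambda q: np.sum(q**2),                       # posture cost: stay near neutral
        x0=np.array([0.3, 0.3]),
        constraints=[{"type": "eq", "fun": lambda q: fk(q) - target}],  # contact
        bounds=[(-np.pi/2, np.pi/2), (0.0, 2.5)],     # joint limits
        method="SLSQP",
    )
    print("joint angles:", res.x, "reached:", fk(res.x))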

    Real-time Immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. With hand-based HCI being a key modality for object manipulation and gesture-based communication, providing users a natural, intuitive, effortless, precise, real-time method for HCI based on dynamic hand gestures is challenging, owing to the complexity of hand postures formed by multiple joints with high degrees of freedom, the speed of hand movements with highly variable trajectories and rapid direction changes, and the precision required for interaction between hands and objects in the virtual world. Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves with fibre-optic curvature sensors to acquire finger joint angles, a hybrid inertial-ultrasonic tracking system to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of applications, namely hand-gesture-based virtual object manipulation and visualisation, hand-gesture-based direct sign writing, and hand-gesture-based finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to select, translate, rotate, scale, release and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real time. For direct sign writing, the system is shown to immediately display the corresponding SignWriting symbols signed by a user using three different signing sequences and a range of complex hand gestures, which consist of various combinations of hand postures (with each finger open, half-bent, closed, adducted or abducted), eight hand orientations in horizontal/vertical planes, three palm-facing directions, and various hand movements (which can have eight directions in horizontal/vertical planes, and can be repetitive, straight/curved, or clockwise/anti-clockwise). The development includes a special visual interface that gives not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed for developing a full HCI based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to recognise, in real time, five vowels signed by two hands using British Sign Language.
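    As a hedged illustration of the posture vocabulary mentioned above (thresholds, sensor model and names are assumptions, not the thesis's implementation), the sketch below maps per-finger bend angles, such as those a fibre-optic curvature glove might report, to coarse open/half-bent/closed labels.

    # Coarse per-finger posture labelling from summed joint bend angles.
    def finger_state(bend_deg):
        """Classify one finger from its summed joint bend angle in degrees."""
        if bend_deg < 30.0:
            return "open"
        if bend_deg < 90.0:
            return "half-bent"
        return "closed"

    def hand_posture(bends):            # one summed bend angle per finger
        fingers = ("thumb", "index", "middle", "ring", "little")
        return {f: finger_state(b) for f, b in zip(fingers, bends)}

    print(hand_posture([10.0, 95.0, 100.0, 40.0, 5.0]))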

    Validation of an extended foot-ankle musculoskeletal model using in vivo 4D CT data

    To simulate the movement of the human body, it is necessary to create models that represent anatomical structures. This thesis focuses on a biomechanical model of the foot-ankle complex implemented in musculoskeletal modelling software, specifically OpenSim. OpenSim allows models of musculoskeletal structures to be developed and dynamic simulations to be created that estimate internal parameters of the anatomical structures (such as muscle forces and contact forces between bones) by simulating the kinematics and kinetics of the movement of the structures involved. The starting point was the study of a dataset acquired by Boey et al. (2020) with a 4D CT scan in combination with a foot manipulator device; the study was run on healthy subjects as well as patients with chronic ankle instability, and in this way the kinematics of the foot bones during simulated gait was evaluated. The aim of this project was to validate a model of the foot-ankle complex developed by Malaquias et al. (2016), starting from the acquired data, so that, by imposing the movement of the platform, the simulation would return variables comparable to the dataset. This extended musculoskeletal model of the foot-ankle complex is composed of six rigid segments and five anatomical joints (ankle, subtalar, midtarsal, tarsometatarsal, and metatarsophalangeal), for a total of eight degrees of freedom. A footplate was added to the model (to simulate the foot manipulator device used in the experiment) and the degrees of freedom of the ankle and subtalar joints were increased to three each. A combined inversion/eversion and plantar/dorsiflexion movement was then imposed on the footplate, and the movement of the foot model was evaluated against the dataset.
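    A minimal sketch of the comparison step, under stated assumptions (array names, units and the common time base are illustrative; the actual study evaluates the simulated foot kinematics against the 4D CT dataset): per-joint RMSE between simulated and reference joint-angle trajectories.

    # Per-joint RMSE between simulated and reference joint-angle trajectories.
    import numpy as np

    def rmse_per_joint(sim, ref):
        """sim, ref: arrays of shape (n_frames, n_joint_angles) in degrees,
        resampled to a common time base."""
        return np.sqrt(np.mean((sim - ref) ** 2, axis=0))

    t = np.linspace(0.0, 1.0, 100)                      # one manipulation cycle
    ref = np.column_stack([10*np.sin(2*np.pi*t),        # e.g. ankle angle
                           5*np.cos(2*np.pi*t)])        # e.g. subtalar angle
    sim = ref + np.random.default_rng(2).normal(0.0, 0.5, ref.shape)
    print("RMSE [deg] per joint:", rmse_per_joint(sim, ref))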