21 research outputs found

    Unsupervised 3D Pose Estimation with Geometric Self-Supervision

    We present an unsupervised learning approach to recover 3D human pose from 2D skeletal joints extracted from a single image. Our method requires no multi-view image data, 3D skeletons, 2D–3D point correspondences, or previously learned 3D priors during training. A lifting network accepts 2D landmarks as input and generates a corresponding 3D skeleton estimate. During training, the recovered 3D skeleton is reprojected onto random camera viewpoints to generate new "synthetic" 2D poses. By lifting the synthetic 2D poses back to 3D and reprojecting them into the original camera view, we can define self-consistency losses in both 3D and 2D. Training can thus be self-supervised by exploiting the geometric self-consistency of the lift-reproject-lift process. We show that self-consistency alone is not sufficient to generate realistic skeletons; however, adding a 2D pose discriminator enables the lifter to output valid 3D poses. Additionally, to learn from 2D poses "in the wild", we train an unsupervised 2D domain adapter network that allows an expansion of the 2D data. This improves results and demonstrates the usefulness of 2D pose data for unsupervised 3D lifting. Results on the Human3.6M dataset for 3D human pose estimation show that our approach improves upon previous unsupervised methods by 30% and outperforms many weakly supervised approaches that explicitly use 3D data.
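The lift-reproject-lift consistency check described in this abstract can be sketched in a few lines. The orthographic projection, the rotation-about-vertical camera model, and the toy zero-depth lifter below are illustrative assumptions for the sketch, not the paper's actual network or camera model.

```python
import numpy as np

def project(joints_3d, R):
    """Rotate 3D joints into a camera frame and project orthographically (drop z)."""
    return (joints_3d @ R.T)[:, :2]

def random_rotation_y(rng):
    """Random rotation about the vertical (y) axis, standing in for a random viewpoint."""
    a = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def self_consistency_loss(lifter, pose_2d, rng):
    """Lift -> reproject on a random view -> lift again -> reproject back.

    `lifter` maps (J, 2) 2D joints to a (J, 3) skeleton estimate.
    Returns the 2D and 3D self-consistency terms as mean squared errors.
    """
    pose_3d = lifter(pose_2d)                 # first lift, original view
    R = random_rotation_y(rng)
    synth_2d = project(pose_3d, R)            # new "synthetic" 2D pose
    synth_3d = lifter(synth_2d)               # lift the synthetic pose back to 3D
    back_2d = project(synth_3d, R.T)          # reproject into the original view
    loss_2d = np.mean((back_2d - pose_2d) ** 2)
    loss_3d = np.mean((synth_3d @ R - pose_3d) ** 2)  # compare in a common frame
    return loss_2d, loss_3d
```

In training, these two terms would be minimised jointly with the discriminator loss the abstract mentions; here only the geometric part is shown.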

    Articulated Clinician Detection Using 3D Pictorial Structures on RGB-D Data

    Reliable human pose estimation (HPE) is essential to many clinical applications, such as surgical workflow analysis, radiation safety monitoring, and human-robot cooperation. Proposed methods for the operating room (OR) rely either on foreground estimation using a multi-camera system, which is challenging in real ORs due to color similarities and frequent illumination changes, or on wearable sensors or markers, which are invasive and therefore difficult to introduce into the room. Instead, we propose a novel approach based on Pictorial Structures (PS) and RGB-D data that can be easily deployed in real ORs. We extend the PS framework in two ways. First, we build robust and discriminative part detectors using both color and depth images, and present a novel descriptor for depth images called the histogram of depth differences (HDD). Second, we extend PS to 3D by proposing 3D pairwise constraints and a new method that makes exact inference tractable. Our approach is evaluated for pose estimation and clinician detection on a challenging RGB-D dataset recorded in a busy operating room during live surgeries. We conduct a series of experiments to study the different part detectors in conjunction with the various 2D or 3D pairwise constraints. Our comparisons demonstrate that 3D PS with RGB-D part detectors significantly improves results in this visually challenging operating environment. Comment: The supplementary video is available at https://youtu.be/iabbGSqRSg
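The abstract names the HDD descriptor but does not specify its construction. As a rough sketch of the idea (a histogram over local depth differences), the following toy version histograms each patch pixel's difference from the patch-centre depth; the bin count, clipping range, and centre-relative scheme are assumptions, not the paper's definition.

```python
import numpy as np

def hdd_descriptor(depth_patch, n_bins=8, max_diff=0.5):
    """Toy histogram-of-depth-differences over a square depth patch.

    Differences from the patch centre are clipped to [-max_diff, max_diff]
    metres and binned; the result is L1-normalised so patches of different
    sizes remain comparable. Illustrative only.
    """
    h, w = depth_patch.shape
    center = depth_patch[h // 2, w // 2]
    diffs = np.clip(depth_patch - center, -max_diff, max_diff).ravel()
    hist, _ = np.histogram(diffs, bins=n_bins, range=(-max_diff, max_diff))
    hist = hist.astype(float)
    return hist / max(hist.sum(), 1.0)
```

A perfectly flat patch puts all its mass in the bin containing zero difference, which is why depth structure around a body part shows up as a distinctive histogram shape.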

    Geometry-Aware Data Augmentation for Sequence-Based 3D Multi-Person Pose Estimation

    Master's thesis -- Graduate School of Data Science, Seoul National University, February 2023 (advisor: 이준석). 3D pose estimation is an invaluable task in computer vision with various practical applications. Recently, a Transformer-based sequence-to-sequence model, MixSTE [60], has been successfully applied to 3D single-person pose estimation by decoupling 2D-to-3D modeling from pixel-level details. We propose a natural extension of this model from the single-person to the multi-person problem, adding novel inter-personal attention for 2D-to-3D lifting. By naturally referring to neighboring frames, this design is highly robust in handling occlusions. However, 3D multi-person pose estimation remains challenging due to extreme data scarcity. From the observation that our 2D-to-3D lifting approach is free of pixel-level details, we propose a novel geometry-aware data augmentation that lets us generate an effectively unlimited number of diverse training examples from existing single-person trajectories. Through extensive experiments on standard benchmarks, we verify that our model and data augmentation method achieve the state of the art, not just in accuracy but also in smoothness. We also qualitatively demonstrate the effectiveness of our approach both on public benchmarks and on in-the-wild videos.
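Because the lifting model sees only 2D/3D joint coordinates, new multi-person training pairs can be synthesised by compositing single-person 3D trajectories into one scene and projecting them with a camera. The sketch below illustrates that idea; the ground-plane placement ranges and the pinhole intrinsics are illustrative assumptions, not the thesis's exact augmentation recipe.

```python
import numpy as np

def compose_multi_person(trajectories, rng, xy_range=2.0):
    """Combine single-person 3D joint trajectories into one synthetic
    multi-person sample via random translations on the ground plane.

    trajectories: list of (T, J, 3) arrays in camera-space metres.
    Returns a (P, T, J, 3) stacked scene.
    """
    placed = []
    for traj in trajectories:
        offset = np.array([rng.uniform(-xy_range, xy_range),  # sideways shift
                           0.0,                               # keep height
                           rng.uniform(0.0, xy_range)])       # push back in depth
        placed.append(traj + offset)
    return np.stack(placed)

def project_pinhole(scene, f=1000.0, cx=960.0, cy=540.0):
    """Project (P, T, J, 3) camera-space joints with a simple pinhole camera,
    yielding the paired (P, T, J, 2) input for the 2D-to-3D lifter."""
    x, y, z = scene[..., 0], scene[..., 1], scene[..., 2]
    return np.stack([f * x / z + cx, f * y / z + cy], axis=-1)
```

Each composed scene yields a fresh (2D input, 3D target) pair, which is what makes the supply of multi-person training examples effectively unlimited.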

    Human Motion Capture with RGB-D Sensors

    The simultaneous arrival of combined depth-and-color sensors and of super-real-time skeleton detection algorithms has led to a resurgence of research on human motion capture, a key component of human-machine interaction. However, the application context of these recent advances is voluntary, fronto-parallel interaction with the sensor, which permitted certain approximations by their designers and requires a specific sensor placement. In this thesis, we present a multi-sensor approach designed to improve the robustness and accuracy of human joint positioning, based on trajectory smoothing by temporal integration and on filtering of the skeletons detected by each sensor. The approach is tested on a new database acquired specifically for this purpose, with a specially adapted calibration methodology. We also begin to extend the approach to joint perception with context, here in the form of objects.
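The thesis abstract does not detail its filtering and temporal-integration steps. As a minimal stand-in for the idea, the sketch below fuses per-sensor skeletons (already calibrated into a common frame) by confidence-weighted averaging and smooths the fused trajectory with an exponential moving average; both choices are assumptions for illustration, not the thesis's actual filters.

```python
import numpy as np

def fuse_skeletons(detections, confidences):
    """Confidence-weighted fusion of one skeleton seen by several sensors.

    detections: (S, J, 3) joints in a common calibrated frame;
    confidences: (S, J) per-sensor, per-joint weights.
    """
    w = confidences[..., None]
    return (detections * w).sum(axis=0) / np.clip(w.sum(axis=0), 1e-9, None)

def smooth_trajectory(frames, alpha=0.3):
    """Temporal integration of fused skeletons via an exponential moving
    average over frames; frames: (T, J, 3)."""
    out = [frames[0]]
    for f in frames[1:]:
        out.append(alpha * f + (1.0 - alpha) * out[-1])
    return np.stack(out)
```

A sensor whose detector loses a joint would simply contribute a low (or zero) confidence for that joint, letting the remaining sensors dominate the fused estimate.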