OPTAR: Automatic Coordinate Frame Registration between OpenPTrack and Google ARCore using Ambient Visual Features
This thesis presents a system for estimating the coordinate frame registration between OpenPTrack and Google ARCore. OpenPTrack is a multi-camera solution that integrates people tracking, skeleton tracking, and pose recognition. ARCore is a framework for the development of Augmented Reality applications on smartphones. The transformation between the two coordinate frames is obtained by exploiting visual features observed by both the phone and the OpenPTrack cameras.
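The registration step described above amounts to estimating a rigid transform from 3D points matched across the two coordinate frames. A minimal sketch of that idea (illustrative only, not the thesis's pipeline) using the closed-form Kabsch/SVD solution, assuming matched point pairs are already available:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points observed in the
    two coordinate frames (e.g. ARCore and OpenPTrack; hypothetical
    names, any two frames work). Returns (R, t) with dst ~ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance of the two clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice the correspondences would come from the ambient visual features the abstract mentions, and an outlier-robust wrapper would be needed on top of this closed-form core.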
3-D Hand Pose Estimation from Kinect's Point Cloud Using Appearance Matching
We present a novel appearance-based approach for pose estimation of a human
hand using the point clouds provided by the low-cost Microsoft Kinect sensor.
Both the free-hand case, in which the hand is isolated from the surrounding
environment, and the hand-object case, in which the different types of
interactions are classified, have been considered. The hand-object case is
clearly the more challenging task, as it must deal with multiple tracks. The
approach proposed here belongs to the class of partial pose estimation where
the estimated pose in a frame is used for the initialization of the next one.
The pose estimation is obtained by applying a modified version of the Iterative
Closest Point (ICP) algorithm to synthetic models to obtain the rigid
transformation that aligns each model with respect to the input data. The
proposed framework uses a "pure" point cloud as provided by the Kinect sensor
without any other information such as RGB values or normal vector components.
For this reason, the proposed method can also be applied to data obtained from
other types of depth sensors or RGB-D cameras.
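The core of the approach is iterating two steps: match each model point to its nearest input point, then solve for the rigid update in closed form. A minimal point-to-point ICP sketch (a generic textbook variant, not the modified algorithm of the paper) operating, as the abstract emphasizes, on coordinates alone with no RGB or normals:

```python
import numpy as np

def icp(model, scene, iters=20):
    """Minimal point-to-point ICP: align model (M,3) onto scene (N,3).

    Uses only point coordinates, mirroring the "pure" point-cloud
    setting described in the abstract. Returns the accumulated (R, t).
    """
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model @ R.T + t
        # brute-force nearest-neighbour correspondences in the scene cloud
        d2 = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
        nn = scene[d2.argmin(axis=1)]
        # closed-form rigid update via Kabsch/SVD
        mc, nc = moved.mean(0), nn.mean(0)
        H = (moved - mc).T @ (nn - nc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = nc - dR @ mc
        R, t = dR @ R, dR @ t + dt          # compose with previous estimate
    return R, t
```

A real implementation would replace the brute-force matching with a k-d tree and add outlier rejection; this sketch only shows the alternation that the partial pose-estimation scheme (previous frame initializes the next) relies on.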
Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
We study 3D shape modeling from a single image and make contributions to it
in three aspects. First, we present Pix3D, a large-scale benchmark of diverse
image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications
in shape-related tasks including reconstruction, retrieval, viewpoint
estimation, etc. Building such a large-scale dataset, however, is highly
challenging; existing datasets either contain only synthetic data, or lack
precise alignment between 2D images and 3D shapes, or only have a small number
of images. Second, we calibrate the evaluation criteria for 3D shape
reconstruction through behavioral studies, and use them to objectively and
systematically benchmark cutting-edge reconstruction algorithms on Pix3D.
Third, we design a novel model that simultaneously performs 3D reconstruction
and pose estimation; our multi-task learning approach achieves state-of-the-art
performance on both tasks.

Comment: CVPR 2018. The first two authors contributed equally to this work. Project page: http://pix3d.csail.mit.ed
Feature Based Calibration of a Network of Kinect Sensors
The availability of affordable depth sensors paired with common RGB cameras, such as the Microsoft Kinect, can provide robots with a complete and instantaneous representation of the current surrounding environment. However, when calibrating multi-camera systems, traditional methods have drawbacks, such as requiring human intervention. In this thesis, we propose an automatic and reliable calibration framework that can easily estimate the extrinsic parameters of a Kinect sensor network. Our framework includes feature extraction, Random Sample Consensus, and camera pose estimation from high-accuracy correspondences. We also carry out a robustness analysis of the position estimation algorithms. The results show that our system provides precise data under a certain amount of noise.
Keywords
Kinect, Multiple Camera Calibration, Feature Points Extraction, Correspondence, RANSAC
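The combination of Random Sample Consensus and pose estimation mentioned in the abstract can be sketched as a hypothesis-and-verify loop: repeatedly fit a rigid transform to a minimal sample of correspondences and keep the model with the largest consensus set. An illustrative version (hypothetical code, not the thesis implementation; `iters` and `thresh` are assumed parameters) for two Kinects with matched 3D feature points:

```python
import numpy as np

def _kabsch(src, dst):
    # closed-form least-squares rigid transform src -> dst
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC estimation of the extrinsics between two sensors from
    matched 3D feature points (src, dst: (N,3)) that may contain outliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        R, t = _kabsch(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    R, t = _kabsch(src[best], dst[best])   # refit on the consensus set
    return R, t, best
```

The refit on all inliers is the usual final step: the minimal-sample model locates the consensus set, and the least-squares refit over that set reduces noise sensitivity.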
Deep Learning for Head Pose Estimation: A Survey
Head pose estimation (HPE) is an active and popular area of research. Over the years, many approaches have been developed, leading to progressive improvements in accuracy; nevertheless, head pose estimation remains an open research topic, especially in unconstrained environments. In this paper, we will review the growing number of available datasets and the modern methodologies used to estimate orientation, with special attention to deep learning techniques. We will discuss the evolution of the field by proposing a classification of head pose estimation methods, explaining their advantages and disadvantages, and highlighting the different ways deep learning techniques have been used in the context of HPE. An in-depth performance comparison and discussion is presented at the end of the work. We also highlight the most promising research directions for future investigations on the topic.