
    V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map

    Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map are based on a common framework that takes a 2D depth map and directly regresses the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. Although the depth map is intrinsically 3D data, many previous methods treat it as a 2D image, and the projection from 3D to 2D space can distort the shape of the actual object. This forces the network to perform perspective distortion-invariant estimation. The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly non-linear mapping, which makes the learning procedure difficult. To overcome these weaknesses, we cast the 3D hand and human pose estimation problem from a single depth map as a voxel-to-voxel prediction that takes a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real time. Our system outperforms previous methods on almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available at https://github.com/mks0601/V2V-PoseNet_RELEASE. Comment: HANDS 2017 Challenge frame-based 3D hand pose estimation winner (ICCV 2017); published at CVPR 2018.
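    The voxel-to-voxel idea can be summarized in a few lines: voxelize the depth points into an occupancy grid, run a 3D CNN that outputs one likelihood volume per keypoint, and take the argmax voxel of each volume. The sketch below illustrates this with a toy PyTorch network; the grid size, layer widths, and voxelizer are assumptions for illustration, not the published V2V-PoseNet architecture (see the linked repository for that).

```python
import torch
import torch.nn as nn

GRID = 32       # voxel grid resolution (toy value; the paper uses a larger grid)
KEYPOINTS = 21  # e.g., 21 hand joints

def voxelize(points: torch.Tensor, grid: int = GRID) -> torch.Tensor:
    """Scatter normalized 3D points (N, 3) in [0, 1) into a binary occupancy grid."""
    vox = torch.zeros(1, 1, grid, grid, grid)
    idx = (points.clamp(0, 1 - 1e-6) * grid).long()
    vox[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

class VoxelToVoxelNet(nn.Module):
    """Tiny stand-in: a 3D CNN mapping an occupancy grid to per-keypoint likelihood volumes."""
    def __init__(self, k: int = KEYPOINTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, k, 1),  # one likelihood volume per keypoint
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, K, D, H, W)

points = torch.rand(500, 3)                 # fake depth points in normalized camera space
heat = VoxelToVoxelNet()(voxelize(points))  # per-voxel likelihoods, shape (1, 21, 32, 32, 32)
flat = heat.flatten(2).argmax(-1)           # index of the most likely voxel per keypoint
d, h, w = flat // (GRID * GRID), (flat // GRID) % GRID, flat % GRID
coords = torch.stack([d, h, w], dim=-1)     # (1, 21, 3) voxel coordinates of each keypoint
print(coords.shape)
```

    Predicting a likelihood volume rather than regressing coordinates directly is what sidesteps the highly non-linear 2D-to-3D mapping the abstract criticizes: the network only needs to localize each keypoint within the grid.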

    Hand Pose Recognition Using Hu Moments

    Computer vision based on shape recognition holds considerable potential for human-computer interaction. Hand poses can serve as interaction symbols between humans and computers, as in the use of various hand poses in sign language. Different hand poses can be used to replace mouse functions, to control robots, and so on. This research focuses on building a hand pose recognition system using Hu moments. The recognition process begins by segmenting the input image to produce an ROI (Region of Interest) image, namely the palm area. Edge detection is then performed, followed by extraction of the Hu moment values. These values are quantized against a codebook generated by a training process using K-Means. Quantization is performed by computing the smallest Euclidean distance between the Hu moment values of the input image and the codebook entries. Based on the experimental results, the system's accuracy in recognizing hand poses is 88.57%.
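    As a rough illustration of the described pipeline (segmentation, edge detection, Hu moment extraction, and codebook quantization by smallest Euclidean distance), the sketch below uses OpenCV. The Otsu threshold, Canny parameters, log scaling, and the random stand-in training data are assumptions; the paper's actual segmentation method and trained codebook are not reproduced here.

```python
import numpy as np
import cv2

def hu_features(gray: np.ndarray) -> np.ndarray:
    """Segment, detect edges, and extract the 7 Hu moment invariants."""
    _, roi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(roi, 100, 200)
    hu = cv2.HuMoments(cv2.moments(edges)).flatten()
    # Log-scale the invariants, which span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def nearest_code(feature: np.ndarray, codebook: np.ndarray) -> int:
    """Quantize a feature to the codebook entry with the smallest Euclidean distance."""
    return int(np.argmin(np.linalg.norm(codebook - feature, axis=1)))

# Codebook from training: one row per k-means cluster center
# (random features stand in for real training data here).
train = np.random.rand(100, 7).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, _, codebook = cv2.kmeans(train, 5, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)

query = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale hand image
print(nearest_code(hu_features(query).astype(np.float32), codebook))
```

    In practice the codebook would be trained on Hu moment vectors extracted from labeled hand-pose images, so each cluster center corresponds to one pose class.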

    A Preliminary Investigation into a Deep Learning Implementation for Hand Tracking on Mobile Devices

    Hand tracking is an essential component of computer graphics and human-computer interaction applications. Using an RGB camera without dedicated hardware or sensors (e.g., depth cameras) allows solutions to be developed for a plethora of devices and platforms. Although various methods have been proposed, hand tracking from a single RGB camera remains a challenging research area due to occlusions, complex backgrounds, and the variety of hand poses and gestures. We present a mobile application for 2D hand tracking from RGB images captured by the smartphone camera. The images are processed by a deep neural network, modified specifically to tackle this task and run on mobile devices, seeking a compromise between performance and computational time. The network output is used to draw a 2D skeleton on the user's hand. We tested our system in several scenarios, showing an interactive hand tracking rate and achieving promising results under variable brightness and backgrounds and with small occlusions.
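    For readers who want to experiment with a comparable setup, the sketch below overlays a 2D hand skeleton on frames from a single RGB camera. It uses MediaPipe Hands purely as an illustrative stand-in, not the authors' modified network, and the webcam index and window handling are desktop conveniences rather than the mobile app described above.

```python
# Illustrative stand-in: draw a 2D hand skeleton on live RGB frames.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam; a phone app would use the device camera API
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("2D hand skeleton", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```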