24,398 research outputs found
Object classification in RGB-D images using the Ensemble of Shape Functions method
Classification of objects in 2D or 3D images is becoming increasingly popular, and the technologies being developed are more efficient and simpler, while quality and accuracy remain high. The method used in this project is ESF (Ensemble of Shape Functions), one of the methods included in the PCL library. ESF is a simple shape function with a wide range of applications. It is primarily designed for object classification, but it also supports other tasks such as registration, computing various distances, and estimating normals in a geometric neighborhood. The paper also describes a program for classifying objects in RGB-D images using the ESF method, which is evaluated on the 3DNet test data set.
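To make the shape-function idea concrete, here is a minimal NumPy sketch of a D2-style shape distribution — a histogram of distances between randomly sampled point pairs, which is one of the histograms an ESF-style descriptor combines. This is a simplified illustration, not the PCL implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def d2_descriptor(points, n_pairs=2000, bins=32, seed=0):
    """Simplified D2 shape-distribution descriptor (hypothetical helper):
    a normalized histogram of distances between randomly sampled point
    pairs, usable as a fixed-length shape signature for classification."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()

# Toy usage: a cube-like cloud and a line-like cloud get distinct signatures,
# so a nearest-neighbor classifier over such descriptors can separate them.
rng = np.random.default_rng(1)
cube = rng.uniform(0, 1, size=(500, 3))
line = np.column_stack([np.linspace(0, 1, 500), np.zeros(500), np.zeros(500)])
desc_cube = d2_descriptor(cube)
desc_line = d2_descriptor(line)
```

The full ESF descriptor additionally mixes in angle and area histograms and distinguishes point pairs lying on, off, or crossing the surface, but the histogram-of-samples principle is the same.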
Generative and Discriminative Voxel Modeling with Convolutional Neural Networks
When working with three-dimensional data, choice of representation is key. We
explore voxel-based models, and present evidence for the viability of
voxellated representations in applications including shape modeling and object
classification. Our key contributions are methods for training voxel-based
variational autoencoders, a user interface for exploring the latent space
learned by the autoencoder, and a deep convolutional neural network
architecture for object classification. We address challenges unique to
voxel-based representations, and empirically evaluate our models on the
ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the
state of the art for object classification.Comment: 9 pages, 5 figures, 2 table
V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map
Most of the existing deep learning-based methods for 3D hand and human pose
estimation from a single depth map are based on a common framework that takes a
2D depth map and directly regresses the 3D coordinates of keypoints, such as
hand or human body joints, via 2D convolutional neural networks (CNNs). The
first weakness of this approach is the presence of perspective distortion in
the 2D depth map. While the depth map is intrinsically 3D data, many previous
methods treat depth maps as 2D images that can distort the shape of the actual
object through projection from 3D to 2D space. This compels the network to
perform perspective distortion-invariant estimation. The second weakness of the
conventional approach is that directly regressing 3D coordinates from a 2D
image is a highly non-linear mapping, which causes difficulty in the learning
procedure. To overcome these weaknesses, we firstly cast the 3D hand and human
pose estimation problem from a single depth map into a voxel-to-voxel
prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood
for each keypoint. We design our model as a 3D CNN that provides accurate
estimates while running in real-time. Our system outperforms previous methods
in almost all publicly available 3D hand and human pose estimation datasets and
placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge.
The code is available at https://github.com/mks0601/V2V-PoseNet_RELEASE.
Comment: HANDS 2017 Challenge Frame-based 3D Hand Pose Estimation Winner (ICCV
2017), published at CVPR 2018
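The voxel-to-voxel idea — predicting a per-voxel likelihood volume for each keypoint and reading coordinates off it — can be sketched as follows. This is a NumPy illustration under assumed tensor shapes, not the released model; the helper name is hypothetical.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps, origin, voxel_size):
    """Convert per-keypoint 3D likelihood volumes of shape (K, D, H, W)
    into metric 3D coordinates by taking each volume's argmax voxel and
    mapping its center back to world space (hypothetical helper)."""
    K = heatmaps.shape[0]
    flat = heatmaps.reshape(K, -1)
    idx = flat.argmax(axis=1)
    # Recover (z, y, x) voxel indices for each keypoint's peak.
    zyx = np.column_stack(np.unravel_index(idx, heatmaps.shape[1:]))
    # Voxel center -> world coordinates.
    return origin + (zyx + 0.5) * voxel_size

# Toy usage: one keypoint whose likelihood peaks at voxel (2, 3, 4).
hm = np.zeros((1, 8, 8, 8))
hm[0, 2, 3, 4] = 1.0
coords = keypoints_from_heatmaps(hm, origin=np.zeros(3), voxel_size=1.0)
# -> coords[0] is [2.5, 3.5, 4.5]
```

Estimating a dense likelihood per voxel keeps the output in the same spatial domain as the input grid, which is exactly what lets a 3D CNN avoid the highly non-linear image-to-coordinate regression the abstract criticizes.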