V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map
Most of the existing deep learning-based methods for 3D hand and human pose
estimation from a single depth map are based on a common framework that takes a
2D depth map and directly regresses the 3D coordinates of keypoints, such as
hand or human body joints, via 2D convolutional neural networks (CNNs). The
first weakness of this approach is the presence of perspective distortion in
the 2D depth map. While the depth map is intrinsically 3D data, many previous
methods treat depth maps as 2D images that can distort the shape of the actual
object through projection from 3D to 2D space. This compels the network to
perform perspective distortion-invariant estimation. The second weakness of the
conventional approach is that directly regressing 3D coordinates from a 2D
image is a highly non-linear mapping, which causes difficulty in the learning
procedure. To overcome these weaknesses, we first cast the 3D hand and human
pose estimation problem from a single depth map into a voxel-to-voxel
prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood
for each keypoint. We design our model as a 3D CNN that provides accurate
estimates while running in real time. Our system outperforms previous methods
on almost all publicly available 3D hand and human pose estimation datasets and
took first place in the HANDS 2017 frame-based 3D hand pose estimation challenge.
The code is available at https://github.com/mks0601/V2V-PoseNet_RELEASE.
Comment: HANDS 2017 Challenge Frame-based 3D Hand Pose Estimation Winner (ICCV 2017), published at CVPR 2018
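The voxel-to-voxel formulation can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the grid size, cube extent, and function names are assumptions, and a plain argmax over the likelihood volume stands in for the full 3D CNN.

```python
import numpy as np

GRID = 32  # voxels per side (illustrative choice)

def voxelize(points, center, cube_size=250.0, grid=GRID):
    """Convert 3D points (back-projected from a depth map) into a binary
    occupancy grid centered on the hand/body region."""
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    # Map each point into voxel coordinates relative to the cube center.
    idx = np.floor((points - center + cube_size / 2) / cube_size * grid).astype(int)
    keep = np.all((idx >= 0) & (idx < grid), axis=1)
    vox[tuple(idx[keep].T)] = 1.0
    return vox

def keypoints_from_likelihoods(heatmaps, center, cube_size=250.0, grid=GRID):
    """Recover one 3D coordinate per keypoint from per-voxel likelihood
    volumes by taking the argmax voxel and mapping it back to metric space."""
    coords = []
    for hm in heatmaps:  # hm: (grid, grid, grid) likelihood volume
        i, j, k = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((np.array([i, j, k]) + 0.5) / grid * cube_size
                      - cube_size / 2 + center)
    return np.stack(coords)
```

Estimating a per-voxel likelihood and reading off the peak avoids the highly non-linear direct regression from a 2D image to 3D coordinates that the abstract identifies as a weakness.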
Hand Keypoint Detection in Single Images using Multiview Bootstrapping
We present an approach that uses a multi-camera system to train fine-grained
detectors for keypoints that are prone to occlusion, such as the joints of a
hand. We call this procedure multiview bootstrapping: first, an initial
keypoint detector is used to produce noisy labels in multiple views of the
hand. The noisy detections are then triangulated in 3D using multiview geometry
or marked as outliers. Finally, the reprojected triangulations are used as new
labeled training data to improve the detector. We repeat this process,
generating more labeled data in each iteration. We derive a result analytically
relating the minimum number of views to achieve target true and false positive
rates for a given detector. The method is used to train a hand keypoint
detector for single images. The resulting keypoint detector runs in real time on
RGB images and has accuracy comparable to methods that use depth sensors. The
single view detector, triangulated over multiple views, enables 3D markerless
hand motion capture with complex object interactions.
Comment: CVPR 2017
Hybrid One-Shot 3D Hand Pose Estimation by Exploiting Uncertainties
Model-based approaches to 3D hand tracking have been shown to perform well in
a wide range of scenarios. However, they require initialisation and cannot
recover easily from tracking failures that occur due to fast hand motions.
Data-driven approaches, on the other hand, can quickly deliver a solution, but
the results often suffer from lower accuracy or missing anatomical validity
compared to those obtained from model-based approaches. In this work we propose
a hybrid approach for hand pose estimation from a single depth image. First, a
learned regressor is employed to deliver multiple initial hypotheses for the 3D
position of each hand joint. Subsequently, the kinematic parameters of a 3D
hand model are found by deliberately exploiting the inherent uncertainty of the
inferred joint proposals. This way, the method provides anatomically valid and
accurate solutions without requiring manual initialisation or suffering from
track losses. Quantitative results on several standard datasets demonstrate
that the proposed method outperforms state-of-the-art representatives of the
model-based, data-driven and hybrid paradigms.
Comment: BMVC 2015 (oral); see also http://lrs.icg.tugraz.at/research/hybridhape
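The idea of exploiting several uncertain joint proposals under a kinematic model can be sketched as a brute-force selection over hypotheses. The cost terms, names, and exhaustive search here are illustrative assumptions, not the paper's actual optimiser.

```python
import numpy as np
from itertools import product

def select_pose(hypotheses, confidences, bones, bone_lengths, lam=1.0):
    """hypotheses[j]: (K_j, 3) 3D proposals for joint j;
    confidences[j]: (K_j,) regressor confidences;
    bones: (parent, child) index pairs; bone_lengths: model lengths.
    Returns the joint combination minimising an anatomical + data cost."""
    best, best_cost = None, np.inf
    n_joints = len(hypotheses)
    for pick in product(*[range(len(h)) for h in hypotheses]):
        joints = np.array([hypotheses[j][pick[j]] for j in range(n_joints)])
        # Anatomical term: deviation of each bone from its model length.
        anat = sum((np.linalg.norm(joints[c] - joints[p]) - L) ** 2
                   for (p, c), L in zip(bones, bone_lengths))
        # Data term: prefer proposals the regressor was confident about.
        data = sum(-np.log(confidences[j][pick[j]] + 1e-9)
                   for j in range(n_joints))
        cost = anat + lam * data
        if cost < best_cost:
            best, best_cost = joints, cost
    return best
```

Because the model's bone lengths constrain the selection, the output stays anatomically valid even when individual proposals are noisy, which is the property the abstract emphasises.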
Respiratory organ motion in interventional MRI: tracking, guiding and modeling
Respiratory organ motion is one of the major challenges in interventional MRI, particularly in interventions with therapeutic ultrasound in the abdominal region. High-intensity focused ultrasound has found application in interventional MRI for the noninvasive treatment of various abnormalities. To guide surgical and treatment interventions, organ motion imaging and modeling is commonly required before treatment begins. Accurate tracking of organ motion during the various interventional MRI procedures is a prerequisite for a successful outcome and safe therapy.
In this thesis, an attempt has been made to develop focused-ultrasound approaches that could in future be used clinically for the treatment of abdominal organs, such as the liver and the kidney. Two distinct methods are presented together with their ex vivo and in vivo treatment results. In the first method, an MR-based pencil-beam navigator is used to track organ motion and provide the motion information for acoustic focal point steering, while the second approach combines ultrasound and magnetic resonance imaging in a hybrid setup for advanced guiding capabilities.
Organ motion modeling and four-dimensional imaging of organ motion are increasingly required before surgical interventions. However, due to current safety limitations and hardware restrictions, MR acquisition of a time-resolved sequence of volumetric images is not possible with both high temporal and high spatial resolution. A novel multislice acquisition scheme, based on a two-dimensional navigator instead of the commonly used pencil-beam navigator, was devised to acquire the data slices and the corresponding navigator simultaneously using the CAIPIRINHA parallel imaging method. The acquisition duration for four-dimensional dataset sampling is reduced compared to existing approaches, while image contrast and quality are improved as well.
Tracking respiratory organ motion is required in interventional procedures and during MR imaging of moving organs. An MR-based navigator is commonly used; however, it is usually associated with image artifacts, such as signal voids. Spectrally selective navigators are useful when the imaged organ is surrounded by adipose tissue, because they can provide an indirect measure of organ motion. A novel spectrally selective navigator based on a crossed-pair navigator has been developed. Experiments show the advantages of this novel navigator for volumetric imaging of the liver in vivo, where it was used to gate a gradient-recalled echo sequence.
A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"
Recently, technologies such as face detection, facial landmark localisation
and face recognition and verification have matured enough to provide effective
and efficient solutions for imagery captured under arbitrary conditions
(referred to as "in-the-wild"). This is partially attributed to the fact that
comprehensive "in-the-wild" benchmarks have been developed for face detection,
landmark localisation and recognition/verification. A very important technology
that has not been thoroughly evaluated yet is deformable face tracking
"in-the-wild". Until now, the performance has mainly been assessed
qualitatively by visually assessing the result of a deformable face tracking
technology on short videos. In this paper, we perform the first, to the best of
our knowledge, thorough evaluation of state-of-the-art deformable face tracking
pipelines using the recently introduced 300VW benchmark. We evaluate many
different architectures focusing mainly on the task of on-line deformable face
tracking. In particular, we compare the following general strategies: (a)
generic face detection plus generic facial landmark localisation, (b) generic
model free tracking plus generic facial landmark localisation, as well as (c)
hybrid approaches using state-of-the-art face detection, model free tracking
and facial landmark localisation technologies. Our evaluation reveals future
avenues for further research on the topic.
Comment: E. Antonakos and P. Snape contributed equally and have joint second authorship
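The three compared strategies (a)-(c) can be summarised in a small control-flow skeleton. `detect_face`, `track_box`, and `localise_landmarks` are placeholder callables standing in for the detection, model-free tracking, and landmark localisation components, not a real library API.

```python
def run_pipeline(frames, detect_face, track_box, localise_landmarks,
                 strategy="hybrid"):
    """Per-frame deformable face tracking under three strategies:
    (a) "detection": redetect the face in every frame;
    (b) "tracking": propagate the box with a model-free tracker;
    (c) "hybrid": track, but fall back to detection when tracking fails.
    Landmarks are then fitted inside the resulting bounding box."""
    results, box = [], None
    for frame in frames:
        if strategy == "detection" or box is None:
            box = detect_face(frame)             # (a), or initialisation
        elif strategy == "tracking":
            box = track_box(frame, box)          # (b) propagate the box
        else:
            # (c) hybrid: reinitialise from the detector on tracker failure
            box = track_box(frame, box) or detect_face(frame)
        results.append(localise_landmarks(frame, box) if box else None)
    return results
```

The skeleton makes the trade-off explicit: (a) is robust but pays detection cost per frame, (b) is fast but drifts, and (c) buys robustness back at the price of a failure check.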