VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
We present the first real-time method to capture the full global 3D skeletal
pose of a human in a stable, temporally consistent manner using a single RGB
camera. Our method combines a new convolutional neural network (CNN) based pose
regressor with kinematic skeleton fitting. Our novel fully-convolutional pose
formulation regresses 2D and 3D joint positions jointly in real time and does
not require tightly cropped input frames. A real-time kinematic skeleton
fitting method uses the CNN output to yield temporally stable 3D global pose
reconstructions on the basis of a coherent kinematic skeleton. This makes our
approach the first monocular RGB method usable in real-time applications such
as 3D character control---thus far, the only monocular methods for such
applications employed specialized RGB-D cameras. Our method's accuracy is
quantitatively on par with the best offline 3D monocular RGB pose estimation
methods. Our results are qualitatively comparable to, and sometimes better
than, results from monocular RGB-D approaches, such as the Kinect. However, we
show that our approach is more broadly applicable than RGB-D solutions, i.e. it
works for outdoor scenes, community videos, and low quality commodity RGB
cameras.
Comment: Accepted to SIGGRAPH 2017
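The abstract's two-stage pipeline (per-frame CNN joint regression followed by kinematic skeleton fitting for temporal stability) can be illustrated with a minimal sketch. The toy bone list, the fixed bone lengths, and the exponential smoothing below are illustrative stand-ins, not the paper's actual skeleton model or optimisation:

```python
import numpy as np

# Hypothetical skeleton: (parent, child) joint indices for a toy 3-joint chain
# and assumed fixed bone lengths in metres (the paper uses a full-body skeleton).
BONES = [(0, 1), (1, 2)]
BONE_LENGTHS = np.array([0.3, 0.25])

def fit_skeleton(raw_joints, prev_joints=None, alpha=0.5):
    """Project per-frame 3D joint estimates onto a fixed-bone-length skeleton
    and blend with the previous frame for temporal stability.

    raw_joints: (J, 3) array of per-frame CNN joint position estimates.
    prev_joints: previous frame's fitted joints, or None for the first frame.
    """
    joints = raw_joints.copy()
    # Enforce a coherent skeleton by rescaling each child joint about its
    # parent so every bone keeps its fixed length.
    for (p, c), length in zip(BONES, BONE_LENGTHS):
        v = joints[c] - joints[p]
        joints[c] = joints[p] + v * (length / (np.linalg.norm(v) + 1e-9))
    # Simple exponential smoothing stands in for the paper's kinematic
    # optimisation; it damps frame-to-frame jitter.
    if prev_joints is not None:
        joints = alpha * joints + (1 - alpha) * prev_joints
    return joints
```

After fitting, every bone has exactly its nominal length regardless of noise in the raw CNN output, which is the property the abstract refers to as a "coherent kinematic skeleton".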
Measurements by A LEAP-Based Virtual Glove for the hand rehabilitation
Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation
requires a therapist and implies high costs, stress for the patient, and subjective evaluation of
the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves,
can be highly effective when used in virtual reality (VR) environments. Mechanical devices are often
expensive, cumbersome, and patient- and hand-specific, while tracking-based devices are free of
these limitations but, especially when based on a single tracking sensor, can suffer from
occlusions. In this paper, the implementation of a multi-sensors approach, the Virtual Glove (VG),
based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is
calibrated and static positioning measurements are compared with those collected with an accurate
spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest
of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed
and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced
occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity
when skipping from one sensor to the other. A video demonstrating the good performance of VG
is also collected and presented in the Supplementary Materials. Results are promising but further
work must be done to allow the calculation of the forces exerted by each finger when constrained by
mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although
the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and
robots, and for other VR applications
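The dual-sensor strategy described above, in which the VG combines two orthogonal LEAP controllers so that a joint occluded for one sensor is still tracked by the other, can be sketched as a confidence-weighted blend in a common reference frame. The calibration transform and the confidence handling below are hypothetical, not the paper's implementation:

```python
import numpy as np

# Assumed calibration: a rigid transform (rotation R, translation t) mapping
# sensor-2 coordinates into sensor-1's frame, as would be obtained from the
# VG calibration step. The values here are purely illustrative.
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])  # illustrative 90-degree rotation
t = np.array([0.0, 0.0, 0.2])    # illustrative offset (metres)

def fuse(p1, c1, p2, c2):
    """Confidence-weighted fusion of one hand joint seen by both sensors.

    p1, p2: 3D joint positions from sensor 1 and sensor 2.
    c1, c2: per-sensor tracking confidences in [0, 1]; an occluded joint
            has confidence 0, so the other sensor takes over smoothly.
    """
    p2_in_1 = R @ p2 + t  # express the sensor-2 point in sensor-1's frame
    w = c1 + c2
    if w == 0:
        return None       # joint occluded for both sensors
    return (c1 * p1 + c2 * p2_in_1) / w
```

Because the blend is continuous in the confidences, handover between sensors introduces no positional jump, which is one way to obtain the discontinuity-free switching the abstract reports.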
Analysis domain model for shared virtual environments
The field of shared virtual environments, which also
encompasses online games and social 3D environments, has a
system landscape consisting of multiple solutions that share great functional overlap. However, there is little interoperability between the different solutions. The problem domain associated with a shared virtual environment is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to understand the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions
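A basic building block of the stereo-based reconstruction methods such a review covers is linear (DLT) triangulation, which recovers a 3D surface point from its observations in two calibrated views. This is a generic sketch of the standard technique, not a method from any specific surveyed paper; the camera matrices in the test are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D observations of the same point in each image.
    Returns the 3D point minimising the algebraic reprojection error.
    """
    # Each observation contributes two linear constraints on the homogeneous
    # 3D point X, derived from x ~ P X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise to a Euclidean 3D point
```

Applied densely over matched pixels between the two channels of a stereo laparoscope, this yields the soft-tissue surface geometry that registration and navigation then build on.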