8 research outputs found
Rule Of Thumb: Deep derotation for improved fingertip detection
We investigate a novel global orientation regression approach for articulated
objects using a deep convolutional neural network. This is integrated with an
in-plane image derotation scheme, DeROT, to tackle the problem of per-frame
fingertip detection in depth images. The method reduces the complexity of
learning in the space of articulated poses, which is demonstrated by applying two
distinct state-of-the-art learning-based hand pose estimation methods to
fingertip detection. Significant classification improvements are shown over
the baseline implementation. Our framework involves no tracking, kinematic
constraints or explicit prior model of the articulated object in hand. To
support our approach we also describe a new pipeline for high-accuracy magnetic
annotation and labeling of objects imaged by a depth camera. Comment: To be published in proceedings of BMVC 201
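The derotation idea can be illustrated with a minimal in-plane rotation: given a regressed global orientation angle, the depth image is resampled so the hand always appears in a canonical orientation before fingertip detection runs. This is a generic nearest-neighbour sketch only, not DeROT's actual implementation; the resampling details and the CNN regressor are omitted.

```python
import numpy as np

def derotate(depth, angle_deg):
    """Rotate a depth image in-plane by angle_deg about its centre
    (nearest-neighbour resampling). Subtracting a regressed global
    orientation this way places the hand in a canonical pose."""
    h, w = depth.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    ys, xs = np.indices((h, w))
    x0, y0 = xs - cx, ys - cy
    # Inverse mapping: for each output pixel, look up its source pixel.
    src_x = np.round(c * x0 - s * y0 + cx).astype(int)
    src_y = np.round(s * x0 + c * y0 + cy).astype(int)
    ok = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros_like(depth)  # pixels rotated in from outside stay zero
    out[ok] = depth[src_y[ok], src_x[ok]]
    return out
```

A downstream fingertip detector then only ever sees hands in (approximately) one orientation, which is the complexity reduction the abstract refers to.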
ThirdLight: low-cost and high-speed 3D interaction using photosensor markers
We present a low-cost 3D tracking system for virtual reality, gesture modeling, and robot manipulation applications which require fast and precise localization of headsets, data gloves, props, or controllers. Our system removes the need for cameras or projectors for sensing, and instead uses cheap LEDs and printed masks for illumination, together with low-cost photosensitive markers. The illumination device transmits a spatiotemporal pattern as a series of binary Gray-code patterns. Multiple illumination devices can be combined to localize each marker in 3D at high speed (333 Hz). Our method has strengths in accuracy, speed, cost, ambient performance, large working space (1 m–5 m), and robustness to noise compared with conventional techniques. We compare with a state-of-the-art instrumented glove and vision-based systems to demonstrate the accuracy, scalability, and robustness of our approach. We propose a fast and accurate method for hand gesture modeling using an inverse kinematics approach with the six photosensitive markers. We additionally propose a passive marker system and demonstrate various interaction scenarios as practical applications.
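The Gray-code localization above admits a compact sketch: each frame illuminates one bit-plane, a photosensor accumulates one bit per frame, and the accumulated Gray code decodes to the marker's position index along one axis. This is a generic illustration of binary-reflected Gray coding, not ThirdLight's actual pattern sequence or firmware.

```python
def gray_encode(index):
    """Binary-reflected Gray code: consecutive indices differ by exactly
    one bit, so a one-position boundary error flips at most one bit."""
    return index ^ (index >> 1)

def gray_decode(code):
    """Invert the Gray code by folding the bits back down."""
    index = 0
    while code:
        index ^= code
        code >>= 1
    return index

def bit_planes(num_bits, index):
    """Bits a photosensor would read over num_bits successive frames
    (most significant bit first) for a marker at the given position."""
    g = gray_encode(index)
    return [(g >> (num_bits - 1 - i)) & 1 for i in range(num_bits)]

# A marker at position 300 of a 10-bit (1024-position) pattern sequence:
bits = bit_planes(10, 300)
recovered = gray_decode(int("".join(map(str, bits)), 2))
```

Intersecting the decoded rays from two or more illumination devices then yields the 3D marker position, which is where the multi-device combination mentioned in the abstract comes in.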
Practical Processing Techniques for Magnetic 3D Motion Tracking
Tohoku University, Yoshifumi Kitamura
Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects
Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection, and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object’s shape and skeleton. In case of unknown object shape there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts, or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton, and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering, and skeletonization based on mean curvature flow.
Capture and generalisation of close interaction with objects
Robust manipulation capture and retargeting has been a longstanding goal in both the
fields of animation and robotics. In this thesis I describe a new approach to capture
both the geometry and motion of interactions with objects, dealing with the problems
of occlusion by the use of magnetic systems, and performing the reconstruction of the
geometry by an RGB-D sensor alongside visual markers. This ‘interaction capture’
allows the scene to be described in terms of the spatial relationships between the character
and the object using novel topological representations such as the Electric Parameters,
which parametrise the outer space of an object using properties of the surface of
the object. I describe the properties of these representations for motion generalisation
and discuss how they can be applied to the problems of human-like motion generation
and programming by demonstration. These generalised interactions are shown
to be valid by demonstration of retargeting grasping and manipulation to robots with
dissimilar kinematics and morphology using only local, gradient-based planning.
Development of an augmented reality guided computer assisted orthopaedic surgery system
Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality
guided computer assisted orthopaedic surgery system – ARgCAOS.
After initial investigation, a visible-spectrum, single-camera, tool-mounted tracking
system based upon planar fiducial markers was implemented. The use of
visible-spectrum cameras, as opposed to the infra-red cameras typically used by
surgical tracking systems, allowed the captured image to be streamed to a display in
an intelligible fashion. The tracking information defined the location of physical
objects relative to the camera. Therefore, this information allowed virtual models to
be overlaid onto the camera image. This produced a convincing augmented
experience, whereby the virtual objects appeared to be within the physical world,
moving with both the camera and markers as expected of physical objects.
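The overlay step described above amounts to projecting tracked 3D points into the camera image so that rendered virtual geometry registers with the physical scene. A minimal pinhole-projection sketch is shown below; the intrinsic values are made up for illustration and this is not the thesis's actual rendering pipeline.

```python
import numpy as np

def project_points(points_cam, K):
    """Project 3D points (camera frame, metres) to pixel coordinates
    with the pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    pts = np.asarray(points_cam, dtype=float)  # shape (N, 3)
    uvw = pts @ K.T                            # apply intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]            # perspective divide by depth

# Hypothetical intrinsics for a 640x480 camera (focal length 500 px,
# principal point at the image centre).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A virtual point 0.5 m ahead on the optical axis lands at the
# principal point (320, 240).
uv = project_points([[0.0, 0.0, 0.5]], K)
```

With marker tracking supplying each object's pose relative to the camera, virtual model vertices are transformed into the camera frame and projected this way onto the live image, which is what makes the virtual objects appear to move with the camera and markers.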
Analysis of the first generation system identified both accuracy and graphical
inadequacies, prompting the development of a second generation system. This too
was based upon a tool-mounted fiducial marker system, and improved performance
to near-millimetre probing accuracy. A resection system was incorporated into the
system, and utilising the tracking information controlled resection was performed,
producing sub-millimetre accuracies.
Several complications resulted from the tool-mounted approach. Therefore, a third
generation system was developed. This final generation deployed a stereoscopic
visible-spectrum camera system affixed to a head-mounted display worn by the user.
The system allowed the augmentation of the natural view of the user, providing
convincing and immersive three dimensional augmented guidance, with probing and
resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.