
    Improving 6D Pose Estimation of Objects in Clutter via Physics-aware Monte Carlo Tree Search

    This work proposes a process for efficiently searching over combinations of individual object 6D pose hypotheses in cluttered scenes, especially in cases involving occlusions and objects resting on each other. The initial set of candidate object poses is generated with state-of-the-art object detection and global point cloud registration techniques. The best-scored pose per object according to these techniques may not be accurate due to overlaps and occlusions. Nevertheless, experiments in this work indicate that poses ranked lower by the registration techniques may be closer to the true poses than the top-ranked ones. This motivates a global optimization process that improves these poses by taking into account scene-level physical interactions between objects. It also implies that the Cartesian product of candidate poses for interacting objects must be searched to identify the best scene-level hypothesis. To make the search efficient, the candidate poses for each object are clustered so as to reduce their number while preserving sufficient diversity. Searching over the combinations of candidate object poses is then performed through a Monte Carlo Tree Search (MCTS) process, guided by the similarity between the observed depth image of the scene and a rendering of the scene under the hypothesized poses. MCTS handles the tradeoff between fine-tuning the most promising poses and exploring new ones in a principled way, using the Upper Confidence Bound (UCB) technique. Experimental results indicate that this process quickly identifies physically consistent object poses in cluttered scenes that are significantly closer to ground truth than poses found by point cloud registration methods. Comment: 8 pages, 4 figures
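    The search described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical MCTS over per-object pose candidates: each tree level fixes one object's pose, UCB balances exploiting well-scoring assignments against exploring rarely-visited ones, and a caller-supplied score_scene function stands in for the rendered-versus-observed depth similarity. All names and the flat tree layout are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a UCB-guided tree search over per-object pose candidates.
# Assumptions: candidate_poses is a list over objects, each entry a list of
# clustered 6D pose hypotheses; score_scene maps a complete pose assignment
# to a rendered-vs-observed depth similarity (supplied by the caller).
import math

class Node:
    def __init__(self, parent=None, pose_choice=None):
        self.parent = parent
        self.pose_choice = pose_choice   # (object index, candidate pose) fixed here
        self.children = []
        self.visits = 0
        self.total_score = 0.0

def ucb(child, parent_visits, c=1.4):
    # Upper Confidence Bound: mean score plus an exploration bonus.
    if child.visits == 0:
        return float("inf")
    return child.total_score / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def mcts_pose_search(candidate_poses, score_scene, iterations=500):
    root = Node()
    best, best_score = None, -float("inf")
    for _ in range(iterations):
        node, depth, assignment = root, 0, []
        # Selection / expansion: walk down, assigning one object's pose per level.
        while depth < len(candidate_poses):
            if not node.children:
                node.children = [Node(node, (depth, p)) for p in candidate_poses[depth]]
            node = max(node.children, key=lambda ch: ucb(ch, node.visits + 1))
            assignment.append(node.pose_choice[1])
            depth += 1
        # A leaf already encodes a complete scene hypothesis; score it directly.
        s = score_scene(assignment)
        if s > best_score:
            best, best_score = list(assignment), s
        # Backpropagation of the score along the selected path.
        while node is not None:
            node.visits += 1
            node.total_score += s
            node = node.parent
    return best, best_score
```

    In this layout the rollout step is trivial: reaching a leaf already yields a full scene-level hypothesis, so its score is simply backed up along the path that produced it.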

    American Sign Language alphabet recognition using Microsoft Kinect

    American Sign Language (ASL) fingerspelling recognition using marker-less vision sensors is a challenging task due to the complexity of ASL signs, self-occlusion of the hand, and the limited resolution of the sensors. This thesis describes a new method for ASL fingerspelling recognition using a low-cost vision camera, Microsoft's Kinect. A segmented hand configuration is first obtained with a per-pixel classification algorithm based on depth-contrast features. Then, a hierarchical mode-finding method is developed and implemented to localize hand joint positions under kinematic constraints. Finally, a Random Decision Forest (RDF) classifier is built to recognize ASL signs from the joint angles. To validate the performance of this method, a dataset containing 75,000 samples of 24 static ASL alphabet signs is used. The system achieves a mean accuracy of 92%. We have also used a publicly available dataset from Surrey University to evaluate our method. The results show that our method achieves higher accuracy in recognizing ASL alphabet signs than previous benchmarks. --Abstract, page iii
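    As a rough illustration of the final classification stage, the sketch below trains a random forest on vectors of hand joint angles and reports held-out accuracy. The feature dimensionality, the synthetic data, and the use of scikit-learn's RandomForestClassifier in place of the thesis's RDF are all assumptions made for the example, not the thesis's code.

```python
# Hypothetical sketch: classify static ASL alphabet signs from joint angles
# with a random forest. Real joint angles would come from the hand joint
# localization stage; here they are replaced by synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, np.pi, size=(75_000, 20))   # assumed 20 joint angles per sample, in radians
y = rng.integers(0, 24, size=75_000)             # labels for the 24 static ASL alphabet signs

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, max_depth=20, random_state=0)
clf.fit(X_train, y_train)
print("mean accuracy:", clf.score(X_test, y_test))
```

    With joint angles actually extracted by the mode-finding stage, the same train/evaluate loop would apply unchanged; only the feature construction differs.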