
    Pixel-level hand detection with shape-aware structured forests

    LNCS vol. 9006: Computer Vision -- ACCV 2014, 12th Asian Conference on Computer Vision, Singapore, November 1-5, 2014, Revised Selected Papers, Part IV.
    Hand detection has many important applications in HCI, yet it is a challenging problem because the appearance of hands can vary greatly in images. In this paper, we propose a novel method for efficient pixel-level hand detection. Unlike previous methods, which assign a binary label to every pixel independently, our method estimates a probability shape mask for each pixel using structured forests. This approach better exploits hand shape information in the training data and enforces shape constraints in the estimation. Aggregating multiple predictions generated from neighboring pixels further improves the robustness of our method. We evaluate our method on both egocentric videos and unconstrained still images. Experimental results show that our method detects hands efficiently and outperforms other state-of-the-art methods.
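    A minimal Python sketch of the aggregation step mentioned in the abstract, assuming each pixel's structured-forest output is a small probability mask over its local neighborhood; the dictionary format, patch size, and simple averaging rule are illustrative assumptions, not the authors' implementation:

        import numpy as np

        def aggregate_shape_masks(mask_predictions, image_shape, patch_size=16):
            # Average overlapping per-pixel shape masks into a hand probability map.
            # mask_predictions: dict mapping (row, col) -> (patch_size, patch_size)
            # array of hand probabilities centered on that pixel (hypothetical format).
            h, w = image_shape
            prob_sum = np.zeros((h, w), dtype=np.float64)
            count = np.zeros((h, w), dtype=np.float64)
            half = patch_size // 2

            for (r, c), mask in mask_predictions.items():
                # Image window covered by this pixel's predicted shape mask.
                r0, r1 = max(r - half, 0), min(r - half + patch_size, h)
                c0, c1 = max(c - half, 0), min(c - half + patch_size, w)
                # Matching region inside the predicted mask (handles image borders).
                mr0, mc0 = r0 - (r - half), c0 - (c - half)
                prob_sum[r0:r1, c0:c1] += mask[mr0:mr0 + (r1 - r0), mc0:mc0 + (c1 - c0)]
                count[r0:r1, c0:c1] += 1.0

            # Pixels covered by no prediction keep probability 0.
            return np.divide(prob_sum, count, out=np.zeros_like(prob_sum), where=count > 0)

        # Example: two neighboring pixels voting on a 32x32 image.
        preds = {(10, 10): np.full((16, 16), 0.9), (12, 11): np.full((16, 16), 0.7)}
        prob_map = aggregate_shape_masks(preds, (32, 32))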

    Left/Right Hand Segmentation in Egocentric Videos

    Wearable cameras allow people to record their daily activities from a user-centered (First Person Vision) perspective. Due to their favorable location, wearable cameras frequently capture the hands of the user and may thus represent a promising user-machine interaction tool for different applications. Existing First Person Vision methods handle hand segmentation as a background-foreground problem, ignoring two important facts: i) hands are not a single "skin-like" moving element but a pair of interacting, cooperative entities; ii) close hand interactions may lead to hand-to-hand occlusions and, as a consequence, create a single hand-like segment. These facts complicate a proper understanding of hand movements and interactions. Our approach extends traditional background-foreground strategies by including a hand-identification step (left/right) based on a Maxwell distribution of angle and position. Hand-to-hand occlusions are addressed by exploiting temporal superpixels. Experimental results show that, in addition to reliable left/right hand segmentation, our approach considerably improves traditional background-foreground hand segmentation.
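    A hypothetical Python sketch of the left/right identification step described above, scoring a hand segment's major-axis angle and normalized horizontal position under one Maxwell distribution per hand; the feature construction, independence assumption, and scale parameters are placeholders rather than the paper's fitted model:

        import numpy as np
        from scipy.stats import maxwell

        def identify_left_right(angle, x_norm, left_model, right_model):
            # Score the segment under one Maxwell model per hand and keep the
            # more likely label. Features, independence, and parameters are
            # illustrative placeholders, not the paper's fitted model.
            left_like = (maxwell.pdf(angle, scale=left_model["angle_scale"])
                         * maxwell.pdf(x_norm, scale=left_model["pos_scale"]))
            right_like = (maxwell.pdf(np.pi - angle, scale=right_model["angle_scale"])
                          * maxwell.pdf(1.0 - x_norm, scale=right_model["pos_scale"]))
            return "left" if left_like >= right_like else "right"

        # Example with placeholder scale parameters: a segment entering near the
        # lower-left border with a small major-axis angle is labeled "left".
        left_model = {"angle_scale": 0.6, "pos_scale": 0.3}
        right_model = {"angle_scale": 0.6, "pos_scale": 0.3}
        print(identify_left_right(angle=0.4, x_norm=0.2,
                                  left_model=left_model, right_model=right_model))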