
    On Using Gait Biometrics to Enhance Face Pose Estimation

    Many face biometrics systems use controlled environments where subjects are viewed directly facing the camera. This is less likely to occur in surveillance environments, so a process is required to handle the pose variation of the human head, changes in illumination, and the low frame rate of input image sequences. This has been achieved using scale-invariant features and 3D models to determine the pose of the human subject. A gait trajectory model is then generated to obtain the correct face region whilst handling the looming effect. In this way, we describe a new approach aimed at estimating face pose accurately. The contributions of this research include the construction of a 3D model for pose estimation from planar imagery and the first use of gait information to enhance the face pose estimation process.
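    The abstract does not specify how pose is recovered from the 3D model, but the core idea of estimating head pose from planar imagery can be illustrated with a minimal geometric sketch: given three 2D facial landmarks, the horizontal offset of the nose relative to the eye midpoint gives a crude yaw estimate. The function name and the scaling constant are illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    def estimate_yaw(left_eye, right_eye, nose):
        """Rough head-yaw estimate (degrees) from three 2D landmarks.

        Crude geometric assumption: a frontal face places the nose
        x-coordinate midway between the eyes, and the normalized
        horizontal offset (times an assumed model constant of 2)
        approximates sin(yaw).
        """
        left_eye, right_eye, nose = map(np.asarray, (left_eye, right_eye, nose))
        mid = (left_eye + right_eye) / 2.0
        inter_ocular = np.linalg.norm(right_eye - left_eye)
        offset = (nose[0] - mid[0]) / inter_ocular  # normalized horizontal offset
        return np.degrees(np.arcsin(np.clip(2.0 * offset, -1.0, 1.0)))
    ```

    A frontal face (nose centered between the eyes) yields a yaw of zero; a laterally displaced nose tip yields a proportionally larger angle.
    
    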

    Linearized Motion Estimation for Articulated Planes


    Multi-modal RGB–Depth–Thermal Human Body Segmentation


    Limbs detection and tracking of head-fixed mice for behavioral phenotyping using motion tubes and deep learning

    The broad accessibility of affordable and reliable recording equipment, and its relative ease of use, has enabled neuroscientists to record large amounts of neurophysiological and behavioral data. Given that most of this raw data is unlabeled, great effort is required to adapt it for behavioral phenotyping or signal extraction, for behavioral and neurophysiological data respectively. Traditional methods for labeling datasets rely on human annotators, a resource- and time-intensive process that often produces data prone to reproducibility errors. Here, we propose a deep-learning-based image segmentation framework to automatically extract and label limb movements from movies capturing frontal and lateral views of head-fixed mice. The method decomposes the image into elemental regions (superpixels) with similar appearance and concordant dynamics and stacks them following their partial temporal trajectory. These 3D descriptors (referred to as motion cues) are used to train a deep convolutional neural network (CNN). We use the features extracted at the last fully connected layer of the network to train a Long Short-Term Memory (LSTM) network that introduces spatio-temporal coherence to the limb segmentation. We tested the pipeline in two video acquisition settings. In the first, the camera is installed on the right side of the mouse (lateral setting). In the second, the camera faces the mouse directly (frontal setting). We also investigated the effect of noise in the videos and the amount of training data needed, and found that reducing the number of training samples does not lower detection accuracy by more than 5%, even when as little as 10% of the available data is used for training.
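    The "motion cue" idea — region-wise appearance stacked over time into a temporal descriptor — can be sketched with a toy numpy example. Fixed image tiles stand in for the paper's appearance-coherent superpixels, and the function name and patch size are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def motion_cues(frames, patch=4):
        """Toy stand-in for per-region temporal descriptors.

        Tiles each frame into fixed patches (a crude substitute for
        superpixels) and stacks each tile's mean intensity over time,
        giving one temporal descriptor per region.
        """
        T, H, W = frames.shape
        gh, gw = H // patch, W // patch
        # Reshape into (T, gh, patch, gw, patch) and average each tile.
        tiles = frames[:, :gh * patch, :gw * patch].reshape(T, gh, patch, gw, patch)
        means = tiles.mean(axis=(2, 4))   # (T, gh, gw) per-tile intensities
        return means.reshape(T, -1).T     # (regions, T) descriptors
    ```

    Each row is then a region's intensity trajectory; in the paper these descriptors feed a CNN, with the CNN's last-layer features in turn training the LSTM.
    
    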

    Remo Dance Motion Estimation with Markerless Motion Capture Using The Optical Flow Method

    Motion capture has been developed and applied in various fields, one of which is dance. Remo dance is a dance from East Java that tells of the struggle of a prince who fought on the battlefield. A Remo dancer does not wear a body-tight costume but rather several costume pieces and accessories, so a motion detection method is required that can detect limb motion without damaging the beauty of the costume or interfering with the dancer's movement. That method is markerless motion capture. Limb motions are partial: the limbs do not all move simultaneously, but alternately. Motion tracking is therefore required to detect which parts of the body are moving and in which direction. Optical flow is a method well suited to these conditions. Moving body parts are detected with a bounding box, and the difference in bounding-box values between frames determines the direction of motion and how far the object has moved. The optical flow method is simple and does not require a monochrome background. It does not use a complex feature extraction process, so it can be applied to real-time motion capture. The performance of motion detection with the optical flow method is determined by the ratio between the area of the blob and the area of the bounding box. The estimated coordinates do not necessarily match the original coordinates, but if the chart of the estimated motion is similar to the chart of the original motion, the estimate can be said to reproduce the original motion. Keywords: Motion Capture, Markerless, Remo Dance, Optical Flow
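    The two quantities the abstract relies on — the frame-to-frame bounding-box displacement and the blob-area / box-area ratio — can be sketched directly on binary masks. This is a minimal numpy illustration under that reading of the abstract; the function names are hypothetical and the actual system derives its blobs from optical flow.

    ```python
    import numpy as np

    def bbox_of(mask):
        """Axis-aligned bounding box (x0, y0, x1, y1) of a binary blob."""
        ys, xs = np.nonzero(mask)
        return xs.min(), ys.min(), xs.max(), ys.max()

    def motion_step(mask_prev, mask_curr):
        """Motion vector between two frames' blob masks, plus the
        blob-area / box-area ratio the abstract uses as a quality score."""
        x0, y0, x1, y1 = bbox_of(mask_prev)
        u0, v0, u1, v1 = bbox_of(mask_curr)
        c_prev = np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
        c_curr = np.array([(u0 + u1) / 2.0, (v0 + v1) / 2.0])
        delta = c_curr - c_prev                  # direction and magnitude of motion
        box_area = (u1 - u0 + 1) * (v1 - v0 + 1)
        ratio = mask_curr.sum() / box_area       # blob area over bounding-box area
        return delta, ratio
    ```

    A ratio near 1.0 indicates the blob fills its bounding box tightly; fragmented or noisy detections yield lower ratios.
    
    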

    Action Recognition in Videos: from Motion Capture Labs to the Web

    This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework that puts in evidence the evolution of the area, with techniques moving from heavily constrained motion capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and, thus, the constraints imposed on the type of video that each technique is able to address. Making the hypotheses and constraints explicit renders the framework particularly useful for selecting a method given an application. Another advantage of the proposed organization is that it allows the newest approaches to be categorized seamlessly alongside traditional ones, while providing an insightful perspective on the evolution of the action recognition task up to now. That perspective is the basis for the discussion at the end of the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU; survey paper, 46 pages, 2 figures, 4 tables

    Gait Analysis from Wearable Devices using Image and Signal Processing

    We present the results of analyzing gait motion in first-person video taken from a commercially available wearable camera embedded in a pair of glasses. The video is analyzed with three different computer vision methods to extract motion vectors from different gait sequences from four individuals, for comparison against a manually annotated ground-truth dataset. Using a combination of signal processing and computer vision techniques, gait features are extracted to identify the walking pace of the individual wearing the camera and are validated against the ground-truth dataset. We performed an additional data collection with both the camera and a body-worn accelerometer to understand the correlation between our vision-based data and a more traditional set of accelerometer data. Our results indicate that extracting activity from video in a controlled setting shows strong promise for activity monitoring applications such as eldercare, as well as for monitoring chronic healthcare conditions.
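    The abstract does not detail the signal-processing step that turns motion vectors into a walking pace, but a common approach is spectral: take the dominant non-DC frequency of a 1-D motion-magnitude signal as the step rate. The sketch below assumes that reading; the function name is illustrative, not the authors' pipeline.

    ```python
    import numpy as np

    def walking_pace(motion_signal, fps):
        """Estimate step rate (Hz) from a 1-D motion-magnitude signal.

        Removes the DC component, then returns the frequency of the
        largest peak in the real FFT spectrum -- a simple stand-in for
        the paper's unspecified signal-processing stage.
        """
        x = np.asarray(motion_signal, dtype=float)
        x = x - x.mean()                          # drop the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        return freqs[np.argmax(spectrum)]
    ```

    For example, a synthetic 2 Hz oscillation sampled at 30 fps recovers a 2 Hz pace; real camera-motion signals would first be smoothed and windowed.
    
    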