    A Monocular Marker-Free Gait Measurement System

    This paper presents a new, user-friendly, portable motion capture and gait analysis system for capturing and analyzing human gait, designed as a telemedicine tool to remotely monitor patients' progress through treatment. The system requires minimal user input and simple single-camera filming (which can be acquired from a basic webcam), making it accessible to nontechnical, nonclinical personnel. It allows gait studies to acquire a much larger data set and lets trained gait analysts focus their skills on the interpretation phase of gait analysis. The design uses a novel motion capture method derived from spatiotemporal segmentation and model-based tracking. Testing is performed on four monocular, sagittal-view sample gait videos. Results of the modeling, tracking, and analysis stages are presented with standard gait graphs and parameters compared against manually acquired data.
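
    To illustrate the kind of gait parameters reported above, here is a minimal sketch, assuming a tracked 2D ankle trajectory from the sagittal-view video and a known frame rate; the function name, the peak-detection spacing, and the heel-strike heuristic are illustrative assumptions, not the paper's method.

    import numpy as np
    from scipy.signal import find_peaks

    def gait_parameters(ankle_y, fps):
        """Estimate stride time (s) and cadence (steps/min) for one leg from the
        vertical image coordinate of the ankle, sampled at fps frames per second."""
        ankle_y = np.asarray(ankle_y, dtype=float)
        # Image y grows downward, so local maxima of y roughly mark ground contact.
        strikes, _ = find_peaks(ankle_y, distance=max(1, int(0.4 * fps)))
        if len(strikes) < 2:
            raise ValueError("need at least two detected heel strikes")
        stride_time = np.diff(strikes).mean() / fps   # seconds between same-leg contacts
        cadence = 2 * 60.0 / stride_time              # two steps per stride
        return stride_time, cadence

    For a 30 fps webcam clip, gait_parameters(ankle_y, fps=30) would return the mean stride time and cadence of the tracked leg, two of the standard parameters a gait report compares against manually acquired values.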

    Temporal archetypal analysis for action segmentation

    Unsupervised learning of invariant representations that efficiently describe high-dimensional time series has several applications in dynamic visual data analysis. The problem becomes more challenging when dealing with multiple time series arising from different modalities. A prominent example of this multimodal setting is human motion, which can be represented by multimodal time series of pixel intensities, depth maps, and motion capture data. Here, we study, for the first time, the problem of unsupervised learning of temporally and modality-invariant informative representations, referred to as archetypes, from multiple time series originating from different modalities. To this end, a novel method, coined temporal archetypal analysis, is proposed. The performance of the proposed method is assessed by experiments in unsupervised action segmentation. Experimental results on three real-world datasets, using single-modal and multimodal visual representations, demonstrate the robustness and effectiveness of the proposed method, which outperforms state-of-the-art methods, in most cases by a large margin.
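
    For readers unfamiliar with archetypal analysis, the sketch below shows the plain, non-temporal variant the paper builds on: the data matrix is approximated as X ~ X B A, with the columns of A and B constrained to the probability simplex, fitted by alternating projected gradient steps. Function names, step sizes, and the fixed iteration count are illustrative assumptions; the paper's temporal, multimodal extension is not reproduced here.

    import numpy as np

    def project_columns_to_simplex(M):
        """Euclidean projection of each column of M onto {w >= 0, sum(w) = 1}."""
        out = np.empty_like(M)
        for j in range(M.shape[1]):
            v = M[:, j]
            u = np.sort(v)[::-1]
            css = np.cumsum(u)
            idx = np.arange(1, len(v) + 1)
            rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
            theta = (1.0 - css[rho]) / (rho + 1)
            out[:, j] = np.maximum(v + theta, 0.0)
        return out

    def archetypal_analysis(X, k, iters=200, seed=0):
        """Factor X (d x n) into k archetypes Z = X @ B and simplex weights A (k x n)."""
        rng = np.random.default_rng(seed)
        d, n = X.shape
        A = project_columns_to_simplex(rng.random((k, n)))
        B = project_columns_to_simplex(rng.random((n, k)))
        for _ in range(iters):
            Z = X @ B                                          # current archetypes (d x k)
            # Gradient step in A for 0.5 * ||X - Z A||_F^2, then project to the simplex.
            lip_a = np.linalg.norm(Z.T @ Z, 2) + 1e-12
            A = project_columns_to_simplex(A - (Z.T @ (Z @ A - X)) / lip_a)
            # Gradient step in B for 0.5 * ||X - X B A||_F^2, then project.
            grad_b = X.T @ (X @ B @ A - X) @ A.T
            lip_b = np.linalg.norm(X.T @ X, 2) * np.linalg.norm(A @ A.T, 2) + 1e-12
            B = project_columns_to_simplex(B - grad_b / lip_b)
        return X @ B, A                                        # archetypes and weights

    The simplex constraints are what distinguish archetypes from ordinary PCA bases: each archetype is a convex combination of observations, and each observation is a convex combination of archetypes.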

    Human Motion Capture Data Tailored Transform Coding

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data enable efficient storage and transmission. Our analysis shows that mocap data have unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these characteristics. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices into the frequency domain, in which the transform coefficients have significantly less dependency. Finally, compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can easily be extended to compress mocap databases. It requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
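
    As a rough illustration of the data-dependent transform stage, the sketch below derives an orthogonal basis for one clip from an SVD of the mean-centered clip matrix and uniformly quantizes the resulting coefficients. The clip shape, the number of retained basis vectors, the quantization step, and the omission of entropy coding are assumptions for illustration, not the paper's exact codec.

    import numpy as np

    def encode_clip(clip, k=16, q_step=0.05):
        """clip: (frames x channels) matrix of joint parameters for one mocap clip."""
        mean = clip.mean(axis=0, keepdims=True)
        _, _, Vt = np.linalg.svd(clip - mean, full_matrices=False)
        basis = Vt[:k]                                   # top-k data-dependent basis vectors
        coeffs = (clip - mean) @ basis.T                 # transform into the learned basis
        q = np.round(coeffs / q_step).astype(np.int32)   # uniform quantization
        return q, basis, mean, q_step                    # q and basis would go to an entropy coder

    def decode_clip(q, basis, mean, q_step):
        """Dequantize and apply the inverse transform."""
        return (q * q_step) @ basis + mean

    clip = np.random.randn(120, 93)                      # e.g. 120 frames x 31 joints x 3 DOFs
    rec = decode_clip(*encode_clip(clip, k=16))          # lossy round trip

    Because the basis is computed per clip rather than fixed in advance (as the DCT is), it adapts to the strong correlations between joints and across frames that the paper identifies as characteristic of mocap data.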

    LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes

    Deep neural network (DNN) architectures have been shown to outperform traditional pipelines for object segmentation and pose estimation using RGBD data, but the performance of these DNN pipelines is directly tied to how representative the training data is of the true data. Hence, a key requirement for employing these methods in practice is a large set of labeled data for the specific robotic manipulation task, a requirement that is not generally satisfied by existing datasets. In this paper we develop a pipeline to rapidly generate high-quality RGBD data with pixelwise labels and object poses. We use an RGBD camera to collect video of a scene from multiple viewpoints and leverage existing reconstruction techniques to produce a dense 3D reconstruction. We label the 3D reconstruction using human-assisted ICP fitting of object meshes. By reprojecting the results of labeling the 3D scene, we can produce labels for each RGBD image of the scene. This pipeline enabled us to collect over 1,000,000 labeled object instances in just a few days. We use this dataset to answer questions about how much training data is required, and of what quality it must be, to achieve high performance from a DNN architecture.
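
    A minimal sketch of the label-reprojection step is shown below: labeled 3D points from the reconstruction are projected into one RGBD frame with a pinhole camera model and painted far to near to produce a per-pixel label image. The variable names, the camera convention (world-to-camera pose, OpenCV-style intrinsics), and the point-based rendering are assumptions; the pipeline itself renders the fitted, labeled object meshes.

    import numpy as np

    def reproject_labels(points_w, labels, T_cw, K, height, width):
        """points_w: (N, 3) labeled points in the world frame; labels: (N,) integer ids;
        T_cw: 4x4 world-to-camera transform; K: 3x3 pinhole intrinsics."""
        points_w = np.asarray(points_w, dtype=float)
        labels = np.asarray(labels)
        pts_c = (T_cw[:3, :3] @ points_w.T + T_cw[:3, 3:4]).T   # world -> camera frame
        front = pts_c[:, 2] > 0                                  # keep points in front of the camera
        pts_c, lab = pts_c[front], labels[front]
        uvw = (K @ pts_c.T).T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        u, v, z, lab = u[ok], v[ok], pts_c[ok, 2], lab[ok]
        label_img = np.zeros((height, width), dtype=np.int32)    # 0 = background / unlabeled
        for i in np.argsort(z)[::-1]:                            # draw far to near so closer points win
            label_img[v[i], u[i]] = lab[i]
        return label_img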

    MonoPerfCap: Human Performance Capture from Monocular Video

    We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges with a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network in a batch-based pose estimation strategy. Joint recovery of per-batch motion allows us to resolve the ambiguities of the monocular reconstruction problem using a low-dimensional trajectory subspace. In addition, we propose refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free-viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness, and the scene complexity that can be handled. (Accepted to ACM TOG 2018; to be presented at SIGGRAPH 2018.)
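
    To give a feel for the low-dimensional trajectory subspace idea mentioned above, the sketch below constrains one joint's per-batch 3D trajectory to the span of a truncated DCT basis, fitted by least squares to noisy per-frame estimates. The basis choice, the subspace size, and the plain least-squares fit are assumptions used to illustrate batch-based trajectory regularization, not MonoPerfCap's full energy formulation.

    import numpy as np

    def dct_basis(num_frames, k):
        """First k orthonormal DCT-II basis vectors over a batch of num_frames frames."""
        t = np.arange(num_frames)
        B = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(k)[None, :] / num_frames)
        B[:, 0] *= 1.0 / np.sqrt(2.0)
        return B * np.sqrt(2.0 / num_frames)         # (num_frames, k), orthonormal columns

    def smooth_trajectory(noisy_xyz, k=8):
        """noisy_xyz: (num_frames, 3) per-frame 3D estimates of one joint.
        Returns the closest trajectory lying in the k-dimensional DCT subspace."""
        B = dct_basis(noisy_xyz.shape[0], k)
        coeffs, *_ = np.linalg.lstsq(B, noisy_xyz, rcond=None)   # (k, 3) subspace coefficients
        return B @ coeffs

    Restricting each trajectory to a small number of smooth basis functions is one common way to suppress the frame-to-frame depth ambiguity of monocular estimates while preserving the overall motion of the batch.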