
    Multi-body Non-rigid Structure-from-Motion

    Conventional structure-from-motion (SFM) research is primarily concerned with the 3D reconstruction of a single, rigidly moving object seen by a static camera, or of a static and rigid scene observed by a moving camera; in both cases there is only one relative rigid motion involved. Recent progress has extended SFM to multi-body SFM (where there are multiple rigid relative motions in the scene) and to non-rigid SFM (where there is a single non-rigid, deformable object or scene). Along this line of thinking, there is a clear gap: "multi-body non-rigid SFM", in which the task is to jointly reconstruct and segment the 3D structures of multiple non-rigid objects or deformable scenes from images. Such a multi-body non-rigid scenario is common in reality (e.g. two persons shaking hands, a multi-person social event), and solving it represents a natural next step in SFM research. By leveraging recent results in subspace clustering, this paper proposes, for the first time, an effective framework for multi-body NRSFM, which simultaneously reconstructs and segments each 3D trajectory into its respective low-dimensional subspace. Under our formulation, the 3D trajectories of each non-rigid structure can be well approximated by a sparse affine combination of other 3D trajectories from the same structure (self-expressiveness). We solve the resulting optimization with the alternating direction method of multipliers (ADMM). We demonstrate the efficacy of the proposed framework through extensive experiments on both synthetic and real data sequences. Our method clearly outperforms alternative approaches, such as first clustering the 2D feature tracks into groups and then performing non-rigid reconstruction within each group, or first reconstructing in 3D under a single-subspace assumption and then clustering the 3D trajectories into groups.
    Comment: 21 pages, 16 figures
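
    The self-expressiveness idea at the heart of this abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, not the paper's ADMM solver: it builds the sparse self-expression matrix column by column with independent Lasso solves (dropping the affine constraint for simplicity) and then segments the trajectories by spectral clustering on the resulting affinity. The function name, the regularization weight alpha, and the use of scikit-learn are all assumptions for illustration.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.cluster import SpectralClustering

        def cluster_trajectories(X, n_clusters, alpha=0.01):
            """Group trajectories (columns of X) by sparse self-expression.

            Each column is regressed on the remaining columns with an l1
            penalty; trajectories lying in the same low-dimensional subspace
            tend to select one another, so the coefficient magnitudes form a
            segmentation-friendly affinity.
            """
            n = X.shape[1]
            C = np.zeros((n, n))
            for i in range(n):
                mask = np.arange(n) != i          # exclude self-representation
                lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
                lasso.fit(X[:, mask], X[:, i])
                C[mask, i] = lasso.coef_
            W = np.abs(C) + np.abs(C).T           # symmetric affinity matrix
            return SpectralClustering(
                n_clusters=n_clusters, affinity="precomputed"
            ).fit_predict(W)

    For stacked 2D or 3D trajectories this returns one label per feature track; the paper's joint formulation additionally reconstructs the 3D shapes while segmenting, which this sketch does not attempt.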

    Visualizing and Modeling Interior Spaces of Dangerous Structures using Lidar

    Light Detection and Ranging (LIDAR) scanning can be used to safely and remotely provide intelligence on the interior of dangerous structures for first responders who need to enter them. By scanning into a structure through windows and other openings, or by moving the LIDAR scanner inside the structure, in both cases carried by a remote-controlled robotic crawler, the presence of dangerous items or personnel can be confirmed or denied. Entry and egress pathways can be determined in advance, and potential hiding/ambush locations identified. This paper describes an integrated system of a robotic crawler and LIDAR scanner. Both the scanner and the robot are wirelessly remote controlled from a single laptop computer. This includes navigation of the crawler with real-time video, self-leveling of the LIDAR platform, and the ability to raise the scanner to heights of 2.5 m. Multiple scans can be taken from different angles to fill in detail and provide more complete coverage. These scans can quickly be registered to each other using user-defined 'pick points', creating a single point cloud from multiple scans. Software has been developed to deconstruct the point clouds and identify specific objects in the interior of the structure, and to interactively visualize and walk through the modeled structures. Floor plans are generated automatically, and a data export facility has been developed. Tests have been conducted on multiple structures, simulating many of the contingencies that a first responder would face
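
    The 'pick point' registration step described above corresponds to the classical absolute-orientation problem: given three or more corresponding points picked in two scans, the rigid transform between the scans can be recovered in closed form. The sketch below is an assumption about how such a step could look (the paper does not describe its software at this level); it uses the standard Kabsch/SVD solution, and the function name is illustrative.

        import numpy as np

        def register_by_pick_points(src_pts, dst_pts):
            """Rigid transform (R, t) aligning picked points src -> dst.

            src_pts, dst_pts: (N, 3) arrays of corresponding points, N >= 3.
            Uses the Kabsch algorithm: center both sets, take the SVD of the
            cross-covariance, and correct for a possible reflection.
            """
            src_c = src_pts.mean(axis=0)
            dst_c = dst_pts.mean(axis=0)
            H = (src_pts - src_c).T @ (dst_pts - dst_c)   # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))        # reflection guard
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = dst_c - R @ src_c
            return R, t

        # Usage: apply the transform to every point of the source scan
        # (rows are XYZ): aligned = scan @ R.T + t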

    Biview learning for human posture segmentation from 3D points cloud

    Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body-part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training datasets. In this paper, we propose an efficient two-stage dimension-reduction scheme, termed biview learning, to encode two independent views: depth-difference features (DDF) and relative position features (RPF). Biview learning exploits the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to exploit the complementary property of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) on the output of CCA. We carefully validate the effectiveness of DLA and CCA within the two-stage scheme on our 3D human point cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. © 2014 Qiao et al.
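
    As a rough sketch of the two-stage pipeline: DLA is not available in standard libraries, so the snippet below substitutes linear discriminant analysis (LDA) as a stand-in discriminative projection for stage one, then applies CCA between the reduced DDF view and the RPF view, and trains an SVM on the concatenated canonical variates. The function name, the LDA substitution, and the n_cca default are all assumptions for illustration, not the paper's implementation.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.cross_decomposition import CCA
        from sklearn.svm import SVC

        def biview_pipeline(ddf, rpf, labels, n_cca=10):
            """Two-stage reduction over two views, then an SVM classifier.

            Stage 1: discriminative linear projection of the high-dimensional
            DDF view (LDA here stands in for the paper's DLA).
            Stage 2: CCA couples the reduced DDF with the RPF view.
            """
            stage1 = LinearDiscriminantAnalysis()           # stand-in for DLA
            ddf_low = stage1.fit_transform(ddf, labels)     # stage 1
            n_cca = min(n_cca, ddf_low.shape[1], rpf.shape[1])
            cca = CCA(n_components=n_cca)
            u, v = cca.fit_transform(ddf_low, rpf)          # stage 2
            clf = SVC(kernel="rbf")                         # per-point classifier
            clf.fit(np.hstack([u, v]), labels)
            return stage1, cca, clf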

    Trends in Mathematical Imaging and Surface Processing

    Motivated both by industrial applications and by the challenge of new problems, one observes increasing interest in the field of image and surface processing in recent years. It has become clear that even though the application areas differ significantly, the methodological overlap is enormous. Even though contributions to the field come from almost every discipline in mathematics, a major role is played by partial differential equations, in particular by geometric and variational modeling and by their numerical counterparts. The aim of the workshop was to gather a group of leading experts from mathematics, engineering, and computer graphics to cover the main developments