
    A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint

    3D shape editing is widely used in applications such as movie production, computer games, and computer-aided design, and it is a popular research topic in computer graphics and computer vision. Over the past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by minimizing an energy term over transformations and their weights. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve editing results. More recently, as deep neural networks became popular, many deep-learning-based editing methods, which are naturally data-driven, have been developed. We mainly survey recent works, moving from the geometric viewpoint to emerging neural deformation techniques, and categorize the methods into organic shape editing and man-made model editing. Both traditional methods and recent neural-network-based methods are reviewed.
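    The classical energy-based formulation comes in many concrete variants; as one representative example (not specific to this survey), the widely used as-rigid-as-possible (ARAP) energy of Sorkine and Alexa is minimized over the deformed vertex positions and per-vertex rotations:

    ```latex
    % ARAP deformation energy: p_i are rest-pose vertex positions,
    % p'_i the deformed positions, R_i a per-vertex rotation,
    % N(i) the one-ring neighbourhood, w_ij cotangent weights.
    \[
      E(\mathbf{p}') = \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij}
        \bigl\| (\mathbf{p}'_i - \mathbf{p}'_j) - R_i\,(\mathbf{p}_i - \mathbf{p}_j) \bigr\|^2
    \]
    ```

    Handle positions enter as hard constraints, and the energy is typically optimized by alternating local SVD fits for the R_i with a single global linear solve for p'.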

    Analysis and Manipulation of Repetitive Structures of Varying Shape

    Self-similarity and repetitions are ubiquitous in man-made and natural objects. Such structural regularities often relate to form, function, aesthetics, and design considerations. Discovering structural redundancies along with their dominant variations from 3D geometry not only allows us to better understand the underlying objects, but is also beneficial for several geometry processing tasks, including compact representation, shape completion, and intuitive shape manipulation. To identify these repetitions, we present a novel detection algorithm based on analyzing a graph of surface features. We combine general feature detection schemes with a RANSAC-based randomized subgraph searching algorithm in order to reliably detect recurring patterns of locally unique structures. A subsequent segmentation step based on simultaneous region growing verifies that the actual data supports the patterns detected in the feature graphs. We introduce our graph-based detection algorithm using rigid repetitive structure detection as an example, and then extend the approach to allow more general deformations between the detected parts. We introduce subspace symmetries, whereby we characterize similarity by requiring the set of repeating structures to form a low-dimensional shape space. We discover these structures by detecting linearly correlated correspondences among graphs of invariant features. The found symmetries, along with the modeled variations, are useful for a variety of applications, including non-local and non-rigid denoising. Employing subspace symmetries for shape editing, we introduce a morphable part model for smart shape manipulation. The input geometry is converted to an assembly of deformable parts with appropriate boundary conditions. Our method uses self-similarities from a single model or corresponding parts of shape collections as training input, and also allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary conditions across part boundaries. We obtain an interactive yet intuitive shape deformation framework that produces realistic deformations on classes of objects which are difficult to edit using repetition-unaware deformation techniques.
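    The subspace symmetries above rest on a testable criterion: the repeating structures must span a low-dimensional shape space. A minimal sketch of that test, assuming the parts are already in correspondence and flattened into coordinate vectors (this input layout, `dim`, and the variance threshold are illustrative assumptions, not the paper's actual choices):

    ```python
    import numpy as np

    def is_subspace_symmetry(parts, dim=2, var_explained=0.95):
        """Check whether repeating parts span a low-dimensional shape
        space, in the spirit of subspace symmetries.

        parts: (n_parts, 3 * n_points) array; each row is one detected
        part, already in correspondence (hypothetical layout). Returns
        True if the top `dim` principal components explain at least
        `var_explained` of the total variance."""
        X = parts - parts.mean(axis=0)           # center the shape samples
        s = np.linalg.svd(X, compute_uv=False)   # singular values
        ratio = s**2 / np.sum(s**2)              # explained-variance ratios
        return float(ratio[:dim].sum()) >= var_explained
    ```

    Groups of parts that pass such a test can then be modeled jointly, e.g. by a single morphable part.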

    Image-based clothes changing system

    Current image-editing tools do not meet the demands of personalized image manipulation, one application of which is changing clothes in user-captured images. Previous work can change single-color clothes using parametric human warping methods. In this paper, we propose an image-based clothes changing system exploiting body factor extraction and content-aware image warping. Image segmentation and mask generation are first applied to the user input. Afterwards, we determine joint positions via a neural network. Then, body shape matching is performed and the shape of the model is warped to the user’s shape. Finally, head swapping is performed to produce realistic virtual results. We also provide a supervision and labeling tool for refinement and further assistance when creating a dataset.
    https://deepblue.lib.umich.edu/bitstream/2027.42/136772/1/41095_2017_Article_84.pd
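    As an illustration of the warping stage only, the sketch below moves an image's source landmarks (e.g. the model's joints) onto target landmarks (the user's joints) with a thin-plate-spline interpolator; SciPy serves here as a generic stand-in, and this is not the paper's own content-aware warp:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_warp(image, src_pts, dst_pts):
        """Warp `image` so landmarks src_pts move to dst_pts.
        image: (H, W, 3) uint8; src_pts/dst_pts: (n, 2) arrays of
        (x, y), with at least three non-collinear landmarks.
        Backward mapping: for each output pixel, interpolate where
        to sample in the input image."""
        h, w = image.shape[:2]
        # Thin-plate-spline map from destination to source coordinates.
        f = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
        ys, xs = np.mgrid[0:h, 0:w]
        grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
        src = f(grid)                                 # (H*W, 2) sample coords
        sx = np.clip(src[:, 0], 0, w - 1).round().astype(int)
        sy = np.clip(src[:, 1], 0, h - 1).round().astype(int)
        return image[sy, sx].reshape(h, w, 3)         # nearest-neighbour sampling
    ```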

    Iterative consolidation on unorganized point clouds and its application in design.

    Chan, Kwan Chung. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (leaves 63-69). Abstracts in English and Chinese.
    Contents: Abstract; Acknowledgements; List of Figures; List of Tables.
    1. Introduction: 1.1 Main contributions; 1.2 Overview.
    2. Related Work: 2.1 Point cloud processing; 2.2 Model repairing; 2.3 Deformation and reconstruction.
    3. Iterative Consolidation on Un-orientated Point Clouds: 3.1 Algorithm overview; 3.2 Down-sampling and outliers removal (3.2.1 Normal estimation; 3.2.2 Down-sampling; 3.2.3 Particle noise removal); 3.3 APSS based repulsion; 3.4 Refinement (3.4.1 Adaptive up-sampling; 3.4.2 Selection of up-sampled points; 3.4.3 Sample noise removal); 3.5 Set constraints to sample points.
    4. Shape Modeling by Point Set: 4.1 Principle of deformation; 4.2 Selection; 4.3 Stretching and compressing; 4.4 Bending and twisting; 4.5 Inserting points.
    5. Results and Discussion: 5.1 Program environment; 5.2 Results of iterative consolidation on un-orientated points; 5.3 Effect of our de-noising based on up-sampled points.
    6. Conclusions: 6.1 Advantages; 6.2 Factors affecting our algorithm; 6.3 Possible future works (6.3.1 Improve on the quality of results; 6.3.2 Reduce user input; 6.3.3 Multi-thread computation).
    Appendix A. Finding Neighbors: A.1 k-d Tree; A.2 Octree; A.3 Minimum spanning tree.
    Appendix B. Principle Component Analysis.
    Appendix C. UI of the program: C.1 User Interface.
    Appendix D. Publications. Bibliography.
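    Section 3.2.1 (normal estimation) together with Appendix B (principal component analysis) points to the standard PCA-based normal estimator for unorganized points. A minimal sketch of that technique, with the neighbourhood size k an arbitrary choice:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=16):
        """PCA normal estimation: the normal at a point is the
        eigenvector of the local covariance matrix with the smallest
        eigenvalue. points: (n, 3) array."""
        k = min(k, len(points))
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)        # (n, k) neighbour indices
        normals = np.empty_like(points, dtype=float)
        for i in range(len(points)):
            nbrs = points[idx[i]]
            nbrs = nbrs - nbrs.mean(axis=0)     # center the neighbourhood
            cov = nbrs.T @ nbrs                 # 3x3 scatter matrix
            w, v = np.linalg.eigh(cov)          # eigenvalues ascending
            normals[i] = v[:, 0]                # smallest-variance direction
        return normals
    ```

    Consistently orienting the estimated normals is a separate step, commonly done by propagating orientation over a minimum spanning tree (cf. Appendix A.3).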

    Automatic Alignment of 3D Multi-Sensor Point Clouds

    Automatic 3D point cloud alignment is a major research topic in photogrammetry, computer vision, and computer graphics. In this research, two keypoint feature matching approaches have been developed and proposed for the automatic alignment of 3D point clouds that have been acquired from different sensor platforms and lie in different 3D conformal coordinate systems. The first proposed approach is based on 3D keypoint feature matching. First, surface curvature information is utilized for scale-invariant 3D keypoint extraction. Adaptive non-maxima suppression (ANMS) is then applied to retain the most distinct and well-distributed set of keypoints. Afterwards, every keypoint is characterized by a scale-, rotation- and translation-invariant 3D surface descriptor, called the radial geodesic distance-slope histogram. Similar keypoint descriptors on the source and target datasets are then matched using bipartite graph matching, followed by a modified RANSAC for outlier removal. The second proposed method is based on 2D keypoint matching performed on height map images of the 3D point clouds. Height map images are generated by projecting the 3D point clouds onto a planimetric plane. Afterwards, a multi-scale wavelet 2D keypoint detector with ANMS is proposed to extract keypoints on the height maps. Then, a scale-, rotation- and translation-invariant 2D descriptor, referred to as the Gabor, Log-Polar-Rapid Transform descriptor, is computed for all keypoints. Finally, source and target height map keypoint correspondences are determined using bi-directional nearest-neighbour matching, together with the modified RANSAC for outlier removal. Each method is assessed on multi-sensor, urban and non-urban 3D point cloud datasets. Results show that, unlike the 3D-based method, the height-map-based approach is able to align source and target datasets with differences in point density, point distribution, and missing point data. Findings also show that the 3D-based method obtained lower transformation errors and a greater number of correspondences when the source and target have similar point characteristics. The 3D-based approach attained absolute mean alignment differences in the range of 0.23 m to 2.81 m, whereas the height map approach ranged from 0.17 m to 1.21 m. These differences are consistent with the proximity requirements of the respective data characteristics and permit the further application of fine co-registration approaches.
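    Both pipelines end with a modified RANSAC over keypoint correspondences. The modifications are not reproduced here, but the baseline they build on, plain RANSAC around a Kabsch/Umeyama rigid fit, can be sketched as follows (the iteration count and inlier threshold are illustrative):

    ```python
    import numpy as np

    def rigid_from_pairs(src, dst):
        """Least-squares rigid transform (Kabsch, no scale) mapping the
        (n, 3) points src onto dst: returns R, t with dst ~ R @ src + t."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                             # reflection-safe rotation
        return R, cd - R @ cs

    def ransac_rigid(src, dst, iters=1000, thresh=0.05, seed=None):
        """Plain RANSAC over minimal 3-point samples to reject outlier
        correspondences; thresh is in the point clouds' units."""
        rng = np.random.default_rng(seed)
        best = np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            pick = rng.choice(len(src), size=3, replace=False)
            R, t = rigid_from_pairs(src[pick], dst[pick])
            resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
            inliers = resid < thresh
            if inliers.sum() > best.sum():
                best = inliers
        R, t = rigid_from_pairs(src[best], dst[best])  # refit on all inliers
        return R, t, best
    ```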

    State of the Art in Dense Monocular Non-Rigid 3D Reconstruction

    3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D image observations is a long-standing and actively researched area of computer vision and graphics. It is an ill-posed inverse problem, since, without additional prior assumptions, it permits infinitely many solutions leading to accurate projection to the input 2D images. Non-rigid reconstruction is a foundational building block for downstream applications like robotics, AR/VR, or visual content creation. The key advantage of using monocular cameras is their omnipresence and availability to the end users, as well as their ease of use compared to more sophisticated camera set-ups such as stereo or multi-view systems. This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views. It reviews the fundamentals of 3D reconstruction and deformation modeling from 2D image observations. We then start from general methods that handle arbitrary scenes and make only a few prior assumptions, and proceed towards techniques making stronger assumptions about the observed objects and types of deformations (e.g. human faces, bodies, hands, and animals). A significant part of this STAR is also devoted to classification and a high-level comparison of the methods, as well as an overview of the datasets for training and evaluation of the discussed techniques. We conclude by discussing open challenges in the field and the social aspects associated with the usage of the reviewed methods.
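    To make the ill-posedness concrete: a classical way to constrain the problem is the low-rank shape-basis model of Bregler et al. (2000), in which every per-frame shape is a linear combination of a few basis shapes:

    ```latex
    % Low-rank model for non-rigid structure from motion: S_t is the
    % 3D shape in frame t, B_k the K basis shapes, c_{tk} per-frame
    % coefficients, R_t the first two rows of a rotation (orthographic
    % camera), W_t the observed 2D point tracks.
    \[
      \begin{aligned}
        S_t &= \sum_{k=1}^{K} c_{tk}\, B_k, & B_k &\in \mathbb{R}^{3 \times P},\\
        W_t &= R_t\, S_t + \mathbf{t}_t \mathbf{1}^{\top}, & W_t &\in \mathbb{R}^{2 \times P}.
      \end{aligned}
    \]
    ```

    Stacking the W_t over all frames gives a measurement matrix of rank at most 3K (after centering), which collapses the otherwise infinite solution set to a factorizable family.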