
    Superquadric representation of scenes from multi-view range data

    Object representation denotes representing three-dimensional (3D) real-world objects with known graphic or mathematical primitives recognizable to computers. This research has numerous applications for object-related tasks in areas including computer vision, computer graphics, and reverse engineering. Superquadrics, as volumetric and parametric models, have been selected as the representation primitives throughout this research. Superquadrics are able to represent a large family of solid shapes by a single equation with only a few parameters. This dissertation addresses superquadric representation of multi-part objects and multi-object scenes. Two issues motivate this research. First, superquadric representation of multi-part objects or multi-object scenes has been an unsolved problem due to the complex geometry of objects. Second, superquadrics recovered from single-view range data tend to have low confidence and accuracy due to partially scanned object surfaces caused by inherent occlusions. To address these two problems, this dissertation proposes a multi-view superquadric representation algorithm. By incorporating both part decomposition and multi-view range data, the proposed algorithm is able not only to represent multi-part objects or multi-object scenes, but also to achieve high confidence and accuracy of the recovered superquadrics. The multi-view superquadric representation algorithm consists of (i) initial superquadric model recovery from single-view range data, (ii) pairwise view registration based on recovered superquadric models, (iii) view integration, (iv) part decomposition, and (v) final superquadric fitting for each decomposed part. Within the multi-view superquadric representation framework, this dissertation proposes a 3D part decomposition algorithm to automatically decompose multi-part objects or multi-object scenes into their constituent single parts, consistent with human visual perception. Superquadrics can then be recovered for each decomposed single-part object. The proposed part decomposition algorithm is based on curvature analysis, and includes (i) Gaussian curvature estimation, (ii) boundary labeling, (iii) part growing and labeling, and (iv) post-processing. In addition, this dissertation proposes an extended view registration algorithm based on superquadrics. The proposed view registration algorithm is able to handle deformable superquadrics as well as 3D unstructured data sets. For superquadric fitting, the two objective functions primarily used in the literature have been comprehensively investigated with respect to noise, viewpoints, sample resolutions, etc. The objective function that proved to have the better performance has been used throughout this dissertation. In summary, the three algorithms (contributions) proposed in this dissertation are generic and flexible in the sense of handling triangle meshes, which are standard surface primitives in computer vision and graphics. For each proposed algorithm, the dissertation presents both theory and experimental results. The results demonstrate the efficiency of the algorithms using both synthetic and real range data of a large variety of objects and scenes. In addition, the experimental results include comparisons with previous methods from the literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art in superquadric representation, and presents possible future extensions to this research.
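    Superquadrics are usually written through their inside-outside function, which is also the quantity the fitting objectives compared in the dissertation are built on. The short sketch below evaluates that function in the standard parameterization (size parameters a1, a2, a3 and shape exponents e1, e2); it is a generic illustration of the model, not the dissertation's recovery pipeline.

```python
import numpy as np

# Generic superquadric inside-outside function in the standard convention;
# an illustrative sketch, not the dissertation's fitting code.
def superquadric_inside_outside(points, a1, a2, a3, e1, e2):
    """Evaluate F(x, y, z) for an N x 3 array of points.

    F < 1: inside the superquadric, F == 1: on the surface, F > 1: outside.
    """
    x = np.abs(points[:, 0]) / a1
    y = np.abs(points[:, 1]) / a2
    z = np.abs(points[:, 2]) / a3
    return (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)

# With a1 = a2 = a3 = 1 and e1 = e2 = 1 the superquadric is a unit sphere.
pts = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(superquadric_inside_outside(pts, 1.0, 1.0, 1.0, 1.0, 1.0))  # [0.25 4.]
```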

    Part Description and Segmentation Using Contour, Surface and Volumetric Primitives

    The problem of part definition, description, and decomposition is central to shape recognition systems. The ultimate goal of segmenting range images into meaningful parts and objects has proved very difficult to realize, mainly due to the isolation of the segmentation problem from the issue of representation. We propose a paradigm for part description and segmentation by integration of contour, surface, and volumetric primitives. Unlike previous approaches, we use geometric properties derived from both boundary-based (surface contours and occluding contours) and primitive-based (quadric patches and superquadric models) representations to define and recover part-whole relationships, without a priori knowledge about the objects or object domain. The object shape is described at three levels of complexity, each contributing to the overall shape. Our approach can be summarized as answering the following question: given that we have three different modules for extracting volume, surface, and boundary properties, how should they be invoked, evaluated, and integrated? Volume and boundary fitting, and surface description, are performed in parallel to incorporate the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The process involves feedback between the segmentor (the control module) and the individual shape description modules. The control module evaluates the intermediate descriptions and formulates hypotheses about parts. Hypotheses are further tested by the segmentor and the descriptors. The descriptions thus obtained are independent of position, orientation, scale, domain, and domain properties, and are based purely on geometric considerations. They are extremely useful for high-level, domain-dependent symbolic reasoning processes, which need not deal with a tremendous amount of data, but only with a rich description of the data in terms of primitives recovered at various levels of complexity.
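    The hypothesize-and-test integration described above can be pictured with a toy control-module step: part hypotheses proposed by the volume, surface, and boundary descriptors are pooled, evaluated, and either accepted or sent back for refinement. The data structures and scoring below are hypothetical stand-ins for illustration only, and the real system runs this as a feedback loop rather than a single pass.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartHypothesis:
    part_id: int        # identifier of the proposed part (hypothetical structure)
    fit_error: float    # residual of the primitive fit supporting the hypothesis

def integrate(volume_parts, surface_parts, boundary_parts, max_error=0.05):
    """Pool part proposals from the three description modules and test them."""
    pooled = set(volume_parts) | set(surface_parts) | set(boundary_parts)
    # Evaluation step of the control module: keep hypotheses whose supporting
    # primitive fits well enough; the rest go back to the descriptors.
    accepted = {h for h in pooled if h.fit_error <= max_error}
    return accepted, pooled - accepted

# Toy usage: two modules agree on part 0 and disagree elsewhere.
vol = [PartHypothesis(0, 0.02), PartHypothesis(1, 0.20)]
surf = [PartHypothesis(0, 0.02), PartHypothesis(2, 0.04)]
accepted, rejected = integrate(vol, surf, [])
print(sorted(h.part_id for h in accepted))  # [0, 2]
print(sorted(h.part_id for h in rejected))  # [1]
```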

    Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View

    We propose a method for predicting the 3D shape of a deformable surface from a single view. In contrast with previous approaches, we do not need a pre-registered template of the surface, and our method is robust to the lack of texture and to partial occlusions. At the core of our approach is a geometry-aware deep architecture that tackles the problem as is usually done in analytic solutions: first perform 2D detection of the mesh and then estimate a 3D shape that is geometrically consistent with the image. We train this architecture in an end-to-end manner using a large dataset of synthetic renderings of shapes under different levels of deformation, material properties, textures, and lighting conditions. We evaluate our approach on a test split of this dataset and on available real benchmarks, consistently improving on state-of-the-art solutions with a significantly lower computational time. Comment: Accepted at CVPR 201
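    The detect-then-lift idea can be sketched as a tiny two-stage network: one head regresses 2D vertex locations of the mesh, and a second head predicts per-vertex depth conditioned on those detections, so the 3D output reprojects onto the detected 2D points. This is only an illustrative PyTorch sketch; the layer sizes, vertex count, and heads are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoStageShapeNet(nn.Module):
    """Illustrative detect-then-lift network (placeholder sizes, not the paper's)."""
    def __init__(self, num_vertices=81, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                      # shared image features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.detect_2d = nn.Linear(feat_dim, num_vertices * 2)                  # stage 1: (u, v) per vertex
        self.lift_depth = nn.Linear(feat_dim + num_vertices * 2, num_vertices)  # stage 2: per-vertex depth

    def forward(self, image):
        feat = self.backbone(image)
        uv_flat = self.detect_2d(feat)                           # 2D mesh detection
        z = self.lift_depth(torch.cat([feat, uv_flat], dim=1))   # depth conditioned on the 2D detections
        uv = uv_flat.view(uv_flat.shape[0], -1, 2)
        xyz = torch.cat([uv, z.unsqueeze(-1)], dim=-1)           # 3D shape consistent with the detections
        return uv, xyz

net = TwoStageShapeNet()
uv, xyz = net(torch.randn(1, 3, 128, 128))
print(uv.shape, xyz.shape)  # torch.Size([1, 81, 2]) torch.Size([1, 81, 3])
```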

    A Method of 3D Object Reconstruction by Fusing Vision with Touch Using Internal Models with Global and Local Deformations

    Proceedings of the IEEE International Conference on Robotics and Automation

    Surface and Volumetric Segmentation of Complex 3-D Objects Using Parametric Shape Models

    The problem of part definition, description, and decomposition is central to shape recognition systems. In this dissertation, we develop an integrated framework for segmenting dense range data of complex 3-D scenes into their constituent parts in terms of surface and volumetric primitives. Unlike previous approaches, we use geometric properties derived from surface as well as volumetric models to recover structured descriptions of complex objects without a priori domain knowledge or stored models. To recover shape descriptions, we use bi-quadric models for surface representation and superquadric models for object-centered volumetric representation. The surface segmentation uses a novel approach of searching for the best piecewise description of the image in terms of bi-quadric (z = f(x,y)) models. It is used to generate region adjacency graphs, to localize surface discontinuities, and to derive global shape properties of the surfaces. A superquadric model is recovered for the entire data set, and residuals are computed to evaluate the fit. The goodness-of-fit value based on the inside-outside function and the mean-squared distance of the data from the model provide quantitative evaluation of the model. The qualitative evaluation criteria check the local consistency of the model in the form of residual maps of overestimated and underestimated data regions. The control structure invokes the models in a systematic manner, evaluates the intermediate descriptions, and integrates them to achieve the final segmentation. Superquadric and bi-quadric models are recovered in parallel to incorporate the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The model evaluation criteria determine the dimensionality of the scene and decide whether to terminate the procedure or to selectively refine the segmentation by following a global-to-local part segmentation approach. The control module generates hypotheses about superquadric models at clusters of underestimated data and performs controlled extrapolation of the part model by shrinking the global model. As the global model shrinks and the local models grow, they are evaluated and tested for termination or further segmentation. We present results on real range images of scenes of varying complexity, including objects with occluding parts and scenes where surface segmentation alone is not sufficient to guide the volumetric segmentation. We analyze the issue of segmentation of complex scenes thoroughly by studying the effect of missing data on volumetric model recovery, generating object-centered descriptions, and presenting a complete set of criteria for the evaluation of the superquadric models. We conclude by discussing the applications of our approach in data reduction, 3-D object recognition, geometric modeling, automatic model generation, object manipulation, and active vision.
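    The surface primitive used above, a bi-quadric of the form z = f(x, y), can be recovered by linear least squares, with the residual serving the same role as the quantitative fit-evaluation measures the abstract mentions. The sketch below is a generic illustration under that standard formulation, not the dissertation's segmentation code.

```python
import numpy as np

def fit_biquadric(x, y, z):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f."""
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    rms = np.sqrt(np.mean((z - A @ coeffs) ** 2))   # residual used to judge the fit
    return coeffs, rms

# Toy usage: noisy samples from a paraboloid patch.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 0.5 * x**2 + 0.2 * y**2 + 0.1 * rng.normal(size=200)
coeffs, rms = fit_biquadric(x, y, z)
print(coeffs.round(2), round(rms, 3))
```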

    Range Image Segmentation for 3-D Object Recognition

    Three-dimensional scene analysis in an unconstrained and uncontrolled environment is the ultimate goal of computer vision. Explicit depth information about the scene is of tremendous help in the segmentation and recognition of objects. Range image interpretation with a view to obtaining low-level features to guide mid-level and high-level segmentation and recognition processes is described. No assumptions about the scene are made, and the algorithms are applicable to any general single-viewpoint range image. Low-level features such as step edges and surface characteristics are extracted from the images, and segmentation is performed based on individual features as well as on combinations of features. A high-level recognition process based on superquadric fitting is described to demonstrate the usefulness of the initial edge-based segmentation. A classification algorithm based on surface curvatures is used to obtain an initial segmentation of the scene. Objects segmented using edge information are then classified using surface curvatures. Various applications of surface curvatures in mid- and high-level recognition processes are discussed, including surface reconstruction, segmentation into convex patches, and detection of smooth edges. The algorithms are run on real range images, and the results are discussed in detail.
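    Curvature-based classification of range pixels is typically done from the signs of the mean curvature H and Gaussian curvature K. The sketch below uses the common eight-class sign table (under one common normal-orientation convention); the abstract does not spell out its exact rules, so this is a generic illustration rather than the paper's algorithm.

```python
def hk_label(H, K, eps=1e-3):
    """Classify a surface point from mean curvature H and Gaussian curvature K."""
    hs = 0 if abs(H) < eps else (1 if H > 0 else -1)   # sign of H, with a flat band
    ks = 0 if abs(K) < eps else (1 if K > 0 else -1)   # sign of K
    table = {(-1, 1): "peak", (-1, 0): "ridge", (-1, -1): "saddle ridge",
             (0, 0): "flat", (0, -1): "minimal",
             (1, 1): "pit", (1, 0): "valley", (1, -1): "saddle valley"}
    return table.get((hs, ks), "undefined")            # e.g. (0, +1) cannot occur

print(hk_label(-0.2, 0.05))   # "peak": convex in both principal directions
print(hk_label(0.0, -0.04))   # "minimal": saddle-like with zero mean curvature
```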

    Implicit meshes: unifying implicit and explicit surface representations for 3D reconstruction and tracking

    This thesis proposes novel ways both to represent static surfaces and to parameterize their deformations. These can be used both by automated algorithms for efficient 3-D shape reconstruction and by graphics designers for editing and animation. Deformable 3-D models can be represented either as traditional explicit surfaces, such as triangulated meshes, or as implicit surfaces. Explicit surfaces are widely accepted because they are simple to deform and render; however, fitting them involves minimizing a non-differentiable distance function. By contrast, implicit surfaces allow fitting by minimizing a differentiable algebraic distance, but they are harder to meaningfully deform and render. Here we propose a method that combines the strengths of both representations to avoid their drawbacks, and in this way build a robust surface representation, called an implicit mesh, suitable for automated shape recovery from video sequences. This surface representation lets us automatically detect and exploit silhouette constraints in uncontrolled environments that may involve occlusions and changing or cluttered backgrounds, which limit the applicability of most silhouette-based methods. We advocate the use of Dirichlet Free Form Deformation (DFFD) as a generic surface deformation technique that can be used to parameterize objects of arbitrary geometry defined as explicit meshes. It is based on a small set of control points and a generalized interpolant. The control points become model parameters, and changing them modifies the model's shape. Using such a parameterization, the dimensionality of the problem can be dramatically reduced, which is a desirable property for most optimization algorithms and thus makes DFFD a good tool for automated fitting. Combining DFFD as a generic parameterization method for explicit surfaces with implicit meshes as a generic surface representation, we obtain a powerful tool for automated shape recovery from images. However, we also argue that any other available surface parameterization can be used. We demonstrate the applicability of our technique to 3-D reconstruction of the human upper body, including the face, neck, and shoulders, and of the human ear, from noisy stereo and silhouette data. We also reconstruct the shape of high-resolution human faces, parameterized in terms of a Principal Component Analysis model, from interest points and automatically detected silhouettes. Tracking of deformable objects using implicit meshes from silhouettes and interest points in monocular sequences is shown in the following two examples: modeling the deformations of a piece of paper represented by an ordinary triangulated mesh, and tracking a person's shoulders whose deformations are expressed in terms of Dirichlet Free Form Deformations.
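    The control-point parameterization can be pictured with a toy deformation sketch: each mesh vertex is tied to a small set of control points through fixed blending weights, so displacing the control points deforms the whole mesh, and only the control points need to be optimized. DFFD itself uses Sibson (natural-neighbor) coordinates as the generalized interpolant; the inverse-distance weights below are only a simple stand-in to show the mechanism, not the thesis's implementation.

```python
import numpy as np

def precompute_weights(vertices, controls, power=2.0, eps=1e-9):
    """Fixed (num_vertices x num_controls) blending weights, rows summing to 1."""
    d = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)              # inverse-distance stand-in for the interpolant
    return w / w.sum(axis=1, keepdims=True)

def deform(vertices, weights, controls, displaced_controls):
    """Move each vertex by the blended displacement of the control points."""
    return vertices + weights @ (displaced_controls - controls)

# Toy usage: four control points around two vertices; raise one control point.
controls = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
verts = np.array([[0.5, 0.5, 0.0], [0.25, 0.25, 0.0]])
W = precompute_weights(verts, controls)
moved = controls.copy()
moved[3, 2] += 0.5                            # displace a control point along z
print(deform(verts, W, controls, moved))      # vertices follow the control point
```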