
    Doctor of Philosophy

    Get PDF
    The medial axis of an object is a shape descriptor that intuitively presents the morphology or structure of the object as well as the intrinsic geometric properties of its shape. These properties have made the medial axis a vital ingredient in shape analysis applications, and its computation is therefore a fundamental problem in computational geometry. This dissertation presents new methods for accurately computing the 2D medial axis of planar objects bounded by B-spline curves, and the 3D medial axis of objects bounded by B-spline surfaces. The proposed methods for the 3D case are the first techniques that automatically compute the complete medial axis along with its topological structure directly from smooth boundary representations. Our approach is based on the eikonal (grassfire) flow, in which the boundary is offset along the inward normal direction. As the boundary deforms, different regions start intersecting with each other to create the medial axis. In the generic situation, the (self-)intersection set is born at certain creation-type transition points, then grows and undergoes intermediate transitions at special isolated points, and finally ends at annihilation-type transition points. The intersection set evolves smoothly between transition points. Our approach first computes and classifies all types of transition points. The medial axis is then computed as a time trace of the evolving intersection set of the boundary using theoretically derived evolution vector fields. This dynamic approach enables accurate tracking of elements of the medial axis as they evolve, and thus also enables computation of the topological structure of the solution. Accurate computation of the geometry and topology of 3D medial axes enables a new graph-theoretic method for shape analysis of objects represented with B-spline surfaces. Structural components are computed via the cycle basis of the graph representing the 1-complex of a 3D medial axis.
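The grassfire intuition above can be illustrated with a brute-force stand-in (not the dissertation's B-spline tracking method; the function name and thresholds are invented for illustration): a point lies near the medial axis when its two nearest boundary samples are nearly equidistant yet far apart, i.e., two inward-moving fronts collide there.

```python
def medial_axis_cells(boundary, grid_pts, sep=2.0, eq_tol=0.3):
    """Mark query points whose two nearest boundary samples are nearly
    equidistant yet well separated -- a discrete proxy for the medial
    axis, where the inward 'grassfire' fronts first meet."""
    axis = []
    for g in grid_pts:
        # all boundary samples ranked by distance to g
        ranked = sorted(
            (((g[0] - b[0])**2 + (g[1] - b[1])**2) ** 0.5, b)
            for b in boundary)
        (d0, b0), (d1, b1) = ranked[0], ranked[1]
        # two nearest boundary points roughly equidistant ...
        if d1 - d0 < eq_tol:
            # ... and far apart on the boundary: two fronts collide here
            if (b0[0] - b1[0])**2 + (b0[1] - b1[1])**2 > sep**2:
                axis.append(g)
    return axis
```

For a long rectangular slab, the midline point is flagged (equidistant from the two long sides) while a point hugging one side is not.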
This enables medial-axis-based surface segmentation, and structure-based surface region selection and modification. We also present a new approach for structural analysis of 3D objects based on scalar functions defined on their surfaces. This approach is enabled by accurate computation of the geometry and structure of 2D medial axes of level sets of the scalar functions. Edge curves of the 3D medial axis correspond to a subset of ridges on the bounding surfaces. Ridges are extremal curves of principal curvatures on a surface that indicate salient intrinsic features of its shape, and hence are of particular interest as tools for shape analysis. This dissertation presents a new algorithm for accurately extracting all ridges directly from B-spline surfaces. The proposed technique is also extended to accurately extract ridges from isosurfaces of volumetric data using smooth implicit B-spline representations. Accurate ridge curves enable new higher-order methods for surface analysis. We present a new definition of salient regions in order to capture geometrically significant surface regions in the neighborhood of ridges as well as to identify salient segments of ridges.
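The cycle-basis computation on the 1-complex graph can be sketched with the standard spanning-tree construction (a generic textbook method, not the dissertation's pipeline; the toy graph below is illustrative): each non-tree edge closes exactly one fundamental cycle.

```python
from collections import defaultdict

def fundamental_cycle_basis(edges):
    """Cycle basis of an undirected graph: build a DFS spanning
    forest, then every non-tree edge closes exactly one cycle."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, seen, tree = {}, set(), set()
    for root in list(adj):            # iterative DFS per component
        if root in seen:
            continue
        seen.add(root)
        stack = [root]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    parent[v] = u
                    tree.add(frozenset((u, v)))
                    stack.append(v)

    def path_to_root(x):
        path = [x]
        while path[-1] in parent:
            path.append(parent[path[-1]])
        return path

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) in tree:
            continue                  # tree edge: closes no cycle
        pu, pv = path_to_root(u), path_to_root(v)
        lca = next(x for x in pu if x in set(pv))  # lowest common ancestor
        cycles.append(pu[:pu.index(lca) + 1] + pv[:pv.index(lca)][::-1])
    return cycles
```

A theta-shaped graph (a square plus one diagonal) yields the expected two independent cycles, matching the count |E| - |V| + (number of components).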

    Enhanced 3D Capture for Room-sized Dynamic Scenes with Commodity Depth Cameras

    Get PDF
    3D reconstruction of dynamic scenes has many applications in areas such as virtual/augmented reality, 3D telepresence and 3D animation, but achieving a complete and high-quality reconstruction is challenging due to sensor noise and occlusions in the scene. This dissertation demonstrates our efforts toward building a 3D capture system for room-sized dynamic environments. A key observation is that reconstruction insufficiency (e.g., incompleteness and noise) can be mitigated by accumulating data from multiple frames. In dynamic environments, dropouts in 3D reconstruction generally do not consistently appear in the same locations, so accumulating the captured 3D data over time can fill in the missing fragments; reconstruction noise is reduced as well. The first piece of the system builds 3D models for room-scale static scenes with one hand-held depth sensor, using plane features, in addition to image salient points, for robust pairwise matching and bundle adjustment over the whole data sequence. In the second piece of the system, we design a robust non-rigid matching algorithm that considers both dense point alignment and color similarity, so that the data sequence for a continuously deforming object captured by multiple depth sensors can be aligned and fused into a high-quality 3D model. We further extend this work to deformable object scanning with a single depth sensor. To deal with the drift problem, we design a dense non-rigid bundle adjustment algorithm that simultaneously optimizes the final mesh and the deformation parameters of every frame. Finally, we integrate static scanning and non-rigid matching into a reconstruction system for room-sized dynamic environments, in which we pre-scan the static parts of the scene and perform data accumulation for the dynamic parts. Both rigid and non-rigid motions of objects are tracked in a unified framework, and close contacts between objects are also handled.
The dissertation demonstrates significant improvements in dense reconstruction over the state of the art. Our plane-based scanning system for indoor environments delivers reliable reconstruction in challenging situations, such as a lack of both visual and geometric salient features. Our non-rigid alignment algorithm enables data fusion for deforming objects and thus achieves dramatically enhanced reconstruction. Our novel bundle adjustment algorithm handles dense input partial scans with non-rigid motion and outputs dense reconstruction of quality comparable to static scanning algorithms (e.g., KinectFusion). Finally, we demonstrate enhanced reconstruction results for room-sized dynamic environments by integrating the above techniques, which significantly advances the state of the art.
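The data-accumulation observation can be illustrated with a minimal per-pixel running average (a toy stand-in that ignores alignment, in the spirit of KinectFusion-style weighted averaging; names are invented): dropouts in one frame are filled by other frames, and noise averages down.

```python
def accumulate_depth(frames, invalid=0.0):
    """Fuse per-pixel depth over frames with an incremental mean;
    pixels marked invalid (dropouts) in one frame are filled by
    valid observations from other frames."""
    n = len(frames[0])
    fused = [0.0] * n
    weight = [0] * n
    for frame in frames:
        for i, d in enumerate(frame):
            if d == invalid:                 # sensor dropout: skip
                continue
            weight[i] += 1
            # incremental mean: one accumulator per pixel
            fused[i] += (d - fused[i]) / weight[i]
    return [f if w else invalid for f, w in zip(fused, weight)]
```

With three noisy frames, each containing a dropout in a different location, every pixel ends up with the mean of its valid observations.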

    Mandoline: robust cut-cell generation for arbitrary triangle meshes

    Get PDF
    Although geometry arising "in the wild" most often comes in the form of a surface representation, a plethora of geometrical and physical applications require the construction of volumetric embeddings either of the geometry itself or the domain surrounding it. Cartesian cut-cell-based mesh generation provides an attractive solution in which volumetric elements are constructed from the intersection of the input surface geometry with a uniform or adaptive hexahedral grid. This choice, especially common in computational fluid dynamics, has the potential to efficiently generate accurate, surface-conforming cells; unfortunately, current solutions are often slow, fragile, or cannot handle many common topological situations. We therefore propose a novel, robust cut-cell construction technique for triangle surface meshes that explicitly computes the precise geometry of the intersection cells, even on meshes that are open or non-manifold. Its fundamental geometric primitive is the intersection of an arbitrary segment with an axis-aligned plane. Beginning from the set of intersection points between triangle mesh edges and grid planes, our bottom-up approach robustly determines cut-edges, cut-faces, and finally cut-cells, in a manner designed to guarantee topological correctness. We demonstrate its effectiveness and speed on a wide range of input meshes and grid resolutions, and make the code available as open source. This work is graciously supported by NSERC Discovery Grants (RGPIN-04360-2014 & RGPIN-2017-05524), an NSERC Accelerator Grant (RGPAS-2017-507909), the Connaught Fund (503114), and the Canada Research Chairs Program.
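The fundamental primitive named above, intersecting an arbitrary segment with an axis-aligned plane, can be sketched as follows. Note that Mandoline is engineered for robustness; this plain floating-point version (names and tolerance are illustrative) only conveys the geometry.

```python
def segment_axis_plane(p, q, axis, value, eps=1e-12):
    """Intersect segment p->q with the plane x[axis] == value.
    Returns the intersection point, or None when the segment does not
    cross the plane (including the degenerate parallel case)."""
    a, b = p[axis], q[axis]
    if abs(b - a) < eps:              # segment parallel to the plane
        return None
    t = (value - a) / (b - a)         # parameter along the segment
    if not (0.0 <= t <= 1.0):         # crossing lies outside p->q
        return None
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))
```

A cut-cell pipeline in this spirit would evaluate this primitive for every mesh edge against every grid plane, then assemble cut-edges, cut-faces, and cut-cells bottom-up from the resulting points.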

    Differential operators on sketches via alpha contours

    Full text link
    A vector sketch is a popular and natural geometry representation depicting a 2D shape. When viewed from afar, the disconnected vector strokes of a sketch and the empty space around them visually merge into positive space and negative space, respectively. Positive and negative spaces are the key elements in the composition of a sketch and define what we perceive as the shape. Nevertheless, the notion of positive or negative space is mathematically ambiguous: while the strokes unambiguously indicate the interior or boundary of a 2D shape, the empty space may or may not belong to the shape’s exterior. For standard discrete geometry representations, such as meshes or point clouds, some of the most robust pipelines rely on discretizations of differential operators, such as Laplace-Beltrami. Such discretizations are not available for vector sketches; defining them may enable numerous applications of classical methods on vector sketches. However, to do so, one needs to define the positive space of a vector sketch, or the sketch shape. Even though extracting this 2D sketch shape is mathematically ambiguous, we propose a robust algorithm, Alpha Contours, that constructs a conservative estimate of it: a 2D shape that contains all the input strokes, in its interior or on its boundary, and aligns tightly to the sketch. This allows us to define popular differential operators on vector sketches, such as the Laplacian and Steklov operators. We demonstrate that our construction enables robust tools for vector sketches, such as As-Rigid-As-Possible sketch deformation and functional maps between sketches, as well as solving partial differential equations on a vector sketch.
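As a simplified stand-in for the operators the paper defines (Alpha Contours discretizes Laplace-Beltrami on the 2D sketch region, not per stroke), here is a uniform Laplacian on a single polyline stroke and the explicit smoothing flow it induces; all names and parameters are illustrative.

```python
def polyline_laplacian(points, closed=False):
    """Uniform Laplacian on a polyline stroke:
    L(p_i) = (p_{i-1} + p_{i+1}) / 2 - p_i.
    Endpoints of an open stroke get a zero vector."""
    n = len(points)
    lap = []
    for i in range(n):
        if not closed and (i == 0 or i == n - 1):
            lap.append((0.0, 0.0))
            continue
        (ax, ay), (px, py), (bx, by) = (
            points[(i - 1) % n], points[i], points[(i + 1) % n])
        lap.append(((ax + bx) / 2 - px, (ay + by) / 2 - py))
    return lap

def smooth(points, steps=10, lam=0.5, closed=False):
    """Explicit Laplacian smoothing flow: p <- p + lam * L(p)."""
    pts = list(points)
    for _ in range(steps):
        pts = [(x + lam * lx, y + lam * ly)
               for (x, y), (lx, ly)
               in zip(pts, polyline_laplacian(pts, closed))]
    return pts
```

A straight stroke is a fixed point of the flow, and at a corner of a closed square the Laplacian points along the inward diagonal, as expected for a curvature-like operator.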

    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    Get PDF
    This thesis presents a computational framework for representing, manipulating and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modeling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface or a point, allowing the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize the B-spline surface are independent of the positions of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the patch-merging algorithm presented in this thesis. Finally, the physics-based modeling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis in commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices used to generate the points for graphical rendering, collision detection, merging of B-spline patches, and the nodes for the mass-spring system. This approach reduces computational time by avoiding the need to repeatedly evaluate the B-spline blending functions and to invert large matrices. This alternative approach to mechanical concept design also helps to do away with the need to build prototypes for conceptualization and preliminary validation of an idea, thereby reducing the time, cost, and resource waste of the concept design phase.
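The pre-calculated blending-matrix idea can be sketched for a B-spline curve (the thesis works with surfaces; the tiny clamped-quadratic example below is illustrative): the Cox-de Boor weights depend only on the knot vector and the sample parameters, so they are computed once and reused every frame as the control points move or deform.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th degree-p basis function.
    Uses half-open intervals, so t must lie inside the knot domain."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, t, knots)
    return out

def blending_matrix(params, n_ctrl, p, knots):
    """Rows of basis weights at fixed parameter values: independent of
    control-point positions, so precompute once and reuse per frame."""
    return [[bspline_basis(i, p, t, knots) for i in range(n_ctrl)]
            for t in params]

def evaluate(B, ctrl):
    """Curve samples as the product of precomputed weights with the
    (possibly updated) control points."""
    return [tuple(sum(w * c[d] for w, c in zip(row, ctrl))
                  for d in range(len(ctrl[0])))
            for row in B]
```

On the clamped knot vector [0,0,0,1,1,1] the degree-2 basis reduces to Bernstein polynomials, giving weights (0.25, 0.5, 0.25) at t = 0.5; only `evaluate` needs to rerun when the control points change.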

    Non-Rigid Structure from Motion

    Get PDF
    This thesis revisits a challenging classical problem in geometric computer vision known as "Non-Rigid Structure-from-Motion" (NRSfM): the task is to recover the 3D shape and motion of a non-rigidly moving object from image data. A reliable solution to this problem is valuable in several industrial applications such as virtual reality, medical surgery and animation movies. Nevertheless, to date, no algorithm can solve NRSfM for all kinds of conceivable motion; as a result, additional constraints and assumptions are often employed. The task is challenging due to the inherently unconstrained nature of the problem itself, as many 3D varying configurations can have similar image projections, and it becomes even more challenging if the camera is moving along with the object. The thesis takes a modern view of this challenging problem and proposes a few algorithms that have set a new performance benchmark for NRSfM. The thesis not only discusses the classical work in NRSfM but also proposes some powerful elementary modifications to it. The foundation of this thesis surpasses traditional single-object NRSfM and, for the first time, provides an effective formulation to realise multi-body NRSfM. Most techniques for NRSfM under factorisation can only handle sparse feature correspondences. These sparse features are then used to construct a scene using the organisation of points, lines, planes or other elementary geometric primitives. Nevertheless, a sparse representation of the scene provides incomplete information about it. This thesis goes from sparse NRSfM to dense NRSfM for a single object, and then gradually lifts the intuition to realise dense 3D reconstruction of the entire dynamic scene as a global as-rigid-as-possible deformation problem. The core of this work goes beyond the traditional approach to dealing with deformation.
It shows that relative scales for multiple deforming objects can be recovered under some mild assumptions about the scene. The work proposes a new approach for dense, detailed 3D reconstruction of a complex dynamic scene from two perspective frames. Since the method needs no depth information and assumes no template prior, per-object segmentation, or knowledge about the rigidity of the dynamic scene, it is applicable to a wide range of scenarios, including YouTube videos. Lastly, this thesis provides a new way to perceive the depth of a dynamic scene that essentially trivialises the notion of motion estimation as a compulsory step in solving this problem. Conventional geometric methods for depth estimation require a reliable estimate of the motion parameters of each moving object, which is difficult to obtain and validate. In contrast, this thesis introduces a new motion-free approach to estimate the dense depth map of a complex dynamic scene over successive/multiple frames. The work shows that, given per-pixel optical flow correspondences between two consecutive frames and a sparse depth prior for the reference frame, we can recover the dense depth map for the successive frames without solving for motion parameters. By assigning a locally rigid structure to a piece-wise planar approximation of the dynamic scene, which transforms as rigidly as possible over frames, we can bypass the motion estimation step. Experimental results and MATLAB code on relevant examples are provided to validate the motion-free idea.
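The piece-wise planar depth idea can be illustrated for a single planar region (a toy sketch, not the thesis's algorithm; all names are invented): for a 3D plane seen under perspective projection, inverse depth is an affine function of the normalized pixel coordinates, so three depth samples on the region determine the depth at every other pixel of that region, with no motion parameters involved.

```python
def plane_inverse_depth(samples):
    """Fit 1/Z = a*u + b*v + c from three (u, v, Z) samples lying on
    one planar region; inverse depth of a 3D plane is affine in the
    normalized pixel coordinates (u, v)."""
    (u1, v1, z1), (u2, v2, z2), (u3, v3, z3) = samples
    # 3x3 system M @ (a, b, c) = w, solved with Cramer's rule
    M = [[u1, v1, 1.0], [u2, v2, 1.0], [u3, v3, 1.0]]
    w = [1.0 / z1, 1.0 / z2, 1.0 / z3]

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det(M)

    def solve_col(j):
        mm = [row[:] for row in M]
        for i in range(3):
            mm[i][j] = w[i]          # replace column j by the RHS
        return det(mm) / D

    return solve_col(0), solve_col(1), solve_col(2)

def depth_at(coeffs, u, v):
    a, b, c = coeffs
    return 1.0 / (a * u + b * v + c)
```

Given three samples generated from the plane 1/Z = 0.1u + 0.2v + 0.5, the fit predicts the depth at any other pixel of the region, e.g. Z = 1.25 at (u, v) = (1, 1).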

    Toward Controllable and Robust Surface Reconstruction from Spatial Curves

    Get PDF
    Reconstructing a surface from a set of spatial curves is a fundamental problem in computer graphics and computational geometry. It arises in many applications across various disciplines, such as industrial prototyping, artistic design and biomedical imaging. While the problem has been widely studied for years, challenges remain in handling different types of curve input while satisfying various constraints. We study three related computational tasks in this thesis. First, we propose an algorithm for reconstructing multi-labeled material interfaces from cross-sectional curves that allows for explicit topology control. Second, we address consistency restoration, a critical but overlooked problem in applying surface reconstruction algorithms to real-world cross-section data. Lastly, we propose the Variational Implicit Point Set Surface, which allows us to robustly handle noisy, sparse and non-uniform inputs, such as samples from spatial curves.
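As a hedged illustration of implicit surfaces from sparse oriented samples (the actual Variational Implicit Point Set Surface solves a global variational problem; the simpler local implicit moving least squares below is only a 2D baseline stand-in, with invented names):

```python
from math import exp

def imls(x, y, points, normals, h=1.0):
    """Implicit moving least squares field from oriented 2D points:
    f(q) = sum_i w_i * n_i . (q - p_i) / sum_i w_i,
    with Gaussian weights w_i = exp(-|q - p_i|^2 / h^2).
    The zero level set approximates the sampled curve; the sign
    tells which side of the curve q lies on."""
    num = den = 0.0
    for (px, py), (nx, ny) in zip(points, normals):
        d2 = (x - px) ** 2 + (y - py) ** 2
        w = exp(-d2 / h ** 2)
        num += w * (nx * (x - px) + ny * (y - py))
        den += w
    return num / den
```

For samples on the x-axis with upward normals, the field is positive above the axis, negative below, and zero on it, so a contouring step could recover the curve from the scattered samples.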

    Probabilistic modeling of texture transition for fast tracking and delineation

    Get PDF
    In this thesis, a probabilistic approach to texture boundary detection for tracking applications is presented. We have developed a novel fast algorithm for Bayesian estimation of texture transition locations from a short sequence of pixels on a scanline, combining the desirable speed of edge-based line search with the sophistication of Bayesian texture analysis given a small set of observations. For cases where the given observations are too few for reliable Bayesian estimation of the probability of a texture change, we propose an innovative machine learning technique to generate a probabilistic texture transition model. This is achieved by considering a training dataset containing small patches of blending textures. By including in the training set enough examples to accurately model the texture transitions of interest, we can construct a predictor for object boundary tracking that can deal with few observations and with demanding cases of tracking arbitrary textured objects against cluttered backgrounds. Object outlines are then obtained by combining the texture crossing probabilities across a set of scanlines. We show that a rigid geometric model of the object to be tracked, or smoothness constraints in the absence of such a model, can be used to coalesce the scanline texture crossing probabilities obtained using the methods mentioned above. We propose a Hidden Markov Model to robustly aggregate the sparse transition probabilities of scanlines sampled along the projected hypothesis model contour. As a result, continuous object contours can be extracted by a posteriori maximization of texture transition probabilities. Stronger geometric constraints, such as available rigid models of the target, are instead enforced directly by robust stochastic optimization.
In addition to being fast, the appeal of the proposed probabilistic framework is that it provides a unified infrastructure for tracking heterogeneous objects, using the machine-learning-based predictor and the Bayesian estimator interchangeably, in conjunction with robust optimization, to extract object contours robustly. We apply the developed methods to tracking of textured and non-textured rigid objects as well as deformable body outlines and monocular articulated human motion in challenging conditions. Finally, because it is fast, our method can also serve as an interactive texture segmentation tool.
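The HMM aggregation step can be illustrated with a two-state toy model (states, probabilities and observations below are invented for illustration, not taken from the thesis): per-pixel texture evidence along a scanline is decoded with standard Viterbi, so a single coherent texture crossing emerges even from noisy evidence.

```python
from math import log

def viterbi(obs, states, start, trans, emit):
    """Standard Viterbi decoding in log space: most likely state
    sequence given observations, transition and emission models."""
    V = [{s: log(start[s]) + log(emit[s](obs[0])) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # best predecessor for state s at this step
            prev = max(states, key=lambda r: V[-1][r] + log(trans[r][s]))
            col[s] = V[-1][prev] + log(trans[prev][s]) + log(emit[s](o))
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # backtrack from the best final state
    s = max(states, key=lambda r: V[-1][r])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]
```

With sticky transitions (staying in a state is much more likely than switching), noisy per-pixel foreground probabilities along a scanline decode into a single background-to-foreground crossing, which is the texture transition location.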