1,722 research outputs found

    Towards recovery of complex shapes in meshes using digital images for reverse engineering applications

    When an object has complex shapes, or when its outer surfaces are simply inaccessible, some of its parts may not be captured during its reverse engineering. These deficiencies in the point cloud result in a set of holes in the reconstructed mesh. This paper deals with the use of information extracted from digital images to recover missing areas of a physical object. The proposed algorithm fills in these holes by solving an optimization problem that combines two kinds of information: (1) the geometric information available around the holes, and (2) the information contained in an image of the real object. The constraints come from the image irradiance equation, a first-order non-linear partial differential equation that links the position of the mesh vertices to the light intensity of the image pixels. The blending conditions are satisfied by using an objective function based on a mechanical bar-network model that simulates the curvature evolution over the mesh. The shortcomings inherent both in current hole-filling algorithms and in the resolution of the image irradiance equation are overcome
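    For context, the image irradiance equation mentioned above is, in its classical shape-from-shading form, the relation below. The paper applies an analogous constraint directly to mesh vertex positions rather than to a depth map, so this is only the standard textbook statement, not the paper's exact formulation.

```latex
% Classical image irradiance (shape-from-shading) equation for a depth map
% z(x, y): the observed intensity E equals the reflectance map R evaluated at
% the surface gradient (p, q) = (z_x, z_y). For a Lambertian surface lit from
% direction (p_s, q_s):
E(x, y) = R\bigl(p(x, y),\, q(x, y)\bigr),
\qquad
R(p, q) = \frac{1 + p\,p_s + q\,q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}}
```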

    A framework for hull form reverse engineering and geometry integration into numerical simulations

    The thesis presents a ship hull form specific reverse engineering and CAD integration framework. The reverse engineering part proposes three alternative reconstruction approaches, namely curve network, direct surface fitting, and triangulated surface reconstruction. The CAD integration part includes surface healing, region identification, and domain preparation strategies, which are used to adapt the CAD model to downstream application requirements. In general, the developed framework bridges a point cloud and a CAD model, obtained from IGES and STL files, into downstream applications
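    As a rough illustration of the "direct surface fitting" route (not the thesis's actual implementation), the sketch below fits a smoothing bicubic B-spline surface to scattered hull offset points with SciPy, under the simplifying assumption that the patch can locally be treated as a height field z(x, y); the function and parameter names are hypothetical.

```python
# Minimal direct-surface-fitting sketch: fit a smoothing bicubic B-spline to
# scattered points (x, y, z) and resample it on a regular grid. Assumes the
# patch is representable as a height field, which a full hull generally is not.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

def fit_patch(points, smoothing=0.01, res=50):
    """points: (n, 3) array of scattered surface samples."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    tck = bisplrep(x, y, z, s=smoothing * len(points))   # spline coefficients
    xg = np.linspace(x.min(), x.max(), res)
    yg = np.linspace(y.min(), y.max(), res)
    return xg, yg, bisplev(xg, yg, tck)                  # (res, res) heights
```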

    Deformable meshes for shape recovery: models and applications

    With the advance of scanning and imaging technology, more and more 3D objects become available. Among them, deformable objects have gained increasing interest. They include medical instances such as organs, sequences of objects in motion, and objects of similar shapes between which a meaningful correspondence can be established. This creates a need for tools to store, compare, and retrieve them. Many of these operations depend on successful shape recovery. Shape recovery is the task of retrieving an object's shape from an environment in which its geometry is hidden or only implicitly known. As a simple and versatile tool, the mesh is widely used in computer graphics for modelling and visualization. In particular, deformable meshes are meshes that can follow the deformation of deformable objects, extending the modelling ability of meshes. This dissertation focuses on using deformable meshes to approach the 3D shape recovery problem. Several models are presented to address the challenges of shape recovery under different circumstances. When the object is hidden in an image, a PDE deformable model is designed to extract its surface shape. The algorithm uses a mesh representation so that it can model any non-smooth surface with arbitrary precision compared to a parametric model, and it is more computationally efficient than a level-set approach. When the explicit geometry of the object is known but is hidden in a bank of shapes, we simplify the deformation of the model to a graph-matching procedure through a hierarchical surface abstraction approach. The framework is used for shape matching and retrieval. This idea is further extended to retain the explicit geometry during the abstraction. A novel motion abstraction framework for deformable meshes is devised based on clustering of local transformations and is successfully applied to 3D motion compression
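    To make the last idea concrete, the following is a minimal sketch, assuming a simplified setup, of clustering local transformations between two frames of a mesh with shared connectivity: a per-vertex affine matrix is estimated from the one-ring edges by least squares and the flattened matrices are grouped with k-means. The dissertation's actual motion abstraction framework is not reproduced here, and the helper names are hypothetical.

```python
# Illustrative sketch: cluster per-vertex local deformations between two mesh
# frames, in the spirit of motion abstraction by clustering local transforms.
import numpy as np
from sklearn.cluster import KMeans

def local_transforms(verts0, verts1, neighbors):
    """Estimate a 3x3 affine matrix per vertex from its one-ring neighborhood."""
    feats = []
    for i, nbrs in enumerate(neighbors):
        E0 = verts0[nbrs] - verts0[i]          # (k, 3) rest-pose edge vectors
        E1 = verts1[nbrs] - verts1[i]          # (k, 3) deformed edge vectors
        # Least-squares A with E0 @ A ~= E1, i.e. A maps rest edges to deformed edges.
        A, *_ = np.linalg.lstsq(E0, E1, rcond=None)
        feats.append(A.ravel())
    return np.asarray(feats)                   # (n_verts, 9) feature vectors

def cluster_motion(verts0, verts1, neighbors, n_clusters=8):
    feats = local_transforms(verts0, verts1, neighbors)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```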

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes, as well as a lack of portability, limiting its application to lab experiments. In this thesis, I try to produce 3D content using a single camera, making it as simple as shooting pictures. This requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration into 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling work. In summary, this thesis develops new algorithms to produce 3D content from a single camera: depending on the input data, the algorithms can build high-fidelity 3D models of dynamic and deformable objects if depth maps are provided; otherwise, they can turn video clips into stereoscopic video
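    The core idea behind light fall-off stereo is that a point light's irradiance falls off as 1/r², so two exposures with the light moved back by a known baseline give an intensity ratio from which depth follows and from which albedo cancels. The sketch below shows this principle for an idealized co-located light and camera; it omits the calibration and geometric details of the actual LFS depth camera, and the function name and parameters are assumptions.

```python
# Hedged sketch of the light fall-off stereo (LFS) depth principle:
#   I_near = a / d**2,  I_far = a / (d + b)**2
#   => sqrt(I_near / I_far) = (d + b) / d  =>  d = b / (sqrt(ratio) - 1)
import numpy as np

def lfs_depth(img_near, img_far, baseline, eps=1e-6):
    """Per-pixel depth from two intensity images taken with the light moved
    back by `baseline` between shots (surface albedo cancels in the ratio)."""
    ratio = np.sqrt(np.maximum(img_near, eps) / np.maximum(img_far, eps))
    return baseline / np.maximum(ratio - 1.0, eps)
```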