1,621 research outputs found

    A Unified Surface Geometric Framework for Feature-Aware Denoising, Hole Filling and Context-Aware Completion

    Technologies for 3D data acquisition and 3D printing have developed enormously in the past few years and, consequently, the demand for 3D virtual twins of the original scanned objects has increased. In this context, feature-aware denoising, hole filling and context-aware completion are three essential (but far from trivial) tasks. In this work, they are integrated within a geometric framework and realized through a unified variational model aimed at recovering triangulated surfaces from scanned, damaged and possibly incomplete noisy observations. The underlying non-convex optimization problem incorporates two regularisation terms: a discrete approximation of the Willmore energy, forcing local sphericity and suited to the recovery of rounded features, and an approximation of the ℓ0 pseudo-norm penalty, favouring sparsity in the normal variation. The proposed numerical method solving the model is parameterization-free, avoids expensive implicit volume-based computations, and is based on the efficient use of the Alternating Direction Method of Multipliers. Experiments show how the proposed framework provides a robust and elegant solution suited for accurate restorations even in the presence of severe random noise and large damaged areas.
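    As a worked illustration of the kind of variational model described above (the abstract does not give the exact discretisation or weights, so the symbols below are illustrative rather than the authors' formulation), the recovery problem can be sketched as

        \min_{V}\; \tfrac{1}{2}\,\big\| M\,(V - V^{0}) \big\|_{2}^{2} \;+\; \alpha \sum_{i} H_{i}^{2}\, A_{i} \;+\; \beta\, \big\| \nabla N(V) \big\|_{0}

    where V are the unknown vertex positions, V^0 the noisy and possibly incomplete observations, M a mask selecting the observed vertices, the middle term a discrete Willmore energy (squared mean curvature H_i integrated over vertex areas A_i) promoting local sphericity, and the last term an ℓ0-type count of edges with non-zero normal variation promoting piecewise-smooth normals; ADMM-style variable splitting is the natural way to separate the non-convex sparsity term from the smooth ones.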

    Surface Modeling and Analysis Using Range Images: Smoothing, Registration, Integration, and Segmentation

    This dissertation presents a framework for 3D reconstruction and scene analysis using a set of range images. The motivation for developing this framework came from the need to reconstruct the surfaces of small mechanical parts in reverse engineering tasks, to build virtual environments of indoor and outdoor scenes, and to understand 3D images. The input of the framework is a set of range images of an object or a scene captured by range scanners. The output is a triangulated surface that can be segmented into meaningful parts. A textured surface can be reconstructed if color images are provided. The framework consists of surface smoothing, registration, integration, and segmentation. Surface smoothing eliminates the noise present in raw measurements from range scanners. This research proposes an area-decreasing flow that is theoretically identical to mean curvature flow. With area-decreasing flow there is no need to estimate curvature values, and an optimal step size of the flow can be obtained. Crease edges and sharp corners are preserved by an adaptive scheme. Surface registration aligns measurements from different viewpoints in a common coordinate system. This research proposes a new surface representation scheme named point fingerprint. Surfaces are registered by finding corresponding point pairs in an overlapping region based on fingerprint comparison. Surface integration merges registered surface patches into a whole surface. This research employs an implicit surface-based integration technique. The proposed algorithm can generate watertight models by space carving or by filling holes based on volumetric interpolation. Textures from different views are integrated inside a volumetric grid. Surface segmentation is useful for decomposing CAD models in reverse engineering tasks and helps object recognition in a 3D scene. This research proposes a watershed-based surface mesh segmentation approach. The new algorithm accurately segments the plateaus by geodesic erosion using the fast marching method. The performance of the framework is demonstrated using both synthetic and real-world data from different range scanners. The dissertation concludes by summarizing the development of the framework and suggesting future research topics.
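    The smoothing step lends itself to a short sketch. The dissertation's area-decreasing flow avoids explicit curvature estimation; the minimal stand-in below uses a uniform (umbrella) Laplacian instead of the area gradient and a fixed step size, and omits the adaptive feature-preserving scheme, so it only illustrates the general flow structure rather than the proposed algorithm.

        import numpy as np

        def smooth_mesh(vertices, neighbors, steps=10, lam=0.5):
            # vertices: (n, 3) array of vertex positions
            # neighbors: neighbors[i] is the list of 1-ring vertex indices of vertex i
            # Each step moves every vertex toward the centroid of its 1-ring,
            # a discrete surrogate for flowing along the negative surface-area
            # gradient (mean curvature flow).
            V = vertices.astype(float).copy()
            for _ in range(steps):
                update = np.zeros_like(V)
                for i, ring in enumerate(neighbors):
                    if ring:
                        update[i] = V[ring].mean(axis=0) - V[i]
                V += lam * update
            return V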

    Surface-guided computing to analyze subcellular morphology and membrane-associated signals in 3D

    Signal transduction and cell function are governed by the spatiotemporal organization of membrane-associated molecules. Despite significant advances in visualizing molecular distributions by 3D light microscopy, cell biologists still have limited quantitative understanding of the processes implicated in the regulation of molecular signals at the whole-cell scale. In particular, complex and transient cell surface morphologies challenge the complete sampling of cell geometry, membrane-associated molecular concentration and activity, and the computation of meaningful parameters such as the co-fluctuation between morphology and signals. Here, we introduce u-Unwrap3D, a framework to remap arbitrarily complex 3D cell surfaces and membrane-associated signals into equivalent lower-dimensional representations. The mappings are bidirectional, allowing image processing operations to be applied in the data representation best suited for the task and the results to then be presented in any of the other representations, including the original 3D cell surface. Leveraging this surface-guided computing paradigm, we track segmented surface motifs in 2D to quantify the recruitment of septin polymers by blebbing events; we quantify actin enrichment in peripheral ruffles; and we measure the speed of ruffle movement along topographically complex cell surfaces. Thus, u-Unwrap3D provides access to spatiotemporal analyses of cell biological parameters on unconstrained 3D surface geometries and signals.
    Comment: 49 pages, 10 figures
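    As a toy illustration of the unwrapping idea (not u-Unwrap3D's actual pipeline or API: the framework builds several intermediate representations, and the mapping of the surface onto the sphere is the hard step that is simply assumed here), a genus-0 surface already mapped to the unit sphere can be flattened to a 2D equirectangular map as follows.

        import numpy as np

        def sphere_to_equirect(points):
            # points: (n, 3) positions of surface vertices after mapping to the unit sphere
            # Returns (n, 2) longitude/latitude coordinates of an equirectangular map,
            # onto which membrane-associated signals sampled at the vertices can be rasterised.
            x, y, z = points.T
            lon = np.arctan2(y, x)               # [-pi, pi]
            lat = np.arcsin(np.clip(z, -1, 1))   # [-pi/2, pi/2]
            return np.stack([lon, lat], axis=1)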

    A survey on 3D CAD model quality assurance and testing

    A new taxonomy of issues related to CAD model quality is presented, which distinguishes between explicit and procedural models. For each type of model, morphologic, syntactic, and semantic errors are characterized. The taxonomy was validated successfully when used to classify quality testing tools, which are aimed at detecting and repairing data errors that may affect the simplification, interoperability, and reusability of CAD models. The study shows that low semantic level errors that hamper simplification are reasonably well covered in explicit representations, although many CAD quality testers are still unaffordable for small and medium enterprises, both in terms of cost and training time. Interoperability has been reasonably solved by standards like STEP AP203 and AP214, but model reusability is not feasible in explicit representations. Procedural representations are promising, as interactive modeling editors automatically prevent most morphologic errors derived from unsuitable modeling strategies. Interoperability problems between procedural representations are expected to decrease dramatically with STEP AP242. Higher semantic aspects of quality, such as assurance of design intent, however, are hardly supported by current CAD quality testers.
    This work was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund, through the ANNOTA project (Ref. TIN2013-46036-C3-1-R).
    González-Lluch, C.; Company, P.; Contero, M.; Camba, J.; Plumed, R. (2017). A survey on 3D CAD model quality assurance and testing. Computer-Aided Design, 83:64-79. https://doi.org/10.1016/j.cad.2016.10.003
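    The two axes of the taxonomy can be made concrete with a small, purely illustrative data sketch (the class and field names below are mine, not the paper's):

        from dataclasses import dataclass
        from enum import Enum

        class ModelKind(Enum):
            EXPLICIT = "explicit"        # boundary-representation / tessellated data
            PROCEDURAL = "procedural"    # feature- and history-based model

        class ErrorClass(Enum):
            MORPHOLOGIC = "morphologic"  # invalid geometry or topology (gaps, self-intersections)
            SYNTACTIC = "syntactic"      # malformed or non-standard file content
            SEMANTIC = "semantic"        # model fails to convey design intent

        @dataclass
        class QualityIssue:
            model_kind: ModelKind
            error_class: ErrorClass
            description: str
            affects: tuple  # e.g. ("simplification", "interoperability", "reusability")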

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is always a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes as well as a lack of portability, limiting its application to lab experiments. In this thesis, I try to produce 3D content using a single camera, making it as simple as shooting pictures. This requires a new front-end capturing device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences and achieves 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration into 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with the user's labeling work. In this thesis, I developed new algorithms to produce 3D content from a single camera. Depending on the input data, my algorithm can build high-fidelity 3D models of dynamic and deformable objects if depth maps are provided; otherwise, it can turn video clips into stereoscopic videos.
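    The light fall-off stereo principle admits a compact sketch: with a point light placed at two positions a known baseline d apart along the viewing direction, the inverse-square law makes the per-pixel intensity ratio depend only on distance, independent of surface albedo. The idealised version below ignores ambient light, surface orientation and the calibration handled in the thesis.

        import numpy as np

        def lfs_depth(I_near, I_far, d):
            # I_near, I_far: images lit by the point source at the near and far
            # positions; d: baseline between the two light positions.
            # Inverse-square fall-off gives I_near / I_far = ((r + d) / r) ** 2,
            # so the distance r from the near light position is d / (sqrt(ratio) - 1).
            ratio = np.sqrt(np.clip(I_near / np.maximum(I_far, 1e-8), 1.0 + 1e-6, None))
            return d / (ratio - 1.0)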

    Digital 3D documentation of cultural heritage sites based on terrestrial laser scanning


    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation have been well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.

    A Concept For Surface Reconstruction From Digitised Data

    Reverse engineering, and in particular the reconstruction of surfaces from digitized data, is an important task in industry. With the development of new digitizing technologies such as laser scanning or photogrammetry, real objects can be measured or digitized quickly and cost-effectively. The result of the digitizing process is a set of discrete 3D sample points. These sample points have to be converted into a mathematical, continuous surface description, which can be further processed in different computer applications. The main goal of this work is to develop a concept for such a computer-aided surface generation tool that supports the new scanning technologies and meets the industrial requirements for such a product. First, the requirements to be met by a surface reconstruction tool are determined. This market study was carried out by analysing different departments of several companies, and the result is a catalogue of requirements. The number of tasks and applications shows the importance of a fast and precise computer-aided reconstruction tool in industry. The main finding of the analysis is that many important applications, such as stereolithography and copy milling, are based on triangular meshes or are able to handle such polygonal surfaces. Second, the digitizers currently available on the market and used in industry are analysed. Every scanning system has its strengths and weaknesses: a typical problem in digitizing is that some areas of a model cannot be digitized due to occlusion or obstruction, and systems also differ in terms of accuracy, flexibility, and so on. The analysis of these systems leads to a second catalogue of requirements and tasks which have to be solved in order to provide a complete and effective software tool. The analysis also shows that the reconstruction problem cannot be solved fully automatically due to the many limitations of the scanning technologies. Based on the two catalogues of requirements, a concept for a software tool for processing digitized data is developed and presented. The concept is restricted to the generation of polygonal surfaces. It combines automatic processes, such as the generation of triangular meshes from digitized data, with user-interactive tools, such as the reconstruction of sharp corners or the compensation of the scanning probe radius in tactile measured data. The most difficult problem in this reconstruction process is the automatic generation of a surface from discrete measured sample points. Hence, an algorithm for generating triangular meshes from digitized data has been developed. The algorithm is based on the principle of multiple-view combination, and the proposed approach is able to handle large numbers of data points (examples with up to 20 million data points were processed). Two pre-processing algorithms for triangle decimation and surface smoothing are also presented and are part of the mesh generation process. Several practical examples, which show the effectiveness, robustness and reliability of the algorithm, are presented.
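    One of the user-interactive tools mentioned above, compensation of the scanning probe radius, reduces to a simple geometric offset once surface normals are available; the sketch below shows only that offset and leaves out the normal estimation and the interactive control of the offset direction, which are the delicate parts in practice.

        import numpy as np

        def compensate_probe_radius(centers, normals, radius):
            # centers: (n, 3) recorded probe-centre positions from a tactile scan
            # normals: (n, 3) surface normals estimated at each point, pointing
            #          away from the material
            # The actual contact point lies one probe radius below the recorded
            # centre along the surface normal.
            n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            return centers - radius * n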