352 research outputs found

    A Geometric Processing Workflow for Transforming Reality-Based 3D Models in Volumetric Meshes Suitable for FEA

    Conservation of cultural heritage is a key issue, and structural changes and damage can influence the mechanical behaviour of artefacts and buildings. Finite Element Methods (FEM) are widely used to model stress behaviour in mechanical analysis. The typical workflow relies on CAD 3D models built from Non-Uniform Rational B-Spline (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of cultural heritage is widely produced through reality-based approaches, but the resulting models are not suitable for direct use in FEA: the surface mesh must be converted into a volumetric one, and its density must be reduced, since the computational cost of an FEA grows steeply with the number of nodes.
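The density-reduction step this abstract refers to can be illustrated with a simple vertex-clustering decimation, a generic technique rather than the workflow's own algorithm: vertices are snapped to a uniform grid and each occupied cell keeps only its centroid.

```python
import numpy as np

def vertex_cluster_decimate(vertices, cell_size):
    """Snap vertices to a uniform grid and keep one representative
    (the centroid) per occupied cell. A coarse stand-in for the
    density-reduction step needed before volumetric meshing for FEA."""
    keys = np.floor(vertices / cell_size).astype(np.int64)
    clusters = {}
    for key, v in zip(map(tuple, keys), vertices):
        clusters.setdefault(key, []).append(v)
    return np.array([np.mean(c, axis=0) for c in clusters.values()])
```

In a real FEA pipeline the decimated surface would then be passed to a tetrahedral mesher; the cell size trades geometric fidelity against node count.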

    2D and 3D surface image processing algorithms and their applications

    This doctoral dissertation develops algorithms for three applications: 2D image segmentation for solar filament disappearance detection, 3D mesh simplification, and 3D image warping for pre-surgery simulation. Filament area detection in solar images is an image segmentation problem; a combined thresholding and region-growing method is proposed and applied. Based on the detected filament areas, filament disappearances are reported in real time. The solar images from 1999 are processed with the proposed system, and three statistical results on filaments are presented. 3D images can be obtained by passive and active range sensing. An image registration process finds the transformation between each pair of range views. To model an object, a common reference frame into which all views can be transformed must be defined. After registration, the range views are integrated into a non-redundant model; optimization is then necessary to obtain a complete 3D model, since a single surface representation fits the data better. The model may be further simplified for efficient rendering, storage and transmission, or converted to other formats. This work proposes an efficient algorithm for the mesh simplification problem: approximating an arbitrary mesh by a simplified mesh. The algorithm uses a root-mean-square (RMS) distance error metric to estimate facet curvature; the two vertices of an edge, together with the surrounding vertices, determine the average plane. The simplification results are of high quality and the computation is fast; the algorithm is compared with six other major simplification algorithms. Image morphing denotes methods that gradually and continuously deform a source image into a target image while producing the in-between models. Image warping is a continuous deformation of a graphical object; a morphing process is usually composed of warping and interpolation.
This work develops a method and application for pre-surgical planning based on direct manipulation of free-form deformation. The developed user interface provides a friendly interactive tool for plastic surgery; nose augmentation surgery is presented as an example. Displacement vectors and lattices of different resolutions are used to obtain various deformation results. During the deformation, the volume change of the model is also considered, based on a simplified skin-muscle model.
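The combined thresholding and region-growing idea can be sketched as follows. This is a minimal, generic version (function and parameter names are my own, not the dissertation's), assuming filaments appear as dark pixels:

```python
from collections import deque
import numpy as np

def threshold_region_grow(img, seed, thresh):
    """Grow a 4-connected region of below-threshold (dark) pixels
    from a seed pixel; filaments are assumed to appear dark.
    Returns a boolean mask of the detected region."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    if img[seed] >= thresh:
        return mask          # seed is not a filament pixel
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and img[ny, nx] < thresh:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Comparing such masks between consecutive solar images is one straightforward way to flag a filament disappearance.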

    A Concept For Surface Reconstruction From Digitised Data

    Reverse engineering, and in particular the reconstruction of surfaces from digitized data, is an important task in industry. With the development of new digitizing technologies such as laser scanning or photogrammetry, real objects can be measured and digitized quickly and cost-effectively. The result of the digitizing process is a set of discrete 3D sample points. These sample points have to be converted into a mathematical, continuous surface description that can be further processed in different computer applications. The main goal of this work is to develop a concept for such a computer-aided surface generation tool that supports the new scanning technologies and meets industrial requirements for such a product. First, therefore, the requirements to be met by a surface reconstruction tool are determined. This market study was conducted by analysing different departments of several companies, and results in a catalogue of requirements. The number of tasks and applications shows the importance of a fast and precise computer-aided reconstruction tool in industry. The main finding of the analysis is that many important applications, such as stereolithography and copy milling, are based on triangular meshes or can handle such polygonal surfaces. Secondly, the digitizers currently available on the market and used in industry are analysed. Every scanning system has its strengths and weaknesses; a typical problem in digitizing is that some areas of a model cannot be captured due to occlusion or obstruction, and the systems also differ in accuracy, flexibility, etc. The analysis of the systems leads to a second catalogue of requirements and tasks which have to be solved in order to provide a complete and effective software tool. The analysis also shows that the reconstruction problem cannot be solved fully automatically, due to the many limitations of the scanning technologies.
Based on these two catalogues of requirements, a concept for a software tool to process digitized data is developed and presented. The concept is restricted to the generation of polygonal surfaces. It combines automatic processes, such as the generation of triangular meshes from digitized data, with user-interactive tools such as the reconstruction of sharp corners or the compensation of the scanning-probe radius in tactilely measured data. The most difficult problem in this reconstruction process is the automatic generation of a surface from discrete measured sample points. Hence, an algorithm for generating triangular meshes from digitized data has been developed, based on the principle of multiple-view combination. The proposed approach is able to handle large numbers of data points (examples with up to 20 million data points were processed). Two pre-processing algorithms, for triangle decimation and surface smoothing, are also presented as part of the mesh generation process. Several practical examples demonstrate the effectiveness, robustness and reliability of the algorithm.
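Surface smoothing of the kind mentioned as a pre-processing step can be sketched with plain Laplacian smoothing, a standard technique (not necessarily the variant developed in this work): each vertex is repeatedly pulled toward the average of its mesh neighbours.

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Move every vertex a fraction `lam` toward the average of its
    mesh neighbours, repeated `iterations` times. `faces` is an
    (M, 3) integer array of triangle vertex indices."""
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        avg = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(neighbours)])
        v += lam * (avg - v)
    return v
```

Plain Laplacian smoothing shrinks the surface; production tools usually apply variants (e.g. Taubin smoothing) that compensate for this.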

    Laser-scanning based tomato plant modeling for virtual greenhouse environment.


    High-Quality Simplification and Repair of Polygonal Models

    Because of the rapid evolution of 3D acquisition and modelling methods, highly complex and detailed polygonal models with a constantly increasing polygon count are used as three-dimensional geometric representations of objects in computer graphics and engineering applications. That this representation is arguably the most widespread one is due to its simplicity, flexibility and rendering support by 3D graphics hardware. Polygonal models are used for rendering objects in a broad range of disciplines such as medical imaging, scientific visualization, computer-aided design and the film industry. The handling of huge scenes composed of these high-resolution models rapidly approaches the limits of the computational capabilities of any graphics accelerator. In order to cope with this complexity and to build level-of-detail representations, considerable effort has been dedicated in recent years to the development of new mesh simplification methods that produce high-quality approximations of complex models by reducing the number of polygons in the surface while preserving the overall shape, volume and boundaries as much as possible. Many well-established methods and applications require "well-behaved" models as input. Degenerate or incorrectly oriented faces, T-joints, cracks and holes are just a few of the possible degeneracies that are often disallowed by various algorithms. Unfortunately, it is all too common to find polygonal models that contain such artefacts due to incorrect modelling or acquisition. Applications that may require "clean" models include finite element analysis, surface smoothing, model simplification and stereolithography. Mesh repair is the task of removing artefacts from a polygonal model in order to produce an output model that is suitable for further processing by methods and applications with quality requirements on their input.
This thesis introduces a set of new algorithms that address several particular aspects of mesh repair and mesh simplification. Of the two mesh repair methods, one deals with inconsistent normal orientation, while the other removes inconsistent vertex connectivity. Of the three mesh simplification approaches presented here, the first attempts to simplify polygonal models with the highest possible quality, the second applies the developed technique to out-of-core simplification, and the third prevents self-intersections of the model surface that can occur during mesh simplification.
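Repairing inconsistent normal orientation is commonly done by propagating a reference winding across shared edges. A minimal sketch of that generic idea (not necessarily this thesis's algorithm), assuming a single connected, manifold component:

```python
from collections import defaultdict, deque

def orient_consistently(faces):
    """Flood-fill across shared edges from face 0, flipping any face
    whose shared edge runs in the SAME direction as its already
    oriented neighbour's (consistent windings traverse a shared edge
    in opposite directions)."""
    edge_to_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for i in range(3):
            edge_to_faces[frozenset((f[i], f[(i + 1) % 3]))].append(fi)

    faces = [list(f) for f in faces]

    def directed_edges(f):
        return {(f[i], f[(i + 1) % 3]) for i in range(3)}

    seen, queue = {0}, deque([0])
    while queue:
        fi = queue.popleft()
        for i in range(3):
            e = frozenset((faces[fi][i], faces[fi][(i + 1) % 3]))
            for nj in edge_to_faces[e]:
                if nj not in seen:
                    if directed_edges(faces[fi]) & directed_edges(faces[nj]):
                        faces[nj].reverse()
                    seen.add(nj)
                    queue.append(nj)
    return faces
```

The overall orientation may still be globally inside-out; real repair tools additionally check the sign of the enclosed volume and flip everything if needed.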

    An Integrated Procedure to Assess the Stability of Coastal Rocky Cliffs: From UAV Close-Range Photogrammetry to Geomechanical Finite Element Modeling

    The present paper explores the combination of unmanned aerial vehicle (UAV) photogrammetry and three-dimensional geomechanical modeling in the investigation of instability processes along long sectors of coastal rocky cliffs. The need for a reliable and detailed reconstruction of the geometry of the cliff surfaces, besides the geomechanical characterization of the rock materials, can represent a very challenging requirement for sub-vertical coastal cliffs overlooking the sea. Very often, no information can be acquired by alternative surveying methodologies, due to the absence of vantage points, and fieldwork can pose a risk to personnel. The case study is a 600 m long sea cliff located at Sant'Andrea (Melendugno, Apulia, Italy). The cliff is characterized by a very complex geometrical setting, with a striking alternation of 10 to 20 m high vertical walls with frequent caves, arches and rock stacks. Initially, the rocky cliff surface was reconstructed at very fine spatial resolution from a combination of nadir and oblique images acquired by unmanned aerial vehicles. Subsequently, a limited area was selected for further investigation; in particular, a data refinement/decimation procedure was assessed to find a convenient three-dimensional model to be used in the finite element geomechanical modeling without loss of information on the surface complexity. Finally, to test the integrated procedure, the potential modes of failure of this sector of the investigated cliff were determined. Results indicate that the most likely failure mechanism along the examined sea cliff is the possible propagation of shear fractures or tensile failures along concave or overhanging cliff portions created by previous collapses or erosion of the underlying rock volumes.
The proposed approach to the investigation of coastal cliff stability has proven to be a viable and flexible tool for the rapid and highly automated investigation of slope-failure hazards in coastal areas.

    Digital 3D documentation of cultural heritage sites based on terrestrial laser scanning


    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline by developing novel algorithms for consistent-density scanning and automated information extraction for building interiors. A restricted, density-based scan-planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of the data sets. The work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that handles cohesive point clouds through adaptive simplification and accurate layout extraction without generating an intermediate model. Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding.
A hierarchical clustering algorithm is developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects. Shape-adaptive parameters help to segment planar as well as complex interiors, while combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters within geometrically similar regions. Finally, a progressive, scan-line-based, side-ratio-constraint algorithm is presented to identify occluded boundary points by investigating their spatial discontinuity.
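The colour side of such HSV-based clustering can be sketched as follows. This is an illustrative, generic version (the function and parameter names are my own, not the thesis's): coloured scan points are bucketed by quantized hue so that same-coloured objects fall into one candidate cluster.

```python
import colorsys
from collections import defaultdict

def cluster_by_hue(points_rgb, n_bins=8):
    """Bucket coloured scan points by quantized hue. Each point is
    (x, y, z, r, g, b) with colour channels in [0, 1]; returns a
    dict mapping hue bin -> list of (x, y, z) coordinates."""
    clusters = defaultdict(list)
    for x, y, z, r, g, b in points_rgb:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        clusters[int(h * n_bins) % n_bins].append((x, y, z))
    return dict(clusters)
```

A real pipeline would combine such colour buckets with spatial connectivity and the geometric segmentation criteria the abstract describes.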