
    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline with the development of novel algorithms for consistent density scanning and automated information extraction for building interiors. A restricted density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of data sets. The research work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that can handle cohesive point clouds through adaptive simplification and accurate layout extraction without generating an intermediate model. Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented that quickly identifies objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding. A hierarchical clustering algorithm is then developed to handle the vast geometric diversity ranging from planar walls to complex freeform objects. Shape-adaptive parameters help to segment planar as well as complex interiors, whereas combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters within geometrically similar regions. Finally, a progressive scan-line-based, side-ratio-constraint algorithm is presented that identifies occluded boundary data points by investigating their spatial discontinuity.
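    To illustrate the kind of color-driven grouping performed by such a rapid clustering algorithm, the sketch below converts per-point RGB colors to hue and buckets points into coarse hue bins. This is a toy stand-in under assumed inputs, not the thesis's algorithm; the arrays and bin count are hypothetical.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Vectorized hue extraction; rgb is an (N, 3) array in [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    maxc, minc = rgb.max(axis=1), rgb.min(axis=1)
    d = np.maximum(maxc - minc, 1e-12)  # avoid division by zero on grays
    h = np.select(
        [r == maxc, g == maxc],
        [(g - b) / d, 2.0 + (b - r) / d],
        default=4.0 + (r - g) / d,
    )
    return (h / 6.0) % 1.0  # hue in [0, 1)

def hue_clusters(rgb, n_bins=12):
    """Assign every point to a coarse hue bin and return index groups.
    A crude proxy for HSV-based scene decomposition: points sharing a
    dominant hue become candidates for belonging to the same object."""
    bins = (rgb_to_hue(rgb) * n_bins).astype(int) % n_bins
    return {b: np.where(bins == b)[0] for b in np.unique(bins)}

# Hypothetical colored scan: in a real pipeline the point coordinates
# would be combined with these color groups for spatial segmentation.
colors = np.random.rand(1000, 3)
clusters = hue_clusters(colors)
```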

    Quad Meshing

    Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing over the last several years. In this State of the Art Report, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.
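    As one concrete example of the mesh-quality measures surveyed in such reports, the minimum scaled Jacobian of a planar quad can be computed as below. This is a generic illustration of a standard metric, not code from the report itself.

```python
import numpy as np

def quad_scaled_jacobian(quad):
    """Minimum scaled Jacobian over the four corners of a planar quad,
    given as a (4, 2) array of corner coordinates in CCW order.
    Returns a value in [-1, 1]: 1 for a perfect right-angled corner,
    values <= 0 for inverted or degenerate elements."""
    q = np.asarray(quad, dtype=float)
    vals = []
    for i in range(4):
        e1 = q[(i + 1) % 4] - q[i]  # edge leaving corner i
        e2 = q[(i - 1) % 4] - q[i]  # edge entering corner i, reversed
        cross = e1[0] * e2[1] - e1[1] * e2[0]
        vals.append(cross / (np.linalg.norm(e1) * np.linalg.norm(e2)))
    return min(vals)

print(quad_scaled_jacobian([(0, 0), (1, 0), (1, 1), (0, 1)]))    # 1.0, unit square
print(quad_scaled_jacobian([(0, 0), (1, 0), (1, 1), (0.9, 1)]))  # < 1, skewed quad
```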

    Heterogeneous Computing with Focus on Mechanical Engineering

    During the past few years there has been a revolution in the design of desktop computers. Most processors today include more than one processor core, allowing parallel execution of programs. Furthermore, most commodity computers include a graphics processor that outperforms the central processor by at least one order of magnitude. Tapping into this vast resource is commonly referred to as heterogeneous computing. The change in hardware invalidates old software-design truths. There is therefore a need for new algorithms, and for research into adapting existing algorithms to these architectures. Our main focus has been to accelerate algorithms relevant to mechanical engineering. In this dissertation we present four algorithms designed to take advantage of the computational strengths of heterogeneous architectures. Each work is based on state-of-the-art hardware available at the time the research was performed. First we describe an algorithm for high-quality visualization of parametric surfaces. This is useful in a CAD setting, where an accurate rendering is important for visual validation of model quality. We further describe simulation of shallow-water waves using a state-of-the-art numerical scheme. Our accelerated implementation achieved a speedup of up to 40 times compared to an optimized reference implementation, and features real-time simulation and visualization of semi-realistic nonlinear wave effects. Finally we present two algorithms for shape simplification of 3D models. The algorithms aim to reduce the time spent preparing models for finite element analysis. Finite element analysis is important for determining the mechanical properties of objects prior to manufacture; such analysis can be used to investigate thermal behavior and determine the strengths and weaknesses of physical components. Before the analysis can take place, the models must undergo a preparation phase in which shape simplification plays an important role. The first shape simplification approach we describe is a hybrid algorithm that uses graphics hardware for the computationally demanding operations and the main processor for maintaining the data structure. Our second work describes a shape simplification algorithm highly suitable for heterogeneous architectures, with a reference implementation on the Cell BE.
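    The accelerated solver itself is GPU code built on a modern high-resolution scheme; purely to illustrate why such stencil updates map so well onto data-parallel hardware, here is a minimal 1-D Lax-Friedrichs sketch of the shallow-water equations in numpy. The scheme, grid size, and dam-break initial condition are illustrative assumptions, not the dissertation's setup.

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux(h, hu):
    """Physical flux of the 1-D shallow-water equations."""
    return hu, hu**2 / h + 0.5 * g * h**2

def lax_friedrichs_step(h, hu, dx, dt):
    """One explicit update on a periodic grid. Every output cell depends
    only on its immediate neighbours, so all cells update in parallel."""
    fh, fhu = flux(h, hu)
    c = dt / (2.0 * dx)
    h_new = 0.5 * (np.roll(h, 1) + np.roll(h, -1)) - c * (np.roll(fh, -1) - np.roll(fh, 1))
    hu_new = 0.5 * (np.roll(hu, 1) + np.roll(hu, -1)) - c * (np.roll(fhu, -1) - np.roll(fhu, 1))
    return h_new, hu_new

# Dam-break initial condition: deeper water on the left half.
n = 400
dx = 1.0 / n
h = np.where(np.linspace(0.0, 1.0, n) < 0.5, 2.0, 1.0)
hu = np.zeros(n)
dt = 0.4 * dx / np.sqrt(g * h.max())  # rough CFL-limited time step
for _ in range(200):
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
```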

    Finite Element Modeling Driven by Health Care and Aerospace Applications

    This thesis concerns the development, analysis, and computer implementation of mesh generation algorithms encountered in finite element modeling in health care and aerospace. The finite element method can reduce a continuous system to a discrete idealization that can be solved in the same manner as a discrete system, provided the continuum is discretized into a finite number of simple geometric shapes (e.g., triangles in two dimensions or tetrahedra in three dimensions). In health care, namely anatomic modeling, a discretization of the biological object is essential to compute tissue deformation for physics-based simulations. This thesis proposes an efficient procedure to convert 3-dimensional imaging data into adaptive lattice-based discretizations of well-shaped tetrahedra or mixed elements (i.e., tetrahedra, pentahedra and hexahedra). This method operates directly on segmented images, thus skipping the surface reconstruction required by traditional Computer-Aided Design (CAD)-based meshing techniques, which is convoluted, especially in complex anatomic geometries. Our approach utilizes proper mesh gradation and tissue-specific multi-resolution, without sacrificing fidelity, while maintaining a smooth surface to reflect a certain degree of visual reality. Image-to-mesh conversion can facilitate accurate computational modeling for biomechanical registration of Magnetic Resonance Imaging (MRI) in image-guided neurosurgery. Neuronavigation with deformable registration of preoperative MRI to intraoperative MRI allows the surgeon to view the location of surgical tools relative to preoperative anatomical (MRI) or functional data (DT-MRI, fMRI), thereby avoiding damage to eloquent areas during tumor resection. This thesis presents a deformable registration framework that utilizes multi-tissue mesh adaptation to map preoperative MRI to intraoperative MRI of patients who have undergone a brain tumor resection. Our enhancements with mesh adaptation improve the accuracy of the registration by more than 5 times compared to rigid and traditional physics-based non-rigid registration, and by more than 4 times compared to publicly available B-Spline interpolation methods. The adaptive framework is parallelized for shared-memory multiprocessor architectures. Performance analysis shows that this method could be applied, on average, in less than two minutes, achieving desirable speed for use in a clinical setting. The last part of this thesis focuses on finite element modeling of CAD data, an integral part of the design and optimization of components and assemblies in industry. We propose a new parallel mesh generator for efficient tetrahedralization of piecewise linear complex domains in aerospace. CAD-based meshing algorithms typically improve the shape of the elements in a post-processing step because of the high complexity and cost of the operations involved. On the contrary, our method optimizes the shape of the elements throughout the generation process to obtain maximum quality, and utilizes high performance computing to reduce the overheads and improve end-user productivity. The proposed mesh generation technique is a combination of Advancing Front type point placement, direct point insertion, and parallel multi-threaded connectivity optimization schemes. The mesh optimization is based on a speculative (optimistic) approach that has been proven to perform well on shared-memory hardware. The experimental evaluation indicates that the quality and performance of this method improve substantially over existing state-of-the-art unstructured grid technology currently incorporated in several commercial systems. The proposed mesh generator will be part of an Extreme-Scale Anisotropic Mesh Generation Environment designed to meet industry's expectations and NASA's CFD vision.
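    To give a flavor of the lattice-based image-to-mesh idea, the sketch below fills every foreground voxel of a segmented image with six tetrahedra using the standard Kuhn cube decomposition. It is a deliberately crude illustration under assumed inputs; the thesis's mesher additionally handles gradation, tissue-specific resolution, surface smoothing, and element quality.

```python
import numpy as np
from itertools import permutations

# Kuhn decomposition: six tetrahedra per cube, one per monotone path of
# corner offsets from (0, 0, 0) to (1, 1, 1).
CUBE_TETS = []
for perm in permutations(range(3)):
    path = [np.zeros(3, dtype=int)]
    for axis in perm:
        nxt = path[-1].copy()
        nxt[axis] = 1
        path.append(nxt)
    CUBE_TETS.append(path)

def image_to_tets(mask):
    """mask: 3-D boolean array (True = tissue voxel). Returns vertex
    coordinates and tetrahedra as index quadruples, covering every True
    voxel with six tets each."""
    verts, tets = {}, []

    def vid(p):
        key = tuple(int(x) for x in p)
        if key not in verts:
            verts[key] = len(verts)
        return verts[key]

    for idx in np.argwhere(mask):
        for tet in CUBE_TETS:
            tets.append([vid(idx + off) for off in tet])
    coords = np.array(sorted(verts, key=verts.get), dtype=float)
    return coords, np.array(tets)

# Hypothetical segmentation: a solid ball inside a 16^3 image.
x, y, z = np.mgrid[:16, :16, :16]
mask = (x - 8) ** 2 + (y - 8) ** 2 + (z - 8) ** 2 < 36
coords, tets = image_to_tets(mask)
```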

    Doctor of Philosophy

    Computational simulation has become an indispensable tool in the study of both basic mechanisms and pathophysiology of all forms of cardiac electrical activity. Because the heart comprises approximately 4 billion electrically active cells, it is not possible to geometrically model or computationally simulate each individual cell. As a result, computational models of the heart are, of necessity, abstractions that approximate electrical behavior at the cell, tissue, and whole-body level. The goal of this PhD dissertation was to evaluate several aspects of these abstractions by exploring a set of modeling approaches in the field of cardiac electrophysiology, and to develop means to evaluate both the magnitude of the resulting errors from a purely technical perspective and the impact of those errors in terms of physiological parameters. The first project used subject-specific models and experiments with acute myocardial ischemia to show that one common simplification used to model myocardial ischemia, the simplest form of the border zone between healthy and ischemic tissue, was not supported by the experimental results. We propose an alternative approximation of the border zone that better simulates the experimental results. The second study examined the impact of simplifications in geometric models on simulations of cardiac electrophysiology. Such models consist of a connected mesh of polygonal elements and must often capture complex external and internal boundaries. A conforming mesh contains elements that closely follow the shapes of boundaries; nonconforming meshes fit the boundaries only approximately and are easier to construct, but their impact on simulation accuracy has, to our knowledge, remained unknown. We evaluated the impact of this simplification on three different forms of bioelectric field simulations. The third project evaluated the impact of an additional geometric modeling error: positional uncertainty of the heart in simulations of the ECG. We applied a relatively novel and highly efficient statistical approach, the generalized Polynomial Chaos-Stochastic Collocation method (gPC-SC), to a boundary element formulation of the electrocardiographic forward problem to carry out the necessary comprehensive sensitivity analysis. We found variations large enough to mask or to mimic signs of ischemia in the ECG.
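    As a minimal illustration of the stochastic collocation idea behind gPC-SC, the sketch below propagates a single normally distributed uncertain parameter through a toy model with Gauss-Hermite quadrature. The dissertation applies the multidimensional method to a boundary element ECG forward model; the `ecg_amp` stand-in here is entirely hypothetical.

```python
import numpy as np

def collocation_moments(model, mu, sigma, n_nodes=9):
    """Estimate the mean and variance of model(x) for x ~ Normal(mu, sigma)
    by evaluating the model only at Gauss-Hermite quadrature nodes --
    no random sampling, and far fewer model runs than Monte Carlo."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / weights.sum()  # normalize to probability weights
    samples = np.array([model(mu + sigma * z) for z in nodes])
    mean = np.sum(weights * samples)
    var = np.sum(weights * (samples - mean) ** 2)
    return mean, var

# Hypothetical scalar forward model: an ECG amplitude that depends
# nonlinearly on an uncertain shift of the heart's position.
ecg_amp = lambda shift: 1.2 * np.exp(-0.5 * shift**2)
mean, var = collocation_moments(ecg_amp, mu=0.0, sigma=0.3)
```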

    A Characterization Of Low Cost Simulator Image Generation Systems

    This report identifies and briefly discusses the characteristics that should be considered in the evaluation, comparison, and selection of low cost computer image generation systems to be used for simulator applications.

    Regular Hierarchical Surface Models: A conceptual model of scale variation in a GIS and its application to hydrological geomorphometry

    Environmental and geographical process models inevitably involve parameters that vary spatially. One example is hydrological modelling, where parameters derived from the shape of the ground, such as flow direction and flow accumulation, are used to describe the spatial complexity of drainage networks. One way of handling such parameters is by using a Digital Elevation Model (DEM); such modelling is the basis of the science of geomorphometry. A frequently ignored but inescapable challenge when modellers work with DEMs is the effect of scale and geometry on the model outputs. Many parameters vary with scale as much as they vary with position. Modelling variability with scale is necessary to simplify and generalise surfaces, and desirable to accurately reconcile model components that are measured at different scales. This thesis develops a surface model that is optimised to represent scale in environmental models. A Regular Hierarchical Surface Model (RHSM) is developed that employs a regular tessellation of space and scale forming a self-similar regular hierarchy, and incorporates Level Of Detail (LOD) ideas from computer graphics. Following convention from systems science, the proposed model is described in its conceptual, mathematical, and computational forms. The RHSM development was informed by a categorisation of Geographical Information Science (GISc) surfaces within a cohesive framework of geometry, structure, interpolation, and data model. Positioning the RHSM within this broader framework made it easier to adapt algorithms designed for other surface models to conform to the new model. The RHSM has an implicit data model that utilises a variation of the intrinsically hierarchical Hexagonal Image Processing referencing system of Middleton and Sivaswamy (2001), here generalised for rectangular and triangular geometries. The RHSM provides a simple framework to form a pyramid of coarser values in a process characterised as a scaling function. In addition, variable-density realisations of the hierarchical representation can be generated by defining an error value and a decision rule to select the coarsest appropriate scale for a given region, satisfying the modeller's intentions. The RHSM is assessed using adaptations of the geomorphometric algorithms flow direction and flow accumulation. The effects of scale and geometry on the anisotropy and accuracy of model results are analysed on dispersive and concentrative cones, and on Light Detection And Ranging (LiDAR) derived surfaces of the urban area of Dunedin, New Zealand. The RHSM modelling process revealed aspects of the algorithms not obvious within a single geometry, such as the influence of node geometry on flow direction results, and a conceptual weakness of flow accumulation algorithms on dispersive surfaces that causes asymmetrical results. In addition, comparison of algorithm behaviour between geometries undermined the hypothesis that variance of cell cross-section with direction is important for the conversion of cell accumulations to point values. The ability to analyse algorithms for scale and geometry, and to adapt algorithms within a cohesive conceptual framework, offers deeper insight into algorithm behaviour than previously achieved. The deconstruction of algorithms into geometry-neutral forms and the application of scaling functions are important contributions to the understanding of spatial parameters within GISc.
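    For readers unfamiliar with the two geomorphometric parameters used in this assessment, a minimal numpy sketch of their classic D8 square-grid forms follows. The thesis generalizes both across geometries and scales; this version assumes a pit-free DEM and is illustrative only.

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_direction(dem):
    """Index into OFFSETS of each cell's steepest downslope neighbour
    (distance-weighted drop), or -1 for pits and outflow cells."""
    rows, cols = dem.shape
    fdir = -np.ones(dem.shape, dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_drop, best_k = 0.0, -1
            for k, (dr, dc) in enumerate(OFFSETS):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, best_k = drop, k
            fdir[r, c] = best_k
    return fdir

def d8_flow_accumulation(dem, fdir):
    """Cells draining through each cell: visit cells from highest to
    lowest elevation and pass the accumulated count downslope."""
    acc = np.ones(dem.shape)
    for flat in np.argsort(dem, axis=None)[::-1]:
        r, c = np.unravel_index(flat, dem.shape)
        if fdir[r, c] >= 0:
            dr, dc = OFFSETS[fdir[r, c]]
            acc[r + dr, c + dc] += acc[r, c]
    return acc

# A dispersive cone (highest at the apex), echoing the thesis's test cases.
x, y = np.mgrid[:64, :64]
cone = -np.hypot(x - 32, y - 32)
acc = d8_flow_accumulation(cone, d8_flow_direction(cone))
```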

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved before truly predictive image generation is achieved.
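    As an illustration of the factorization-style compression commonly applied to BTF data, the following sketch compresses a BTF matrix with a truncated SVD, so that a single texel/direction value can be decompressed at render time with a short dot product. The thesis develops its own compression and rendering techniques; the matrix sizes and rank below are hypothetical.

```python
import numpy as np

def compress_btf(btf, rank):
    """btf: 2-D matrix with one row per texel and one column per
    (view, light) direction pair. A truncated SVD keeps 'rank' terms,
    trading accuracy for memory."""
    u, s, vt = np.linalg.svd(btf, full_matrices=False)
    spatial = u[:, :rank] * s[:rank]  # (texels x rank) per-texel weights
    angular = vt[:rank]               # (rank x directions) basis functions
    return spatial, angular

def lookup(spatial, angular, texel, direction):
    """Decompress one (texel, view-light pair) entry: a rank-length dot
    product, cheap enough for per-pixel evaluation in a shader."""
    return spatial[texel] @ angular[:, direction]

# Hypothetical BTF: 64x64 texels, 81 view x 81 light directions.
btf = np.random.rand(64 * 64, 81 * 81).astype(np.float32)
spatial, angular = compress_btf(btf, rank=16)
value = lookup(spatial, angular, texel=0, direction=0)
```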