
    Studies of several tetrahedralization problems

    The main purpose of decomposing an object into simpler components is to reduce a problem involving the complex object to a number of subproblems on simpler components. In particular, a tetrahedralization is a partition of the input domain in R³ into tetrahedra that meet only at shared faces. Tetrahedralizations have applications in the finite element method, mesh generation, computer graphics, and robotics. This thesis investigates four problems in tetrahedralizations and triangulations. The first problem concerns the computational complexity of tetrahedralization detection. We present an O(nm log n) algorithm to determine whether a set of line segments L is the edge set of a tetrahedralization, where m is the number of segments and n is the number of endpoints in L. We show that it is NP-complete to decide whether L contains the edge set of a tetrahedralization, and also NP-complete to decide whether L is tetrahedralizable. The second problem concerns minimal tetrahedralizations. After deriving some properties of the graphs of polyhedra, we identify a class of polyhedra that can be minimally tetrahedralized in O(n²) time. The third problem concerns the tetrahedralization of two nested convex polyhedra. We give a method to tetrahedralize the region between two nested convex polyhedra into a linear number of tetrahedra without introducing Steiner points, answering an open problem raised by Bern [16]. The fourth problem concerns the lower bound for β-skeletons belonging to minimum weight triangulations. We prove a lower bound on β (β = (1/6)√(2√3) + 45) such that if β is less than this value, the β-skeleton of a point set may not always be a subgraph of the minimum weight triangulation of the point set. This result settles Keil's conjecture [62].
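To make the β-skeleton concrete, here is a minimal sketch (ours, not the thesis author's) of the lune-based β-skeleton for β ≥ 1, in which an edge (p, q) survives only if no third point lies inside the intersection of two disks of radius βd/2; β = 1 reduces to the Gabriel graph. All names and parameters are illustrative.

```python
import numpy as np
from itertools import combinations

def beta_skeleton(points, beta=1.0):
    """Lune-based beta-skeleton for beta >= 1.

    Edge (p, q) is kept iff no third point lies strictly inside the
    intersection of the two disks of radius beta*|pq|/2 centered at
    c1 = (1 - beta/2)*p + (beta/2)*q and c2 = (beta/2)*p + (1 - beta/2)*q.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    edges = []
    for i, j in combinations(range(n), 2):
        p, q = pts[i], pts[j]
        r = beta * np.linalg.norm(q - p) / 2.0
        c1 = (1 - beta / 2.0) * p + (beta / 2.0) * q
        c2 = (beta / 2.0) * p + (1 - beta / 2.0) * q
        blocked = any(
            k not in (i, j)
            and np.linalg.norm(pts[k] - c1) < r
            and np.linalg.norm(pts[k] - c2) < r
            for k in range(n)
        )
        if not blocked:
            edges.append((i, j))
    return edges

if __name__ == "__main__":
    pts = [(0, 0), (1, 0), (0.5, 0.8), (0.5, 0.3)]
    print(beta_skeleton(pts, beta=1.2))
```

Lowering β shrinks the lune and admits more edges, which is why the result above is phrased as a threshold below which containment in the minimum weight triangulation can fail.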

    AUTOMATIC 3D RECONSTRUCTION OF BUILDINGS ROOF TOPS IN DENSELY URBANIZED AREAS

    3D reconstruction of the urban environment is a well-studied problem in photogrammetry and computer vision that has attracted growing interest from the scientific community for many years. Although the current state of the art presents very impressive results, there is still room for improvement. Reliable and accurate 3D reconstructions are useful for a wide range of applications, such as urban planning, GIS, tax assessment, cadastre, insurance, and 3D city modelling. In this paper, a methodology for the automatic 3D reconstruction of building roof tops in densely urbanized areas from dense point cloud data is proposed. It consists of three main phases, each comprising a set of processing steps. In the first phase, the point cloud is simplified and smoothed; outliers and non-roof elements are detected and removed using shape, position, and area criteria. In the second phase, the geometry of the building roof tops is optimized by detecting and normalizing their edges. In the last phase, the building roof tops are reconstructed: a progressive process applies a plane-fitting algorithm in combination with Screened Poisson Surface Reconstruction, producing and optimizing the roof-top surfaces. A software tool was developed to implement the proposed methodology. The results are assessed and compared with those of another open-source software package. The proposed methodology appears effective and provides satisfactory results, as it properly handles the very noisy point clouds of densely urbanized environments.
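As an illustration of the plane-fitting step in the final phase, here is a minimal RANSAC-style sketch; the threshold, iteration count, and function names are our assumptions, not the paper's parameters.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, seed=None):
    """Fit a plane to noisy 3D points with a simple RANSAC loop.

    Returns (normal, d) for the plane n.x + d = 0 with the most inliers.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(pts @ normal + d)   # point-to-plane distances
        inliers = int((dist < threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```

A production pipeline would refit the plane to all inliers by least squares and run this per roof segment rather than over the whole cloud.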

    Scene Reconstruction from Multi-Scale Input Data

    Geometry acquisition of real-world objects by means of 3D scanning or stereo reconstruction constitutes a very important and challenging problem in computer vision. 3D scanners and stereo algorithms usually provide geometry from one viewpoint only, and several of these scans need to be merged into one consistent representation. Scanner data generally has lower noise levels than stereo output, and the scanning scenario is more controlled. In image-based stereo approaches, the aim is to reconstruct the 3D surface of an object solely from multiple photos of the object. In many cases, the stereo geometry is contaminated with noise and outliers, and exhibits large variations in scale. Approaches that fuse such data into one consistent surface must be resilient to such imperfections. In this thesis, we take a closer look at geometry reconstruction using both scanner data and the more challenging image-based scene reconstruction approaches. In particular, this work focuses on the uncontrolled setting, where the input images are not constrained and may be taken with different camera models, under different lighting and weather conditions, and from vastly different points of view. A typical dataset contains many views that observe the scene from an overview perspective, and relatively few views that capture small details of the geometry. These datasets yield surface samples of the scene at vastly different resolutions. As we show in this thesis, the multi-resolution, or "multi-scale", nature of the input is a relevant aspect of surface reconstruction that has rarely been considered in the literature. Integrating scale as additional information in the reconstruction process can make a substantial difference in surface quality. We develop and study two different approaches for surface reconstruction that are able to cope with the challenges resulting from uncontrolled images. The first approach implements surface reconstruction by fusing depth maps using a multi-scale hierarchical signed distance function. The hierarchical representation allows fusion of multi-resolution depth maps without mixing geometric information at incompatible scales, which preserves detail in high-resolution regions. An incomplete octree is constructed by incrementally adding triangulated depth maps to the hierarchy, which leads to scattered samples of the multi-resolution signed distance function. A continuous representation of the scattered data is defined by constructing a tetrahedral complex, and a final, highly adaptive surface is extracted by applying the Marching Tetrahedra algorithm. A second, point-based approach is based on a more abstract, multi-scale implicit function defined as a sum of basis functions. Each input sample contributes a single basis function which is parameterized solely by the sample's attributes, effectively yielding a parameter-free method. Because the scale of each sample controls the size of the basis function, the method automatically adapts to data redundancy for noise reduction and is highly resilient to the quality-degrading effects of low-resolution samples, thus favoring high-resolution surfaces. Furthermore, we present a robust, image-based reconstruction system for surface modeling: MVE, the Multi-View Environment. The implementation provides all steps of the pipeline: calibration and registration of the input images, dense geometry reconstruction by means of stereo, a surface reconstruction step, and post-processing such as remeshing and texturing.
In contrast to other software solutions for image-based reconstruction, MVE handles large, uncontrolled, multi-scale datasets as well as input from more controlled capture scenarios. The reason lies in the particular choice of the multi-view stereo and surface reconstruction algorithms. The resulting surfaces are represented as triangular meshes, which are piecewise linear approximations of the real surface. The individual triangles are often so small that they barely contribute any geometric information, and they can be ill-shaped, which causes numerical problems. A surface remeshing approach is introduced which changes the surface discretization so that more favorable triangles are created. It distributes the vertices of the mesh according to a density function derived from the curvature of the geometry. Such a mesh is better suited for further processing and has reduced storage requirements. We thoroughly compare the developed methods against the state of the art and also perform a qualitative evaluation of the two surface reconstruction methods on a wide range of datasets with different properties. The usefulness of the remeshing approach is demonstrated on both scanner and multi-view stereo data.
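To illustrate the point-based method's idea of an implicit function assembled from per-sample basis functions, the sketch below evaluates such a function with Gaussian weights whose widths follow each sample's scale. The Gaussian choice and all names are our assumptions; the thesis defines its own basis functions.

```python
import numpy as np

def implicit_value(x, samples):
    """Evaluate a multi-scale implicit function at point x.

    Each sample contributes one basis function parameterized only by its
    attributes. samples is a list of (position, normal, scale) triples;
    the sign of the returned value indicates inside/outside, and its
    zero set is the surface estimate.
    """
    x = np.asarray(x, dtype=float)
    num, den = 0.0, 1e-12
    for pos, normal, scale in samples:
        diff = x - np.asarray(pos, dtype=float)
        # weight falls off with distance; width tracks the sample's scale
        w = np.exp(-(diff @ diff) / (2.0 * scale ** 2))
        # signed distance to the sample's tangent plane
        num += w * (diff @ np.asarray(normal, dtype=float))
        den += w
    return num / den
```

Because low-resolution samples carry large scales, their wide, flat contributions are dominated by nearby high-resolution samples, which matches the resilience property described above.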

    Segmentation and Deformable Modelling Techniques for a Virtual Reality Surgical Simulator in Hepatic Oncology

    Liver surgical resection is one of the most frequently used curative therapies. However, resectability is problematic. There is a need for a computer-assisted surgical planning and simulation system that can accurately and efficiently simulate the liver, vessels, and tumours of actual patients. The present project describes the development of the core segmentation and deformable modelling techniques for such a system. For precise detection of irregularly shaped areas with indistinct boundaries, the segmentation incorporated active contours: gradient vector flow (GVF) snakes and level sets. To improve efficiency, a chessboard distance transform was used to replace part of the GVF computation. To automatically initialize the liver volume detection process, a rotating template was introduced to locate the starting slice. To maintain shape during the segmentation process, a simplified object-shape learning step was introduced to avoid occasional significant errors. Skeletonization with fuzzy connectedness was used for vessel segmentation. To achieve real-time interactivity, the deformation regime of the system was based on a single-organ mass-spring system (MSS), which introduced on-the-fly local mesh refinement to raise the deformation accuracy and the mesh control quality. This method was then extended to a multiple soft-tissue constraint system by supplementing it with adaptive constraint mesh generation. A mesh quality measure was tailored based on a wide comparison of classic measures. Adjustable feature and parameter settings were thus provided to make tissues of interest distinct from adjacent structures while keeping the mesh suitable for on-line topological transformation and deformation. More than 20 actual patient CT datasets and 2 magnetic resonance imaging (MRI) liver datasets were used to evaluate the performance of the segmentation method. Instrument manipulations of probing, grasping, and simple cutting were successfully simulated on deformable constraint liver tissue models. This project was implemented in conjunction with the Division of Surgery, Hammersmith Hospital, London; the preliminary realism of the simulation was judged satisfactory by the consultant hepatic surgeon.
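As a rough illustration of the deformation side, here is a minimal explicit-Euler step for a mass-spring system; the stiffness, damping, and time-step values are illustrative assumptions, not the simulator's tuned parameters.

```python
import numpy as np

def mss_step(pos, vel, rest_len, springs, masses,
             k=100.0, damping=0.9, dt=1e-3):
    """One explicit-Euler step of a simple mass-spring system.

    pos, vel: (n, 3) arrays; springs: list of (i, j) index pairs with
    rest lengths in the dict rest_len[(i, j)]; masses: (n,) array.
    """
    forces = np.zeros_like(pos)
    for (i, j) in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length < 1e-12:
            continue
        # Hooke's law along the spring direction
        f = k * (length - rest_len[(i, j)]) * d / length
        forces[i] += f
        forces[j] -= f
    vel = damping * (vel + dt * forces / masses[:, None])
    return pos + dt * vel, vel
```

A real simulator would add gravity and contact forces, and local mesh refinement as described above would insert vertices and springs where an instrument interacts with the tissue.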

    Development of a SGM-based multi-view reconstruction framework for aerial imagery

    Advances in the technology of digital airborne camera systems allow for the observation of surfaces with sampling rates in the range of a few centimeters. In combination with novel matching approaches, which estimate depth information for virtually every pixel, surface reconstructions of impressive density and precision can be generated. Image-based surface generation is therefore now a serious alternative to LiDAR-based data collection for many applications. Surface models serve as the primary basis for geographic products such as map creation, production of true-ortho photos, and visualization within virtual globes. The goal of the presented thesis is the development of a framework for the fully automatic generation of 3D surface models from aerial images, both standard nadir and oblique views. This comprises several challenges. On the one hand, the dimensions of aerial imagery are considerable, and the extent of the areas to be reconstructed can encompass whole countries. Besides scalability of methods, this also requires decent processing times and efficient handling of the given hardware resources. Moreover, in addition to high precision requirements, a high degree of automation has to be guaranteed to limit manual interaction as much as possible. Due to its advantages in scalability, a stereo method is utilized in the presented thesis. The approach for dense stereo is based on an adapted version of the semi-global matching (SGM) algorithm. Following a hierarchical approach, corresponding image regions and meaningful disparity search ranges are identified. It is verified that, depending on the undulations of the scene, time and memory demands can be reduced significantly, by up to 90% in some of the conducted tests. This enables the processing of aerial datasets on standard desktop machines in reasonable times, even for large fields of depth. Stereo approaches generate disparity or depth maps in which redundant depth information is available. To exploit this redundancy, a method for the refinement of stereo correspondences is proposed. Redundant observations across stereo models are identified, checked for geometric consistency, and their reprojection error is minimized. This way, outliers are removed and the precision of depth estimates is improved. In order to generate consistent surfaces, two algorithms for depth map fusion were developed. The first fusion strategy aims at the generation of 2.5D height models, also known as digital surface models (DSMs). The proposed method improves on existing methods regarding quality in areas of depth discontinuities, for example at roof edges. Using benchmarks designed for the evaluation of image-based DSM generation, we show that the developed approaches compare favorably to state-of-the-art algorithms and that height precisions of a few GSDs can be achieved. Furthermore, methods for the derivation of meshes from DSM data are discussed. The fusion of depth maps for 3D scenes, as frequently required, for example, when evaluating high-resolution oblique aerial images of complex urban environments, demands a different approach, since such scenes cannot in general be represented as height fields. Moreover, depths across depth maps possess varying precision and sampling rates due to variations in image scale, errors in orientation, and other effects. Within this thesis, a median-based fusion methodology is proposed.
Using geometry-adaptive triangulation of the depth maps, per-depth normals are extracted and, along with the point coordinates, filtered and fused using tree structures. The output of this method is a set of oriented points which can then be used to generate meshes. The precision and density of the method are evaluated using established multi-view benchmarks. Besides the capability to process close-range datasets, results for large oblique airborne datasets are presented. The report closes with a summary, a discussion of limitations, and perspectives regarding improvements and enhancements. The implemented algorithms are core elements of the commercial software package SURE, which is freely available for scientific purposes.
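The core of SGM is a dynamic-programming cost aggregation along image paths. The sketch below shows the standard recurrence for a single left-to-right path over a cost volume; the penalty values are illustrative, and a full implementation sums 8 or 16 such path directions.

```python
import numpy as np

def sgm_aggregate_left_to_right(cost, p1=10.0, p2=120.0):
    """Aggregate a stereo matching cost volume along one SGM path.

    cost has shape (H, W, D): per-pixel matching cost for D disparity
    candidates. P1 penalizes disparity changes of +/-1 between
    neighboring pixels, P2 penalizes larger jumps.
    """
    cost = np.asarray(cost, dtype=float)
    H, W, D = cost.shape
    agg = np.empty_like(cost)
    agg[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = agg[:, x - 1]                         # shape (H, D)
        prev_min = prev.min(axis=1, keepdims=True)   # best previous cost
        dm1 = np.roll(prev, 1, axis=1)               # came from d-1
        dm1[:, 0] = np.inf
        dp1 = np.roll(prev, -1, axis=1)              # came from d+1
        dp1[:, -1] = np.inf
        best = np.minimum.reduce([prev, dm1 + p1, dp1 + p1, prev_min + p2])
        agg[:, x] = cost[:, x] + best - prev_min     # bound cost growth
    return agg
```

Subtracting the previous minimum keeps aggregated costs bounded, which is the standard trick that lets SGM run with small integer types on large aerial frames.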

    COMPUTATIONAL ULTRASOUND ELASTOGRAPHY: A FEASIBILITY STUDY

    Ultrasound elastography (UE) is an emerging set of imaging modalities used to assess the biomechanical properties of soft tissues, and it has been applied in numerous clinical settings. In particular, results from clinical trials of UE in breast lesion differentiation and liver fibrosis staging indicated a lack of confidence in UE measurements and image interpretation. Confidence in the interpretation of UE measurements is critically important for improving the clinical utility of UE. The primary objective of this thesis is to develop a computational simulation platform based on open-source software packages including Field II, VTK, FEBio, and TetGen. The proposed virtual simulation platform can be used to simulate strain elastography (SE) and acoustic radiation force based shear wave elastography (SWE), including point SWE (pSWE), supersonic shear imaging (SSI), and acoustic radiation force impulse (ARFI) imaging. To demonstrate its usefulness, this thesis provides examples for breast cancer detection; the simulated results reproduce what has been reported in the literature. To statistically analyze the intrinsic variations of shear wave speed (SWS) in fibrotic liver tissues, a probability density function (PDF) of the SWS distribution, in conjunction with a lossless stochastic tissue model, was derived using the principle of maximum entropy (ME). The performance of the proposed PDF was evaluated using Monte Carlo (MC) simulated shear wave data and compared against three other commonly used PDFs. We demonstrate theoretically, for the first time, that SWS measurements follow a non-Gaussian distribution. One advantage of the proposed PDF is its physically meaningful parameters. We also conducted a case study of the relationship between shear wave measurements and the microstructure of fibrotic liver tissues, using three different virtual tissue models to represent the underlying microstructures. A further innovation of this thesis is the inclusion of "biologically relevant" fibrotic liver tissue models for the simulation of shear wave elastography. To link tissue structure, composition, and architecture directly to the ultrasound measurements, a "biologically relevant" tissue model was established using systems biology. Our initial results demonstrated that the simulated virtual liver tissues could qualitatively reproduce histological results and wave speed measurements. In conclusion, these computational tools and theoretical analyses can improve confidence in the interpretation of UE images and measurements.
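As a simplified stand-in for the shear wave speed estimators used in pSWE/SSI-type systems, the following sketch estimates SWS by time-of-flight between two tracking locations via cross-correlation; all names and parameters are ours, not the platform's API.

```python
import numpy as np

def shear_wave_speed(trace_a, trace_b, dx, fs):
    """Estimate shear wave speed by time-of-flight.

    trace_a, trace_b: equal-length axial-displacement traces recorded at
    two lateral tracking positions separated by dx (meters), sampled at
    fs (Hz). Uses the lag of the cross-correlation peak as the travel
    time of the wavefront between the two positions.
    """
    a = trace_a - np.mean(trace_a)
    b = trace_b - np.mean(trace_b)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)   # samples by which b lags a
    dt = lag / fs
    return np.inf if dt == 0 else dx / dt

# Under linear elasticity and incompressibility with tissue density rho,
# Young's modulus follows E = 3 * rho * c**2 from the estimated speed c.
```

Real estimators add regression over many lateral positions and outlier rejection, which is precisely where the intrinsic SWS variability analyzed in the thesis enters.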

    Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems

    Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires considerable effort and expertise in both the application and systems domains. This is especially relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution to this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps to maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to ever-increasing intra- and inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs, and it is extensible to other devices. We demonstrate that, by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is integration with machine learning to further improve its decision-making and performance. As a bridge to this goal, since the framework is under development, we experiment with data from nuclear physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
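To make the scheduling idea concrete, here is a toy earliest-finish-time scheduler over heterogeneous workers. It is our sketch of the general policy, not the framework's actual interface; every name in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost: float = 1.0    # abstract work units
    device: str = "any"  # "cpu", "gpu", or "any"

class EftScheduler:
    """Toy earliest-finish-time scheduler over heterogeneous workers.

    workers maps a worker id to its relative speed, e.g.
    {"cpu0": 1.0, "gpu0": 6.0}. Each task is placed on the compatible
    worker that would finish it earliest, balancing load across and
    within nodes in the spirit of the runtime described above.
    """
    def __init__(self, workers):
        self.speed = dict(workers)
        self.busy_until = {w: 0.0 for w in workers}

    def schedule(self, task: Task) -> str:
        compatible = [w for w in self.speed
                      if task.device == "any" or w.startswith(task.device)]
        best = min(compatible,
                   key=lambda w: self.busy_until[w] + task.cost / self.speed[w])
        self.busy_until[best] += task.cost / self.speed[best]
        return best

if __name__ == "__main__":
    sched = EftScheduler({"cpu0": 1.0, "cpu1": 1.0, "gpu0": 6.0})
    for t in [Task("hits", 4.0, "gpu"), Task("io", 1.0, "cpu"), Task("fit", 8.0)]:
        print(t.name, "->", sched.schedule(t))
```

A custom policy in the sense of the abstract would replace the `min` criterion, for example with a learned cost model, which is exactly the machine-learning hook mentioned as future work.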

    Geometric algorithms for cavity detection on protein surfaces

    Macromolecular structures such as proteins power cellular processes and functions. These biological functions result from interactions between proteins and peptides, catalytic substrates, nucleotides, or even human-made chemicals. Several kinds of interactions can thus be distinguished: protein-ligand, protein-protein, protein-DNA, and so on. Furthermore, those interactions only happen under chemical- and shape-complementarity conditions, and they usually take place in regions known as binding sites. Typically, a protein is described at four structural levels. The primary structure of a protein is its sequence (or chains) of amino acids. Its secondary structure essentially comprises α-helices and β-sheets, which are sub-sequences (or sub-domains) of amino acids of the primary structure. Its tertiary structure results from the composition of sub-domains into domains, which represent the geometric shape of the protein. Finally, the quaternary structure of a protein results from the aggregation of two or more tertiary structures, usually known as a protein complex. This thesis fits within the scope of structure-based drug design and protein docking. Specifically, it addresses the fundamental problem of detecting and identifying protein cavities, which are often seen as putative binding sites for ligands in protein-ligand interactions. In general, cavity prediction algorithms fall into three main categories: energy-based, geometry-based, and evolution-based. Evolutionary methods build upon evolutionary sequence-conservation estimates; that is, they detect functional sites by computing the evolutionary conservation of amino acid positions in proteins. Energy-based methods build upon the computation of interaction energies between protein and ligand atoms. In turn, geometry-based algorithms analyze the geometric shape of the protein (i.e., its tertiary structure) to identify cavities. This thesis focuses on geometric methods and introduces three new geometry-based algorithms for protein cavity detection. The main contribution of this thesis lies in the use of computer graphics techniques for the analysis and recognition of cavities in proteins, much in the spirit of molecular graphics and modeling. These techniques include field-of-view (FoV), voxel ray casting, back-face culling, shape diameter functions, Morse theory, and critical points. The leading idea is to segment the protein shape, much as we commonly do in mesh segmentation in computer graphics. In practice, protein cavity algorithms are nothing more than segmentation algorithms designed for proteins.
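For flavor, the sketch below flags buried empty voxels with a LIGSITE-style scan along the grid axes: a voxel is a cavity candidate when rays in most of the six axis directions hit protein. This illustrates the general geometry-based recipe only; it is not one of the thesis's three algorithms, and every name in it is ours.

```python
import numpy as np

def cavity_voxels(atom_centers, atom_radius=1.7, grid_step=1.0, min_hits=4):
    """Flag candidate cavity voxels on a regular grid.

    Voxels inside any atom sphere are protein; an empty voxel is a
    cavity candidate if protein is seen along at least min_hits of the
    six axis directions, i.e. the voxel is mostly enclosed.
    """
    pts = np.asarray(atom_centers, dtype=float)
    lo = pts.min(0) - 3 * atom_radius
    hi = pts.max(0) + 3 * atom_radius
    axes = [np.arange(lo[i], hi[i], grid_step) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    grid = np.stack([X, Y, Z], axis=-1)
    occ = np.zeros(grid.shape[:3], dtype=bool)
    for c in pts:  # mark protein-occupied voxels
        occ |= np.linalg.norm(grid - c, axis=-1) < atom_radius
    hits = np.zeros(occ.shape, dtype=int)
    for axis in range(3):
        # protein anywhere at-or-before / at-or-after along this axis
        before = np.cumsum(occ, axis=axis) > 0
        after = np.flip(np.cumsum(np.flip(occ, axis=axis), axis=axis) > 0,
                        axis=axis)
        hits += before.astype(int) + after.astype(int)
    return (~occ) & (hits >= min_hits)
```

The thesis's FoV, shape-diameter, and Morse-theoretic methods can be read as progressively more principled replacements for this enclosure test, turning cavity detection into the protein-shape segmentation problem described above.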