28 research outputs found

    Geometric Approximations and their Application to Motion Planning

    Geometric approximation methods are a preferred solution for handling complexities (such as a large volume, or complex features such as concavities) in geometric objects or in the environments containing them. These complexities often pose a computational bottleneck for applications such as motion planning. Exact resolution of these complexities might introduce other complexities, such as an unmanageable number of components. Hence, approximation methods provide a way to keep these complexities manageable by trading off some accuracy. In this dissertation, two novel geometric approximation methods are studied: the aggregation hierarchy and the shape primitive skeleton. The aggregation hierarchy is a hierarchical clustering of polygonal or polyhedral objects. The shape primitive skeleton approximates a bounded space with a skeleton of shape primitives. These methods are then applied to improve the performance of motion planning applications. We evaluate the methods in environments with 2D and 3D objects. The aggregation hierarchy groups nearby objects into individual objects. The hierarchy is created by varying the distance threshold that determines which objects are nearby, creating levels of detail of the environment. The hierarchy of the obstacle space is then used to decompose the complementary space (i.e., free space) into a set of sampling regions that improve the efficiency and accuracy of the sampling operation of sampling-based motion planners. Our results show that the method can improve the efficiency of sampling-based motion planning algorithms (by 10-70% of planning time). The shape primitive skeleton inscribes a set of shape primitives (e.g., spheres, boxes) inside a bounded space such that they represent the skeleton, or connectivity, of the space. We apply shape primitive skeletons of the free space and the obstacle space to improve the collision detection operation in motion planning problems. Our results also show that using shape primitive skeletons in both spaces improves the performance of collision detectors used in motion planning algorithms (by 20-70% of collision detection time). In summary, this dissertation evaluates how geometric approximation methods can be applied to improve the performance of motion planning methods, especially sampling-based motion planning methods.
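    As a rough illustration of the aggregation idea (a minimal sketch under our own simplifying assumptions, not the dissertation's implementation), the snippet below groups obstacles whose centroid distance falls below a threshold, then sweeps the threshold to obtain coarser levels of detail; the centroid-distance metric and the union-find grouping are illustrative choices.

```python
# Illustrative sketch: an aggregation hierarchy built by single-linkage
# grouping of obstacles at increasing distance thresholds.
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def aggregate(centroids, threshold):
    """Union-find grouping of objects closer than `threshold`."""
    n = len(centroids)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centroids[i] - centroids[j]) <= threshold:
                parent[find(parent, i)] = find(parent, j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(parent, i), []).append(i)
    return list(groups.values())

def aggregation_hierarchy(centroids, thresholds):
    """One grouping per threshold, from fine to coarse levels of detail."""
    return [aggregate(centroids, t) for t in sorted(thresholds)]

obstacles = np.array([[0.0, 0.0], [0.5, 0.1], [4.0, 4.0], [4.2, 4.1]])
for level in aggregation_hierarchy(obstacles, [0.75, 6.0]):
    print(level)   # fine level: two clusters; coarse level: one cluster
```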

    3D mesh processing using GAMer 2 to enable reaction-diffusion simulations in realistic cellular geometries

    Recent advances in electron microscopy have enabled the imaging of single cells in 3D at nanometer length scale resolutions. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. Enabling such simulations requires watertight meshing of electron micrograph images into 3D volume meshes, which can then form the basis of computer simulations of such processes using numerical techniques such as the Finite Element Method. In this paper, we describe the use of our recently rewritten mesh processing software, GAMer 2, to bridge the gap between poorly conditioned meshes generated from segmented micrographs and boundary-marked tetrahedral meshes compatible with simulation. We demonstrate the application of a workflow using GAMer 2 to a series of electron micrographs of neuronal dendrite morphology explored at three different length scales, and show that the resulting meshes are suitable for finite element simulations. This work is an important step towards making physical simulations of biological processes in realistic geometries routine. Innovations in algorithms to reconstruct and simulate cellular length scale phenomena based on emerging structural data will enable realistic physical models and advance discovery at the interface of geometry and cellular processes. We posit that a new frontier at the intersection of computational technologies and single cell biology is now open. Comment: 39 pages, 14 figures. High resolution figures and supplemental movies available upon request.
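    Watertightness, named above as a prerequisite for volume meshing, has a simple combinatorial characterization that any conditioning pipeline can test: every edge of a closed triangle surface must be shared by exactly two faces. The sketch below is our own illustration of that test, not GAMer 2 code.

```python
# Illustrative watertightness check for a triangle surface mesh:
# a closed surface has every undirected edge incident to exactly two faces.
from collections import Counter

def is_watertight(triangles):
    """triangles: iterable of (i, j, k) vertex-index triples."""
    edge_count = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[tuple(sorted((a, b)))] += 1
    return all(c == 2 for c in edge_count.values())

# A tetrahedron's surface is closed; removing one face opens a boundary.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tet))        # True
print(is_watertight(tet[:-1]))   # False: three edges now appear only once
```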

    Geometric algorithms for cavity detection on protein surfaces

    Macromolecular structures such as proteins heavily empower cellular processes or functions. These biological functions result from interactions between proteins and peptides, catalytic substrates, nucleotides, or even human-made chemicals. Thus, several interactions can be distinguished: protein-ligand, protein-protein, protein-DNA, and so on. Furthermore, those interactions only happen under chemical- and shape-complementarity conditions, and usually take place in regions known as binding sites. Typically, a protein consists of four structural levels. The primary structure of a protein is made up of its amino acid sequences (or chains). Its secondary structure essentially comprises α-helices and β-sheets, which are sub-sequences (or sub-domains) of amino acids of the primary structure. Its tertiary structure results from the composition of sub-domains into domains, which represent the geometric shape of the protein. Finally, the quaternary structure of a protein results from the aggregation of two or more tertiary structures, usually known as a protein complex. This thesis fits in the scope of structure-based drug design and protein docking. Specifically, it addresses the fundamental problem of detecting and identifying protein cavities, which are often seen as putative binding sites for ligands in protein-ligand interactions. In general, cavity prediction algorithms fall into three main categories: energy-based, geometry-based, and evolution-based. Evolutionary methods build upon evolutionary sequence conservation estimates; that is, these methods detect functional sites by computing the evolutionary conservation of the positions of amino acids in proteins. Energy-based methods build upon the computation of interaction energies between protein and ligand atoms. In turn, geometry-based algorithms build upon the analysis of the geometric shape of the protein (i.e., its tertiary structure) to identify cavities. This thesis focuses on geometric methods. We introduce three new geometry-based algorithms for protein cavity detection. The main contribution of this thesis lies in the use of computer graphics techniques in the analysis and recognition of cavities in proteins, much in the spirit of molecular graphics and modeling. As seen further ahead, these techniques include field-of-view (FoV), voxel ray casting, back-face culling, shape diameter functions, Morse theory, and critical points. The leading idea is to come up with a protein shape segmentation, much like we commonly do in mesh segmentation in computer graphics. In practice, protein cavity algorithms are nothing more than segmentation algorithms designed for proteins.
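    To make the geometry-based family concrete, here is a minimal grid-based sketch in the spirit of classic protein-solvent-protein scanning; it is our own illustration, not one of the three algorithms proposed in the thesis. A solvent voxel is treated as part of a cavity when rays along the coordinate axes hit protein voxels on both sides in several directions.

```python
# Illustrative grid-based cavity detection: a solvent voxel is "buried"
# along an axis when protein occupancy appears on both sides of it.
import numpy as np

def burial_count(occ, x, y, z):
    """Count the axes along which voxel (x, y, z) is enclosed by protein."""
    hits = 0
    for line, p in ((occ[:, y, z], x), (occ[x, :, z], y), (occ[x, y, :], z)):
        if line[:p].any() and line[p + 1:].any():
            hits += 1
    return hits

def detect_cavity_voxels(occ, min_hits=3):
    """Return solvent voxels enclosed along at least `min_hits` axes."""
    cavities = []
    for x, y, z in zip(*np.where(~occ)):
        if burial_count(occ, x, y, z) >= min_hits:
            cavities.append((int(x), int(y), int(z)))
    return cavities

occ = np.ones((5, 5, 5), dtype=bool)   # hypothetical protein occupancy grid
occ[2, 2, 2] = False                   # carve a one-voxel internal cavity
print(detect_cavity_voxels(occ))       # -> [(2, 2, 2)]
```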

    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    By the definition of Wikipedia, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”. Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, compliant with the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers, such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is a synonym of semantics-enabled knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is a synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS, ⊃ ESA EO Level 2 product ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for the yet-unfulfilled GEOSS development is the systematic generation at the ground segment of ESA EO Level 2 products. Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling an analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
    EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolutions from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof-of-concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations.
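    For readers unfamiliar with the product definition given above, the following sketch models the layers an ESA EO Level 2 product stacks together; the class and field names are our own illustrative assumptions, not an ESA specification.

```python
# Hedged sketch of the data bundled in an ESA EO Level 2 product as defined
# in the abstract: surface reflectance stacked with a scene classification
# map and quality layers. Names and dtypes are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class EOLevel2Product:
    surface_reflectance: np.ndarray   # bands x rows x cols, SURF values
    scene_classification: np.ndarray  # rows x cols, general-purpose legend
    cloud_mask: np.ndarray            # rows x cols, boolean quality layer
    cloud_shadow_mask: np.ndarray     # rows x cols, boolean quality layer

rows, cols, bands = 64, 64, 4
product = EOLevel2Product(
    surface_reflectance=np.zeros((bands, rows, cols), dtype=np.float32),
    scene_classification=np.zeros((rows, cols), dtype=np.uint8),
    cloud_mask=np.zeros((rows, cols), dtype=bool),
    cloud_shadow_mask=np.zeros((rows, cols), dtype=bool),
)
print(product.surface_reflectance.shape)   # (4, 64, 64)
```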

    Multi-Material Mesh Representation of Anatomical Structures for Deep Brain Stimulation Planning

    The Dual Contouring algorithm (DC) is a grid-based process used to generate surface meshes from volumetric data. However, DC cannot guarantee 2-manifold and watertight meshes because it produces only one vertex for each grid cube. We present a modified Dual Contouring algorithm that is capable of overcoming this limitation. The proposed method decomposes an ambiguous grid cube into a set of tetrahedral cells and uses novel polygon generation rules that produce 2-manifold and watertight surface meshes with good-quality triangles. These meshes, being watertight and 2-manifold, are geometrically correct and can therefore be used to initialize tetrahedral meshes. The 2-manifold DC method has been extended into the multi-material domain. By its multi-material nature, a multi-material surface mesh will contain non-manifold elements along material interfaces, or shared boundaries. The proposed multi-material DC algorithm can (1) generate multi-material surface meshes where each material sub-mesh is a 2-manifold and watertight mesh, (2) preserve the non-manifold elements along the material interfaces, and (3) ensure that the material interface, or shared boundary, between materials is consistent. The proposed method is used to generate multi-material surface meshes of deep brain anatomical structures from a digital atlas of the basal ganglia and thalamus. Although deep brain anatomical structures can be labeled as functionally separate, they are in fact continuous tracts of soft tissue in close proximity to each other. The multi-material meshes generated by the proposed DC algorithm can accurately represent the closely packed deep brain structures as a single mesh consisting of multiple material sub-meshes, each representing a distinct functional structure of the brain. Printed and/or digital atlases are important tools for medical research and surgical intervention. While these atlases can provide guidance in identifying anatomical structures, they do not account for the wide variation in the shape and size of anatomical structures from patient to patient. Accurate, patient-specific representations are especially important for surgical interventions like deep brain stimulation, where even small inaccuracies can result in dangerous complications. The last part of this research effort extends the discrete deformable 2-simplex mesh into the multi-material domain, where geometry-based internal forces and image-based external forces are used in the deformation process. This multi-material deformable framework is used to segment anatomical structures of the deep brain region from Magnetic Resonance (MR) data.
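    For background on the limitation the proposed method overcomes: classic Dual Contouring places exactly one vertex per grid cube by minimizing a quadratic error function over the cube's Hermite data (edge intersection points and normals). The sketch below shows that standard step, not the proposed multi-material variant; because each cube contributes a single vertex, cubes crossed by multiple surface sheets yield the non-manifold output discussed above.

```python
# Classic DC vertex placement (illustrative): minimize the QEF
# sum_i (n_i . (x - p_i))^2 over the cube's Hermite samples.
import numpy as np

def dc_cell_vertex(points, normals):
    """Least-squares minimizer of the QEF for one grid cube."""
    N = np.asarray(normals, dtype=float)               # one row per normal n_i
    b = np.einsum('ij,ij->i', N, np.asarray(points, dtype=float))
    x, *_ = np.linalg.lstsq(N, b, rcond=None)          # solve N x ~= b
    return x

# Hermite data for a cube corner clipped by three tangent planes:
pts = [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.5)]
nrm = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(dc_cell_vertex(pts, nrm))                        # -> [0.5 0.5 0.5]
```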

    Proceedings, MSVSCC 2017

    Proceedings of the 11th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 20, 2017 at VMASC in Suffolk, Virginia. 211 pp.

    VolRoverN: Enhancing Surface and Volumetric Reconstruction for Realistic Dynamical Simulation of Cellular and Subcellular Function

    Establishing meaningful relationships between cellular structure and function requires accurate morphological reconstructions. In particular, there is an unmet need for high-quality surface reconstructions to model subcellular and synaptic interactions among neurons and glia at nanometer resolution. We address this need with VolRoverN, a software package that produces accurate, efficient, and automated 3D surface reconstructions from stacked 2D contour tracings. While many techniques and tools have been developed in the past for 3D visualization of cellular structure, the reconstructions from VolRoverN meet specific quality criteria that are important for dynamical simulations. These criteria include manifoldness, watertightness, lack of self-intersections and object-object intersections, and geometric accuracy. These enhanced surface reconstructions are readily extensible to any cell type and are used here on spiny dendrites with complex morphology and on axons from mature rat hippocampal area CA1. Both spatially realistic surface reconstructions and reduced skeletonizations are produced and formatted by VolRoverN for easy input into analysis software packages for neurophysiological simulations at multiple spatial and temporal scales, ranging from ion electro-diffusion to electrical cable models.
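    Meshes intended for dynamical simulation must also avoid poorly shaped elements. One standard per-triangle quality measure is the normalized inradius-to-circumradius ratio, computed below as our own illustration (not VolRoverN code).

```python
# Illustrative triangle quality metric: 2 * inradius / circumradius,
# which is 1.0 for an equilateral triangle and tends to 0 for slivers.
import numpy as np

def triangle_quality(p0, p1, p2):
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p2 - p0)
    c = np.linalg.norm(p0 - p1)
    s = 0.5 * (a + b + c)                                     # semi-perimeter
    area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5   # Heron's formula
    if area == 0.0:
        return 0.0                                            # degenerate triangle
    inradius = area / s
    circumradius = a * b * c / (4.0 * area)
    return 2.0 * inradius / circumradius

eq = [np.array(v, float) for v in [(0, 0), (1, 0), (0.5, 3**0.5 / 2)]]
print(round(triangle_quality(*eq), 3))   # -> 1.0 for an equilateral triangle
```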

    Multi-Surface Simplex Spine Segmentation for Spine Surgery Simulation and Planning

    This research proposes to develop a knowledge-based multi-surface simplex deformable model for segmentation of healthy as well as pathological lumbar spine data. It aims to provide a more accurate and robust segmentation scheme for identification of intervertebral disc pathologies to assist with spine surgery planning. A robust technique that combines multi-surface and shape statistics-aware variants of the deformable simplex model is presented. Statistical shape variation within the dataset has been captured by application of principal component analysis and incorporated during the segmentation process to refine results. In cases where the shape statistics hinder detection of a pathological region, user assistance is allowed to disable the prior shape influence during deformation. Results have been validated against user-assisted expert segmentation.
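    Statistical shape refinement of the kind described above is typically realized by projecting the deforming shape onto a PCA shape space and bounding each mode weight. The sketch below shows that standard construction under our own simplifying assumption of dense point correspondence across training shapes; it is not the thesis implementation.

```python
# Illustrative PCA shape prior: learn modes of variation from training
# shapes, then clamp mode weights to keep a deforming shape plausible.
import numpy as np

def build_shape_model(training_shapes, n_modes):
    """training_shapes: (n_samples, 3*n_vertices) stacked vertex coords."""
    X = np.asarray(training_shapes, dtype=float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                               # principal directions
    sigma = S[:n_modes] / np.sqrt(max(len(X) - 1, 1))  # per-mode std dev
    return mean, modes, sigma

def constrain_to_model(shape, mean, modes, sigma, k=3.0):
    """Clamp mode weights to +-k*sigma and reconstruct the shape."""
    w = modes @ (np.asarray(shape, dtype=float) - mean)
    w = np.clip(w, -k * sigma, k * sigma)
    return mean + modes.T @ w

rng = np.random.default_rng(0)
train = rng.normal(size=(10, 12))        # 10 shapes, 4 vertices in 3D
mean, modes, sigma = build_shape_model(train, n_modes=3)
refined = constrain_to_model(train[0] * 5.0, mean, modes, sigma)
print(np.round(refined[:4], 2))          # pulled back toward the shape space
```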

    Large-scale Geometric Data Decomposition, Processing and Structured Mesh Generation

    Mesh generation is a fundamental and critical problem in geometric data modeling and processing. In most scientific and engineering tasks that involve numerical computations and simulations on 2D/3D regions or on curved geometric objects, discretizing or approximating the geometric data using a polygonal or polyhedral mesh is always the first step of the procedure. The quality of this tessellation often dictates the accuracy, efficiency, and numerical stability of the subsequent computation. Compared with unstructured meshes, structured meshes are favored in many scientific/engineering tasks due to their good properties. However, generating high-quality structured meshes remains challenging, especially for complex or large-scale geometric data. In industrial Computer-aided Design/Engineering (CAD/CAE) pipelines, the geometry processing needed to create a desirable structured mesh of a complex model is the most costly step; it is semi-manual and often takes up to several weeks to finish. Several technical challenges remain unsolved in existing structured mesh generation techniques. This dissertation studies the effective generation of structured meshes on large and complex geometric data. We study a general geometric computation paradigm that solves this problem via model partitioning and divide-and-conquer. To apply effective divide-and-conquer, we study two key technical components: shape decomposition in the divide stage, and structured meshing in the conquer stage. We test our algorithm on various data sets, and the results demonstrate the efficiency and effectiveness of our framework. The comparisons also show that our algorithm outperforms existing partitioning methods in final meshing quality. We also show that our pipeline scales up efficiently in HPC environments.
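    As one concrete instance of the conquer stage: once decomposition reduces a sub-domain to a four-sided patch, a structured grid can be generated by transfinite (Coons) interpolation of its boundary curves. The sketch below illustrates that standard technique under our own assumptions; it is not the dissertation's pipeline.

```python
# Illustrative structured meshing of a four-sided patch via a bilinearly
# blended Coons patch: blend the two ruled surfaces and subtract the
# bilinear interpolation of the four corners.
import numpy as np

def coons_patch(bottom, top, left, right):
    """Boundary curves as (n,2) / (m,2) point arrays; corners must match."""
    n, m = len(bottom), len(left)
    u = np.linspace(0.0, 1.0, n)[None, :, None]   # shape 1 x n x 1
    v = np.linspace(0.0, 1.0, m)[:, None, None]   # shape m x 1 x 1
    ruled_u = (1 - v) * bottom[None, :, :] + v * top[None, :, :]
    ruled_v = (1 - u) * left[:, None, :] + u * right[:, None, :]
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return ruled_u + ruled_v - corners            # m x n x 2 grid of points

t = np.linspace(0.0, 1.0, 9)
s = np.linspace(0.0, 1.0, 5)
bottom = np.stack([t, np.zeros_like(t)], axis=1)
top = np.stack([t, 1.0 + 0.2 * np.sin(np.pi * t)], axis=1)  # curved top edge
left = np.stack([np.zeros_like(s), s], axis=1)
right = np.stack([np.ones_like(s), s], axis=1)
grid = coons_patch(bottom, top, left, right)
print(grid.shape)   # (5, 9, 2): a 5 x 9 structured grid over the patch
```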

    Doctor of Philosophy

    One of the fundamental building blocks of many computational sciences is the construction and use of a discretized, geometric representation of a problem domain, often referred to as a mesh. Such a discretization enables an otherwise complex domain to be represented simply, and computation to be performed over that domain with a finite number of basis elements. As mesh generation techniques have become more sophisticated over the years, focus has largely shifted to quality mesh generation techniques that guarantee or empirically generate numerically well-behaved elements. In this dissertation, the two complementary meshing subproblems of vertex placement and element creation are analyzed, both separately and together. First, a dynamic particle system achieves adaptivity over domains by inferring feature size through a new information passing algorithm. Second, a new tetrahedral meshing algorithm is constructed that carefully combines lattice-based stenciling and mesh warping to produce guaranteed-quality meshes on multimaterial volumetric domains. Finally, the ideas of lattice cleaving and dynamic particle systems are merged into a unified framework for producing guaranteed-quality, unstructured and adaptive meshes of multimaterial volumetric domains.
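    The vertex-placement idea can be illustrated with a toy particle system in which each particle repels neighbors within a radius given by a local sizing field, so equilibrium spacing follows feature size. The sizing field and force law below are our own illustrative assumptions, not the dissertation's algorithm.

```python
# Toy adaptive particle system: repulsion range set by a sizing field h(x),
# so relaxed particles pack densely where h is small and sparsely elsewhere.
import numpy as np

def sizing(p):
    """Hypothetical sizing field: finer target spacing near the origin."""
    return 0.05 + 0.15 * np.linalg.norm(p)

def relax(points, steps=50, dt=0.1):
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(steps):
        force = np.zeros_like(pts)
        for i in range(len(pts)):
            h = sizing(pts[i])
            d = pts[i] - pts                     # vectors from neighbors to i
            r = np.linalg.norm(d, axis=1)
            near = (r > 1e-12) & (r < h)         # neighbors inside radius h
            w = (h - r[near]) / r[near]          # stronger push when closer
            force[i] = (d[near] * w[:, None]).sum(axis=0)
        pts += dt * force
        pts = np.clip(pts, 0.0, 1.0)             # keep particles in the domain
    return pts

rng = np.random.default_rng(1)
print(relax(rng.random((40, 2)))[:3])   # spacing adapts to the sizing field
```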