144 research outputs found

    Large-scale Geometric Data Decomposition, Processing and Structured Mesh Generation

    Mesh generation is a fundamental and critical problem in geometric data modeling and processing. In most scientific and engineering tasks that involve numerical computation and simulation on 2D/3D regions or on curved geometric objects, discretizing or approximating the geometric data with a polygonal or polyhedral mesh is the first step of the procedure. The quality of this tessellation often dictates the accuracy, efficiency, and numerical stability of the subsequent computation. Compared with unstructured meshes, structured meshes are favored in many scientific and engineering tasks for their regular connectivity and favorable numerical properties. However, generating high-quality structured meshes remains challenging, especially for complex or large-scale geometric data. In industrial Computer-aided Design/Engineering (CAD/CAE) pipelines, the geometry processing needed to create a desirable structured mesh of a complex model is the most costly step: it is semi-manual and often takes up to several weeks to finish. Several technical challenges remain unsolved in existing structured mesh generation techniques. This dissertation studies the effective generation of structured meshes on large and complex geometric data. We study a general geometric computation paradigm that solves this problem via model partitioning and divide-and-conquer. To apply divide-and-conquer effectively, we study two key technical components: shape decomposition in the divide stage, and structured meshing in the conquer stage. We test our algorithm on various data sets; the results demonstrate the efficiency and effectiveness of our framework. Comparisons also show that our algorithm outperforms existing partitioning methods in final meshing quality. We also show that our pipeline scales up efficiently in HPC environments
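As a toy illustration of the divide-and-conquer paradigm this abstract describes, the sketch below partitions a 1D interval, meshes each part with a uniform structured grid, and stitches the sub-meshes back together. All function names are hypothetical; the dissertation's decomposition and structured meshing operate on large 2D/3D geometric data.

```python
def decompose(domain, n_parts):
    """Divide stage: split the interval into simpler sub-domains."""
    a, b = domain
    step = (b - a) / n_parts
    return [(a + i * step, a + (i + 1) * step) for i in range(n_parts)]

def mesh_part(part, h):
    """Conquer stage: structured (uniform) meshing of one sub-domain."""
    a, b = part
    n = max(1, round((b - a) / h))
    return [a + i * (b - a) / n for i in range(n + 1)]

def merge(meshes):
    """Stitch sub-meshes, dropping duplicated interface nodes."""
    nodes = meshes[0][:]
    for m in meshes[1:]:
        nodes.extend(m[1:])  # first node of m coincides with last kept node
    return nodes

mesh = merge([mesh_part(p, 0.25) for p in decompose((0.0, 2.0), 2)])
```

Because each sub-domain is meshed independently, the conquer stage parallelizes naturally, which is the property that lets such a pipeline scale in HPC environments.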

    Study of Image Local Scale Structure Using Nonlinear Diffusion

    Multi-scale representation and local scale extraction of images are important in computer vision research since, in general, the structures within images are unknown. Traditionally, multi-scale analysis is based on linear diffusion (i.e. heat diffusion), which has known limitations such as edge distortion. In addition, the term scale, used widely in multi-scale and local scale analysis, does not have a consistent definition, which can pose difficulties in real image analysis, especially for the proper interpretation of scale as a geometric measure. In this study, in order to overcome the limitations of linear diffusion, we focus on multi-scale analysis based on the total variation minimization model. This model has been used in image denoising for its ability to preserve edge structures. Based on the total variation model, we construct the multi-scale space and propose a definition of image local scale. The new definition of local scale incorporates both pixel-wise and orientation information; it can be interpreted with a clear geometric meaning and applied in general image analysis. The potential applications of the total variation model in retinal fundus image analysis are explored. The coexistence of blood vessel and drusen structures within a single fundus image makes the analysis challenging. A multi-scale model based on total variation is used, demonstrating its capability in both drusen and blood vessel detection. The performance of vessel detection is compared with publicly available methods, showing improvements both quantitatively and qualitatively. This study provides better insight into local scale analysis and shows the potential of the total variation model in medical image analysis
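A minimal sketch of the edge-preserving total variation (ROF-type) minimization the abstract builds on, in 1D for brevity. The smoothing parameter eps, step size tau, and iteration count are illustrative choices, not values from the study.

```python
import numpy as np

def tv_denoise_1d(f, lam=2.0, eps=1e-2, tau=0.02, n_iter=500):
    """Gradient descent on a smoothed ROF energy:
       E(u) = sum sqrt((Du)^2 + eps) + (lam/2) * ||u - f||^2."""
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)                      # forward differences Du
        g = du / np.sqrt(du ** 2 + eps)      # smoothed sign of Du
        div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))  # -D^T g
        u = u + tau * (div - lam * (u - f))  # descend the energy gradient
    return u

# A noisy step: TV minimization flattens the oscillations
# while keeping the jump, which linear diffusion would blur.
noisy_step = np.array([0.1, -0.1, 0.1, -0.1, 1.1, 0.9, 1.1, 0.9])
smoothed = tv_denoise_1d(noisy_step)
```

Larger lam keeps the result closer to the data; decreasing lam sweeps out exactly the kind of multi-scale space the abstract describes, with small-scale structures removed first.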

    Accurate and discernible photocollages

    There currently exist several techniques for selecting and combining images from a digital image library into a single image so that the result meets certain prespecified visual criteria. Image mosaic methods, first explored by Connors and Trivedi [18], arrange library images according to some tiling arrangement, often a regular grid, so that the combination of images, when viewed as a whole, resembles some input target image. Other techniques, such as AutoCollage of Rother et al. [78], seek only to combine images in an interesting and visually pleasing manner, according to certain composition principles, without attempting to approximate any target image. Each of these techniques provides a myriad of creative options for artists who wish to combine several levels of meaning into a single image, or who wish to exploit the meaning and symbolism contained in each of a large set of images through an efficient and easy process. We first examine the most notable and successful of these methods and summarize the advantages and limitations of each. We then formulate a set of goals for an image collage system that combines the advantages of these methods while addressing and mitigating the drawbacks. In particular, we propose a system for creating photocollages that approximate a target image as an aggregation of smaller images, chosen from a large library, so that interesting visual correspondences between images are exploited. In this way, we allow users to create collages in which multiple layers of meaning are encoded, with meaningful visual links between each layer. In service of this goal, we ensure that the images used are as large as possible and are combined in such a way that boundaries between images are not immediately apparent, as in AutoCollage. This has required us to apply a multiscale approach to searching and comparing images from a large database, which achieves both speed and accuracy. We also propose a new framework for color post-processing, and novel techniques for decomposing images according to object and texture information
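The multiscale search idea can be sketched as a coarse-to-fine comparison: prune the library using heavily downsampled images, then compare only the survivors at full resolution. This is a generic sketch over assumed grayscale arrays, not the authors' actual system.

```python
import numpy as np

def downsample(img, factor):
    """Block-average an image by an integer factor (coarse scale)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_match(target, library, factor=4, keep=2):
    # Coarse pass: cheap comparison on downsampled images.
    coarse_t = downsample(target, factor)
    coarse_d = [np.sum((downsample(im, factor) - coarse_t) ** 2)
                for im in library]
    candidates = np.argsort(coarse_d)[:keep]
    # Fine pass: full-resolution comparison on the survivors only.
    fine_d = [np.sum((library[i] - target) ** 2) for i in candidates]
    return candidates[int(np.argmin(fine_d))]

lib = [np.full((8, 8), v) for v in (0.0, 0.5, 1.0)]
idx = best_match(np.full((8, 8), 0.5), lib)
```

The coarse pass costs a small fraction of a full comparison per library image, so most of the database is rejected cheaply, which is how such a scheme achieves both speed and accuracy.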

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses on multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This first motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations, including data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain, still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, the topology of brain vessels is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming that vessels join by minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and for propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches.
    Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms in the whole network in the presence of perturbations, using lumped-parameter analogue equivalents derived from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations in an isogeometric analysis framework, where both the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails
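A toy illustration of the final extraction step, reducing an over-connected geodesic graph to a spanning tree. The nodes and edge weights here are made-up stand-ins for vessel keypoints and geodesic path lengths; the thesis computes these from 3D angiographies.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Over-connected toy graph: redundant candidate connections between
# four "vessel keypoints", weighted by hypothetical geodesic lengths.
edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 4.0, (2, 3): 1.5, (1, 3): 5.0}
w = np.zeros((4, 4))
for (i, j), d in edges.items():
    w[i, j] = d

# Keep the cheapest set of geodesics that still spans every node.
mst = minimum_spanning_tree(csr_matrix(w))
kept = {tuple(sorted(e)) for e in zip(*mst.nonzero())}
```

The spurious long connections (0-2 and 1-3) are discarded, leaving a tree consistent with the assumption that vessels join along minimal paths.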

    Advanced machine learning algorithms for Canadian wetland mapping using polarimetric synthetic aperture radar (PolSAR) and optical imagery

    Wetlands are complex land cover ecosystems that represent a wide range of biophysical conditions. They are one of the most productive ecosystems and provide several important environmental functions. As such, wetland mapping and monitoring using cost- and time-efficient approaches are of great interest for sustainable management and resource assessment. In this regard, satellite remote sensing data are greatly beneficial, as they capture a synoptic and multi-temporal view of landscapes. The ability to extract useful information from satellite imagery greatly affects the accuracy and reliability of the final products. This is of particular concern for mapping complex land cover ecosystems, such as wetlands, where a heterogeneous and fragmented landscape results in similar backscatter/spectral signatures for different land cover classes in satellite images. Accordingly, the overarching purpose of this thesis is to contribute to existing methodologies of wetland classification by proposing and developing several new techniques based on advanced remote sensing tools and optical and Synthetic Aperture Radar (SAR) imagery. Specifically, the importance of employing an efficient speckle reduction method for polarimetric SAR (PolSAR) image processing is discussed and a new speckle reduction technique is proposed. Two novel techniques are also introduced for improving the accuracy of wetland classification. In particular, a new hierarchical classification algorithm using multi-frequency SAR data is proposed that discriminates wetland classes in three steps depending on their complexity and similarity. The experimental results reveal that the proposed method is advantageous for mapping complex land cover ecosystems compared to single-stream classification approaches, which have been extensively used in the literature.
    Furthermore, a new feature weighting approach is proposed based on the statistical and physical characteristics of PolSAR data to improve the discrimination capability of input features prior to incorporating them into the classification scheme. This study also demonstrates the transferability of existing classification algorithms, developed for RADARSAT-2 imagery, to the compact polarimetry SAR data that will be collected by the upcoming RADARSAT Constellation Mission (RCM). The capability of several well-known deep Convolutional Neural Network (CNN) architectures currently employed in computer vision is introduced in this thesis, for the first time, for the classification of wetland complexes using multispectral remote sensing data. Finally, this research produces the first provincial-scale wetland inventory maps of Newfoundland and Labrador, using Google Earth Engine (GEE) cloud computing resources and open-access Earth Observation (EO) data collected by the Copernicus Sentinel missions. Overall, the methodologies proposed in this thesis address fundamental limitations and challenges of wetland mapping using remote sensing data that have been overlooked in the literature. These challenges include the similar backscatter/spectral signatures of wetland classes, insufficient classification accuracy for wetland classes, and the difficulty of wetland mapping at large scales. In addition to their capabilities for mapping wetland complexes, the use of the developed techniques for classifying other complex land cover types beyond wetlands, such as sea ice and crop ecosystems, offers a potential avenue for further research
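The three-step hierarchical idea (split off the easy classes first, and discriminate the similar wetland classes last) can be sketched schematically. The features and thresholds below are invented purely for illustration; the thesis derives its decision stages from multi-frequency PolSAR statistics, not hand-set rules.

```python
def classify(pixel):
    """Schematic three-step hierarchical classifier on a dict of
    hypothetical per-pixel features (backscatter in dB, 0-1 indices)."""
    # Step 1: the most separable split first (water vs. non-water).
    if pixel["backscatter"] < -18.0:
        return "water"
    # Step 2: wetland vs. upland.
    if pixel["moisture"] < 0.3:
        return "upland"
    # Step 3: the spectrally similar wetland classes are
    # discriminated last, where confusion is highest.
    return "bog" if pixel["vegetation"] < 0.5 else "fen"
```

Ordering the splits from easy to hard means each stage only has to separate classes that survived the previous one, which is the advantage over a single-stream classifier that must separate all classes at once.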

    Frame Fields for Hexahedral Mesh Generation

    As a discretized representation of a volumetric domain, hexahedral meshes have been a popular choice in computational engineering science and serve as one of the main mesh types in leading industrial software. The generation of high-quality hexahedral meshes is extremely challenging because it is essentially an optimization problem involving multiple (conflicting) objectives, such as fidelity, element quality, and structural regularity. Various hexahedral meshing methods have been proposed in past decades, attempting to solve the problem from different perspectives. Unfortunately, algorithmic hexahedral meshing with guarantees of robustness and quality remains unsolved. Frame field based hexahedral meshing is the most promising approach capable of automatically generating high-quality hexahedral meshes, but it suffers from several robustness issues. Field based hexahedral meshing follows the idea of integer-grid maps, which pull back the Cartesian hexahedral grid formed by integer isoplanes from a parametric domain to a surface-conforming hexahedral mesh of the input object. Since directly optimizing for a high-quality integer-grid map is mathematically challenging, the construction is usually split into two steps: (1) generation of a feature-aligned frame field, and (2) generation of an integer-grid map that best aligns with the frame field. The main robustness issue stems from the fact that smooth frame fields frequently exhibit singularity graphs that are inappropriate for hexahedral meshing and induce heavily degenerate integer-grid maps. This thesis aims at analyzing the gap between the topologies of frame fields and hexahedral meshes, and at developing algorithms that realize more robust field based hexahedral mesh generation.
    The first contribution of this work is an enumeration of all local configurations that exist in hexahedral meshes with bounded edge valence, and a generalization of the Hopf-Poincaré formula to octahedral (orthonormal frame) fields, leading to necessary local and global conditions for the hex-meshability of an octahedral field in terms of its singularity graph. The second contribution is a novel algorithm to generate octahedral fields with prescribed hex-meshable singularity graphs, which requires the solution of a large non-linear mixed-integer algebraic system. This algorithm is an important step toward robust automatic hexahedral meshing, since it enables the generation of a hex-meshable octahedral field. In collaborative work with colleagues [BRK+22], the HexMe dataset, consisting of practically relevant models with feature tags, was set up, allowing a fair evaluation of practical hexahedral mesh generation algorithms. The extendable and mutable dataset remains valuable as hexahedral meshing algorithms develop. The results of standard field based hexahedral meshing algorithms on the HexMe dataset expose the fragility of the automatic pipeline. The major contribution of this thesis improves the robustness of automatic field based hexahedral meshing by guaranteeing local meshability of general feature-aligned smooth frame fields. We derive conditions on the meshability of frame fields when feature constraints are considered, and describe an algorithm that automatically turns a given non-meshable frame field into a similar but locally meshable one. Although local meshability is only a necessary, not sufficient, condition for the stronger requirement of meshability, our algorithm increases the success rate of generating valid integer-grid maps with state-of-the-art methods from 2% to 57% on the challenging HexMe dataset

    Proceedings, MSVSCC 2014

    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 17, 2014 at VMASC in Suffolk, Virginia

    Deep Model for Improved Operator Function State Assessment

    A deep learning framework is presented for engagement assessment using EEG signals. Deep learning is a recently developed machine learning technique that has been applied in many domains. In this paper, we propose a deep learning strategy for operator function state (OFS) assessment. Fifteen pilots participated in a flight simulation from Seattle to Chicago, and EEG signals were recorded for each pilot during the four-hour simulation. We labeled 20 minutes of data as engaged or disengaged to fine-tune the deep network, and utilized the remaining, much larger amount of unlabeled data to initialize the network. The trained deep network was then used to assess whether a pilot was engaged during the four-hour simulation
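The initialize-with-unlabeled-data, fine-tune-with-labels recipe can be sketched with stand-ins: synthetic features, PCA in place of unsupervised layer-wise pretraining, and logistic regression in place of the fine-tuned deep network. Everything here is assumed for illustration; the paper's actual model is a deep network trained on EEG recordings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(1000, 32))   # plentiful unlabeled feature windows
labeled = rng.normal(size=(40, 32))       # the small labeled subset
labels = (labeled[:, 0] > 0).astype(int)  # toy engaged/disengaged tags

# "Initialize" a representation from the large unlabeled pool...
rep = PCA(n_components=8).fit(unlabeled)
# ...then "fine-tune" a simple classifier on the labeled subset only.
clf = LogisticRegression().fit(rep.transform(labeled), labels)
preds = clf.predict(rep.transform(labeled))
```

The design point carried over from the paper is the split itself: the expensive representation is learned without labels, so only a small labeled set is needed for the final supervised step.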