
    Regular Hierarchical Surface Models: A conceptual model of scale variation in a GIS and its application to hydrological geomorphometry

    Environmental and geographical process models inevitably involve parameters that vary spatially. One example is hydrological modelling, where parameters derived from the shape of the ground, such as flow direction and flow accumulation, are used to describe the spatial complexity of drainage networks. One way of handling such parameters is by using a Digital Elevation Model (DEM); such modelling is the basis of the science of geomorphometry. A frequently ignored but inescapable challenge when modellers work with DEMs is the effect of scale and geometry on the model outputs. Many parameters vary with scale as much as they vary with position. Modelling variability with scale is necessary to simplify and generalise surfaces, and desirable for accurately reconciling model components that are measured at different scales. This thesis develops a surface model that is optimised to represent scale in environmental models. A Regular Hierarchical Surface Model (RHSM) is developed that employs a regular tessellation of space and scale forming a self-similar regular hierarchy, and incorporates Level Of Detail (LOD) ideas from computer graphics. Following convention from systems science, the proposed model is described in its conceptual, mathematical, and computational forms. The RHSM development was informed by a categorisation of Geographical Information Science (GISc) surfaces within a cohesive framework of geometry, structure, interpolation, and data model. Positioning the RHSM within this broader framework made it easier to adapt algorithms designed for other surface models to conform to the new model. The RHSM has an implicit data model that utilises a variation of the intrinsically hierarchical Hexagonal Image Processing referencing system of Middleton and Sivaswamy (2001), generalised here for rectangular and triangular geometries. The RHSM provides a simple framework for forming a pyramid of coarser values in a process characterised as a scaling function. In addition, variable-density realisations of the hierarchical representation can be generated by defining an error value and a decision rule to select the coarsest appropriate scale for a given region to satisfy the modeller's intentions. The RHSM is assessed using adaptations of the geomorphometric algorithms flow direction and flow accumulation. The effects of scale and geometry on the anisotropy and accuracy of model results are analysed on dispersive and concentrative cones, and on Light Detection And Ranging (LiDAR) derived surfaces of the urban area of Dunedin, New Zealand. The RHSM modelling process revealed aspects of the algorithms not obvious within a single geometry, such as the influence of node geometry on flow direction results, and a conceptual weakness of flow accumulation algorithms on dispersive surfaces that causes asymmetrical results. In addition, comparison of algorithm behaviour between geometries undermined the hypothesis that variance of cell cross-section with direction is important for the conversion of cell accumulations to point values. The ability to analyse algorithms for scale and geometry, and to adapt algorithms within a cohesive conceptual framework, offers deeper insight into algorithm behaviour than previously achieved. The deconstruction of algorithms into geometry-neutral forms and the application of scaling functions are important contributions to the understanding of spatial parameters within GISc.
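
    To make the scaling-function idea concrete, the following is a minimal sketch (not code from the thesis), assuming a square geometry with 2x2 mean aggregation as the scaling function; the RHSM itself generalises this to hexagonal and triangular tessellations. It builds a pyramid of coarser values and applies an error-threshold decision rule to pick the coarsest usable scale at a cell.

```python
import numpy as np

def build_pyramid(dem: np.ndarray, levels: int) -> list[np.ndarray]:
    """Build a pyramid of coarser DEMs by 2x2 mean aggregation
    (one possible scaling function; a stand-in for the RHSM's)."""
    pyramid = [dem]
    for _ in range(levels):
        d = pyramid[-1]
        h, w = d.shape[0] // 2 * 2, d.shape[1] // 2 * 2
        coarse = d[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

def coarsest_valid_level(pyramid, i, j, max_error):
    """Decision rule: pick the coarsest level whose value still
    approximates the finest-level cell (i, j) within max_error."""
    fine = pyramid[0][i, j]
    level = 0
    for k, layer in enumerate(pyramid):
        if abs(layer[i >> k, j >> k] - fine) <= max_error:
            level = k
        else:
            break
    return level

dem = np.random.default_rng(0).random((64, 64)) * 100.0  # synthetic elevations
pyr = build_pyramid(dem, levels=4)
print([p.shape for p in pyr], coarsest_valid_level(pyr, 10, 20, max_error=5.0))
```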

    Nondifferentiable energy minimization for cohesive fracture in a discontinuous Galerkin finite element framework

    Until recently, most works on the computational modelling of fracture relied on a Newtonian mechanics approach, i.e., momentum balance equations describing the motion of the body along with fracture criteria describing the evolution of fractures. Robustness issues associated with this approach have been identified in the previous literature, several of which, as this thesis shows, are due to the discontinuous dependence of the stress field on the deformation field at the time of insertion of displacement discontinuities. This lack of continuity limits the applicability of the models and undermines the reliability of the numerical solutions. In particular, solutions often show non-convergent behaviour with time-step refinement and exhibit nonphysical velocity fields and crack activation patterns. In addition, implicit time-stepping schemes, which are favoured in quasi-static and low-velocity problems, are challenging to use in such models. This is not a coincidence but a manifestation of the algorithmic pitfalls of such methods. Continuity of stresses is in general hard to achieve in a computational model that employs a crack initiation criterion. Energy (variational) approaches to fracture have gained increased popularity in recent years. An energy approach has been shown to avoid the introduction of a crack initiation criterion. The central idea of this model is the minimization of a mechanical energy functional whose term representing the energy due to the cracks is a nondifferentiable function of the interface openings at zero opening displacement. A consequence of this formulation is that crack initiation happens automatically as a by-product of energy minimization. This avoids the complexities arising from the introduction of an extrinsic activation criterion but entails minimization of a nondifferentiable potential. The aim of this research is to develop robust and efficient computational algorithms for the numerical implementation of the energy approach to cohesive fracture. Two computational algorithms are proposed in a discontinuous Galerkin finite element framework: a continuation algorithm, which entails successive smooth approximations of the nondifferentiable functional, and a block coordinate descent algorithm, which uses generalized differential calculus for the treatment of nondifferentiability. These methods allow for a seamless transition from the uncracked to the cracked state, making possible the use of iterative solvers with implicit time-stepping and completely sidestepping the robustness issues of previous computational frameworks. A critical component of this work is validation of the robustness of the proposed numerical methods. Various numerical simulations are presented, including time-step and mesh-size convergence studies and qualitative and quantitative comparisons of simulations with experimental observations and theoretical findings. In addition, an energy-based hydro-mechanical model and computational algorithm are presented for hydraulic fracturing in impermeable media, which shows the crucial importance of continuity in multi-physics modelling. A search algorithm based on graph theory is developed to identify the set of fluid-pressurized cracks among the cracks in naturally fractured domains.
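
    The continuation idea can be illustrated with a minimal one-dimensional sketch (an illustration under simplifying assumptions, not the thesis's discontinuous Galerkin implementation): a cohesive term sigma_c * |d| is nondifferentiable at zero opening, so it is replaced by the smooth approximation sigma_c * (sqrt(d^2 + eps^2) - eps), and the energy is minimized for a decreasing sequence of eps, warm-starting each solve from the previous minimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1D cohesive problem: a bar of stiffness k stretched by g, with one
# interface whose opening d costs sigma_c * |d| -- nondifferentiable at
# d = 0, so crack initiation falls out of the minimization without an
# extrinsic activation criterion.
k, sigma_c, g = 10.0, 4.0, 1.0

def energy_smoothed(d, eps):
    """Smoothed energy: |d| is approximated by sqrt(d^2 + eps^2) - eps."""
    return 0.5 * k * (g - d[0]) ** 2 + sigma_c * (np.sqrt(d[0] ** 2 + eps ** 2) - eps)

# Continuation: successively tighter smoothings, each warm-started.
d = np.array([0.0])
for eps in [1.0, 0.1, 0.01, 1e-4, 1e-6]:
    d = minimize(energy_smoothed, d, args=(eps,)).x

print(d[0], g - sigma_c / k)  # converges to the exact minimizer g - sigma_c/k
```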

    Contours and contrast

    Contrast in photographic and computer-generated imagery communicates colour and lightness differences that would be perceived when viewing the represented scene. Due to depiction constraints, the amount of displayable contrast is limited, reducing the image's ability to accurately represent the scene. A local contrast enhancement technique called unsharp masking can overcome these constraints by adding high-frequency contours to an image that increase its apparent contrast. In three novel algorithms inspired by unsharp masking, specialized local contrast enhancements are shown to overcome the constraints of a limited dynamic range, to overcome an achromatic palette, and to improve the rendering of 3D shapes and scenes. The Beyond Tone Mapping approach restores the original HDR contrast to its tone-mapped LDR counterpart by adding high-frequency colour contours to the LDR image while preserving its luminance. Apparent Greyscale is a multi-scale two-step technique that first converts colour images and video to greyscale according to their chromatic lightness, then restores diminished colour contrast with high-frequency luminance contours. Finally, 3D Unsharp Masking performs scene-coherent enhancement by introducing 3D high-frequency luminance contours to emphasize the details, shapes, tonal range and spatial organization of a 3D scene within the rendering pipeline. As a perceptual justification, it is argued that a local contrast enhancement made with unsharp masking is related to the Cornsweet illusion, and that this may explain its effect on apparent contrast.
    For many years, the realistic creation of virtual characters has been a central part of computer graphics research. Nevertheless, some problems have so far remained unsolved. Among them is the creation of character animations, which remains time-consuming when traditional skeleton-based approaches are used. A further challenge is the passive capture of actors in everyday clothing. Moreover, in contrast to the numerous skeleton-based approaches, only few methods exist for processing and editing mesh animations. In this work we present algorithms that address each of these tasks. Our first approach consists of two mesh-based methods for simplifying character animation. Although the kinematic skeleton is set aside, both methods can be integrated directly into the traditional pipeline, enabling the creation of animations with realistic body deformations. We then present three passive capture methods for body motion and performance that use a deformable 3D model to represent the scene. These methods can be used to jointly reconstruct spatially and temporally coherent geometry, motion, and surface textures, which may also vary over time. Recordings of loose, everyday clothing pose no problem. Moreover, the high-quality reconstructions enable realistic rendering of 3D video sequences. Finally, two novel algorithms for processing mesh animations are described. While the first algorithm enables the fully automatic conversion of mesh animations into skeleton-based animations, the second allows the automatic conversion of mesh animations into so-called animation collages, a new art style for presenting animations.
    The methods described in this dissertation can be regarded as solutions to specific problems, but also as important building blocks of larger applications. Taken together, they form a powerful system for the accurate capture, manipulation, and realistic rendering of artistic performances, whose capabilities go beyond those of many related capture techniques. In this way we can capture the motion, the time-varying details, and the texture information of an actor and convert them into a character animation enriched with complete information, which can be reused immediately but is also suitable for rendering the actor realistically from arbitrary viewpoints.
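
    The three enhancement algorithms are specialised variants of unsharp masking; for reference, here is a minimal sketch of the classic operator they build on (a grayscale version with assumed parameter values, not any of the thesis's algorithms): the high-frequency residual, the image minus its Gaussian blur, is scaled and added back, raising apparent contrast at edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 2.0, amount: float = 0.6) -> np.ndarray:
    """Classic unsharp masking: add the high-frequency residual back onto
    the image. The added contours raise apparent contrast at edges, the
    effect the thesis relates to the Cornsweet illusion."""
    blurred = gaussian_filter(image, sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

img = np.random.default_rng(1).random((128, 128))  # stand-in for a luminance image
sharp = unsharp_mask(img)
```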

    Feature Driven Learning Techniques for 3D Shape Segmentation

    Segmentation is a fundamental problem in 3D shape analysis and machine learning. The ability to partition a 3D shape into meaningful or functional parts is a vital ingredient of many downstream applications like shape matching, classification and retrieval. Early segmentation methods were based on approaches like fitting primitive shapes to parts or extracting segmentations from feature points. However, such methods had limited success on shapes with more complex geometry. Observing this, research began using geometric features to aid the segmentation, as certain features (e.g. the Shape Diameter Function (SDF)) are less sensitive to complex geometry. This trend was also incorporated in the shift to set-wide segmentation, called co-segmentation, which provides a consistent segmentation throughout a shape dataset, meaning similar parts have the same segment identifier. The idea of co-segmentation is that a set of shapes of the same class (e.g. chairs) contains more information about the class than a single shape would, which could lead to an overall improvement in the segmentation of the individual shapes. Over the past decade many different approaches to co-segmentation have been explored, covering supervised, unsupervised and even user-driven active learning. In each of these areas, geometric features have been widely adopted to aid the proposed segmentation algorithms, with each method typically using a different combination of features. The aim of this thesis is to explore these different areas of 3D shape segmentation, perform an analysis of the effectiveness of geometric features in these areas, and tackle core issues that currently exist in the literature.
    Initially, we explore the area of unsupervised segmentation, specifically looking at co-segmentation, and perform an analysis of several different geometric features. Our analysis is intended to compare the different features in a single unsupervised pipeline to evaluate their usefulness and determine their strengths and weaknesses. It also includes several features that have not yet been explored in unsupervised segmentation but have been shown to be effective in other areas.
    Later, with the ever-increasing popularity of deep learning, we explore the area of supervised segmentation and investigate the current state of Neural Network (NN) driven techniques. We specifically observe limitations in the current state of the art and propose a novel Convolutional Neural Network (CNN) based method which operates on multi-scale geometric features to gain more information about the shapes being segmented. We also evaluate several different supervised segmentation methods using the same input features but with varying complexity of model design, to determine whether the more complex models provide a significant performance increase.
    Lastly, we explore the user-driven area of active learning to tackle the large amount of inconsistency in current ground-truth segmentations, which are vital for most segmentation methods. Active learning has been used to great effect for ground-truth generation in the past, so we present a novel active learning framework using deep learning and geometric features to assist the user in the co-segmentation of a dataset. Our method emphasises segmentation accuracy while minimising user effort, providing an interactive visualisation for co-segmentation analysis and the application of automated optimisation tools.
    In this thesis we explore the effectiveness of different geometric features across varying segmentation tasks, providing an in-depth analysis and comparison of state-of-the-art methods.
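
    As a rough illustration of the supervised setting (a generic sketch, not the architecture proposed in the thesis, and with assumed feature counts), per-face multi-scale geometric features can be fed to a shared per-face classifier implemented with 1x1 convolutions:

```python
import torch
import torch.nn as nn

# Per-face segmentation from multi-scale geometric features. Each face
# carries F features (e.g. SDF evaluated at several scales); kernel-size-1
# convolutions act as a classifier shared across all faces.
class PerFaceSegmenter(nn.Module):
    def __init__(self, n_features: int, n_segments: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, n_segments, kernel_size=1),
        )

    def forward(self, x):            # x: (batch, n_features, n_faces)
        return self.net(x)           # logits: (batch, n_segments, n_faces)

model = PerFaceSegmenter(n_features=12, n_segments=4)  # assumed sizes
feats = torch.randn(2, 12, 5000)                       # 2 shapes, 5000 faces each
labels = model(feats).argmax(dim=1)                    # per-face segment ids
```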

    Hierarchical modelling of 3D objects with subdivision surfaces

    Subdivision Surfaces (SSs) are a powerful paradigm for modelling 3D (three-dimensional) objects that bridges the two traditional approaches to surface approximation, based on polygonal meshes and on meshes of curved patches, each of which has its own problems. Subdivision schemes allow a (piecewise) smooth surface, like those most frequent in practice, to be defined as the limit of a recursive refinement process applied to a coarse control mesh, which can be described very compactly. Moreover, the recursion inherent in SSs naturally establishes a pyramidally nested relation between the successively generated meshes / LODs (Levels of Detail), so SSs lend themselves extraordinarily well to wavelet-based MultiResolution Analysis (MRA) of surfaces, which has immediate and very interesting practical applications, such as the hierarchical coding and editing of 3D models. We begin by describing the links between the three areas on which our work is based (SSs, automatic LOD extraction, and wavelet MRA) to explain how these three pieces of the puzzle of hierarchical modelling of 3D objects with SSs fit together. Wavelet MRA decomposes a function into a coarse version of itself and a set of hierarchically nested additive refinements called "wavelet coefficients". Classical wavelet theory studies classical nD signals: those defined on parametric domains homeomorphic to R^n or (0,1)^n, such as audio (n=1), images (n=2) or video (n=3). On less trivial topologies, such as 2D manifolds (surfaces in 3D space), wavelet MRA is not as obvious, but it remains possible if approached from the perspective of SSs. It suffices to start from a coarse mesh that approximates the considered surface at a low LOD, subdivide it recursively and, in doing so, add the wavelet coefficients, which are the 3D details needed to obtain finer and finer approximations to the original surface. We then turn to the practical applications that constitute our main original development and, in particular, present a hierarchical coding technique for 3D models based on SSs, which acts on the aforementioned 3D details: it expresses them in a local normal reference frame; organizes them in a face-based hierarchical structure; quantizes them, dedicating fewer bits to their less energetic tangential components, and "scalarizes" them; and finally codes them with a technique similar to the SPIHT (Set Partitioning In Hierarchical Trees) scheme of Said and Pearlman. The result is a fully embedded code that, for mostly smooth surfaces, is at least twice as compact as those obtained with previously published progressive 3D mesh coding techniques, in which, moreover, the LODs are not pyramidally nested. Finally, we describe several auxiliary methods that we developed, improving previous techniques and creating our own, since a complete solution to modelling 3D objects with SSs requires solving two further problems. The first is the extraction of a base mesh (triangular, in our case) from the original surface, usually given as a fine triangular mesh with arbitrary connectivity. The second is the generation of a recursive remeshing, with subdivision connectivity, of the original/target mesh through recursive refinement of the base mesh, thereby computing the 3D details needed to correct the positions predicted by subdivision for the new vertices.
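
    A one-dimensional analogue makes the subdivision/wavelet relationship concrete (an illustrative sketch, not the thesis's surface scheme): interpolating subdivision predicts the new samples, and the wavelet coefficients are exactly the corrections that reproduce the finer level.

```python
import numpy as np

# Linear interpolating subdivision predicts each new sample as the midpoint
# of its neighbours; the wavelet coefficients are the prediction errors.
def subdivide(coarse):
    """One level of midpoint-predicting subdivision (open polyline)."""
    fine = np.empty(2 * len(coarse) - 1)
    fine[0::2] = coarse
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])  # predicted new samples
    return fine

def analyse(fine):
    """Split a fine level into a coarse level plus wavelet details."""
    coarse = fine[0::2]
    details = fine[1::2] - 0.5 * (coarse[:-1] + coarse[1:])
    return coarse, details

def synthesise(coarse, details):
    fine = subdivide(coarse)
    fine[1::2] += details
    return fine

signal = np.sin(np.linspace(0, np.pi, 17))     # stand-in for a surface curve
coarse, details = analyse(signal)
assert np.allclose(synthesise(coarse, details), signal)  # perfect reconstruction
# Smooth regions give small |details|, which is what a SPIHT-like coder exploits.
```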

    Analysis of Blood Flow in Patient-specific Models of Type B Aortic Dissection

    Aortic dissection is the most common acute catastrophic event affecting the aorta. The majority of patients presenting with an uncomplicated type B dissection are treated medically, but 25% of these patients develop subsequent dilatation and aortic aneurysm formation. The reasons behind the long-term outcomes of type B aortic dissection are poorly understood. As haemodynamic factors are implicated in the development and progression of a variety of cardiovascular diseases, the flow phenomena and environment in patient-specific models of type B aortic dissection have been studied in this thesis by applying computational fluid dynamics (CFD) to in vivo data. The present study aims to gain more detailed knowledge of the links between morphology, flow characteristics and clinical outcomes in type B dissection patients. The thesis comprises two patient-specific studies: a multiple-case cross-sectional study and a single-case longitudinal study. The multiple-case study involved a group of ten patients with classic type B aortic dissection, with a focus on examining the flow characteristics as well as the role of morphological factors in determining the flow patterns and haemodynamic parameters. The single-case study was based on a series of follow-up scans of a patient with a stable dissection, with the aim of identifying the haemodynamic factors that are associated with the progression of aortic dissection. Both studies were carried out based on computed tomography images acquired from the patients. 4D phase-contrast magnetic resonance imaging was performed on a typical type B aortic dissection patient to provide detailed flow data for validation purposes. This was achieved by qualitative and quantitative comparison of velocity-encoded images with simulation results of the CFD model. The analysis of simulation results, including velocity, wall shear stress and turbulence intensity profiles, demonstrates certain correlations between the morphological features and haemodynamic factors, as well as their effects on the long-term outcomes of type B aortic dissections. The simulation results were in good agreement with in vivo MR flow data in the patient-specific validation case, giving credence to the application of the computational model to the study of flow conditions in aortic dissection. This study made an important contribution by identifying the role of certain morphological and haemodynamic factors in the development of type B aortic dissection, which may help provide better guidelines to assist surgeons in choosing the optimal treatment protocol for individual patients.
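
    The abstract does not name the specific wall shear stress indices analysed; as an assumption for illustration, two indices commonly derived from the WSS vector history over a cardiac cycle in such studies are the time-averaged WSS (TAWSS) and the oscillatory shear index (OSI):

```python
import numpy as np

def tawss_osi(wss: np.ndarray, dt: float):
    """Time-averaged WSS and oscillatory shear index from a WSS vector
    history of shape (n_timesteps, 3) at one wall point:
        TAWSS = (1/T) * integral |tau| dt
        OSI   = 0.5 * (1 - |integral tau dt| / integral |tau| dt)
    """
    mag_int = np.trapz(np.linalg.norm(wss, axis=1), dx=dt)   # integral of |tau|
    vec_int = np.linalg.norm(np.trapz(wss, dx=dt, axis=0))   # |integral of tau|
    T = dt * (wss.shape[0] - 1)
    return mag_int / T, 0.5 * (1.0 - vec_int / mag_int)

t = np.linspace(0, 1.0, 101)  # one cardiac cycle, seconds (synthetic example)
wss = np.stack([np.sin(2 * np.pi * t), 0.2 * np.cos(2 * np.pi * t), 0 * t], axis=1)
print(tawss_osi(wss, dt=t[1] - t[0]))  # fully oscillatory flow gives OSI near 0.5
```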

    lidR: an R package for analysis of Airborne Laser Scanning (ALS) data

    Airborne laser scanning (ALS) is a remote sensing technology known for its applicability in natural resources management. By quantifying the three-dimensional structure of vegetation and underlying terrain using laser technology, ALS has been used extensively for enhancing geospatial knowledge in the fields of forestry and ecology. Structural descriptions of vegetation provide a means of estimating a range of ecologically pertinent attributes, such as height, volume, and above-ground biomass. The efficient processing of large, often technically complex datasets requires dedicated algorithms and software. The continued promise of ALS as a tool for improving ecological understanding is often dependent on user-created tools, methods, and approaches. Due to the proliferation of ALS among academic, governmental, and private-sector communities, paired with requirements to address a growing demand for open and accessible data, the ALS community is recognising the importance of free and open-source software (FOSS) and of user-defined workflows. Herein, we describe the philosophy behind the development of the lidR package. Implemented in the R environment with a C/C++ backend, lidR is free, open-source and cross-platform software created to enable simple and creative processing workflows for forestry and ecology communities using ALS data. We review current algorithms used by the research community, and in doing so raise awareness of current successes and challenges associated with parameterisation and common implementation approaches. Through a detailed description of the package, we address the key considerations and the design philosophy that enable users to implement user-defined tools. We also discuss algorithm choices that make the package representative of the 'state of the art', and we highlight some internal limitations through examples of processing-time discrepancies. We conclude that the development of applications like lidR is of fundamental importance for developing transparent, flexible and open ALS tools, not only to ensure reproducible workflows but also to offer researchers the creative space required for the progress and development of the discipline.
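
    lidR itself is an R package, so the following Python sketch is only a language-agnostic illustration of the kind of user-defined workflow the paper describes (not lidR's API): rasterising height-normalised returns into a canopy height model by keeping the highest return per grid cell.

```python
import numpy as np

def canopy_height_model(x, y, z, res=1.0):
    """Grid height-normalised points; keep the highest return per cell."""
    ix = ((x - x.min()) / res).astype(int)
    iy = ((y - y.min()) / res).astype(int)
    chm = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for i, j, h in zip(iy, ix, z):
        if np.isnan(chm[i, j]) or h > chm[i, j]:
            chm[i, j] = h
    return chm

rng = np.random.default_rng(2)
x, y = rng.random(10_000) * 50, rng.random(10_000) * 50  # synthetic point cloud
z = rng.random(10_000) * 30                              # heights above ground, m
chm = canopy_height_model(x, y, z, res=1.0)              # 1 m canopy height model
```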