
    Graph Deep Learning: State of the Art and Challenges

    The last half-decade has seen a surge in deep learning research on irregular domains and efforts to extend convolutional neural networks (CNNs) to work on irregularly structured data. The graph has emerged as a particularly useful geometrical object in deep learning, able to represent a variety of irregular domains well. Graphs can represent various complex systems, from molecular structures to computer, social, and traffic networks. Consequent on the extension of CNNs to graphs, a great amount of research has been published that improves the inferential power and computational efficiency of graph-based convolutional neural networks (GCNNs). The research is incipient, however, and our understanding is relatively rudimentary. The majority of GCNNs are designed to operate only when certain properties of the underlying graph hold. In this survey we review the state of graph representation learning from the perspective of deep learning. We consider challenges in graph deep learning that have been neglected in the majority of work, largely because of the numerous theoretical difficulties they present. We identify four major challenges in graph deep learning: dynamic and evolving graphs, learning with edge signals and information, graph estimation, and the generalization of graph models. For each problem we discuss the theoretical and practical issues and survey the relevant research, highlighting the limitations of the state of the art. Advances on these challenges would permit GCNNs to be applied to a wider range of domains, including those where graph models have previously been of limited use owing to the nature of the domain.
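    For concreteness, the graph convolution at the heart of most GCNNs can be sketched in a few lines. The snippet below is an illustrative sketch only (the symmetric normalisation, layer form, and all names are assumptions in the spirit of common GCN formulations, not this survey's notation): it applies a single layer H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W) to a toy graph.

        import numpy as np

        def gcn_layer(A, H, W):
            """One graph convolution layer: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W)."""
            n = A.shape[0]
            A_hat = A + np.eye(n)                        # add self-loops
            d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
            A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
            return np.maximum(A_norm @ H @ W, 0.0)       # ReLU nonlinearity

        # Toy example: a 4-node path graph, 2 input features, 3 output features.
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        H = np.random.rand(4, 2)                         # node signal matrix
        W = np.random.rand(2, 3)                         # learnable weights
        print(gcn_layer(A, H, W).shape)                  # -> (4, 3)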

    Hierarchical Graphs as Organisational Principle and Spatial Model Applied to Pedestrian Indoor Navigation

    In this thesis, hierarchical graphs are investigated from two different angles – as a general modelling principle for (geo)spatial networks and as a practical means to enhance navigation in buildings. The topics addressed are of interest from a multi-disciplinary point of view, ranging from Computer Science in general, through Artificial Intelligence and Computational Geometry in particular, to other fields such as Geographic Information Science. Some hierarchical graph models have been previously proposed by the research community, e.g. to cope with the massive size of road networks, or as a conceptual model for human wayfinding. However, there has not yet been a comprehensive, systematic approach for modelling spatial networks with hierarchical graphs. One particular problem is the gap between conceptual models and models which can be readily used in practice. Geospatial data is commonly modelled - if at all - only as a flat graph. Therefore, from a practical point of view, it is important to address the automatic construction of a graph hierarchy based on the predominant data models. The work presented deals with this problem: an automated method for construction is introduced and explained. A particular contribution of my thesis is the proposition to use hierarchical graphs as the basis for an extensible, flexible architecture for modelling various (geo)spatial networks. The proposed approach complements classical graph models very well in the sense that their expressiveness is extended: various graphs originating from different sources can be integrated into a comprehensive, multi-level model. This more sophisticated kind of architecture allows for extending navigation services beyond the borders of one single spatial network to a collection of heterogeneous networks, thus establishing a meta-navigation service. Another point of discussion is the impact of the hierarchy and distribution on graph algorithms, which have to be adapted to operate properly on multi-level hierarchies. By investigating indoor navigation problems in particular, the guiding principles are demonstrated for modelling networks at multiple levels of detail. Complex environments like large public buildings are ideally suited to demonstrate the versatile use of hierarchical graphs and thus to highlight the benefits of the hierarchical approach. Starting from a collection of floor plans, I have developed a systematic method for constructing a multi-level graph hierarchy. The nature of indoor environments, especially their inherent diversity, poses an additional challenge: among others, one must deal with complex, irregular, and/or three-dimensional features. The proposed method is also motivated by practical considerations: not only finding shortest/fastest paths across rooms and floors, but also providing descriptions for these paths which are easily understood by people. Beyond this, two novel uses of the hierarchy are discussed: the first is as an informed heuristic that exploits the specific characteristics of indoor environments in order to enhance classical, general-purpose graph search techniques; as a convenient by-product of this method, clusters such as sections and wings can be detected. The second is to better deal with irregular, complex-shaped regions in a way that instructions can also be provided for these spaces. Previous approaches have not considered this problem.
In summary, the main results of this work are:
    • hierarchical graphs are introduced as a general spatial data infrastructure. In particular, this architecture allows us to integrate different spatial networks originating from different sources. A small but useful set of operations is proposed for integrating these networks. In order to work in a hierarchical model, classical graph algorithms are generalised. This finding also has implications for the possible integration of separate navigation services and systems;
    • a novel set of core data structures and algorithms has been devised for modelling indoor environments. They cater to the unique characteristics of these environments and can be used specifically to provide enhanced navigation in buildings. Tested on models of several real buildings from our university, some preliminary but promising results were gained from a prototypical implementation and its application on the models.
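    To illustrate the basic idea of routing over a graph hierarchy, the following snippet plans on a coarse cluster graph first and then refines the route on the fine room-level graph restricted to the selected clusters. It is an illustrative sketch only, not the thesis's data structures or algorithms; the toy building layout, the cluster naming scheme, and the use of networkx are assumptions.

        import networkx as nx

        # Fine level: rooms, corridors and stairs, weighted by walking distance.
        fine = nx.Graph()
        fine.add_weighted_edges_from([
            ("A.101", "A.corridor", 3), ("A.102", "A.corridor", 4),
            ("A.corridor", "A.stairs", 6), ("A.stairs", "B.stairs", 8),
            ("B.stairs", "B.corridor", 5), ("B.corridor", "B.201", 4),
        ])

        # Coarse level: one node per cluster (wing "A" or "B"); an edge wherever
        # two clusters are connected by some fine-level edge.
        cluster_of = {n: n.split(".")[0] for n in fine.nodes}
        coarse = nx.Graph()
        for u, v, data in fine.edges(data=True):
            cu, cv = cluster_of[u], cluster_of[v]
            if cu != cv:
                coarse.add_edge(cu, cv, weight=data["weight"])

        def hierarchical_route(src, dst):
            """Plan on the coarse graph, then refine on the fine graph
            restricted to the clusters selected at the coarse level."""
            corridor = set(nx.shortest_path(coarse, cluster_of[src], cluster_of[dst]))
            allowed = [n for n in fine.nodes if cluster_of[n] in corridor]
            return nx.shortest_path(fine.subgraph(allowed), src, dst, weight="weight")

        print(hierarchical_route("A.101", "B.201"))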

    Generalized Feedback Vertex Set Problems on Bounded-Treewidth Graphs: Chordality is the Key to Single-Exponential Parameterized Algorithms

    It has long been known that Feedback Vertex Set can be solved in time $2^{\mathcal{O}(w\log w)}n^{\mathcal{O}(1)}$ on $n$-vertex graphs of treewidth $w$, but it was only recently that this running time was improved to $2^{\mathcal{O}(w)}n^{\mathcal{O}(1)}$, that is, to single-exponential parameterized by treewidth. We investigate which generalizations of Feedback Vertex Set can be solved in a similar running time. Formally, for a class $\mathcal{P}$ of graphs, the Bounded $\mathcal{P}$-Block Vertex Deletion problem asks, given a graph $G$ on $n$ vertices and positive integers $k$ and $d$, whether $G$ contains a set $S$ of at most $k$ vertices such that each block of $G-S$ has at most $d$ vertices and is in $\mathcal{P}$. Assuming that $\mathcal{P}$ is recognizable in polynomial time and satisfies a certain natural hereditary condition, we give a sharp characterization of when single-exponential parameterized algorithms are possible for fixed values of $d$: if $\mathcal{P}$ consists only of chordal graphs, then the problem can be solved in time $2^{\mathcal{O}(wd^2)}n^{\mathcal{O}(1)}$, and if $\mathcal{P}$ contains a graph with an induced cycle of length $\ell\ge 4$, then the problem is not solvable in time $2^{o(w\log w)}n^{\mathcal{O}(1)}$ even for fixed $d=\ell$, unless the ETH fails. We also study a similar problem, called Bounded $\mathcal{P}$-Component Vertex Deletion, where the target graphs have connected components of small size rather than blocks of small size, and we present analogous results. For this problem, we also show that if $d$ is part of the input and $\mathcal{P}$ contains all chordal graphs, then it cannot be solved in time $f(w)n^{o(w)}$ for some function $f$, unless the ETH fails.
    Comment: 43 pages, 9 figures
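    To make the problem statement concrete, the snippet below checks whether a candidate set S is feasible for Bounded P-Block Vertex Deletion with P instantiated as the class of chordal graphs. It is only a verification sketch under assumptions (the networkx calls are used purely for illustration), not the paper's single-exponential dynamic program over a tree decomposition.

        import networkx as nx

        def is_valid_deletion_set(G, S, d, in_P=nx.is_chordal):
            """Return True iff every block (biconnected component) of G - S
            has at most d vertices and belongs to the class P (here: chordal)."""
            H = G.copy()
            H.remove_nodes_from(S)
            for block in nx.biconnected_components(H):
                if len(block) > d or not in_P(H.subgraph(block)):
                    return False
            return True

        # Toy example on a 5-cycle: deleting one vertex leaves only edge blocks.
        G = nx.cycle_graph(5)
        print(is_valid_deletion_set(G, S={0}, d=2))      # True
        print(is_valid_deletion_set(G, S=set(), d=2))    # False: 5-vertex block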

    Composite Finite Elements for Trabecular Bone Microstructures

    In many medical and technical applications, numerical simulations need to be performed for objects with interfaces of geometrically complex shape. We focus on the biomechanical problem of elasticity simulations for trabecular bone microstructures. The goal of this dissertation is to develop and implement an efficient simulation tool for finite element simulations on such structures, so-called composite finite elements. We will deal with both the case of material/void interfaces (complicated domains) and the case of interfaces between different materials (discontinuous coefficients). In classical finite element simulations, geometric complexity is encoded in tetrahedral and typically unstructured meshes. Composite finite elements, in contrast, encode geometric complexity in specialized basis functions on a uniform mesh of hexahedral structure. Unlike alternative approaches (such as fictitious domain methods, generalized finite element methods, immersed interface methods, partition of unity methods, unfitted meshes, and extended finite element methods), the composite finite elements are tailored to geometry descriptions by 3D voxel image data and use the corresponding voxel grid as the computational mesh, without introducing additional degrees of freedom, thus making use of efficient data structures for uniformly structured meshes. The composite finite element method for complicated domains goes back to Wolfgang Hackbusch and Stefan Sauter and restricts standard affine finite element basis functions on the uniformly structured tetrahedral grid (obtained by subdividing each cube into six tetrahedra) to an approximation of the interior. This can be implemented as a composition of standard finite element basis functions on a local, purely virtual auxiliary grid by which we approximate the interface. In the case of discontinuous coefficients, the same local auxiliary composition approach is used. Composition weights are obtained by solving local interpolation problems, for which coupling conditions across the interface need to be determined. These depend both on the local interface geometry and on the (scalar or tensor-valued) material coefficients on both sides of the interface. We consider heat diffusion as a scalar model problem and linear elasticity as a vector-valued model problem to develop and implement the composite finite elements. Uniform cubic meshes contain a natural hierarchy of coarsened grids, which allows us to implement a multigrid solver for the case of complicated domains. Besides simulations of single loading cases, we also apply the composite finite element method to the problem of determining effective material properties, e.g. for multiscale simulations. For periodic microstructures, this is achieved by solving corrector problems on the fundamental cells using affine-periodic boundary conditions corresponding to uniaxial compression and shearing. For statistically periodic trabecular structures, representative fundamental cells can be identified but do not permit the periodic approach. Instead, macroscopic displacements are imposed using the same set of affine-periodic boundary conditions as before, now as Dirichlet conditions on all faces. The stress response of the material is subsequently computed only on an interior subdomain to prevent artificial stiffening near the boundary.
We finally check for orthotropy of the macroscopic elasticity tensor and identify its axes.
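    The setting that the method exploits, a uniform voxel grid serving directly as the computational mesh with the geometry given by image data, can be illustrated for the scalar heat-diffusion model problem. The snippet below is an illustrative sketch only: a plain 5-point finite-difference stencil on a 2D voxel mask stands in for the actual composite basis functions of Hackbusch and Sauter, and the mask, source term, and SciPy usage are assumptions.

        import numpy as np
        from scipy.sparse import lil_matrix, csr_matrix
        from scipy.sparse.linalg import spsolve

        # Geometry from "image data": a 2D voxel mask (True = material, False = void).
        # A real trabecular structure would come from a 3D CT scan instead.
        mask = np.ones((20, 20), dtype=bool)
        mask[5:15, 8:12] = False                     # carve out a void region

        idx = -np.ones(mask.shape, dtype=int)        # voxel -> unknown index
        idx[mask] = np.arange(mask.sum())            # degrees of freedom on material voxels only
        n = int(mask.sum())

        A = lil_matrix((n, n))
        b = np.zeros(n)
        for (i, j), k in np.ndenumerate(idx):
            if k < 0:
                continue                             # skip void voxels
            A[k, k] = 4.0                            # 5-point Laplacian diagonal
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1] and mask[ni, nj]:
                    A[k, idx[ni, nj]] = -1.0
                # neighbours outside the material act as homogeneous Dirichlet boundary

        b[idx[10, 2]] = 1.0                          # a point heat source
        u = spsolve(csr_matrix(A), b)                # temperature on material voxels
        print(u.min(), u.max())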

    Adaptive Finite Element Methods for Variational Inequalities: Theory and Applications in Finance

    We consider variational inequalities (VIs) in a bounded open domain $\Omega \subset \mathbb{R}^d$ with a piecewise smooth obstacle constraint. To solve VIs, we formulate a fully-discrete adaptive algorithm by using the backward Euler method for time discretization and the continuous piecewise linear finite element method for space discretization. The outline of this thesis is the following. Firstly, we introduce elliptic and parabolic variational inequalities in Hilbert spaces and briefly review general existence and uniqueness results. Then we focus on a simple but important example of a VI, namely the obstacle problem. One interesting application of the obstacle problem is the American-type option pricing problem in finance. We review the classical model as well as some recent advances in option pricing. These models result in VIs with integro-differential operators. Secondly, we introduce two classical numerical methods in scientific computing: the finite element method for elliptic partial differential equations (PDEs) and the Euler method for ordinary differential equations (ODEs). Then we combine these two methods to formulate a fully-discrete numerical scheme for VIs. Under mild regularity assumptions, we prove an optimal a priori convergence rate for the proposed numerical method with respect to the regularity of the solution. Thirdly, we derive an a posteriori error estimator and show its reliability and efficiency. The error estimator is localized in the sense that the size of the elliptic residual is only relevant in the approximate noncontact region, and the approximability of the obstacle is only relevant in the approximate contact region. Based on this new a posteriori error estimator, we design a time-space adaptive algorithm and multigrid solvers for the resulting discrete problems. In the end, numerical results for $d=1,2$ show that the error estimator decays with the same rate as the actual error when the space mesh size and the time step tend to zero. Also, the error indicators capture the correct local behavior of the errors in both the contact and noncontact regions.
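    As a concrete instance of the core model problem, the snippet below discretises the stationary elliptic obstacle problem in 1D with continuous piecewise linear finite elements on a uniform mesh and solves it by projected Gauss-Seidel. It is an illustrative sketch only: the load, the obstacle, and the solver are assumptions, and it contains none of the thesis's parabolic, adaptive, or option-pricing machinery.

        import numpy as np

        # Obstacle problem on (0, 1): find u in K = { v in H^1_0 : v >= psi }
        # with a(u, v - u) >= (f, v - u) for all v in K, where a(u, v) = int u'v'.
        N = 200                                   # interior nodes
        h = 1.0 / (N + 1)
        x = np.linspace(h, 1.0 - h, N)

        f = -10.0 * np.ones(N)                    # constant downward load
        psi = -0.1 - 0.5 * (x - 0.5) ** 2         # smooth obstacle below zero

        u = np.maximum(np.zeros(N), psi)          # feasible initial guess

        # The P1 stiffness matrix is tridiag(-1, 2, -1)/h; with the lumped load
        # b_i = h*f_i the projected Gauss-Seidel update reduces to:
        for sweep in range(5000):
            for i in range(N):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < N - 1 else 0.0
                u[i] = max(psi[i], 0.5 * (h * h * f[i] + left + right))

        contact = np.isclose(u, psi)              # approximate contact region
        print("contact nodes:", int(contact.sum()), "of", N)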