25 research outputs found

    CAD of masks and wiring

    Robustness to failures in two-layer communication networks

    A close look at many existing systems reveals their two- or multi-layer nature, where a number of coexisting networks interact and depend on each other. For instance, in the Internet, any application-level graph (such as a peer-to-peer network) is mapped onto the underlying IP network that, in turn, is mapped onto a mesh of optical fibers. This layered view sheds new light on the tolerance to errors and attacks of many complex systems. What is observed at a single layer does not necessarily reflect the state of the entire system. On the contrary, a tiny, seemingly harmless disruption of one layer may destroy a substantial or essential part of another layer, thus making the whole system useless in practice. In this thesis we consider such two-layer systems. We model them by two graphs at two different layers, where the upper-layer (or logical) graph is mapped onto the lower-layer (physical) graph. Our main goals are the following. First, we study the robustness to failures of existing large-scale two-layer systems. This gives us valuable insights into the problem, e.g., by identifying common weak points in such systems. Fortunately, these two-layer problems can often be effectively alleviated by careful system design. Therefore, our second major goal is to propose new designs that increase the robustness of two-layer systems. This thesis is organized in three main parts, where we focus on different examples and aspects of two-layer systems. In the first part, we turn our attention to existing large-scale two-layer systems, such as peer-to-peer networks, railway networks and the human brain. Our main goal is to study the vulnerability of these systems to random errors and targeted attacks. Our simulations show that (i) two-layer systems are much more vulnerable to errors and attacks than they appear from a single-layer perspective, and (ii) attacks are much more harmful than errors, especially when the logical topology is heterogeneous. These results hold across all studied systems. A natural next step is to improve the failure robustness of two-layer systems. In particular, in the second part of this thesis, we consider IP/WDM optical networks, where an IP backbone network is mapped onto a mesh of optical fibers. The problem lies in designing a survivable mapping, such that no single physical failure disconnects the logical topology. This is an NP-complete problem. We introduce the new concept of piecewise survivability, which makes the problem much easier in practice. This leads us to an efficient and scalable algorithm called SMART, which finds a survivable mapping much faster (often by orders of magnitude) than the other approaches proposed to date. Moreover, the formal analysis of SMART allows us to prove whether or not a survivable mapping exists for a given instance. Finally, this approach helps us to find vulnerable areas in the system and to effectively reinforce them, e.g., by adding new links. In the third part of this thesis, we shift our attention one layer higher, to the application-over-IP setting. In particular, we consider the design of Application-Level Multicast (ALM) for interactive applications, where a single source sends a delay-constrained data stream to a number of destinations. Interactive ALM should (i) respect stringent delay requirements, and proactively protect the system against both (ii) overlay node failures and (iii) packet losses at the IP layer. We propose a two-layer-aware approach to this problem.
First, we prove that the average packet loss rate observed at the destinations can be effectively approximated by a purely topological metric that, in turn, drops with the amount of IP-level and overlay-level path diversity available in the system. Therefore, we propose a framework that accommodates and generalizes various techniques for increasing the path diversity in the system. Within this framework we optimize the structure of the ALM. As a result, we reduce the effective loss rate on real Internet topologies by typically 30-70% compared to the state of the art. Finally, in addition to the three main parts of the thesis, we also present a set of results inspired by the study of ALM systems but not directly related to the 'two-layer' paradigm (and thus moved to the Appendix). In particular, we consider the transmission of a delay-sensitive data stream from a single source to a single destination, where the data packets are protected by a Forward Error Correction (FEC) code and sent over multiple paths. We show that the performance of such a scheme can often be further improved. Our key observation is that the propagation times on the available paths often differ significantly, typically by 10-100 ms. We propose to exploit these differences by appropriate packet scheduling, which results in a two- to five-fold reduction in the effective loss rate.
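    To make the survivability requirement above concrete, here is a minimal sketch (in Python, with made-up toy graphs) of a brute-force check that a given logical-to-physical mapping survives every single physical link failure. It is not the SMART algorithm itself, only an illustration of the property a survivable mapping must satisfy; the helper names `connected` and `is_survivable` and the example mapping are purely illustrative.

```python
# Sketch: brute-force check that a logical-to-physical mapping is survivable,
# i.e. no single physical link failure disconnects the logical topology.
# Not the SMART algorithm from the thesis; toy data for illustration only.

def connected(nodes, edges):
    """Iterative connectivity test on an undirected graph."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == set(nodes)


def is_survivable(logical_nodes, mapping):
    """mapping: logical edge -> list of physical links its lightpath uses."""
    physical_links = {link for path in mapping.values() for link in path}
    for failed in physical_links:                    # single physical failure
        surviving = [e for e, path in mapping.items() if failed not in path]
        if not connected(logical_nodes, surviving):  # logical graph falls apart
            return False
    return True


# Toy example: logical triangle a-b-c mapped onto physical links 1..4.
mapping = {
    ("a", "b"): [1, 2],
    ("b", "c"): [3],
    ("a", "c"): [2, 4],   # shares physical link 2 with edge (a, b)
}
print(is_survivable({"a", "b", "c"}, mapping))  # False: failing link 2 isolates node a
```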

    Design of testbed and emulation tools

    The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.

    Parallel mesh adaptive techniques for complex flow simulation

    Dynamic mesh adaptation on unstructured grids, by localised refinement and derefinement, is a very efficient tool for enhancing solution accuracy and optimising computational time. One of the major drawbacks, however, lies in the projection of the new nodes created during the refinement process onto the boundary surfaces. This can be addressed by the introduction of a library capable of handling geometric properties given by a CAD (Computer Aided Design) description. This is also of particular interest for enhancing the adaptation module when the mesh is smoothed, and hence moved, and then re-projected onto the surface of the exact geometry. However, the above procedure is not always possible, owing to faulty or overly complex designs that would require a higher level of complexity in the CAD library. It is therefore paramount to have a built-in algorithm able to place the new boundary nodes closer to the geometric definition of the boundary. Such a procedure is proposed in this work, based on the idea of interpolating subdivision. In order to efficiently and effectively adapt a mesh to a solution field, the criteria used for the adaptation process need to be as accurate as possible. Due to the nature of the solution, which is obtained by discretisation of a continuum model, numerical error is intrinsic to the calculation. A posteriori error estimation allows us to assess the accuracy by using the computed solution itself. In particular, an a posteriori error estimator based on the Zienkiewicz-Zhu model is introduced. This can be used in the adaptation procedure to refine the mesh in those areas where the local error exceeds a set tolerance, hence further increasing the accuracy of the solution in those regions during the next computational step. Variants of this error estimator have also been studied and implemented. One of the important aspects of this project is that the algorithmic concepts are developed with parallelism in mind, i.e., the algorithms take into account the possibility of multiprocessor implementation. Indeed, such concepts require complex programming if one tries to parallelise them once they have been devised serially. Another important and innovative aspect of this work is the consistency of the algorithms with parallel processor execution.
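    As a rough illustration of the refinement criterion described above, the following sketch (in Python, on a 1D mesh of linear elements) computes a Zienkiewicz-Zhu-style error indicator by comparing the raw element gradients with a recovered, nodally averaged gradient, and flags elements whose indicator exceeds a tolerance. It is a simplified stand-in for the estimator used in the thesis, not its actual implementation; the function name and the toy data are assumptions.

```python
# Sketch of a Zienkiewicz-Zhu-style refinement criterion on a 1D mesh of
# linear elements: recover a smoothed nodal gradient by averaging the
# piecewise-constant element gradients, then flag every element whose
# gradient differs from the recovered one by more than a tolerance.
# Names and data are illustrative, not taken from the thesis code.

import numpy as np


def zz_refinement_flags(x, u, tol):
    """x: node coordinates, u: nodal solution values, tol: error tolerance."""
    h = np.diff(x)                      # element lengths
    grad_elem = np.diff(u) / h          # piecewise-constant element gradients

    # Recovered (smoothed) gradient: average of the gradients of the
    # elements adjacent to each node; one-sided at the boundary nodes.
    grad_node = np.empty_like(x)
    grad_node[0], grad_node[-1] = grad_elem[0], grad_elem[-1]
    grad_node[1:-1] = 0.5 * (grad_elem[:-1] + grad_elem[1:])

    # Element error indicator: L2-type norm of (recovered - raw) gradient,
    # integrated with the trapezoidal rule on each element.
    diff_left = grad_node[:-1] - grad_elem
    diff_right = grad_node[1:] - grad_elem
    eta = np.sqrt(0.5 * h * (diff_left**2 + diff_right**2))

    return eta > tol                    # True = refine this element


# Toy usage: a kink in the solution triggers refinement around x = 0.5.
x = np.linspace(0.0, 1.0, 11)
u = np.abs(x - 0.5)
print(zz_refinement_flags(x, u, tol=0.05))
```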

    Aeronautical engineering: A continuing bibliography with indexes (supplement 306)

    This bibliography lists 181 reports, articles, and other documents recently introduced into the NASA STI Database. Subject coverage includes the following: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics

    Composite Finite Elements for Trabecular Bone Microstructures

    In many medical and technical applications, numerical simulations need to be performed for objects with interfaces of geometrically complex shape. We focus on the biomechanical problem of elasticity simulations for trabecular bone microstructures. The goal of this dissertation is to develop and implement an efficient simulation tool for finite element simulations on such structures, so-called composite finite elements. We will deal with both the case of material/void interfaces (complicated domains) and the case of interfaces between different materials (discontinuous coefficients). In classical finite element simulations, geometric complexity is encoded in tetrahedral and typically unstructured meshes. Composite finite elements, in contrast, encode geometric complexity in specialized basis functions on a uniform mesh of hexahedral structure. Unlike alternative approaches (such as fictitious domain methods, generalized finite element methods, immersed interface methods, partition of unity methods, unfitted meshes, and extended finite element methods), the composite finite elements are tailored to geometry descriptions by 3D voxel image data and use the corresponding voxel grid as computational mesh, without introducing additional degrees of freedom, thus making use of efficient data structures for uniformly structured meshes. The composite finite element method for complicated domains goes back to Wolfgang Hackbusch and Stefan Sauter and restricts standard affine finite element basis functions on the uniformly structured tetrahedral grid (obtained by subdivision of each cube into six tetrahedra) to an approximation of the interior. This can be implemented as a composition of standard finite element basis functions on a local auxiliary and purely virtual grid by which we approximate the interface. In the case of discontinuous coefficients, the same local auxiliary composition approach is used. Composition weights are obtained by solving local interpolation problems, for which coupling conditions across the interface need to be determined. These depend both on the local interface geometry and on the (scalar or tensor-valued) material coefficients on both sides of the interface. We consider heat diffusion as a scalar model problem and linear elasticity as a vector-valued model problem to develop and implement the composite finite elements. Uniform cubic meshes contain a natural hierarchy of coarsened grids, which allows us to implement a multigrid solver for the case of complicated domains. Besides simulations of single loading cases, we also apply the composite finite element method to the problem of determining effective material properties, e.g. for multiscale simulations. For periodic microstructures, this is achieved by solving corrector problems on the fundamental cells using affine-periodic boundary conditions corresponding to uniaxial compression and shearing. For statistically periodic trabecular structures, representative fundamental cells can be identified but do not permit the periodic approach. Instead, macroscopic displacements are imposed using the same set of affine-periodic boundary conditions as before, now applied as Dirichlet conditions on all faces. The stress response of the material is subsequently computed only on an interior subdomain to prevent artificial stiffening near the boundary.
We finally check for orthotropy of the macroscopic elasticity tensor and identify its axes.
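    As a small illustration of the "voxel grid as computational mesh" idea mentioned above, the following sketch (in Python) thresholds a 3D image, keeps the voxels containing material as hexahedral cells, and collects the grid nodes they touch as degrees of freedom. It covers only the mesh setup on image data, not the construction of the composite basis functions themselves; the function name, threshold, and synthetic image are assumptions.

```python
# Sketch: derive a uniform hexahedral computational mesh directly from a
# 3D voxel image, as in the "voxel grid as mesh" setting described above.
# Synthetic toy data; illustrative only.

import numpy as np


def voxel_mesh(image, threshold):
    """image: 3D scalar array (one value per voxel); returns active cells and nodes."""
    inside = image > threshold                     # boolean mask of material voxels
    cells = np.argwhere(inside)                    # (i, j, k) index of each active cell

    # Each hexahedral cell (i, j, k) touches the eight nodes (i+di, j+dj, k+dk)
    # of the uniform node grid, with di, dj, dk in {0, 1}.
    offsets = np.array([[di, dj, dk] for di in (0, 1) for dj in (0, 1) for dk in (0, 1)])
    nodes = {tuple(c + o) for c in cells for o in offsets}

    return cells, nodes


# Toy usage: a small synthetic "scan" with a solid block in one corner.
image = np.zeros((4, 4, 4))
image[:2, :2, :2] = 1.0
cells, nodes = voxel_mesh(image, threshold=0.5)
print(len(cells), "active cells,", len(nodes), "grid nodes")  # 8 cells, 27 nodes
```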

    Algorithm Engineering for fundamental Sorting and Graph Problems

    Fundamental algorithms form a basic body of knowledge for every computer science undergraduate and every professional programmer: a set of basic techniques one can find in any (good) coursebook on algorithms and data structures. In this thesis we try to close the gap between theoretically worst-case-optimal classical algorithms and the real-world circumstances one faces, imposed by data size, limited main memory, and available parallelism.

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.