
    Robust and parallel scalable iterative solutions for large-scale finite cell analyses

    The finite cell method is a highly flexible discretization technique for numerical analysis on domains with complex geometries. By using a non-boundary-conforming computational domain that can be meshed easily, automated computations on a wide range of geometrical models can be performed. The application of the finite cell method, and of other immersed methods, to large real-life and industrial problems is often limited by the conditioning problems associated with these methods. These conditioning problems have caused researchers to resort to direct solution methods, which significantly limit the maximum size of solvable systems. Iterative solvers are better suited to large-scale computations than their direct counterparts due to their lower memory requirements and suitability for parallel computing. These benefits can, however, only be exploited when systems are properly conditioned. In this contribution we present an additive Schwarz-type preconditioner that enables efficient and parallel scalable iterative solutions of large-scale multi-level hp-refined finite cell analyses.
    Comment: 32 pages, 17 figures
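The additive Schwarz idea mentioned above, solving the system locally on overlapping subdomains and summing the corrections, can be sketched in a few lines. This is a hypothetical one-level illustration on a small 1D Laplacian, not the preconditioner developed in the paper; all function names are invented for the example.

```python
def laplacian_1d(n):
    """Dense n x n 1D Laplacian (tridiagonal 2, -1) as a toy operator."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    return A

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def additive_schwarz_apply(A, subdomains, r):
    """Preconditioner action z = sum_i R_i^T (R_i A R_i^T)^{-1} R_i r."""
    z = [0.0] * len(r)
    for idx in subdomains:
        # Restrict the residual and the operator to the subdomain,
        # solve locally, then prolong the correction back and add it.
        A_i = [[A[p][q] for q in idx] for p in idx]
        r_i = [r[p] for p in idx]
        z_i = solve_dense(A_i, r_i)
        for local, p in enumerate(idx):
            z[p] += z_i[local]
    return z

n = 8
A = laplacian_1d(n)
# Two overlapping subdomains (overlap at indices 3 and 4).
subdomains = [list(range(0, 5)), list(range(3, 8))]
z = additive_schwarz_apply(A, subdomains, [1.0] * n)
```

In practice such a preconditioner would be applied inside a Krylov solver (e.g. CG) with the local solves running in parallel, one subdomain per process.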

    Hybrid 3D simulation methods for capturing damage processes in multiphase composite materials

    Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources, in terms of computing power and main memory, to solve large-scale discretized models numerically in 3D. Because of the very high number of degrees of freedom, which can rapidly grow into the two-digit million range, the limited hardware resources must be utilized as efficiently as possible so that the numerical algorithms can execute in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis. Today, the numerical simulation of damage effects in heterogeneous materials is performed by adapting multiscale methods. A consistent model in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing computational methods and the development of new ones, which makes it possible to take advantage of massively clustered computer hardware resources.
In the field of numerical simulation in materials science, e.g. in the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves difficult for the type of implementation method used for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computation time. In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes to build the finite element assembly. A memory-efficient, iterative, and parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations. The author therefore recommends applying the substructuring method to hybrid meshes: it respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach to nonlinear finite element analysis, based on sequentially linear analysis, was also implemented with a view to scalable HPC.
The incremental-iterative procedure of nonlinear finite element analysis (FEA) was then replaced by a sequence of linear FE analyses whenever damage occurred in critical regions, an approach known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
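The saw-tooth idea can be illustrated on a toy model. In the sketch below (hypothetical code, not the thesis implementation), a bundle of parallel unit-length bars is loaded by a prescribed elongation; each "nonlinear" step is a scaled linear analysis that brings the most critical bar exactly to its strength, after which that bar's secant stiffness is reduced and the linear analysis is repeated.

```python
def saw_tooth(E, strength, reduction=0.5, steps=12):
    """Saw-tooth curve of (elongation u, bundle force F) for parallel bars.

    E[i] is the secant stiffness of bar i (degraded during the run);
    strength[i] is its stress limit. Bar stress at elongation u is E[i]*u.
    """
    E = E[:]                      # work on a copy; stiffness is degraded
    curve = []
    for _ in range(steps):
        intact = [i for i, e in enumerate(E) if e > 1e-12]
        if not intact:
            break
        # Linear analysis: scale the elongation so that the most
        # critical bar exactly reaches its strength.
        crit = min(intact, key=lambda i: strength[i] / E[i])
        u = strength[crit] / E[crit]      # critical elongation
        F = sum(E) * u                    # total bundle force at u
        curve.append((u, F))
        E[crit] *= reduction              # saw-tooth stiffness reduction
    return curve

# Two bars of stiffness 10 with strengths 1 and 2: the weaker bar
# triggers the first saw-tooth event at u = 0.1, F = 2.0.
curve = saw_tooth([10.0, 10.0], [1.0, 2.0])
```

The resulting force-elongation curve rises and drops repeatedly (the "teeth"), approximating a softening response with a sequence of purely linear solves, which is what makes the scheme attractive for scalable HPC.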

    Deep Learning in Visual Computing and Signal Processing


    Wireless Monitoring Systems for Long-Term Reliability Assessment of Bridge Structures based on Compressed Sensing and Data-Driven Interrogation Methods.

    The state of the nation’s highway bridges has garnered significant public attention due to large inventories of aging assets and insufficient funds for repair. Current management methods are based on visual inspections, which have many known limitations, including reliance on surface evidence of deterioration and the subjectivity introduced by trained inspectors. To address the limitations of current inspection practice, structural health monitoring (SHM) systems can be used to provide quantitative measures of structural behavior and an objective basis for condition assessment. SHM systems are intended to be a cost-effective monitoring technology that also automates the processing of data to characterize damage and provide decision information to asset managers. Unfortunately, this realization of SHM systems does not currently exist. For SHM to be realized as a decision-support tool for bridge owners engaged in performance- and risk-based asset management, technological hurdles must still be overcome. This thesis focuses on advancing wireless SHM systems. An innovative wireless monitoring system was designed for permanent deployment on bridges in cold northern climates, which pose an added challenge because the potential for solar harvesting is reduced and battery charging is slowed. First, energy-efficient usage strategies for wireless sensor networks (WSNs) were advanced. Because WSN energy consumption is proportional to the amount of data transmitted, data reduction strategies are prioritized. A novel data compression paradigm termed compressed sensing is advanced for embedment in a wireless sensor microcontroller. In addition, fatigue monitoring algorithms are embedded for local data processing, leading to dramatic data reductions. In the second part of the thesis, a radical top-down design strategy (in contrast to global vibration strategies) for a monitoring system is explored to target specific damage concerns of bridge owners.
Data-driven algorithmic approaches are created for the statistical performance characterization of long-term bridge response. Statistical process control and reliability-index monitoring are advanced as a scalable and autonomous means of transforming data into information relevant to bridge risk management. The wireless monitoring system architecture is validated on the Telegraph Road Bridge (Monroe, Michigan), a multi-girder short-span highway bridge representative of a major fraction of the U.S. national inventory.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/116749/1/ocosean_1.pd
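A minimal sketch of the statistical process control idea, assuming a Shewhart-style control chart on a scalar response feature: control limits are fit on a baseline (healthy) window, and later samples outside the band raise alarms. The data, thresholds, and function names below are illustrative, not the thesis implementation.

```python
import math
import random

def control_limits(baseline, k=3.0):
    """Lower/upper control limits: baseline mean +/- k standard deviations."""
    n = len(baseline)
    mu = sum(baseline) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in baseline) / (n - 1))
    return mu - k * sigma, mu + k * sigma

def out_of_control(samples, lcl, ucl):
    """Indices of monitoring samples that fall outside the control band."""
    return [i for i, x in enumerate(samples) if not lcl <= x <= ucl]

random.seed(0)
# Baseline phase: feature extracted from the healthy structure.
baseline = [random.gauss(0.0, 1.0) for _ in range(500)]
lcl, ucl = control_limits(baseline)

# Monitoring phase: a simulated feature shift (e.g. stiffness loss)
# appears after sample index 50.
monitoring = [random.gauss(0.0, 1.0) for _ in range(50)] + \
             [random.gauss(6.0, 1.0) for _ in range(50)]
alarms = out_of_control(monitoring, lcl, ucl)
```

On a real deployment the feature would be something like a strain-derived reliability index per truck event, with limits refit periodically to account for seasonal (e.g. temperature) variation.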

    Roadmap on measurement technologies for next generation structural health monitoring systems

    Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is designing a sensing solution that can yield actionable information. This is a difficult task to accomplish cost-effectively because of the large surfaces under consideration and the localized nature of typical defects and damage. There have been significant research efforts in empowering conventional measurement technologies for SHM applications in order to improve the performance of the condition assessment process. Yet the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path for research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the use of a number of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example by also serving structural functions. Remote sensing solutions are contactless, for example cell phones, drones, and satellites; the category also includes remotely controlled robots.

    Matrix-free voxel-based finite element method for materials with complicated microstructure

    Modern imaging techniques such as micro-computed tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution, so-called image-based analysis. In 3D especially, however, discretizations of these models easily reach 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods so as to reduce first the memory demand and then the computation time, and thereby enable the execution of image-based analyses on modern desktop computers. The numerical model is a straightforward grid discretization of the voxel-based (pixels with a third dimension) geometry, which omits boundary detection algorithms and allows a reduced storage of the finite element data structures and a matrix-free solution algorithm. This in turn reduces the effort of almost all of the applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented: an element-by-element method and a node-edge variant. The efficient iterative solution method of conjugate gradients is used with matrix-free applicable preconditioners such as the Jacobi method and the especially well-suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements, which contain different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods is retained.
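The matrix-free idea can be illustrated with a preconditioned conjugate gradient solver whose operator is applied stencil-wise, so that no global matrix is ever assembled. The sketch below (hypothetical code, not the thesis implementation) uses a 1D chain of voxels with per-voxel stiffness and a Jacobi preconditioner read directly off the stencil diagonal.

```python
def apply_stencil(k, u):
    """Matrix-free product y = K u for a fixed-free chain of elements.

    k[e] is the stiffness of element e (between nodes e and e+1);
    u[i] is the displacement of free node i+1 (node 0 is clamped)."""
    n = len(u)
    y = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0      # clamped boundary at node 0
        y[i] = k[i] * (u[i] - left)
        if i + 1 < n:
            y[i] -= k[i + 1] * (u[i + 1] - u[i])
    return y

def jacobi_pcg(k, f, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned CG; the diagonal is read off the stencil."""
    n = len(f)
    diag = [k[i] + (k[i + 1] if i + 1 < n else 0.0) for i in range(n)]
    x = [0.0] * n
    r = f[:]
    z = [r[i] / diag[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = apply_stencil(k, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(v * v for v in r) ** 0.5 < tol:
            break
        z = [r[i] / diag[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

n = 16
k = [1.0] * n                  # uniform voxel stiffness
f = [0.0] * (n - 1) + [1.0]    # unit load at the free end
u = jacobi_pcg(k, f)           # expected: u[i] = i + 1, tip u = 16
```

Only stencil applications and vectors are stored, which is what keeps the memory footprint independent of the matrix bandwidth; swapping the Jacobi step for a multigrid cycle would follow the same pattern.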