    Petri Nets Modeling of Dead-End Refinement Problems in a 3D Anisotropic hp-Adaptive Finite Element Method

    We consider two graph-grammar-based Petri net models for anisotropic refinements of three-dimensional hexahedral grids. The first detects possible dead-end problems during the graph-grammar-based anisotropic refinements of the mesh. The second employs an enhanced graph grammar model that is dead-end free. We apply the resulting algorithm to the simulation of resistivity logging measurements for estimating the location of underground oil and/or gas formations. The graph-grammar-based Petri net models allow us to fix the self-adaptive mesh refinement algorithm and to finish the adaptive computations with the accuracy required by the numerical solution.
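    As a side note on what "dead-end" means here, the following is a minimal sketch in Python (my own toy net, not the paper's refinement model) of a Petri-net dead-end check: a marking is a dead end when no transition is enabled, so the modelled refinement process cannot continue. The places ("face", "elemA", "elemB") and the two competing refinement transitions are purely hypothetical.

        def enabled(pre, marking):
            # A transition is enabled when every input place holds enough tokens.
            return all(marking.get(p, 0) >= n for p, n in pre.items())

        def fire(pre, post, marking):
            # Fire a transition: consume tokens from input places, produce on output places.
            m = dict(marking)
            for p, n in pre.items():
                m[p] -= n
            for p, n in post.items():
                m[p] = m.get(p, 0) + n
            return m

        def is_dead_end(transitions, marking):
            # Dead end: no transition of the net is enabled in this marking.
            return not any(enabled(pre, marking) for pre, _ in transitions)

        # Two hypothetical refinement transitions competing for one shared face token.
        transitions = [
            ({"face": 1, "elemA": 1}, {"refinedA": 1}),  # refine element A across the face
            ({"face": 1, "elemB": 1}, {"refinedB": 1}),  # refine element B across the face
        ]
        m = {"face": 1, "elemA": 1, "elemB": 1}
        m = fire(*transitions[0], m)        # element A consumes the shared face token
        print(is_dead_end(transitions, m))  # True: element B can no longer refine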

    Anisotropic 2D mesh adaptation in hp-adaptive FEM

    The paper presents a grammar for anisotropic two-dimensional mesh adaptation in the hp-adaptive Finite Element Method with rectangular elements. It turns out that a straightforward approach to modeling this process via grammar productions leads to a potential deadlock in the h-adaptation of the mesh. This fact is shown on a Petri net model of an exemplary adaptation. Therefore, auxiliary productions are added to the grammar to ensure that no sequence of productions allowed by the grammar leads to a deadlock state. The fact that the enhanced grammar is deadlock-free is proven via a corresponding Petri net model. The proof has been performed by means of reachability graph construction and analysis. The paper is complemented with numerical simulations of magnetotelluric measurements where the deadlock problem occurred.
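    The deadlock-freeness proof mentioned above relies on reachability-graph analysis; the sketch below (Python, my own toy net rather than the paper's model) shows the general idea: enumerate every marking reachable from the initial one and flag non-final markings in which no transition is enabled. The place and transition names are hypothetical.

        from collections import deque

        def enabled(pre, m):
            return all(m.get(p, 0) >= n for p, n in pre.items())

        def fire(pre, post, m):
            m = dict(m)
            for p, n in pre.items():
                m[p] -= n
            for p, n in post.items():
                m[p] = m.get(p, 0) + n
            return m

        def deadlock_states(transitions, initial, is_final):
            # Breadth-first construction of the reachability graph; collect deadlocks.
            seen = {frozenset(initial.items())}
            queue, deadlocks = deque([initial]), []
            while queue:
                m = queue.popleft()
                successors = [fire(pre, post, m)
                              for pre, post in transitions if enabled(pre, m)]
                if not successors and not is_final(m):
                    deadlocks.append(m)          # reachable marking that is stuck
                for s in successors:
                    key = frozenset(s.items())
                    if key not in seen:
                        seen.add(key)
                        queue.append(s)
            return deadlocks

        # Hypothetical net: elements A and B should both adapt, but they compete
        # for a single shared-edge token, so one of them always gets stuck.
        transitions = [({"edge": 1, "A": 1}, {"Adone": 1}),
                       ({"edge": 1, "B": 1}, {"Bdone": 1})]
        initial = {"edge": 1, "A": 1, "B": 1}
        is_final = lambda m: m.get("Adone", 0) > 0 and m.get("Bdone", 0) > 0
        print(deadlock_states(transitions, initial, is_final))  # two deadlock markings

    An auxiliary production that returns the shared-edge token after each adaptation would make this toy net deadlock-free, which loosely mirrors the role of the auxiliary productions added to the grammar in the paper.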

    Hypergraph Grammars in hp-adaptive Finite Element Method

    The paper presents a hypergraph grammar for modelling the hp-adaptive finite element method algorithm with rectangular elements. The finite element mesh is represented by a hypergraph, and all mesh transformations are modelled by means of hypergraph grammar rules. These rules allow us to generate the initial mesh, assign polynomial orders to the element nodes, generate the matrix for each element, solve the problem, and perform the hp-adaptation.
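    To make the representation concrete, here is a minimal sketch (Python, my own simplified data structure rather than the paper's grammar) in which each rectangular element is a hyperedge over its four corner vertices, and a single h-refinement production replaces that hyperedge with four child hyperedges; the class and method names are hypothetical.

        class HypergraphMesh:
            def __init__(self):
                self.vertices = {}   # vertex id -> (x, y)
                self.elements = {}   # element id -> 4 corner vertex ids (a hyperedge)
                self.order = {}      # element id -> polynomial order p (hp-adaptivity)
                self._next = 0

            def add_vertex(self, xy):
                self._next += 1
                self.vertices[self._next] = xy
                return self._next

            def add_element(self, corners, p=1):
                self._next += 1
                self.elements[self._next] = tuple(corners)
                self.order[self._next] = p
                return self._next

            def refine(self, eid):
                # Production: replace one element hyperedge by 2x2 child hyperedges.
                c0, c1, c2, c3 = self.elements.pop(eid)  # corners assumed CCW from lower-left
                p = self.order.pop(eid)
                (x0, y0), (x1, y1) = self.vertices[c0], self.vertices[c2]
                xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
                s = self.add_vertex((xm, y0)); e = self.add_vertex((x1, ym))
                n = self.add_vertex((xm, y1)); w = self.add_vertex((x0, ym))
                m = self.add_vertex((xm, ym))
                for corners in [(c0, s, m, w), (s, c1, e, m),
                                (m, e, c2, n), (w, m, n, c3)]:
                    self.add_element(corners, p)         # children inherit the order p

        mesh = HypergraphMesh()
        v = [mesh.add_vertex(xy) for xy in [(0, 0), (1, 0), (1, 1), (0, 1)]]
        mesh.refine(mesh.add_element(v, p=2))
        print(len(mesh.elements), len(mesh.vertices))    # 4 elements, 9 vertices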

    Applications of a hyper-graph grammar system in adaptive finite-element computations

    This paper describes the application of a hyper-graph grammar system for modeling a three-dimensional adaptive finite element method. The hyper-graph grammar approach makes it possible to obtain a linear computational cost for adaptive mesh transformations and for computations performed over the refined meshes. The computations are done by a hyper-graph grammar driven algorithm applicable to three-dimensional problems. For the case of typical refinements performed towards a point or an edge, the algorithm yields linear computational cost with respect to the mesh nodes for its sequential execution and logarithmic cost for its parallel execution. The hyper-graph grammar productions are the mathematical formalism used to describe the computational algorithm implementing the finite element method. Each production indicates the smallest atomic task that can be executed concurrently. The mesh transformations and computations using the hyper-graph grammar-based approach have been tested in the GALOIS environment. We conclude the paper with numerical results obtained on a shared-memory Linux cluster node for three-dimensional computational meshes refined towards a point, an edge, and a face.
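    The linear-versus-logarithmic claim can be illustrated with a crude cost model (my own sketch in Python, not the paper's analysis): for an elimination tree with a cost attached to each node, a sequential traversal pays the sum of all node costs, while a parallel traversal with enough workers pays at least the heaviest root-to-leaf path, so a balanced tree with bounded node costs gives roughly linear sequential and logarithmic parallel cost. The tree and the unit costs below are hypothetical.

        def sequential_cost(tree, cost, node):
            # Total work: every node of the elimination tree is processed once.
            return cost[node] + sum(sequential_cost(tree, cost, c)
                                    for c in tree.get(node, []))

        def parallel_cost(tree, cost, node):
            # Critical path: siblings run concurrently, a parent waits for its children.
            children = tree.get(node, [])
            longest = max((parallel_cost(tree, cost, c) for c in children), default=0)
            return cost[node] + longest

        # Hypothetical balanced binary elimination tree with unit-cost fronts.
        tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
        cost = {n: 1 for n in ["root", "a", "b", "a1", "a2", "b1", "b2"]}
        print(sequential_cost(tree, cost, "root"),  # 7: grows with the number of nodes
              parallel_cost(tree, cost, "root"))    # 3: grows with the tree depth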

    Hypergrammar-based parallel multi-frontal solver for grids with point singularities

    This paper describes the application of hypergraph grammars to drive a linear computational cost solver for grids with point singularities. Such graph grammar productions are the first mathematical formalism used to describe the solver algorithm, and each of them indicates the smallest atomic task that can be executed in parallel, which is very useful in the case of parallel execution. In particular, the partial order of execution of graph grammar productions can be found, and the sets of independent graph grammar productions can be localized. They can be scheduled set by set onto a shared-memory parallel machine. The graph-grammar-based solver has been implemented with NVIDIA CUDA for GPU. The graph grammar productions are accompanied by numerical results for the 2D case. We show that our graph-grammar-based solver with a GPU accelerator is an order of magnitude faster than the state-of-the-art MUMPS solver.
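    The "sets of independent productions scheduled set by set" idea can be sketched as level scheduling of a task graph (Python, my own illustration rather than the paper's CUDA implementation): productions whose dependencies are already satisfied form one set, that set is executed concurrently, and the process repeats. The task names and the dependency format are hypothetical.

        from collections import defaultdict, deque

        def schedule_by_levels(deps):
            # deps: task -> set of tasks it depends on; returns lists of independent tasks.
            indegree = {t: len(d) for t, d in deps.items()}
            children = defaultdict(list)
            for t, d in deps.items():
                for parent in d:
                    children[parent].append(t)
            level = {t: 0 for t, d in deps.items() if not d}   # sources start at level 0
            queue = deque(level)
            while queue:
                t = queue.popleft()
                for c in children[t]:
                    level[c] = max(level.get(c, 0), level[t] + 1)
                    indegree[c] -= 1
                    if indegree[c] == 0:
                        queue.append(c)
            sets = defaultdict(list)
            for t, lv in level.items():
                sets[lv].append(t)
            return [sets[lv] for lv in sorted(sets)]

        # Toy elimination tree: leaf fronts are independent, parent merges wait for them.
        deps = {"leaf1": set(), "leaf2": set(), "leaf3": set(), "leaf4": set(),
                "merge12": {"leaf1", "leaf2"}, "merge34": {"leaf3", "leaf4"},
                "root": {"merge12", "merge34"}}
        print(schedule_by_levels(deps))
        # [['leaf1', 'leaf2', 'leaf3', 'leaf4'], ['merge12', 'merge34'], ['root']]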

    Graph grammar-based multi-frontal parallel direct solver for two-dimensional isogeometric analysis

    This paper introduces a graph-grammar-based model for developing a multi-thread, multi-frontal parallel direct solver for the two-dimensional isogeometric finite element method. The execution of the solver algorithm is expressed as a sequence of graph grammar productions. At the beginning, productions construct the elimination tree, with leaves corresponding to finite elements. The following sequence of graph grammar productions generates element frontal matrices at leaf nodes, merges matrices at parent nodes, and eliminates rows corresponding to fully assembled degrees of freedom. Finally, graph grammar productions are responsible for the root problem solution and the recursive backward substitutions. Expressing the solver algorithm by graph grammar productions allows us to explore the concurrency of the algorithm. The graph grammar productions are grouped into sets of independent tasks that can be executed concurrently. The resulting concurrent multi-frontal solver algorithm is implemented and tested on an NVIDIA GPU, providing O(N log N) execution time complexity, where N is the number of degrees of freedom. We have confirmed this complexity by solving up to one million degrees of freedom on a 448-core GPU.
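    The leaf/merge/eliminate/backward-substitute steps can be seen on a tiny example. The sketch below (Python with NumPy, my own toy 1D Poisson problem rather than the isogeometric solver of the paper) uses two linear elements sharing one degree of freedom: each leaf front condenses its private unknown into a Schur complement, the root merges the two contributions and solves for the shared unknown, and backward substitution recovers the condensed unknowns.

        import numpy as np

        def element_front(h):
            # Local stiffness and load of a linear element of size h for -u'' = 1 (toy problem).
            K = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
            f = (h / 2.0) * np.array([1.0, 1.0])
            return K, f

        def eliminate(K, f, k):
            # Partially eliminate the first k (fully assembled) unknowns of a front and
            # return the Schur complement plus the data needed for backward substitution.
            A, B, C, D = K[:k, :k], K[:k, k:], K[k:, :k], K[k:, k:]
            Ainv = np.linalg.inv(A)
            S = D - C @ Ainv @ B            # contribution passed up to the parent front
            g = f[k:] - C @ Ainv @ f[:k]    # condensed right-hand side
            return S, g, (Ainv, B, f[:k])

        def back_substitute(data, u_interface):
            Ainv, B, f_loc = data
            return Ainv @ (f_loc - B @ u_interface)

        # Leaf fronts: element 1 owns DOFs (0, 1), element 2 owns DOFs (1, 2); DOF 1 is shared.
        K1, f1 = element_front(0.5)
        K2, f2 = element_front(0.5)

        # Dirichlet condition u(0) = 0 on DOF 0 so that the toy system is nonsingular.
        K1[0, :] = 0.0; K1[:, 0] = 0.0; K1[0, 0] = 1.0; f1[0] = 0.0

        # Leaf eliminations: each leaf condenses its private DOF (DOF 0 and DOF 2).
        S1, g1, data1 = eliminate(K1, f1, 1)
        P = np.array([[0.0, 1.0], [1.0, 0.0]])           # reorder front 2 so DOF 2 comes first
        S2, g2, data2 = eliminate(P @ K2 @ P.T, P @ f2, 1)

        # Root front: merge the Schur complements on the shared DOF 1 and solve.
        u1 = np.linalg.solve(S1 + S2, g1 + g2)

        # Backward substitution (one level deep here) recovers DOFs 0 and 2.
        u0, u2 = back_substitute(data1, u1), back_substitute(data2, u1)
        print(u0.item(), u1.item(), u2.item())           # 0.0 0.375 0.5

    With a natural boundary condition at x = 1 the exact solution of this toy problem is u(x) = x - x^2/2, so the printed values match u(0), u(0.5) and u(1).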

    A summary of my twenty years of research according to Google Scholars

    I am David Pardo, a researcher from Spain working mainly on numerical analysis applied to geophysics. I am 40 years old, and over a decade ago, I realized that my performance as a researcher was mainly evaluated based on a number called the "h-index". This single number simultaneously contains information about the number of publications and the received citations. However, different h-indices associated with my name appeared on different webpages. A quick search allowed me to find the most convenient (largest) h-index in my case. It corresponded to Google Scholars. In this work, I naively analyze a few curious facts I found about my Google Scholars profile and, at the same time, this manuscript serves as an experiment to see if it may serve to increase my Google Scholars h-index.
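    For reference, the h-index the author refers to is the largest h such that at least h of one's publications have at least h citations each; below is a short sketch of that standard definition (Python, with made-up citation counts).

        def h_index(citations):
            # Largest h such that h papers have at least h citations each.
            ranked = sorted(citations, reverse=True)
            return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

        print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each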
