
    The DUNE-ALUGrid Module

    In this paper we present the new DUNE-ALUGrid module. This module contains a major overhaul of the sources from the ALUGrid library and the bindings to the DUNE software framework. The main changes include user-defined load balancing, parallel grid construction, and a redesign of the 2d grid, which can now also be used for parallel computations. In addition, many improvements have been introduced into the code to increase the parallel efficiency and to decrease the memory footprint. The original ALUGrid library is widely used within the DUNE community due to its good parallel performance for problems requiring local adaptivity and dynamic load balancing. Therefore, this new module will benefit a number of DUNE users. In addition, we have added features to increase the range of problems for which the grid manager can be used, for example, introducing a 3d tetrahedral grid using a parallel newest vertex bisection algorithm for conforming grid refinement. In this paper we discuss the new features and extensions to the DUNE interface, and explain for various examples how the code is used in parallel environments. Comment: 25 pages, 11 figures
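    The conforming refinement mentioned above is easiest to picture in 2d. Below is a minimal Python sketch of newest vertex bisection for a single triangle; DUNE-ALUGrid applies the (more involved) 3d tetrahedral analogue of this rule, and the vertex-ordering convention used here is an assumption for illustration, not the module's actual data layout.

```python
# Sketch of 2d newest-vertex bisection. Convention (assumed for this
# sketch): a triangle is (v0, v1, v2), where v2 is the newest vertex
# and the refinement edge is v0-v1.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def bisect(tri):
    """Split a triangle across the edge opposite its newest vertex.

    Both children receive the edge midpoint as their newest (last)
    vertex, which bounds the number of similarity classes and keeps
    the triangles shape-regular under repeated refinement.
    """
    v0, v1, v2 = tri
    m = midpoint(v0, v1)
    return (v2, v0, m), (v1, v2, m)

def area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
```

    In a full mesh, bisecting one triangle forces a compatible bisection of the neighbour across the refinement edge (recursively), which is what keeps the grid conforming; in the parallel setting described in the paper these refinement cascades must additionally be agreed upon across process boundaries.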

    Automatic construction of boundary parametrizations for geometric multigrid solvers

    We present an algorithm that constructs parametrizations of boundary and interface surfaces automatically. Starting with high-resolution triangulated surfaces describing the computational domains, we iteratively simplify the surfaces, yielding a coarse approximation of the boundaries with the same topological type. While simplifying, we construct a function that is defined on the coarse surface and whose image is the original surface. This function allows access to the correct shape and surface normals of the original surface, as well as to any kind of data defined on it. Such information can be used by geometric multigrid solvers doing adaptive mesh refinement. Our algorithm runs stably on all types of input surfaces, including those that describe domains consisting of several materials. We have used our method with success in different fields, and we discuss examples from structural mechanics and biomechanics.
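    The core idea, simplify the geometry but keep a map back to the original, can be illustrated in one dimension. The sketch below simplifies a polyline while recording, for each coarse segment, the original vertices it covers; a solver refining the coarse boundary could then query the exact original positions rather than the simplified ones. This is a toy analogue of the paper's surface construction, and all names here are illustrative, not the authors' API.

```python
# Toy 1d analogue of boundary parametrization: simplify a polyline,
# but keep a map from each coarse segment back to the original
# vertices it replaces, so queries can still see the true boundary.

def simplify_with_parametrization(points, keep_every):
    """Keep every `keep_every`-th vertex (plus the last one) as the
    coarse polyline; attach every original vertex to the coarse
    segment that now covers it."""
    coarse = points[::keep_every]
    if coarse[-1] != points[-1]:
        coarse.append(points[-1])
    # segment_map[i] lists the original vertices on coarse segment i
    segment_map = {i: [] for i in range(len(coarse) - 1)}
    for idx, p in enumerate(points):
        seg = min(idx // keep_every, len(coarse) - 2)
        segment_map[seg].append(p)
    return coarse, segment_map
```

    On a surface the same bookkeeping happens per coarse triangle during iterative simplification, so the function from the coarse surface onto the original one is available wherever adaptive refinement later needs true geometry or surface data.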

    Investigation of parallel efficiency of an adaptive flow solver

    Parallel efficiency, in a domain decomposition based approach, strongly depends on the partitioning quality. For an adaptive simulation, partitioning quality is lost due to the dynamic modification of the computational mesh. Maintaining high parallel efficiency requires rebalancing of the numerical load. This paper presents numerical experiments with an adaptive and dynamically load-balanced flow application. It is shown that high parallel efficiency is maintained through a relatively inexpensive process of repartitioning.
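    The rebalancing decision described above can be sketched generically: after adaptation changes the per-partition element counts, measure the load imbalance and, if it exceeds a tolerance, migrate work from heavy to light ranks. This is a minimal greedy-diffusion sketch under assumed unit element weights, not the paper's actual repartitioning scheme (which would typically use a graph partitioner).

```python
# Minimal sketch of a load-rebalancing decision for adaptive parallel
# solvers (assumes unit element weights; real codes delegate this to
# a graph/mesh partitioner).

def imbalance(loads):
    """Ratio of the heaviest partition to the average load; 1.0 is
    perfect balance."""
    return max(loads) / (sum(loads) / len(loads))

def rebalance(loads, tolerance=1.05):
    """Greedy diffusion: repeatedly move one element from the most
    loaded rank to the least loaded one until within tolerance."""
    loads = list(loads)
    while imbalance(loads) > tolerance:
        src = loads.index(max(loads))
        dst = loads.index(min(loads))
        if loads[src] - loads[dst] <= 1:
            break  # as balanced as integer loads allow
        loads[src] -= 1
        loads[dst] += 1
    return loads
```

    The cost trade-off the paper measures is exactly this: each move stands for migrating mesh elements (communication), which pays off only if the regained efficiency outweighs the migration cost.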

    Two-dimensional evaluation of atham-fluidity, a nonhydrostatic atmospheric model using mixed continuous/discontinuous finite elements and anisotropic grid optimization

    This paper presents the first attempt to apply the compressible nonhydrostatic Active Tracer High-Resolution Atmospheric Model–Fluidity (ATHAM-Fluidity) solver to a series of idealized atmospheric test cases. ATHAM-Fluidity uses a hybrid finite-element discretization where pressure is solved on a continuous second-order grid while momentum and scalars are computed on a first-order discontinuous grid. ATHAM-Fluidity operates on two- and three-dimensional unstructured meshes, using triangular or tetrahedral elements, respectively, with the possibility to employ an anisotropic mesh optimization algorithm for automatic grid refinement and coarsening during run time. The solver is evaluated using two-dimensional-only dry idealized test cases covering a wide range of atmospheric applications. The first three cases, representative of atmospheric convection, reveal the ability of ATHAM-Fluidity to accurately simulate the evolution of large-scale flow features in neutral atmospheres at rest. Grid convergence without adaptivity, as well as the performance of the Hermite–Weighted Essentially Nonoscillatory (Hermite-WENO) slope limiter, is discussed. These cases are also used to test the grid optimization algorithm implemented in ATHAM-Fluidity. Adaptivity can result in up to a sixfold decrease in computational time and a fivefold decrease in total element number for the same finest resolution. However, substantial discrepancies are found between the uniform and adapted grid results, suggesting the need to improve the reliability of the approach. In the last three cases, corresponding to atmospheric gravity waves with and without orography, the model's ability to capture the amplitude and propagation of weak stationary waves is demonstrated.
    This work constitutes the first step toward the development of a new comprehensive limited-area atmospheric model. This research has received funding from the European Union Seventh Framework Program (FP7/2007-2013) under Grant agreement 603663 for the research project PEARL (Preparing for Extreme And Rare events in coastaL regions). The EPSRC multiphase program grant MEMPHIS is also acknowledged. This is the author accepted manuscript. The final version is available from the American Meteorological Society via http://dx.doi.org/10.1175/MWR-D-15-0398.
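    The refine-where-the-solution-varies idea behind the adaptivity results can be shown in one dimension. The sketch below splits cells whose solution jump exceeds a threshold; ATHAM-Fluidity's actual algorithm is metric-based and anisotropic on unstructured meshes, so this is only a conceptual illustration, and the function and parameter names are assumptions.

```python
# Conceptual 1d sketch of indicator-driven h-adaptivity: split a cell
# when the solution varies strongly across it. (ATHAM-Fluidity uses a
# metric-based anisotropic algorithm; this only shows the basic idea.)

def adapt(cells, f, refine_tol):
    """cells: list of (left, right) intervals. Split any cell over
    which the jump of f exceeds refine_tol."""
    refined = []
    for (a, b) in cells:
        if abs(f(b) - f(a)) > refine_tol:
            m = (a + b) / 2.0
            refined.extend([(a, m), (m, b)])
        else:
            refined.append((a, b))
    return refined
```

    Iterating this concentrates resolution near sharp features, which is the mechanism behind the reported sixfold runtime and fivefold element-count savings; the paper's discrepancies between uniform and adapted runs show why the error indicator and interpolation between meshes must be chosen carefully.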

    Look Before You Leap: An Adaptive Processing Strategy For Multi-Criteria Decision Support Queries

    In recent years, we have witnessed a massive acquisition of data and an increasing need to support multi-criteria decision support (MCDS) queries efficiently. Pareto-optimal queries, also known as skyline queries, are a popular class of MCDS queries and have received a lot of attention, resulting in a flurry of efficient skyline algorithms. The vast majority of such algorithms focus entirely on the input being a single data set. In this work, we provide an adaptive query evaluation technique, AdaptiveSky, that is able to reason at different levels of abstraction, thereby effectively minimizing the two primary costs: the cost of generating join results and the cost of dominance comparisons to compute the final skyline of the join results. Our approach hinges on two key principles. First, in the input space, we determine the abstraction levels dynamically at run time instead of assigning a static one at compile time that may or may not work for different data distributions. This is achieved by adaptively partitioning the input data as intermediate results are being generated, thereby eliminating the need to access the vast majority of the input tuples. Second, we incrementally build the output space, containing the final skyline, without generating a single join result. Our approach is able to reason about the final result space and selectively drill into regions in the output space that show promise in generating result tuples, avoiding the generation of results that do not contribute to the query result. In this effort, we propose two alternate strategies for reasoning: the Euclidean Distance method and the cost-benefit driven Dominance Potential method. Our experimental evaluation demonstrates that AdaptiveSky shows superior performance over state-of-the-art techniques on benchmark data sets.
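    For background, the dominance comparison at the heart of any skyline algorithm, together with the basic block-nested-loop skyline that techniques like AdaptiveSky aim to outperform, can be sketched as follows (assuming "smaller is better" in every dimension; this is the textbook baseline, not the paper's algorithm).

```python
# The dominance test and a basic block-nested-loop skyline, the
# baseline whose per-tuple comparison cost AdaptiveSky's
# abstraction-level reasoning avoids. Smaller values are better.

def dominates(p, q):
    """p dominates q if p is <= q in every dimension and strictly
    better in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in result):
            continue                       # p is dominated: discard
        result = [q for q in result if not dominates(p, q)]
        result.append(p)                   # p survives; prune losers
    return result
```

    Over a join, this baseline must first materialize every join result and then compare each against the running skyline; the paper's contribution is precisely to prune partitions of the input and output spaces so that most of those join results are never generated.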