
    Code generation for generally mapped finite elements

    Many classical finite elements, such as the Argyris and Bell elements, have long been absent from high-level PDE software. Building on recent theoretical work, we describe how to implement very general finite-element transformations in FInAT and hence in the Firedrake finite-element system. Numerical results evaluate the new elements, comparing them to existing methods for classical problems. For a second-order model problem, we find that the new elements give smooth solutions at a mild increase in cost over standard Lagrange elements. For fourth-order problems, however, the newly enabled methods significantly outperform interior-penalty formulations. We also give some advanced use cases, solving the nonlinear Cahn-Hilliard equation and some biharmonic eigenvalue problems (including Chladni plates) using C1 discretizations.
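    The following is a minimal sketch of what such an element looks like from the user's side, assuming a Firedrake installation that includes the generally mapped elements described in the abstract; the element name and degree ("Argyris", 5), the model problem, and the mesh size are illustrative choices, not taken from the paper.

```python
# Minimal sketch: a second-order model problem on a C1 (Argyris) space in
# Firedrake. Element choice, mesh, and right-hand side are illustrative.
from firedrake import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Argyris", 5)   # C1 element enabled via FInAT transformations

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

# Reaction-diffusion model problem with natural boundary conditions,
# -div(grad u) + u = f, so no strong Dirichlet data is required.
a = (inner(grad(u), grad(v)) + inner(u, v)) * dx
L = inner(f, v) * dx

uh = Function(V)
solve(a == L, uh)

# Because the space is C1-conforming, a fourth-order form such as
# inner(div(grad(u)), div(grad(v))) * dx can be assembled directly,
# without the extra terms an interior-penalty Lagrange method would need.
```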

    Using Patran and Supertab as pre- and postprocessors to COSMIC/NASTRAN

    Patran and Supertab are interactive computer-graphics pre- and postprocessors that can be used to generate NASTRAN bulk data decks and to visualize results from a NASTRAN analysis. Both programs are in use at the Numerical Structural Mechanics Branch of the David Taylor Research Center (DTRC). Various aspects of Patran and Supertab are discussed, including: geometry modeling, finite element mesh generation, bulk data deck creation, results translation and visualization, and the user interface. Some advantages and disadvantages of both programs are pointed out.

    Stream Fusion, to Completeness

    Stream processing is mainstream (again): widely used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops. We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, and mapping, over finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of "for" vs. "while" loops. Our approach delivers hand-written-like code, but automatically. It explicitly avoids reliance on black-box optimizers and sufficiently smart compilers, offering the highest, guaranteed, and portable performance. Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala, and OCaml, including the well-optimized Java 8 streams.
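    The paper's libraries are staged MetaOCaml and Scala/LMS; the sketch below is not that machinery, only a Python illustration of the overhead being eliminated: a composable pipeline built from intermediate iterators versus the single fused loop that staging would emit for the same pipeline.

```python
# Illustrative sketch (not the paper's staged OCaml/Scala implementation):
# a declarative generator pipeline vs. its hand-fused single-loop equivalent.

def pipeline_unfused(xs, ys):
    # map / filter / zip built from generators: flexible, but every element
    # flows through several intermediate iterator objects.
    squares = (x * x for x in xs)
    odds = (y for y in ys if y % 2 == 1)
    return sum(a + b for a, b in zip(squares, odds))

def pipeline_fused(xs, ys):
    # Fused equivalent: one loop, no intermediate iterators. Staging derives
    # this shape of code automatically from the declarative pipeline.
    total = 0
    i = j = 0
    while i < len(xs) and j < len(ys):
        if ys[j] % 2 == 1:
            total += xs[i] * xs[i] + ys[j]
            i += 1
        j += 1
    return total

xs = list(range(1000))
ys = list(range(1000))
assert pipeline_unfused(xs, ys) == pipeline_fused(xs, ys)
```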

    Design and Implementation of an Extensible Variable Resolution Bathymetric Estimator

    For grid-based bathymetric estimation techniques, determining the right resolution at which to work is essential. Appropriate grid resolution can be related, roughly, to data density and thence to sonar characteristics, survey methodology, and depth. It is therefore variable in almost all survey scenarios, and methods of addressing this problem can have an enormous impact on the correctness and efficiency of computational schemes of this kind. This paper describes the design and implementation of a bathymetric depth estimation algorithm that attempts to address this problem by combining the computational efficiency of locally regular grids with piecewise-variable estimation resolution. The result is a single logical data structure, with associated algorithms, that can adjust to local data conditions, change resolution where required to best support the data, and operate over essentially arbitrarily large areas as a single unit. The algorithm, which is in part a development of CUBE, is modular and extensible, and is structured as a client-server application to support different implementation modalities. The algorithm is called “CUBE with Hierarchical Resolution Techniques”, or CHRT.
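    The abstract relates grid resolution to data density; the sketch below illustrates that idea with one plausible per-tile rule. The target sounding count per node, the clamping bounds, and the square-root rule are assumptions for illustration, not CHRT's actual estimator.

```python
# Illustrative sketch of piecewise-variable resolution: each locally regular
# tile picks its own node spacing from local sounding density. The specific
# rule and constants here are assumed, not taken from CHRT.
import math

def tile_resolution(n_soundings, tile_area_m2, target_per_node=5.0,
                    finest_m=0.5, coarsest_m=64.0):
    """Pick a node spacing so each grid node expects ~target_per_node soundings."""
    if n_soundings == 0:
        return coarsest_m
    density = n_soundings / tile_area_m2            # soundings per square metre
    spacing = math.sqrt(target_per_node / density)  # metres between nodes
    return min(max(spacing, finest_m), coarsest_m)

# Dense shallow-water tile vs. sparse deep-water tile (hypothetical numbers):
print(tile_resolution(200_000, 10_000.0))   # 0.5 m, at the fine limit
print(tile_resolution(500, 10_000.0))       # 10 m
```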

    Functional Dynamics I : Articulation Process

    The articulation process of dynamical networks is studied with a functional map, a minimal model for the dynamic change of relationships through iteration. The model is a dynamical system of a function f, not of variables, having a self-reference term f ∘ f, introduced by recalling that an operation in a biological system is often applied to itself, as is typically seen in the rules of natural language or in genes. Starting from an inarticulate network, two types of fixed points are formed as an invariant structure under iteration. The function is folded with time until it has finite or infinite piecewise-flat segments of fixed points, regarded as articulation. For an initial logistic map, the attracted functions are classified into step, folded-step, fractal, and random phases, according to the degree of folding. Oscillatory dynamics are also found, where function values are mapped to several fixed points periodically. The significance of our results for prototype categorization in language is discussed. Comment: 48 pages, 15 figures (5 GIF files).
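    A numerical sketch of such a functional map follows, assuming the self-referential update takes the commonly cited form f_{n+1}(x) = (1 - eps) f_n(x) + eps f_n(f_n(x)); that exact rule, the coupling eps, the initial logistic map, and the grid-plus-interpolation discretization are assumptions here, not details from the abstract.

```python
# Sketch of a self-referential functional map, discretized on a grid.
# Update rule, eps, and the initial map are assumed for illustration.
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)   # grid for the function's domain
f = 1.0 - 1.6 * x**2               # initial logistic-type map (assumed)
eps = 0.1                          # strength of the self-reference term (assumed)

for n in range(1000):
    f_of_f = np.interp(f, x, f)            # approximate f(f(x)) on the grid
    f = (1.0 - eps) * f + eps * f_of_f     # f_{n+1} = (1-eps) f_n + eps f_n∘f_n

# Flat (piecewise-constant) segments of the attracted function mark fixed
# points; counting distinct plateau values gives a crude articulation measure.
plateaus = np.unique(np.round(f, 3))
print(len(plateaus), "approximate plateau values")
```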