
    FEMPAR: an object-oriented parallel finite element framework

    FEMPAR is an open-source, object-oriented Fortran200X scientific software library for the high-performance, scalable simulation of complex multiphysics problems governed by partial differential equations at large scales, exploiting state-of-the-art supercomputing resources. It is a highly modularized, flexible, and extensible library that provides a set of modules that can be combined to carry out the different steps of the simulation pipeline. FEMPAR includes a rich set of algorithms for the discretization step, namely (arbitrary-order) grad-, div-, and curl-conforming finite element methods, discontinuous Galerkin methods, B-splines, and unfitted finite element techniques on cut cells, combined with h-adaptivity. The linear solver module relies on state-of-the-art bulk-asynchronous implementations of multilevel domain decomposition solvers for the different discretization alternatives and block-preconditioning techniques for multiphysics problems. FEMPAR provides users with out-of-the-box, state-of-the-art discretization techniques and highly scalable solvers for the simulation of complex applications, hiding the dramatic complexity of the underlying algorithms. It is also a highly extensible framework for researchers who want to experiment with new algorithms and solvers. In this work, the first in a series of articles about FEMPAR, we provide a detailed introduction to the software abstractions used in the discretization module and the related geometrical module. We also cover the main ingredients of the assembly of linear systems arising from finite element discretizations; the software design of complex scalable multilevel solvers is postponed to a subsequent work.
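    The assembly step mentioned in the abstract can be made concrete with a minimal sketch, unrelated to FEMPAR's actual Fortran interfaces: assembling and solving a 1D Poisson problem with linear elements, where each element contributes a small local matrix and vector that are accumulated into the global system.

    ```python
    # Minimal illustration of finite element assembly: 1D Poisson problem
    # -u'' = f on (0, 1), u(0) = u(1) = 0, with linear (P1) elements.
    # Generic sketch only; this is not FEMPAR's API.
    import numpy as np

    def assemble_1d_poisson(n_elements, f=lambda x: 1.0):
        n_nodes = n_elements + 1
        h = 1.0 / n_elements                      # uniform element size
        K = np.zeros((n_nodes, n_nodes))          # global stiffness matrix
        b = np.zeros(n_nodes)                     # global load vector
        K_loc = (1.0 / h) * np.array([[1.0, -1.0],
                                      [-1.0, 1.0]])  # local stiffness matrix
        for e in range(n_elements):
            dofs = [e, e + 1]                     # global DoFs of this element
            x_mid = (e + 0.5) * h                 # one-point quadrature
            b_loc = f(x_mid) * h / 2.0 * np.ones(2)
            # Accumulate local contributions into the global system
            for a in range(2):
                b[dofs[a]] += b_loc[a]
                for c in range(2):
                    K[dofs[a], dofs[c]] += K_loc[a, c]
        # Apply homogeneous Dirichlet boundary conditions at both ends
        for d in (0, n_nodes - 1):
            K[d, :] = 0.0
            K[d, d] = 1.0
            b[d] = 0.0
        return K, b

    K, b = assemble_1d_poisson(10)
    u = np.linalg.solve(K, b)                     # nodal solution values
    ```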

    mAPN: Modeling, Analysis, and Exploration of Algorithmic and Parallelism Adaptivity

    The use of parallel embedded systems is increasing, and they are becoming more complex as they integrate multiple functionalities into one application or run numerous applications concurrently. This concerns a wide range of applications, including streaming applications, which are commonly used in embedded systems. These applications must implement adaptable and reliable algorithms to deliver the required performance under varying circumstances (e.g., other applications running on the platform, input data, platform variety). Given the complexity of streaming applications, target systems, and adaptivity requirements, designing such systems with traditional programming models is daunting. This is why model-based strategies with an appropriate Model of Computation (MoC) have long been studied for embedded system design. This work provides algorithmic adaptivity on top of parallelism in dynamic dataflow to express larger sets of variants. We present the multi-Alternative Process Network (mAPN), a high-level abstract representation in which several variants of the same application coexist in the same graph, expressing different implementations. We introduce mAPN properties and its formalism to describe various local implementation alternatives. Furthermore, mAPNs are enriched with metadata that provide the alternatives with quantitative annotations in terms of a specific metric. To help the user analyze the rich space of variants, we propose a methodology to extract feasible variants under user and hardware constraints. At the core of the methodology is an algorithm for computing global metrics of an execution of different alternatives from a compact mAPN specification. We validate our approach by exploring several possible variants created for the Automatic Subtitling Application (ASA) on two hardware platforms.
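    To make the idea of coexisting alternatives and metric annotations concrete, the following sketch enumerates the variants of a tiny streaming pipeline in which some stages have several alternative implementations, and keeps only those meeting a latency budget. The stage names, latency numbers, and selection logic are hypothetical illustrations, not the mAPN formalism or its analysis algorithm.

    ```python
    # Illustrative sketch (not the mAPN formalism): a three-stage streaming
    # pipeline where some stages have alternative implementations, each
    # annotated with a hypothetical latency in milliseconds.
    from itertools import product

    # Each stage maps alternative names to a metric annotation (latency, ms).
    stages = {
        "read":   {"sequential": 4.0},
        "decode": {"scalar": 20.0, "vectorized": 8.0, "gpu_offload": 5.0},
        "render": {"single_thread": 10.0, "multi_thread": 6.0},
    }

    def enumerate_variants(stages):
        """Yield every variant (one alternative per stage) with its total latency."""
        names = list(stages)
        for choice in product(*(stages[n].items() for n in names)):
            variant = dict(zip(names, (alt for alt, _ in choice)))
            total = sum(latency for _, latency in choice)
            yield variant, total

    # Keep only variants that satisfy a user-given end-to-end latency budget.
    budget_ms = 20.0
    feasible = [(v, t) for v, t in enumerate_variants(stages) if t <= budget_ms]
    for variant, total in sorted(feasible, key=lambda vt: vt[1]):
        print(f"{total:5.1f} ms  {variant}")
    ```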

    Applications of Finite Element Modeling for Mechanical and Mechatronic Systems

    Modern engineering practice requires advanced numerical modeling because, among other things, it reduces the costs associated with prototyping and allows predicting the occurrence of potentially dangerous situations during operation under defined conditions. Thus far, different methods have been used to translate a real structure into its numerical counterpart, the most popular being variations of the finite element method (FEM). The aim of this Special Issue has been to familiarize the reader with the latest applications of the FEM for the modeling and analysis of diverse mechanical problems. Authors were encouraged to provide a concise description of the specific application or potential applications of their work.

    USEFUL MEASURES OF COMPLEXITY: A MODEL OF ASSESSING DEGREE OF COMPLEXITY IN ENGINEERED SYSTEMS AND ENGINEERING PROJECTS

    Many modern systems are very complex, which can affect the safety and reliability of their operations. Systems engineers need new ways to measure problem complexity. This research lays the groundwork for measuring the complexity of systems engineering (SE) projects, proposing a project complexity measurement model (PCMM) and associated methods to measure complexity. To develop the PCMM, we analyze four major types of complexity (structural, temporal, organizational, and technological) and define a set of complexity metrics. Through a survey of engineering projects, we also develop project profiles for three types of software projects typically used in the U.S. Navy to provide empirical evidence for the PCMM. The results of our work on these projects show that the more a project increases in complexity, the more difficult and expensive it is to meet all requirements and schedules, because of changing interactions and dynamics among project participants and stakeholders. The three projects reveal that project complexity can be reduced by setting a priority and a baseline for requirements and project scope, concentrating on the expected deliverables, strengthening familiarity with the systems engineering process, eliminating redundant processes, and clarifying organizational roles and decision-making processes to best serve the project teams while also streamlining business processes and information systems.
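    The abstract does not spell out how the PCMM aggregates its metrics, but a simple weighted score over the four complexity types gives a feel for how such a model could be applied in practice; the weights, rating scale, and example ratings below are hypothetical and are not the PCMM's actual metrics.

    ```python
    # Hypothetical sketch of a project complexity score over the four
    # complexity types named in the abstract. Weights and ratings are
    # illustrative only.

    # Relative importance of each complexity type (sums to 1.0).
    weights = {
        "structural": 0.30,
        "temporal": 0.20,
        "organizational": 0.25,
        "technological": 0.25,
    }

    def complexity_score(ratings, weights=weights):
        """Weighted average of per-type ratings on a 1 (low) to 5 (high) scale."""
        return sum(weights[t] * ratings[t] for t in weights)

    # Example: a project with many interfaces but a stable schedule.
    project_a = {"structural": 4, "temporal": 2, "organizational": 3, "technological": 3}
    print(f"Project A complexity: {complexity_score(project_a):.2f} / 5")
    ```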

    Horseshoe-based Bayesian nonparametric estimation of effective population size trajectories

    Phylodynamics is an area of population genetics that uses genetic sequence data to estimate past population dynamics. Modern state-of-the-art Bayesian nonparametric methods for recovering population size trajectories of unknown form use either change-point models or Gaussian process priors. Change-point models suffer from computational issues when the number of change-points is unknown and needs to be estimated. Gaussian process-based methods lack local adaptivity and cannot accurately recover trajectories that exhibit features such as abrupt changes in trend or varying levels of smoothness. We propose a novel, locally adaptive approach to Bayesian nonparametric phylodynamic inference that has the flexibility to accommodate a large class of functional behaviors. Local adaptivity results from modeling the log-transformed effective population size a priori as a horseshoe Markov random field, a recently proposed statistical model that blends together the best properties of the change-point and Gaussian process modeling paradigms. We use simulated data to assess model performance and find that our proposed method results in reduced bias and increased precision when compared to contemporary methods. We also use our models to reconstruct past changes in the genetic diversity of human hepatitis C virus in Egypt and to estimate population size changes of ancient and modern steppe bison. These analyses show that our new method captures features of the population size trajectories that were missed by the state-of-the-art methods.
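    As a rough sketch of the kind of prior involved (the authors' exact parameterization may differ), a first-order horseshoe Markov random field on the log effective population size evaluated on a time grid can be written as follows.

    ```latex
    % Sketch of a first-order horseshoe Markov random field prior on the
    % log effective population size gamma_k = log N_e(t_k) on a time grid.
    \begin{aligned}
      \gamma_{k+1} \mid \gamma_k, \lambda_k, \tau &\sim \mathcal{N}\!\left(\gamma_k,\; \tau^{2}\lambda_k^{2}\right),\\
      \lambda_k \sim \mathrm{C}^{+}(0,1), \qquad \tau &\sim \mathrm{C}^{+}(0,\tau_0),
    \end{aligned}
    ```

    Here C^+ denotes the half-Cauchy distribution. The heavy-tailed local scales lambda_k allow occasional large jumps in the trajectory (abrupt changes in trend) while shrinking most increments toward zero, which is the local adaptivity contrasted above with Gaussian process priors.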

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society, and the CSE community is at the core of this transformation. However, a combination of disruptive developments, including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers, is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.