
    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become a key enabler for tackling increasingly ambitious challenges in many disciplines. In this next step, an explosion in the available parallelism and the use of special-purpose processors are crucial. With this goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary and going beyond the state of the art in the HPC exascale simulations required for different energy sources. In this paper, a general overview of these methods is presented, along with some preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, from the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by Intel Corporation, which enabled us to obtain the presented experimental results on uncertainty quantification in seismic imaging.

    Data-Driven Model Reduction for the Bayesian Solution of Inverse Problems

    One of the major challenges in the Bayesian solution of inverse problems governed by partial differential equations (PDEs) is the computational cost of repeatedly evaluating numerical PDE models, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. This paper proposes a data-driven projection-based model reduction technique to reduce this computational cost. The proposed technique has two distinctive features. First, the model reduction strategy is tailored to inverse problems: the snapshots used to construct the reduced-order model are computed adaptively from the posterior distribution. Posterior exploration and model reduction are thus pursued simultaneously. Second, to avoid repeated evaluations of the full-scale numerical model as in a standard MCMC method, we couple the full-scale model and the reduced-order model together in the MCMC algorithm. This maintains accurate inference while reducing its overall computational cost. In numerical experiments considering steady-state flow in a porous medium, the data-driven reduced-order model achieves better accuracy than a reduced-order model constructed using the classical approach. It also improves posterior sampling efficiency by several orders of magnitude compared to a standard MCMC method.
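    The two-stage coupling described above can be sketched as a delayed-acceptance Metropolis scheme: a cheap reduced-order posterior screens each proposal, and the full PDE model is evaluated only for proposals that survive the first stage, with a second-stage correction that preserves the exact posterior. The sketch below is illustrative only and is not the paper's implementation; `logpost_full` and `logpost_rom` are hypothetical stand-ins for log-posterior evaluations under the full-scale and reduced-order models.

```python
import numpy as np

def delayed_acceptance_mh(logpost_full, logpost_rom, x0, n_steps,
                          step=0.1, seed=0):
    """Random-walk Metropolis with a reduced-order pre-screening stage."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp_full, lp_rom = logpost_full(x), logpost_rom(x)
    chain = [x.copy()]
    for _ in range(n_steps):
        y = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        lq_rom = logpost_rom(y)
        # Stage 1: accept/reject against the cheap reduced-order model.
        if np.log(rng.uniform()) < lq_rom - lp_rom:
            lq_full = logpost_full(y)                 # full model only here
            # Stage 2: correct with the full posterior so the chain still
            # targets the exact posterior distribution.
            if np.log(rng.uniform()) < (lq_full - lp_full) - (lq_rom - lp_rom):
                x, lp_full, lp_rom = y, lq_full, lq_rom
        chain.append(x.copy())
    return np.array(chain)
```

    Most proposals are rejected at the cost of a reduced-order solve only, which is the source of the potential savings whenever the reduced-order model is much cheaper than the full PDE solve.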

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, thus demanding from the STM designer the inclusion of mechanisms properly oriented to performance and other quality indexes. In particular, one core issue in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high levels of contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency consists in dynamically determining the best-suited level of concurrency (number of threads) to be employed for running the application (or specific application phases) on top of the STM layer. For too low levels of concurrency, parallelism can be hampered. Conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing phenomena caused by excessive data contention, an aspect which also reduces energy efficiency. In this chapter we overview a set of recent techniques aimed at building "application-specific" performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although they share some base concepts in modeling system performance vs. the degree of concurrency, these techniques rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
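    As a concrete illustration of the shared idea, the sketch below fits a simple regression model of throughput versus thread count from a handful of measured samples and then runs the application at the predicted optimum. It is a minimal sketch, not one of the surveyed techniques; `measure_throughput` is a hypothetical hook that runs a profiling phase at a given thread count and returns committed transactions per second.

```python
import numpy as np

def tune_concurrency(measure_throughput, max_threads, probes=(1, 2, 4, 8)):
    """Pick a thread count from a quadratic throughput model in log2(threads)."""
    probes = [p for p in probes if p <= max_threads]
    samples = [(p, measure_throughput(p)) for p in probes]
    n = np.array([p for p, _ in samples], dtype=float)
    t = np.array([tp for _, tp in samples], dtype=float)
    # Throughput typically rises with parallelism, then falls once
    # contention-induced rollbacks dominate: a concave curve that a
    # quadratic in log2(threads) captures crudely.
    coeffs = np.polyfit(np.log2(n), t, deg=2)
    candidates = np.arange(1, max_threads + 1)
    predicted = np.polyval(coeffs, np.log2(candidates))
    return int(candidates[np.argmax(predicted)])
```

    The surveyed analytic and machine-learning approaches differ mainly in what replaces this crude quadratic: a richer learned model costs more profiling time to instantiate but predicts the optimum more precisely.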

    Forward and Backward Bisimulations for Chemical Reaction Networks

    We present two quantitative behavioral equivalences over species of a chemical reaction network (CRN) with semantics based on ordinary differential equations. Forward CRN bisimulation identifies a partition where each equivalence class represents the exact sum of the concentrations of the species belonging to that class. Backward CRN bisimulation relates species that have identical solutions at all time points when starting from the same initial conditions. Both notions can be checked using only CRN syntactic information, i.e., by inspection of the set of reactions. We provide a unified algorithm that computes the coarsest refinement up to our bisimulations in polynomial time. Further, we give algorithms to compute quotient CRNs induced by a bisimulation. As an application, we find significant reductions in a number of models of biological processes from the literature. In two cases we enable the analysis of benchmark models which would otherwise be intractable due to their memory requirements. This is an extended version of the CONCUR 2015 paper.
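    A toy example makes forward bisimulation concrete (this is an illustration, not the paper's partition-refinement algorithm): in the CRN with reactions A -> C and B -> C, both at rate k, species A and B are forward-bisimilar, so the lumped variable S = A + B evolves exactly as the single species of the quotient CRN S -> C. The check below integrates both systems and compares trajectories.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0

def full_rhs(_, y):        # y = [A, B, C] for the CRN  A -> C,  B -> C
    a, b, c = y
    return [-k * a, -k * b, k * (a + b)]

def reduced_rhs(_, y):     # y = [S, C] for the quotient CRN  S -> C
    s, c = y
    return [-k * s, k * s]

y0 = [1.0, 0.5, 0.0]
opts = dict(rtol=1e-10, atol=1e-12, dense_output=True)
full = solve_ivp(full_rhs, (0.0, 3.0), y0, **opts)
red = solve_ivp(reduced_rhs, (0.0, 3.0), [y0[0] + y0[1], y0[2]], **opts)

ts = np.linspace(0.0, 3.0, 50)
lumped = full.sol(ts)[0] + full.sol(ts)[1]   # A(t) + B(t) from the full CRN
assert np.allclose(lumped, red.sol(ts)[0], atol=1e-8)
```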

    Adaptive algorithms for partial differential equations with parametric uncertainty

    In this thesis, we focus on the design of efficient adaptive algorithms for the numerical approximation of solutions to elliptic partial differential equations (PDEs) with parametric inputs. Numerical discretisations are obtained using the stochastic Galerkin Finite Element Method (SGFEM), which generates approximations of the solution in tensor products of finite element spaces and finite-dimensional spaces of multivariate polynomials in the random parameters. Firstly, we propose an adaptive SGFEM algorithm which employs reliable and efficient hierarchical a posteriori energy error estimates of the solution to parametric PDEs. The main novelty of the algorithm is that a balance between spatial and parametric approximations is ensured by choosing the enhancement associated with the dominant error reduction estimates. Next, we introduce a two-level a posteriori estimate of the energy error in SGFEM approximations. We prove that this error estimate is reliable and efficient. We then provide a rigorous convergence analysis of the adaptive algorithm driven by two-level error estimates. Four different marking strategies for refinement of stochastic Galerkin approximations are proposed and, for two of them in particular, we prove that the sequence of energy errors computed by the associated algorithms converges linearly. Finally, we use duality techniques for goal-oriented error estimation in approximating linear quantities of interest derived from solutions to parametric PDEs. Adaptive enhancements in the proposed algorithm are guided by an innovative strategy that combines the error reduction estimates computed for spatial and parametric components of the corresponding primal and dual solutions. The performance of all adaptive algorithms and the effectiveness of the error estimation strategies are illustrated by numerical experiments. The software used for all experiments in this work is available online.
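    The adaptive algorithms described follow the classical solve-estimate-mark-refine pattern; the schematic loop below shows that pattern with Dörfler-style marking and is a sketch only, not the thesis's implementation. `solve`, `estimate`, and `refine` are hypothetical stand-ins for the SGFEM solver, the a posteriori error estimator, and the spatial/parametric enrichment step.

```python
def adaptive_loop(space, solve, estimate, refine, tol=1e-4, theta=0.5):
    """Solve-estimate-mark-refine until the total error estimate meets tol."""
    while True:
        u = solve(space)
        # `indicators` maps each refinable component (a spatial element
        # or a parametric mode) to its squared local error estimate.
        indicators = estimate(u, space)
        total_sq = sum(indicators.values())
        if total_sq ** 0.5 <= tol:
            return u, space
        # Doerfler marking: take the smallest set of contributors whose
        # combined squared estimate exceeds the fraction theta of the total.
        marked, acc = [], 0.0
        for comp in sorted(indicators, key=indicators.get, reverse=True):
            marked.append(comp)
            acc += indicators[comp]
            if acc >= theta * total_sq:
                break
        space = refine(space, marked)
```

    The balancing act the thesis addresses lives inside `estimate` and `refine`: deciding, at each step, whether the dominant error reduction comes from refining the finite element mesh or from enriching the polynomial space in the random parameters.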

    Structural synthesis: Precursor and catalyst

    More than twenty-five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality-constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design-oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form matched to one or more of the established optimization algorithms; and (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms, or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods, general guidelines are suggested.
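    The framing advocated in item (1) is easy to make concrete on a toy problem: size two bar cross-sections to minimize weight subject to stress limits, posed as an inequality-constrained minimization. The numbers and the use of SciPy's SLSQP solver below are illustrative choices, not anything from the paper.

```python
import numpy as np
from scipy.optimize import minimize

L = np.array([1.0, 1.5])        # bar lengths (m)
F = np.array([2.0e4, 1.0e4])    # axial loads (N)
sigma_max = 1.0e8               # allowable stress (Pa)
rho = 7800.0                    # material density (kg/m^3)

weight = lambda A: rho * np.dot(L, A)   # objective: total structural mass

# Inequality constraints sigma_i = F_i / A_i <= sigma_max, written in the
# "fun(x) >= 0" form that scipy.optimize.minimize expects.
stress_ok = {"type": "ineq", "fun": lambda A: sigma_max - F / A}

res = minimize(weight, x0=np.array([1e-3, 1e-3]), method="SLSQP",
               constraints=[stress_ok], bounds=[(1e-6, None)] * 2)
print(res.x)   # optimum sits on the stress constraints: A_i = F_i / sigma_max
```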

    Integrating Wind Flow Analysis in Early Urban Design: Guidelines for Practitioners

    This research, focused on simulating wind patterns in urban planning and design, offers substantial contributions to both the social and economic aspects of the field. To begin with, it addresses a critical factor in urban development, especially in Mediterranean climates, where natural ventilation significantly influences summer comfort. By incorporating predictive numerical simulations of urban wind patterns, this study provides valuable insights into improving outdoor thermal comfort within urban areas. This holds particular importance in the context of adapting to climate change, as it equips urban planners and architects with informed decision-making tools to create more sustainable and comfortable urban environments. Additionally, this research makes an economic contribution by presenting guidelines for iterative wind simulations in the early stages of designing medium-scale urban projects. Through the validation of a simulation workflow, it streamlines the design process, potentially reducing the time and resources required for urban planning and architectural design. This enhanced efficiency can result in cost savings during project development. Moreover, the study's recommendations concerning simulation parameters, such as wind tunnel cell size and refinement levels, offer practical insights for optimizing simulation processes, potentially lowering computational expenses and improving the overall economic viability of urban design projects. To summarize, this research effectively addresses climate-related challenges, benefiting both social well-being and economic efficiency in the field of urban planning and design, while also providing guidance for more efficient simulation-driven design procedures.