On-the-fly adaptivity for nonlinear twoscale simulations using artificial neural networks and reduced order modeling
A multi-fidelity surrogate model for highly nonlinear multiscale problems is
proposed. It is based on the introduction of two different surrogate models and
an adaptive on-the-fly switching. The two concurrent surrogates are built
incrementally starting from a moderate set of evaluations of the full order
model. To this end, a reduced order model (ROM) is generated first. Using a
hybrid ROM-preconditioned FE solver, additional effective stress-strain data
is simulated, while a dedicated, physics-guided sampling technique keeps the
number of samples moderate. Machine learning (ML) is
subsequently used to build the second surrogate by means of artificial neural
networks (ANN). Different ANN architectures are explored and the features used
as inputs of the ANN are fine-tuned to improve the overall quality of
the ML model. Additional ANN surrogates for the stress errors are generated.
To this end, conservative design guidelines for error surrogates are presented by
adapting the loss functions of the ANN training in pure regression or pure
classification settings. The error surrogates can be used as quality indicators
in order to adaptively select the appropriate -- i.e. efficient yet accurate --
surrogate. Two strategies for the on-the-fly switching are investigated and a
practicable and robust algorithm is proposed that eliminates relevant technical
difficulties attributed to model switching. The provided algorithms and ANN
design guidelines can easily be adopted for different problem settings and,
thereby, they enable generalization of the used machine learning techniques for
a wide range of applications. The resulting hybrid surrogate is employed in
challenging multilevel FE simulations for a three-phase composite with
pseudo-plastic micro-constituents. Numerical examples highlight the performance
of the proposed approach.
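The on-the-fly switching described above can be sketched as follows. All function names, the toy stress laws, and the tolerance are hypothetical placeholders, not the paper's actual surrogates; the sketch only illustrates using an error surrogate as the quality indicator for model selection.

```python
# Hypothetical stand-ins for the two concurrent surrogates and the
# error surrogate; the closed forms below are placeholders.
def rom_stress(strain):
    # reduced order model: accurate but comparatively expensive
    return 2.0 * strain + 0.1 * strain**3

def ann_stress(strain):
    # ANN surrogate: cheap, but less reliable far from training data
    return 2.0 * strain

def ann_error_estimate(strain):
    # conservatively trained error surrogate (placeholder form)
    return 0.1 * abs(strain)**3

def adaptive_stress(strain, tol=1e-3):
    """On-the-fly switching: use the ANN wherever its predicted error
    is below tol, otherwise fall back to the more accurate ROM."""
    if ann_error_estimate(strain) <= tol:
        return ann_stress(strain)
    return rom_stress(strain)

small = adaptive_stress(0.01)  # predicted error tiny -> ANN branch
large = adaptive_stress(1.0)   # predicted error above tol -> ROM branch
```

The design choice is that the error surrogate, not the primary surrogates, decides which model answers each query, so the cheap model is used only where it is trusted.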
Nonlinear nonlocal multicontinua upscaling framework and its applications
In this paper, we discuss multiscale methods for nonlinear problems. The main
idea of these approaches is to use local constraints and solve problems in
oversampled regions for constructing macroscopic equations. These techniques
are intended for problems without scale separation and high contrast, which
often occur in applications. For linear problems, the local solutions with
constraints are used as basis functions. This technique is called Constraint
Energy Minimizing Generalized Multiscale Finite Element Method (CEM-GMsFEM).
GMsFEM identifies macroscopic quantities based on rigorous analysis. In
corresponding upscaling methods, the multiscale basis functions are selected
such that the degrees of freedom have physical meanings, such as averages of
the solution on each continuum.
This paper extends the linear concepts to nonlinear problems, where the local
problems are nonlinear. The main concept consists of: (1) identifying
macroscopic quantities; (2) constructing appropriate oversampled local problems
with coarse-grid constraints; (3) formulating macroscopic equations. We
consider two types of approaches. In the first approach, the solutions of local
problems are used as basis functions (in a linear fashion) to solve nonlinear
problems. This approach is simple to implement; however, it lacks the nonlinear
interpolation, which we present in our second approach. In this approach, the
local solutions are used as a nonlinear forward map from local averages
(constraints) of the solution in the oversampled region. This local fine-grid
solution is then used to formulate the coarse-grid problem. Both approaches
are illustrated on several examples and applied to single-phase and two-phase
flow problems, which are challenging because of the convection-dominated
nature of the concentration equation.
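The nonlinear forward map of the second approach can be illustrated with a deliberately simple scalar toy: the "local problem" below (u + u^3 = avg) is a placeholder for the paper's oversampled fine-grid solves with coarse-grid constraints, and the function name is hypothetical.

```python
def nonlinear_local_map(avg, iters=30):
    """Toy nonlinear forward map: given a coarse average constraint
    `avg`, return the local solution u of the stand-in nonlinear
    problem u + u**3 = avg via Newton's method. In the actual method,
    this scalar solve is replaced by a nonlinear fine-grid solve on an
    oversampled region, constrained by coarse-grid averages."""
    u = avg  # initial guess
    for _ in range(iters):
        residual = u + u**3 - avg
        jacobian = 1.0 + 3.0 * u**2  # always positive: monotone problem
        u -= residual / jacobian
    return u

# The coarse-grid problem then queries this map instead of combining
# precomputed basis functions linearly, which is what distinguishes
# the second approach from the first.
u_local = nonlinear_local_map(2.0)
```

Unlike the first approach, where local solutions are frozen into basis functions, the map is re-evaluated at the current coarse averages, providing the nonlinear interpolation the first approach lacks.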
Task-based adaptive multiresolution for time-space multi-scale reaction-diffusion systems on multi-core architectures
A new solver featuring time-space adaptation and error control has been
recently introduced to tackle the numerical solution of stiff
reaction-diffusion systems. Based on operator splitting, finite volume adaptive
multiresolution and high order time integrators with specific stability
properties for each operator, this strategy yields high computational
efficiency for large multidimensional computations on standard architectures
such as powerful workstations. However, the data structure of the original
implementation, based on trees of pointers, provides limited opportunities for
efficiency enhancements, while posing serious challenges in terms of parallel
programming and load balancing. The present contribution proposes a new
implementation of the whole set of numerical methods including Radau5 and
ROCK4, relying on a fully different data structure together with the use of a
specific library, TBB, for shared-memory, task-based parallelism with
work-stealing. The performance of our implementation is assessed in a series of
test-cases of increasing difficulty in two and three dimensions on multi-core
and many-core architectures, demonstrating high scalability.
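The operator-splitting backbone of such a solver can be sketched in one dimension. The explicit Euler sub-steps below are simple stand-ins for the dedicated Radau5/ROCK4 integrators, and the logistic reaction term is an illustrative choice, not the stiff chemistry of the paper.

```python
import numpy as np

def diffusion_half_step(u, D, dx, dt):
    # explicit Euler on the diffusion operator with periodic boundaries
    # (placeholder for a stabilized explicit integrator such as ROCK4)
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    return u + dt * D * lap

def reaction_step(u, dt):
    # explicit Euler on a logistic reaction term
    # (placeholder for an implicit stiff integrator such as Radau5)
    return u + dt * u * (1.0 - u)

def strang_step(u, D, dx, dt):
    """One Strang splitting step: half diffusion, full reaction,
    half diffusion, giving second-order accuracy in time."""
    u = diffusion_half_step(u, D, dx, 0.5 * dt)
    u = reaction_step(u, dt)
    return diffusion_half_step(u, D, dx, 0.5 * dt)

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = 0.5 + 0.4 * np.sin(2.0 * np.pi * x)
for _ in range(100):
    u = strang_step(u, D=1e-3, dx=x[1] - x[0], dt=1e-3)
```

Splitting lets each operator be advanced with an integrator matched to its stiffness, which is exactly what makes per-operator method choices like Radau5 and ROCK4 possible; the task-based multiresolution machinery then parallelizes such steps across adapted grid patches.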
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review.
Semi-phenomenological modelling of the behavior of TRIP steels
The authors are grateful to ArcelorMittal R&D for supporting this research.
A new semi-phenomenological model is developed, based on a mean-field description of TRIP behavior, for the simulation of multiaxial loads. The model aims to reduce the number of internal variables of crystalline models, which currently cannot be used in metal forming simulations. Starting from local and crystallographic approaches, the mean-field description is obtained at the phase level with the concept of a Mean Instantaneous Transformation Strain (MITS) accompanying the martensitic transformation. Within the framework of the thermodynamics of irreversible processes, the driving forces, the evolution of the martensitic volume fraction and an expression for the TRIP effect are derived for this new model. A classical self-consistent scheme is used to model the behavior of multiphased TRIP steels. The model is tested for cooling at constant loads and for multiaxial loadings at constant temperatures. The predictions reproduce the increase in ductility, the dynamic softening effect and the multiaxial behavior of a multiphased TRIP steel.
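A much cruder relative of such a mean-field description is a volume-fraction-weighted mixture of the two phases' responses. The sketch below is NOT the paper's self-consistent scheme: it is a Voigt-type average with a hypothetical transformation-strain offset standing in for the MITS, and all parameter values are placeholders.

```python
def mixture_stress(strain, f_martensite, E_aust=200e3, E_mart=210e3,
                   mits=0.02):
    """Illustrative Voigt-type mixture (placeholder, not the paper's
    self-consistent scheme): the macroscopic stress is the volume-
    fraction-weighted average of the austenite and martensite phase
    stresses, with a hypothetical transformation-strain offset `mits`
    playing the role of a Mean Instantaneous Transformation Strain.
    Moduli in MPa, strains dimensionless; all values are placeholders."""
    sigma_aust = E_aust * strain
    sigma_mart = E_mart * (strain - mits)
    return (1.0 - f_martensite) * sigma_aust + f_martensite * sigma_mart

# fully austenitic vs. fully martensitic limits of the mixture
s_a = mixture_stress(0.02, f_martensite=0.0)
s_m = mixture_stress(0.02, f_martensite=1.0)
```

A self-consistent scheme replaces this uniform-strain assumption with phase strains determined by the interaction of each phase with the effective medium, which is what the paper's model does at the phase level.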
A machine learning approach for efficient uncertainty quantification using multiscale methods
Several multiscale methods account for sub-grid scale features using coarse
scale basis functions. For example, in the Multiscale Finite Volume method the
coarse scale basis functions are obtained by solving a set of local problems
over dual-grid cells. We introduce a data-driven approach for the estimation of
these coarse scale basis functions. Specifically, we employ a neural network
predictor fitted using a set of solution samples from which it learns to
generate subsequent basis functions at a lower computational cost than solving
the local problems. The computational advantage of this approach is realized
for uncertainty quantification tasks where a large number of realizations has
to be evaluated. We attribute the ability to learn these basis functions to the
modularity of the local problems and the redundancy of the permeability patches
between samples. The proposed method is evaluated on elliptic problems yielding
very promising results.
Comment: Journal of Computational Physics (2017).
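The core idea, replacing the local dual-cell solves with a fitted predictor, can be sketched with a tiny one-hidden-layer network. The weights below are random placeholders standing in for a trained model, and the patch and basis sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
patch_size = 5 * 5   # flattened local permeability patch (assumed size)
basis_size = 5 * 5   # basis function values on the same patch

# Random weights stand in for a predictor fitted on solution samples.
W1 = rng.standard_normal((patch_size, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, basis_size)) * 0.1
b2 = np.zeros(basis_size)

def predict_basis(perm_patch):
    """Cheap forward pass replacing a local dual-cell solve: maps a
    permeability patch to coarse-scale basis function values. The
    training cost amortizes over the many Monte Carlo realizations
    of an uncertainty quantification study."""
    h = np.tanh(perm_patch.reshape(-1) @ W1 + b1)
    return (h @ W2 + b2).reshape(5, 5)

# one permeability realization -> one predicted basis patch
phi = predict_basis(rng.lognormal(size=(5, 5)))
```

The amortization argument is the key design point: each network evaluation is far cheaper than a local solve, so the savings grow with the number of realizations, exactly the regime of uncertainty quantification.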