ASG - Techniques of Adaptivity
The introduction of service-orientation leads to significant improvements in flexibility in the choice of business partners and IT systems. This requires increased adaptability of enterprise software landscapes, as the environment is more dynamic than in traditional approaches. In this paper we present different types of adaptation scenarios for service compositions and their implementation in a service provision platform. Based on experience from the Adaptive Services Grid (ASG) project, we show how dynamic adaptation strategies can support the automated selection, composition and binding of services at run-time.
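The run-time selection and binding described above can be sketched in a few lines. This is a minimal illustration, not the ASG platform's actual mechanism: the registry contents, provider names, and QoS attributes are all invented for the example.

```python
# Hypothetical service registry: candidate providers for an abstract
# service, each with (name, availability, cost). Names are illustrative.
registry = {
    "payment": [("fastpay", 0.99, 3.0), ("cheappay", 0.95, 1.0)],
}

def bind(service, prefer="availability"):
    """Late binding: pick a concrete provider at run-time by a QoS criterion."""
    candidates = registry[service]
    if prefer == "availability":
        return max(candidates, key=lambda c: c[1])[0]
    return min(candidates, key=lambda c: c[2])[0]

print(bind("payment"))          # highest availability wins
print(bind("payment", "cost"))  # cheapest provider wins
```

Because the choice happens at invocation time rather than design time, a failed or degraded provider can simply be re-selected on the next call, which is the essence of dynamic re-binding.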
Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario
A variety of methods is available to quantify uncertainties arising within
the modeling of flow and transport in carbon dioxide storage, but there is a
lack of thorough comparisons. Usually, raw data from such storage sites can
hardly be described by theoretical statistical distributions since only very
limited data is available. Hence, exact information on distribution shapes for
all uncertain parameters is very rare in realistic applications. We discuss and
compare four different methods tested for data-driven uncertainty
quantification based on a benchmark scenario of carbon dioxide storage. In the
benchmark, for which we provide data and code, carbon dioxide is injected into
a saline aquifer modeled by the nonlinear capillarity-free fractional flow
formulation for two incompressible fluid phases, namely carbon dioxide and
brine. To cover different aspects of uncertainty quantification, we incorporate
various sources of uncertainty such as uncertainty of boundary conditions, of
conceptual model definitions and of material properties. We consider recent
versions of the following non-intrusive and intrusive uncertainty
quantification methods: arbitrary polynomial chaos, spatially adaptive sparse
grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The
performance of each approach is demonstrated by assessing the expectation value and
standard deviation of the carbon dioxide saturation against a reference
statistic based on Monte Carlo sampling. We compare the convergence of all
methods reporting on accuracy with respect to the number of model runs and
resolution. Finally, we offer suggestions about the methods' advantages and
disadvantages that can guide the modeler in uncertainty quantification for
carbon dioxide storage and beyond.
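The Monte Carlo reference statistic the paper compares against can be illustrated with a minimal sketch: resample raw measurement data (data-driven, with no assumed theoretical distribution) and push the samples through the model. `saturation_model`, `perm_data`, and `poro_data` are invented stand-ins; the actual benchmark solves a two-phase flow problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def saturation_model(perm, poro):
    # Hypothetical stand-in for the CO2 saturation response at a fixed
    # location and time; the real benchmark solves a fractional-flow PDE.
    return 1.0 / (1.0 + np.exp(-(perm + 0.5 * poro)))

# Data-driven inputs: bootstrap directly from the (here synthetic) raw data,
# avoiding any assumed distribution shape for the uncertain parameters.
perm_data = rng.normal(0.0, 1.0, size=200)    # e.g. log-permeability samples
poro_data = rng.uniform(-1.0, 1.0, size=200)  # e.g. scaled porosity samples

n = 10_000
perm = rng.choice(perm_data, size=n, replace=True)
poro = rng.choice(poro_data, size=n, replace=True)
samples = saturation_model(perm, poro)

mean = samples.mean()
std = samples.std(ddof=1)
print(f"E[S] \u2248 {mean:.4f}, Std[S] \u2248 {std:.4f}")
```

The surrogate-based methods in the paper aim to reproduce these two statistics with far fewer model runs than the 10,000 used here.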
What Automated Planning Can Do for Business Process Management
Business Process Management (BPM) is a central element of today's organizations. Although over the years its main focus has been the support of processes in highly controlled domains, nowadays many domains of interest to the BPM community are characterized by ever-changing requirements, unpredictable environments and increasing amounts of data that influence the execution of process instances. Under such dynamic conditions, BPM systems must increase their level of automation to provide the reactivity and flexibility necessary for process management. Meanwhile, the Artificial Intelligence (AI) community has concentrated its efforts on investigating dynamic domains that involve the active control of computational entities and physical devices (e.g., robots, software agents). In this context, Automated Planning, one of the oldest areas of AI, is conceived as a model-based approach that synthesizes autonomous behaviours automatically from a model. In this paper, we discuss how automated planning techniques can be leveraged to enable new levels of automation and support for business processes, and we show some concrete examples of their successful application to the different stages of the BPM life cycle.
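The model-based synthesis idea can be made concrete with the simplest possible planner: breadth-first search over STRIPS-style actions (precondition, add, delete sets). The toy order-handling actions and fact names below are invented for illustration; real planners add heuristics, but the model-in, plan-out contract is the same.

```python
from collections import deque

# Toy STRIPS-style action model: name -> (preconditions, add set, delete set).
actions = {
    "receive_order": ({"new"}, {"received"}, {"new"}),
    "check_stock":   ({"received"}, {"checked"}, set()),
    "ship":          ({"checked"}, {"shipped"}, set()),
}

def plan(init, goal):
    """Breadth-first search in state space: returns a shortest action sequence."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # every goal fact holds in this state
            return steps
        for name, (pre, add, dele) in actions.items():
            if pre <= state:       # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"new"}, {"shipped"}))
# ['receive_order', 'check_stock', 'ship']
```

A BPM system with such a model can re-plan from whatever state an interrupted process instance is in, which is what gives planning-based approaches their flexibility.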
An Adaptive and a Multilevel Adaptive Sparse Grid approach to address global uncertainty and sensitivity
In the field of heterogeneous catalysis, first-principles-based microkinetic modeling has proven to be an essential tool for a deeper understanding of the microscopic interplay between reactions. It avoids the bias of being fitted to experimental data, which allows us to extract information about the materials' properties that cannot be drawn from experimental data. Unfortunately, catalytic models draw their information from electronic structure theory (e.g., Density Functional Theory), which contains a sizable error due to the intrinsic approximations made to keep the computational cost feasible. Although these errors are commonly accepted and known, this work analyses how significant their impact on the model outcome can be. We first explain how these errors propagate into a model outcome, e.g., the turnover frequency (TOF), and how strongly the outcome is affected. Secondly, we quantify the propagation of individual errors by a local and a global sensitivity analysis, including a discussion of their respective advantages and disadvantages for a catalytic model.
The global approach requires the numerical quadrature of high-dimensional integrals, as the catalytic model often depends on multiple parameters. We tackle this with a locally and dimension-adaptive sparse grid (SG) approach. Sparse grids have proven very useful for medium-dimensional problems, since their adaptivity allows for an accurate surrogate model with a modest number of points. Despite the models' high dimensionality, the outcome is mostly dominated by a fraction of the input parameters, which calls for strong refinement in only a fraction of the dimensions (dimension-adaptive). Additionally, the kinetic data show sharp transitions between "non-active" and "active" areas, which need a higher order of refinement locally (locally adaptive). The efficiency of the adaptive SG is tested on different toy models and on a realistic first-principles model, including the sensitivity analysis. Results show that for catalytic models a local, derivative-based sensitivity analysis gives only limited information, whereas the global approach can identify the important parameters and allows information to be extracted from more complex models in more detail.
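The local-adaptivity mechanism can be sketched in one dimension: add the midpoint of an interval, measure its hierarchical surplus (the error of the linear interpolant there), and recurse only where the surplus is large. The sharp `tanh` transition below is an invented stand-in for the "non-active"/"active" kinetic behaviour; real sparse grids do this per dimension.

```python
import numpy as np

def f(x):
    # Sharp transition between "non-active" and "active" regimes,
    # mimicking the kinetic data described above (illustrative only).
    return np.tanh(40.0 * (x - 0.6))

def refine(a, b, fa, fb, tol, depth, pts):
    """Recursively add midpoints where the hierarchical surplus is large."""
    m = 0.5 * (a + b)
    fm = f(m)
    pts[m] = fm
    surplus = fm - 0.5 * (fa + fb)   # linear-interpolant error at the midpoint
    if abs(surplus) > tol and depth < 12:
        refine(a, m, fa, fm, tol, depth + 1, pts)  # refine only locally,
        refine(m, b, fm, fb, tol, depth + 1, pts)  # where the surplus is big

pts = {0.0: f(0.0), 1.0: f(1.0)}
refine(0.0, 1.0, pts[0.0], pts[1.0], tol=1e-2, depth=0, pts=pts)
print(f"{len(pts)} adaptive points vs {2**12 + 1} uniform points at this depth")
```

The point count stays small because the flat regions stop refining after one or two levels, while only the transition region is resolved to full depth.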
The sparse grid approach is useful for reducing the total number of points, but what if evaluating a single point is itself very expensive? The second part of this work concentrates on solving high-dimensional integrals for models whose evaluations are costly because they are, e.g., only implicitly given by a Monte Carlo model. Each evaluation then contains an error due to finite sampling, and lowering this error requires a high number of samples and thus high computational effort. To tackle this problem, we extend the SG method with a multilevel approach to lower the cost. Unlike existing approaches, we do not use the telescoping sum but instead exploit the sparse grid's intrinsic hierarchical structure. We assume that not all SG points need the same accuracy, so with every refinement step we can double the points' variance and halve the number of drawn samples. We demonstrate the methodology on different toy models and on a realistic kinetic Monte Carlo system for CO oxidation, comparing the non-multilevel adaptive sparse grid (ASG) with the multilevel adaptive sparse grid (MLASG). Results show that with the multilevel extension we can save up to two orders of magnitude in CPU time without compromising the accuracy of the surrogate model compared to a non-multilevel SG.
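The sample-halving idea can be illustrated on a 1D toy problem: points introduced at deeper refinement levels receive half as many Monte Carlo samples (so twice the variance), yet the overall estimate stays accurate because deep points carry small hierarchical weight. The `sin`-based model and all constants are invented; the thesis applies this to adaptive sparse grids and kinetic Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_eval(x, n):
    # Implicitly given model: the true response sin(pi*x) is observable
    # only through n Monte Carlo samples (variance of the mean ~ 1/n).
    return np.sin(np.pi * x) + rng.normal(0.0, 1.0, n).mean()

n0, max_level = 4096, 6
pts, ml_cost = {}, 0
for level in range(max_level + 1):
    n = n0 // 2**level            # halve the samples with every refinement step
    for k in range(2**level + 1):
        x = k / 2**level
        if x not in pts:          # only the new hierarchical points of this level
            pts[x] = noisy_eval(x, n)
            ml_cost += n

flat_cost = len(pts) * n0         # cost if every point got full accuracy
xs = sorted(pts)
ys = [pts[x] for x in xs]
integral = sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
print(f"points={len(pts)}, cost ratio={flat_cost / ml_cost:.1f}x, "
      f"integral\u2248{integral:.3f} (exact 2/pi\u22480.637)")
```

Even in this crude sketch the multilevel budget is an order of magnitude cheaper than giving every point the full sample count, which is the mechanism behind the reported CPU-time savings.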
Adaptive sparse grid discontinuous Galerkin method: review and software implementation
This paper reviews the adaptive sparse grid discontinuous Galerkin (aSG-DG)
method for computing high dimensional partial differential equations (PDEs) and
its software implementation. The C++ software package called AdaM-DG,
implementing the aSG-DG method, is available on GitHub at
https://github.com/JuntaoHuang/adaptive-multiresolution-DG. The package
is capable of treating a large class of high dimensional linear and nonlinear
PDEs. We review the essential components of the algorithm and the functionality
of the software, including the multiwavelets used, the assembly of bilinear
operators, and the fast matrix-vector product for data with hierarchical
structures. We further demonstrate the performance of the package by reporting
the numerical error and CPU cost for several benchmark tests, including linear
transport equations, wave equations and Hamilton-Jacobi equations.
Greedy PIG: Adaptive Integrated Gradients
Deep learning has become the standard approach for most machine learning
tasks. While its impact is undeniable, interpreting the predictions of deep
learning models from a human perspective remains a challenge. In contrast to
model training, model interpretability is harder to quantify and pose as an
explicit optimization problem. Inspired by the AUC softmax information curve
(AUC SIC) metric for evaluating feature attribution methods, we propose a
unified discrete optimization framework for feature attribution and feature
selection based on subset selection. This leads to a natural adaptive
generalization of the path integrated gradients (PIG) method for feature
attribution, which we call Greedy PIG. We demonstrate the success of Greedy PIG
on a wide variety of tasks, including image feature attribution, graph
compression/explanation, and post-hoc feature selection on tabular data. Our
results show that introducing adaptivity is a powerful and versatile way to
improve attribution methods.
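The subset-selection view can be illustrated with a simplified greedy attribution loop (this is an illustration of greedy feature subset selection, not the actual Greedy PIG algorithm): starting from a baseline input, repeatedly un-mask the feature whose revelation increases the model output most. The toy linear model and weights are invented.

```python
import numpy as np

# Toy linear "model": the output depends mostly on features 0 and 3.
w = np.array([4.0, 0.1, 0.2, 3.0, 0.05])
x = np.ones(5)          # input to explain
baseline = np.zeros(5)  # reference input (all features masked)

def model(z):
    return float(w @ z)

def greedy_attribution(k):
    """Greedily pick the k features whose un-masking most raises the output."""
    chosen, current = [], baseline.copy()
    for _ in range(k):
        gains = {}
        for i in range(len(x)):
            if i in chosen:
                continue
            trial = current.copy()
            trial[i] = x[i]                     # reveal feature i
            gains[i] = model(trial) - model(current)
        best = max(gains, key=gains.get)        # largest marginal gain
        chosen.append(best)
        current[best] = x[best]
    return chosen

print(greedy_attribution(2))  # [0, 3]
```

For an additive model the greedy order simply recovers the weight ranking; the adaptive, path-dependent choice matters precisely when feature effects interact.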
A planning approach to the automated synthesis of template-based process models
The design-time specification of flexible processes can be time-consuming and error-prone, due to the high number of tasks involved and their context-dependent nature. Such processes frequently suffer from potential interference among their constituents, since resources are usually shared by the process participants and it is difficult to foresee all potential task interactions in advance. Concurrent tasks may not be independent from each other (e.g., they could operate on the same data at the same time), resulting in incorrect outcomes. To tackle these issues, we propose an approach for the automated synthesis of a library of template-based process models that achieve goals in dynamic and partially specified environments. The approach is based on a declarative problem definition and partial-order planning algorithms for template generation. The resulting templates guarantee sound concurrency in the execution of their activities and are reusable in a variety of partially specified contextual environments. As a running example, a disaster response scenario is given. The approach is backed by a formal model and has been tested in experiments.
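The interference problem described above reduces to a read/write conflict check, which a synthesizer can apply before allowing two tasks to run concurrently. The task names and data items below are invented to echo the disaster-response example; the paper's actual machinery is a formal partial-order planning model.

```python
# Each task declares the data items it reads and writes (hypothetical names).
tasks = {
    "assess_damage": ({"map"}, {"report"}),
    "evacuate_area": ({"map"}, {"area_status"}),
    "update_map":    (set(), {"map"}),
}

def conflict(a, b):
    """Two concurrent tasks interfere if one writes data the other touches."""
    (ra, wa), (rb, wb) = tasks[a], tasks[b]
    return bool(wa & (rb | wb)) or bool(wb & (ra | wa))

print(conflict("assess_damage", "evacuate_area"))  # False: both only read "map"
print(conflict("assess_damage", "update_map"))     # True: write/read on "map"
```

Tasks that conflict must be ordered by the planner (an ordering constraint in the partial order); conflict-free tasks may stay unordered, which is what "sound concurrency" amounts to.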
Multi-hp adaptive discontinuous Galerkin methods for simplified PN approximations of 3D radiative transfer in non-gray media
In this paper we present a multi-hp adaptive discontinuous Galerkin method for 3D simplified approximations of radiative transfer in non-gray media, capable of reaching accuracies superior to most methods in the literature. The simplified models are a set of differential equations derived from asymptotic expansions of the integro-differential radiative transfer equation. In non-gray media the optical spectrum is divided into a finite set of bands with constant absorption coefficients, and the simplified approximations are solved for each band in the spectrum. At high temperature, boundary layers with different magnitudes occur for each wavelength in the spectrum, and developing a numerical solver that accurately captures them is challenging for conventional finite element methods. Here we propose a class of high-order adaptive discontinuous Galerkin methods using space error estimators. The proposed method is able to solve problems where 3D meshes contain finite elements of different kinds, with the number of equations and the polynomial orders of approximation varying locally on the finite element edges, faces, and interiors. The proposed method also has the potential to perform both isotropic and anisotropic adaptation for each band in the optical spectrum. Several numerical results are presented to illustrate the performance of the proposed method for 3D radiative simulations. The computed results confirm its capability to solve 3D simplified approximations of radiative transfer in non-gray media.
Advanced techniques in scientific computing: application to electromagnetics
Mención Internacional en el título de doctor.
In recent years, increasingly accurate and demanding simulations
of radiofrequency components in a system of communications are
requested by the community. To address this need, some techniques have
been introduced in finite element methods (FEM), such as hp adaptivity
(which estimates the error in the problem and generates tailored meshes
to achieve more accuracy with less unknowns than in the case of uniformly
refined meshes) or domain decomposition methods (DDM, consisting
of dividing the whole problem into more manageable subdomains
which can be solved in parallel). The performance of adaptivity techniques
is good up to two dimensions, whereas in three dimensions the generation
time of the adapted meshes may be prohibitive. On the other hand, large-scale
simulations have been reported with DDM, which has become a hot topic in the
computational electromagnetics community.
The main objective of this dissertation is to study the viability of
scalable (in terms of parallel performance) algorithms combining nonconformal
DDM and automatic adaptivity in three dimensions. Specifically,
the adaptivity algorithms might be run in each subdomain independently.
This combination has not been detailed in the literature
and a proof of concept is discussed in this work. Thus, three building
blocks must be introduced: i) basis functions for the finite elements
which support non-uniform approximation orders p; ii) non-conformal
and non-overlapping DDM; and iii) adaptivity algorithms in 3D. In this
work, these three building blocks have been successfully introduced in a FEM code with a systematic procedure based on the method of manufactured
solutions (MMS). Moreover, a three-level parallelization (at the
algorithm level, with DDM; at the process level, with message passing
interface (MPI), and at the thread level, with OpenMP) has been developed
using the paradigm of modular programming which eases the
software maintenance and the introduction of new features.
Regarding the first building block, a family of basis functions which follows
a sound mathematical approach to expand the correct space of
functions is developed and particularized for triangular prisms. Also,
to ease the introduction of different approximation orders in the same
mesh, hierarchical basis functions from other authors are used as a black
box. With respect to DDM, a thorough study of the error introduced
by the non-conformal interfaces between subdomains is required for the
adaptivity algorithm. Thus, a quantitative analysis is detailed including
non-conformalities generated by independent refinements in neighbor
subdomains. This error has not been assessed with detail in the literature
and it is a key factor for the adaptivity algorithm to perform properly.
An adaptivity algorithm with triangular prisms is also developed and
special considerations for the implementation are explained. Finally, on
top of these three building blocks, the proof of concept of adaptivity
with DDM is discussed.
Programa Oficial de Doctorado en Multimedia y Comunicaciones. Presidente: Daniel Segovia Vargas. Secretario: David Pardo Zubiaur. Vocal: Romanus Dyczij-Edlinge
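The estimate-mark-refine loop at the heart of such adaptivity algorithms can be sketched in one dimension (the thesis works with 3D FEM and triangular prisms; the `arctan` boundary-layer stand-in and the interpolation-error indicator below are invented for illustration):

```python
import numpy as np

def f(x):
    return np.arctan(60.0 * (x - 0.7))   # boundary-layer-like sharp feature

def indicator(a, b):
    """Error estimator: midpoint error of a linear element on [a, b]."""
    m = 0.5 * (a + b)
    return abs(f(m) - 0.5 * (f(a) + f(b)))

elements = [(0.0, 1.0)]
for _ in range(1000):                    # adaptivity loop
    errs = [indicator(a, b) for a, b in elements]
    worst = int(np.argmax(errs))
    if errs[worst] < 1e-3:               # all elements below tolerance
        break
    a, b = elements.pop(worst)           # refine only the worst element
    m = 0.5 * (a + b)
    elements += [(a, m), (m, b)]

print(f"{len(elements)} adapted elements, max indicator "
      f"{max(indicator(a, b) for a, b in elements):.2e}")
```

The mesh ends up graded toward the sharp feature, which is exactly the behaviour that makes adaptivity attractive for the wavelength-dependent boundary layers and singular corners discussed above.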