7 research outputs found

    Approximation of reachable sets using optimal control algorithms

    Numerical experiments with a method for the approximation of reachable sets of nonlinear control systems are reported. The method is based on the formulation of suitable optimal control problems with varying objective functions, whose discretization by Euler's method leads to finite-dimensional non-convex nonlinear programs. These are solved by a sequential quadratic programming method. An efficient adjoint method for gradient computation is used to reduce the computational cost. The discretization of the state space is more efficient than the usual sequential realization of Euler's method and allows adaptive calculations or refinements. The method is illustrated for two test examples. Both examples have nonlinear dynamics; the first has a convex reachable set, whereas the second has a non-convex reachable set.
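
    The sketch below illustrates the general idea under stated assumptions: for each point of a coarse grid, an Euler-discretized optimal control problem is solved that drives the terminal state as close as possible to that grid point, and the resulting endpoints approximate the reachable set. The dynamics, the grid, and the use of SciPy's SLSQP solver with finite-difference gradients are illustrative choices only; the paper uses an adjoint method for the gradients.

```python
# Hypothetical sketch (not the authors' code): approximate the reachable set of
# x' = f(x, u) at time T by solving, for each grid point g,
#     min_u || x_N(u) - g ||^2,  x_{k+1} = x_k + h f(x_k, u_k),  |u_k| <= 1,
# with an SQP-type solver (SciPy's SLSQP).
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # example nonlinear dynamics (assumed, for illustration only)
    return np.array([x[1], u - np.sin(x[0])])

def euler_endpoint(u_seq, x0, h):
    # explicit Euler integration of the controlled system
    x = x0.copy()
    for u in u_seq:
        x = x + h * f(x, u)
    return x

def distance_to_target(u_seq, x0, h, target):
    # objective of one optimal control problem: squared distance to the grid point
    return np.sum((euler_endpoint(u_seq, x0, h) - target) ** 2)

def approximate_reachable_set(x0, grid, T=1.0, N=20):
    h = T / N
    bounds = [(-1.0, 1.0)] * N          # control constraints |u_k| <= 1
    points = []
    for g in grid:
        res = minimize(distance_to_target, np.zeros(N),
                       args=(x0, h, np.asarray(g)),
                       method="SLSQP", bounds=bounds)
        points.append(euler_endpoint(res.x, x0, h))  # projection of g onto the reachable set
    return np.array(points)

# usage: project a coarse grid of target points onto the reachable set
grid = [(gx, gy) for gx in np.linspace(-2, 2, 5) for gy in np.linspace(-2, 2, 5)]
reachable_pts = approximate_reachable_set(np.zeros(2), grid)
```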

    Matrix-free voxel-based finite element method for materials with complicated microstructure

    Modern image detection techniques such as micro computer tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide us with high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution, so-called image-based analysis. However, especially in 3D, discretizations of these models easily reach 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods to reduce first the memory demand and then the computation time, thereby enabling an execution of the image-based analysis on modern desktop computers. Hence, the numerical model is a straightforward grid discretization of the voxel-based geometry (pixels extended by a third dimension), which omits boundary detection algorithms and allows a reduced storage of the finite element data structure and a matrix-free solution algorithm. This in turn reduces the effort of almost all applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented: an element-by-element method and a node-edge variant. The efficient iterative solution method of conjugate gradients is used with preconditioners that can be applied matrix-free, such as the Jacobi method and the especially well-suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements, which carry different material information at the integration points and are integrated sub-cell-wise, yet without additional boundary detection. The efficiency of the matrix-free methods is retained.
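
    A minimal sketch of the central solver idea, assuming a single precomputed voxel stiffness matrix and an explicit voxel-to-DOF connectivity list (both illustrative, not the thesis code): the conjugate gradient method only ever needs the action of the stiffness matrix on a vector, which can be accumulated element by element without assembling a global matrix, and a Jacobi preconditioner can be built the same way. The multigrid preconditioner used in the thesis would take the place of the Jacobi one.

```python
import numpy as np

def matrix_free_cg(apply_A, b, precond, tol=1e-8, max_iter=1000):
    """Preconditioned CG that only sees A through the callback apply_A(v)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def make_apply_A(Ke, connectivity, n_dofs):
    """Element-by-element operator: one local stiffness matrix Ke is reused for
    every voxel; only the voxel-to-DOF connectivity is stored, never a global matrix."""
    def apply_A(v):
        out = np.zeros(n_dofs)
        for dofs in connectivity:        # loop over voxels (elements)
            out[dofs] += Ke @ v[dofs]    # scatter the local contribution
        return out
    return apply_A

def make_jacobi(Ke, connectivity, n_dofs):
    """Jacobi preconditioner; the diagonal of A is accumulated element-wise too."""
    diag = np.zeros(n_dofs)
    for dofs in connectivity:
        diag[dofs] += np.diag(Ke)
    return lambda r: r / diag
```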

    A type-based prototype compiler for telescoping languages

    Scientists want to encode their applications in domain languages with high-level operators that reflect the way they conceptualize computations in their domains. The telescoping-languages approach calls for automatically generating optimizing compilers for these languages by pre-compiling the underlying libraries that define them, generating multiple variants optimized for use in different possible contexts, including different argument types. The resulting compiler replaces calls to the high-level constructs with calls to the optimized variants. This approach aims to automatically derive high-performance executables from programs written in high-level domain-specific languages. TeleGen is a prototype telescoping-languages compiler that performs type-based specializations. For the purposes of this dissertation, types include any set of variable properties, such as intrinsic type, size and array sparsity pattern. Type inference and specialization are cornerstones of the telescoping-languages strategy. Because optimization of library routines must occur before their full calling contexts are available, type inference provides the critical information needed to determine which specialized variants to generate, as well as how to best optimize each variant to achieve the highest performance. To build the prototype compiler, we developed a precise type-inference algorithm that infers all legal type tuples, or type configurations, for the program variables, including routine arguments, for all legal calling contexts. We use the type information inferred by our algorithm to drive specialization and optimization. We demonstrate the practical value of our type-inference algorithm and the type-based specialization strategy in TeleGen.
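
    A toy illustration of the specialization-and-dispatch idea (not the TeleGen compiler itself; the routine, the type property and the variants are assumptions): a library routine is pre-specialized for two type configurations of its argument, and the call site is resolved to the variant matching the argument type. In a telescoping compiler the dispatch would be decided statically from the inferred type configuration rather than checked at run time.

```python
import numpy as np
from scipy.sparse import csr_matrix, issparse

def matvec_dense(A, x):
    # variant specialized for a dense matrix argument
    return A @ x

def matvec_sparse(A, x):
    # variant specialized for a sparse matrix argument
    return A.dot(x)

def matvec(A, x):
    # run-time stand-in for the compiler's static, type-driven variant selection
    return matvec_sparse(A, x) if issparse(A) else matvec_dense(A, x)

x = np.ones(3)
print(matvec(np.eye(3), x))              # resolved to the dense variant
print(matvec(csr_matrix(np.eye(3)), x))  # resolved to the sparse variant
```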

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone wishing to pursue research in artificial intelligence, machine learning and their widespread applications.

    EMEP particulate matter assessment report


    Towards a more efficient use of computational budget in large-scale black-box optimization

    Evolutionary algorithms are general-purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, rapid advances in science and technology have led to the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. When the available computational budget is limited, the dimensionality of the search space is one of the main contributors to the difficulty and complexity of an optimization problem. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems. More specifically, we study the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques when dealing with large-scale problems. Finally, assuming uniformity of the initial population is a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, as in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
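
    A hedged sketch of the contribution-based budget allocation idea (the grouping, the inner optimizer and all names are assumptions, not the dissertation's framework): the decision vector is split into subcomponents, each round of optimizing one subcomponent records how much it improved the overall objective, and the next slice of the evaluation budget goes to the subcomponent with the largest observed contribution.

```python
import numpy as np

def optimize_subcomponent(f, x, idx, evals, sigma=0.1, rng=np.random):
    """Crude (1+1)-style local search over x[idx] only; returns the improvement."""
    best = f(x)
    start = best
    for _ in range(evals):
        trial = x.copy()
        trial[idx] += sigma * rng.standard_normal(len(idx))
        val = f(trial)
        if val < best:
            best = val
            x[:] = trial
    return start - best                    # contribution of this round

def cc_with_budget_allocation(f, dim, groups, total_evals, slice_evals=100):
    x = np.random.uniform(-1, 1, dim)      # cooperative context vector
    contribution = np.ones(len(groups))    # optimistic initial estimates
    used = 0
    while used < total_evals:
        g = int(np.argmax(contribution))   # pick the most promising subcomponent
        gain = optimize_subcomponent(f, x, groups[g], slice_evals)
        contribution[g] = gain             # update its contribution estimate
        used += slice_evals
    return x, f(x)

# usage on a toy imbalanced problem: the first group dominates the objective
f = lambda x: 100 * np.sum(x[:5] ** 2) + np.sum(x[5:] ** 2)
groups = [list(range(0, 5)), list(range(5, 10))]
xbest, fbest = cc_with_budget_allocation(f, 10, groups, total_evals=5000)
```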

    EMEP Particulate Matter Assessment Report

    This EMEP PM Assessment Report addresses the adequacy and completeness of the underpinning science upon which models currently used for policy development have been built. An important issue has been to strike a balance between the need to resolve a number of key scientific uncertainties and the desire to make progress with the integrated assessment modelling. It has been recognised that striking this balance is important for policy-making within the Working Group on Strategies and Review. The purpose of the report is to inform the policy process about the state of current understanding on PM issues and the level of confidence in PM models.