20 research outputs found

    Permutationally Invariant, Reproducing Kernel-Based Potential Energy Surfaces for Polyatomic Molecules: From Formaldehyde to Acetone

    Constructing accurate, high dimensional molecular potential energy surfaces (PESs) for polyatomic molecules is challenging. Reproducing Kernel Hilbert space (RKHS) interpolation is an efficient way to construct such PESs. However, the scheme is most effective when the input energies are available on a regular grid. Thus the number of reference energies required can become very large even for penta-atomic systems, making such an approach computationally prohibitive when using high-level electronic structure calculations. Here an efficient and robust scheme is presented to overcome these limitations and is applied to constructing high dimensional PESs for systems with up to 10 atoms. Using energies as well as gradients reduces the number of input data required and thus keeps the number of coefficients at a manageable size. Correct implementation of permutational symmetry in the kernel products is tested and explicitly demonstrated for the highly symmetric CH4 molecule. (Comment: 40 pages, 13 figures)
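    The central idea, building permutational invariance into the kernel itself, can be sketched in a few lines. The example below is not the paper's RKHS scheme (which works with reproducing kernels, gridded energies and gradient information); it is a plain kernel ridge regression in which an assumed Gaussian kernel on interatomic distances is averaged over the permutations of identical atoms, demonstrated on a hypothetical A3 system with synthetic energies.

import itertools
import numpy as np

def distances(geom):
    """Flatten the upper triangle of the interatomic distance matrix."""
    d = np.linalg.norm(geom[:, None, :] - geom[None, :, :], axis=-1)
    iu = np.triu_indices(len(geom), k=1)
    return d[iu]

def gaussian_kernel(x, y, sigma=1.0):
    """Base kernel on distance vectors (an assumption, not the paper's kernel)."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sym_kernel(geom_a, geom_b, perms, sigma=1.0):
    """Permutationally invariant kernel: average over permutations of like atoms."""
    xa = distances(geom_a)
    return np.mean([gaussian_kernel(xa, distances(geom_b[list(p)]), sigma) for p in perms])

# Toy A3 system: three identical atoms, so all 3! = 6 permutations apply.
perms = list(itertools.permutations(range(3)))
rng = np.random.default_rng(0)
geoms = rng.normal(size=(20, 3, 3))                               # 20 random geometries
energies = np.array([np.sum(distances(g) ** 2) for g in geoms])   # synthetic stand-in energies

# Kernel ridge regression: solve (K + lam * I) c = E for the coefficients.
K = np.array([[sym_kernel(ga, gb, perms) for gb in geoms] for ga in geoms])
coeff = np.linalg.solve(K + 1e-8 * np.eye(len(geoms)), energies)

def predict_energy(geom):
    """PES prediction at a new geometry."""
    k = np.array([sym_kernel(geom, gb, perms) for gb in geoms])
    return k @ coeff

print(predict_energy(geoms[0]), energies[0])

    Averaging the base kernel over the full permutation group keeps the Gram matrix symmetric and positive semidefinite, so the usual linear solve for the coefficients still applies.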

    Uncertainty Quantification with Applications to Engineering Problems


    Analysis of Reactor Simulations Using Surrogate Models.

    The relatively recent abundance of computing resources has driven computational scientists to build more complex and approximation-free computer models of physical phenomena. Oftentimes, multiple high fidelity computer codes are coupled together in the hope of improving the predictive powers of simulations with respect to experimental data. To improve the predictive capacity of computer codes, experimental data should be folded back into the parameters processed by the codes through optimization and calibration algorithms. However, application of such algorithms may be prohibitive since they generally require thousands of evaluations of computationally expensive, coupled, multiphysics codes. Surrogate models for expensive computer codes have shown promise towards making optimization and calibration feasible. In this thesis, non-intrusive surrogate building techniques are investigated for their applicability in nuclear engineering applications. Specifically, Kriging and the coupling of the anchored-ANOVA decomposition with collocation are utilized as surrogate building approaches. Initially, these approaches are applied and naively tested on simple reactor applications with analytic solutions. Ultimately, Kriging is applied to construct a surrogate to analyze fission gas release during the Risø AN3 power ramp experiment using the fuel performance modeling code Bison. To this end, Kriging is extended from building surrogates for scalar quantities to entire time series using principal component analysis. A surrogate model is built for fission gas kinetics time series and the true values of relevant parameters are inferred by folding experimental data with the surrogate. Sensitivity analysis is also performed on the fission gas release parameters to gain insight into the underlying physics. PhD thesis, Nuclear Engineering and Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111485/1/yankovai_1.pd
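    The extension of Kriging from scalar outputs to time series via principal component analysis can be illustrated as follows. The sketch uses scikit-learn; the toy expensive_code function, the kernel choice and the number of retained components are assumptions for illustration, not details taken from the thesis.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def expensive_code(theta, t=np.linspace(0.0, 1.0, 100)):
    """Toy stand-in for a coupled multiphysics code: 3 inputs -> a 100-point curve."""
    a, b, c = theta
    return a * np.exp(-b * t) + c * t

X_train = rng.uniform(0.5, 2.0, size=(40, 3))             # sampled input parameters
Y_train = np.array([expensive_code(x) for x in X_train])  # matching output time series

# Compress the output curves to a few principal-component scores.
pca = PCA(n_components=3)
scores = pca.fit_transform(Y_train)

# One Kriging (Gaussian-process) surrogate per retained component score.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(3))
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, scores[:, i])
       for i in range(scores.shape[1])]

def predict_series(theta):
    """Predict the full output time series for a new parameter vector."""
    s = np.array([gp.predict(np.atleast_2d(theta))[0] for gp in gps])
    return pca.inverse_transform(s.reshape(1, -1))[0]

x_new = np.array([1.2, 0.8, 1.5])
print(np.max(np.abs(predict_series(x_new) - expensive_code(x_new))))

    Fitting one Gaussian process per retained component keeps the surrogate cheap to evaluate while still returning full output curves, which is what makes folding measured time series back into the parameters feasible.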

    Roadmap on Machine learning in electronic structure

    In recent years, we have been witnessing a paradigm shift in computational materials science. Traditional methods, mostly developed in the second half of the 20th century, are being complemented, extended, and sometimes even completely replaced by faster, simpler, and often more accurate approaches. The new approaches, which we collectively label machine learning, have their origins in the fields of informatics and artificial intelligence, but are making rapid inroads in all other branches of science. With this in mind, this Roadmap article, consisting of multiple contributions from experts across the field, discusses the use of machine learning in materials science and shares perspectives on current and future challenges in problems as diverse as the prediction of materials properties, the construction of force fields, the development of exchange-correlation functionals for density-functional theory, the solution of the many-body problem, and more. In spite of the already numerous and exciting success stories, we are just at the beginning of a long path that will reshape materials science for the many challenges of the 21st century.

    An Application of Kolmogorov's Superposition Theorem to Function Reconstruction in Higher Dimensions

    In this thesis we present a Regularization Network approach to reconstruct a continuous function ƒ:[0,1]^n → R from its function values ƒ(x_j) on discrete data points x_j, j=1,…,P. The ansatz is based on a new constructive version of Kolmogorov's superposition theorem. Typically, the numerical solution of mathematical problems suffers from the so-called curse of dimensionality, i.e. the exponential dependence of the numerical cost on the dimensionality n. To circumvent the curse at least to some extent, higher regularity assumptions are typically made on the function ƒ, which, however, are unrealistic in most cases. Therefore, we employ a representation of the function as a superposition of one-dimensional functions which does not require smoothness assumptions on ƒ beyond continuity. To this end, a constructive version of Kolmogorov's superposition theorem based on work by D. Sprecher is adapted in such a manner that a single outer function Φ and a universal inner function ψ suffice to represent the function ƒ. Here, ψ is the extension of a function defined by M. Köppen on a dense subset of the real line; the proofs of its existence, continuity, and monotonicity are given in this thesis. To compute the outer function Φ, we adapt a constructive algorithm by Sprecher such that in each iteration step, depending on ƒ, an element of a sequence of univariate functions {Φ_r}_r is computed. It is shown that this sequence converges to a continuous limit Φ:R→R. This constructively proves Kolmogorov's superposition theorem with a single outer and a single inner function. Because the numerical complexity of computing the outer function Φ with this algorithm grows exponentially with the dimensionality, we alternatively present a Regularization Network approach based on this representation, in which the outer function is computed from discrete function samples (x_j, ƒ(x_j)), j=1,…,P. The model to reconstruct ƒ is introduced in two steps. First, the outer function Φ is represented in a finite basis with unknown coefficients, which are then determined by a variational formulation, i.e. by the minimization of a regularized empirical error functional. A detailed numerical analysis of this model shows that Kolmogorov's representation transforms the dimensionality of ƒ into oscillations of Φ. Thus, the use of locally supported basis functions leads to an exponential growth of the complexity, since the spatial mesh resolution has to resolve the strong oscillations. Furthermore, a numerical analysis of the Fourier transform of Φ shows that the locations of the relevant frequencies in Fourier space can be determined a priori and are independent of ƒ. It also reveals a product structure of the outer function and directly motivates the definition of the final model. Therefore, in the second step, Φ is replaced by a product of functions, each factor of which is expanded in a Fourier basis with appropriate frequency numbers. Again, the coefficients in the expansions are determined by the minimization of a regularized empirical error functional. For both models, the underlying approximation spaces are developed by means of reproducing kernel Hilbert spaces, and the corresponding norms are the respective regularization terms in the empirical error functionals. Thus, both approaches can be interpreted as Regularization Networks. However, it is important to note that the error functional for the second model is not convex, so nonlinear minimizers have to be used for the computation of the model parameters. A detailed numerical analysis of the product model shows that it is capable of reconstructing functions which depend on up to ten variables.
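    Both models are interpreted as Regularization Networks, i.e. minimizers of a regularized empirical error functional over a reproducing kernel Hilbert space. The sketch below shows only that generic framework: by the representer theorem, the minimizer of (1/P) Σ_j (ƒ(x_j) - y_j)^2 + λ ||ƒ||_K^2 is ƒ(x) = Σ_j c_j K(x, x_j) with (K + λ P I) c = y. The Gaussian kernel and the toy target are assumptions; the thesis's models use the Kolmogorov-based bases instead.

import numpy as np

def gaussian_gram(X, Y, sigma=0.5):
    """Gram matrix K[i, j] = exp(-||X_i - Y_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)[:, None] + np.sum(Y ** 2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(2)
n, P, lam = 5, 400, 1e-6                        # dimension, number of samples, regularization
X = rng.uniform(0.0, 1.0, size=(P, n))          # data points x_j in [0, 1]^n
y = np.prod(np.sin(np.pi * X), axis=1)          # function values f(x_j) of a toy target

# Representer theorem: the minimizer of the regularized empirical error
# (1/P) sum_j (f(x_j) - y_j)^2 + lam * ||f||_K^2 is f = sum_j c_j K(., x_j).
K = gaussian_gram(X, X)
c = np.linalg.solve(K + lam * P * np.eye(P), y)

def f_hat(x):
    """Evaluate the reconstructed function at a new point."""
    return (gaussian_gram(np.atleast_2d(x), X) @ c)[0]

x_test = rng.uniform(0.0, 1.0, size=n)
print(f_hat(x_test), np.prod(np.sin(np.pi * x_test)))

    The product model of the thesis replaces this single expansion by a product of Fourier expansions, which makes the error functional nonconvex, so the linear solve above no longer applies and nonlinear minimizers are required.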

    Experimental and Chemical Kinetic Modelling Study on the Combustion of Alternative Fuels in Fundamental Systems and Practical Engines

    In this work, experimental data on ignition delay times of n-butanol, gasoline, toluene reference fuel (TRF), a gasoline/n-butanol blend and a TRF/n-butanol blend were obtained using the Leeds University Rapid Compression Machine (RCM), while autoignition (knock) onsets and knock intensities of gasoline, TRF, gasoline/n-butanol and TRF/n-butanol blends were measured using the Leeds University Optical Engine (LUPOE). The work showed that within the RCM, the 3-component TRF surrogate captures the trend of the gasoline data well across the temperature range. However, based on results obtained in the engine, it appears that the chosen TRF may not be an excellent representation of gasoline under engine conditions, as the knock boundary of the TRF as well as the measured knock onsets are significantly lower than those of gasoline. The ignition delay times measured in the RCM for the blend lay between those of gasoline and n-butanol under stoichiometric conditions across the temperature range studied, and at lower temperatures n-butanol acts as an octane enhancer over and above what might be expected from a simple linear blending law. In the engine, the measured knock onsets for the blend were higher than those of gasoline at the more retarded spark timing of 6 CA bTDC, but the effect disappears at higher spark advances. Future studies exploring the blending effect of n-butanol across a range of blending ratios are required, since it is difficult to draw conclusions about the overall effect of n-butanol blending on gasoline from the single blend considered in this study. The chemical kinetic modelling of the fuels investigated has also been evaluated by comparing results from simulations employing the relevant reaction mechanisms with experimental data either sourced from the open literature or measured in-house. Local as well as global uncertainty/sensitivity methods, accounting for the impact of uncertainties in the input parameters, were also employed within the framework of ignition delay time modelling in an RCM and species concentration prediction in a JSR, for the analysis of the chemical kinetic modelling of DME, n-butanol, TRF and TRF/n-butanol oxidation, in order to advance the understanding of the key reaction rates that are crucial for the accurate prediction of the combustion of alternative fuels in internal combustion engines. The results showed that uncertainties in predicting key target quantities for the various fuels studied are currently large but driven by a few reactions. Further studies of the key reaction channels identified in this work, at the P-T conditions of relevance to combustion applications, could help to improve current mechanisms. Moreover, the chemical kinetic modelling of the autoignition and species concentrations of TRF, TRF/n-butanol and n-butanol fuels was carried out using the adopted TRF/n-butanol mechanism as input to engine simulations in LOGEengine, a recently developed commercial engine simulation software. Similar to the results obtained in the RCM modelling work, the knock onsets predicted for TRF and the TRF/n-butanol blend under engine conditions were consistently higher than the measured data. Overall, the work demonstrated that an accurate representation of the low temperature chemistry in current chemical kinetic models of alternative fuels is crucial for the accurate description of the chemical processes and the autoignition of the end gas in the engine.
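    As a rough illustration of the kind of ignition delay modelling and brute-force sensitivity analysis referred to above, the sketch below computes a constant-volume ignition delay and its response to doubling individual rate constants. It assumes the Cantera library with its bundled GRI-3.0 methane mechanism as a stand-in; the RCM-specific models, the DME/n-butanol/TRF mechanisms and LOGEengine used in the thesis are not reproduced here.

import cantera as ct
import numpy as np

def ignition_delay(gas, T0=1200.0, p0=10e5, phi=1.0, t_end=0.1):
    """Ignition delay from the peak rate of temperature rise in an adiabatic,
    constant-volume reactor (a common RCM-like idealization)."""
    gas.TP = T0, p0
    gas.set_equivalence_ratio(phi, 'CH4', 'O2:1.0, N2:3.76')
    reactor = ct.IdealGasReactor(gas)
    sim = ct.ReactorNet([reactor])
    times, temps = [0.0], [reactor.T]
    while sim.time < t_end:
        sim.step()
        times.append(sim.time)
        temps.append(reactor.T)
    dTdt = np.gradient(np.array(temps), np.array(times))
    return times[int(np.argmax(dTdt))]

gas = ct.Solution('gri30.yaml')     # GRI-3.0 methane mechanism as a stand-in
tau0 = ignition_delay(gas)

# Brute-force local sensitivity: double one rate constant at a time and record
# the relative change in ignition delay (only the first few reactions, to keep
# the run short).
for i in range(5):
    gas = ct.Solution('gri30.yaml')
    gas.set_multiplier(2.0, i)
    tau = ignition_delay(gas)
    print(f"{gas.reaction(i).equation:40s}  d(ln tau)/d(ln k) ~ {np.log(tau / tau0) / np.log(2.0):+.3f}")

    Doubling one rate constant at a time yields only local sensitivities; the global uncertainty/sensitivity methods mentioned in the abstract instead sample all uncertain rate parameters simultaneously.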