
    Rigorous numerical approaches in electronic structure theory

    Electronic structure theory concerns the description of molecular properties according to the postulates of quantum mechanics. For practical purposes, it is realized entirely through numerical computation, the scope of which is constrained by computational costs that increase rapidly with the size of the system. The significant progress made in this field over the past decades has been facilitated in part by the willingness of chemists to forego some mathematical rigour in exchange for greater efficiency. While such compromises make calculations on large systems feasible, there are lingering concerns over their impact on the quality of the results produced. This research is motivated by two key issues that contribute to this loss of quality, namely i) the numerical errors accumulated due to the use of finite precision arithmetic and the application of numerical approximations, and ii) the reliance on iterative methods that are not guaranteed to converge to the correct solution. Taking these issues into consideration, the aim of this thesis is to explore ways to perform electronic structure calculations with greater mathematical rigour through the application of rigorous numerical methods, focusing in particular on methods based on interval analysis and deterministic global optimization. The Hartree-Fock electronic structure method is used as the subject of this study due to its ubiquity within the domain. We outline an approach for placing rigorous bounds on the numerical error in Hartree-Fock computations. This is achieved through the application of interval analysis techniques, which can rigorously bound and propagate quantities affected by numerical error. Using this approach, we implement a program called Interval Hartree-Fock.
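    The error-bounding idea can be illustrated with a toy interval type that rounds outward at every operation, so the resulting interval is guaranteed to enclose the exact value. This is a minimal sketch in the spirit of interval analysis, not code from Interval Hartree-Fock itself:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Round each endpoint outward so the sum is rigorously enclosed.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # The product range is spanned by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def width(self):
        return self.hi - self.lo

# Propagate a 1e-12 input uncertainty through x*y + x.
x = Interval(1.0, 1.0 + 1e-12)   # a value known only to within 1e-12
y = Interval(2.0, 2.0)
z = x * y + x                    # rigorously encloses the exact result 3.0
```

The width of `z` bounds the total accumulated error: the true value 3.0 is guaranteed to lie inside, however the individual floating-point roundings fell.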
Given a closed-shell system and the current electronic state, this program is able to compute rigorous error bounds on quantities including i) the total energy, ii) molecular orbital energies, iii) molecular orbital coefficients, and iv) derived electronic properties. Interval Hartree-Fock is adapted as an error analysis tool for studying the impact of numerical error in Hartree-Fock computations. It is used to investigate the effect of input related factors such as system size and basis set types on the numerical accuracy of the Hartree-Fock total energy. Consideration is also given to the impact of various algorithm design decisions. Examples include the application of different integral screening thresholds, the variation between single and double precision arithmetic in two-electron integral evaluation, and the adjustment of interpolation table granularity. These factors are relevant to both the usage of conventional Hartree-Fock code, and the development of Hartree-Fock code optimized for novel computing devices such as graphics processing units. We then present an approach for solving the Hartree-Fock equations to within a guaranteed margin of error. This is achieved by treating the Hartree-Fock equations as a non-convex global optimization problem, which is then solved using deterministic global optimization. The main contribution of this work is the development of algorithms for handling quantum chemistry specific expressions such as the one and two-electron integrals within the deterministic global optimization framework. This approach was implemented as an extension to an existing open source solver. Proof-of-concept calculations are performed for a variety of problems within Hartree-Fock theory, including i) single point energy calculation, ii) geometry optimization, iii) basis set optimization, and iv) excited state calculation. Performance analyses of these calculations are also presented and discussed.
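The deterministic global optimization strategy can be sketched as an interval branch-and-bound: interval enclosures give certified lower bounds on each box, point evaluations give upper bounds, and boxes that provably cannot contain the minimum are discarded. This is an illustrative one-variable toy (rounding ignored for clarity), not the solver extension developed in the thesis:

```python
import heapq

def f(x):
    return x**4 - 4.0 * x**2          # global minimum -4 at x = ±sqrt(2)

def enclose_f(a, b):
    """Naive interval enclosure of f on the box [a, b]."""
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)   # range of x**2
    sq_hi = max(a * a, b * b)
    return sq_lo * sq_lo - 4.0 * sq_hi, sq_hi * sq_hi - 4.0 * sq_lo

def branch_and_bound(a, b, tol=1e-6):
    best_ub = f(0.5 * (a + b))        # any point value is a valid upper bound
    heap = [(enclose_f(a, b)[0], a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_ub - tol:
            continue                  # box provably holds no better minimum
        mid = 0.5 * (lo + hi)
        for c, d in ((lo, mid), (mid, hi)):
            best_ub = min(best_ub, f(0.5 * (c + d)))
            box_lb = enclose_f(c, d)[0]
            if box_lb < best_ub - tol and d - c > 1e-10:
                heapq.heappush(heap, (box_lb, c, d))
    return best_ub

minimum = branch_and_bound(-3.0, 3.0)   # converges to -4 within tol
```

Unlike a heuristic optimizer, the result carries a guarantee: on termination, no surviving box can improve on the returned value by more than `tol`.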

    Data to Understand the Nature of Non-Covalent Interactions in the Thiophene Clusters

    We have reported herein the data needed to understand the nature and number of non-covalent interactions that stabilize the structures of the thiophene clusters. In addition, we have also provided the optimized Cartesian coordinates of all the structures of the investigated thiophene clusters. Initially, the geometries were generated using the ABCluster code, which performs a global optimization to locate local and global minimum structures of molecular clusters. The located geometries were then optimized at the MP2/aug-cc-pVDZ level of theory using the Gaussian 16 suite of programs. To understand the nature of the non-covalent interactions, we performed a quantum theory of atoms in molecules (QTAIM) analysis on all the structures of the thiophene dimer. The QTAIM analysis was also performed on the most stable structures of the thiophene trimer and tetramer, using the AIMAll program. The data reported in this paper contain the critical points, the bond paths, and their related properties for each investigated structure, as well as the optimized Cartesian coordinates of all the investigated structures of the thiophene clusters. These data can be used for any further investigation involving thiophene clusters. For further information and analysis, the reader is referred to the original related research article (Malloum and Conradie, 2022).
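    Assuming the distributed Cartesian coordinates follow the common XYZ file convention (atom count, comment line, then one `symbol x y z` row per atom), a minimal reader might look like the sketch below; the three-atom fragment is hypothetical, not taken from the dataset:

```python
def read_xyz(text):
    """Parse a standard XYZ-format block into (symbols, coordinates)."""
    lines = text.strip().splitlines()
    natoms = int(lines[0])            # first line: number of atoms
    symbols, coords = [], []
    for line in lines[2:2 + natoms]:  # second line is a free-form comment
        sym, x, y, z = line.split()[:4]
        symbols.append(sym)
        coords.append((float(x), float(y), float(z)))
    return symbols, coords

# Hypothetical fragment for illustration only (not real thiophene geometry).
fragment = """\
3
illustrative fragment
S   0.000000   0.000000   0.000000
C   1.300000   0.900000   0.000000
C  -1.300000   0.900000   0.000000
"""
symbols, coords = read_xyz(fragment)
```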

    Multidisciplinary design optimization of a fighter aircraft with damage tolerance constraints and a probabilistic model of the fatigue environment.

    Damage tolerance analysis (DTA) was considered in the global design optimization of an aircraft wing structure. Residual strength and fatigue life requirements, based on the damage tolerance philosophy, were investigated as new design constraints. In general, accurate fatigue prediction is difficult if the load environment is not known with a high degree of certainty. To address this issue, a probabilistic approach was used to describe the uncertain load environment. Probabilistic load spectra models were developed from flight recorder data. The global/local finite element approach allowed local fatigue requirements to be considered in the global design optimization. AFGROW fatigue crack growth analysis provided a new strength criterion for satisfying damage tolerance requirements within a global optimization environment. Initial research with the ASTROS program used the probabilistic load model and this damage tolerance constraint to optimize cracked skin panels on the lower wing of a fighter/attack aircraft. For an aerodynamic and structural model similar to an F-16, ASTROS simulated symmetric and asymmetric maneuvers during the optimization. Symmetric maneuvers, without underwing stores, produced the highest stresses and drove the optimization of the inboard lower wing skin. Asymmetric maneuvers, with underwing stores, affected the optimum thickness of the outboard hard points. Subsequent design optimizations included von Mises stress, aileron effectiveness, and lift effectiveness constraints simultaneously. This optimization was driven by the DTA and von Mises stress constraints and, therefore, DTA requirements can have an active role to play in preliminary aircraft design.
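    The general idea of a probabilistic load model can be sketched as fitting a simple distribution to recorded load-factor peaks and sampling a synthetic spectrum from the fit. This toy uses made-up numbers and an assumed lognormal form, not the spectra models developed in the work:

```python
import math
import random
import statistics

def fit_lognormal(peaks):
    """Fit a lognormal model to recorded positive load-factor peaks."""
    logs = [math.log(p) for p in peaks]
    return statistics.mean(logs), statistics.stdev(logs)

def sample_spectrum(mu, sigma, n, rng):
    """Draw a synthetic spectrum of n peaks, sorted largest first."""
    return sorted((math.exp(rng.gauss(mu, sigma)) for _ in range(n)),
                  reverse=True)

rng = random.Random(0)
recorded = [3.2, 4.1, 2.8, 5.0, 3.6, 4.4, 2.9, 3.8]   # illustrative peaks
mu, sigma = fit_lognormal(recorded)
spectrum = sample_spectrum(mu, sigma, 1000, rng)      # synthetic spectrum
```

A synthetic spectrum like this can then drive a crack-growth analysis many times over, exploring the load uncertainty rather than committing to a single assumed history.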

    An Efficient Algorithm for Automatic Structure Optimization in X-ray Standing-Wave Experiments

    X-ray standing-wave photoemission experiments involving multilayered samples are emerging as unique probes of the buried interfaces that are ubiquitous in current device and materials research. Analyzing such data requires a structure optimization process, comparing experiment to theory, that is not straightforward. In this work, we present a new computer program for optimizing the analysis of standing-wave data, called SWOPT, which automates this trial-and-error optimization process. The program includes an algorithm that has been developed for computationally expensive problems: so-called black-box simulation optimization. It also includes a more efficient version of the Yang X-ray Optics Program (YXRO) [Yang, S.-H., Gray, A.X., Kaiser, A.M., Mun, B.S., Sell, B.C., Kortright, J.B., Fadley, C.S., J. Appl. Phys. 113, 1 (2013)] which is about an order of magnitude faster than the original version. Human interaction is not required during optimization. We tested our optimization algorithm on real and hypothetical problems and show that it finds better solutions significantly faster than a random search approach. The total optimization time ranges, depending on the sample structure, from minutes to a few hours on a modern laptop computer, and can be up to 100x faster than a corresponding manual optimization. These speeds make the SWOPT program a valuable tool for real-time analyses of data during synchrotron experiments.
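    A derivative-free pattern search of the kind used for expensive black-box simulation optimization can be sketched as follows. This is an illustrative compass search against a stand-in objective, not the actual SWOPT algorithm:

```python
def compass_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_evals=2000):
    """Derivative-free pattern search: poll +/- step along each axis,
    accept the first improvement, and shrink the step when none helps."""
    x, fx, evals = list(x0), f(x0), 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial); evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink   # no poll direction helped: refine the mesh
    return x, fx

# Stand-in "simulation": misfit between a model and its true parameters.
def misfit(p):
    return (p[0] - 1.2) ** 2 + (p[1] + 0.7) ** 2

best_x, best_f = compass_search(misfit, [0.0, 0.0])
```

The search uses only objective values, never gradients, which is what makes it suitable when each evaluation is a full optical simulation; the evaluation budget (`max_evals`) is the natural cost knob.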

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems.
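    A basic primitive inside many large-scale first-order conic solvers is Euclidean projection onto the second-order cone, which has a well-known closed form. A minimal sketch (illustrative, not from the paper):

```python
import math

def project_soc(x, t):
    """Euclidean projection of (x, t) onto the cone {(x, t): ||x|| <= t}."""
    nx = math.sqrt(sum(v * v for v in x))
    if nx <= t:
        return list(x), t             # already inside the cone
    if nx <= -t:
        return [0.0] * len(x), 0.0    # inside the polar cone: project to 0
    alpha = 0.5 * (nx + t)            # otherwise land on the cone boundary
    return [alpha * v / nx for v in x], alpha

# (3, 4) has norm 5 > t = 0, so the projection lies on the boundary.
xp, tp = project_soc([3.0, 4.0], 0.0)
```

Splitting methods for conic programs alternate a linear-system solve with exactly this kind of cheap cone projection, which is what makes them scale to large instances.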

    Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX)

    The modeling and analysis generic interface for external numerical codes (MAGIX) is a model optimizer developed under the framework of the coherent set of astrophysical tools for spectroscopy (CATS) project. The MAGIX package provides an easy interface between existing codes and an iterating engine that attempts to minimize deviations of the model results from available observational data, constraining the values of the model parameters and providing corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of ALMA (Atacama Large Millimeter Array), but can be used with other astronomical data and with non-astronomical data.
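    The iterating-engine idea, minimizing the deviation of a pluggable model from observed data over its parameter space, can be sketched generically. This is a toy shrinking-grid fitter with a hypothetical one-parameter model, not the MAGIX engine itself:

```python
def chi_square(model, params, xdata, ydata, sigma):
    """Weighted squared deviation of the model from the observations."""
    return sum(((model(x, params) - y) / s) ** 2
               for x, y, s in zip(xdata, ydata, sigma))

def refine_grid_fit(model, bounds, xdata, ydata, sigma, levels=8, pts=11):
    """Shrinking-grid search: scan the parameter range on a coarse grid,
    then repeatedly zoom in around the best point found so far."""
    lo, hi = bounds
    best_p, best_c = None, float("inf")
    for _ in range(levels):
        for i in range(pts):
            p = lo + (hi - lo) * i / (pts - 1)
            c = chi_square(model, p, xdata, ydata, sigma)
            if c < best_c:
                best_p, best_c = p, c
        span = (hi - lo) / (pts - 1)
        lo, hi = best_p - span, best_p + span   # zoom in around the best
    return best_p, best_c

# Hypothetical one-parameter model: a line through the origin, y = a*x.
model = lambda x, a: a * x
xd, yd = [1.0, 2.0, 3.0], [2.1, 3.9, 6.0]
best_a, best_chi2 = refine_grid_fit(model, (0.0, 10.0), xd, yd, [0.1] * 3)
```

Because the engine only ever calls `model(x, params)`, any external code wrapped behind that signature can be fitted the same way, which is the essence of a generic interface.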