
    Bridging microscopy with molecular dynamics and quantum simulations: An AtomAI based pipeline

    Recent advances in (scanning) transmission electron microscopy have enabled routine generation of large volumes of high-veracity structural data on 2D and 3D materials, naturally posing the challenge of using these data as starting inputs for atomistic simulations. In this fashion, theory addresses experimentally observed structures rather than the full range of theoretically possible atomic configurations. This challenge is, however, highly non-trivial due to the extreme disparity between the intrinsic time scales accessible to modern simulations and to microscopy, as well as the latencies of microscopy and simulations themselves. Addressing it requires, as a first step, bridging the instrumental data flow and the physics-based simulation environment, so that regions of interest can be selected and explored with physical simulations. Here we report the development of a machine-learning workflow that directly bridges the instrument data stream into Python-based molecular dynamics and density-functional-theory environments, using pre-trained neural networks to convert imaging data into physical descriptors. The workflow identifies pathways to ensure structural stability and to compensate for the observational biases universally present in the data. This approach is demonstrated for a graphene system, reconstructing an optimized geometry and simulating temperature-dependent dynamics, including adsorption of a Cr ad-atom and graphene healing effects; the workflow itself is universal and can be applied to other material systems.
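
    A minimal sketch of the image-to-simulation handoff described above, assuming atomic positions have already been extracted from a frame; extract_positions() below is a placeholder for inference with a pre-trained network (e.g. an AtomAI model), and ASE's EMT calculator only stands in for the MD/DFT back ends mentioned in the abstract.

        # Sketch: turn detected atomic positions from a microscopy frame into an
        # atomistic model and relax it. extract_positions() is a placeholder for
        # pre-trained NN inference; EMT is only a stand-in for MD/DFT engines.
        import numpy as np
        from ase import Atoms
        from ase.calculators.emt import EMT
        from ase.optimize import BFGS

        def extract_positions(image):
            # Placeholder: a real pipeline would run the segmentation network on
            # the frame; here we return a synthetic (N, 2) grid of coordinates.
            grid = np.arange(2.0, 12.0, 2.5)          # Angstrom
            return np.array([[x, y] for x in grid for y in grid])

        frame = np.zeros((128, 128))                  # stand-in for an instrument frame
        xy = extract_positions(frame)

        # Build a 2D carbon structure (z = 0) from the detected atomic columns.
        atoms = Atoms("C" * len(xy),
                      positions=np.column_stack([xy, np.zeros(len(xy))]),
                      cell=[14.0, 14.0, 20.0],
                      pbc=[True, True, False])

        atoms.calc = EMT()
        BFGS(atoms, logfile=None).run(fmax=0.05, steps=50)   # geometry optimization
        print("Relaxed energy (eV):", atoms.get_potential_energy())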

    Evaluating and Enabling Scalable High Performance Computing Workloads on Commercial Clouds

    Performance, usability, and accessibility are critical components of high performance computing (HPC). Usability and performance are especially important to academic researchers, who generally have little time to learn a new technology and require a certain level of performance to ensure the quality and quantity of their research results. We have observed that while not all workloads run well in the cloud, some perform well. We have also observed that although commercial cloud adoption by industry has been growing at a rapid pace, its use by academic researchers has not grown as quickly. We aim to help close this gap and enable researchers to utilize the commercial cloud more efficiently and effectively. We present our results on architecting and benchmarking an HPC environment on Amazon Web Services (AWS), where we observe that particular types of applications are, and are not, suited for the commercial cloud. Then, we present our results on architecting and building a provisioning and workflow management tool (PAW), an application that enables a user to launch an HPC environment in the cloud, execute a customizable workflow, and automatically delete the HPC environment after the workflow has completed. We then present our results on the scalability of PAW and the commercial cloud for compute-intensive workloads by deploying a 1.1 million vCPU cluster. We then discuss our research into the feasibility of utilizing commercial cloud infrastructure to tackle the large demand spikes and data-intensive characteristics of Transportation Cyberphysical Systems (TCPS) workloads. Then, we present our research on utilizing the commercial cloud for urgent HPC applications by deploying a 1.5 million vCPU cluster to process 211 TB of traffic video data for use by first responders during an evacuation. Lastly, we present the contributions and conclusions drawn from this work.
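
    The provision-run-teardown pattern described above can be illustrated with a minimal boto3 sketch (illustrative only, not the PAW tool itself); the AMI ID, instance type, and the run_workflow() stub are placeholder assumptions.

        # Launch instances, run a workflow, then delete the environment (boto3 sketch).
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        def provision(count):
            resp = ec2.run_instances(ImageId="ami-xxxxxxxx",      # placeholder AMI
                                     InstanceType="c5.18xlarge",  # placeholder type
                                     MinCount=count, MaxCount=count)
            ids = [i["InstanceId"] for i in resp["Instances"]]
            ec2.get_waiter("instance_running").wait(InstanceIds=ids)
            return ids

        def run_workflow(instance_ids):
            # Placeholder: a real tool would stage data, submit jobs to the
            # cluster, and poll for completion before returning.
            pass

        def teardown(instance_ids):
            ec2.terminate_instances(InstanceIds=instance_ids)
            ec2.get_waiter("instance_terminated").wait(InstanceIds=instance_ids)

        ids = provision(count=4)
        try:
            run_workflow(ids)
        finally:
            teardown(ids)   # the HPC environment is removed once the workflow ends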

    Optimization Of Two-Dimensional Dual Beam Scanning System Using Genetic Algorithms

    This thesis presents a new approach to optimizing the performance of a dual-beam optical scanning system, in terms of its scanning combinations and speed, using a genetic algorithm (GA). The problem is decomposed into two sub-problems: task segregation, in which the scanning tasks are partitioned and assigned to each scanner head, and path planning, in which the best combinatorial path for each scanner is determined so as to minimize the total scanning motion and time. The knowledge acquired during the process is interpreted and mapped into vectors, which are stored in a database and used by the system to guide its reasoning. The research also develops a machine-learning system and program, based on the genetic algorithm, capable of independent learning and of optimizing the scanning sequence using novel GA operators. The main motivation is to introduce and evaluate an advanced, customized GA; comparison results for different combinatorial operators and tests with different probability factors are reported. Also proposed are new modifications to an existing genetic operator, DPPC (Dynamic Pre-Populated Crossover), together with a modification of a simple representation method, MLR (Multi-Layered Representation). In addition, the thesis discusses the performance of the new operators GA_INSP (GA Inspection Module), DTC (Dynamic Tuning Crossover), and BCS (Bi-Cycle Selection Method) as a better evolutionary approach to this time-based problem. The simulation results indicate that the algorithm is able to segregate and assign the tasks for each scanning head and to find the shortest scanning path for different types of object coordinates. Moreover, the new genetic operators help the algorithm converge faster and produce better results. The representation approach has been implemented in a computer program to achieve optimized scanning performance, and the algorithm has been tested and deployed successfully on a dual-beam optical scanning system.
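
    As a generic illustration of the path-planning half of the problem (not the thesis's DPPC, DTC, or BCS operators), a minimal GA sketch with order crossover and swap mutation over a list of scan points; the coordinates and GA parameters are made up.

        # Generic GA ordering scan points to minimize total travel distance;
        # a simplified stand-in for the customized operators described above.
        import random, math

        points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

        def path_length(order):
            return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

        def order_crossover(p1, p2):
            # Copy a slice from parent 1, fill the rest in parent-2 order.
            a, b = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[a:b] = p1[a:b]
            fill = [g for g in p2 if g not in child[a:b]]
            for i in list(range(0, a)) + list(range(b, len(child))):
                child[i] = fill.pop(0)
            return child

        def mutate(order, rate=0.1):
            # Swap two positions with a small probability.
            if random.random() < rate:
                i, j = random.sample(range(len(order)), 2)
                order[i], order[j] = order[j], order[i]
            return order

        population = [random.sample(range(len(points)), len(points)) for _ in range(50)]
        for _ in range(200):
            population.sort(key=path_length)
            parents = population[:10]            # simple truncation selection
            population = parents + [mutate(order_crossover(*random.sample(parents, 2)))
                                    for _ in range(40)]

        best = min(population, key=path_length)
        print("Best path length:", round(path_length(best), 1))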

    Self-consistent Hubbard parameters from density-functional perturbation theory in the ultrasoft and projector-augmented wave formulations

    The self-consistent evaluation of Hubbard parameters using linear-response theory is crucial for quantitatively predictive calculations based on Hubbard-corrected density-functional theory. Here, we extend a recently introduced approach based on density-functional perturbation theory (DFPT) for the calculation of the on-site Hubbard U to also compute the inter-site Hubbard V. DFPT significantly reduces the computational cost, improves numerical accuracy, and fully automates the calculation of the Hubbard parameters by recasting the linear response to a localized perturbation into an array of monochromatic perturbations that can be calculated in the primitive cell. In addition, we generalize the entire formalism from norm-conserving to ultrasoft and projector-augmented wave formulations, and to metallic ground states. After benchmarking DFPT against the conventional real-space Hubbard linear response in a supercell, we demonstrate the effectiveness of the present extended Hubbard formulation in determining the equilibrium crystal structure of Li_xMnPO_4 (x = 0, 1) and the subtle energetics of Li intercalation.
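
    For context, a sketch of the standard linear-response relation such calculations rely on (in the notation of the conventional real-space approach; the DFPT reformulation obtains the same response matrices from monochromatic perturbations in the primitive cell, and the notation may differ from the paper's):

        % Response of the Hubbard-manifold occupations n^I to a localized
        % potential shift of strength alpha^J applied on Hubbard site J,
        % self-consistent (chi) and bare, i.e. non-self-consistent (chi^0):
        \chi_{IJ} = \frac{d n^{I}}{d \alpha^{J}} , \qquad
        \chi^{0}_{IJ} = \left. \frac{d n^{I}}{d \alpha^{J}} \right|_{\mathrm{bare}}
        % On-site Hubbard parameter obtained from the two response matrices:
        U^{I} = \left[ \left( \chi^{0} \right)^{-1} - \chi^{-1} \right]_{II}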

    Quantum ESPRESSO: a modular and open-source software project for quantum simulations of materials

    Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). Quantum ESPRESSO stands for "opEn Source Package for Research in Electronic Structure, Simulation, and Optimization". It is freely available to researchers around the world under the terms of the GNU General Public License. Quantum ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively-parallel architectures, and a great effort being devoted to user friendliness. Quantum ESPRESSO is evolving towards a distribution of independent and inter-operable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
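
    A minimal, illustrative sketch of driving a pw.x self-consistent-field calculation from Python; the lattice parameter, cutoff, k-point mesh, and pseudopotential filename (Si.upf) are placeholder assumptions, and pw.x is assumed to be on the PATH.

        # Write a minimal pw.x SCF input for silicon and run it (illustrative only;
        # a production calculation needs converged cutoffs, k-points, and a real
        # pseudopotential file).
        import pathlib
        import subprocess

        pw_input = "\n".join([
            "&CONTROL",
            "  calculation = 'scf', prefix = 'si', pseudo_dir = './', outdir = './tmp'",
            "/",
            "&SYSTEM",
            "  ibrav = 2, celldm(1) = 10.2, nat = 2, ntyp = 1, ecutwfc = 30.0",
            "/",
            "&ELECTRONS",
            "  conv_thr = 1.0d-8",
            "/",
            "ATOMIC_SPECIES",
            "  Si  28.086  Si.upf",
            "ATOMIC_POSITIONS crystal",
            "  Si 0.00 0.00 0.00",
            "  Si 0.25 0.25 0.25",
            "K_POINTS automatic",
            "  4 4 4 0 0 0",
        ]) + "\n"

        pathlib.Path("si.scf.in").write_text(pw_input)
        # Standard invocation: pw.x reads the input from stdin and writes to stdout.
        subprocess.run("pw.x < si.scf.in > si.scf.out", shell=True, check=True)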

    How to verify the precision of density-functional-theory implementations via reproducible and universal workflows

    In the past decades many density-functional theory methods and codes adopting periodic boundary conditions have been developed and are now extensively used in condensed matter physics and materials science research. Only in 2016, however, was their precision (i.e., the extent to which properties computed with different codes agree with each other) systematically assessed on elemental crystals: a first crucial step towards evaluating the reliability of such computations. We discuss here general recommendations for verification studies aiming at further testing the precision and transferability of density-functional-theory computational approaches and codes. We illustrate such recommendations using a greatly expanded protocol covering the whole periodic table from Z=1 to 96 and characterizing 10 prototypical cubic compounds for each element: 4 unaries and 6 oxides, spanning a wide range of coordination numbers and oxidation states. The primary outcome is a reference dataset of 960 equations of state cross-checked between two all-electron codes, then used to verify and improve nine pseudopotential-based approaches. This effort is facilitated by deploying AiiDA common workflows that perform automatic input parameter selection, provide identical input/output interfaces across codes, and ensure full reproducibility. Finally, we discuss the extent to which the current results for total energies can be reused for different goals (e.g., obtaining formation energies).
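
    The per-element equations of state at the heart of such a verification effort are obtained by fitting computed energy-volume points; a self-contained sketch of a third-order Birch-Murnaghan fit on synthetic data (not taken from the reference dataset):

        # Fit a third-order Birch-Murnaghan equation of state to E(V) points.
        # The data below are synthetic, purely to illustrate the fitting step
        # behind equation-of-state comparisons between codes.
        import numpy as np
        from scipy.optimize import curve_fit

        def birch_murnaghan(V, E0, V0, B0, B0p):
            eta = (V0 / V) ** (2.0 / 3.0)
            return E0 + 9.0 * V0 * B0 / 16.0 * (
                (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

        volumes = np.linspace(15.0, 25.0, 9)                     # Angstrom^3 per atom
        energies = birch_murnaghan(volumes, -10.0, 20.0, 0.6, 4.5) \
                   + np.random.normal(0.0, 1e-4, volumes.size)   # synthetic "DFT" data

        popt, _ = curve_fit(birch_murnaghan, volumes, energies,
                            p0=[energies.min(), volumes[np.argmin(energies)], 0.5, 4.0])
        E0, V0, B0, B0p = popt
        # 1 eV/Angstrom^3 = 160.2177 GPa
        print(f"V0 = {V0:.3f} A^3, B0 = {B0 * 160.2177:.1f} GPa, B0' = {B0p:.2f}")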