31 research outputs found

    Determining whether metals nucleate homogeneously on graphite: A case study with copper

    We observe that Cu clusters grow on surface terraces of graphite as a result of physical vapor deposition in ultrahigh vacuum. We show that this observation is incompatible with a variety of models incorporating homogeneous nucleation and with calculations of atomic-scale energetics. An alternative explanation, ion-mediated heterogeneous nucleation, is proposed and validated with both theory and experiment. This serves as a case study in identifying when and whether the simple, common observation of metal clusters on carbon-rich surfaces can be interpreted in terms of homogeneous nucleation. We describe a general approach for making system-specific and laboratory-specific predictions.

    Accurate implementation of leaping in space: The spatial partitioned-leaping algorithm

    There is a great need for accurate and efficient computational approaches that can account for both the discrete and stochastic nature of chemical interactions as well as spatial inhomogeneities and diffusion. This is particularly true in biology and nanoscale materials science, where the common assumptions of deterministic dynamics and well-mixed reaction volumes often break down. In this article, we present a spatial version of the partitioned-leaping algorithm (PLA), a multiscale accelerated-stochastic simulation approach built upon the tau-leaping framework of Gillespie. We pay special attention to the details of the implementation, particularly as they pertain to the time step calculation procedure. We point out conceptual errors that have been made in this regard in prior implementations of spatial tau-leaping and illustrate the manifestation of these errors through practical examples. Finally, we discuss the fundamental difficulties associated with incorporating efficient exact-stochastic techniques, such as the next-subvolume method, into a spatial-leaping framework and suggest possible solutions.
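As a minimal illustration of the tau-leaping framework this algorithm builds on, the sketch below fires each reaction channel a Poisson-distributed number of times per fixed leap. It is non-spatial, uses a toy A → B channel with a made-up rate constant, and omits the partitioning and careful time-step selection that the article itself addresses:

```python
import numpy as np

def tau_leap(x, rates, stoich, tau, steps, rng=None):
    """Basic (non-spatial) tau-leaping: fire each reaction channel a
    Poisson number of times per leap, with propensities held fixed
    over the leap interval.  `rates(x)` returns the propensity of each
    channel at state `x`; row `stoich[j]` is the net state change per
    firing of channel j."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x, dtype=float)
    for _ in range(steps):
        a = rates(x)                # propensities a_j(x)
        k = rng.poisson(a * tau)    # firings per channel in [t, t+tau)
        x = x + stoich.T @ k        # apply net state change
        x = np.maximum(x, 0.0)      # crude guard against negative counts
    return x

# Toy example: A -> B with propensity 0.5 * A (illustrative constant).
stoich = np.array([[-1, 1]])        # one channel: A loses 1, B gains 1
rates = lambda x: np.array([0.5 * x[0]])
x_final = tau_leap([1000, 0], rates, stoich, tau=0.1, steps=50)
```

Note the negativity guard: leaping can overshoot a species' population within a leap, which is one of the well-known pitfalls that more careful step-size procedures are designed to avoid.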

    Spatial multi-level interacting particle simulations and information theory-based error quantification

    We propose a hierarchy of multi-level kinetic Monte Carlo methods for sampling high-dimensional, stochastic lattice particle dynamics with complex interactions. The method is based on the efficient coupling of different spatial resolution levels, taking advantage of the low sampling cost of a coarse space and developing local reconstruction strategies from the coarse-grained dynamics. Microscopic reconstruction corrects possibly significant errors introduced through coarse-graining, leading to a controlled-error approximation of the sampled stochastic process. In this manner, the proposed multi-level algorithm overcomes known shortcomings of coarse-graining particle systems with complex interactions, such as combined long- and short-range particle interactions and/or complex lattice geometries. Specifically, we provide error analysis for the approximation of long-time stationary dynamics in terms of relative entropy and prove that the information loss in the multi-level methods grows linearly in time, which in turn implies that an appropriate observable in the stationary regime is the information loss of the path measures per unit time. We show that this observable can either be estimated a priori or be tracked computationally a posteriori in the course of a simulation. The stationary regime is of critical importance to molecular simulations, as it is relevant to long-time sampling, to obtaining phase diagrams, and to studying the metastability properties of high-dimensional complex systems. Finally, the multi-level nature of the method provides flexibility in combining rejection-free and null-event implementations, generating a hierarchy of algorithms with an adjustable number of rejections that includes well-known rejection-free and null-event algorithms.

    Multiscale Model of CVD Growth of Graphene on Cu(111) Surface

    Due to its outstanding properties, graphene has emerged as one of the most promising 2D materials across a large variety of research fields. Among the available fabrication protocols, chemical vapor deposition (CVD) enables the production of high-quality, single-layered, large-area graphene. To better understand the kinetics of CVD graphene growth, multiscale modeling approaches are sought after. Although a variety of models have been developed to study the growth mechanism, prior studies are either limited to very small systems, forced to simplify the model to eliminate fast processes, or forced to simplify the reactions. While it is possible to rationalize these approximations, they have non-trivial consequences for the overall growth of graphene. Therefore, a comprehensive understanding of the kinetics of graphene growth in CVD remains a challenge. Here, we introduce a kinetic Monte Carlo protocol that permits, for the first time, the representation of the relevant reactions on the atomic scale, without additional approximations, while still reaching very long time and length scales in the simulation of graphene growth. The quantum-mechanics-based multiscale model, which links kinetic Monte Carlo growth processes with the rates of the underlying chemical reactions calculated from first principles, makes it possible to investigate the contributions of the most important species in graphene growth. It permits a proper investigation of the role of carbon and its dimer in the growth process, indicating the carbon dimer to be the dominant species. The consideration of hydrogenation and dehydrogenation reactions enables us to correlate the quality of the grown material with the CVD control parameters and to demonstrate the important role of these reactions in the quality of the grown graphene in terms of its surface roughness, hydrogenation sites, and vacancy defects. The model is capable of providing additional insights for controlling the graphene growth mechanism on Cu(111), which may guide further experimental and theoretical developments.
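A minimal sketch of how first-principles barriers typically feed a kinetic Monte Carlo event table is via Arrhenius rates. The event names, prefactors, barriers, and temperature below are hypothetical illustrations, not values from the paper:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius(prefactor_hz, barrier_ev, temperature_k):
    """Arrhenius rate k = nu * exp(-Ea / (kB*T)).  In a multiscale
    scheme of this kind, the barriers Ea would come from
    first-principles (e.g. DFT) calculations."""
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV * temperature_k))

# Hypothetical event table at a typical CVD temperature (illustrative
# barriers only; a lower barrier yields an exponentially faster event).
T = 1300.0
events = {
    "C2_attach": arrhenius(1e13, 0.6, T),
    "C_attach":  arrhenius(1e13, 0.8, T),
    "H_desorb":  arrhenius(1e13, 1.1, T),
}
```

The exponential dependence on the barrier is what makes a species with even a modestly lower attachment barrier (here the dimer, by assumption) dominate the event statistics.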

    Multiscale Kinetic Monte Carlo Simulation of Self-Organized Growth of GaN/AlN Quantum Dots

    A three-dimensional kinetic Monte Carlo methodology is developed to study the strained epitaxial growth of wurtzite GaN/AlN quantum dots. It describes the kinetics of effective GaN adatoms on a hexagonal lattice. The elastic strain energy is evaluated by a purposely devised procedure: first, we take advantage of the fact that the deformation in a lattice-mismatched heterostructure is equivalent to that obtained by assuming that one of the regions of the system is subjected to a properly chosen uniform stress (the Eshelby inclusion concept), and then the strain is obtained by applying the Green's function method. The standard Monte Carlo method has been modified to implement a multiscale algorithm that allows isolated adatoms to perform long diffusion jumps. With these state-of-the-art modifications, it is possible to perform simulations efficiently over large areas and long elapsed times. We have tailored the model to the conditions of molecular beam epitaxy under N-rich conditions. The corresponding simulations reproduce the different stages of the Stranski–Krastanov transition, showing quantitative agreement with the experimental findings concerning the critical deposition, island size, and island density. The influence of growth parameters, such as the relative fluxes of Ga and N and the substrate temperature, is also studied and found to be consistent with the experimental observations. In addition, the growth of stacked layers of quantum dots is simulated, and the conditions for their vertical alignment and homogenization are illustrated. In summary, the developed methodology allows one to reproduce the main features of self-organized quantum dot growth and to understand the microscopic mechanisms at play.
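Two ingredients of the abstract can be sketched in a few lines: the standard rejection-free KMC step with its exponentially distributed clock advance, and the multiscale rule of granting isolated adatoms long diffusion jumps. The jump-length cap below is a made-up knob, not a parameter from the paper:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: select an event with probability
    proportional to its rate and advance the clock by an exponentially
    distributed waiting time dt = -ln(u)/R, where R is the total rate."""
    R = sum(rates)
    r = rng.random() * R
    acc, chosen = 0.0, len(rates) - 1
    for i, w in enumerate(rates):
        acc += w
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / R   # 1-u avoids log(0)
    return chosen, dt

def diffusion_jump_length(neighbors_occupied, max_jump=5):
    """Multiscale rule sketched from the abstract: an isolated adatom
    (no occupied neighbors) may take a long jump of several lattice
    sites at once, while an adatom near other atoms hops one site at
    a time.  `max_jump` is an illustrative cap."""
    return 1 if neighbors_occupied > 0 else max_jump
```

Letting lone adatoms skip over empty terrain is what makes simulations over large areas and long elapsed times tractable: most KMC events in a dilute region are otherwise spent on uneventful single-site hops.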

    An Examination of Kinetic Monte Carlo Methods with Application to a Model of Epitaxial Growth

    Through the assembly of procedural information about physical processes, the kinetic Monte Carlo method offers a simple and efficient stochastic approach to model the temporal evolution of a system. While suitable for a variety of systems, the approach has found widespread use in the simulation of epitaxial growth. Motivated by chemically reacting systems, we discuss the developments and elaborations of the kinetic Monte Carlo method, highlighting the computational cost associated with realizing a given algorithm. We then formulate a solid-on-solid bond-counting model of epitaxial growth which permits surface atoms to advance the state of the system through three events: hopping, evaporation, and condensation. Finally, we apply the kinetic Monte Carlo method to describe the evolution of a crystalline structure and to examine how temperature influences the mobility of surface atoms.
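A bond-counting rate rule of the kind described can be sketched as follows: the barrier for a surface atom to hop grows with its number of in-plane neighbors, so weakly bound atoms move fastest, and raising the temperature increases mobility across the board. All parameter values are illustrative, not taken from the thesis:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def hop_rate(n_bonds, nu=1e13, e_surf=0.3, e_bond=0.3, temperature_k=600.0):
    """Solid-on-solid bond-counting rule (illustrative parameters):
    barrier = substrate term + one bond energy per lateral neighbor,
    so an atom with more bonds hops exponentially less often."""
    barrier_ev = e_surf + n_bonds * e_bond
    return nu * math.exp(-barrier_ev / (KB_EV * temperature_k))
```

Evaporation and condensation events would get analogous Arrhenius expressions, and the temperature dependence the thesis examines enters through the `temperature_k` argument.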

    Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run-time costs increase with the number of particles, limiting the size of the systems that can feasibly be simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and the resulting hybrid models can be simulated using the particle-based simulator NFsim. 
    Performance tests show that significant memory savings can be achieved using the new approach, and a monetary cost analysis provides a practical measure of its utility. © 2014 Hogg et al.
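For contrast with the network-free approach, the network-based side can be sketched with Gillespie's direct method on a fully enumerated network. The toy A + B → C example and its rate constant are made up for illustration; a real rule-based model would first have to be expanded into such an explicit species/reaction list, which is exactly the combinatorial bottleneck the abstract describes:

```python
import math
import random

def gillespie(x, rates, stoich, t_end, rng):
    """Gillespie's direct method on an enumerated reaction network:
    exact stochastic trajectories, but every species and reaction must
    be listed up front.  `rates[j](x)` is the propensity of reaction j
    at state `x`; `stoich[j]` is its net change per species."""
    t = 0.0
    x = list(x)
    while True:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0.0:
            return x                          # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0
        if t > t_end:
            return x
        u, acc, j = rng.random() * a0, 0.0, len(a) - 1
        for i, ai in enumerate(a):            # pick reaction j ~ a_j/a0
            acc += ai
            if u < acc:
                j = i
                break
        for s, d in enumerate(stoich[j]):     # apply stoichiometry
            x[s] += d

# Toy network: A + B -> C with mass-action propensity 0.01 * A * B.
rng = random.Random(1)
out = gillespie([100, 100, 0],
                [lambda x: 0.01 * x[0] * x[1]],
                [[-1, -1, 1]],
                1e9, rng)
```

Run to exhaustion, this consumes all 100 A–B pairs; the hybrid method in the paper would instead keep abundant species like these as population counts while tracking only combinatorially complex species as particles.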