24 research outputs found

    New Development of Theoretical and Computational Methods for Probing Strong-Field Multiphoton Processes

    Get PDF
    The study of strong-field multiphoton processes is a subject of much current significance in physics and chemistry. Recent progress in laser technology has triggered a burst of attosecond science, in which electron dynamics plays a vital role in the underlying physics. Nonlinear strong-field phenomena such as multiphoton ionization, multiphoton resonance, and high-order harmonic generation lie beyond the perturbative regime and demand novel theoretical approaches for their understanding. This dissertation aims at developing new theoretical and computational methods with innovative spatial and temporal treatments, and at delivering comprehensive studies of strong-field multiphoton processes explored with the proposed methods. The time-dependent Voronoi-cell finite difference method is a new grid-based method for electronic structure and dynamics calculations of polyatomic molecules. The spatial part is treated accurately by the Voronoi-cell finite difference method on multicenter molecular grids, featuring high adaptivity and simplicity. The temporal part is solved by the split-operator time propagation technique, allowing accurate and efficient non-perturbative treatment of electronic dynamics in strong fields. The method is applied to self-interaction-free time-dependent density-functional calculations to probe multiphoton processes of polyatomic molecules in intense ultrashort laser fields with arbitrary field-molecule orientation, highlighting the importance of multielectron effects. The generalized Floquet theory is extended to investigate an atom in intense frequency-comb laser fields and a qubit system driven by intense oscillating fields. For the frequency comb generated by a temporal train of pulses, the many-mode Floquet theory is extended to treat the interaction of an atom with a series of comb frequencies, demonstrating coherent control of simultaneous multiphoton resonance processes. For the strongly driven qubit, the Floquet theory is extended and its analytic solution is derived to explore multiphoton quantum interference in the superconducting flux qubit.
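
    The split-operator propagation named above is a standard building block that can be sketched compactly. The following is a minimal, hypothetical 1-D illustration (Cartesian grid, soft-core potential, made-up laser parameters) of the exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2) stepping idea; the dissertation's actual implementation works on multicenter Voronoi-cell molecular grids with TDDFT, which is not reproduced here.

```python
import numpy as np

# Minimal 1-D split-operator sketch (hypothetical grid, potential, and laser
# parameters; atomic units).  One step applies exp(-iV dt/2) exp(-iT dt)
# exp(-iV dt/2), with the kinetic factor evaluated in momentum space via FFT.
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)       # momentum grid
dt = 0.05                                       # time step

V0 = -1.0 / np.sqrt(x**2 + 2.0)                 # soft-core Coulomb potential
E0, omega = 0.05, 0.057                         # illustrative field amplitude, frequency

psi = np.exp(-x**2 / 2.0).astype(complex)       # placeholder initial state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for n in range(2000):
    V = V0 + E0 * np.sin(omega * n * dt) * x    # length-gauge laser coupling
    psi *= np.exp(-0.5j * dt * V)               # half step in the potential
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # full kinetic step
    psi *= np.exp(-0.5j * dt * V)               # half step in the potential
```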

    Efficient electronic structure calculation for molecular ionization dynamics at high x-ray intensity

    Full text link
    We present the implementation of an electronic-structure approach dedicated to the ionization dynamics of molecules interacting with x-ray free-electron laser (XFEL) pulses. In our scheme, molecular orbitals for molecular core-hole states are represented by a linear combination of numerical atomic orbitals that are solutions of the corresponding atomic core-hole states. We demonstrate that our scheme efficiently calculates all possible multiple-hole configurations of molecules formed during XFEL pulses. The present method is suitable for investigating x-ray multiphoton multiple-ionization dynamics and the accompanying nuclear dynamics, providing essential information on the chemical dynamics relevant for high-intensity x-ray imaging. Comment: 28 pages, 6 figures
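
    As a rough illustration of the LCAO step described above, the sketch below solves the generalized eigenvalue problem HC = SCε for molecular-orbital coefficients in a small basis; the matrices here are synthetic stand-ins, whereas the paper builds them from numerical atomic orbitals of atomic core-hole states.

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic stand-in for the LCAO step: solve H C = S C eps for orbital
# energies eps and coefficient vectors C in a small basis.  In the paper the
# Hamiltonian and overlap matrices are built from numerical atomic orbitals
# of atomic core-hole states; here they are random symmetric matrices.
nbasis = 6
rng = np.random.default_rng(0)
A = rng.standard_normal((nbasis, nbasis))
H = 0.5 * (A + A.T)                     # stand-in one-electron Hamiltonian
S = np.eye(nbasis) + 0.01 * (A @ A.T)   # stand-in overlap matrix (positive definite)

eps, C = eigh(H, S)                     # generalized symmetric eigenproblem
print(eps)                              # "orbital energies" of the toy problem
```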

    Multielectron effects on the orientation dependence and photoelectron angular distribution of multiphoton ionization of CO2 in strong laser fields

    Get PDF
    This is the publisher's version, also available electronically from http://journals.aps.org/pra/abstract/10.1103/PhysRevA.80.011403. We perform an ab initio study of multiphoton ionization (MPI) of carbon dioxide in intense linearly polarized laser pulses with arbitrary molecular orientation by means of time-dependent density-functional theory (TDDFT) with a proper long-range potential. We develop a time-dependent Voronoi-cell finite difference method with highly adaptive molecular grids for accurate solution of the TDDFT equations. Our results demonstrate that the orientation dependence of MPI is determined by multiple orbital contributions and that electron correlation effects are significant. The maximum peak of MPI is predicted to be at 40°, in good agreement with recent experimental data. The photoelectron angular distribution reveals the delicate relation between the orientation dependence and the molecular orbital symmetry.
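
    One common bookkeeping convention in such TDDFT studies (stated here as an assumption, not necessarily the authors' exact definition) is to take the MPI probability from the loss of norm of the propagated orbitals, scanned over the field-molecule orientation angle:

```python
import numpy as np

# Hypothetical convention: each orbital's survival probability is its norm
# remaining on the grid at the end of the pulse, and the total multiphoton
# ionization probability is one minus the product of survivals.
def mpi_probability(final_orbital_norms):
    return 1.0 - np.prod(final_orbital_norms)

# illustrative orientation scan with made-up norms (three orbitals)
angles = range(0, 100, 10)                                  # degrees
norms = {a: np.array([0.999, 0.98, 0.95]) for a in angles}  # placeholder data
scan = {a: mpi_probability(norms[a]) for a in angles}
```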

    Development of an Unstructured 3-D Direct Simulation Monte Carlo/Particle-in-Cell Code and the Simulation of Microthruster Flows

    Get PDF
    This work is part of an effort to develop an unstructured, three-dimensional, direct simulation Monte Carlo/particle-in-cell (DSMC/PIC) code for the simulation of non-ionized, fully ionized, and partially ionized flows in micropropulsion devices. Flows in microthrusters are often in the transitional to rarefied regimes, requiring numerical techniques based on the kinetic description of the gaseous or plasma propellants. The code is implemented on unstructured tetrahedral grids to allow discretization of arbitrary surface geometries and includes an adaptation capability. In this study, an existing 3D DSMC code for rarefied gasdynamics is improved with the addition of the variable hard sphere model for elastic collisions and a vibrational relaxation model based on discrete harmonic oscillators. In addition, the existing unstructured grid generation module of the code is enhanced with grid-quality algorithms. The unstructured DSMC code is validated with simulations of several gaseous micronozzles and comparisons with previous experimental and numerical results. Rothe's 5-mm-diameter micronozzle operating at 80 Pa is simulated, and the results compare favorably with the experiments. The Gravity Probe-B micronozzle is simulated in a domain that includes the injection chamber and plume region. Stagnation conditions include a pressure of 7 Pa and a mass flow rate of 0.012 mg/s. The simulation examines the role of injection conditions in micronozzle simulations, and results are compared with previous Monte Carlo simulations. The code is also applied to the simulation of a parabolic planar micronozzle with a 15.4-micron throat, and results are compared with previous 2D Monte Carlo simulations. Finally, the code is applied to the simulation of a MEMS-fabricated micronozzle with a 34-micron throat. The micronozzle is planar in profile with sidewalls bounding the upper and lower surfaces. The stagnation pressure is set at 3.447 kPa, an order of magnitude lower than that used in previous experiments. The simulation demonstrates the formation of large viscous boundary layers on the sidewalls. A particle-in-cell model for the simulation of electrostatic plasmas is added to the DSMC code. The solution to Poisson's equation on unstructured grids is obtained with a finite volume implementation. The Poisson solver is validated by comparing results with analytic solutions. The integration of the ionized particle equations of motion is performed via the leapfrog method. Particle gather and scatter operations use volume weighting with linear Lagrange polynomials to obtain an acceptable level of accuracy. Several methods are investigated and implemented to calculate the electric field on unstructured meshes. Boundary conditions are discussed and include a formulation of plasma in bounded domains with external circuits. The unstructured PIC code is validated with the simulation of high-voltage sheath formation.
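
    The leapfrog push mentioned above can be sketched in a few lines. The helper E_at below is a hypothetical stand-in for the gather step (field interpolated to the particle position with linear Lagrange weights on the tetrahedral mesh):

```python
import numpy as np

# Leapfrog push for an electrostatic PIC particle: velocities are staggered
# half a step behind positions.  E_at is a hypothetical stand-in for the
# gather step (field interpolated to the particle with linear weights).
def leapfrog_push(x, v, q, m, E_at, dt):
    v_new = v + (q / m) * E_at(x) * dt      # kick: v(t - dt/2) -> v(t + dt/2)
    x_new = x + v_new * dt                  # drift: x(t) -> x(t + dt)
    return x_new, v_new

# toy usage: an electron in a uniform 1 kV/m field
E_uniform = lambda pos: np.array([1.0e3, 0.0, 0.0])
x, v = np.zeros(3), np.zeros(3)
q, m, dt = -1.602e-19, 9.109e-31, 1.0e-12
for _ in range(10):
    x, v = leapfrog_push(x, v, q, m, E_uniform, dt)
```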

    TCP-Carson: A loss-event based Adaptive AIMD algorithm for Long-lived Flows

    Get PDF
    The diversity of network applications over the Internet has propelled researchers to rethink strategies in transport-layer protocols. Current applications either use UDP without end-to-end congestion control mechanisms or, more commonly, use TCP. TCP continuously probes for bandwidth even at network steady state, thereby causing variations in the transmission rate as well as losses. This thesis proposes TCP Carson, a modification of the window-scaling approach of TCP Reno to suit long-lived flows, using loss events as indicators of congestion. We analyzed and evaluated TCP Carson using NS-2 over a wide range of test conditions. We show that TCP Carson reduces loss, improves throughput, and reduces window-size variance. We believe that this adaptive approach will improve both network and application performance.
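
    The abstract does not give TCP Carson's exact increase/decrease rules, so the following is only a generic loss-event-driven AIMD skeleton of the kind the thesis modifies, with illustrative parameter values:

```python
# Generic loss-event-driven AIMD skeleton (illustrative only; TCP Carson's
# actual adaptive increase/decrease rules are defined in the thesis).
class AimdWindow:
    def __init__(self, cwnd=1.0, alpha=1.0, beta=0.5):
        self.cwnd = cwnd        # congestion window, in segments
        self.alpha = alpha      # additive increase per round-trip time
        self.beta = beta        # multiplicative decrease factor

    def on_ack(self):
        # additive increase, spread over the ACKs of one window
        self.cwnd += self.alpha / self.cwnd

    def on_loss_event(self):
        # decrease once per loss event (not once per lost packet)
        self.cwnd = max(1.0, self.cwnd * self.beta)
```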

    Markov models from the square root approximation of the Fokker–Planck equation: calculating the grid-dependent flux

    Get PDF
    Molecular dynamics (MD) are extremely complex, yet understanding the slow components of their dynamics is essential to understanding their macroscopic properties. To achieve this, one models the MD as a stochastic process and analyses the dominant eigenfunctions of the associated Fokker–Planck operator, or of closely related transfer operators. So far, the calculation of the discretized operators has required extensive MD simulations. The square-root approximation of the Fokker–Planck equation is a method to calculate transition rates as a ratio of the Boltzmann densities of neighboring grid cells times a flux, and it can in principle be evaluated without a simulation. In a previous work we still used MD simulations to determine the flux. Here, we propose several methods to calculate the exact or approximate flux for various grid types, and thus estimate the rate matrix without a simulation. Using model potentials, we test the computational efficiency of the methods and the accuracy with which they reproduce the dominant eigenfunctions and eigenvalues. For these model potentials, rate matrices with up to O(10^6) states can be obtained within seconds on a single high-performance compute server if regular grids are used.
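
    For a regular grid, the square-root approximation described above can be written down directly: off-diagonal rates between adjacent cells are a flux factor times the square root of the ratio of Boltzmann densities, and the diagonal closes each row to zero. The sketch below uses a constant placeholder flux, whereas choosing and computing the flux is precisely the subject of the paper:

```python
import numpy as np

# Square-root-approximation rate matrix on a regular 1-D grid.  Off-diagonal
# rates between adjacent cells i, j are flux * sqrt(pi_j / pi_i), with pi the
# Boltzmann weight of a cell; the diagonal makes every row sum to zero.  The
# constant flux used here is a placeholder.
def sqra_rate_matrix(potential, beta=1.0, flux=1.0):
    pi = np.exp(-beta * potential)
    n = len(pi)
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = flux * np.sqrt(pi[i + 1] / pi[i])
        Q[i + 1, i] = flux * np.sqrt(pi[i] / pi[i + 1])
    Q[np.diag_indices(n)] = -Q.sum(axis=1)
    return Q

# double-well model potential on a regular grid
x = np.linspace(-2.0, 2.0, 100)
Q = sqra_rate_matrix((x**2 - 1.0)**2, beta=4.0)
```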

    A Three-dimensional Direct Simulation Monte Carlo Methodology on Unstructured Delaunay Grids with Applications to Micro and Nanoflows

    Get PDF
    The focus of this work is to present in detail the implementation of a three-dimensional direct simulation Monte Carlo methodology on unstructured Delaunay meshes (U-DSMC). The validation and verification of the implementation are shown using a series of fundamental flow cases. The numerical error associated with the implementation is also studied using a fundamental flow configuration. Gas expansion from microtubes is studied using the U-DSMC code for tube diameters ranging from 100 μm down to 100 nm. Simulations are carried out for a range of inlet Knudsen numbers, and the effects of aspect ratio and inlet Reynolds number on the plume structure are investigated. The effect of scaling the geometry is also examined. Gas expansion from a conical nozzle is studied using the U-DSMC code for throat diameters ranging from 250 μm down to 250 nm. Simulations are carried out for a range of inlet Knudsen numbers, and the effects of inlet speed ratio and inlet Reynolds number on the plume structure are investigated. The effect of scaling the geometry is examined. Results of a numerical study using the U-DSMC code are employed to guide the design of a micropitot probe intended for use in analyzing rarefied gaseous microjet flow. The flow conditions considered correspond to anticipated experimental test cases for a probe that is currently under development. The expansion of nitrogen from an orifice with a diameter of 100 μm is modeled using U-DSMC. From these results, local 'free stream' conditions are obtained for use in U-DSMC simulations of the flow in the vicinity of the micropitot probe. Predictions of the pressure within the probe are made for a number of locations in the orifice plume. The predictions from the U-DSMC simulations are used for evaluating the geometrical design of the probe as well as aiding in pressure sensor selection. The effect of scale on the statistical fluctuation of the U-DSMC data is studied using Poiseuille flow. The error in the predicted velocity profile is calculated with respect to both first- and second-order slip formulations. Simulations are carried out for a range of channel heights, and the error between the U-DSMC predictions and theory is calculated for each case. From this error, a functional dependence is shown between the scale-induced statistical fluctuations and the decreasing channel height.
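
    For orientation, the inlet Knudsen number that parameterises these expansion studies is simply a mean free path over a characteristic length; a hard-sphere back-of-the-envelope estimate (not taken from the thesis) looks like:

```python
import numpy as np

# Hard-sphere estimate (not from the thesis): mean free path and Knudsen
# number for a gas of number density n, molecular diameter d, and a flow
# with characteristic length L_char (e.g. tube or throat diameter).
def knudsen_number(n, d, L_char):
    mean_free_path = 1.0 / (np.sqrt(2.0) * np.pi * d**2 * n)
    return mean_free_path / L_char

# nitrogen-like gas at roughly atmospheric density in a 100-micron tube
print(knudsen_number(n=2.5e25, d=3.7e-10, L_char=100e-6))
```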

    Hybrid Simulation Methods for Systems in Condensed Phase

    Get PDF

    Bayesian Methods for Gas-Phase Tomography

    Get PDF
    Gas-phase tomography refers to a set of techniques that determine the 2D or 3D distribution of a target species in a jet, plume, or flame using measurements of light made around the boundary of a flow area. Reconstructed quantities may include the concentration of one or more species, temperature, pressure, and optical density, among others. Tomography is increasingly used to study fundamental aspects of turbulent combustion and to monitor emissions for regulatory compliance. This thesis develops statistical methods to improve gas-phase tomography and reports two novel experimental applications. Tomography is an inverse problem, meaning that a forward model (calculating measurements of light for a known distribution of gas) is inverted to estimate the model parameters (transforming experimental data into a gas distribution). The measurement modality varies with the problem geometry and the objective of the experiment. For instance, transmittance data from an array of laser beams that transect a jet may be inverted to recover 2D fields of concentration and temperature; and multiple high-resolution images of a flame, captured from different angles, are used to reconstruct wrinkling of the 3D reacting zone. Forward models for gas-phase tomography modalities share a common mathematical form, that of a Fredholm integral equation of the first kind (IFK). The inversion of coupled IFKs is necessarily ill-posed, however, meaning that solutions are either unstable or non-unique. Measurements are thus insufficient in themselves to generate a realistic image of the gas, and additional information must be incorporated into the reconstruction procedure. Statistical inversion is an approach to inverse problems in which the measurements, experimental parameters, and quantities of interest are treated as random variables, characterized by a probability distribution. These distributions reflect uncertainty about the target due to fluctuations in the flow field, noise in the data, errors in the forward model, and the ill-posed nature of reconstruction. The Bayesian framework for tomography features a likelihood probability density function (pdf), which describes the chance of observing a measurement for a given distribution of gas, and a prior pdf, which assigns a relative plausibility to candidate distributions based on assumptions about the flow physics. Bayes’ equation updates information about the target in response to measurement data, combining the likelihood and prior functions to form a posterior pdf. The posterior is usually summarized by the maximum a posteriori (MAP) estimate, which is the most likely distribution of gas for a set of data, subject to the effects of noise, model errors, and prior information. The framework can be used to estimate credibility intervals for a reconstruction, and the form of Bayes’ equation suggests procedures for improving gas tomography. The accuracy of reconstructions depends on the information content of the data, which is a function of the experimental design, as well as on the specificity and validity of the prior. This thesis employs theoretical arguments and experimental measurements of scalar fluctuations to justify joint-normal likelihood and prior pdfs for gas-phase tomography. Three methods are introduced to improve each stage of the inverse problem: to develop priors, design optimal experiments, and select a discretization scheme.
First, a self-similarity analysis of turbulent jets—common targets in gas tomography—is used to construct an advanced prior, informed by an estimate of the jet’s spatial covariance. Next, a Bayesian objective function is proposed to optimize beam positions in limited-data arrays, which are necessary in scenarios where optical access to the flow area is restricted. Finally, a Bayesian expression for model selection is derived from the joint-normal pdfs and employed to select a mathematical basis to reconstruct a flow. Extensive numerical evidence is presented to validate these methods. The dissertation continues with two novel experiments, conducted in a Bayesian way. Broadband absorption tomography is a new technique intended for quantitative emissions detection from spectrally convolved absorption signals. Theoretical foundations for the diagnostic are developed and the results of a proof-of-concept emissions detection experiment are reported. Lastly, background-oriented schlieren (BOS) tomography is applied to combustion for the first time. BOS tomography employs measurements of beam steering to reconstruct a fluid’s optical density field, which can be used to infer temperature and density. The application of BOS tomography to flame imaging sets the stage for instantaneous 3D combustion thermometry. Numerical and experimental results reported in this thesis support a Bayesian approach to gas-phase tomography. Bayesian tomography makes the role of prior information explicit, which can be leveraged to optimize reconstructions and to design better imaging systems in support of research on fluid flow and combustion dynamics.
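
    Under the joint-normal likelihood and prior pdfs argued for above, the MAP estimate has the familiar linear-Gaussian closed form. The sketch below shows that generic algebra on a toy limited-data problem; the thesis' forward models, covariance structures, and self-similarity priors are not reproduced:

```python
import numpy as np

# Linear-Gaussian MAP estimate: with measurement model b = A x + noise,
# noise covariance Ge, and a normal prior N(mu_pr, Gpr) on x, the posterior
# mode solves (A^T Ge^-1 A + Gpr^-1) x = A^T Ge^-1 b + Gpr^-1 mu_pr.
def map_estimate(A, b, Ge, mu_pr, Gpr):
    Ge_inv = np.linalg.inv(Ge)
    Gpr_inv = np.linalg.inv(Gpr)
    H = A.T @ Ge_inv @ A + Gpr_inv                 # posterior precision
    rhs = A.T @ Ge_inv @ b + Gpr_inv @ mu_pr
    return np.linalg.solve(H, rhs)

# toy limited-data problem: 5 "beam" measurements, 20 unknown basis weights
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(5)
x_map = map_estimate(A, b, Ge=1e-4 * np.eye(5),
                     mu_pr=np.zeros(20), Gpr=np.eye(20))
```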

    TOPOLOGICAL DESCRIPTORS ENABLING NOVEL DISSECTIONS OF ELECTRON POSITION AND SPIN PROPERTIES IN COMPLEX MOLECULAR SYSTEMS

    Get PDF
    Macroscopic and microscopic properties of molecular and solid-state systems are intimately related to their electronic structure. The electron position and spin densities, which represent the probability distributions of finding all or unpaired electrons in space, contain information on several chemically relevant properties, such as chemical bonding and magnetic behaviour. Understanding the fine atomic-level mechanisms behind these properties is a key step in designing chemical modifications to properly tune and develop materials or molecules with specific features. Topological descriptors can be used to extract information from these electron distributions. In this work, novel applications of the source function descriptor have been developed to gain further insight into electron and spin density-related properties. These developments, together with other topological descriptors, were used to gain further insight into relevant chemical systems. Firstly, the source function reconstruction was extended to a multi-dimensional grid of points, with a particular focus on two-dimensional maps. This analysis makes it possible to see the ability of chosen subsets of atoms to reconstruct the density in a selected area within a cause-effect relationship and to rationalise chemical or magnetic behaviours. The partially reconstructed source function maps show whether, in a molecular region, the atomic contributions are important, modest, or negligible. They may also be useful for a proper selection of the reference points and for a full understanding of the source function percentage analysis. In fact, the choice of the reference point at which to reconstruct the studied density is neither easy nor objective in non-standard situations, such as for the spin density. This novel approach was applied to the study of the spin density in a pair of azido Cu complexes. The partially reconstructed source function maps make it possible to unravel the different roles played by the paramagnetic Cu centre and the ligand atoms and to explain the spin transmission mechanism at the molecular level. Moreover, they highlight the nature of the spin density differences between the two complexes and among the adopted computational approaches. DFT functionals tend to over-delocalise the spin density towards the ligand atoms, introducing a biased spin-polarization mechanism between the Cu and the ligand atoms. The same descriptor was then applied to the study of the hydrogen bonds in the DNA base pairs. The source function reveals the delocalised nature of these interactions, highlighting that distant groups and rings have non-negligible effects on the reconstruction of the electron density in the intermolecular region. The analysis also demonstrates that the purine and pyrimidine bases contribute equally to the reconstruction of the electron density at the hydrogen-bond critical points. The source function further reveals that subtle variations of the atomic source contributions occur when the pairs are ionized, showing that the redistribution of source and sink effects plays an important role in the stabilization of the DNA base pairs. The source function was also used to develop a method to extract full population matrices that is purely based on the electron density distribution and therefore amenable to experimental determination.
The peculiar features of this descriptor, in particular its cause-effect relationship, give a profound chemical meaning to the matrix elements, in contrast with other population analyses, such as Mulliken's, where the matrix elements are associated with orbital overlaps. The latest advances in the development of this method are shown together with some numerical examples on very simple compounds. The full population matrices obtained using the source function descriptor are able to retrieve the major chemical features. A detailed analysis of the intermolecular interactions involved in the in vivo molecular recognition between the antimalarial drug chloroquine and the heme moiety has been carried out using a combined topological-energetic analysis. This work reveals that charge-assisted hydrogen bonds formed between the lateral chains of chloroquine and the propionate group of the heme are the most important interactions in the drug:substrate recognition process.
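
    For reference, the source function analyses described above build on the standard decomposition of the electron density into atomic-basin sources, shown here in its textbook form; the thesis' extensions to grids of reference points, spin densities, and population matrices follow from it:

```latex
% Textbook form of the source function decomposition: the density at a
% reference point r is a sum of atomic-basin contributions S(r, Omega).
\[
  \rho(\mathbf r) = \sum_{\Omega} S(\mathbf r,\Omega),
  \qquad
  S(\mathbf r,\Omega) = -\frac{1}{4\pi}\int_{\Omega}
    \frac{\nabla^{2}\rho(\mathbf r')}{\lvert \mathbf r-\mathbf r'\rvert}\,
    \mathrm d\mathbf r'.
\]
```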