12 research outputs found
A quantum crystallographic approach to study properties of molecules in crystals
In this dissertation, the behaviour of atoms, bonds, functional groups and molecules is studied in vacuo, and especially in the crystal, using quantum crystallographic methods. The goal is to deepen the understanding of the properties of these building blocks and of the interactions among them, because a good comprehension of the microscopic units and their interplay also enables us to explain the macroscopic properties of crystals.
The first part (chapters 1-3) and the second part (chapter 4) of this dissertation contain theoretical introductions to quantum crystallography. On the one hand, this expression contains the term quantum, referring to quantum chemistry, so the very first chapter gives a brief overview of this field. The second chapter addresses different options for partitioning quantum chemical entities, such as the electron density or the bonding energy, into their components. On the other hand, quantum crystallography obviously comprises a crystallographic part, and chapter 3 covers these aspects, focusing predominantly on X-ray diffraction.
A more detailed introduction to quantum crystallography itself is presented in the second part
(chapter 4).
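One partitioning scheme of the kind chapter 2 surveys is Hirshfeld's "stockholder" partitioning of the electron density, in which each atom receives the share of the total density given by its free-atom density relative to the promolecular (summed free-atom) density. The sketch below is purely illustrative and not taken from the dissertation: spherical Gaussian toy densities stand in for real free-atom densities, the grid is a one-dimensional cut, and every name and parameter is invented for the example.

```python
import numpy as np

def gaussian_density(grid, center, n_electrons, width):
    """Toy spherical 'free-atom' density: a normalized Gaussian times the electron count."""
    r2 = np.sum((grid - center) ** 2, axis=-1)
    norm = (1.0 / (width * np.sqrt(2 * np.pi))) ** 3
    return n_electrons * norm * np.exp(-r2 / (2 * width ** 2))

# Line of grid points through a toy diatomic "molecule"
grid = np.stack([np.linspace(-4, 4, 801), np.zeros(801), np.zeros(801)], axis=-1)

atoms = [
    {"center": np.array([-0.7, 0.0, 0.0]), "n": 1.0, "w": 0.5},  # hypothetical atom A
    {"center": np.array([+0.7, 0.0, 0.0]), "n": 1.0, "w": 0.5},  # hypothetical atom B
]

# Promolecular density: sum of the free-atom densities
rho_free = [gaussian_density(grid, a["center"], a["n"], a["w"]) for a in atoms]
rho_pro = sum(rho_free)

# Pretend the "molecular" density equals the promolecule here; in a real
# calculation rho_mol would come from a quantum chemical computation.
rho_mol = rho_pro

# Hirshfeld weight of atom A: w_A(r) = rho_A_free(r) / rho_pro(r)
eps = 1e-30                                   # guards against division by zero
weights = [rf / (rho_pro + eps) for rf in rho_free]

# The weights partition space exactly: sum_A w_A(r) = 1 wherever density exists
assert np.allclose(sum(weights), 1.0)

# Atomic populations: integrate w_A(r) * rho_mol(r) along the line (toy quadrature)
dx = 8.0 / 800
populations = [np.sum(w * rho_mol) * dx for w in weights]
print(populations)  # by symmetry, the two toy atoms share the density equally
```

The same stockholder idea carries over to partitioning other quantities; only the weight function changes between schemes.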
The third part (chapters 5-9) starts with an overview of the goals of this work, followed by the results organized in four chapters.
The goal is to deepen the understanding of the properties of crystals by theoretically analysing their building blocks. It is, for example, studied how electrons and orbitals rearrange due to the electric field in a crystal, or how high pressure leads to the formation of new bonds. Ultimately, these findings shall help to rationally design materials with desired properties such as a high refractive index or semiconductivity.
HPCCP/CAS Workshop Proceedings 1998
This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.
Development and parallel implementation of selected configuration interaction methods
This thesis, whose topic is quantum chemistry algorithms, is set in the context of the paradigm shift observed over the last dozen years, in which sequential computational methods are progressively being replaced by parallel ones. Indeed, since the increase in processor frequency has run into physical barriers that are hard to overcome, growth in
computational power is achieved through increasing the number of cores. However, where an increase in frequency mechanically led to faster execution of a code, an increase in the number of cores may be challenged by algorithmic barriers, which may require adapting or even changing the algorithm. Among the methods developed to circumvent this issue are in particular Monte Carlo (stochastic) methods, which are intrinsically "embarrassingly parallel": they are by design composed of a large number of independent tasks, and thus particularly well adapted to massively parallel architectures. In addition, they are often able to yield an approximate result for just a fraction of the cost of the equivalent exact deterministic computation. During this thesis, massively parallel implementations of several deterministic quantum chemistry algorithms were realized: CIPSI, Davidson diagonalization, computation of the second-order perturbative correction, shifted-Bk, and Multi-Reference Coupled Cluster. For some of these, a stochastic component was introduced in order to improve their efficiency. All of them were implemented on a distributed task model over TCP, in which a central process distributes tasks over the network and collects the results. In other words, slave nodes can be added during the computation from any machine reachable through the Internet. The parallel efficiency of the implemented algorithms has been studied, and the program has enabled numerous applications, in particular obtaining reference energies for difficult molecular systems.
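The distributed task model described above, a central process handing out independent tasks and collecting results as workers finish them, can be sketched in a few lines. This is only a schematic illustration of the pattern, not the thesis's actual implementation: it uses Python threads and in-memory queues, whereas the real scheme communicates over TCP so that slave nodes on other machines can join mid-calculation, and run_master and the squaring task are invented for the example.

```python
import queue
import threading

def worker(task_queue, result_queue):
    """Worker loop: pull tasks until a sentinel arrives, push results back.
    In the thesis's scheme the two queues are replaced by a TCP connection,
    so workers can join from any reachable machine, even mid-calculation."""
    while True:
        task = task_queue.get()
        if task is None:                    # sentinel: no more work
            break
        task_id, x = task
        result_queue.put((task_id, x * x))  # toy task: square a number

def run_master(n_tasks=100, n_workers=4):
    task_queue, result_queue = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(task_queue, result_queue))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    # The master distributes independent tasks ("embarrassingly parallel")...
    for i in range(n_tasks):
        task_queue.put((i, i))
    for _ in workers:
        task_queue.put(None)                # one sentinel per worker
    # ...and collects results in whatever order workers finish them.
    results = dict(result_queue.get() for _ in range(n_tasks))
    for w in workers:
        w.join()
    return [results[i] for i in range(n_tasks)]

print(run_master()[:5])  # → [0, 1, 4, 9, 16]
```

Because every task is independent, more workers can be attached to the task queue at any time without changing the master's logic, which is what makes the scheme elastic in practice.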
Introduction to string field theory
The 1988 book, now free, with corrections and bookmarks (for PDF). Comment: 247 pages
Statistical analysis for longitudinal MR imaging of dementia
Serial Magnetic Resonance (MR) Imaging can reveal structural atrophy in the brains of
subjects with neurodegenerative diseases such as Alzheimer’s Disease (AD). Methods of
computational neuroanatomy allow the detection of statistically significant patterns of
brain change over time and/or over multiple subjects. The focus of this thesis is the
development and application of statistical and supporting methodology for the analysis
of three-dimensional brain imaging data. There is a particular emphasis on longitudinal
data, though much of the statistical methodology is more general.
New methods of voxel-based morphometry (VBM) are developed for serial MR data,
employing combinations of tissue segmentation and longitudinal non-rigid registration.
The methods are evaluated using novel quantitative metrics based on simulated data.
Contributions to general aspects of VBM are also made, and include a publication concerning
guidelines for reporting VBM studies, and another examining an issue in the
selection of which voxels to include in the statistical analysis mask for VBM of atrophic
conditions.
Research is carried out into the statistical theory of permutation testing for application
to multivariate general linear models, and is then used to build software for the analysis
of multivariate deformation- and tensor-based morphometry data, efficiently correcting
for the multiple comparison problem inherent in voxel-wise analysis of images. Monte
Carlo simulation studies extend results available in the literature regarding the different
strategies available for permutation testing in the presence of confounds.
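The max-statistic flavour of permutation testing described above can be illustrated on toy data. The following is a generic sketch of voxel-wise permutation testing with family-wise error control via the permutation distribution of the maximum statistic, in the spirit of the methods discussed rather than the thesis's own software; the Welch t statistic, the data sizes, and the planted effect are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def voxelwise_t(data, labels):
    """Welch two-sample t statistic at every voxel (a simple GLM contrast)."""
    a, b = data[labels == 0], data[labels == 1]
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

# Toy data: 20 subjects x 500 "voxels"; the second group has a real effect at voxel 0
labels = np.array([0] * 10 + [1] * 10)
data = rng.normal(size=(20, 500))
data[labels == 1, 0] += 3.0

t_obs = np.abs(voxelwise_t(data, labels))

# Permutation null of the MAXIMUM statistic across voxels: comparing every
# voxel against this single null distribution controls the family-wise
# error rate over all voxel-wise comparisons.
n_perm = 1000
max_null = np.array([np.abs(voxelwise_t(data, rng.permutation(labels))).max()
                     for _ in range(n_perm)])

# Corrected p-value per voxel (the +1 terms keep the test valid at finite n_perm)
p_corr = (1 + (max_null[None, :] >= t_obs[:, None]).sum(axis=1)) / (1 + n_perm)
print(p_corr[0] < 0.05)  # the planted effect survives max-statistic correction
```

With confounds present, the choice of what exactly to permute (raw data, residuals, or otherwise) is the subtlety the Monte Carlo studies mentioned above address; this sketch covers only the simple unconfounded case.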
Theoretical aspects of longitudinal deformation- and tensor-based morphometry are
explored, such as the options for combining within- and between-subject deformation
fields. Practical investigation of several different methods and variants is performed for a
longitudinal AD study
Crystal structure prediction at high pressures: stability, superconductivity and superionicity
The physical and chemical properties of materials are intimately related to their underlying crystal structure: the detailed arrangement of atoms and chemical bonds within. This thesis uses computational methods to predict crystal structure, with a particular focus on structures and stable phases that emerge at high pressure. We explore three distinct systems.
We first apply the ab initio random structure searching (AIRSS) technique and density functional theory (DFT) calculations to investigate the high-pressure behaviour of beryllium, magnesium and calcium difluorides. We find that beryllium fluoride is extensively polymorphic at low pressures, and predict two new phases for this compound, with the silica moganite and CaCl2 structures, to be stable over the wide pressure range 12-57 GPa. For magnesium fluoride, our results show that the orthorhombic 'O-I' TiO2 structure is stable for this compound between 40 and 44 GPa. Our searches find no new phases at the static-lattice level for calcium difluoride between 0 and 70 GPa; however, one further phase is energetically close to stability over this pressure range, and our calculations predict that this phase is stabilised at high temperature. This structure exhibits an unstable phonon mode at large volumes which may signal a transition to a superionic state at high temperatures. The Group-II difluorides are isoelectronic to a number of other AB2-type compounds such as SiO2 and TiO2, and we discuss our results in light of these similarities.
Compressed hydrogen sulfide (H2S) has recently attracted experimental and theoretical interest due to the observation of high-temperature superconductivity in this compound (Tc = 203 K) at high pressure (155 GPa). We use the AIRSS technique and DFT calculations to determine the stable phases and chemical stoichiometries formed in the hydrogen-sulfur system as a function of pressure. We find that this system supports numerous stable compounds, with a range of hydrogen-sulfur stoichiometries, at various pressures. Working as part of a collaboration, two of our predicted structures are shown to be consistent with XRD data for this system, one of them identified as a major decomposition product of H2S in the lead-up to the superconducting state.
Calcium and oxygen are two elements of generally high terrestrial and cosmic abundance, and we explore structures of calcium peroxide (CaO2) in the pressure range 0-200 GPa. Stable structures for CaO2 emerge at pressures below 40 GPa, which we find are thermodynamically stable against decomposition into CaO and O2. The stability of CaO2 with respect to decomposition increases with pressure, with peak stability occurring at the CaO B1-B2 phase transition at 65 GPa. Phonon calculations using the quasiharmonic approximation show that CaO2 is a stable oxide of calcium at mantle temperatures and pressures, highlighting a possible role for CaO2 in planetary geochemistry as a mineral redox buffer. We sketch the phase diagram for CaO2, and find at least five new stable phases in the pressure/temperature ranges 0-60 GPa and 0-600 K, including two new candidates for the zero-pressure ground-state structure.
Cambridge Commonwealth Trust
Engineering and Physical Sciences Research Council (EPSRC)
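The static-lattice stability comparisons underlying searches like those above reduce, at zero temperature, to comparing enthalpies H(p) = min_V [E(V) + pV] between candidate phases and locating the pressure at which the curves cross. The sketch below is a toy illustration of that criterion only: the quadratic E(V) curves stand in for DFT total energies and are invented for the example, not taken from the thesis.

```python
import numpy as np

# Toy energy-volume curves for two hypothetical phases (arbitrary units).
# In a real AIRSS/DFT study, E(V) comes from first-principles total energies.
def e_phase1(v):
    return 0.5 * (v - 10.0) ** 2          # open phase: larger equilibrium volume

def e_phase2(v):
    return 0.8 * (v - 8.0) ** 2 + 1.5     # denser phase, higher energy at p = 0

def enthalpy(e_of_v, p, volumes):
    """Static-lattice enthalpy H(p) = min over V of [E(V) + p V]."""
    return np.min(e_of_v(volumes) + p * volumes)

volumes = np.linspace(5.0, 13.0, 2001)
pressures = np.linspace(0.0, 5.0, 501)
dH = np.array([enthalpy(e_phase2, p, volumes) - enthalpy(e_phase1, p, volumes)
               for p in pressures])

# At low pressure the open phase is stable (dH > 0); the denser phase takes
# over once pressure makes its smaller volume pay off (dH < 0).
p_transition = pressures[np.argmax(dH < 0)]
print(p_transition)
```

Finite-temperature refinements, such as the quasiharmonic phonon free energies mentioned in the abstract, add a temperature-dependent term to each phase's free energy but leave this comparison logic unchanged.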
New methods for econometric inference
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Economics, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 201-208). Monotonicity is a key qualitative prediction of a wide array of economic models derived via robust comparative statics. It is therefore important to design effective and practical econometric methods for testing this prediction in empirical analysis. Chapter 1 develops a general nonparametric framework for testing monotonicity of a regression function. Using this framework, a broad class of new tests is introduced, which gives an empirical researcher a lot of flexibility to incorporate ex ante information she might have. Chapter 1 also develops new methods for simulating critical values, which are based on the combination of a bootstrap procedure and new selection algorithms. These methods yield tests that have correct asymptotic size and are asymptotically nonconservative. It is also shown how to obtain an adaptive rate optimal test that has the best attainable rate of uniform consistency against models whose regression function has Lipschitz-continuous first-order derivatives and that automatically adapts to the unknown smoothness of the regression function. Simulations show that the power of the new tests in many cases significantly exceeds that of some prior tests, e.g. that of Ghosal, Sen, and Van der Vaart (2000). An application of the developed procedures to the dataset of Ellison and Ellison (2011) shows that there is some evidence of strategic entry deterrence in the pharmaceutical industry, where incumbents may use strategic investment to prevent generic entries when their patents expire. Many economic models yield conditional moment inequalities that can be used for inference on parameters of these models. In chapter 2, I construct a new test of conditional moment inequalities based on studentized kernel estimates of moment functions.
The test automatically adapts to the unknown smoothness of the moment functions, has uniformly correct asymptotic size, and is rate optimal against certain classes of alternatives. Some existing tests have nontrivial power against n^(-1/2)-local alternatives of a certain type, whereas my method only allows for nontrivial testing against (n/log n)^(-1/2)-local alternatives of this type. There exist, however, large classes of sequences of well-behaved alternatives against which the test developed in this paper is consistent and those tests are not. In chapter 3 (coauthored with Victor Chernozhukov and Kengo Kato), we derive a central limit theorem for the maximum of a sum of high dimensional random vectors. Specifically, we establish conditions under which the distribution of the maximum is approximated by that of the maximum of a sum of the Gaussian random vectors with the same covariance matrices as the original vectors. The key innovation of this result is that it applies even when the dimension of random vectors (p) is large compared to the sample size (n); in fact, p can be much larger than n. We also show that the distribution of the maximum of a sum of the random vectors with unknown covariance matrices can be consistently estimated by the distribution of the maximum of a sum of the conditional Gaussian random vectors obtained by multiplying the original vectors with i.i.d. Gaussian multipliers. This is the multiplier bootstrap procedure. Here too, p can be large or even much larger than n. These distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation to the distribution of the original maximum, often with approximation error decreasing polynomially in the sample size, and hence are of interest in many applications. We demonstrate how our central limit theorem and the multiplier bootstrap can be used for high dimensional estimation, multiple hypothesis testing, and adaptive specification testing.
All these results contain non-asymptotic bounds on approximation errors.
by Denis Chetverikov, Ph.D.
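The multiplier bootstrap of chapter 3 can be sketched directly from its description: multiply the centred observations by i.i.d. Gaussian multipliers and recompute the maximum of the normalized sum, conditional on the data. The sketch below is a generic illustration of that procedure; the dimensions, scales, and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# n observations of a p-dimensional random vector; p larger than n is allowed
n, p = 100, 500
scales = rng.uniform(0.5, 2.0, p)             # heterogeneous, unknown variances
X = rng.standard_normal((n, p)) * scales      # mean-zero data for the illustration

# Statistic of interest: the maximum coordinate of the normalized sum
T = np.max(X.sum(axis=0) / np.sqrt(n))

# Gaussian multiplier bootstrap: multiply the centred observations by i.i.d.
# N(0, 1) multipliers and recompute the maximum, conditional on the data.
Xc = X - X.mean(axis=0)
n_boot = 2000
E = rng.standard_normal((n_boot, n))          # one row of multipliers per draw
T_boot = np.max(E @ Xc / np.sqrt(n), axis=1)

# The (1 - alpha) quantile of T_boot serves as a critical value for T
alpha = 0.05
crit = np.quantile(T_boot, 1 - alpha)
print(float(crit))
```

The point of the theory summarized above is that this conditional-Gaussian approximation remains valid even when p greatly exceeds n, which is what makes the critical value trustworthy in high-dimensional testing problems.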
Radiation Hydrodynamics
The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation is sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do ''radiation hydrodynamics'' without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid.
That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is to distinguish hydrogen atoms from helium atoms, for instance. They are all just components of a mixed fluid in this case. So why do we have a special subject called ''radiation hydrodynamics'', when photons are just one of the many kinds of particles that comprise our fluid? The reason is that photons couple rather weakly to the atoms, ions and electrons, much more weakly than those particles couple with each other. Nor is the matter-radiation coupling negligible in many problems, since the star or nebula may be millions of mean free paths in extent. Radiation hydrodynamics exists as a discipline to treat those problems for which the energy and momentum coupling terms between matter and radiation are important, and for which, since the photon mean free path is neither extremely large nor extremely small compared with the size of the system, the radiation field is not very easy to calculate. In the theoretical development of this subject, many of the relations are presented in a form that is described as approximate, and perhaps accurate only to order of ν/c. This makes the discussion cumbersome. Why are we required to do this? It is because we are using Newtonian mechanics to treat our fluid, yet its photon component is intrinsically relativistic; the particles travel at the speed of light. There is a perfectly consistent relativistic kinetic theory, and a corresponding relativistic theory of fluid mechanics, which is perfectly suited to describing the photon gas. But it is cumbersome to use this for the fluid in general, and we prefer to avoid it for cases in which the flow velocity satisfies ν << c.
The price we pay is to spend extra effort making sure that the source-sink terms relating to our relativistic gas component are included in the equations of motion in a form that preserves overall conservation of energy and momentum, something that would be automatic if the relativistic equations were used throughout.
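For concreteness, one schematic form of the order-ν/c matter-radiation coupling described above, written in the standard textbook notation of the field rather than quoted from this article (the symbols and coefficients below are supplied as an illustration), is:

```latex
\rho \frac{D\mathbf{u}}{Dt} = -\nabla p + \frac{\chi_F}{c}\,\mathbf{F},
\qquad
\rho \frac{De}{Dt} + p\,\nabla \cdot \mathbf{u} = c\,\kappa_E E - 4\pi\,\kappa_P B ,
```

where F is the radiation flux, E the radiation energy density, B the Planck function, and χ_F, κ_E, κ_P the flux-mean, energy-mean and Planck-mean absorption coefficients. The right-hand sides are exactly the source-sink terms whose careful bookkeeping the closing paragraph describes: momentum is exchanged through the flux term, and energy through the imbalance between absorption and emission.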