
    A quantum crystallographic approach to study properties of molecules in crystals

    In this dissertation, the behaviour of atoms, bonds, functional groups and molecules is studied in vacuo, but especially also in the crystal, using quantum crystallographic methods. The goal is to deepen the understanding of the properties of these building blocks as well as of the interactions among them, because a good comprehension of the microscopic units and their interplay also enables us to explain the macroscopic properties of crystals. The first part (chapters 1-3) and second part (chapter 4) of this dissertation contain theoretical introductions to quantum crystallography. On the one hand, this expression contains the term quantum, referring to quantum chemistry; therefore, the very first chapter gives a brief overview of this field. The second chapter addresses different options for partitioning quantum chemical quantities, such as the electron density or the bonding energy, into their components. On the other hand, quantum crystallography obviously also comprises a crystallographic part, and chapter 3 covers these aspects, focusing predominantly on X-ray diffraction. A more detailed introduction to quantum crystallography itself is presented in the second part (chapter 4). The third part (chapters 5-9) starts with an overview of the goals of this work, followed by the results organized in four chapters. The goal is to deepen the understanding of the properties of crystals by theoretically analysing their building blocks. For example, it is studied how electrons and orbitals rearrange due to the electric field in a crystal, or how high pressure leads to the formation of new bonds. Ultimately, these findings shall help to rationally design materials with desired properties such as a high refractive index or semiconductivity.
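
    As a toy illustration of partitioning a quantum-chemical quantity into atomic contributions (the topic of chapter 2), the sketch below computes Mulliken atomic populations from a density matrix and an overlap matrix. It is a generic textbook example, not code from the dissertation; the array names and the basis-function-to-atom mapping are placeholders.

        import numpy as np

        def mulliken_populations(density, overlap, ao_to_atom, n_atoms):
            # density    : (n_ao, n_ao) one-particle density matrix P in an AO basis
            # overlap    : (n_ao, n_ao) AO overlap matrix S
            # ao_to_atom : (n_ao,) index of the atom each basis function is centred on
            ps_diag = np.diag(density @ overlap)      # gross population of each AO
            populations = np.zeros(n_atoms)
            for ao, atom in enumerate(ao_to_atom):
                populations[atom] += ps_diag[ao]      # collect AO contributions per atom
            return populations                        # sums to the total electron number

    Partitioning schemes used in quantum crystallography, such as Hirshfeld or QTAIM atoms, follow the same idea of splitting a global quantity into atomic pieces, only with different weights.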

    HPCCP/CAS Workshop Proceedings 1998

    This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.

    Développement et implémentation parallèle de méthodes d'interaction de configurations sélectionnées (Development and parallel implementation of selected configuration interaction methods)

    This thesis, whose topic is quantum chemistry algorithms, is set in the context of the paradigm shift observed over the last dozen years, in which sequential computational methods are progressively being replaced by parallel ones. Indeed, since increases in processor frequency run into physical barriers that are difficult to overcome, computational power is now increased by increasing the number of computing units. However, where an increase in frequency mechanically led to faster execution of a code, an increase in the number of cores may run into algorithmic barriers, which can require adapting or even changing the algorithm. Among the methods developed to circumvent this problem are, in particular, Monte Carlo (stochastic) methods, which are intrinsically "embarrassingly parallel", that is, by construction they consist of a multitude of independent tasks and are therefore particularly well suited to massively parallel architectures. In many cases they also have the advantage of being able to produce an approximate result for a fraction of the computational cost of the exact deterministic equivalent. In this thesis, massively parallel implementations of several deterministic quantum chemistry algorithms were realized: CIPSI, Davidson diagonalization, computation of the second-order perturbation, shifted-Bk, and Multi-Reference Coupled Cluster. For some of them, a stochastic component was introduced in order to improve their efficiency. All of these methods were implemented on a distributed task model over TCP, in which a central process distributes tasks over the network and collects the results; in other words, slave nodes can be added during the computation from any machine reachable through the Internet. The parallel efficiency of the algorithms implemented in this thesis has been studied, and the program has led to numerous applications, notably obtaining reference energies for difficult molecular systems.
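
    As an illustration of the kind of work the CIPSI selection step parallelizes, the following is a minimal sketch of ranking external determinants by their estimated second-order perturbative contribution. It is a generic sketch, not the thesis's implementation: the function name, the dense coupling matrix, and the Epstein-Nesbet-style denominator are illustrative assumptions.

        import numpy as np

        def select_by_pt2(h_coupling, h_diag_ext, ci_coeffs, e_var, n_add):
            # h_coupling : (n_ext, n_int) matrix of <alpha|H|I> between external
            #              determinants alpha and internal determinants I
            # h_diag_ext : (n_ext,) diagonal elements <alpha|H|alpha>
            # ci_coeffs  : (n_int,) coefficients of the current variational wavefunction
            # e_var      : current variational energy
            # <alpha|H|Psi> for every external determinant; each contribution is an
            # independent task, which is what makes the step easy to distribute
            coupling = h_coupling @ ci_coeffs
            # estimated second-order energy contribution of each determinant
            e2 = coupling**2 / (e_var - h_diag_ext)
            # keep the n_add determinants with the largest contribution in magnitude
            selected = np.argsort(np.abs(e2))[::-1][:n_add]
            return selected, e2[selected].sum()

    A production code would generate the couplings on the fly rather than store them, and would hand batches of external determinants to worker processes, matching the master/worker task model described above.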

    Statistical analysis for longitudinal MR imaging of dementia

    Serial Magnetic Resonance (MR) Imaging can reveal structural atrophy in the brains of subjects with neurodegenerative diseases such as Alzheimer’s Disease (AD). Methods of computational neuroanatomy allow the detection of statistically significant patterns of brain change over time and/or over multiple subjects. The focus of this thesis is the development and application of statistical and supporting methodology for the analysis of three-dimensional brain imaging data. There is a particular emphasis on longitudinal data, though much of the statistical methodology is more general. New methods of voxel-based morphometry (VBM) are developed for serial MR data, employing combinations of tissue segmentation and longitudinal non-rigid registration. The methods are evaluated using novel quantitative metrics based on simulated data. Contributions to general aspects of VBM are also made, and include a publication concerning guidelines for reporting VBM studies, and another examining an issue in the selection of which voxels to include in the statistical analysis mask for VBM of atrophic conditions. Research is carried out into the statistical theory of permutation testing for application to multivariate general linear models, and is then used to build software for the analysis of multivariate deformation- and tensor-based morphometry data, efficiently correcting for the multiple comparison problem inherent in voxel-wise analysis of images. Monte Carlo simulation studies extend results available in the literature regarding the different strategies available for permutation testing in the presence of confounds. Theoretical aspects of longitudinal deformation- and tensor-based morphometry are explored, such as the options for combining within- and between-subject deformation fields. Practical investigation of several different methods and variants is performed for a longitudinal AD study.
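
    The multiple-comparison correction mentioned above is commonly obtained from a maximum-statistic permutation test. The sketch below is a minimal, simplified version for a two-sample comparison on a subjects-by-voxels data matrix; it is not the multivariate software developed in the thesis, and the function name and inputs are illustrative.

        import numpy as np

        def max_stat_permutation_test(data, is_patient, n_perm=1000, seed=0):
            # data       : (n_subjects, n_voxels) array of morphometric values
            # is_patient : (n_subjects,) boolean group labels
            rng = np.random.default_rng(seed)

            def t_map(y, g):
                a, b = y[g], y[~g]
                # Welch-type t statistic at every voxel
                se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                             b.var(axis=0, ddof=1) / len(b))
                return (a.mean(axis=0) - b.mean(axis=0)) / se

            observed = t_map(data, is_patient)
            # null distribution of the maximum absolute statistic over all voxels
            max_null = np.empty(n_perm)
            for i in range(n_perm):
                max_null[i] = np.abs(t_map(data, rng.permutation(is_patient))).max()
            # family-wise-error-corrected p-value at each voxel
            p_fwe = (1 + (max_null[:, None] >= np.abs(observed)[None, :]).sum(axis=0)) / (n_perm + 1)
            return observed, p_fwe

    Comparing each voxel's statistic with the permutation distribution of the image-wise maximum controls the family-wise error rate without assuming a particular spatial correlation structure; handling confounds requires the more careful permutation strategies compared in the thesis.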

    New methods for econometric inference

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 201-208). Monotonicity is a key qualitative prediction of a wide array of economic models derived via robust comparative statics. It is therefore important to design effective and practical econometric methods for testing this prediction in empirical analysis. Chapter 1 develops a general nonparametric framework for testing monotonicity of a regression function. Using this framework, a broad class of new tests is introduced, which gives an empirical researcher a lot of flexibility to incorporate ex ante information she might have. Chapter 1 also develops new methods for simulating critical values, which are based on the combination of a bootstrap procedure and new selection algorithms. These methods yield tests that have correct asymptotic size and are asymptotically nonconservative. It is also shown how to obtain an adaptive rate-optimal test that has the best attainable rate of uniform consistency against models whose regression function has Lipschitz-continuous first-order derivatives and that automatically adapts to the unknown smoothness of the regression function. Simulations show that the power of the new tests in many cases significantly exceeds that of some prior tests, e.g. that of Ghosal, Sen, and Van der Vaart (2000). An application of the developed procedures to the dataset of Ellison and Ellison (2011) shows that there is some evidence of strategic entry deterrence in the pharmaceutical industry, where incumbents may use strategic investment to prevent generic entry when their patents expire. Many economic models yield conditional moment inequalities that can be used for inference on the parameters of these models. In chapter 2, I construct a new test of conditional moment inequalities based on studentized kernel estimates of moment functions. The test automatically adapts to the unknown smoothness of the moment functions, has uniformly correct asymptotic size, and is rate optimal against certain classes of alternatives. Some existing tests have nontrivial power against n^{-1/2}-local alternatives of a certain type, whereas my method only allows for nontrivial testing against (n/log n)^{-1/2}-local alternatives of this type. There exist, however, large classes of sequences of well-behaved alternatives against which the test developed in this paper is consistent and those tests are not. In chapter 3 (coauthored with Victor Chernozhukov and Kengo Kato), we derive a central limit theorem for the maximum of a sum of high-dimensional random vectors. Specifically, we establish conditions under which the distribution of the maximum is approximated by that of the maximum of a sum of Gaussian random vectors with the same covariance matrices as the original vectors. The key innovation of this result is that it applies even when the dimension of the random vectors (p) is large compared to the sample size (n); in fact, p can be much larger than n. We also show that the distribution of the maximum of a sum of random vectors with unknown covariance matrices can be consistently estimated by the distribution of the maximum of a sum of the conditional Gaussian random vectors obtained by multiplying the original vectors with i.i.d. Gaussian multipliers. This is the multiplier bootstrap procedure. Here too, p can be large or even much larger than n.
    These distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation to the distribution of the original maximum, often with approximation error decreasing polynomially in the sample size, and hence are of interest in many applications. We demonstrate how our central limit theorem and the multiplier bootstrap can be used for high-dimensional estimation, multiple hypothesis testing, and adaptive specification testing. All these results contain non-asymptotic bounds on approximation errors. By Denis Chetverikov. Ph.D.
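
    As a rough illustration of the multiplier bootstrap described in chapter 3, the sketch below approximates a high quantile of the maximum coordinate of a normalized sum of high-dimensional vectors by redrawing i.i.d. Gaussian multipliers. It is a generic sketch under simplifying assumptions (data centered at the empirical mean); the function name and defaults are not taken from the thesis.

        import numpy as np

        def multiplier_bootstrap_quantile(x, alpha=0.05, n_boot=2000, seed=0):
            # x : (n, p) array of observations; p may be much larger than n
            rng = np.random.default_rng(seed)
            n, _ = x.shape
            x_centered = x - x.mean(axis=0)          # center at the empirical mean
            stats = np.empty(n_boot)
            for b in range(n_boot):
                e = rng.standard_normal(n)           # i.i.d. Gaussian multipliers
                # conditionally on the data, this weighted sum is Gaussian with the
                # empirical covariance of the original vectors
                stats[b] = (e @ x_centered / np.sqrt(n)).max()
            # (1 - alpha) quantile of the bootstrap distribution of the maximum
            return np.quantile(stats, 1 - alpha)

    The observed statistic, the maximum over coordinates j of n^{-1/2} sum_i x_{ij}, can then be compared with this quantile, for example to calibrate simultaneous confidence bands or multiple hypothesis tests.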