
    A quantum crystallographic approach to study properties of molecules in crystals

    In this dissertation, the behaviour of atoms, bonds, functional groups and molecules in vacuo, but especially also in the crystal, is studied using quantum crystallographic methods. The goal is to deepen the understanding of the properties of these building blocks as well as of the interactions among them, because a good comprehension of the microscopic units and their interplay also enables us to explain the macroscopic properties of crystals. The first part (chapters 1-3) and the second part (chapter 4) of this dissertation contain theoretical introductions to quantum crystallography. On the one hand, the expression contains the term "quantum", referring to quantum chemistry, so the very first chapter gives a brief overview of this field. The second chapter addresses different options for partitioning quantum chemical entities, such as the electron density or the bonding energy, into their components. On the other hand, quantum crystallography obviously also comprises a crystallographic part, and chapter 3 covers these aspects, focusing predominantly on X-ray diffraction. A more detailed introduction to quantum crystallography itself is presented in the second part (chapter 4). The third part (chapters 5-9) starts with an overview of the goals of this work, followed by the results organized in four chapters. The goal is to deepen the understanding of the properties of crystals by theoretically analysing their building blocks. For example, it is studied how electrons and orbitals rearrange due to the electric field in a crystal, or how high pressure leads to the formation of new bonds. Ultimately, these findings shall help to rationally design materials with desired properties such as a high refractive index or semiconductivity.
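    The chapter on partitioning mentioned above concerns dividing quantities such as the electron density into atomic contributions. As a purely illustrative sketch, not tied to the specific methods used in the dissertation, the following snippet applies one well-known scheme, stockholder (Hirshfeld) partitioning, to a toy one-dimensional density built from Gaussians; all densities and parameters are invented for the example.

```python
import numpy as np

# Toy 1D sketch of stockholder (Hirshfeld) partitioning: the total density
# is divided among atoms in proportion to free-atom "proatom" densities
# placed at the nuclear positions. Everything here is illustrative.

x = np.linspace(-5.0, 5.0, 2001)          # 1D grid (arbitrary units)
dx = x[1] - x[0]

def gaussian(x, center, height, width):
    return height * np.exp(-((x - center) / width) ** 2)

# Proatom densities for two hypothetical "atoms" A and B.
rho_pro_A = gaussian(x, -1.0, 1.0, 0.8)
rho_pro_B = gaussian(x, +1.0, 0.6, 1.0)

# A toy "molecular" density, here simply a slightly polarized sum.
rho_mol = gaussian(x, -0.9, 1.1, 0.8) + gaussian(x, +1.1, 0.5, 1.0)

# Stockholder weights w_A = rho_pro_A / (rho_pro_A + rho_pro_B).
promolecule = rho_pro_A + rho_pro_B
w_A = np.divide(rho_pro_A, promolecule,
                out=np.zeros_like(x), where=promolecule > 1e-12)

rho_A = w_A * rho_mol          # density assigned to atom A
rho_B = rho_mol - rho_A        # remainder assigned to atom B

# Integrated atomic populations in this toy model.
N_A = rho_A.sum() * dx
N_B = rho_B.sum() * dx
print(f"population A: {N_A:.3f}, population B: {N_B:.3f}")
```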

    Ranking Viscous Finger Simulations to an Acquired Ground Truth with Topology-aware Matchings

    This application paper presents a novel framework based on topological data analysis for the automatic evaluation and ranking of viscous finger simulation runs in an ensemble with respect to a reference acquisition. Individual fingers in a given time step are associated with critical point pairs in the distance field to the injection point, forming persistence diagrams. Different metrics, based on optimal transport, for comparing time-varying persistence diagrams in this specific application case are introduced. We evaluate the relevance of the rankings obtained with these metrics, both qualitatively, thanks to a lightweight web visual interface, and quantitatively, by studying the deviation from a reference ranking suggested by experts. Extensive experiments show the quantitative superiority of our approach compared to traditional alternatives. Our web interface allows experts to conveniently explore the produced rankings. We show a complete viscous fingering case study demonstrating the utility of our approach in the context of porous media fluid flow, where our framework can be used to automatically discard physically irrelevant simulation runs from the ensemble and rank the most plausible ones. We document an in-situ implementation to lighten the I/O and performance constraints arising in the context of parametric studies.
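    The metrics in this work are built on optimal transport between persistence diagrams. As a rough illustration of that ingredient, and not of the paper's specific time-varying metrics, the sketch below computes a standard 2-Wasserstein distance between two small persistence diagrams by augmenting each diagram with diagonal matches and solving the resulting assignment problem with SciPy; the toy diagrams are invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_pd(d1, d2):
    """2-Wasserstein distance between two persistence diagrams.

    d1, d2: arrays of shape (n, 2) / (m, 2) with (birth, death) pairs.
    Each point may also be matched to the diagonal, modelled by
    augmenting the cost matrix with diagonal 'slots'.
    """
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    n, m = len(d1), len(d2)
    cost = np.zeros((n + m, n + m))

    # point-to-point costs (squared Euclidean in the birth/death plane)
    cost[:n, :m] = ((d1[:, None, :] - d2[None, :, :]) ** 2).sum(-1)

    # cost of sending a point to the diagonal: squared distance to y = x
    cost[:n, m:] = ((d1[:, 1] - d1[:, 0]) ** 2 / 2.0)[:, None]
    cost[n:, :m] = ((d2[:, 1] - d2[:, 0]) ** 2 / 2.0)[None, :]
    # diagonal-to-diagonal matches are free; cost[n:, m:] stays 0

    row, col = linear_sum_assignment(cost)
    return np.sqrt(cost[row, col].sum())

# toy example: two diagrams with slightly shifted persistence pairs
A = [(0.1, 0.9), (0.2, 0.5)]
B = [(0.15, 0.85), (0.6, 0.65)]
print(wasserstein_pd(A, B))
```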

    Visual Analysis of Variability and Features of Climate Simulation Ensembles

    This PhD thesis is concerned with the visual analysis of time-dependent scalar field ensembles as they occur in climate simulations. Modern climate projections consist of multiple simulation runs (ensemble members) that vary in parameter settings and/or initial values, which leads to variations in the resulting simulation data. The goal of ensemble simulations is to sample the space of possible futures under the given climate model and to provide quantitative information about uncertainty in the results. The analysis of such data is challenging because, apart from the spatiotemporal data, the variability also has to be analyzed and communicated. This thesis presents novel techniques to visually analyze climate simulation ensembles. A central question is how the data can be aggregated with minimal information loss. To address this question, a key technique applied in several places in this work is clustering. The first part of the thesis addresses the challenge of finding clusters in the ensemble simulation data. Various distance metrics lend themselves to the comparison of scalar fields; these are explored theoretically and practically. A visual analytics interface allows the user to interactively explore and compare multiple parameter settings for the clustering and to investigate the resulting clusters, i.e. prototypical climate phenomena. A central contribution here is the development of design principles for analyzing variability in decadal climate simulations, which has led to a visualization system centered around the new Clustering Timeline. This is a variant of a Sankey diagram that utilizes clustering results to communicate climatic states over time, coupled with ensemble member agreement. It can reveal several interesting properties of the dataset, such as how many inherently similar groups the ensemble can be divided into at any given time, whether the ensemble diverges in general, whether there are different phases over time, possible periodicity, or outliers. The Clustering Timeline is also used to compare multiple climate simulation models and assess their performance. The Hierarchical Clustering Timeline is an advanced version of the above. It introduces the concept of a cluster hierarchy that groups the data, from the whole dataset down to the individual static scalar fields, into clusters of various sizes and densities while recording the nesting relationships between them. A further contribution of this work to visualization research is an investigation of how a hierarchical clustering of time-dependent scalar fields can be practically utilized to analyze the data. To this end, a system of different views is proposed which are linked through various interaction possibilities. The main advantage of the system is that a dataset can now be inspected at an arbitrary level of detail without having to recompute a clustering with different parameters. Interesting branches of the simulation can be expanded to reveal smaller differences in critical clusters, or folded to show only a coarse representation of the less interesting parts of the dataset. The last building block of the suite of visual analysis methods developed for this thesis aims at a robust, (largely) automatic detection and tracking of certain features in a scalar field ensemble. I present techniques that can identify and track super- and sub-levelsets, and I derive "centers of action" from these sets, which mark the location of extremal climate phenomena that govern the weather (e.g. the Icelandic Low and the Azores High).
The thesis also presents visual and quantitative techniques to evaluate the temporal change of the positions of these centers; such a displacement would likely manifest in changes in the weather. In a preliminary analysis with my collaborators, we indeed observed changes in the loci of the centers of action in a simulation with increased greenhouse gas concentration compared to pre-industrial concentration levels.
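    To make the Clustering Timeline idea concrete, the following sketch shows the kind of data backbone such a view could be built on: ensemble members are clustered independently at every time step, and member flows between clusters of consecutive time steps are counted, which is what a Sankey-style timeline would render as bands. The ensemble data is synthetic, scikit-learn's k-means stands in for the thesis's clustering and distance metrics, and the alignment of cluster labels across time steps is deliberately omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical ensemble: 20 members, 12 time steps, 500 grid points each.
# (Random data stands in for actual climate simulation output.)
rng = np.random.default_rng(0)
ensemble = rng.normal(size=(20, 12, 500))

k = 3  # number of clusters ("climatic states") per time step
labels = np.empty((12, 20), dtype=int)

# Cluster the ensemble members independently at every time step.
for t in range(12):
    fields = ensemble[:, t, :]   # one flattened scalar field per member
    labels[t] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(fields)

# Transition counts between consecutive time steps: these member flows
# between clusters are what a Sankey-style timeline would draw as bands.
# (A real Clustering Timeline would also match cluster labels over time.)
transitions = np.zeros((12 - 1, k, k), dtype=int)
for t in range(12 - 1):
    for a, b in zip(labels[t], labels[t + 1]):
        transitions[t, a, b] += 1

print(transitions[0])   # member flow from time step 0 to time step 1
```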

    Multimodal Biomedical Data Visualization: Enhancing Network, Clinical, and Image Data Depiction

    In this dissertation, we present visual analytics tools for several biomedical applications. Our research spans three types of biomedical data: reaction networks, longitudinal multidimensional clinical data, and biomedical images. For each data type, we present intuitive visual representations and efficient data exploration methods to facilitate visual knowledge discovery. Rule-based simulation has been used for studying complex protein interactions. In a rule-based model, the relationships of interacting proteins can be represented as a network. However, understanding and validating the intended behaviors in large network models is inefficient and error-prone. We have developed a tool that first shows a network overview with concise visual representations and then shows relevant rule-specific details on demand. This strategy significantly improves visualization comprehensibility and disentangles the complex protein-protein relationships by showing them selectively alongside the global context of the network. Next, we present a tool for analyzing longitudinal multidimensional clinical datasets, which we developed for understanding Parkinson's disease progression. Detecting patterns involving multiple time-varying variables is especially challenging for clinical data. Conventional computational techniques, such as cluster analysis and dimension reduction, do not always generate interpretable, actionable results. Using our tool, users can select and compare patient subgroups by filtering patients with multiple symptoms simultaneously and interactively. Unlike the local features used by conventional visualizations, many targets in biomedical images are characterized by high-level features. We present our research on characterizing such high-level features through multiscale texture segmentation and deep-learning strategies. First, we present an efficient hierarchical texture segmentation approach that scales well to gigapixel images and use it to colorize electron microscopy (EM) images. This enhances the visual comprehensibility of gigapixel EM images across a wide range of scales. Second, we use convolutional neural networks (CNNs) to automatically derive high-level features that distinguish cell states in live-cell imagery and voxel types in 3D EM volumes. In addition, we present a CNN-based 3D segmentation method for biomedical volume datasets with limited training samples. We use factorized convolutions and feature-level augmentations to improve model generalization and avoid overfitting.
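    The dissertation's exact network architecture is not spelled out in this abstract; as a generic, hedged illustration of what a factorized 3D convolution can look like, the sketch below replaces a dense 3x3x3 convolution with three 1D convolutions along the volume axes using PyTorch (an assumed framework), which reduces the parameter count and is one common way to curb overfitting when training data is scarce.

```python
import torch
import torch.nn as nn

class FactorizedConv3d(nn.Module):
    """One possible factorization of a 3x3x3 convolution into three 1D
    convolutions along z, y and x. This reduces the kernel parameters
    from 27*Cin*Cout to roughly 3*3*Cin*Cout, a common regularization
    for 3D segmentation networks trained on few samples."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=(1, 3, 1), padding=(0, 1, 0)),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

# toy volume: batch of 1, 1 channel, 32^3 voxels
x = torch.randn(1, 1, 32, 32, 32)
block = FactorizedConv3d(1, 16)
print(block(x).shape)   # torch.Size([1, 16, 32, 32, 32])
```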

    Adaptive multiresolution visualization of large multidimensional multivariate scientific datasets

    The sizes of today's scientific datasets range from megabytes to terabytes, making it impossible to browse the raw datasets visually. This presents significant challenges for visualization scientists who are interested in supporting these datasets. In this thesis, we present an adaptive data representation model which can be utilized with many of the commonly employed visualization techniques when dealing with large amounts of data. Our hierarchical design also alleviates the long-standing visualization problem of limited display space. The idea is based on using compactly supported orthogonal wavelets and additional downsizing techniques to generate a hierarchy of fine-to-coarse approximations of a very large dataset for visualization. An adaptive data hierarchy, which contains authentic multiresolution approximations and the corresponding errors, has many advantages over the original data. First, it allows scientists to visualize the overall structure of a dataset by browsing its coarse approximations. Second, the fine approximations of the hierarchy provide local details of interesting data subsets. Third, the error of the data representation can give the scientist information about the authenticity of the data approximation. Finally, in a client-server network environment, a coarse representation can increase the efficiency of a visualization process by quickly giving users a rough idea of the dataset before they decide whether to continue the transmission or to abort it. For datasets that require long rendering times, an authentic approximation of a very large dataset can greatly speed up the visualization process. Variations on the main wavelet-based multiresolution hierarchy described in this thesis also lead to other multiresolution representation mechanisms. For example, we investigate the use of norm projections and principal components to build multiresolution data hierarchies of large multivariate datasets. This leads to the development of a more flexible dual multiresolution visualization environment for large data exploration. We present the results of experimental studies of our adaptive multiresolution representation using wavelets. Utilizing a multiresolution data hierarchy, we illustrate that information access from a dataset with tens of millions of data values can be achieved in real time. Based on these results, we propose procedures to assist in generating a multiresolution hierarchy of a large dataset. For example, the findings indicate that, for some tasks, an ordinary computed tomography volume dataset can be represented effectively by an adaptive data hierarchy at less than 1.5% of its original size.
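    As a minimal sketch of the wavelet-based fine-to-coarse idea, assuming PyWavelets and synthetic data in place of a real scientific dataset, the snippet below decomposes a 2D field, reconstructs progressively coarser approximations by zeroing detail bands, and records the representation error at each level, i.e. two ingredients of the adaptive data hierarchy described above: authentic approximations and their error.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic 2D "dataset" standing in for a slice of a large scientific volume.
rng = np.random.default_rng(1)
data = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)

# Orthogonal wavelet decomposition: coeffs = [cA_L, (cH,cV,cD)_L, ..., (cH,cV,cD)_1]
wavelet, levels = "db2", 4
coeffs = pywt.wavedec2(data, wavelet, level=levels)

# Build a fine-to-coarse hierarchy: keep the approximation, zero ever more
# detail bands, reconstruct, and record the representation error.
for kept in range(levels, -1, -1):
    truncated = [coeffs[0]]
    for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        if i <= kept:
            truncated.append((cH, cV, cD))
        else:
            truncated.append((np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD)))
    approx = pywt.waverec2(truncated, wavelet)[: data.shape[0], : data.shape[1]]
    rmse = np.sqrt(np.mean((approx - data) ** 2))
    print(f"detail levels kept: {kept}  RMSE: {rmse:.4f}")
```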

    Comparative Uncertainty Visualization for High-Level Analysis of Scalar- and Vector-Valued Ensembles

    With this thesis, I contribute to the research field of uncertainty visualization, considering parameter dependencies in multi-valued fields and the uncertainty of automated data analysis. Like uncertainty visualization in general, both of these fields are becoming more and more important due to increasing computational power, the growing importance and availability of complex models and collected data, and progress in artificial intelligence. I contribute in the following application areas: Uncertain Topology of Scalar Field Ensembles. The generalization of topology-based visualizations to multi-valued data involves many challenges. An example is the comparative visualization of multiple contour trees, complicated by the random nature of prevalent contour tree layout algorithms. I present a novel approach for the comparative visualization of contour trees: the Fuzzy Contour Tree. Uncertain Topological Features in Time-Dependent Scalar Fields. Tracking features in time-dependent scalar fields is an active field of research, where most approaches rely on the comparison of consecutive time steps. I created a more holistic visualization for time-varying scalar field topology by adapting Fuzzy Contour Trees to the time-dependent setting. Uncertain Trajectories in Vector Field Ensembles. Visitation maps are an intuitive and well-known visualization of uncertain trajectories in vector field ensembles. For large ensembles, however, visitation maps are either not applicable or require extensive computation time. I developed Visitation Graphs, a new representation and data reduction method for vector field ensembles that can be calculated in situ and is an optimal basis for the efficient generation of visitation maps; this is accomplished by shifting the expensive calculations to a pre-processing step. Visually Supported Anomaly Detection in Cyber Security. Numerous cyber attacks and the increasing complexity of networks and their protection necessitate the application of automated data analysis in cyber security. Due to uncertainty in automated anomaly detection, the results need to be communicated to analysts to ensure appropriate reactions. I introduce a visualization system combining device readings and anomaly detection results: the Security in Process System. To further support analysts, I developed an application-agnostic framework for integrating knowledge assistance and applied it to the Security in Process System. I present this Knowledge Rocks Framework, its application, and the results of evaluations for both the original and the knowledge-assisted Security in Process System. For all presented systems, I provide implementation details, illustrations and applications.
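    As an illustration of the visitation map concept that Visitation Graphs accelerate, the sketch below accumulates one from a synthetic ensemble of 2D trajectories: each ensemble member marks the grid cells its trajectory passes through, and the map records the fraction of members visiting each cell. The velocity field, noise model and resolution are invented for the example; the thesis's in-situ Visitation Graph representation is not reproduced here.

```python
import numpy as np

def visitation_map(trajectories, bounds, resolution):
    """Fraction of ensemble members whose trajectory visits each grid cell.

    trajectories: list of (n_i, 2) arrays of particle positions, one per member
    bounds: (xmin, xmax, ymin, ymax) of the domain
    resolution: (nx, ny) grid cells
    """
    xmin, xmax, ymin, ymax = bounds
    nx, ny = resolution
    counts = np.zeros((nx, ny))
    for traj in trajectories:
        ix = np.clip(((traj[:, 0] - xmin) / (xmax - xmin) * nx).astype(int), 0, nx - 1)
        iy = np.clip(((traj[:, 1] - ymin) / (ymax - ymin) * ny).astype(int), 0, ny - 1)
        visited = np.zeros((nx, ny), dtype=bool)
        visited[ix, iy] = True          # a member counts once per cell
        counts += visited
    return counts / len(trajectories)

# toy ensemble: noisy trajectories integrated through a simple rotating field
rng = np.random.default_rng(2)
ensemble = []
for _ in range(50):
    pos = np.array([0.5, 0.0]) + 0.02 * rng.normal(size=2)
    path = [pos.copy()]
    for _ in range(300):
        v = np.array([-pos[1], pos[0]])               # rotational velocity field
        pos += 0.01 * v + 0.002 * rng.normal(size=2)  # member-specific perturbation
        path.append(pos.copy())
    ensemble.append(np.array(path))

vmap = visitation_map(ensemble, bounds=(-1, 1, -1, 1), resolution=(64, 64))
print(vmap.max(), vmap.mean())
```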

    ANALYSIS AND VISUALIZATION OF FLOW FIELDS USING INFORMATION-THEORETIC TECHNIQUES AND GRAPH-BASED REPRESENTATIONS

    Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamical systems that govern various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines to capture flow patterns and how to pick good viewpoints to observe flow fields become critical questions. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, and the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. We enable observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside of the flow field, which does not convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints to provide users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to deal with unsteady flow fields. Besides flow field visualization, some other visualization topics also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we develop a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
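    The camera path mentioned for the internal-viewpoint exploration can be illustrated with a short SciPy sketch: given a handful of hypothetical internal viewpoints, a cubic B-spline is fitted through them and sampled densely to obtain a smooth fly-through path. The viewpoint coordinates are invented; selecting the viewpoints themselves is the information-theoretic part of the cited work and is not shown.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical internal viewpoints selected near flow features (x, y, z).
viewpoints = np.array([
    [0.2, 0.1, 0.5],
    [0.4, 0.3, 0.6],
    [0.5, 0.6, 0.4],
    [0.7, 0.7, 0.7],
    [0.9, 0.5, 0.8],
])

# Fit a cubic B-spline through the viewpoints (s=0 interpolates them exactly).
tck, u = splprep(viewpoints.T, s=0, k=3)

# Sample a smooth camera path along the spline for a fly-through animation.
u_fine = np.linspace(0, 1, 200)
path = np.stack(splev(u_fine, tck), axis=1)   # (200, 3) camera positions

print(path[:3])
```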

    Visual analytics methods for shape analysis of biomedical images exemplified on rodent skull morphology

    In morphometrics and its application fields, such as medicine and biology, experts are interested in causal relations between variation in organismic shape and phylogenetic, ecological, geographical, epidemiological or disease factors; or, put more succinctly by Fred L. Bookstein, morphometrics is "the study of covariances of biological form". In order to reveal causes of shape variability, targeted statistical analysis correlating shape features with external and internal factors is necessary, but due to the complexity of the problem this is often not feasible in an automated way. Therefore, a visual analytics approach is proposed in this thesis that couples interactive visualizations with automated statistical analyses in order to stimulate the generation and qualitative assessment of hypotheses about relevant shape features and the factors potentially affecting them. To this end, long-established morphometric techniques are combined with recent shape modeling approaches from geometry processing and medical imaging, leading to novel visual analytics methods for shape analysis. When used in concert, these methods facilitate targeted analysis of characteristic shape differences between groups, of co-variation between different structures of the same anatomy, and of the correlation of shape with extrinsic attributes. A special focus is put on the accurate modeling and interactive rendering of image deformations at high spatial resolution, because this allows for faithful representation and communication of diminutive shape features, large shape differences and volumetric structures. The utility of the presented methods is demonstrated in case studies conducted together with a collaborating morphometrics expert. The rodent skull and its mandible, assessed via computed tomography scans, serve as exemplary model structures.
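    Classical landmark-based morphometrics, which this thesis builds on and extends, typically aligns landmark configurations, extracts principal components of shape variation and correlates them with external factors. The sketch below runs that basic pipeline on synthetic 2D landmarks using SciPy's Procrustes alignment and a plain PCA; the data, covariate and deformation model are fabricated for illustration, and the thesis's high-resolution deformation modeling and interactive visualization go well beyond this.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical landmark data: 30 specimens, 10 landmarks in 2D, plus one
# external covariate per specimen (e.g. a geographic or dietary factor).
rng = np.random.default_rng(3)
reference = rng.normal(size=(10, 2))
covariate = rng.normal(size=30)
specimens = np.array([
    reference * (1.0 + 0.03 * c * np.array([1.0, -1.0]))   # covariate-driven stretch
    + 0.03 * rng.normal(size=(10, 2))                       # specimen-specific noise
    for c in covariate
])

# Align every specimen to the reference (scipy's procrustes removes
# translation, scale and rotation; a full GPA would iterate to a mean shape).
aligned = np.array([procrustes(reference, s)[1] for s in specimens])

# PCA of the aligned coordinates: principal components of shape variation.
flat = aligned.reshape(len(specimens), -1)
flat -= flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
scores = flat @ vt[0]                     # PC1 score per specimen

# Correlate the leading shape feature with the external factor.
r = np.corrcoef(scores, covariate)[0, 1]
print(f"correlation of PC1 shape scores with covariate: {r:.2f}")
```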