
    Introducing the gMix Open Source Framework for Mix Implementations

    Abstract. In this paper we introduce the open-source software framework gMix, which aims to simplify the implementation and evaluation of mix-based systems. gMix is targeted at researchers who want to evaluate new ideas and at developers interested in building practical mix systems. The framework consists of a generic architecture structured in logical layers with a clear separation of concerns. Implementations of mix variants and supportive components are organized as plug-ins that can easily be exchanged and extended. We provide reference implementations for several well-known mix concepts.
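The layered, exchangeable plug-in design described in the abstract can be sketched as follows. This is a hypothetical illustration of the general pattern, not the actual gMix API; the layer names, decorator, and functions are invented for the example.

```python
# Hypothetical sketch of a layered, plug-in based mix architecture in the
# spirit of gMix (names and interfaces are illustrative, not the real API).
from typing import Callable, Dict

# Each logical layer maps a plug-in name to an exchangeable implementation.
registry: Dict[str, Dict[str, Callable[[bytes], bytes]]] = {
    "recoding": {},
    "outputStrategy": {},
}

def register(layer: str, name: str):
    """Decorator that registers a plug-in under a logical layer."""
    def wrap(fn: Callable[[bytes], bytes]) -> Callable[[bytes], bytes]:
        registry[layer][name] = fn
        return fn
    return wrap

@register("recoding", "identity")
def identity_recode(msg: bytes) -> bytes:
    return msg  # placeholder for a real re-encryption step

@register("outputStrategy", "batch")
def batch_output(msg: bytes) -> bytes:
    return msg  # placeholder for a real batching strategy

def build_mix(recoding: str, output: str) -> Callable[[bytes], bytes]:
    """Compose one plug-in per layer into a processing chain."""
    r = registry["recoding"][recoding]
    o = registry["outputStrategy"][output]
    return lambda msg: o(r(msg))

mix = build_mix("identity", "batch")
result = mix(b"hello")
```

Because each layer only exposes a message-in/message-out interface, swapping one mix variant for another is a matter of registering a different plug-in under the same layer.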

    Integrating Privacy-Enhancing Technologies into the Internet Infrastructure

    The AN.ON-Next project aims to integrate privacy-enhancing technologies into the internet’s infrastructure and establish them in the consumer mass market. The technologies in focus include basic protection at the internet service provider level, improved overlay-network-based protection, and a concept for privacy protection in the emerging 5G mobile network. A crucial success factor will be the viable adjustment and development of standards, business models, and pricing strategies for these new technologies.

    GNSS array-based acquisition: theory and implementation

    This Dissertation addresses the signal acquisition problem using antenna arrays in the general framework of Global Navigation Satellite Systems (GNSS) receivers. The term GNSS classifies those navigation systems based on a constellation of satellites which emit ranging signals useful for positioning. Although the American GPS is already available, coexisting with the renewed Russian GLONASS, the forthcoming European contribution (Galileo) along with the Chinese Compass will be operative soon. Therefore, a variety of satellite constellations and signals will be available in the coming years. GNSSs provide the necessary infrastructure for a myriad of applications and services that demand a robust and accurate positioning service. Positioning availability must be guaranteed at all times, especially in safety-critical and mission-critical services. Examining the threats against service availability, it is important to take into account that all present and forthcoming GNSSs make use of Code Division Multiple Access (CDMA) techniques. The ranging signals are received with a very low pre-correlation signal-to-noise ratio (on the order of −22 dB for a receiver operating at the Earth's surface). Although the GNSS CDMA processing gain offers limited protection against Radio Frequency Interference (RFI), an interference with an interference-to-signal power ratio that exceeds the processing gain can easily degrade receiver performance or even completely deny the GNSS service, especially for conventional receivers equipped with a minimal or basic level of protection against RFI. As a consequence, RFI (either intentional or unintentional) remains the most important cause of performance degradation, and concern about this problem has grown in recent times.
    Focusing on the GNSS receiver, it is known that signal acquisition has the lowest sensitivity of the whole receiver operation and consequently becomes the performance bottleneck in the presence of interfering signals. A single-antenna receiver can make use of time and frequency diversity to mitigate interference, although the performance of these techniques is compromised in low-SNR scenarios or in the presence of wideband interference. Antenna-array receivers, on the other hand, can benefit from spatial-domain processing and thus mitigate the effects of interfering signals. Spatial diversity has traditionally been applied to the signal tracking operation of GNSS receivers. However, initial tracking conditions depend on signal acquisition, and there are a number of scenarios in which the acquisition process can fail, as stated before. Surprisingly, to the best of our knowledge, the application of antenna arrays to GNSS signal acquisition has not received much attention. This Thesis pursues a twofold objective: on the one hand, it proposes novel array-based acquisition algorithms using a well-established statistical detection theory framework; on the other hand, it demonstrates both their real-time implementation feasibility and their performance in realistic scenarios. The Dissertation starts with a brief introduction to GNSS receiver fundamentals, providing some details about the navigation signal structure and the receiver architecture of both the GPS and Galileo systems. It follows with an analysis of GNSS signal acquisition as a detection problem, using the Neyman-Pearson (NP) detection theory framework and the single-antenna acquisition signal model. The NP approach is used here to derive both the optimum detector (known as the clairvoyant detector) and the so-called Generalized Likelihood Ratio Test (GLRT) detector, which is the basis of almost all current state-of-the-art acquisition algorithms.
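The single-antenna acquisition search that the GLRT family of detectors performs can be sketched as a grid search over Doppler and code delay, picking the cell with the largest normalized correlation energy. The sampling rate, integration length, signal parameters, and the toy spreading code below are all invented for the example; a real receiver would use an actual GNSS PRN code.

```python
# Illustrative single-antenna, GLRT-style acquisition sketch: search a
# Doppler/code-delay grid for the cell maximizing correlation energy.
import numpy as np

fs = 4.0e6            # sampling rate [Hz] (assumed)
n = 4000              # samples per 1 ms coherent integration
t = np.arange(n) / fs
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=n)   # toy spreading code, not a real PRN

# Synthesize a received signal with a known Doppler and code delay.
true_doppler, true_delay = 1500.0, 250
x = np.roll(code, true_delay) * np.exp(2j * np.pi * true_doppler * t)
x += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def acquire(x, code, doppler_bins):
    """Return (doppler, delay, statistic) of the strongest grid cell.

    For each Doppler bin the carrier is wiped off and circular correlation
    against the local code is computed via FFT; the GLRT-style statistic is
    the squared correlation magnitude normalized by the received energy.
    Uses the module-level time vector t."""
    best = (0.0, 0, -np.inf)
    code_fft = np.conj(np.fft.fft(code))
    energy = np.vdot(x, x).real
    for fd in doppler_bins:
        wiped = x * np.exp(-2j * np.pi * fd * t)      # carrier wipe-off
        corr = np.fft.ifft(np.fft.fft(wiped) * code_fft)
        stat = np.abs(corr) ** 2 / energy
        k = int(np.argmax(stat))
        if stat[k] > best[2]:
            best = (fd, k, stat[k])
    return best

fd_hat, tau_hat, peak = acquire(x, code, np.arange(-5000, 5001, 250))
```

In a full receiver the peak statistic would be compared against a threshold set by the desired false-alarm probability, which is exactly where the closed-form detection and false-alarm expressions of the Dissertation come in.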
    Going further, a novel detector test statistic intended to jointly acquire a set of GNSS satellites is obtained, thus reducing both the acquisition time and the required computational resources. The effects of the front-end bandwidth on acquisition are also taken into account. Then, the GLRT is extended to the array signal model to obtain an original detector which is able to mitigate temporally uncorrelated interference even if the array is unstructured and moderately uncalibrated, thus becoming one of the main contributions of this Dissertation. The key statistical feature is the assumption of an arbitrary and unknown noise covariance matrix, which attempts to capture the statistical behavior of the interference and other undesired signals while exploiting the spatial dimension provided by antenna arrays. Closed-form expressions for the detection and false alarm probabilities are provided. Performance and interference-rejection capability are modeled and compared to their theoretical bounds. The proposed array-based acquisition algorithm is also compared to conventional acquisition techniques performed after blind null-steering beamformer approaches, such as the power minimization algorithm. Furthermore, the detector is analyzed under realistic conditions, accounting for the presence of errors in the covariance matrix estimation, residual Doppler and delay errors, and signal quantization effects. Theoretical results are supported by Monte Carlo simulations. As another main contribution of this Dissertation, the second part of the work deals with the design and implementation of a novel Field Programmable Gate Array (FPGA)-based GNSS real-time antenna-array receiver platform. The platform is intended to be used as a research tool tightly coupled with software-defined GNSS receivers.
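The spatial idea behind an array detector with an arbitrary, unknown noise covariance can be illustrated in a few lines: estimate the covariance from the received snapshots and use its inverse to whiten the code-matched correlation, which suppresses a strong directional interferer without assuming any array calibration. This is a hedged sketch of the general principle, not the Dissertation's exact detector; the array geometry, angles, and signal levels are invented for the example.

```python
# Sketch: adaptive whitening of a code-matched statistic by the sample
# covariance, suppressing a strong directional interferer.
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_snap = 8, 2000

def steer(theta_deg):
    """Steering vector of a half-wavelength uniform linear array."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_ant))

a_sig, a_int = steer(10.0), steer(-40.0)
s = rng.choice([-1.0, 1.0], size=n_snap)              # toy spreading code
noise = (rng.standard_normal((n_ant, n_snap))
         + 1j * rng.standard_normal((n_ant, n_snap))) / np.sqrt(2)
interference = 30.0 * np.outer(a_int, rng.standard_normal(n_snap))

X_h0 = interference + noise                           # signal absent
X_h1 = X_h0 + 0.1 * np.outer(a_sig, s)                # weak signal present

def whitened_stat(X, s):
    """Code-matched correlation whitened by the sample covariance.

    R_hat captures interference plus noise with no structure or
    calibration assumed; its inverse de-emphasizes the interference
    subspace in the detection statistic y^H R_hat^{-1} y."""
    R_hat = X @ X.conj().T / X.shape[1]
    y = X @ s / X.shape[1]                # per-antenna correlator outputs
    return float(np.real(y.conj() @ np.linalg.inv(R_hat) @ y))

stat_h1 = whitened_stat(X_h1, s)          # signal-present statistic
stat_h0 = whitened_stat(X_h0, s)          # signal-absent statistic
```

Even though the interferer is far stronger than the desired signal, the whitened statistic under the signal-present hypothesis clearly exceeds the signal-absent one, which is the qualitative behavior the abstract's detector formalizes with proper detection and false-alarm probabilities.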
    A complete signal reception chain, including the antenna array and the multichannel phase-coherent RF front-end for GPS L1/Galileo E1, was designed, implemented and tested. The details of the digital processing section of the platform, such as the array signal statistics extraction modules, are also provided. The design trade-offs and implementation complexities were carefully analyzed and taken into account. As a proof of concept, the problem of GNSS vulnerability to interference was addressed using the presented platform. The array-based acquisition algorithms introduced in this Dissertation were implemented and tested under realistic conditions. The performance of the algorithms was compared to single-antenna acquisition techniques, measured under strong in-band interference scenarios, including narrowband and wideband interferers and communication signals. The platform was designed to demonstrate the implementation feasibility of novel array-based acquisition algorithms, leaving the rest of the receiver operations (mainly tracking, navigation message decoding, code and phase observables, and the basic Position, Velocity and Time (PVT) solution) to a Software Defined Radio (SDR) receiver running on a personal computer, processing in real time the spatially-filtered signal sample stream coming from the platform over a Gigabit Ethernet data link. In the last part of this Dissertation, we close the loop by designing and implementing such a software receiver. The proposed software receiver targets multi-constellation/multi-frequency architectures, pursuing the goals of efficiency, modularity, interoperability, and flexibility demanded by user domains that require non-standard features, such as intermediate signal or data extraction and algorithm interchangeability.
    In this context, we introduce an open-source, real-time GNSS software-defined receiver (named GNSS-SDR) that contributes several novel features, such as the use of software design patterns and shared-memory techniques to manage the data flow between receiver blocks efficiently, the use of hardware-accelerated instructions for time-consuming vector operations like carrier wipe-off and code correlation, and the ability to compile and run on multiple software platforms and hardware architectures. At the time of writing (April 2012), the receiver achieves a 2-dimensional Distance Root Mean Square (DRMS) error lower than 2 meters for a GPS L1 C/A scenario with 8 satellites in lock and a Horizontal Dilution Of Precision (HDOP) of 1.2.
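The 2-D DRMS figure quoted above is simply the root of the mean squared horizontal (east/north) error over a batch of position fixes, and can be computed in a few lines. The error samples below are synthetic, generated purely for illustration.

```python
# Minimal sketch of the 2-D DRMS accuracy metric: sqrt of the summed
# mean squared east and north position errors.
import numpy as np

def drms_2d(east_err_m, north_err_m):
    """2-D DRMS = sqrt(mean(dE^2) + mean(dN^2)), in meters."""
    e = np.asarray(east_err_m, dtype=float)
    n = np.asarray(north_err_m, dtype=float)
    return float(np.sqrt(np.mean(e**2) + np.mean(n**2)))

# Example: 500 fixes scattered with ~1 m (1-sigma) error per axis
# around the true position, giving a DRMS near sqrt(2) m.
rng = np.random.default_rng(42)
e = rng.normal(0.0, 1.0, 500)
n = rng.normal(0.0, 1.0, 500)
drms = drms_2d(e, n)
```

For zero-mean Gaussian errors with equal per-axis standard deviation sigma, the 2-D DRMS tends to sigma times sqrt(2), which is why a sub-2 m DRMS at HDOP 1.2 indicates good per-axis precision.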

    Doctor of Philosophy

    Energy generation through combustion of hydrocarbons continues to dominate as the most common method for energy generation. In the U.S., nearly 84% of energy consumption comes from the combustion of fossil fuels. Because of this demand, there is a continued need for improvement, enhancement, and understanding of the combustion process. As computational power increases and our methods for modelling these complex combustion systems improve, combustion modelling has become an important tool for gaining deeper insight into and understanding of these complex systems. The constant state of change in computational ability leads to a continual need for new combustion models that can take full advantage of the latest computational resources. To this end, the research presented here encompasses the development of new models which can be tailored to the available resources, allowing one to increase or decrease the amount of modelling error based on the available computational resources and desired accuracy. Principal component analysis (PCA) is used to identify the low-dimensional manifolds which exist in turbulent combustion systems. These manifolds are unique in their ability to represent a larger-dimensional space with fewer components, resulting in a minimal addition of error. PCA is well suited to the problem at hand because it allows the user to define the amount of approximation error, depending on the resources at hand. The research presented here looks into various methods which exploit the benefits of PCA in modelling combustion systems, demonstrating several models and providing new and interesting perspectives on PCA-based approaches to modelling turbulent combustion.
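The core PCA idea described above — representing a higher-dimensional state space with a few components at a controllable approximation error — can be demonstrated on synthetic data. The data below are generic (a 10-variable state generated from a 2-D latent manifold plus small noise), not combustion data.

```python
# Hedged illustration of PCA manifold identification: project synthetic
# data onto k principal components and watch the reconstruction error
# shrink as k grows, letting the user trade error for dimensionality.
import numpy as np

rng = np.random.default_rng(0)
# 1000 samples of a 10-variable "state" that really lives on a 2-D manifold.
latent = rng.standard_normal((1000, 2))
mixing = rng.standard_normal((2, 10))
X = latent @ mixing + 0.01 * rng.standard_normal((1000, 10))

Xc = X - X.mean(axis=0)                      # center the data
# Principal components come from the SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

def reconstruction_error(k):
    """Relative Frobenius-norm error after keeping k principal components."""
    Xk = (Xc @ Vt[:k].T) @ Vt[:k]            # project, then reconstruct
    return float(np.linalg.norm(Xc - Xk) / np.linalg.norm(Xc))

errs = [reconstruction_error(k) for k in (1, 2, 5)]
# Two components recover the 2-D manifold almost exactly; additional
# components only chip away at the small noise floor.
```

This is exactly the knob the abstract describes: choosing how many components to retain sets the modelling error against the available computational budget.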

    Complementary use of computer simulations and molecular-thermodynamic theory to model surfactant and solubilizate self-assembly

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Chemical Engineering, February 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references. Surfactants, or surface-active agents, are used in many pharmaceutical, industrial, and environmental applications. Selection of the appropriate surfactant or mixture of surfactants for any given application is driven by the need to control bulk solution micellization and solubilization characteristics. The goal of this thesis has been to develop computer simulation and molecular-thermodynamic modeling approaches to predict these solution characteristics based on knowledge of surfactant and solubilizate chemical structure. The ability to make such predictions would give formulators in industry the ability to design and optimize surfactant formulations with a minimum of effort and expense. This thesis has explored the application of three theoretical approaches to model surfactant micellization and micellar solubilization. The first theoretical approach involves the use of computer simulations (CS) to obtain input parameters for molecular-thermodynamic (MT) modeling of surfactant micellization and micellar solubilization. This approach was motivated by the limitations inherent in computer simulations (the high computational expense of modeling self-assembly) and in MT modeling approaches (their restriction to structurally and chemically simple surfactants and solubilizates). A key input required for traditional MT modeling is the identification of the hydrated and unhydrated portions (head and tail) of surfactants and solubilizates in a self-assembled micellar aggregate.
    By conducting simulations of surfactants and solubilizates at an oil/water interface (modeling the micelle core/water interface) or in a micellar environment, I have determined head and tail input parameters for simple and complex surfactants and solubilizates. This information has been successfully used as an input to MT modeling and has been shown to extend the applicability of the traditional MT modeling approach to more complex surfactant and solubilizate systems than had been possible to date. A wide range of surfactant and solubilizate systems have been modeled with this approach, including ionic, zwitterionic, and nonionic surfactant/solubilizate systems. For each of the systems modeled, theoretical predictions were in reasonable agreement with the experimental data. A novel, alternative approach has also been developed to more accurately quantify the hydrophobic driving force for micelle formation by using atomistic molecular dynamics (MD) simulations to quantify the hydration changes that take place during micelle self-assembly. This new approach is referred to as the computer simulation/molecular-thermodynamic (CS-MT) model. In the CS-MT model, hydration information determined through computer simulation is used in a new MT model to quantify the hydrophobic effect, which is decomposed into two components: g_dehydr, the free-energy change associated with the dehydration of hydrophobic groups that accompanies aggregate self-assembly, and g_hydr, the change in hydration free energy experienced during aggregate self-assembly. The CS-MT model is formulated to allow the prediction of the free-energy change associated with the formation of aggregates of any shape and size after performing only two computer simulations: one of the surfactant/solubilizate in bulk water and a second of the surfactant/solubilizate in an aggregate of arbitrary shape and size.
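The two-term decomposition of the hydrophobic effect described above can be made concrete with a toy calculation. All numerical values below (interfacial free energy per unit area, tail surface area, hydration fraction, and the g_hydr value) are invented placeholders for illustration, not values from the thesis.

```python
# Toy numerical sketch of the CS-MT decomposition: the hydrophobic driving
# force is split into a dehydration term (g_dehydr), proportional to the
# tail surface area that loses contact with water on aggregation, and a
# hydration-change term (g_hydr). Units: areas in nm^2, energies in kT.
sigma = 4.0            # microscopic interfacial free energy [kT/nm^2] (assumed)
sasa_bulk = 1.5        # tail solvent-accessible area of the monomer in bulk water (assumed)
frac_hydrated = 0.25   # fraction of tail area still hydrated inside the micelle (assumed)

# Dehydration gain: favorable (negative) free energy from burying tail area.
g_dehydr = -sigma * sasa_bulk * (1.0 - frac_hydrated)

# Change in hydration free energy of the remaining water contacts (assumed).
g_hydr = 0.4

g_hydrophobic = g_dehydr + g_hydr   # total hydrophobic driving force [kT]
```

The point of the decomposition is that both terms come from just two simulations (monomer in bulk water, monomer in an aggregate), after which g_hydrophobic can be evaluated for aggregates of any shape and size.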
    The CS-MT modeling approach has been validated by using it to model the formation of oil aggregates, the micellization behavior of nonionic surfactants in aqueous solution, and the micellization behavior of ionic and zwitterionic surfactants in aqueous solution. For each of the systems modeled, the CS-MT model predictions were in reasonable agreement with the experimental data, and in almost all cases were in better agreement with the experimental data than the predictions of the traditional MT model. The second theoretical approach explored in this thesis is the application of computer simulation free-energy (FE) methods to quantify the thermodynamics of mixed micelle formation. In this theoretical approach, referred to as the CS-FE/MT modeling approach, the traditional MT modeling approach, or experimental data, is first used to determine the free energy of formation of a pure (single-surfactant) micelle. Subsequently, computer simulations are used to determine the free-energy change associated with alchemically changing the identity of individual surfactants present in the micelle to that of a second surfactant or solubilizate. This free-energy change, when added to the free energy of single-surfactant micellization, yields the free energy associated with mixed micelle formation. The free energy of mixed micelle formation can then be used in the context of a thermodynamic description of the micellar solution to predict bulk solution properties such as the CMC and the equilibrium composition of the mixed micelle. The CS-FE/MT model has been used to model both binary surfactant micellization and micellar solubilization. The CS-FE/MT model was shown to be most accurate when the chemical structures of the mixed micelle components were similar and when small alchemical transformations were performed.
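The CS-FE/MT bookkeeping described above — adding an alchemical free-energy change to the single-surfactant micellization free energy and then predicting a bulk property such as the CMC — amounts to a short calculation. The free-energy values below are invented placeholders, and the CMC relation used is the standard mole-fraction form from molecular-thermodynamic theory, stated here as an assumption rather than the thesis's exact expression.

```python
# Toy CS-FE/MT bookkeeping: mixed-micelle free energy = single-surfactant
# micellization free energy + alchemical transformation free energy, then
# CMC (mole fraction) ~ exp(g_mic / kT), converted to molarity via the
# ~55.5 mol/L concentration of water. All values are placeholders.
import math

g_single = -10.0     # free energy of single-surfactant micellization [kT] (assumed)
dg_alchemical = 1.5  # free-energy change of the alchemical identity swap [kT] (assumed)

g_mixed = g_single + dg_alchemical   # mixed-micelle formation free energy [kT]
x_cmc = math.exp(g_mixed)            # CMC as a mole fraction
cmc_molar = 55.5 * x_cmc             # approximate conversion to mol/L
```

This makes the accuracy caveat in the abstract concrete: because the alchemical term enters the CMC through an exponential, small errors in the transformation free energy translate into multiplicative errors in the predicted CMC, which is why small, chemically similar transformations work best.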
    The third theoretical approach explored in this thesis is the use of all-atomistic computer simulations to make direct predictions of surfactant solution properties. Although the computational expense associated with atomistic-level MD simulations restricts their use to the evaluation of a limited subset of surfactant solution properties, these simulations can provide significant insight into the structural characteristics of preformed surfactant aggregates and the self-assembly behavior of surfactant molecules over limited timescales. Simulations of monolayers of a homologous series of structurally complex fluorosurfactants have been conducted in order to explore their behavior at a water/air interface and the origin of their ability to reduce surface tension. In addition, atomistic-level MD simulations have been conducted to study the self-assembly behavior of the triterpenoids asiatic acid (AA) and madecassic acid (MA) in aqueous solution. The computer simulation results were used to obtain information about: i) the kinetics of micelle formation, ii) the structural characteristics of the self-assembled micelles, and iii) micellization thermodynamics. This thesis presents a detailed, atomistic-level computer simulation and molecular-thermodynamic investigation of the micellar solution behavior of nonionic, zwitterionic, and ionic surfactants in aqueous solutions, as well as of the aqueous micellar solubilization of solubilizates by surfactants. It is hoped that the approaches developed in this thesis to use computer simulations and molecular-thermodynamic theory in a complementary way will not only extend our ability to make accurate predictions of surfactant solution behavior, but will also contribute to our fundamental knowledge of the solution behavior of surfactants and solubilizates.
    It is further hoped that this thesis will provide a solid foundation for future research in the area of surfactant science and, more generally, that it will assist future researchers working to connect atomistic-level computer simulation methods with continuum thermodynamic models. By Brian C. Stephenson. Ph.D.

    Coarse grained hydrogels


    A Computational Study of Material Transformations in Glass Forming Systems

    Amorphous solids (glasses) are a class of materials that lack the traditional long-range order found in crystals and are primarily formed by rapid cooling of a liquid to bypass crystal nucleation. Their lack of crystallinity and associated defects gives them useful electromagnetic and mechanical properties. However, the affinity of a material for vitrification is only loosely understood, and structural detail is difficult to obtain via traditional methods. This thesis first investigates the promotion of glass formation via crystal inhibition. Molecular dynamics simulations of binary alloys are used to show crystal frustration via specific combinations of interaction range and particle softness, resulting in a lower enthalpic drive and complex crystal structures. Secondly, a facilitated kinetic Ising model is used to investigate the dynamics of organic glasses in solution. Glass dissolution is shown to have a non-linear dependence on the effective temperature of the solute, switching between front-like dissolution at low temperatures and a diffuse interface at higher temperatures. Also shown is a method of preparing an enhanced glass via precipitation from solution, capable of creating a much lower-energy glass than simple bulk cooling.
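A minimal member of the facilitated kinetic Ising family mentioned in the abstract is the one-dimensional East model: a site may flip only when a neighboring site is excited, and excitation creation is thermally activated. The sketch below is a generic East-model Monte Carlo sweep with illustrative parameters, not the specific model or parameters used in the thesis.

```python
# Hedged sketch of a facilitated kinetic Ising (East-model) Monte Carlo
# sweep: kinetic constraints, not energetics, produce the slow, glassy
# dynamics. Parameters and system size are illustrative.
import numpy as np

def east_sweep(state, temperature, rng):
    """One Monte Carlo sweep of the 1-D East model (periodic boundaries).

    state: 0/1 array, 1 = excited. Site i is mobile only if site i-1 is
    excited (the facilitation constraint). Creating an excitation costs
    unit energy and is accepted with probability exp(-1/T); destroying
    an excitation is always accepted."""
    n = state.size
    for _ in range(n):
        i = rng.integers(n)
        if state[i - 1] == 0:          # constraint: left neighbor must be excited
            continue
        if state[i] == 1:
            state[i] = 0               # relaxation: always accepted
        elif rng.random() < np.exp(-1.0 / temperature):
            state[i] = 1               # thermally activated excitation
    return state

rng = np.random.default_rng(7)
state = rng.integers(0, 2, size=200)   # start from a high excitation density
for _ in range(200):
    east_sweep(state, temperature=0.5, rng=rng)
# The excitation density relaxes toward its equilibrium value,
# c = exp(-1/T) / (1 + exp(-1/T)), roughly 0.12 at T = 0.5.
```

Because moves are gated by the local constraint, lowering the temperature depletes the facilitating excitations and relaxation slows dramatically, which is the mechanism such models use to reproduce front-like versus diffuse dissolution regimes.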